This article continues directly from the previous one; if you are interested, please click through to read it first.
Grid search loops over predefined lists of parameter values, estimates a separate model for each combination, and selects the best model according to a given evaluation metric.
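As a minimal sketch in plain Python (the score function below is a made-up stand-in for actually training and evaluating a model), grid search looks like this:

```python
from itertools import product

# Candidate values for two hypothetical hyperparameters
param_grid = {'maxIter': [2, 10, 50], 'regParam': [0.01, 0.05, 0.3]}

def score(max_iter, reg_param):
    # Stand-in for fitting a model and computing a validation metric;
    # a real score would come from a held-out dataset.
    return max_iter * 0.01 - reg_param

best_params, best_score = None, float('-inf')
for max_iter, reg_param in product(param_grid['maxIter'], param_grid['regParam']):
    s = score(max_iter, reg_param)
    if s > best_score:
        best_params, best_score = {'maxIter': max_iter, 'regParam': reg_param}, s

print(best_params)  # every combination is tried; the best one is kept
```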
Parameter tuning uses the tuning module:
# Import the tuning module (classification was imported as cl in the previous article)
import pyspark.ml.tuning as tune
import pyspark.ml.classification as cl

# Specify the model and the parameter grid
logistic = cl.LogisticRegression(labelCol='INFANT_ALIVE_AT_REPORT')
grid = tune.ParamGridBuilder() \
    .addGrid(logistic.maxIter, [2, 10, 50]) \
    .addGrid(logistic.regParam, [0.01, 0.05, 0.3]) \
    .build()
The ParamGridBuilder object's addGrid() method adds a parameter to the grid: the first argument is the parameter object of the model being optimized, and the second is the list of values to loop over. Finally, calling the object's build() method constructs the grid.
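Since build() returns one parameter map per combination, the grid above yields 3 × 3 = 9 candidate models; a plain-Python sketch of that cross-product:

```python
from itertools import product

max_iters = [2, 10, 50]
reg_params = [0.01, 0.05, 0.3]

# build() produces one "param map" per combination -- the full cross-product
param_maps = [
    {'maxIter': m, 'regParam': r} for m, r in product(max_iters, reg_params)
]
print(len(param_maps))  # 9 models will be estimated
```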
Next, create the evaluator used to assess the models:
import pyspark.ml.evaluation as ev

evaluator = ev.BinaryClassificationEvaluator(rawPredictionCol='probability', labelCol='INFANT_ALIVE_AT_REPORT')
cv = tune.CrossValidator(estimator=logistic, estimatorParamMaps=grid, evaluator=evaluator)
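CrossValidator splits the training data into k folds (3 by default in Spark), fits each candidate model on k − 1 folds, evaluates it on the held-out fold, and averages the metric across folds. A minimal sketch of the fold bookkeeping:

```python
def k_fold_indices(n, k):
    """Yield (train, validation) index lists for k folds over n rows."""
    fold_size = n // k
    for i in range(k):
        val = list(range(i * fold_size, (i + 1) * fold_size))
        train = [j for j in range(n) if j not in val]
        yield train, val

folds = list(k_fold_indices(6, 3))
# each row appears in exactly one validation fold
print([val for _, val in folds])  # [[0, 1], [2, 3], [4, 5]]
```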
# Create the pipeline (encoder and featuresCreator come from the previous article)
from pyspark.ml import Pipeline

pipeline = Pipeline(stages=[encoder, featuresCreator])
data_transformer = pipeline.fit(births_train)
# Fit the model
cvModel = cv.fit(data_transformer.transform(births_train))
# Check performance on the test set
data_test = data_transformer.transform(births_test)
results = cvModel.transform(data_test)
print('ROC', evaluator.evaluate(results, {evaluator.metricName: 'areaUnderROC'}))
print('PR', evaluator.evaluate(results, {evaluator.metricName: 'areaUnderPR'}))
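For intuition, areaUnderROC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one (ties count as half); a pure-Python illustration on made-up scores:

```python
def area_under_roc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(area_under_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```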
View the best model's parameters:
results = [
    ([{key.name: value} for key, value in params.items()], metric)
    for params, metric in zip(cvModel.getEstimatorParamMaps(), cvModel.avgMetrics)
]
sorted(results, key=lambda el: el[1], reverse=True)[0]
The code looks slightly complicated, but a careful read shows it is fairly simple: it is really just two nested loops that extract the corresponding parameter names and values.
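With mocked-up stand-ins for cvModel.getEstimatorParamMaps() and cvModel.avgMetrics (the numbers below are invented for illustration, and plain strings replace Spark's Param objects), the same extraction behaves like this:

```python
# hypothetical stand-ins for getEstimatorParamMaps() and avgMetrics
param_maps = [
    {'maxIter': 2, 'regParam': 0.01},
    {'maxIter': 10, 'regParam': 0.01},
    {'maxIter': 50, 'regParam': 0.01},
]
avg_metrics = [0.71, 0.73, 0.74]

# pair each param map with its average metric, then sort by the metric
results = [
    ([{k: v} for k, v in params.items()], metric)
    for params, metric in zip(param_maps, avg_metrics)
]
best = sorted(results, key=lambda el: el[1], reverse=True)[0]
print(best)  # ([{'maxIter': 50}, {'regParam': 0.01}], 0.74)
```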
# ChiSqSelector: select a fixed number of features
selector = ft.ChiSqSelector(numTopFeatures=5, featuresCol=featuresCreator.getOutputCol(),
                            outputCol='selectedFeatures', labelCol='INFANT_ALIVE_AT_REPORT')
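ChiSqSelector keeps the numTopFeatures features whose chi-squared statistic with the label is highest; the selection step amounts to the following (the scores below are made up for illustration):

```python
# hypothetical chi-squared scores of each feature against the label
chi2_scores = {'f0': 1.2, 'f1': 8.5, 'f2': 0.3, 'f3': 5.1,
               'f4': 9.9, 'f5': 2.0, 'f6': 7.7}
num_top_features = 5

# keep the top-k features by score
selected = sorted(chi2_scores, key=chi2_scores.get, reverse=True)[:num_top_features]
print(selected)  # ['f4', 'f1', 'f6', 'f3', 'f5']
```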
# Create the pipeline (the logistic model now reads from the selected features)
logistic = cl.LogisticRegression(labelCol='INFANT_ALIVE_AT_REPORT', featuresCol='selectedFeatures')
pipeline = Pipeline(stages=[encoder, featuresCreator, selector])
data_transformer = pipeline.fit(births_train)
# Create the model selector (a single random train/validation split, rather than k-fold cross-validation)
tvs = tune.TrainValidationSplit(estimator=logistic, estimatorParamMaps=grid, evaluator=evaluator)
# Fit the data to the model
tvsModel = tvs.fit(data_transformer.transform(births_train))
data_test = data_transformer.transform(births_test)
results = tvsModel.transform(data_test)
# Evaluate the results
print('ROC', evaluator.evaluate(results, {evaluator.metricName: 'areaUnderROC'}))
print('PR', evaluator.evaluate(results, {evaluator.metricName: 'areaUnderPR'}))
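TrainValidationSplit evaluates each parameter combination on a single random split controlled by trainRatio (0.75 by default), which is cheaper than k-fold cross-validation but gives a noisier estimate. A sketch of the split itself:

```python
import random

def train_validation_split(rows, train_ratio=0.75, seed=42):
    """Single random split, analogous to TrainValidationSplit's trainRatio."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train, val = train_validation_split(list(range(100)))
print(len(train), len(val))  # 75 25
```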