Task4 - Modeling and Parameter Tuning

1. Content Overview

  1. Linear regression model:
  • feature requirements for linear regression;
  • handling long-tailed distributions;
  • understanding the linear regression model;
  2. Model performance validation:
  • evaluation metric vs. objective function;
  • cross-validation;
  • leave-one-out validation;
  • validation for time-series problems;
  • plotting learning curves;
  • plotting validation curves;
  3. Embedded feature selection:
  • Lasso regression;
  • Ridge regression;
  • decision trees;
  4. Model comparison:
  • common linear models;
  • common non-linear models;
  5. Model tuning (a grid-search sketch follows this list):
  • greedy tuning;
  • grid search;
  • Bayesian optimization;
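To make the tuning items above concrete, here is a minimal grid-search sketch using scikit-learn's GridSearchCV on a Ridge regressor; the data and the parameter grid are synthetic placeholders, not values from this task.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in data; replace with the competition features/target.
X = np.random.rand(200, 5)
y = np.random.rand(200)

# A small illustrative grid over the regularization strength.
param_grid = {'alpha': [0.01, 0.1, 1.0, 10.0]}

grid = GridSearchCV(
    estimator=Ridge(),
    param_grid=param_grid,
    scoring='neg_mean_absolute_error',  # MAE, negated so that higher is better
    cv=5)
grid.fit(X, y)
print('best params:', grid.best_params_)
print('best CV MAE:', -grid.best_score_)

Greedy tuning follows the same idea with one parameter searched at a time, while Bayesian tuning replaces the exhaustive grid with a probabilistic search over the same parameter space.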

2. Some Basic Models

  • Linear Regression: link
  • Decision Trees: link
  • GBDT: link
  • XGBoost: link
  • LightGBM: link
  • CatBoost: link

The three mainstream GBDT workhorses: XGBoost, LightGBM, and CatBoost.
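As a rough sketch of how these boosted-tree libraries are typically used for regression (LightGBM shown here; the synthetic data is only a stand-in for the real features and target):

import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic regression data as a placeholder for the competition data.
X = np.random.rand(500, 10)
y = 3 * X[:, 0] + np.random.rand(500)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = LGBMRegressor(n_estimators=200, num_leaves=31, learning_rate=0.05)
model.fit(X_train, y_train)

print('validation MAE:', mean_absolute_error(y_val, model.predict(X_val)))

XGBoost's XGBRegressor and CatBoost's CatBoostRegressor expose essentially the same fit/predict interface.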

3. Code Examples

3.1 reduce_mem_usage

Reduce a DataFrame's memory footprint by downcasting the data type of each column.

import numpy as np

def reduce_mem_usage(df):
    """ Iterate through all the columns of a dataframe and modify the data type
        to reduce memory usage.
    """
    start_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))

    for col in df.columns:
        col_type = df[col].dtype

        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                # Downcast integers to the smallest type that holds the value range.
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                # Downcast floats; note that float16 may lose precision.
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            # Object (string) columns are stored more compactly as categoricals.
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
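A typical call site might look like the following; the file name is a placeholder rather than the actual competition file, and bear in mind that downcasting floats to float16 can lose precision for downstream modelling.

import pandas as pd

# 'train_data.csv' is a hypothetical path; substitute the real training file.
sample_feature = reduce_mem_usage(pd.read_csv('train_data.csv'))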

3.2 Cross-Validation

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_absolute_error, make_scorer

# model, train_X, train_y_org (raw target) and train_y (log-transformed target)
# are assumed to come from the preceding feature-engineering and modeling steps.

def log_transfer(func):
    """Wrap a metric so it is evaluated on log-transformed targets and predictions."""
    def wrapper(y, yhat):
        result = func(np.log(y), np.nan_to_num(np.log(yhat)))
        return result
    return wrapper

# Cross-validate on the raw (non-log-transformed) target, computing MAE in log space.
scores = cross_val_score(model, X=train_X, y=train_y_org, verbose=1, cv=5,
                         scoring=make_scorer(log_transfer(mean_absolute_error)))
print('AVG-org:', np.mean(scores))
# Cross-validate on the log-transformed target; the metric is applied directly,
# since the target is already in log space.
scores = cross_val_score(model, X=train_X, y=train_y, verbose=1, cv=5,
                         scoring=make_scorer(mean_absolute_error))
print('AVG:', np.mean(scores))
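The content overview also lists learning and validation curves. Below is a minimal learning-curve sketch using sklearn.model_selection.learning_curve; it assumes model, train_X and train_y are the objects defined above.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error, make_scorer
from sklearn.model_selection import learning_curve

# model, train_X and train_y are assumed to be defined as in the previous snippet.
train_sizes, train_scores, val_scores = learning_curve(
    model, train_X, train_y, cv=5,
    scoring=make_scorer(mean_absolute_error),
    train_sizes=np.linspace(0.1, 1.0, 5))

plt.plot(train_sizes, train_scores.mean(axis=1), 'o-', label='training MAE')
plt.plot(train_sizes, val_scores.mean(axis=1), 'o-', label='validation MAE')
plt.xlabel('training set size')
plt.ylabel('MAE')
plt.legend()
plt.show()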
