
《Python金融大数据风控建模实战》 Chapter 14: Decision Tree Models

Date: 2024-02-19 16:34:38



Chapter Introduction

In scorecard modeling, model interpretability matters a great deal. Besides Logistic regression, the decision tree is another model that is very easy to understand.

A decision tree presents its rule set in tree form: the path from the root node to each leaf node forms one rule, the features at the internal nodes along the path correspond to that rule's conditions, and each leaf node represents a decision outcome. This rule set is mutually exclusive and exhaustive: every instance is covered by exactly one path, that is, exactly one rule. A decision tree can also be understood as a conditional probability distribution defined on the feature space and the class space, with the conditional probability model estimated from the training data.
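The "one path, one rule" property described above can be seen directly with scikit-learn's `export_text`, which prints the rule set of a fitted tree. The toy data here is made up purely for illustration and is not from the book:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: one feature, two cleanly separated classes
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Each root-to-leaf path is one rule; export_text prints the whole rule set
print(export_text(tree, feature_names=["x"]))

# Mutually exclusive and exhaustive: every sample lands in exactly one leaf
leaf_ids = tree.apply(X)
print(leaf_ids)
```

Because the rules are mutually exclusive, `tree.apply` assigns exactly one leaf id per sample, and any new instance is classified by following its single matching path.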

A decision tree is generally built by recursively selecting the optimal feature and splitting the training data on it, so that each resulting subset gets the best possible classification. Decision tree induction is a greedy algorithm: the resulting tree is not necessarily globally optimal, but a well-performing suboptimal model.
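The greedy split selection described above can be sketched for a single feature using Gini impurity. The helper names and toy data below are illustrative assumptions, not the book's code:

```python
import numpy as np

def gini(labels):
    # Gini impurity of a label array: 1 - sum_k p_k^2
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    # Greedy search: try midpoints between sorted unique values and
    # keep the threshold with the lowest weighted child impurity
    best_thr, best_imp = None, np.inf
    values = np.sort(np.unique(x))
    for thr in (values[:-1] + values[1:]) / 2:
        left, right = y[x <= thr], y[x > thr]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if imp < best_imp:
            best_thr, best_imp = thr, imp
    return best_thr, best_imp

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_split(x, y))  # threshold 6.5 separates the classes perfectly
```

A full tree builder would apply this search to every feature, pick the best one, split, and recurse on each subset; the "greedy" part is that each split is chosen locally, with no backtracking.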

Python Code Implementation and Comments

# Chapter 14: Decision tree model
import os
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
import variable_encode as var_encode  # companion WOE-encoding module from the book
from sklearn.metrics import (confusion_matrix, recall_score, auc, roc_curve,
                             precision_score, accuracy_score)
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['font.sans-serif'] = ['SimHei']  # font that can render Chinese labels
matplotlib.rcParams['axes.unicode_minus'] = False    # display minus signs correctly
import warnings
warnings.filterwarnings("ignore")  # suppress warnings


def data_read(data_path, file_name):
    """Read the German credit data and split it into train and test sets."""
    df = pd.read_csv(os.path.join(data_path, file_name),
                     delim_whitespace=True, header=None)
    # Rename the variables
    columns = ['status_account', 'duration', 'credit_history', 'purpose',
               'amount', 'svaing_account', 'present_emp', 'income_rate',
               'personal_status', 'other_debtors', 'residence_info', 'property',
               'age', 'inst_plans', 'housing', 'num_credits', 'job',
               'dependents', 'telephone', 'foreign_worker', 'target']
    df.columns = columns
    # Map the label from {1, 2} to {0, 1}: 0 = good customer, 1 = bad customer
    df.target = df.target - 1
    # Split into train and test; the encoding rules are learned on the
    # training set and then applied to the test set
    data_train, data_test = train_test_split(df, test_size=0.2,
                                             random_state=0, stratify=df.target)
    return data_train, data_test


def category_continue_separation(df, feature_names):
    """Separate categorical variables from numerical variables."""
    categorical_var = []
    numerical_var = []
    if 'target' in feature_names:
        feature_names.remove('target')
    # Treat int and float columns directly as numerical variables
    numerical_var = list(df[feature_names].select_dtypes(
        include=['int', 'float', 'int32', 'float32',
                 'int64', 'float64']).columns.values)
    categorical_var = [x for x in feature_names if x not in numerical_var]
    return categorical_var, numerical_var


if __name__ == '__main__':
    path = 'D:\\code\\chapter14'
    data_path = os.path.join(path, 'data')
    file_name = 'german.csv'
    # Read the data
    data_train, data_test = data_read(data_path, file_name)
    # Quick class-balance checks
    sum(data_train.target == 0)
    data_train.target.sum()
    # Separate categorical and numerical variables
    feature_names = list(data_train.columns)
    feature_names.remove('target')
    categorical_var, numerical_var = category_continue_separation(
        data_train, feature_names)

    # WOE-encode the categorical variables directly
    var_all_bin = list(data_train.columns)
    var_all_bin.remove('target')
    # WOE encoding on the training set
    df_train_woe, dict_woe_map, dict_iv_values, var_woe_name = var_encode.woe_encode(
        data_train, data_path, categorical_var, data_train.target,
        'dict_woe_map', flag='train')
    # WOE encoding on the test set, using the mapping learned on the training set
    df_test_woe, var_woe_name = var_encode.woe_encode(
        data_test, data_path, categorical_var, data_test.target,
        'dict_woe_map', flag='test')

    # Fill missing values of the numerical variables with the training mean
    for i in numerical_var:
        if sum(data_train[i].isnull()) > 0:
            data_train[i].fillna(data_train[i].mean(), inplace=True)

    # Assemble the encoded training and test sets
    data_train.reset_index(drop=True, inplace=True)
    data_test.reset_index(drop=True, inplace=True)
    var_1 = numerical_var
    var_1.append('target')
    data_train_1 = pd.concat([df_train_woe[var_woe_name], data_train[var_1]], axis=1)
    data_test_1 = pd.concat([df_test_woe[var_woe_name], data_test[var_1]], axis=1)

    # Extract the model matrices
    var_all = list(data_train_1.columns)
    var_all.remove('target')
    # Standardize the variables
    scaler = StandardScaler().fit(data_train_1[var_all])
    data_train_1[var_all] = scaler.transform(data_train_1[var_all])
    data_test_1[var_all] = scaler.transform(data_test_1[var_all])
    x_train = np.array(data_train_1[var_all])
    y_train = np.array(data_train_1.target)
    x_test = np.array(data_test_1[var_all])
    y_test = np.array(data_test_1.target)

    # Decision tree model
    # Hyperparameters to tune
    DT_param = {'max_depth': np.arange(2, 10, 1),
                'class_weight': [{1: 1, 0: 1}, {1: 2, 0: 1}, {1: 3, 0: 1}]}
    # Set up the grid search
    DT_gsearch = GridSearchCV(estimator=DecisionTreeClassifier(),
                              param_grid=DT_param, cv=3, scoring='f1',
                              n_jobs=-1, verbose=2)
    # Run the hyperparameter search
    DT_gsearch.fit(x_train, y_train)
    print('DecisionTreeClassifier model best_score_ is {0}, and best_params_ is {1}'.format(
        DT_gsearch.best_score_, DT_gsearch.best_params_))

    # Refit the decision tree with the best parameters
    DT_model_1 = DecisionTreeClassifier(
        max_depth=DT_gsearch.best_params_['max_depth'],
        class_weight=DT_gsearch.best_params_['class_weight'])
    DT_model_fit = DT_model_1.fit(x_train, y_train)
    # Fitted attributes
    # DT_model_fit.feature_importances_
    # DT_model_fit.max_features_
    # DT_model_fit.n_outputs_

    # Model prediction
    y_pred = DT_model_fit.predict(x_test)
    y_score_test = DT_model_fit.predict_proba(x_test)[:, 1]
    # Confusion matrix, recall and precision
    cnf_matrix = confusion_matrix(y_test, y_pred)
    recall_value = recall_score(y_test, y_pred)
    precision_value = precision_score(y_test, y_pred)
    acc = accuracy_score(y_test, y_pred)
    print(cnf_matrix)
    print('Validation set: model recall is {0}, and precision is {1}'.format(
        recall_value, precision_value))

    # Compute fpr and tpr
    fpr, tpr, thresholds = roc_curve(y_test, y_score_test)
    # Compute AR, Gini and KS
    roc_auc = auc(fpr, tpr)
    ks = max(tpr - fpr)
    ar = 2 * roc_auc - 1
    gini = ar
    print('Test set: model AR is {0}, and KS is {1}'.format(ar, ks))

    # KS curve
    plt.figure(figsize=(10, 6))
    fontsize_1 = 12
    plt.plot(np.linspace(0, 1, len(tpr)), tpr, '--', color='black',
             label='Lorenz curve of positive samples')
    plt.plot(np.linspace(0, 1, len(tpr)), fpr, ':', color='black',
             label='Lorenz curve of negative samples')
    plt.plot(np.linspace(0, 1, len(tpr)), tpr - fpr, '-', color='grey')
    plt.grid()
    plt.xticks(fontsize=fontsize_1)
    plt.yticks(fontsize=fontsize_1)
    plt.xlabel('Score buckets', fontsize=fontsize_1)
    plt.ylabel('Cumulative proportion (%)', fontsize=fontsize_1)
    plt.legend(fontsize=fontsize_1)
    print(max(tpr - fpr))
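The metric relationships the listing relies on, AR = Gini = 2*AUC - 1 and KS = max(tpr - fpr), can be checked on a small made-up score set (the labels and scores below are invented for illustration):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Toy labels and model scores: higher score should mean "bad" (label 1)
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.6, 0.4, 0.7, 0.8, 0.9])

fpr, tpr, _ = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)          # area under the ROC curve
ks = max(tpr - fpr)              # max gap between the two cumulative curves
ar = 2 * roc_auc - 1             # Accuracy Ratio (Gini) derived from AUC
print(roc_auc, ks, ar)           # 0.9375 0.75 0.875
```

KS measures the best achievable separation between the cumulative distributions of good and bad customers, which is exactly the vertical gap plotted in the chapter's KS curve.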
