
Building a Voting Classifier Model in Python for Machine-Learning Experiments

Date: 2022-03-19 22:04:18


The voting classifier is a very common model that appears again and again in the literature; the core idea behind ensembles as strong as random forests is exactly that: voting. Strictly speaking, a voting classifier is not one fixed, concrete model but a framework into which all kinds of base classifiers can be plugged, just as the base classifier in a random forest is a decision tree and the base classifier in GBDT is a CART tree. A few suggestions on how to choose suitable base classifiers:
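The framework idea can be sketched with a minimal `VotingClassifier` that combines three heterogeneous base models. The dataset here is synthetic (from `make_classification`), purely to keep the example self-contained; `voting='soft'` averages the predicted class probabilities, while `voting='hard'` would take the majority of predicted labels instead.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# synthetic binary-classification data, just for illustration
X, y = make_classification(n_samples=300, n_features=10, random_state=42)

# three base classifiers with different inductive biases
vote = VotingClassifier(
    estimators=[('lr', LogisticRegression(max_iter=1000)),
                ('rf', RandomForestClassifier(n_estimators=50, random_state=42)),
                ('gnb', GaussianNB())],
    voting='soft')  # average predicted probabilities; 'hard' = majority label

scores = cross_val_score(vote, X, y, cv=5, scoring='accuracy')
print(scores.mean())
```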

1. The base classifiers should ideally have complementary strengths and weaknesses, so that the ensemble can offset one model's weakness with another's strength. For example, if model1 has high precision but low recall and model2 is the opposite, that combination is worth trying.

2. Base classifiers can be chosen by building each model on its own and inspecting its performance; the main metrics to consider at this stage are accuracy and the F1 score.

3. It is best to use an odd number of base classifiers, which avoids tie votes.
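Point 3 above can be illustrated with a toy majority vote. With an even number of voters on a binary problem, a 2-2 split is possible and the winner then depends on an arbitrary tie-break; an odd number of voters always produces a clear majority. The helper below is a made-up illustration, not part of the script that follows.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most common label among base-classifier predictions."""
    return Counter(predictions).most_common(1)[0][0]

# Odd number of binary voters: a strict majority always exists.
print(majority_vote([1, 1, 0]))        # clear 2-1 majority for class 1
print(majority_vote([0, 0, 1, 0, 1]))  # clear 3-2 majority for class 0

# With four voters, [1, 1, 0, 0] is a tie and the result is arbitrary.
```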

This post records a small experiment from my master's thesis: building a voting classifier model in Python. The implementation is quite simple; here it is in full:

```python
#!/usr/bin/env python
# encoding: utf-8
'''
Author: 沂水寒城
Purpose: machine-learning experiments with a voting classifier model
'''
import csv
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              BaggingClassifier, AdaBoostClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split, cross_val_score


def read_data(test_data='test_data.csv', n=1, label=1):
    '''
    Load the data.
    n: index where the feature columns start
    label: whether the samples are labelled (supervised)
    '''
    csv_reader = csv.reader(open(test_data))
    data_list = [one_line for one_line in csv_reader]
    x_list, y_list = [], []
    for one_line in data_list[1:]:
        if label == 1:
            y_list.append(int(one_line[-1]))  # label is the last column
            x_list.append([float(o) for o in one_line[n:-1]])
        else:
            x_list.append([float(o) for o in one_line[n:]])
    return x_list, y_list


def split_data(data_list, y_list, ratio=0.30):
    '''Split the samples into train/test sets at the given ratio.'''
    X_train, X_test, y_train, y_test = train_test_split(
        data_list, y_list, test_size=ratio, random_state=42)
    print('-------------------data shape-------------------')
    print(len(X_train), len(y_train))
    print(len(X_test), len(y_test))
    return X_train, X_test, y_train, y_test


def cal_four_zhibiao(y_predict, y_true):
    '''
    Compute accuracy, precision, recall and F-value from predictions.
    accuracy  = (TP+TN)/(P+N)
    precision = TP/(TP+FP)
    recall    = TP/(TP+FN)
    '''
    TP, TN, FP, FN = [0]*4
    for i in range(len(y_predict)):
        label, predict = int(y_true[i]), int(y_predict[i])
        if label == 1 and predict == 1:
            TP += 1
        elif label == 1 and predict == 0:
            FN += 1
        elif label == 0 and predict == 0:
            TN += 1
        elif label == 0 and predict == 1:
            FP += 1
    accuracy = (TP+TN)/(TP+TN+FP+FN)
    precision = TP/(TP+FP)
    recall = TP/(TP+FN)
    if TP != 0:
        F_value = (2*recall*precision)/(recall+precision)
    else:
        F_value = 0
    return [accuracy, precision, recall, F_value]


def cal_one_model_all_score(model, x_list, y_list, n):
    '''Cross-validate one model against several common scoring metrics.'''
    res_dict = {}
    score_list = ['accuracy', 'average_precision', 'f1', 'precision', 'recall', 'roc_auc']
    for one_score in score_list:
        this_scores = cross_val_score(model, x_list, y_list, scoring=one_score, cv=n)
        res_dict[one_score] = this_scores.mean()
    res_dict['F_value'] = (2*res_dict['recall']*res_dict['precision']) / \
                          (res_dict['recall']+res_dict['precision'])
    return res_dict


def vote_models_predict(data='all.csv', n=100):
    '''Voting classifier model.'''
    res_dict = {}
    x_list, y_list = read_data(test_data=data, n=1, label=1)
    RF = RandomForestClassifier(n_estimators=20, min_samples_split=10,
                                min_samples_leaf=20, max_depth=16)
    LR = LogisticRegression()
    SVM = SVC(C=1.0, kernel='rbf', gamma='auto', probability=True)  # probability needed for soft voting
    GNB = GaussianNB()
    Bag = BaggingClassifier(n_estimators=20)
    Ada = AdaBoostClassifier(n_estimators=80)
    GBDT = GradientBoostingClassifier(n_estimators=110, min_samples_split=12,
                                      min_samples_leaf=6, max_depth=6)
    SGD = SGDClassifier(penalty='l2', loss='log')  # spelled 'log_loss' in newer sklearn
    DT = DecisionTreeClassifier()
    model_list = [('rf', RF), ('lr', LR), ('gnb', GNB), ('gbdt', GBDT), ('svm', SVM),
                  ('bag', Bag), ('ada', Ada), ('dt', DT), ('sgd', SGD)]
    sign_list = ['RF', 'LR', 'GNB', 'GBDT', 'SVM', 'Bag', 'Ada', 'DT', 'SGD']
    for sign, (name, model) in zip(sign_list, model_list):
        res_dict[sign] = cal_one_model_all_score(model, x_list, y_list, n)
    # keep as base estimators only the models that score >= 0.85
    # on at least two of the four metrics below
    use_list = []
    zhibiao = ['accuracy', 'precision', 'recall', 'F_value']
    for i in range(len(sign_list)):
        one_res = res_dict[sign_list[i]]
        count = sum(1 for one in zhibiao if one_res[one] >= 0.85)
        if count >= 2:
            use_list.append(model_list[i])
    if len(use_list) > 2:
        vote_soft = VotingClassifier(estimators=use_list, voting='soft')
        res_dict['vote_soft'] = cal_one_model_all_score(vote_soft, x_list, y_list, n)
    print('use_list')
    print(use_list)
    return res_dict


if __name__ == '__main__':
    res_dict = vote_models_predict(data='sampledata.csv', n=10)
```
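For reference, this is the CSV layout `read_data()` expects with `n=1, label=1`: a header row, an index column first, then the feature columns, with the class label last. The file name and values below are made up for illustration; the slicing mirrors what `read_data()` does internally.

```python
import csv

# hypothetical sample file: header row, id column, features, label last
rows = [['id', 'f1', 'f2', 'label'],
        ['1', '0.5', '1.2', '1'],
        ['2', '0.1', '0.3', '0']]
with open('sampledata.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)

# the same slicing read_data() applies with n=1, label=1:
data = list(csv.reader(open('sampledata.csv')))[1:]
x_list = [[float(v) for v in row[1:-1]] for row in data]  # skip id, drop label
y_list = [int(row[-1]) for row in data]                   # last column is the label
print(x_list)  # [[0.5, 1.2], [0.1, 0.3]]
print(y_list)  # [1, 0]
```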

Once your own data is plugged in following this format, the script can be used directly. This post is just a record of the experiment, so I won't elaborate further; feedback and discussion are welcome!
