Wuhan University of Technology, School of Science, Department of Mathematics
Course Lab Report

Course name: Data Mining
Class: Information and Computing Science 1201        Date: June 9        Grade:
Names: Zhang Xujun (26), Li Xuemei (35), Zhang Xiaoting (33)
Lab room: Mathematics 207        Teacher's signature:

Experiment name: Liver cancer prediction with a decision tree
Software used: Python

Purpose and content

Purpose: become familiar with the basic ideas of decision-tree classification and practice the ID3 algorithm on a concrete example.
Content: classify the given liver cancer data, obtain the decision tree, and verify its correctness and applicability.

Principle and steps

For convenience, before coding, the values of the ten attributes in the experimental data are encoded as the integers 1 to 4, as follows:

X1:  no 1, light 2, mid 3, serious 4
X2:  no 1, branch 2, trunk 3
X3:  positive 1, negative 2
X4:  positive 1, negative 2
X5:  rightliver 1, leftliver 2, allliver 3
X6:  small 1, middle 2, big 3, verybig 4
X7:  dilation 1, infiltration 2
X8:  no 1, part 2, integrate 3
X9:  no 1, have 2
X10: no 1, less 2, much 3
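Written as Python dictionaries, the same mapping can also be used to encode a raw record before it is handed to the classifier. The following is a minimal sketch; the ENCODING table and the encode helper are illustrative names, not code used later in the report:

# Attribute-value encodings, transcribed from the table above.
ENCODING = {
    'X1':  {'no': 1, 'light': 2, 'mid': 3, 'serious': 4},
    'X2':  {'no': 1, 'branch': 2, 'trunk': 3},
    'X3':  {'positive': 1, 'negative': 2},
    'X4':  {'positive': 1, 'negative': 2},
    'X5':  {'rightliver': 1, 'leftliver': 2, 'allliver': 3},
    'X6':  {'small': 1, 'middle': 2, 'big': 3, 'verybig': 4},
    'X7':  {'dilation': 1, 'infiltration': 2},
    'X8':  {'no': 1, 'part': 2, 'integrate': 3},
    'X9':  {'no': 1, 'have': 2},
    'X10': {'no': 1, 'less': 2, 'much': 3},
}

def encode(record):
    # record: a dict mapping attribute name to raw value, e.g. {'X1': 'mid', ...};
    # returns the ten encoded integers in X1..X10 order.
    names = ['X1','X2','X3','X4','X5','X6','X7','X8','X9','X10']
    return [ENCODING[n][record[n]] for n in names]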

Code:

# -*- coding: cp936 -*-
import math
import operator

# Compute the Shannon entropy in two steps: first count how often each class
# label occurs, then apply the formula H = -sum_k p_k * log2(p_k).
def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]          # the class label is the last column
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * math.log(prob, 2)
    return shannonEnt
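# Illustrative sanity check (not part of the report's code): the 20-sample set
# defined below contains 9 'Y' and 11 'N' labels, so calcShannonEnt(dataSet)
# should come out near
#     -(9/20)*log2(9/20) - (11/20)*log2(11/20)  ~=  0.993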

def createDataSet():
    # 20 training samples: ten encoded attributes X1..X10 plus the class label.
    dataSet = [[3,2,2,2,1,2,1,2,1,2,'Y'],
               [3,3,1,1,1,2,2,1,2,3,'N'],
               [4,1,2,1,2,3,1,1,1,3,'Y'],
               [1,1,2,2,3,4,1,3,1,3,'Y'],
               [2,2,1,1,1,1,2,3,2,1,'N'],
               [3,3,1,2,1,2,2,2,1,1,'Y'],
               [2,2,1,2,1,1,2,1,2,3,'Y'],
               [1,3,2,1,3,3,1,2,1,2,'N'],
               [3,2,1,2,1,2,1,3,2,2,'N'],
               [1,1,2,1,1,4,1,2,1,1,'N'],
               [4,3,2,2,1,3,2,3,2,2,'N'],
               [2,3,1,2,3,1,1,1,1,2,'Y'],
               [1,1,2,1,1,4,2,2,1,3,'N'],
               [1,2,2,2,3,4,2,3,2,1,'N'],
               [4,2,1,1,1,3,2,2,2,2,'Y'],
               [3,1,2,1,1,2,1,3,2,3,'N'],
               [3,2,2,2,1,2,1,3,1,2,'N'],
               [2,3,2,1,2,1,2,1,1,1,'Y'],
               [1,3,2,1,1,4,2,1,1,1,'N'],
               [1,1,1,1,1,4,1,2,1,2,'Y']]
    labels = ['X1','X2','X3','X4','X5','X6','X7','X8','X9','X10']
    return dataSet, labels

# Split the data set: gather every sample whose value on attribute `axis`
# equals `value`, dropping the axis column itself (it is no longer needed).
def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]        # chop out axis used for splitting
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet

# Choose the best attribute to split on. The idea is simple: try a split on
# every attribute and see which one is best. A set is used here to pick out
# the unique values of a column.
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1              # the last column is the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):                   # iterate over all the features
        featList = [example[i] for example in dataSet]  # all values of this feature
        uniqueVals = set(featList)                 # get a set of unique values
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy        # the info gain is the reduction in entropy
        if infoGain > bestInfoGain:                # compare this to the best gain so far
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature                             # returns an index (integer)
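# Illustrative mini-example (hypothetical toy data, not from the report):
# with rows = [[1,2,'Y'], [1,3,'N'], [2,2,'Y']],
#     splitDataSet(rows, 0, 1)  ->  [[2,'Y'], [3,'N']]   (axis-0 column removed)
# chooseBestFeatureToSplit scores every axis this way and keeps the one whose
# split yields the largest information gain.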

# The tree is built by consuming one attribute per level, so the attributes
# can run out before the samples are fully separated; in that case the node's
# class is decided by majority vote.
def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount.keys():
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.iteritems(),
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
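# Example: majorityCnt(['Y','N','N']) -> 'N', the most frequent label.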

# Build the decision tree recursively. `labels` carries the attribute names,
# mainly so the resulting tree is easier to read.
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]        # stop splitting when all of the classes are equal
    if len(dataSet[0]) == 1:       # stop splitting when there are no more features
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]      # copy all of labels, so trees don't mess up existing labels
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
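# Caution: createTree deletes the consumed attribute name from `labels` in
# place, so a caller that still needs the full label list afterwards (e.g. for
# classify) should keep a copy, or pass createTree a copy such as labels[:].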

# Classify a test vector by walking the tree from the root until a leaf
# label is reached.
def classify(inputTree, featLabels, testVec):
    firstStr = inputTree.keys()[0]
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)
    key = testVec[featIndex]
    valueOfFeat = secondDict[key]
    if isinstance(valueOfFeat, dict):
        classLabel = classify(valueOfFeat, featLabels, testVec)   # descend into the subtree
    else:
        classLabel = valueOfFeat                                  # reached a leaf
    return classLabel

def getResult():
    dataSet, labels = createDataSet()
    # Debug calls, left commented out:
    # splitDataSet(dataSet, 1, 1)
    # chooseBestFeatureToSplit(dataSet)
    # print chooseBestFeatureToSplit(dataSet)
    # print calcShannonEnt(dataSet)
    mtree = createTree(dataSet, labels)
    print mtree
    featLabels = ['X1','X2','X3','X4','X5','X6','X7','X8','X9','X10']
    print classify(mtree, featLabels, [3,1,2,1,1,2,1,3,2,3])
    print classify(mtree, featLabels, [3,2,2,2,1,2,1,3,1,2])
    print classify(mtree, featLabels, [2,3,2,1,2,1,2,1,1,1])
    print classify(mtree, featLabels, [1,3,2,1,1,4,2,1,1,1])
    print classify(mtree, featLabels, [1,1,1,1,1,4,1,2,1,2])

if __name__ == '__main__':
    getResult()
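# Expected console output: the learned tree printed as a nested dict, then
# five predicted labels. The five test vectors above are training rows 16-20,
# whose recorded labels are N, N, Y, N, Y.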

Results and analysis

Step 1: The 20 samples were first used as the training set to grow the decision tree; the same 20 samples were then fed back in as test samples to check the tree's correctness and applicability. Every prediction agreed with the recorded outcome. The decision tree, as printed by the program (a nested dictionary), is:

{'X8': {1: {'X1': {1: 'N', 2: 'Y', 3: 'N', 4: 'Y'}},
        2: {'X1': {1: {'X3': {1: 'Y', 2: 'N'}}, 3: 'Y', 4: 'Y'}},
        3: {'X1': {1: {'X2': {1: 'Y', 2: 'N'}}, 2: 'N', 3: 'N', 4: 'N'}}}}
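The claim that the tree reproduces all 20 training samples can be checked mechanically. A minimal sketch reusing the functions above; checkTrainingAccuracy is a hypothetical helper, and featLabels is kept as a fresh copy because createTree consumes entries of labels:

def checkTrainingAccuracy():
    dataSet, labels = createDataSet()
    featLabels = labels[:]                 # intact copy of the names for classify
    tree = createTree(dataSet, labels)     # createTree mutates `labels`
    hits = 0
    for row in dataSet:
        if classify(tree, featLabels, row[:-1]) == row[-1]:
            hits += 1
    print '%d / %d training samples reproduced' % (hits, len(dataSet))  # expect 20 / 20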
