Wuhan University of Technology, School of Science, Department of Mathematics
Course Computer Lab Report

Course: Data Mining        Class: Information and Computing Science 1201        Date: June 9
Students: Zhang Xujun (26), Li Xuemei (35), Zhang Xiaoting (33)        Lab room: Mathematics 207
Grade:                     Instructor's signature:
Experiment: Liver-cancer prediction with a decision tree        Software used: Python

Objective and content

Objective: become familiar with the basic idea of decision-tree classification and practice the ID3 algorithm on a concrete example.
Content: classify the given liver-cancer data, build the decision tree, and verify its correctness and applicability.

Principle and procedure

For convenience, before coding, the values of the ten attributes in the experimental data are encoded as the digits 1-4:

X1: no=1, light=2, mid=3, serious=4
X2: no=1, branch=2, trunk=3
X3: positive=1, negative=2
X4: positive=1, negative=2
X5: rightliver=1, leftliver=2, allliver=3
X6: small=1, middle=2, big=3, verybig=4
X7: dilation=1, infiltration=2
X8: no=1, part=2, integrate=3
X9: no=1, have=2
X10: no=1, less=2, much=3

Code:

# -*- coding: utf-8 -*-
import math
import operator

# Compute the Shannon entropy in two steps: first count the class
# frequencies, then apply the entropy formula.
def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * math.log(prob, 2)
    return shannonEnt

def createDataSet():
    dataSet = [
        [3,2,2,2,1,2,1,2,1,2,'Y'], [3,3,1,1,1,2,2,1,2,3,'N'],
        [4,1,2,1,2,3,1,1,1,3,'Y'], [1,1,2,2,3,4,1,3,1,3,'Y'],
        [2,2,1,1,1,1,2,3,2,1,'N'], [3,3,1,2,1,2,2,2,1,1,'Y'],
        [2,2,1,2,1,1,2,1,2,3,'Y'], [1,3,2,1,3,3,1,2,1,2,'N'],
        [3,2,1,2,1,2,1,3,2,2,'N'], [1,1,2,1,1,4,1,2,1,1,'N'],
        [4,3,2,2,1,3,2,3,2,2,'N'], [2,3,1,2,3,1,1,1,1,2,'Y'],
        [1,1,2,1,1,4,2,2,1,3,'N'], [1,2,2,2,3,4,2,3,2,1,'N'],
        [4,2,1,1,1,3,2,2,2,2,'Y'], [3,1,2,1,1,2,1,3,2,3,'N'],
        [3,2,2,2,1,2,1,3,1,2,'N'], [2,3,2,1,2,1,2,1,1,1,'Y'],
        [1,3,2,1,1,4,2,1,1,1,'N'], [1,1,1,1,1,4,1,2,1,2,'Y'],
    ]
    labels = ['X1','X2','X3','X4','X5','X6','X7','X8','X9','X10']
    return dataSet, labels

# Split the data set: gather every sample whose value in column `axis`
# equals `value`, dropping that column itself (it is no longer needed).
def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]        # chop out the axis used for splitting
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet

# Choose the best attribute to split on. The idea is simple: try splitting
# on every attribute and keep the one with the largest information gain.
# A set is used to extract the unique values of a column.
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1    # the last column is the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):         # iterate over all the features
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)       # the unique values of this feature
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy   # info gain = reduction in entropy
        if infoGain > bestInfoGain:      # compare to the best gain so far
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature                   # returns the column index

# The recursion consumes one attribute per level, so the attributes can run
# out before every branch is pure; in that case the leaf's class is decided
# by majority vote.
def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(),
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]

# Build the decision tree recursively. `labels` holds the attribute names,
# which only serve to make the resulting tree easier to read.
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]              # stop splitting when all classes are equal
    if len(dataSet[0]) == 1:             # stop splitting when no features are left
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del labels[bestFeat]
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]            # copy labels so recursion does not clobber them
        myTree[bestFeatLabel][value] = createTree(
            splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree

def classify(inputTree, featLabels, testVec):
    firstStr = list(inputTree.keys())[0]
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)
    key = testVec[featIndex]
    valueOfFeat = secondDict[key]
    if isinstance(valueOfFeat, dict):
        classLabel = classify(valueOfFeat, featLabels, testVec)
    else:
        classLabel = valueOfFeat
    return classLabel

def getResult():
    dataSet, labels = createDataSet()
    # print(calcShannonEnt(dataSet))
    # print(chooseBestFeatureToSplit(dataSet))
    mtree = createTree(dataSet, labels)  # note: createTree consumes `labels`
    print(mtree)
    featLabels = ['X1','X2','X3','X4','X5','X6','X7','X8','X9','X10']
    print(classify(mtree, featLabels, [3,1,2,1,1,2,1,3,2,3]))
    print(classify(mtree, featLabels, [3,2,2,2,1,2,1,3,1,2]))
    print(classify(mtree, featLabels, [2,3,2,1,2,1,2,1,1,1]))
    print(classify(mtree, featLabels, [1,3,2,1,1,4,2,1,1,1]))
    print(classify(mtree, featLabels, [1,1,1,1,1,4,1,2,1,2]))

if __name__ == '__main__':
    getResult()

Results and analysis

Step 1: the 20 samples were first used as training data to build the decision tree; the same 20 samples were then used as test data to check the tree's correctness and applicability. Every prediction agreed with the recorded outcome. The tree printed by the program:

{'X8': {1: {'X1': {1: 'N', 2: 'Y', 3: 'N', 4: 'Y'}},
        2: {'X1': {1: {'X3': {1: 'Y', 2: 'N'}}, 3: 'Y', 4: 'Y'}},
        3: {'X1': {1: {'X2': {1: 'Y', 2: 'N'}}, 2: ...
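The base entropy that calcShannonEnt returns can be checked by hand. The 20 training samples contain 9 'Y' and 11 'N' labels, so H = -(9/20)·log2(9/20) - (11/20)·log2(11/20) ≈ 0.993 bits. A minimal stand-alone check (independent of the report's functions; it uses collections.Counter instead of the report's counting loop):

```python
import math
from collections import Counter

# Class labels of the 20 training samples, in the order they appear in the report.
labels = ['Y','N','Y','Y','N','Y','Y','N','N','N',
          'N','Y','N','N','Y','N','N','Y','N','Y']

counts = Counter(labels)   # tallies to 9 'Y' and 11 'N'
n = len(labels)

# Shannon entropy: H = -sum(p * log2(p)) over the class probabilities.
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(round(entropy, 3))   # -> 0.993
```

A distribution closer to 50/50 would push H toward its maximum of 1 bit; a pure class list would give H = 0, which is exactly the stopping condition the tree-building recursion exploits.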
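The root that ID3 selects can also be verified directly: computing the information gain of each of the ten attributes over the 20 encoded samples shows that X8 has the largest gain, which is why X8 sits at the root of the printed tree. A stand-alone sketch (the helper names entropy and info_gain are mine, not from the report):

```python
import math

# The 20 encoded training samples from the report; the last entry is the class.
dataSet = [
    [3,2,2,2,1,2,1,2,1,2,'Y'], [3,3,1,1,1,2,2,1,2,3,'N'],
    [4,1,2,1,2,3,1,1,1,3,'Y'], [1,1,2,2,3,4,1,3,1,3,'Y'],
    [2,2,1,1,1,1,2,3,2,1,'N'], [3,3,1,2,1,2,2,2,1,1,'Y'],
    [2,2,1,2,1,1,2,1,2,3,'Y'], [1,3,2,1,3,3,1,2,1,2,'N'],
    [3,2,1,2,1,2,1,3,2,2,'N'], [1,1,2,1,1,4,1,2,1,1,'N'],
    [4,3,2,2,1,3,2,3,2,2,'N'], [2,3,1,2,3,1,1,1,1,2,'Y'],
    [1,1,2,1,1,4,2,2,1,3,'N'], [1,2,2,2,3,4,2,3,2,1,'N'],
    [4,2,1,1,1,3,2,2,2,2,'Y'], [3,1,2,1,1,2,1,3,2,3,'N'],
    [3,2,2,2,1,2,1,3,1,2,'N'], [2,3,2,1,2,1,2,1,1,1,'Y'],
    [1,3,2,1,1,4,2,1,1,1,'N'], [1,1,1,1,1,4,1,2,1,2,'Y'],
]

def entropy(rows):
    """Shannon entropy of the class column (the last entry of each row)."""
    counts = {}
    for r in rows:
        counts[r[-1]] = counts.get(r[-1], 0) + 1
    return -sum((c / len(rows)) * math.log2(c / len(rows))
                for c in counts.values())

def info_gain(rows, col):
    """Gain = base entropy minus the weighted entropy of each value's subset."""
    gain = entropy(rows)
    for v in set(r[col] for r in rows):
        subset = [r for r in rows if r[col] == v]
        gain -= (len(subset) / len(rows)) * entropy(subset)
    return gain

gains = {f'X{i+1}': round(info_gain(dataSet, i), 3) for i in range(10)}
best = max(gains, key=gains.get)
print(gains)
print(best)   # -> 'X8' (gain ~0.165), matching the root of the report's tree
```

Running the same computation on the subset with X8 = 1 would then pick X1, and so on down the tree; each level of the printed dictionary corresponds to one such greedy gain maximisation.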
