1 Give the definitions or your comprehensions of the following terms. (12')
(a) The inductive learning hypothesis (P17)
(b) Overfitting (P49)
(c) Consistent learner (P148)

2 Give brief answers to the following questions. (15')
(a) If the size of a version space is |VS|, in general, what is the smallest number of queries that may be required by a concept learner using an optimal query strategy to perfectly learn the target concept? (P27)
(b) In general, decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. What expression does the following decision tree correspond to?
[Decision tree figure omitted; only its Yes/No leaf labels survived extraction.]
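For part (a), the textbook's answer (P27) is ⌈log₂|VS|⌉ queries: each optimally chosen query instance is classified positive by roughly half of the remaining hypotheses, so every answer discards about half of the version space, as in binary search. A minimal sketch of that halving argument (the function name is mine):

```python
import math

def queries_needed(vs_size):
    """Worst-case number of membership queries needed to single out the
    target concept, assuming each query can be chosen so that the
    version space is split roughly in half (the optimal strategy)."""
    queries = 0
    while vs_size > 1:
        vs_size = math.ceil(vs_size / 2)  # the larger half may survive
        queries += 1
    return queries

# Matches the closed form ceil(log2 |VS|):
for n in (2, 7, 8, 100):
    print(n, queries_needed(n), math.ceil(math.log2(n)))
```

When |VS| is not a power of two, the worst case keeps the larger half alive, which is exactly why the ceiling appears in the closed form.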
3 Give an explanation of inductive bias, and list the inductive bias of the CANDIDATE-ELIMINATION algorithm, of decision tree learning (ID3), and of the BACKPROPAGATION algorithm. (10')

4 How can overfitting be dealt with in decision tree learning and in neural networks?
Solution: Decision tree: stop growing the tree earlier; post-pruning. Neural network: weight decay; use of a validation set.

5 Prove that the LMS weight update rule w_i <- w_i + η(V_train(b) − V̂(b)) x_i performs a gradient descent to minimize the squared error. In particular, define the squared error E as in the text. Now calculate the derivative of E with respect to the weight w_i, assuming that V̂(b) is a linear function as defined in the text. Gradient descent is achieved by updating each weight in proportion to −∂E/∂w_i. Therefore, you must show that the LMS training rule alters weights in this proportion for each training example it encounters.
( E = Σ_{b ∈ training examples} (V_train(b) − V̂(b))² )
Solution: As V_train(b) <- V̂(Successor(b)), we get E = Σ_b (V_train(b) − V̂(b))², where
V̂(b) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_3 + w_4 x_4 + w_5 x_5 + w_6 x_6.
For a single training example b,
∂E/∂w_i = −2 (V_train(b) − V̂(b)) ∂V̂(b)/∂w_i = −2 (V_train(b) − V̂(b)) x_i.
The LMS rule w_i <- w_i + η (V_train(b) − V̂(b)) x_i can therefore be rewritten as w_i <- w_i + (η/2)(−∂E/∂w_i).
Thus gradient descent is achieved by updating each weight in proportion to −∂E/∂w_i, and the LMS rule alters weights in this proportion for each training example it encounters.

6 True or false: if decision tree D2 is an elaboration of tree D1, then D1 is more_general_than D2. Assume D1 and D2 are decision trees representing arbitrary boolean functions, and that D2 is an elaboration of D1 if ID3 could extend D1 to D2. If true, give a proof; if false, a counterexample. (10')
(Definition: Let h_j and h_k be boolean-valued functions defined over X. Then h_j is more_general_than_or_equal_to h_k (written h_j ≥_g h_k) if and only if (∀x ∈ X)[(h_k(x) = 1) → (h_j(x) = 1)], and h_j is more_general_than h_k (written h_j >_g h_k) if and only if (h_j ≥_g h_k) ∧ ¬(h_k ≥_g h_j).)
Solution: The hypothesis is false. One counterexample is the target concept A XOR B: if A ≠ B the training examples are all positive, and if A = B the training examples are all negative. Then, using ID3 to extend D1, the new tree D2 will be equivalent to D1. Since D2 is equal to D1, each is ≥_g the other, so D1 is not (strictly) more_general_than D2.
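The definition in question 6 can be checked mechanically for small boolean functions. A sketch (all names are mine) that confirms the point of the counterexample: two equivalent trees are each ≥_g the other, so neither is strictly more general; for contrast, OR is strictly more general than AND.

```python
from itertools import product

def more_general_or_equal(hj, hk, n_vars):
    """h_j >=_g h_k: every instance that h_k labels positive,
    h_j labels positive as well (checked over all 0/1 instances)."""
    return all(hj(x) >= hk(x) for x in product((0, 1), repeat=n_vars))

def strictly_more_general(hj, hk, n_vars):
    """h_j >_g h_k: >=_g holds one way but not the other."""
    return (more_general_or_equal(hj, hk, n_vars)
            and not more_general_or_equal(hk, hj, n_vars))

# The counterexample: D1 represents A XOR B, and the elaboration D2
# computes the same function.
d1 = lambda x: x[0] ^ x[1]
d2 = lambda x: x[0] ^ x[1]   # equivalent elaboration

print(more_general_or_equal(d1, d2, 2))   # True: they agree everywhere
print(strictly_more_general(d1, d2, 2))   # False: D1 is not more_general_than D2

# A genuinely strict case, for contrast:
or_f = lambda x: x[0] | x[1]
and_f = lambda x: x[0] & x[1]
print(strictly_more_general(or_f, and_f, 2))  # True
```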
7 Design a two-input perceptron that implements the boolean function A ∧ ¬B. Design a two-layer network of perceptrons that implements A XOR B. (10')

8 Suppose a hypothesis space contains three hypotheses h_1, h_2, and h_3, and the posterior probabilities of these hypotheses given the training data are …, …, and …, respectively. If a new instance x is encountered, which is classified positive by h_1 but negative by h_2 and h_3, give the result and the detailed classification course of the Bayes optimal classifier. (10') P125
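The posterior values in question 8 are not legible above, so the sketch below assumes the textbook's illustrative values P(h1|D) = 0.4, P(h2|D) = P(h3|D) = 0.3 purely for demonstration; the function name is mine. The Bayes optimal classifier sums, for each candidate label, the posteriors of the hypotheses voting for that label, and returns the label with the larger total:

```python
def bayes_optimal(posteriors, predictions, labels=(1, 0)):
    """Bayes optimal classification: choose the label v that maximizes
    sum over hypotheses h of P(v|h) * P(h|D), where P(v|h) is 1 if
    hypothesis h predicts v and 0 otherwise."""
    score = {v: sum(p for p, pred in zip(posteriors, predictions) if pred == v)
             for v in labels}
    return max(score, key=score.get), score

# Assumed illustrative posteriors (the exam's actual numbers are illegible):
posteriors = [0.4, 0.3, 0.3]   # P(h1|D), P(h2|D), P(h3|D)
predictions = [1, 0, 0]        # h1 says positive; h2 and h3 say negative
label, score = bayes_optimal(posteriors, predictions)
print(label, score)            # negative wins: 0.3 + 0.3 > 0.4
```

Under these assumed values the most probable single hypothesis (h1) says positive, yet the Bayes optimal classification is negative, which is exactly the phenomenon the question is probing.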
9 Suppose S is a collection of training-example days described by attributes including Humidity, which can have the values High or Normal. Assume S is a collection containing 10 examples, [7+, 3−]. Of these 10 examples, suppose 3 of the positive and 2 of the negative examples have Humidity = High, and the remainder have Humidity = Normal. Please calculate the information gain due to sorting the original 10 examples by the attribute Humidity.
(log₂1 = 0, log₂2 = 1, log₂3 ≈ 1.585, log₂4 = 2, log₂5 ≈ 2.322, log₂6 ≈ 2.585, log₂7 ≈ 2.807, log₂8 = 3, log₂9 ≈ 3.170, log₂10 ≈ 3.322) (5')
Solution:
(a) Here we denote S = [7+, 3−]; then Entropy(S) = Entropy([7+, 3−]) = −(7/10) log₂(7/10) − (3/10) log₂(3/10) ≈ 0.881.
(b) Gain(S, Humidity) = Entropy(S) − Σ_{v ∈ Values(Humidity)} (|S_v| / |S|) Entropy(S_v), where Values(Humidity) = {High, Normal}.
S_High = {s ∈ S | Humidity(s) = High} = [3+, 2−], |S_High| = 5, and
Entropy(S_High) = −(3/5) log₂(3/5) − (2/5) log₂(2/5) ≈ 0.971.
S_Normal = [4+, 1−], |S_Normal| = 5, and
Entropy(S_Normal) = −(4/5) log₂(4/5) − (1/5) log₂(1/5) ≈ 0.722.
Thus Gain(S, Humidity) = 0.881 − ((5/10) × 0.971 + (5/10) × 0.722) ≈ 0.035.
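The arithmetic in question 9 is easy to confirm with a few lines of Python (function and variable names are mine):

```python
from math import log2

def entropy(pos, neg):
    """Entropy, in bits, of a boolean-labelled sample with
    `pos` positive and `neg` negative examples."""
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        if count:                       # 0 * log2(0) is taken as 0
            p = count / total
            result -= p * log2(p)
    return result

# S = [7+, 3-]; Humidity=High covers [3+, 2-], Humidity=Normal covers [4+, 1-].
e_s = entropy(7, 3)
e_high = entropy(3, 2)
e_normal = entropy(4, 1)
gain = e_s - (5 / 10) * e_high - (5 / 10) * e_normal
print(round(e_high, 3), round(e_normal, 3), round(gain, 3))  # 0.971 0.722 0.035
```

The gain is small because both branches are nearly as impure as S itself: Humidity barely separates the positives from the negatives in this sample.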
10 Finish the following algorithm. (10')
(1) GRADIENT-DESCENT(training_examples, η)
Each training example is a pair of the form <x⃗, t>, where x⃗ is the vector of input values and t is the target output value. η is the learning rate.
  Initialize each w_i to some small random value
  Until the termination condition is met, Do
    Initialize each Δw_i to zero
    For each <x⃗, t> in training_examples, Do
      Input the instance x⃗ to the unit and compute the output o
      For each linear unit weight w_i, Do
        ____________________
    For each linear unit weight w_i, Do
      ____________________
(2) FIND-S Algorithm
  ____________________
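In Mitchell's statement of GRADIENT-DESCENT (Table 4.1), the two blanks above are filled by Δw_i <- Δw_i + η(t − o)x_i in the inner loop and w_i <- w_i + Δw_i in the outer loop. A runnable sketch of the completed procedure for a linear unit (all names, the learning rate, and the toy data are my own choices):

```python
import random

def gradient_descent(training_examples, eta=0.05, epochs=200):
    """Batch gradient descent for a linear unit o = w . x.
    training_examples: list of (x_vector, target) pairs; include a
    leading 1 in x if a bias weight is wanted."""
    n = len(training_examples[0][0])
    w = [random.uniform(-0.05, 0.05) for _ in range(n)]
    for _ in range(epochs):
        delta = [0.0] * n                                  # each Δw_i <- 0
        for x, t in training_examples:
            o = sum(wi * xi for wi, xi in zip(w, x))       # unit output
            for i in range(n):
                delta[i] += eta * (t - o) * x[i]           # Δw_i <- Δw_i + η(t−o)x_i
        for i in range(n):
            w[i] += delta[i]                               # w_i <- w_i + Δw_i
    return w

# Toy target: t = 1 + 2*x1 - 3*x2, with a constant 1 as the bias input.
random.seed(0)
data = [((1, a, b), 1 + 2 * a - 3 * b)
        for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)]
w = gradient_descent(data)
print([round(v, 2) for v in w])   # close to [1.0, 2.0, -3.0]
```

Because the weight changes are accumulated over all examples before being applied, this is the batch (true gradient) version; applying the update immediately after each example instead would give the incremental (delta rule / LMS) variant discussed in question 5.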