中科院機器學習題庫 (Machine Learning Problem Bank, Chinese Academy of Sciences)

I. Maximum Likelihood

1. ML estimation of exponential model (10)

A Gaussian distribution is often used to model data on the real line, but is sometimes inappropriate when the data are often close to zero but constrained to be nonnegative. In such cases one can fit an exponential distribution, whose probability density function is given by

p(x) = \frac{1}{b} e^{-x/b}

Given N observations x_i drawn from such a distribution:

(a) Write down the likelihood as a function of the scale parameter b.
(b) Write down the derivative of the log likelihood.
(c) Give a simple expression for the ML estimate for b.
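Part (c)'s answer can be checked numerically: for the density above, maximizing the log-likelihood gives the sample mean, \hat{b} = \frac{1}{N}\sum_i x_i. The sketch below (synthetic data and a grid search, added here for illustration and not part of the original exam) compares a brute-force maximizer with the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)  # synthetic data, true b = 2

def log_likelihood(b, x):
    # l(b) = -N log b - (1/b) * sum_i x_i
    return -x.size * np.log(b) - x.sum() / b

# brute-force maximization over a grid of candidate scale parameters
grid = np.linspace(0.5, 5.0, 2001)
b_grid = grid[np.argmax([log_likelihood(b, x) for b in grid])]
b_closed = x.mean()  # closed-form ML estimate from part (c)
```

The two estimates agree up to the grid resolution, which is the point of the check.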

2. The same, with a Poisson distribution: p(x \mid \lambda) = \frac{\lambda^x}{x!} e^{-\lambda}, \quad x = 0, 1, 2, \dots

l(\lambda) = \sum_{i=1}^{N} \log p(x_i \mid \lambda) = \sum_{i=1}^{N} \left[ x_i \log\lambda - \lambda - \log(x_i!) \right] = \left( \sum_{i=1}^{N} x_i \right) \log\lambda - N\lambda - \sum_{i=1}^{N} \log(x_i!)

II. Bayes

假設(shè)在考試的多項(xiàng)選擇中,考生知道正確答案的概率為p,猜測(cè)答案的概率為1-p,并且假

設(shè)考生知道正確答案答對(duì)題的概率為1,猜中正確答案的概率為1m,其中m為多項(xiàng)選擇項(xiàng)的數(shù)目。那么已知考生答對(duì)題目,求他知道正確答案的概率。1、

p(\text{known} \mid \text{correct}) = \frac{p(\text{known}, \text{correct})}{p(\text{correct})} = \frac{p}{p + (1 - p)\frac{1}{m}}

2. Conjugate priors

The readings for this week include discussion of conjugate priors. Given a likelihood p(x \mid \theta) for a class of models with parameters \theta, a conjugate prior is a distribution p(\theta \mid \gamma) with hyperparameters \gamma, such that the posterior distribution

p(\theta \mid X, \gamma) \propto p(X \mid \theta)\, p(\theta \mid \gamma) = p(\theta \mid \gamma')

與先驗(yàn)的分布族一致

(a) Suppose that the likelihood is given by the exponential distribution with rate parameter \lambda: p(x \mid \lambda) = \lambda e^{-\lambda x}

Show that the gamma distribution

\text{Gamma}(\lambda \mid \alpha, \beta) = \frac{\beta^{\alpha} \lambda^{\alpha - 1} e^{-\beta\lambda}}{\Gamma(\alpha)}

is a conjugate prior for the exponential. Derive the parameter update given observations x_1, \dots, x_N and the prediction distribution p(x_{N+1} \mid x_1, \dots, x_N).
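For reference, the standard update for this pair is \alpha' = \alpha + N, \beta' = \beta + \sum_i x_i, and integrating the rate out of the exponential gives a Lomax (Pareto type II) predictive density. A sketch under those formulas (function names and numbers here are illustrative, not the exam's):

```python
import numpy as np

def gamma_exponential_update(alpha, beta, xs):
    """Conjugate update for an exponential likelihood with a
    Gamma(alpha, beta) prior on the rate: alpha' = alpha + N,
    beta' = beta + sum(x_i)."""
    xs = np.asarray(xs, dtype=float)
    return alpha + xs.size, beta + xs.sum()

def predictive_density(x_next, alpha, beta):
    # Lomax density obtained by integrating the rate out:
    # p(x) = alpha * beta^alpha / (beta + x)^(alpha + 1)
    return alpha * beta**alpha / (beta + x_next) ** (alpha + 1)

a_post, b_post = gamma_exponential_update(2.0, 1.0, [0.5, 1.5, 1.0])
# a_post = 5.0 (2 + 3 observations), b_post = 4.0 (1 + 3.0 total)
```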

(b) Show that the beta distribution is a conjugate prior for the geometric distribution

p(x = k \mid \theta) = \theta (1 - \theta)^{k - 1}

which describes the number of times a coin is tossed until the first heads appears, when the probability of heads on each toss is \theta. Derive the parameter update rule and prediction distribution.
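Analogously, a sketch of the beta-geometric update, a' = a + N and b' = b + \sum_i (k_i - 1), with the predictive p(k \mid \text{data}) = B(a' + 1, b' + k - 1)/B(a', b') (again, the names and numbers below are illustrative):

```python
from math import exp, lgamma

def log_beta(a, b):
    # log of the Beta function, via log-gamma for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_geometric_update(a, b, ks):
    """Beta posterior after geometric observations k_1..k_N:
    a' = a + N (one success per sequence), b' = b + sum(k_i - 1) failures."""
    return a + len(ks), b + sum(k - 1 for k in ks)

def predictive(k, a, b):
    # p(k | data) = B(a + 1, b + k - 1) / B(a, b)
    return exp(log_beta(a + 1, b + k - 1) - log_beta(a, b))

a_post, b_post = beta_geometric_update(1.0, 1.0, [3, 1, 2])
# a_post = 4.0 (1 + 3 successes), b_post = 4.0 (1 + 3 failures)
```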

(c) Suppose p(\theta \mid \gamma) is a conjugate prior for the likelihood p(x \mid \theta); show that the mixture prior

p(\theta \mid \gamma_1, \dots, \gamma_M) = \sum_{m=1}^{M} w_m\, p(\theta \mid \gamma_m)

is also conjugate for the same likelihood, assuming the mixture weights w_m sum to 1.
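One way to see this (a sketch, with Z_m denoting the per-component marginal likelihood \int p(x \mid \theta)\, p(\theta \mid \gamma_m)\, d\theta and \gamma'_m the conjugate update of \gamma_m):

```latex
p(\theta \mid x, \gamma_1, \dots, \gamma_M)
  \propto p(x \mid \theta) \sum_{m=1}^{M} w_m\, p(\theta \mid \gamma_m)
  = \sum_{m=1}^{M} w_m Z_m\, p(\theta \mid \gamma'_m)
  \propto \sum_{m=1}^{M} w'_m\, p(\theta \mid \gamma'_m),
\qquad
w'_m = \frac{w_m Z_m}{\sum_{m'=1}^{M} w_{m'} Z_{m'}}
```

So the posterior is again a mixture in the same family, with each component updated as usual and the weights rescaled by the component marginal likelihoods.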

(d) Repeat part (c) for the case where the prior is a single distribution and the likelihood is a mixture, and the prior is conjugate for each mixture component of the likelihood.

Some priors can be conjugate for several different likelihoods; for example, the beta is conjugate for the Bernoulli and the geometric distributions, and the gamma is conjugate for the exponential and for the gamma with fixed \alpha.

(e) (Extra credit, 20) Explore the case where the likelihood is a mixture with fixed components and unknown weights; i.e., the weights are the parameters to be learned.

III. True/False

(1) Given n data points, if half are used for training and half for testing, the gap between the training error and the test error will shrink as n increases.

(2) The maximum likelihood estimate is unbiased and has the smallest variance among all unbiased estimators, so the maximum likelihood estimate has the smallest risk.
(3) For regression functions A and B, if A is simpler than B, then A will almost surely perform better than B on the test set.
(4) Global linear regression uses all sample points to predict the output for a new input, whereas local linear regression uses only the samples near the query point to predict the output; therefore global linear regression is computationally more expensive than local linear regression.

(5) Boosting and Bagging both combine multiple classifiers by voting, and both set an individual classifier's weight according to its accuracy.

(6) In the boosting iterations, the training error of each new decision stump and the training error of the combined classifier vary roughly in concert. (F)

While the training error of the combined classifier typically decreases as a function of boosting iterations, the error of the individual decision stumps typically increases, since the example weights become concentrated at the most difficult examples.

(7) One advantage of Boosting is that it does not overfit. (F)

(8) Support vector machines are resistant to outliers, i.e., very noisy examples drawn from a different distribution. (F)

(9) In regression analysis, best-subset selection can perform feature selection, but its computational cost is high when the number of features is large; ridge regression and the Lasso are computationally cheaper, and the Lasso can also perform feature selection.

(10)當(dāng)訓(xùn)練數(shù)據(jù)較少時(shí)更簡(jiǎn)單發(fā)生過擬合。

(11) Gradient descent can sometimes get stuck in local minima, but the EM algorithm cannot.

(12) In kernel regression, the parameter that most affects the balance between overfitting and underfitting is the width of the kernel function.
(13) In the AdaBoost algorithm, the weights on all the misclassified points will go up by the same multiplicative factor. (T)
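Statement (13) follows from the AdaBoost update: every misclassified point is scaled by e^{\alpha} and every correctly classified one by e^{-\alpha} before renormalization, so all misclassified points share the same multiplicative factor. A toy step (the weights and the correctness mask below are invented for illustration):

```python
import numpy as np

def adaboost_reweight(w, correct):
    """One AdaBoost reweighting step: misclassified points are scaled by
    exp(alpha), correctly classified ones by exp(-alpha), then normalized."""
    err = w[~correct].sum()                  # weighted error of the weak learner
    alpha = 0.5 * np.log((1 - err) / err)
    w = w * np.where(correct, np.exp(-alpha), np.exp(alpha))
    return w / w.sum()

w = np.full(6, 1 / 6)                        # uniform initial weights
correct = np.array([True, True, True, True, False, False])
w_new = adaboost_reweight(w, correct)
# after one step, the misclassified points carry half of the total weight
```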

(14) True/False: In a least-squares linear regression problem, adding an L2 regularization penalty cannot decrease the L2 error of the solution w* on the training data. (T)
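Statement (14) can be probed numerically: the OLS solution minimizes the training L2 error by definition, so a ridge penalty with \lambda > 0 can only leave that error unchanged or increase it. A synthetic sketch (all data and values below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

def ridge_fit(lam):
    # closed-form ridge solution; lam = 0 recovers ordinary least squares
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def train_err(w):
    return float(np.sum((y - X @ w) ** 2))

errs = [train_err(ridge_fit(lam)) for lam in (0.0, 0.1, 1.0, 10.0)]
# the training error is non-decreasing as lambda grows
```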

(15) True/False: In a least-squares linear regression problem, adding an L2 regularization penalty always decreases the expected L2 error of the solution w* on unseen test data. (F)
(16) Besides the EM algorithm, gradient descent can also be used to estimate the parameters of a Gaussian mixture model. (T)

(20) Any decision boundary that we get from a generative model with class-conditional Gaussian distributions could in principle be reproduced with an SVM and a polynomial kernel.

True! In fact, since class-conditional Gaussians always yield quadratic decision boundaries, they can be reproduced with an SVM with a kernel of degree less than or equal to two.

(21) AdaBoost will eventually reach zero training error, regardless of the type of weak classifier it uses, provided enough weak classifiers have been combined.

False! If the data is not separable by a linear combination of the weak classifiers, AdaBoost can't achieve zero training error.

(22) The L2 penalty in a ridge regression is equivalent to a Laplace prior on the weights. (F)

(23) The log-likelihood of the data will always increase through successive iterations of the expectation-maximization algorithm. (F)

4. Midterm 2023, problem 4

6. Consider two classifiers: 1) an SVM with a quadratic (second-order polynomial) kernel function and 2) an unconstrained mixture of two Gaussians model, one Gaussian per class label. These classifiers try to map examples in R^2 to binary labels. We assume that the problem is separable, no slack penalties are added to the SVM classifier, and that we have sufficiently many training examples to estimate the covariance matrices of the two Gaussians.
