Foundations of Machine Learning

Artificial Neural Networks

Outline:
- Artificial Neural Networks: Introduction
- Single Layer Neural Networks
- Multiple Layer Neural Networks
- Self-Organizing Map (SOM)
- Other Neural Networks
- sklearn.neural_network

Artificial Neural Networks: Introduction

The brain and the neurons
- Neurons are the building blocks of the brain.
- Their interconnectivity forms the programming that allows us to solve all our everyday tasks.
- They are able to perform parallel and fault-tolerant computation.
- Theoretical models of how the neurons in the brain work and how they learn have been developed since the beginning of Artificial Intelligence.
- Most of these models are really simple (yet powerful) and bear only a slim resemblance to real brain neurons.

A neuron model
In 1943, McCulloch and Pitts abstracted the "M-P neuron model" [McCulloch and Pitts, 1943]. In this model, a neuron receives input signals from n other neurons; the inputs are transmitted over weighted connections, the total input received is compared with the neuron's threshold, and the result is processed by an "activation function" to produce the neuron's output.

The ideal activation function is the step function, which maps the input to the output "0" or "1": "1" corresponds to an excited neuron and "0" to an inhibited one. However, the step function is discontinuous and non-smooth, so in practice the sigmoid is commonly used as the activation function instead. It squashes input values that may vary over a wide range into the (0, 1) output interval, and is therefore sometimes called a "squashing function".
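A minimal sketch of the M-P neuron with the two activation functions just described (not from the slides; all names are illustrative):

```python
import numpy as np

def step(z):
    """Ideal activation: maps net input to 0/1 (inhibited/excited)."""
    return np.where(z >= 0.0, 1.0, 0.0)

def sigmoid(z):
    """Smooth 'squashing' activation: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def mp_neuron(x, w, theta, activation=step):
    """M-P neuron: compare the weighted input sum with the threshold theta,
    then pass the difference through the activation function."""
    return activation(np.dot(w, x) - theta)

# Example: two inputs, equal weights, threshold 1.5 (behaves like AND)
x = np.array([1.0, 1.0])
w = np.array([1.0, 1.0])
print(mp_neuron(x, w, 1.5))           # step output: 1.0
print(mp_neuron(x, w, 1.5, sigmoid))  # sigmoid output: ~0.62
```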

1)輸出值范圍內(nèi),因此有時也稱為"擠壓函數(shù)"(squashingfunction).2023/11/4ArtificialNeuralNetworksLesson10-5ArtificialNeuralNetworks:IntroductionOrganizationofneurons/Networks,神經(jīng)元組織/網(wǎng)絡(luò)Usuallyneuronsareinterconnectedformingnetworks,therearebasicallytwoarchitecturesFeedforwardnetworks(前饋網(wǎng)絡(luò)),neuronsareconnectedonlyinonedirectionRecurrentnetworks(遞歸網(wǎng)絡(luò),或者循環(huán)網(wǎng)絡(luò)),outputscanbeconnectedtotheinputsFeedforwardnetworksareorganizedinlayers,oneconnectedtotheotherSinglelayerneuralnetworks(perceptronnetworks,感知器網(wǎng)絡(luò)):inputlayer(輸入層),outputlayer(輸出層)Multiplelayerneuralnetworks:inputlayer,hiddenlayers(隱層),outputlayer2023/11/4ArtificialNeuralNetworksLesson10-6ArtificialNeuralNetworks:IntroductionNeuronsaslogicgates(神經(jīng)元作為邏輯門)InitialresearchinANNdefinedneuronsasfunctionscapableofemulatelogicgates(Thresholdlogicalunits,TLU,閾值邏輯單元)Inputsxi

∈{0,1},weightswi

∈{+1,?1},thresholdw0

∈R,activationfunction?thresholdfunction:g(x)=1ifx≥w0,0otherwiseSetsofneuronscancomputeBooleanfunctionscomposingTLUsthatcomputeOR,ANDandNOTfunctions2023/11/4ArtificialNeuralNetworksLesson10-7ArtificialNeuralNetworks:IntroductionNeuronsaslogicgates(神經(jīng)元作為邏輯門)
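A minimal sketch (not from the slides) of TLUs realizing the three basic gates; the weight and threshold choices are the standard ones:

```python
import numpy as np

def tlu(x, w, w0):
    """Threshold logic unit: fires (returns 1) iff the weighted
    input sum reaches the threshold w0."""
    return 1 if np.dot(w, x) >= w0 else 0

# TLU realizations of the basic Boolean gates
def AND(a, b): return tlu(np.array([a, b]), np.array([1, 1]), w0=2)
def OR(a, b):  return tlu(np.array([a, b]), np.array([1, 1]), w0=1)
def NOT(a):    return tlu(np.array([a]),    np.array([-1]),   w0=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT:", NOT(0), NOT(1))  # 1 0
```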

Single Layer Neural Networks

The perceptron
A perceptron consists of two layers of neurons: the input layer receives the external input signals and passes them on to the output layer, whose neurons are M-P neurons.

The perceptron learning rule
The rule is very simple. For a training example (x, y), if the current perceptron output is y', the weights are adjusted as
  w_i ← w_i + Δw_i,  Δw_i = η(y − y')x_i
where η ∈ (0, 1) is the learning rate. (A runnable sketch follows after this section.)

Limitations of linear perceptrons
- With linear perceptrons we can only classify linearly separable problems correctly.
- The hypothesis space is not powerful enough for real problems.
- Example: the XOR function.
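A minimal sketch of the learning rule above (assuming a step-activation output unit, with the threshold folded into a bias weight; names are illustrative). On the linearly separable AND function it converges; on XOR it cannot:

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, epochs=50):
    """Perceptron rule: w_i <- w_i + eta * (y - y') * x_i.
    A constant 1 is appended to each input so the threshold is
    learned as an ordinary bias weight."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            out = 1 if np.dot(w, xi) >= 0 else 0
            w += eta * (target - out) * xi
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # linearly separable: converges
w = train_perceptron(X, y_and)
print([1 if np.dot(w, np.append(xi, 1)) >= 0 else 0 for xi in X])  # [0, 0, 0, 1]
# With y_xor = [0, 1, 1, 0] the same loop never finds a separating w.
```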
Multiple Layer Neural Networks

Multilayer Perceptron
To solve problems that are not linearly separable we need multiple layers of functional neurons; for the XOR problem, for example, a single hidden layer suffices.

In general, in a multilayer network the neurons of each layer are fully connected to the neurons of the next layer; there are no connections within a layer and none that skip layers. Such a structure is commonly called a "multi-layer feedforward neural network". The input layer neurons receive the external input, the hidden and output layer neurons process the signal, and the final result is emitted by the output layer neurons. In other words, the input layer merely accepts the input without applying any function; the hidden and output layers contain the functional neurons.

Learning multilayer networks
- In single layer networks the parameters to learn are the weights of only one layer.
- In the multilayer case we have a set of parameters for each layer, and each layer is fully connected to the next layer.
- For single layer networks with multiple outputs we can learn each output separately.
- In multilayer networks the different outputs are interconnected.

Back Propagation: intuitively
- The error of the single layer perceptron links the transformation of the input into the output directly.
- In the case of multiple layers each layer has its own error: the error of the output layer is computed directly from the true values, but the error of the hidden layers is more difficult to define.
- The idea is to use the error of the next layer to influence the weights of the previous layer. We are propagating the output error backwards, hence the name Back Propagation (BP).

Back Propagation: algorithm
The backpropagation algorithm works in two steps:
1. Propagate the examples through the network to obtain the output (forward propagation).
2. Propagate the output error layer by layer, updating the weights of the neurons (backpropagation).

BP is based on a gradient descent strategy: parameters are adjusted in the direction of the negative gradient of the objective, with the sigmoid as activation function. The learning objective of a multi-layer feedforward network is the mean squared error; for an example (x_k, y_k) with network output ŷ_k it takes the usual form E_k = (1/2) Σ_j (ŷ_j^k − y_j^k)².

BP algorithm: basic flow
Input: training set D = {(x_k, y_k)}, k = 1…m; learning rate η.
Process:
1: randomly initialize all connection weights and thresholds in the network within (0, 1)
2: repeat
3:   for all (x_k, y_k) in D do
4:     compute the output of the current example under the current parameters
5:     compute the gradient term of the output layer neurons
6:     compute the gradient term of the hidden layer neurons
7:     update the connection weights w_hj and v_ih, the output layer thresholds θ_j and the hidden layer thresholds γ_h
8:   end for
9: until the stopping condition is reached
Output: the multi-layer feedforward network determined by the connection weights and thresholds
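A compact sketch of this flow on the XOR problem (assumptions not in the slides: one hidden layer of four sigmoid units, full-batch updates, symmetric random initialization; all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR is not linearly separable, but one hidden layer learns it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initialization (the slides initialize in (0, 1); a symmetric
# range is used here to break symmetry more reliably)
V = rng.uniform(-1, 1, (2, 4)); gamma = rng.uniform(-1, 1, (1, 4))  # input -> hidden
W = rng.uniform(-1, 1, (4, 1)); theta = rng.uniform(-1, 1, (1, 1))  # hidden -> output

eta = 0.5
for epoch in range(10000):
    B = sigmoid(X @ V - gamma)          # forward: hidden outputs
    Yhat = sigmoid(B @ W - theta)       # forward: network outputs
    g = Yhat * (1 - Yhat) * (y - Yhat)  # gradient term, output layer
    e = B * (1 - B) * (g @ W.T)         # gradient term, hidden layer
    W += eta * B.T @ g                  # gradient-descent updates of
    theta -= eta * g.sum(axis=0, keepdims=True)  # weights and thresholds
    V += eta * X.T @ e
    gamma -= eta * e.sum(axis=0, keepdims=True)

print(np.round(sigmoid(sigmoid(X @ V - gamma) @ W - theta)))  # expect 0, 1, 1, 0
```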

Self-Organizing Map (SOM)

The Self-Organizing Map is one of the most popular neural network models. It belongs to the category of competitive learning networks. The SOM is based on unsupervised learning, which means that no human intervention is needed during learning and that little needs to be known about the characteristics of the input data. We could, for example, use the SOM for clustering data without knowing the class memberships of the input data. The SOM can be used to detect features inherent to the problem, and has therefore also been called SOFM, the Self-Organizing Feature Map.

Typical SOM structure
A SOM is a competitive-learning, unsupervised neural network that maps similar sample points from a high-dimensional space onto neighbouring neurons of the output layer. A typical SOM has two layers: the input layer models the retina that perceives external input, and the output layer models the cerebral cortex that produces the response.

SOM learning algorithm: training in brief
On receiving a training sample, every output layer neuron computes the distance between the sample and its own weight vector; the neuron with the smallest distance wins the competition and is called the best matching unit (BMU). The weight vectors of the BMU and of its neighbouring neurons are then adjusted so that their distance to the current input sample shrinks. This process iterates until convergence.
- Input layer: for an input sample x = [x1, x2, x3, …, xn], an n-dimensional vector, the input layer has n neurons.
- Output layer (competitive layer): the output neurons are usually arranged in a two-dimensional topology such as a grid; each neuron has its own weight vector. With m output neurons there are m weight vectors W_i = [w_i1, w_i2, …, w_in], 1 ≤ i ≤ m.

SOM learning algorithm: flow (see the sketch after this list)
1. Initialization: initialize the weights with small random values and normalize both the input vectors and the weights: X' = X/||X||, ω'_i = ω_i/||ω_i||, 1 ≤ i ≤ m, where ||X|| and ||ω_i|| are the Euclidean norms of the input sample vector and of the weight vector.
2. Present a sample to the network: take the dot product of the sample with each weight vector; the output neuron with the largest dot product wins the competition (equivalently, compute the Euclidean distance between the sample and each weight vector; the neuron with the smallest distance wins). This is the winning neuron.
3. Update the weights: update the neurons within the topological neighbourhood of the winner and renormalize the updated weights:
   ω(t+1) = ω(t) + η(t, n)·(x − ω(t))
   where the learning rate η(t, n) is a function of the training time t and of the topological distance n to the winning neuron, e.g. η(t, n) = η(t)·e^(−n).
4. Update the learning rate η and the topological neighbourhood N; N shrinks as time increases.
5. Check for convergence: if the learning rate η ≤ η_min or the preset number of iterations has been reached, stop; otherwise return to step 2.
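A minimal sketch of this flow (assumptions: Euclidean-distance matching as in the alternative of step 2, a neighbourhood factor e^(−n/radius) generalizing the e^(−n) above, and a linearly decaying learning rate; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, dim = 5, 5, 3          # 5x5 output grid, 3-D inputs
W = rng.random((grid_h, grid_w, dim))  # one weight vector per output neuron
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)  # grid positions

def train_som(X, W, epochs=20, eta0=0.5, radius0=2.0):
    for t in range(epochs):
        eta_t = eta0 * (1 - t / epochs)                 # decaying learning rate
        radius = max(radius0 * (1 - t / epochs), 1e-3)  # shrinking neighbourhood
        for x in X:
            # Step 2: best matching unit = smallest Euclidean distance
            d = np.linalg.norm(W - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Step 3: update neurons near the BMU, eta(t, n) = eta(t) * e^(-n/radius)
            n = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-n / radius)[..., None]
            W += eta_t * h * (x - W)
    return W

X = rng.random((100, 3))  # toy data on the unit cube
W = train_som(X, W)
print(W.shape)            # (5, 5, 3) trained codebook
```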

Other Neural Networks

RBF networks
An RBF (Radial Basis Function) network is a single-hidden-layer feedforward neural network that uses radial basis functions as the activation functions of the hidden layer neurons, while the output layer is a linear combination of the hidden neurons' outputs. Assume the input is a d-dimensional vector x and the output is a real value; the RBF network can then be expressed as (in its standard form, the slide's own formula having been lost in extraction)
  φ(x) = Σ_{i=1..q} w_i · ρ(x, c_i)
where q is the number of hidden neurons and c_i and w_i are the center and weight of the i-th hidden neuron. The commonly used Gaussian radial basis function has the form
  ρ(x, c_i) = e^(−β_i ||x − c_i||²)
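A minimal numpy sketch of this forward pass (the centers, widths and weights below are toy values; in practice the centers are typically chosen by clustering and the output weights fitted by least squares):

```python
import numpy as np

def rbf_forward(x, centers, betas, weights):
    """RBF network output: phi(x) = sum_i w_i * exp(-beta_i * ||x - c_i||^2)."""
    rho = np.exp(-betas * np.sum((centers - x) ** 2, axis=1))
    return weights @ rho

# Toy network: d = 2 inputs, q = 3 hidden RBF units
centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
betas   = np.array([1.0, 1.0, 2.0])
weights = np.array([0.5, -0.3, 0.8])

print(rbf_forward(np.array([0.5, 0.5]), centers, betas, weights))
```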

Restricted Boltzmann Machines (RBM)
The RBM is a generative stochastic neural network proposed by Hinton and Sejnowski in 1986. The network consists of visible units (corresponding to the visible variables, i.e. the data samples) and hidden units (corresponding to the hidden variables); both the visible and the hidden variables are binary, i.e. their states take values in {0, 1}. The whole network is a bipartite graph: edges exist only between visible and hidden units, and there are no connections among the visible units or among the hidden units.

RBMs are unsupervised nonlinear feature learners based on a probabilistic model. The features extracted by an RBM, or a hierarchy of RBMs, often give good results when fed into a linear classifier such as a linear SVM or a perceptron.

The neurons in an RBM are Boolean, taking only the states 0 and 1; state 1 means activated, state 0 means inhibited. Let the vector s ∈ {0, 1}^n denote the states of n neurons, ω_ij the connection weight between neurons i and j, and θ_i the threshold of neuron i. The energy of the Boltzmann machine in state vector s is then defined as (in its standard form)
  E(s) = −Σ_{i<j} ω_ij s_i s_j − Σ_i θ_i s_i
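scikit-learn, whose sklearn.neural_network module the outline mentions, ships this model as BernoulliRBM, trained with persistent contrastive divergence; a small usage sketch on toy binary data (the hyperparameter values are arbitrary):

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Toy binary data: 6 samples of 4 binary visible units
X = np.array([[0, 0, 0, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 1, 0]])

# 2 hidden units
rbm = BernoulliRBM(n_components=2, learning_rate=0.05, n_iter=50, random_state=0)
rbm.fit(X)

# transform() returns the hidden-unit activation probabilities,
# usable as learned features for a downstream linear classifier
print(rbm.transform(X))
```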
