
Lecture 11: Using Python for Artificial Intelligence

CS5001 / CS5003: Intensive Foundations of Computer Science

PDF of this presentation

Lecture 11: Using Python for Artificial Intelligence

Today's topics:

Introduction to Artificial Intelligence
Introduction to Artificial Neural Networks
Examples of some basic neural networks
Using Python for Artificial Intelligence
Example: PyTorch

Lecture 11: Introduction to Artificial Intelligence

Video Introduction

1950: Alan Turing: Turing Test
1951: First AI program
1965: Eliza (first chatbot)
1974: First autonomous vehicle
1997: Deep Blue beats Garry Kasparov at chess
2004: First Autonomous Vehicle challenge
2011: IBM Watson beats Jeopardy winners
2016: DeepMind's AlphaGo beats Go champion Lee Sedol
2017: AlphaGo Zero beats the original AlphaGo

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

NNs learn the relationship between cause and effect, or organize large volumes of data into orderly and informative patterns.

Slides modified from PPT by Mohammed Shbier

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

A Neural Network is a biologically inspired information processing idea, modeled after our brain.

A neural network is a large number of highly interconnected processing elements (neurons) working together.

Like people, they learn from experience (by example).

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

Neural networks take their inspiration from neurobiology. This diagram shows a human neuron:

[Diagram of a biological neuron, not reproduced here]

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

A biological neuron has three main components: dendrites, the soma (or cell body), and the axon. Dendrites receive signals from other neurons.

The soma sums the incoming signals. When sufficient input is received, the cell fires; that is, it transmits a signal over its axon to other cells.

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

An artificial neural network (ANN) is an information processing system that has certain performance characteristics in common with biological nets.

Several key features of the processing elements of ANNs are suggested by the properties of biological neurons:

The processing element receives many signals.
Signals may be modified by a weight at the receiving synapse.
The processing element sums the weighted inputs.
Under appropriate circumstances (sufficient input), the neuron transmits a single output.
The output from a particular neuron may go to many other neurons.

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

From experience: examples / training data.

The strength of the connection between two neurons is stored as a weight value for that specific connection.

Learning the solution to a problem = changing the connection weights.

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

ANNs have been developed as generalizations of mathematical models of neural biology, based on the assumptions that:

Information processing occurs at many simple elements called neurons.
Signals are passed between neurons over connection links.
Each connection link has an associated weight, which, in a typical neural net, multiplies the signal transmitted.
Each neuron applies an activation function to its net input to determine its output signal.

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

Model of a neuron

[Two slides of neuron-model diagrams, not reproduced here]

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

A neural net consists of a large number of simple processing elements called neurons, units, cells, or nodes.

Each neuron is connected to other neurons by means of directed communication links, each with an associated weight.

The weights represent information being used by the net to solve a problem.

Each neuron has an internal state, called its activation or activity level, which is a function of the inputs it has received. Typically, a neuron sends its activation as a signal to several other neurons.

It is important to note that a neuron can send only one signal at a time, although that signal is broadcast to several other neurons.

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

Neural networks are configured for a specific application, such as pattern recognition or data classification, through a learning process.

In a biological system, learning involves adjustments to the synaptic connections between neurons.

This is the same for artificial neural networks (ANNs)!

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

A neuron receives input, determines the strength (the weight) of each input, calculates the total weighted input, and compares the total weighted input with a value (the threshold). The threshold value is in the range of 0 and 1.

If the total weighted input is greater than or equal to the threshold value, the neuron will produce an output; if the total weighted input is less than the threshold value, no output will be produced.
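As a concrete illustration (a minimal sketch, not from the original slides; the weights and threshold are made-up values), such a threshold neuron is only a few lines of Python:

def threshold_neuron(inputs, weights, threshold):
    # Compute the total weighted input.
    total = sum(w * x for w, x in zip(weights, inputs))
    # Fire (output 1) only if the weighted sum reaches the threshold.
    return 1 if total >= threshold else 0

# Two inputs with hypothetical weights 0.7 and 0.3, and threshold 0.5:
print(threshold_neuron([1, 0], [0.7, 0.3], 0.5))  # 1, since 0.7 >= 0.5
print(threshold_neuron([0, 1], [0.7, 0.3], 0.5))  # 0, since 0.3 < 0.5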

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

[Four slides of example network diagrams, not reproduced here]

Slides modified from Graham Kendall's Introduction to Artificial Intelligence

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

Let's model a slightly more complicated neural network:

If we touch something cold, we perceive heat.
If we keep touching something cold, we will perceive cold.
If we touch something hot, we will perceive heat.

We will assume that we can only change things on discrete time steps:

If cold is applied for one time step, then heat will be perceived.
If a cold stimulus is applied for two time steps, then cold will be perceived.
If heat is applied at a time step, then we should perceive heat.

Slides modified from Graham Kendall's Introduction to Artificial Intelligence

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

[Diagram of the network: inputs X1 and X2, hidden units Z1 and Z2, outputs Y1 and Y2; not reproduced here]

Slides modified from Graham Kendall's Introduction to Artificial Intelligence

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

It takes time for the stimulus (applied at X1 and X2) to make its way to Y1 and Y2, where we perceive either heat or cold:

At t(0), we apply a stimulus to X1 and X2.
At t(1), we can update Z1, Z2, and Y1.
At t(2), we can perceive a stimulus at Y2.
At t(2 + n), the network is fully functional.

Slides modified from Graham Kendall's Introduction to Artificial Intelligence

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

We want the system to perceive cold if a cold stimulus is applied for two time steps:

Y2(t) = X2(t – 2) AND X2(t – 1)

X2(t – 2)   X2(t – 1)   Y2(t)
    1           1         1
    1           0         0
    0           1         0
    0           0         0

Slides modified from Graham Kendall's Introduction to Artificial Intelligence

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

We want the system to perceive heat if either a hot stimulus is applied, or a cold stimulus is applied (for one time step) and then removed:

Y1(t) = [X1(t – 1)] OR [X2(t – 3) AND NOT X2(t – 2)]

X2(t – 3)   X2(t – 2)   AND NOT   X1(t – 1)   OR
    1           1          0          1        1
    1           0          1          1        1
    0           1          0          1        1
    0           0          0          1        1
    1           1          0          0        0
    1           0          1          0        1
    0           1          0          0        0
    0           0          0          0        0
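As a quick sanity check (not part of the original slides), the table above can be reproduced in a few lines of Python:

from itertools import product

# Evaluate Y1(t) = X1(t-1) OR [X2(t-3) AND NOT X2(t-2)] for all input combinations.
print("X2(t-3)  X2(t-2)  AND NOT  X1(t-1)  OR")
for x2_t3, x2_t2, x1_t1 in product([1, 0], repeat=3):
    and_not = int(x2_t3 and not x2_t2)
    y1 = int(x1_t1 or and_not)
    print(f"{x2_t3:^7}  {x2_t2:^7}  {and_not:^7}  {x1_t1:^7}  {y1:^2}")

The eight rows come out in a different order than on the slide, but the OR column matches.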

Lecture 11: Introduction to Artificial Neural Networks (ANNs)

The network shows:

Y1(t) = X1(t – 1) OR Z1(t – 1)
Z1(t – 1) = Z2(t – 2) AND NOT X2(t – 2)
Z2(t – 2) = X2(t – 3)

Substituting, we get:

Y1(t) = [X1(t – 1)] OR [X2(t – 3) AND NOT X2(t – 2)]

which is the same as our original requirements.
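To check the timing behaviour end to end, here is a minimal simulation sketch (not from the slides; the helper name `perceive` is made up, stimuli before t = 0 are assumed to be 0, and Y2 is computed from its defining formula on the earlier slide, since the network equations above only cover Y1):

def perceive(x1, x2):
    """Simulate the hot/cold network. x1 and x2 are lists of 0/1 stimuli
    (hot and cold receptors), one entry per time step. Returns the
    (Y1, Y2) = (heat, cold) perceptions at each time step."""
    def at(seq, t):
        # Treat time steps outside the stimulus window as "no stimulus".
        return seq[t] if 0 <= t < len(seq) else 0

    perceptions = []
    for t in range(len(x1) + 3):  # extra steps let signals propagate
        # Y1(t) = X1(t-1) OR [X2(t-3) AND NOT X2(t-2)]
        y1 = int(at(x1, t - 1) or (at(x2, t - 3) and not at(x2, t - 2)))
        # Y2(t) = X2(t-2) AND X2(t-1)
        y2 = int(at(x2, t - 2) and at(x2, t - 1))
        perceptions.append((y1, y2))
    return perceptions

print(perceive([1], [0]))        # hot for one step: heat (Y1) fires at t = 1
print(perceive([0], [1]))        # cold for one step: heat (Y1) fires at t = 3
print(perceive([0, 0], [1, 1]))  # cold for two steps: cold (Y2) fires at t = 2;
                                 # heat also fires at t = 4, once the cold is removed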

Lecture 11: Using Python for Artificial Intelligence

This is great... but how do you build a network that learns? We have to use input to predict output.

We can do this using a mathematical algorithm called backpropagation, which measures statistics from input values and output values.

Backpropagation uses a training set. We are going to use the following training set:

Example 1:       0  0  1  ->  0
Example 2:       1  1  1  ->  1
Example 3:       1  0  1  ->  1
Example 4:       0  1  1  ->  0
New situation:   1  0  0  ->  ?

Can you figure out what the question mark should be?

Example borrowed from: How to build a simple neural network in 9 lines of Python code

Lecture 11: Using Python for Artificial Intelligence

This is great... but how do you build a network that learns? We have to use input to predict output.

We can do this using a mathematical algorithm called backpropagation, which measures statistics from input values and output values.

Backpropagation uses a training set. We are going to use the following training set:

Example 1:       0  0  1  ->  0
Example 2:       1  1  1  ->  1
Example 3:       1  0  1  ->  1
Example 4:       0  1  1  ->  0
New situation:   1  0  0  ->  ?

Can you figure out what the question mark should be?

The output is always equal to the value of the leftmost input column. Therefore the answer is that the ‘?’ should be 1.

Example borrowed from: How to build a simple neural network in 9 lines of Python code

Lecture 11: Using Python for Artificial Intelligence

We start by giving each input a weight, which will be a positive or negative number. Large numbers (positive or negative) will have a large effect on the neuron's output.

We start by setting each weight to a random number, and then we train:

Take the inputs from a training set example, adjust them by the weights, and pass them through a special formula to calculate the neuron's output.
Calculate the error, which is the difference between the neuron's output and the desired output in the training set example.
Depending on the direction of the error, adjust the weights slightly.
Repeat this process 10,000 times.

Example borrowed from: How to build a simple neural network in 9 lines of Python code

Lecture 11: Using Python for Artificial Intelligence

Eventually the weights of the neuron will reach an optimum for the training set. If we allow the neuron to think about a new situation that follows the same pattern, it should make a good prediction.

Example borrowed from: How to build a simple neural network in 9 lines of Python code

Lecture 11: Using Python for Artificial Intelligence

What is this special formula that we're going to use to calculate the neuron's output?

First, we take the weighted sum of the neuron's inputs:

∑ weight_i × input_i = weight_1 × input_1 + weight_2 × input_2 + weight_3 × input_3

Next we normalize this, so the result is between 0 and 1. For this, we use a mathematically convenient function, called the Sigmoid function:

1 / (1 + e^(–x))

The Sigmoid function looks like this when plotted: notice the characteristic "S" shape, and that it is bounded by 1 and 0.

Example borrowed from: How to build a simple neural network in 9 lines of Python code
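A quick sketch (not from the slides) to see that bound numerically:

from math import exp

def sigmoid(x):
    # The Sigmoid function: maps any real number into the open interval (0, 1).
    return 1 / (1 + exp(-x))

for x in [-10, -2, 0, 2, 10]:
    print(x, round(sigmoid(x), 5))
# -10 -> 0.00005, 0 -> 0.5, 10 -> 0.99995: squashed between 0 and 1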

Lecture 11: Using Python for Artificial Intelligence

We can substitute the first function into the Sigmoid:

1 / (1 + e^(–∑ weight_i × input_i))

During the training, we have to adjust the weights. To calculate this, we use the Error Weighted Derivative formula:

error × input × SigmoidCurvedGradient(output)

What's going on with this formula?

We want to make an adjustment proportional to the size of the error.
We multiply by the input, which is either 1 or 0.
We multiply by the gradient (steepness) of the Sigmoid curve.

Example borrowed from: How to build a simple neural network in 9 lines of Python code

Lecture 11: Using Python for Artificial Intelligence

What's going on with this formula?

We want to make an adjustment proportional to the size of the error.
We multiply by the input, which is either 1 or 0.
We multiply by the gradient (steepness) of the Sigmoid curve.

Why the gradient of the Sigmoid?

We used the Sigmoid curve to calculate the output of the neuron.
If the output is a large positive or negative number, it signifies the neuron was quite confident one way or another.
From the diagram, we can see that at large numbers, the Sigmoid curve has a shallow gradient.
If the neuron is confident that the existing weight is correct, it doesn't want to adjust it very much. Multiplying by the Sigmoid curve gradient achieves this.

Example borrowed from: How to build a simple neural network in 9 lines of Python code

Lecture 11: Using Python for Artificial Intelligence

The gradient of the Sigmoid curve can be found by taking the derivative (remember calculus?):

SigmoidCurvedGradient(output) = output × (1 – output)

So by substituting the second equation into the first equation (from two slides ago), the final formula for adjusting the weights is:

error × input × output × (1 – output)

There are other, more advanced formulas, but this one is pretty simple.

Example borrowed from: How to build a simple neural network in 9 lines of Python code
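To make the update concrete, here is one hand-computed training step as a sketch (the single example and the starting weights are made-up values, not from the slides):

from math import exp

# One training example: inputs [1, 0, 1], desired output 1,
# with hypothetical starting weights.
inputs = [1, 0, 1]
weights = [-0.2, 0.4, 0.1]
desired = 1

weighted_sum = sum(w * x for w, x in zip(weights, inputs))  # -0.1
output = 1 / (1 + exp(-weighted_sum))                       # ~0.475
error = desired - output                                    # ~0.525

# Adjust each weight by: error x input x output x (1 - output).
adjustments = [error * x * output * (1 - output) for x in inputs]
weights = [w + a for w, a in zip(weights, adjustments)]
print(weights)  # weights 1 and 3 move up; weight 2 (input 0) is unchanged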

Lecture 11: Using Python for Artificial Intelligence

Finally, Python!

We will use the numpy module, which is a mathematics library for Python. We want to use four methods:

exp — the natural exponential
array — creates a matrix
dot — multiplies matrices
random — gives us random numbers

array() creates list-like arrays that are faster than regular lists. E.g., for the training set we saw earlier:

training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T

The ‘.T’ function transposes the matrix from horizontal to vertical, so the computer is storing the numbers like this:

inputs:          outputs:
[[0, 0, 1],      [[0],
 [1, 1, 1],       [1],
 [1, 0, 1],       [1],
 [0, 1, 1]]       [0]]

Example borrowed from: How to build a simple neural network in 9 lines of Python code
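A short demo of those four methods and of ‘.T’ (a sketch; assumes numpy is installed):

from numpy import exp, array, random, dot

a = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])  # 4 x 3 matrix
b = array([[0, 1, 1, 0]]).T                              # 1 x 4, transposed to 4 x 1
print(a.shape, b.shape)        # (4, 3) (4, 1)
print(exp(array([0.0, 1.0])))  # elementwise e**x: [1.  2.71828183]
random.seed(1)
print(random.random((3, 1)))   # 3 x 1 matrix of random numbers in [0, 1)
print(dot(a.T, b))             # matrix product: (3 x 4) . (4 x 1) = 3 x 1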

Lecture 11: Using Python for Artificial Intelligence

In 10 lines of Python code:

from numpy import exp, array, random, dot
training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T
random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1
for iteration in range(10000):
    output = 1 / (1 + exp(-(dot(training_set_inputs, synaptic_weights))))
    synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output)
                            * output * (1 - output))
print(1 / (1 + exp(-(dot(array([1, 0, 0]), synaptic_weights)))))

Example borrowed from: How to build a simple neural network in 9 lines of Python code
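If you run the ten-line version above (with numpy installed), the final print should show a value very close to 1 for the new situation [1, 0, 0], in line with the pattern that the output follows the leftmost input.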

The same network, written as a class:

from numpy import exp, array, random, dot


class NeuralNetwork():
    def __init__(self):
        # Seed the random number generator, so it generates the same numbers
        # every time the program runs.
        random.seed(1)

        # We model a single neuron, with 3 input connections and 1 output connection.
        # We assign random weights to a 3 x 1 matrix, with values in the range -1 to 1
        # and mean 0.
        self.synaptic_weights = 2 * random.random((3, 1)) - 1

    # The Sigmoid function, which describes an S shaped curve.
    # We pass the weighted sum of the inputs through this function to
    # normalise them between 0 and 1.
    def __sigmoid(self, x):
        return 1 / (1 + exp(-x))

    # The derivative of the Sigmoid function.
    # This is the gradient of the Sigmoid curve.
    # It indicates how confident we are about the existing weight.
    def __sigmoid_derivative(self, x):
        return x * (1 - x)

    # We train the neural network through a process of trial and error.
    # Adjusting the synaptic weights each time.
    def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
        for iteration in range(number_of_training_iterations):
            # Pass the training set through our neural network (a single neuron).
            output = self.think(training_set_inputs)

            # Calculate the error (the difference between the desired output
            # and the predicted output).
            error = training_set_outputs - output

            # Multiply the error by the input and again by the gradient of the Sigmoid curve.
            # This means less confident weights are adjusted more.
            # This means inputs, which are zero, do not cause changes to the weights.
            adjustment = dot(training_set_inputs.T, error * self.__sigmoid_derivative(output))

            # Adjust the weights.
            self.synaptic_weights += adjustment

    # The neural network thinks.
    def think(self, inputs):
        # Pass inputs through our neural network (our single neuron).
        return self.__sigmoid(dot(inputs, self.synaptic_weights))


if __name__ == "__main__":
    # Initialise a single neuron neural network.
    neural_network = NeuralNetwork()

    print("Random starting synaptic weights:")
    print(neural_network.synaptic_weights)

    # The training set. We have 4 examples, each consisting of 3 input values
    # and 1 output value.
    training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_set_outputs = array([[0, 1, 1, 0]]).T

    # Train the neural network using a training set.
    # Do it 10,000 times and make small adjustments each time.
    neural_network.train(training_set_inputs, training_set_outputs, 10000)

    print("New synaptic weights after training:")
    print(neural_network.synaptic_weights)

    # Test the neural network with a new situation.
    print("Considering new situation [1, 0, 0] -> ?:")
    print(neural_network.think(array([1, 0, 0])))
