
Deep Learning Reproducibility and Explainable AI (XAI)

Results of BSI's project research

Document history

Version  Date        Editor             Description
1.0      02.03.2022  Dr. Leventi-Peetz  Transferred from LaTeX

Federal Office for Information Security
Post Box 200363
D-53133 Bonn
Phone: +49 228 99 9582-0
E-Mail: anastasia-maria.leventi-peetz@bsi.bund.de
Internet: https://www.bsi.bund.de

© Federal Office for Information Security 2022


Abstract

The nondeterminism of Deep Learning (DL) training algorithms and its influence on the explainability of neural network (NN) models are investigated in this work with the help of image classification examples. To discuss the issue, two convolutional neural networks (CNN) have been trained and their results compared. The comparison serves the exploration of the feasibility of creating deterministic, robust DL models and deterministic explainable artificial intelligence (XAI) in practice. Successes and limitations of all efforts carried out here are described in detail. The source code of the attained deterministic models is listed in this work. Reproducibility is indexed as a development-phase component of the Model Governance Framework, proposed by the EU within their excellence in AI approach. Furthermore, reproducibility is a requirement for establishing causality in the interpretation of model results and for building trust towards the overwhelming expansion of AI systems applications. Problems that have to be solved on the way to reproducibility, and ways to deal with some of them, are examined in this work.

Table of Contents

Document history  2
Abstract  3
Introduction  6
   Reproducible ML models  6
   Factors hindering training reproducibility  6
   Organization and aim of this work  7
Grad-CAM NN-Explanations  9
   Network architectures and HW  9
   InceptionV3  10
   Soundness and stability of explanations  11
   Xception  13
   InceptionV3 vs. Xception  15
Self-trained Models  18
   Deterministic ConvNet  18
   Deterministic miniXception  21
Conclusions and future work  24
References  26


Introduction

Reproducible ML models

The reproducibility of ML models is a subject of debate, with many aspects under investigation by researchers and practitioners in the field of AI algorithms and their applications. Reproducibility refers to the ability to duplicate prior results using the same means as used in the original work, for example the same program code and raw data. However, ML experiences what is called a reproducibility crisis, and it is difficult to reproduce important ML results, some also described as key results [22, 21, 13, 29]. Experience reports describe many publications as not replicable, statistically insignificant, or suffering from narrative fallacy [5]. Especially Deep Reinforcement Learning has received a lot of attention, with many papers [5, 27, 25, 14] and blog posts [24] investigating the high variance of some results. Because it is difficult to decide which ML results are trustworthy and generalize to real-world problems, the importance of reproducibility is growing. A common problem concerning reproducibility arises when the code is not open-sourced. A review of 400 publications of two top AI conferences in recent years showed that only 6% of them shared the code used, one third shared the data on which the algorithms were tested, and half shared pseudocode [16, 23]. Initiatives like the 2019 ICLR reproducibility challenge [34] and the Reproducibility Challenge of NeurIPS 2019 [38, 35], which invite members of the AI community to reproduce papers accepted at the conference and report on their findings via the OpenReview platform (/group?id=NeurIPS.cc/2019/Reproducibility_Challenge), demonstrate an increasing intention to make machine learning trustworthy by making it computationally reproducible [19]. Reproducibility is important for many reasons: for instance, to quantify progress in ML, it has to be certain that noted model improvements originate from true innovation and are not the sheer product of uncontrolled randomness [5]. Also, from the development point of view, adaptations of models to changing requirements and platforms are hardly possible in the absence of baseline or reference code which works according to agreed-upon expectations; such code could get transparently extended or changed before being tested to meet new demands. For ML models it is the so-named inferential reproducibility which is important as a requirement: when the inference procedure is repeated, the results should be qualitatively similar to those of the original procedure [13]. However, training reproducibility is also a necessary step towards the formation of a systematic framework for an end-to-end comparison of the quality of ML models. To our knowledge such a framework does not yet exist, and it would be essential if criteria and guarantees regarding the quality of ML models are to be provided. Security and safety considerations are inevitably involved: for instance, when a model executes a pure classification exercise, deciding for example whether a test image shows a cat or a dog, it is not necessarily critical when the model's decision turns out to be wrong. If however the model is incorporated into a clinical decision-making system that helps make predictions about pathologic conditions on the basis of patients' data, or is part of an automated driving system (ADS) which actively decides whether a vehicle has to immediately stop or keep going, then the decision has to be verifiably correct and understandable at every stage of its formation. The increasing dependency on ML for decision making leads to an increasing concern that the integration of models which have not been fully understood can lead to unintended consequences [20].

Factors hindering training reproducibility

It is well known that when a model is trained again with the same data, it can produce different predictions [8, 7]. Among the reasons that make reproducibility difficult are: different problem formulations, missing compatibility between DNN architectures, missing appropriate benchmarks, different operating systems, different numerical libraries, system architectures or software environments like the Python version etc. Reproducibility as a basis for the generation of sound explanations and interpretations of model decisions is also essential in view of the immense computational effort and costs involved when applying or adapting algorithms, often without specific knowledge about the hardware, the parameter tuning and the energy consumption demanded for the training of a model, which in the end might lead to inconclusive results. Furthermore, it is also difficult to train models to expected accuracy even when the program code and the training data are available. Changes in TensorFlow, in GPU drivers, or even slight changes in the datasets can hurt accuracy in subtle ways [46, 45]. In addition, many ML models are trained on restricted datasets, for example those containing sensitive patient information, that cannot be made publicly available [1]. When privacy barriers are important considerations for data sharing, so-called replication processes have to be used to investigate the extent to which the original model generalizes to new contexts and new data populations, and to decide whether conclusions similar to those of the original model can be delivered. However, there also exist certain unique challenges which ML reproducibility poses. The training of ML models makes use of randomness, especially for DL, usually employing stochastic gradient descent, regularization techniques etc. [3]. Randomized procedures result in different final values for the model parameters every time the code is executed. One can set all possible random seeds; however, additional parameters, commonly named silent parameters, associated with modern deep learning have been found to also have a profound influence on both model performance and reproducibility. High-level frameworks like Keras are reported to hide low-level implementation details and come with implicit hyperparameter choices already made for the user. Also, hidden bugs in the source code can lead to different outcomes depending on linked libraries and different execution environments. Moreover, the cost to reproduce state-of-the-art deep learning models is often extremely high. In natural language processing (NLP), transformers require huge amounts of data and computational power and can have in excess of 100 billion trainable parameters. Large organizations produce models (like OpenAI's GPT-3) which can cost millions of dollars in computing power to train [1, 3, 12]. To find the transformer that achieves the best predictive performance for a given application, meta-learners test thousands of possible configurations. The cost to reproduce one of the many possible transformer models has been estimated to range from 1 million to 3.2 million USD with usage of publicly available cloud computing resources [39, 3]. This process is estimated to generate CO2 emissions amounting to five times the emissions of an average car over its entire lifetime on the road. The environmental implications attached to reproducibility endeavors of this range are definitely prohibitive [3]. As a possible solution to this problem, the option has been proposed to let expensive large models be produced only once, while adaptations of these models for special applications should be made transparent and reproducible with the use of more modest resources [3].
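To illustrate the seed-setting step discussed above, the following is a minimal sketch, assuming TensorFlow 2.8 or later; the helper name set_global_determinism is our own illustrative choice, not a library function. It pins the documented random seeds and requests deterministic op implementations, but it does not address the silent parameters mentioned above.

    import os
    import random

    import numpy as np
    import tensorflow as tf

    def set_global_determinism(seed: int = 42) -> None:
        # Pin the Python, NumPy and TensorFlow random number generators.
        os.environ["PYTHONHASHSEED"] = str(seed)
        random.seed(seed)
        np.random.seed(seed)
        tf.random.set_seed(seed)
        # Request deterministic op implementations; ops without a
        # deterministic variant will raise an error (TensorFlow >= 2.8).
        tf.config.experimental.enable_op_determinism()

    set_global_determinism(seed=42)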

Organization and aim of this work

The majority of methods for explainable AI are attribute-based; they highlight those data features (attributes) that contributed most to the model's prediction or decision. Convolutional neural networks (CNN, or ConvNet) are state-of-the-art architectures for which visual explanations can be produced, for example with the Gradient-weighted Class Activation Mapping method (Grad-CAM) [37, 11], which is also the method used in this work. In the second part of this work, Grad-CAM explanations for two pre-trained and established CNN models, which use TensorFlow, will be discussed with focus on the differences of their results when the same test data are given as input. It is well known that when different explainability methods are applied to a neural network, different results are to be expected. The fact that a single explainability method, when applied to two similar CNN architectures, can produce different results for the same test data has received less attention in the literature but is worth analyzing in the reproducibility context. In the third part, our own implementation, training and results of two relatively simple CNN models are discussed. Differences of the Grad-CAM explanations for identical images classified with these two networks are analyzed, with special focus on the influence of the computing infrastructure on the model execution. The efforts to render these two models deterministic are described in detail in Section 3, again with special focus on the influence of the computing infrastructure on the results. Successes and limitations are noted, and the partly achieved deterministic code is listed. It is worth mentioning that different behaviors across versions of TensorFlow, as well as across different computational frameworks, are documented to be normally expected. TensorFlow warns that floating point values computed by ops may change at any time, and that users should rely only on approximate accuracy and numerical stability, not on the specific bits computed. No experience reports could be found as to how a change of specific bits could influence ML results, for instance in the worst case by altering the network's classification or its explanation, or both. According to TensorFlow, changes to numerical formulas in minor and patch releases should result in comparable or improved accuracy of specific formulas, with the caution that this might decrease the accuracy for the overall system. Also, models implemented in one version of TensorFlow cannot necessarily run with subsequent subversions and versions of TensorFlow. Therefore, published code which was once proved to work may become unusable within a short time after its creation. Running more than one subversion on the same system, when using graphics HW support, was not possible. This work aims at drawing attention to the challenges that adhere to creating reproducible training processes in Deep Learning and demonstrates practical steps towards reproducibility, discussing their present limitations. In Section 4, conclusions of this work and views towards future investigations in the same direction are presented in a summary. It has to be noted that the impact of what is called underspecification, whereby the same training process produces multiple machine-learning models which demonstrate differences in their performance, is out of scope of this work [18].


Grad-CAM NN-Explanations

Network architectures and HW

Convolutional neural networks, originally developed for the analysis and classification of objects in digital images, represent the core of most state-of-the-art computer vision solutions for a wide variety of tasks [41]. A brief but comprehensive history of CNN can be found in many sources, for example in [9]; the tendency has always been towards making CNN increasingly deeper. Developments of the last years have led to the Inception architecture, which incorporates the so-called Inception modules, which already exist in several different versions. A new architecture, which instead of stacks of simple convolutional networks contains stacks of convolutions itself, was proposed by François Chollet with his Extreme Inception or Xception model. Xception was proved to be capable of learning richer representations with fewer parameters [9]. Chollet delivered the Xception improvements to the Inception family of NN architectures by entirely replacing Inception modules with depthwise separable convolutions. Xception also uses residual connections, placed in all flows of the network [9, 17]. The role of residuals was observed to be especially important for the convergence of the network [44]; however, Chollet moderates this importance, because non-residual models have been benchmarked with the same optimization configuration as the residual ones, which leaves open the possibility that another configuration might have proved the non-residual version better [9]. Finally, the building of the improved Xception models was made possible because an efficient depthwise convolution implementation became available in TensorFlow. The Xception architecture has a similar number of parameters as InceptionV3. Its performance however has been found to be better than that of Inception, according to tests on two large-scale image classification tasks [9]. For the practical tests in this work, InceptionV3 and Xception have been chosen for result comparisons. The two networks are pretrained on a trimmed list of the ImageNet dataset, so as to be able to recognize one thousand non-overlapping object classes [9].
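To make the architectural idea concrete, the following minimal sketch, assuming TensorFlow/Keras, contrasts a standard convolution with the depthwise separable convolution that Xception stacks; the layer widths are illustrative only, not Xception's actual configuration.

    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = tf.keras.Input(shape=(299, 299, 3))

    # Standard convolution: one joint spatial-and-cross-channel filter bank.
    x_std = layers.Conv2D(64, kernel_size=3, padding="same",
                          activation="relu")(inputs)

    # Depthwise separable convolution (the Xception building block):
    # a per-channel spatial filter followed by a 1x1 pointwise projection,
    # giving far fewer parameters for a comparable receptive field.
    x_sep = layers.SeparableConv2D(64, kernel_size=3, padding="same",
                                   activation="relu")(inputs)

    # Comparing the summary shows 1792 vs. 283 parameters for these layers.
    tf.keras.Model(inputs, [x_std, x_sep]).summary()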

InceptionV3

The exact description of the network, its parameters and performance is given in the work of Christian Szegedy [42]. The description of the training infrastructure refers to a system of 50 replicas (probably identical systems), each running on an NVidia Kepler GPU, with batch size 32, for 100 epochs. The duration of each epoch is not given.

Xception

Chollet used 60 NVIDIA K80 GPUs for the training, which took 3 days. The number of epochs is not given. The network and technical details about the training are listed in the original work [9].

Xception has a similar number of parameters (ca. 23 million) as InceptionV3 (ca. 24 million). The HW execution environments employed for the experiments described here are the following:

HW-1: GPU: NVIDIA TITAN RTX: 24 GB (GDDR6), 576 NVIDIA Turing mixed-precision Tensor Cores, 4608 CUDA Cores.

HW-2: CPU: AMD EPYC 7502P 32-Core, SMT, 2 GHz (T: 2.55 GHz), RAM 128 GB.

HW-3: GPU: NVIDIA GeForce RTX 2060: 6 GB (GDDR6), 240 NVIDIA Turing mixed-precision Tensor Cores, 1920 CUDA Cores.

HW-4: CPU: AMD Ryzen Threadripper 3970X 32-Core, SMT, 3.7 GHz (T: 4.5 GHz), RAM 256 GB.

HW-5: CPU: AMD Ryzen 7 5800X 8-Core, SMT, 3.8 GHz (T: 4.7 GHz), RAM 64 GB.

Each of the pretrained models is verified to deliver the same results for all CPU or GPU execution environments considered here. The classifications and the corresponding network explanations are deterministic when performed under laboratory conditions, as expected. Plausibility and stability issues of the explanations will be mentioned alongside the tests.
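A verification of this kind can be scripted in a few lines. The following is a minimal sketch, assuming TensorFlow/Keras with the bundled ImageNet weights; the image file name is a placeholder for one of the test images:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications.inception_v3 import (
        InceptionV3, decode_predictions, preprocess_input)

    # Load the pretrained network with its ImageNet weights.
    model = InceptionV3(weights="imagenet")

    # Load and preprocess a test image ("chow-cat.jpg" is a placeholder).
    img = tf.keras.utils.load_img("chow-cat.jpg", target_size=(299, 299))
    batch = preprocess_input(
        np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

    # Print the top-5 predictions; repeating this on the different CPU/GPU
    # hosts and comparing the scores is the verification described above.
    for _, label, score in decode_predictions(model.predict(batch), top=5)[0]:
        print(f"{label}: {score:.8f}")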


InceptionV3

In this part, examples of predictions calculated with the InceptionV3 network are discussed. In Fig. 1 (a) and (b) respectively, activation heatmaps are depicted which have been produced to identify those regions of the image chow-cat that correspond to the dog (“chow”) and the cat (“tabby”) respectively. Identical respective accuracies have been calculated for each classification independent of the employed HW, as was verified by the tests performed with all HW environments listed at the end of Section 2.1. The “chow” has been predicted with 30% probability and stands in first place on the top-predictions list, while the cat takes third place with a probability of 2.4%.

Figure 1: chow-cat: Grad-CAM explanations of InceptionV3 for the identification of the dog “chow” (a), in first place on the top-predictions list, and the cat “tabby” (b), in third place on the top-predictions list. Second place is occupied by a “Labrador dog”.
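The heatmaps shown here follow the usual Grad-CAM recipe: the gradient of the class score with respect to the last convolutional feature maps is pooled into one weight per channel, and the weighted maps are summed and rectified. A minimal sketch, assuming TensorFlow/Keras; "mixed10" is the name of InceptionV3's last convolutional block output in Keras and would have to be adapted for other networks:

    import tensorflow as tf

    def grad_cam(model, batch, class_index, conv_layer_name="mixed10"):
        # Map the input both to the last conv activations and the predictions.
        grad_model = tf.keras.Model(
            model.inputs,
            [model.get_layer(conv_layer_name).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(batch)
            class_score = preds[:, class_index]
        # Gradient of the class score w.r.t. the conv feature maps,
        # pooled into one weight per feature-map channel.
        grads = tape.gradient(class_score, conv_out)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))
        # Weighted sum of the maps, rectified and normalized to [0, 1].
        cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

The returned map is then resized to the input resolution and overlaid on the image to produce heatmaps like those in Fig. 1.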

In Fig. 2, heatmaps produced by the identification of the “cocker spaniel” (a), the “toy poodle” (b), and the “Persian cat” (c) respectively are demonstrated for the image spaniel-kitty.

Figure 2: spaniel-kitty: Grad-CAM explanation of InceptionV3 for the identification of the “cocker spaniel” (a), the “toy poodle” (b) and the “Persian cat” (c), see Table 1.

    Class               HW-2          HW-1
1   cocker spaniel      0.56762594    0.56761914
2   toy poodle          0.08013367    0.08014054
3   clumber             0.02106595    0.02107035
4   Dandie Dinmont      0.01964365    0.01964012
5   Pekinese            0.01867950    0.01868443
6   miniature poodle    0.01846011    0.01846663
7   Blenheim spaniel    0.01425239    0.01424699
8   Maltese dog         0.01124849    0.01124578
9   Chihuahua           0.01103328    0.01103479
10  Norwich terrier     0.00741338    0.00741514
11  Sussex spaniel      0.00703137    0.00703068
12  Yorkshire terrier   0.00689254    0.00689154
13  Norfolk terrier     0.00662250    0.00662296
14  Lhasa               0.00609926    0.00609862
15  Pomeranian          0.00608485    0.00608792
16  Persian cat         0.00489533    0.00489470
17  golden retriever    0.00428663    0.00428840

Table 1: InceptionV3: Classification probabilities for the image spaniel-kitty, see Fig. 2.

In Table 1 the scores of the first 17 classes on the top-predictions list are given, as calculated in two HW executions (HW-1, HW-2). The prediction scores are almost identical, as is obvious by comparing the columns in Table 1; in the few cases where slight differences exist in the probability values, these differences appear only after the fourth decimal place. The “cocker spaniel” is the top prediction and actually represents the correct classification of the dog race, predicted with a probability of almost 57%, while the “Persian cat” in place 16 of the list, which is also a correct prediction, has a probability of approximately 0.5%. The “toy poodle” with 8.0% probability stands in second place on the list, while the rest of the list places, down to place sixteen of the “Persian cat”, are all occupied by dog races (see Table 1).
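The column comparison can be automated; a minimal sketch with NumPy, using the HW-2 and HW-1 scores copied from Table 1:

    import numpy as np

    # Scores of the 17 classes from Table 1 (columns HW-2 and HW-1).
    hw2 = np.array([0.56762594, 0.08013367, 0.02106595, 0.01964365,
                    0.01867950, 0.01846011, 0.01425239, 0.01124849,
                    0.01103328, 0.00741338, 0.00703137, 0.00689254,
                    0.00662250, 0.00609926, 0.00608485, 0.00489533,
                    0.00428663])
    hw1 = np.array([0.56761914, 0.08014054, 0.02107035, 0.01964012,
                    0.01868443, 0.01846663, 0.01424699, 0.01124578,
                    0.01103479, 0.00741514, 0.00703068, 0.00689154,
                    0.00662296, 0.00609862, 0.00608792, 0.00489470,
                    0.00428840])

    # The largest deviation lies below 1e-5, i.e. after the fourth decimal.
    print(np.max(np.abs(hw1 - hw2)))          # ~6.9e-06
    print(np.allclose(hw1, hw2, atol=1e-4))   # True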

Soundness and stability of explanations

A careful observation of the delivered network explanations shows that they are partly arbitrary and hardly intuitive, independently of a wrong or right class prediction. For example, the network reasoning behind the “toy poodle” classification in Fig. 2 (b), which is wrong as far as the race of the dog is concerned, but right as far as the animal category identified (a dog), cannot be considered sound. The main reason is that the most activated, and therefore the most relevant region for the target identification (marked red), points to a part of the image that lies in empty space, beyond the contour of the target. The marked red region lies close to what one could describe as a generic feature, the paws, which is common to a variety of animals. A too generic feature offers little confidence in being a good explanation, even if one assumes that it is only the accuracy of the feature's localization in the image that fails. Besides, the algorithm could have focused on the vicinity of the paws for reasons not directly associated with the recognition of the “poodle”. Observing that the explanation for the identification of the “Persian cat”, see Fig. 2 (c), highlights the same paws makes the unambiguity or definiteness of the explanations questionable. Also important is the investigation of the stability and consistency of the network's explanations, as they relate to the reproducibility of the network too. For example, it would be expected that a network which concentrated on the dog's head to explain the first place of the top-predictions list, the “cocker spaniel” in Fig. 2 (a), would probably also pick the head to identify the second most probable classification on the list, the “toy poodle” seen in Fig. 2 (b). This is however not the case, which makes the consistency behind the logic of the explanations doubtful. Obviously, the cat's head also receives hardly any attention in the explanation of the recognition of the cat in Fig. 2 (c). It is not possible to identify a definite strategy which the network consistently employs in order to explain classifications, in this case of animals. For further investigations, a small part of the image spaniel-kitty, namely the part containing the paws, has been removed from the image, and the top-predictions list has been calculated again. With the new test image spaniel-kitty-paws-cut as input, the “cocker spaniel” keeps first place on the top-predictions list, see Table 2; however, the “Persian cat” now climbs from place 16 to place 2 with a classification probability rising from 0.5% to 3%, while the “toy poodle” falls down to place 4 of the list.

    Class               HW-2          HW-1
1   cocker spaniel      0.43387938    0.43393657
2   Persian cat         0.03001592    0.03000891
3   Pekinese            0.02654952    0.02654130
4   toy poodle          0.01810920    0.01810851
5   Dandie Dinmont      0.01457902    0.01457707
6   Sussex spaniel      0.01415453    0.01415372
7   golden retriever    0.01363987    0.01363916
8   miniature poodle    0.01088122    0.01088199

Table 2: InceptionV3: Classification probabilities for the image spaniel-kitty-paws-cut.

In Table 2, the newly top-predicted classes and their new scores are displayed. There are no great changes in the explanation concerning the “cocker spaniel” for the modified image, the head being the part highlighted again. However, the visual explanations for the identification of the “toy poodle” and the “cat” have changed considerably, as can be seen in Fig. 3.

Figure 3: spaniel-kitty-paws-cut: Grad-CAM explanation of InceptionV3 for the identification of the “cocker spaniel” (a), the “toy poodle” (b) and the “Persian cat” (c), when the paws are removed from the image (compare results of Fig. 2).

The “toy poodle” is now overlaid by a double heat spot, a minor one at the end of the cat's body and the main one to the right of the cat's head, both lying outside the contour of the recognized “poodle”, see Fig. 3 (b). Although in this case the classification is correct, the explanation doesn't make sense at all, because the activation region lies entirely outside the target (“toy poodle”). One could argue that at least the explanation for the “Persian cat” in Fig. 3 (c) has been improved in comparison to the unchanged image. The hot activation region now approaches the cat's head instead of the paws, which is more characteristic of the target. However, a considerable part of the class activation mapping (marked red) still lies beyond the contour of the cat, and therefore at least the position of the recognized target can be described as not accurate or even wrong. InceptionV3 delivers identical results with respect to changing execution environments; therefore the explanations and classifications of the network are proved to be deterministic under laboratory conditions, that is, when no intentional or unintentional perturbations are inserted into the test data.
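The paws-cut experiment can be reproduced with simple array masking before preprocessing. A minimal sketch, assuming TensorFlow/Keras; the file name and the region coordinates are placeholders, not the values used in the reported experiment:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications.inception_v3 import (
        InceptionV3, decode_predictions, preprocess_input)

    model = InceptionV3(weights="imagenet")

    img = tf.keras.utils.load_img("spaniel-kitty.jpg", target_size=(299, 299))
    arr = tf.keras.utils.img_to_array(img)

    # Blank out the rectangle containing the paws with white pixels;
    # the coordinates below are placeholders for the actual region.
    top, bottom, left, right = 200, 299, 60, 180
    arr[top:bottom, left:right, :] = 255.0

    # Recompute the top predictions for the modified image.
    preds = model.predict(preprocess_input(np.expand_dims(arr, axis=0)))
    for _, label, score in decode_predictions(preds, top=4)[0]:
        print(f"{label}: {score:.8f}")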

Xception

In analogy to 2.2, object detections and their explanations calculated with the Xception network are discussed here. In Fig. 4 (a) and (b) the activation heatmaps are presented which the network produced for the identification of the image regions that correspond to the “dog” (“chow”) and the “cat” respectively (here identified as “Egyptian cat”, whereas InceptionV3 identified the cat as a “tabby cat”, compare Fig. 1).

(a) “chow”  (b) “Egyptian_cat”

Figure 4: chow-cat: Grad-CAM explanations of Xception for the identification of the “chow” (a), in first place on the top-predictions list, and the “Egyptian cat” (b), in second place. Third on the list is the “tiger cat” and fourth the “tabby cat”. For a comparison, the order of explanations generated by InceptionV3 is given in the caption of Fig. 1.

In Fig. 5, the activation maps corresponding to the identification of the “cocker spaniel”, the “French bulldog”, the “toy poodle” and the “Persian cat” respectively are demonstrated. Similarly to the InceptionV3 case described in the previous section, all prediction scores are almost identical between all HW environment executions.

