
CONFIDENCE IN AI

A Playbook by Capgemini Generative AI Lab

2024

SUCCESSFUL, CONFIDENT ADOPTION OF AI RELIES NOT JUST ON CREATING AI THAT WORKS, BUT ON CREATING AI THAT WORKS RELIABLY, AI THAT'S ALIGNED TO HUMAN EXPECTATIONS, AND AI THAT WORKS IN PEOPLE'S BEST INTERESTS.


TABLE OF CONTENTS


AI THAT WORKS
Proven Accuracy 08

AI THAT WORKS RELIABLY
Robustness 10
Dependability 12
Stability 14

AI THAT'S ALIGNED TO HUMAN EXPECTATIONS
Sensibility 16
Humility 18
Fails Gracefully / Extrapolates Sensibly 20
Explainability 22

AI THAT WORKS IN PEOPLE'S BEST INTERESTS
Fairness 24
Sustainability 26
Privacy 28

MARK ROBERTS
Deputy Head, Generative AI Lab
Editor in chief

ROBERT ENGELS
Head, Generative AI Lab
Editor in chief

AI: Being good was the easy bit. Now we need to be useful

Artificial intelligence (AI) is suddenly everywhere. Powerful content-generation services that might have been viewed as being from the realm of science fiction just 12 months ago are now a big part of the conversation, from the boardroom to the school playground.

One huge factor in this upswing in interest is the rise of Generative AI. During the past 12 months, the emergence of high-profile Generative AI services has pushed AI to the front pages. Where AI was once perceived as a niche area of technology, it's now being used by all kinds of people for all kinds of uses, whether it's asking questions, writing text, or generating photos and code.

However, don't confuse the rapid rise of Generative AI with a revolution. While an effective user interface like ChatGPT democratizes access to powerful large language models, the move towards AI-powered services was happening anyway. Today's interest in Generative AI is simply the visible manifestation of a behind-the-scenes evolution that's been many years in the making.

What's more, those decades of experience give us a proven insight into the critical success factors that must be considered if we are to turn an interest in all-things AI into something that has genuine commercial value.

Understanding the scale of investment

As well as the high-profile generative services that dominate the news agenda, there's a diverse array of other AI products and services being announced, launched and marketed every day. Research firm IDC reports that global spending on AI, including software, hardware, and services, will reach $154 billion in 2023, an increase of 26.9% on the amount spent during 2022.

The tech analyst says the continued investment in AI will mean spending surpasses $300 billion in 2026. This cash is already funding a broad range of proof-of-concept projects. Whether they're using AI to improve customer services, solve hard science and engineering problems or identify fraudulent transactions, companies are investing billions of dollars in relatively new technology to try and gain competitive advantage over their rivals.

From the outside looking in, this investment in AI looks like a great success story. The funding will create products and services that help shape the future of technology and business. Yet there's a downside, too: like all new technology waves, not all of these investments will pay off. We see this effect across Capgemini's broad customer base. Many AI projects, even ones that are apparently successful, do not escape the proof-of-concept stage. Various surveys in recent years put the failure rate of AI projects as high as 80%. What emerges is a contradiction: while many organizations believe a big investment in AI will be commercially positive, large numbers of these projects are not necessarily paying off. So, how can we reconcile these two very different views and create commercially useful AI initiatives?


Changing how we measure success

The key challenge we need to overcome is that we're all measuring the success of AI projects in the wrong way. Whether it's the people who are using AI, the specialists developing tools, or the media, analysts and investors, we're all locked into a collective delusion that accuracy is the only thing that matters.

Success is too often measured in terms of having high accuracy on narrow benchmark tests, or being impressive or entertaining, while other crucial success factors are ignored because they're not well understood, exciting or headline-grabbing.

When an AI system does something correctly, whether that's a simple classification performed by a traditional machine-learning system, or a Generative AI tool answering a question correctly, we attach a lot of significance to this accuracy. In fact, we often base our entire opinion of the system on this single measure of accuracy.

Accuracy is so revered that every day we see breathless headlines declaring that new systems have achieved high levels of accuracy on a particular problem. Figures of "90% accurate" or 99% or 99.9% are thrown around, and the more 9s the better, such is the obsession with high levels of accuracy. To experts in the field, however, this obsession with accuracy is both naïve and unhelpful, as it draws attention away from the factors that really matter for long-term success. In the majority of real-world deployments, how badly an AI system fails is far more important than how often it succeeds. In reality, an AI system that's 99.99% accurate could be deemed a complete failure if the 0.01% of failures are catastrophic.
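As a back-of-the-envelope illustration (the volumes and costs below are invented for this sketch, not figures from the playbook), even "four nines" accuracy leaves a meaningful absolute number of failures at realistic scale, and it is the cost of those failures that decides whether a system is acceptable:

```python
# Illustrative arithmetic only; all numbers are invented for the example.
daily_decisions = 1_000_000              # e.g. automated transaction checks per day
accuracy = 0.9999                        # "four nines"

failures_per_day = round(daily_decisions * (1 - accuracy))
print(failures_per_day)                  # 100 failures every single day

# If even a small share of those failures is catastrophic, headline accuracy
# says nothing about whether the system is acceptable.
catastrophic_share = 0.05                # invented figure
cost_per_catastrophe = 50_000            # invented figure
print(failures_per_day * catastrophic_share * cost_per_catastrophe)   # 250,000 per day
```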

Accuracy is not the only important factor, and it's certainly not the main cause of most AI project failures. The commercial success of an AI project is dependent on a complex combination of factors, which are too often ignored or relegated to secondary concerns.

However, these supposedly secondary concerns are actually critical to success. These factors are just as important as accuracy, maybe more so, because they are often the root cause behind problematic behavior and failed AI investments. These success factors, which are outlined here, must be considered during the development and implementation of any AI system, as they will instill confidence among the system's users and in the leaders who are driving and paying for it:

AI That Works
• Proven Accuracy – Is good at solving the problem, as measured by benchmark tests.

AI That Works Reliably
• Robustness – Handles unusual or malicious inputs effectively.
• Dependability – Always produces an output within the required timeframe.
• Stability – Performance is consistent and does not drift over time.

AI That's Aligned to Human Expectations
• Sensibility – Makes decisions in line with how the world or society works.
• Humility – Understands its own limitations, and refuses to answer questions where it doesn't know the answer.
• Extrapolates Sensibly / Fails Gracefully – Acts sensibly when confronted with scenarios beyond those in which it was trained, and fails safely.
• Explainability – Can justify how it solved the problem rather than working as a mysterious black box.

AI That Works in People's Best Interests
• Fairness – Non-biased. Is equally fair to all sub-groups.
• Sustainability – Minimizes harmful impacts from training and ongoing use.
• Privacy – Protects the sensitive data that it was trained on.

Conclusion: Making AI useful for everyone

We see now that actually solving a task accurately is just one of 12 equally important factors that help everyone to feel much more confident about the AI products and services they use.

We shouldn't make the mistake of thinking elements like humility, sustainability and reliability are the boring secondary elements of an AI endeavor. While focusing on these factors won't create the excitement that comes from an AI-generated image or essay, it will ensure the outputs your business creates are trusted and useful. And once that happens, over time, the chances of failure will reduce, the levels of adoption will increase, and the likelihood of commercial success will be raised significantly.

As AI plays an increasingly important role in our lives, people must feel confident in the solutions they use. Ensuring these 12 factors are always considered will mean your business delivers significant commercial value from AI. In this playbook, we will discuss each of these 12 factors in more detail.


[Diagram: Confidence / trust in an AI solution. On one side are the things people normally focus on in AI; on the other, the things we now recognize are crucial to make AI successful, grouped into four themes:
AI that works – Proven Accuracy: is it good at solving the problem, as measured by tests?
AI that works reliably – Robustness: will handle unusual or malicious inputs well. Dependability: will always produce an output, in the required timeframe. Stability: performance will not unknowingly drift over time.
AI that's aligned with human expectations – Sensibility: makes decisions in line with how the world/nature/physics/culture works. Humility: refusing to answer, or at least reporting when it doesn't know something. Extrapolates Sensibly: will do something sensible when confronted with unseen data beyond the bounds of what it was trained on. Fails Gracefully: if it fails, will it fail in a safe and sensible way? Explainability: can it explain/justify how it solved the problem?
AI that works in people's best interests – Fairness: output is not biased against any sub-groups. Sustainability: impact of training and ongoing use is not harmful. Privacy: will not leak sensitive data it was trained on.]


PROVEN ACCURACY

TIJANA NIKOLIĆ, EXPERT IN RESIDENCE

When do we get to say that AI is good enough? What does "good" even mean?

Generative AI has thrust AI into the spotlight in sectors from creative arts to data analysis, and customer service to engineering. However, this rapid rise has brought to prominence a long-standing question in AI: What does it mean for AI to be "good"? Traditionally, the performance of machine learning models has been assessed only through narrow measures of test and validation scores. However, the new focus on Generative AI, with its creativity and hallucinations, has forced us to reconsider what accuracy really means, or whether accuracy is even relevant in this new world. Simplistic measures of accuracy are no longer good enough for us to base decisions on, as the different accuracy measures we use can dramatically influence how we interpret their outputs.

It is imperative to also consider real-world dimensions. A model might perform exceptionally in tests but fail profoundly when applied to real-world scenarios. This discrepancy highlights the importance of a comprehensive definition of goodness, one that incorporates various facets such as ethical implications, social impact, and alignment with human values.


WHY?

• Anyone who is involved in decision making around AI needs to understand its performance. This is true both of the users of a system, and of the people designing, building and funding it.
• This need to understand performance makes it highly desirable to create a single, easily digestible number, accuracy, which represents that performance profile.
• However, in almost all cases, no single number can tell you the whole story of how a machine learning system performs, so we often need to use multiple metrics to describe the performance profile.
• Even if we could capture how "good" a model is in a single number, that is not enough, as "good" is a subjective term.
• Understanding the multifaceted essence of what success in AI looks like is pivotal, due to the potential consequences of focusing too much on any one facet.
• In some cases, focusing on the wrong type of accuracy can cause real-world harm. For example, a study of breast-cancer screening in the UK showed that a naïve focus on the wrong sort of accuracy led to over-diagnoses and many women unnecessarily undergoing painful and stressful treatments.

WHAT?

• Consider a simple measure of accuracy for an AI computer vision system classifying 100 objects, either apples or oranges. We could calculate the accuracy of that system by just measuring the percentage of classifications that are correct.
• However, this percentage would only be a useful measure if there were exactly the same number of items in both classes. If, however, there were more apples than oranges, a simple percentage accuracy figure would not accurately reflect the performance of the classifier. In an extreme case, if there were 99 apples and one orange, and the classifier always said "apple", its naïve accuracy would be 99%, even though it had no ability to detect the difference between the classes.
• For this reason, more complex statistical measures are often used, such as precision & recall, or sensitivity & specificity. These measures describe different facets of accuracy, showing how well a system performs in both its positive and negative predictions, repeatably over multiple uses (see the sketch after this list).
• However, even using these more sophisticated measures such as accuracy, precision and recall does not mean your model's real-world success is guaranteed.
• In fact, as we will show in this Playbook, accuracy on benchmark tests is only one of many equally important facets of success that must be considered in order to not just be successful on paper, but to have genuine real-world success with users who are confident in that system.
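The apples-and-oranges imbalance above can be reproduced in a few lines. The following is a minimal sketch (not taken from the playbook) using scikit-learn's standard metrics to show how a classifier that always answers "apple" scores 99% naïve accuracy yet has 0% recall for the minority class:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 99 apples (label 0) and 1 orange (label 1), as in the example above.
y_true = [0] * 99 + [1]
# A useless classifier that always predicts "apple".
y_pred = [0] * 100

print("naive accuracy:  ", accuracy_score(y_true, y_pred))                                 # 0.99
print("orange recall:   ", recall_score(y_true, y_pred, pos_label=1))                      # 0.0
print("orange precision:", precision_score(y_true, y_pred, pos_label=1, zero_division=0))  # 0.0
```

Reporting precision and recall per class, rather than a single headline accuracy number, is what exposes this kind of imbalance to users and decision makers.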

RECOMMENDATIONS

• First, ensure you are measuring and communicating accuracy effectively. It is extremely unlikely that accuracy can be represented by a single number, so use more appropriate measures to set users' expectations about the performance profile of a system.
• Don't use simplistic measures of accuracy as the sole criteria for declaring success in an AI system.
• Educate everyone in the business about how to talk about accuracy in AI systems. Strive for a culture where everyone, right up to the boardroom, is comfortable asking questions about sensitivity and specificity, precision and recall, etc.
• Beyond accuracy, a holistic approach is necessary. Organizations must embrace transparency, ethics, and fairness in their AI endeavors. Consider using a playbook, like this one, to remind everyone involved in AI systems design to think about the multiple facets that lead to successful AI, not just accuracy alone.
• One of the primary pitfalls is a myopic focus on technical metrics. Ignoring biases in training data, overlooking ethical implications, or neglecting community feedback can lead to catastrophic outcomes. Contextual fit, for instance, cannot be measured easily, but it is the final defining factor for "goodness".

LINKS

• Validating Large Language Models with ReLM. Kuchnik et al., Carnegie Mellon University, 2023. arXiv:2211.15458.
• Langchain blog post: "How Correct are LLM Evaluators", problematizing the possibilities to facilitate measurement of "proven accuracy". https://blog.langchain.dev/how-correct-are-llm-evaluators/
• GDELT project on prompting, writing styles and quality of answering. The GDELT Project is a realtime network diagram and database of global human society for open research: /large-language-models-llms-planetary-scale-realtime-data-current-limitations/


ROBUSTNESS

MITALI AGRAWAL, EXPERT IN RESIDENCE

Will an AI system always respond to similar inputs in a consistent manner? Can it cope with deliberate malicious attacks in the input? All of these questions relate to the idea of robustness: a measure of how well an AI system behaves when the signals it receives are not the same as what it was trained on.

Robustness is a cornerstone of reliable AI systems, ensuring resilience in the face of adversity. In the dynamic landscape of artificial intelligence, two paramount challenges arise: dealing with the huge variation of inputs a system will encounter in the real world in a consistent manner, and defending against deliberately malicious inputs, often manifested as adversarial attacks.

Understanding and fortifying AI against these challenges is essential in shaping a future where AI technologies can be trusted and relied upon.

WHY?

• In an era where AI is increasingly prevalent in our daily lives, robustness is a fundamental pillar of trustworthiness.
• Due to their complexity, though, AI systems are susceptible to various vulnerabilities, both in the algorithms and the data they are trained on.
• A simple way to demonstrate whether an AI system is robust or not is to ask it to perform a similar task twice. Providing significantly different answers to the same question will cause humans to rapidly lose trust in the system, but many AI systems will fail this simple test.
• There will always be confusing inputs in the real world, and unfortunately there will always be malicious actors who try to deliberately affect the outputs of our AI systems. Even in the best cases, with no malicious actor, we will still forever be locked in an arms race between our machine learning models and the infinite complexity that the real world will throw at them.
• Therefore, there will always be a need to use approaches to maximize the robustness of our AI models, and in some cases we require verifiable proof of that robustness.
• By addressing the complexities of unusual data and adversarial attacks, we pave the way for AI systems that not only excel under ideal circumstances but are resilient in the face of unexpected inputs and deliberate attacks.

WHAT?

• Whilst machine learning experts have long known about the problems of robustness, Generative AI tools now allow everyone to see the extent of this problem: even small changes in the phrasing of a prompt can produce completely different outputs and meanings.
• Deliberately malicious inputs, known as adversarial attacks, exploit the vulnerabilities of AI systems, leading them to make erroneous judgments. These attacks can have dire consequences, especially in safety-related applications such as autonomous vehicles or healthcare systems, so making robust defenses is imperative.
• Adversarial attacks fall into two main classes:
  - White-box attacks, which use knowledge of the model to achieve their impact (a minimal example is sketched after this list).
  - Black-box attacks, which do not have knowledge of the underlying model.
• These attacks might also be untargeted, where the aim is to just achieve any corruption of the output, or targeted, where the aim is to coerce the model to produce a specific output.
• Thankfully, malicious attacks on AI systems are relatively rare, and the more common problem is where AI systems encounter atypical, unfamiliar data in real-world scenarios. This can range from novel environmental conditions for autonomous vehicles to unprecedented user inputs in chatbots, challenging the system's ability to make accurate predictions or decisions.
• When faced with unusual data, AI systems might exhibit unpredictable behavior, potentially jeopardizing the trustability of the outputs. Ensuring robustness in such situations necessitates training models not just on larger datasets but on more diverse datasets that encompass a wide array of possible inputs, preparing them for unforeseen scenarios.
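To make the white-box idea above concrete, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), used here as a standard illustration rather than anything prescribed by this playbook. The attacker uses the model's own gradients to nudge an input just enough to change its prediction; the model, input and epsilon below are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained image classifier.
# "White-box" means the attacker can read its parameters and gradients.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, true_label, epsilon=0.05):
    """Return an adversarial copy of x using the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), true_label)
    loss.backward()
    # Step *up* the loss gradient: a tiny, structured perturbation of the input.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Toy usage: a random "image" stands in for real data, so the prediction may or
# may not flip here; against a real trained model, small epsilons often suffice.
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(x, y)
print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

This untargeted form simply pushes the loss up; targeted variants instead step down the gradient of a loss computed against a chosen target class.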

RECOMMENDATIONS

• Make sure you understand the scale of the problem in your use case: test your systems to make sure that small changes in the input do not produce significant changes in the meaning of the output (a minimal consistency check is sketched after this list).
• For LLMs specifically, guaranteed robustness is much harder to achieve because these models do not actually understand the meaning of the language tokens they manipulate. Where a human might see two phrases as being the same, they could be interpreted in very different ways by an LLM and produce substantially different outputs.
• Additionally, input preprocessing techniques could enhance a system's ability to provide more robust results by ensuring multiple rephrased versions of the prompt produce consistent output.
• Traditionally used in cybersecurity, red teaming involves simulating adversarial attacks to identify vulnerabilities and weaknesses in a system. When applied to AI, red teaming serves as a potent tool to assess the resilience of machine learning models, algorithms, and applications against malicious intent and unexpected inputs.
• We can also use other machine learning systems as a red team, exploiting malicious techniques for positive use in an approach called adversarial training. In this approach, models are exposed to adversarial examples during training, enabling them to recognize and resist such inputs. This approach pits one machine learning system against the other, resulting in both being better and the overall system being significantly more robust.
• In some cases, it may be possible to use verifiably robust approaches to training, such as Interval Bound Propagation (IBP), which can guarantee certain levels of robustness, although often at the expense of accuracy, i.e. overall accuracy may be lower, but you can be sure that when the model does make a prediction it is correct.
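The first recommendation above can start as a very small test harness: send several paraphrases of the same question to the model and flag cases where the answers diverge. In the sketch below, ask_model, the similarity threshold and the example paraphrases are all hypothetical placeholders for whatever endpoint and test cases a project actually uses.

```python
from difflib import SequenceMatcher

def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM endpoint the project uses."""
    raise NotImplementedError("plug in your own model call here")

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1]; swap in an embedding-based score for real use."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_prompt_robustness(paraphrases, threshold=0.7):
    """Return True if all paraphrases of one question yield broadly consistent answers."""
    answers = [ask_model(p) for p in paraphrases]
    reference = answers[0]
    for prompt, answer in zip(paraphrases[1:], answers[1:]):
        if similarity(reference, answer) < threshold:
            print(f"Inconsistent answer for paraphrase: {prompt!r}")
            return False
    return True

# Hypothetical paraphrase set for one test case.
variants = [
    "What is the maximum payload of the drone?",
    "How much weight can the drone carry at most?",
    "What's the drone's max payload?",
]
# check_prompt_robustness(variants)  # run once ask_model is wired to a real endpoint
```

A lexical ratio is deliberately crude; teams that already have an embedding model to hand typically compare answer embeddings instead, which tolerates legitimate rewording while still catching contradictions.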

[Figure A: Images showing confusing data an AI vision system might encounter in the real world, sometimes naturally occurring, sometimes as a result of malicious attacks.]

[Figure B: An example of how a seemingly inconsequential reframing of the input to an LLM can produce substantially different, and incorrect, output.]


DEPENDABILITY

WEIWEI FENG, EXPERT IN RESIDENCE

Whilst most of the properties described in this playbook relate to the content and quality of an AI system's output, we often neglect some of the operational considerations for deploying AI in real-world situations. One of the most important of these is dependability: will an AI system actually give us an answer when we need it?

As we start to move AI systems from the lab to the real world, one of the practical realities that we must consider is timing. In many cases, the speed of an AI system's response is crucial. It doesn't really matter if a customer-service chatbot takes 10 seconds to respond, but it would clearly be a big problem if an autonomous vehicle took 10 seconds to consider its actions whilst driving at speed on a road.

This presents an important and difficult dilemma to solve. Modern AI has achieved many impressive results, but this is largely powered by huge neural network models which are slow to execute and require levels of compute power that are normally not available in real-world deployments of AI systems. If we don't have guarantees that an AI system will respond as quickly as we need it to, then confidence and adoption will falter. Fundamentally different architectures are required where timeliness of response is important.


WHY?

• In real-world deployments of AI, timing matters. A high-performing model with good accuracy is worthless if it doesn't respond quickly enough for its output to be used.
• This is most obvious in safety-critical and real-time control situations where non-negotiable guarantees on response times are present.
• Even in situations that are not safety-related, response time can dramatically affect the user experience, to the point that users might lose confidence in a system that doesn't respond quickly enough.
• In other situations, low-latency responses are required for reasons of responsiveness and throughput. Long deliberation in pursuit of the perfect answer in these types of problems could cause widespread disruption. For example:
  - In credit-card fraud detection, where vast numbers of transactions must be assessed quickly to prevent delays.
  - In AI-supported emergency response systems, where stress and the need for swift information availability have a direct impact on outcomes.
  - In real-time scheduling problems, such as traffic-light scheduling, elevator dispatching, etc.
  - In dynamic advertisement selection on websites, where a slow decision would ruin the user experience of the host website.

WHAT?

• The speed of an AI system's response has always been a primary consideration in AI research, as many of the classic benchmarks of artificial intelligence have a timing element to them: playing games, having conversations, driving vehicles, interactive robots, etc.
• In many cases it may be possible to solve a machine learning task with good or even perfect accuracy if timing is not an issue, but solving the engineering problem of deploying that same model into a more constrained and time-critical environment may be impossible.
• In some cases, it may be possible to compress or prune a large model to improve its response time. This is already commonplace in many Edge AI deployments in order to squeeze more performance out of limited hardware. However, whilst this approach improves performance, it cannot guarantee performance.
• Most machine learning models are effectively non-deterministic, meaning that the execution of the model (inference) will never be predictable. Therefore, if guarantees of performance are required, then just shrinking a big model will never be the answer. Fundamentally different architectures are required.
• The most obvious and well-known architecture is the so-called classifier cascade. In this case, machine learning models are arranged in a cascade, starting with extremely simple and small classifiers that can provide a quick answer immediately. If time allows, processing passes on to a more complex but time-consuming classifier, and this process continues to the bottom of the stack. This architecture means that we can interrupt the processing at any point and still get an answer (a minimal sketch follows this list).
• This is similar to what we see in human decision making, where we have fast "System 1" thinking to give an immediate response, followed by slower and more deliberate "System 2" thinking. In the case of classifier cascades, there may be hundreds of levels, iteratively refining the answer as far as time allows.
• In most cases, the first level of such a cascade would be a non-AI system, which encodes basic default behavior.
• The performance of systems can be enhanced through tiered approaches. At each tier, the solution should be evaluated against the previous tier to determine if it provides a significant improvement. This evaluation process allows for early termination.
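To illustrate the cascade described above, here is a minimal sketch of an anytime classifier cascade under a fixed time budget. The tier functions (a rule-based default, a small model, a large model) and their estimated costs are hypothetical placeholders; the pattern is simply to always hold a usable answer and only refine it while the budget allows.

```python
import time

# Hypothetical tiers, ordered from fastest/crudest to slowest/best.
def rule_based_default(x):
    return "review_manually"          # instant, non-AI fallback encoding default behavior

def small_model(x):
    time.sleep(0.01)                  # stands in for a cheap model's inference time
    return "legitimate"

def large_model(x):
    time.sleep(0.5)                   # stands in for an expensive model's inference time
    return "fraudulent"

# Each tier is paired with a rough estimate of its inference cost in seconds.
TIERS = [(rule_based_default, 0.0), (small_model, 0.01), (large_model, 0.5)]

def cascade_predict(x, budget_seconds):
    """Return the best answer obtainable within the time budget.

    The first (cheapest) tier always runs, so an answer is always available;
    a deeper tier only starts if its estimated cost still fits in the budget.
    """
    deadline = time.monotonic() + budget_seconds
    first_tier, _ = TIERS[0]
    answer = first_tier(x)            # guaranteed baseline answer
    for tier_fn, est_cost in TIERS[1:]:
        if time.monotonic() + est_cost > deadline:
            break                     # not enough budget left for this tier
        answer = tier_fn(x)           # refine the answer with a costlier model
    return answer

print(cascade_predict({"amount": 950}, budget_seconds=0.05))   # stops after the small model
print(cascade_predict({"amount": 950}, budget_seconds=2.0))    # has time to reach the large model
```

The cost estimates here are hard-coded guesses; in practice they would come from profiling each tier, and, as the tiered-evaluation point above suggests, a deeper tier's answer would only replace the earlier one if it offers a significant improvement.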
