

AI Risk Management Framework: Initial Draft

March 17, 2022

This initial draft of the Artificial Intelligence Risk Management Framework (AI RMF, or Framework) builds on the concept paper released in December 2021 and incorporates the feedback received. The AI RMF is intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI products, services, and systems.

AI research and deployment is evolving rapidly. For that reason, the AI RMF and its companion documents will evolve over time. When AI RMF 1.0 is issued in January 2023, NIST, working with stakeholders, intends to have built out the remaining sections to reflect new knowledge, awareness, and practices.

Part I of the AI RMF sets the stage for why the AI RMF is important and explains its intended use and audience. Part II includes the AI RMF Core and Profiles. Part III includes a companion Practice Guide to assist in adopting the AI RMF.

That Practice Guide, which will be released for comment, includes additional examples and practices that can assist in using the AI RMF. The Guide will be part of a NIST AI Resource Center that is being established.

NIST welcomes feedback on this initial draft and the related Practice Guide to inform further development of the AI RMF. Comments may be provided at a workshop on March 29-31, 2022, and also are strongly encouraged to be shared via email. NIST will produce a second draft for comment, as well as host a third workshop, before publishing AI RMF 1.0 in January 2023. Please send comments on this initial draft to AIframework@ by April 29, 2022.


Comments are especially requested on:

1. Whether the AI RMF appropriately covers and addresses AI risks, including with the right level of specificity for various use cases.
2. Whether the AI RMF is flexible enough to serve as a continuing resource considering the evolving technology and standards landscape.
3. Whether the AI RMF enables decisions about how an organization can increase understanding of, communication about, and efforts to manage AI risks.
4. Whether the functions, categories, and subcategories are complete, appropriate, and clearly stated.
5. Whether the AI RMF is in alignment with or leverages other frameworks and standards such as those developed or being developed by IEEE or ISO/IEC SC42.
6. Whether the AI RMF is in alignment with existing practices, and broader risk management practices.
7. What might be missing from the AI RMF.
8. Whether the soon-to-be-published draft companion document citing AI risk management practices is useful as a complementary resource, and what practices or standards should be added.
9. Others?

Note: This first draft does not include Implementation Tiers as considered in the concept paper. Implementation Tiers may be added later if stakeholders consider them to be a helpful feature in the AI RMF. Comments are welcome.


Table of Contents

Part 1: Motivation
1 Overview
2 Scope
3 Audience
4 Framing Risk
  4.1 Understanding Risk and Adverse Impacts
  4.2 Challenges for AI Risk Management
5 AI Risks and Trustworthiness
  5.1 Technical Characteristics
    5.1.1 Accuracy
    5.1.2 Reliability
    5.1.3 Robustness
    5.1.4 Resilience or ML Security
  5.2 Socio-Technical Characteristics
    5.2.1 Explainability
    5.2.2 Interpretability
    5.2.3 Privacy
    5.2.4 Safety
    5.2.5 Managing Bias
  5.3 Guiding Principles
    5.3.1 Fairness
    5.3.2 Accountability
    5.3.3 Transparency

Part 2: Core and Profiles
6 AI RMF Core
  6.1 Map
  6.2 Measure
  6.3 Manage
  6.4 Govern
7 AI RMF Profiles
8 Effectiveness of the AI RMF

Part 3: Practical Guide
9 Practice Guide



Part 1: Motivation

1 Overview

Remarkable surges in artificial intelligence (AI) capabilities have led to a wide range of innovations with the potential to benefit nearly all aspects of our society and economy – everything from commerce and healthcare to transportation and cybersecurity. AI systems are used for tasks such as informing and advising people and taking actions where they can have beneficial impact, such as safety and housing.

AI systems sometimes do not operate as intended because they are making inferences from patterns observed in data rather than a true understanding of what causes those patterns. Ensuring that these inferences are helpful and not harmful in particular use cases – especially when inferences are rapidly scaled and amplified – is fundamental to trustworthy AI. While answers to the question of what makes an AI technology trustworthy differ, there are certain key characteristics which support trustworthiness, including accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and mitigation of harmful bias. There also are key guiding principles to take into account, such as accountability, fairness, and equity.

Cultivating trust and communication about how to understand and manage the risks of AI systems will help create opportunities for innovation and realize the full potential of this technology.

It is important to note that the AI RMF is neither a checklist nor should it be used in any way to certify an AI system. Likewise, using the AI RMF does not substitute for due diligence and judgment by organizations and individuals in deciding whether to design, develop, and deploy AI technologies – and if so, under what conditions.

Many activities related to managing risk for AI are common to managing risk for other types of technology. An AI Risk Management Framework (AI RMF, or Framework) can address challenges unique to AI systems. This AI RMF is an initial attempt to describe how the risks from AI-based systems differ from other domains and to encourage and equip many different stakeholders in AI to address those risks purposefully.

This voluntary framework provides a flexible, structured, and measurable process to address AI risks throughout the AI lifecycle, offering guidance for the development and use of trustworthy and responsible AI. It is intended to improve understanding of and to help organizations manage both enterprise and societal risks related to the development, deployment, and use of AI systems. Adopting the AI RMF can assist organizations, industries, and society to understand and determine their acceptable levels of risk.


In addition, it can be used to map compliance considerations beyond those addressed by this framework, including existing regulations, laws, or other mandatory guidance.

Risks to any software or information-based system apply to AI; that includes important concerns related to cybersecurity, privacy, safety, and infrastructure. This framework aims to fill the gaps related specifically to AI. Rather than repeat information in other guidance, users of the AI RMF are encouraged to address those non-AI-specific issues via guidance already available.

For the purposes of the NIST AI RMF, the term artificial intelligence refers to algorithmic processes that learn from data in an automated or semi-automated manner.

Part 1 of this framework establishes the context for the AI risk management process. Part 2 provides guidance on outcomes and activities to carry out that process to maximize the benefits and minimize the risks of AI. Part 3 [yet to be developed] assists in using the AI RMF and offers sample practices to be considered in carrying out this guidance, before, during, and after AI products, services, and systems are developed and deployed.

The Framework, and supporting resources, will be updated and improved based on evolving technology and the standards landscape around the globe. In addition, as the AI RMF is put into use, additional lessons will be learned that can inform future updates and additional resources.

NIST's development of the AI RMF in collaboration with the private and public sectors is consistent with its broader AI efforts called for by the National AI Initiative Act of 2020 (P.L. 116-283), the National Security Commission on Artificial Intelligence recommendations, and the Plan for Federal Engagement in AI Standards and Related Tools. Engagement with the broad AI community during this Framework's development also informs AI research and development and evaluation by NIST and others.


2 Scope


The NIST AI RMF offers a process for managing risks related to AI systems across a wide spectrum of types, applications, and maturity. This framework is organized and intended to be understood and used by individuals and organizations, regardless of sector, size, or level of familiarity with a specific type of technology. Ultimately, it will be offered in multiple formats, including online versions, to provide maximum flexibility.

The AI RMF serves as a part of a broader NIST resource center containing documents, taxonomy, suggested toolkits, datasets, code, and other forms of technical guidance related to the development and implementation of trustworthy AI. Resources will include a knowledge base of terminology related to trustworthy and responsible AI and how those terms are used by different stakeholders.

The AI RMF is neither a checklist nor a compliance mechanism to be used in isolation. It should be integrated within the organization developing and using AI and be incorporated into enterprise risk management; doing so ensures that AI will be treated along with other critical risks, yielding a more integrated outcome and resulting in organizational efficiencies.

Attributes of the AI RMF

The AI RMF strives to:

1. Be risk-based, resource efficient, and voluntary.
2. Be consensus-driven and developed and regularly updated through an open, transparent process. All stakeholders should have the opportunity to contribute to the AI RMF's development.
3. Use clear and plain language that is understandable by a broad audience, including senior executives, government officials, non-governmental organization leadership, and those who are not AI professionals – while still of sufficient technical depth to be useful to practitioners. The AI RMF should allow for communication of AI risks across an organization, between organizations, with customers, and to the public at large.
4. Provide common language and understanding to manage AI risks. The AI RMF should offer taxonomy, terminology, definitions, metrics, and characterizations for AI risk.
5. Be easily usable and mesh with other aspects of risk management. Use of the Framework should be intuitive and readily adaptable as part of an organization's broader risk management strategy and processes. It should be consistent or aligned with other approaches to managing AI risks.
6. Be useful to a wide range of perspectives, sectors, and technology domains. The AI RMF should be both technology agnostic and applicable to context-specific use cases.
7. Be outcome-focused and non-prescriptive. The Framework should provide a catalog of outcomes and approaches rather than prescribe one-size-fits-all requirements.
8. Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks – as well as illustrate the need for additional, improved resources.
9. Be law- and regulation-agnostic. The Framework should support organizations' abilities to operate under applicable domestic and international legal or regulatory regimes.
10. Be a living document. The AI RMF should be readily updated as technology, understanding, and approaches to AI trustworthiness and uses of AI change and as stakeholders learn from implementing AI risk management generally and this framework in particular.

3 Audience

AI risk management is a complex and relatively new area, and the list of individuals, groups, communities, and organizations that can be affected by AI technologies is extensive. Identifying and managing AI risks and impacts – both positive and adverse – requires a broad set of perspectives and stakeholders.


Figure 1: Key stakeholder groups associated with the AI RMF.

As Figure 1 illustrates, NIST has identified four stakeholder groups as intended audiences of this Framework: AI system stakeholders, operators and evaluators, external stakeholders, and the general public. Ideally, members of all stakeholder groups would be involved or represented in the risk management process, including those individuals and community representatives that may be affected by the use of AI technologies.

AI system stakeholders are those who have the most control and responsibility over the design, development, deployment, and acquisition of AI systems, and the implementation of AI risk management practices. This group comprises the primary adopters of the AI RMF. They may include individuals or teams within or among organizations with responsibilities to commission, fund, procure, develop, or deploy an AI system: business teams, design and development teams, internal risk management teams, and compliance teams. Small to medium-sized organizations face different challenges in implementing the AI RMF than large organizations.

Operators and evaluators provide monitoring and formal/informal test, evaluation, validation, and verification (TEVV) of system performance, relative to both technical and socio-technical requirements. These stakeholders, which include organizations that operate or employ AI systems, use the output for decisions or to evaluate their performance. This group can include users who interpret or incorporate the output of AI systems in settings with a high potential for adverse impacts. They might include academic, public, and private sector researchers; professional evaluators and auditors; system operators; and expert end users.

External stakeholders provide formal and/or quasi-formal norms or guidance for specifying and addressing AI risks. External to the primary adopters of the AI RMF, they can include trade groups, standards developing organizations, advocacy groups, and civil society organizations. Their actions can designate boundaries for operation (technical or legal) and balance societal values and priorities related to civil liberties and rights, the economy, and security.

The general public is most likely to directly experience positive and adverse impacts of AI technologies. They may provide the motivation for actions taken by the other stakeholders and can include individuals, communities, and consumers in the context where an AI system is developed or deployed.

4 Framing Risk

AI systems hold the potential to advance our quality of life and lead to new services, support, and efficiencies for people, organizations, markets, and society. Identifying, mitigating, and minimizing risks and potential harms associated with AI technologies are essential steps towards the acceptance and widespread use of AI technologies. A risk management framework should provide a structured, yet flexible, approach for managing enterprise and societal risk resulting from the incorporation of AI systems into products, processes, organizations, systems, and societies. Organizations managing an enterprise's AI risk also should be mindful of larger societal AI considerations and risks. If a risk management framework can help to effectively address and manage AI risk and adverse impacts, it can lead to more trustworthy AI systems.

4.1 Understanding Risk and Adverse Impacts

Risk is a measure of the extent to which an entity is negatively influenced by a potential circumstance or event. Typically, risk is a function of 1) the adverse impacts that could arise if the circumstance or event occurs; and 2) the likelihood of occurrence. Entities can be individuals, groups, or communities as well as systems, processes, or organizations.
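To make the two-part definition above concrete, the short sketch below scores risk as a function of likelihood and adverse impact. It is only an illustration under an assumed 1-5 rating scale and a multiplicative combination; the AI RMF does not prescribe any particular quantification, and the function and parameter names here are hypothetical.

    # Illustrative only: assumed 1-5 scales and a multiplicative model, not an AI RMF requirement.
    def risk_score(likelihood: int, adverse_impact: int) -> int:
        """Combine the two factors in the definition of risk: adverse impact and likelihood."""
        if not (1 <= likelihood <= 5 and 1 <= adverse_impact <= 5):
            raise ValueError("expected ratings on a 1-5 scale")
        return likelihood * adverse_impact

    # Example: an unlikely event (2) with severe adverse impact (5) scores 10 out of 25.
    print(risk_score(likelihood=2, adverse_impact=5))

Other combinations, such as weighted sums or qualitative matrices, are equally consistent with the definition; the point is only that both factors enter the assessment.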

The impact of AI systems can be positive, negative, or both and can address, create, or result in opportunities or threats. According to the International Organization for Standardization (Guide 73:2009; IEC/ISO 31010), certain risks can be positive. While risk management processes address adverse impacts, this framework intends to offer approaches to minimize anticipated negative impacts of AI systems and identify opportunities to maximize positive impacts.

Additionally, this framework is designed to be responsive to new risks as they emerge rather than enumerating all known risks in advance. This flexibility is particularly important where impacts are not easily foreseeable, and applications are evolving rapidly. While AI benefits and some AI risks are well-known, the AI community is only beginning to understand and classify incidents and scenarios that result in harm. Figure 2 provides examples of potential harms from AI systems.

Risk management can also drive AI developers and users to understand and account for the inherent uncertainties and inaccuracy of their models and systems, which in turn can increase the overall performance and trustworthiness of those models. Managing risk and adverse impacts contributes to building trustworthy AI technologies and applications.

Figure 2: Examples of potential harms from AI systems.

4.2 Challenges for AI Risk Management

4.2.1 Risk Measurement

AI risks and impacts that are not well-defined or adequately understood are difficult to measure quantitatively or qualitatively. The presence of third-party data or systems may also complicate risk measurement. Those attempting to measure the adverse impact on a population may not be aware that certain demographics may experience harm differently than others.

AI risks can have a temporal dimension. Measuring risk at an earlier stage in the AI lifecycle may yield different results than measuring risk at a later stage. Some AI risks may have a low probability in the short term but have a high likelihood for adverse impacts. Other risks may be latent at present but may increase in the long term as AI systems evolve.

Furthermore, inscrutable AI systems can complicate the measurement of risk. Inscrutability can be a result of the opaque nature of AI technologies (lack of explainability or interpretability), lack of transparency or documentation in AI system development or deployment, or inherent uncertainties in AI systems.

4.2.2 Risk Thresholds

Thresholds refer to the values used to establish concrete decision points and operational limits that trigger a response, action, or escalation. AI risk thresholds (sometimes referred to as Key Risk Indicators) can involve both technical factors (such as error rates for determining bias) and human values (such as social or legal norms for appropriate levels of transparency). These factors and values can establish levels of risk (e.g., low, medium, or high) based on broad categories of adverse impacts or harms.
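As a rough illustration of how a threshold turns a measurement into a decision point, the sketch below maps an observed error rate to a low/medium/high level and a response. The cut-off values and responses are hypothetical assumptions chosen for illustration; the AI RMF does not prescribe thresholds, and in practice they would be set through the policies and norms discussed below.

    # Illustrative only: hypothetical Key Risk Indicator thresholds, not prescribed by the AI RMF.
    LEVELS = [  # (upper bound on error rate, risk level, response)
        (0.01, "low", "log and continue routine monitoring"),
        (0.05, "medium", "notify the risk owner and schedule a review"),
        (1.00, "high", "escalate and pause deployment pending assessment"),
    ]

    def classify(error_rate: float) -> tuple:
        """Map a measured error rate to an assumed risk level and response."""
        for bound, level, response in LEVELS:
            if error_rate <= bound:
                return level, response
        raise ValueError("error rate must be between 0 and 1")

    print(classify(0.03))  # -> ('medium', 'notify the risk owner and schedule a review')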

Thresholds and values can also determine where AI systems present unacceptable risks to certain organizations, systems, social domains, or demographics. In these cases, the question is not how to better manage risk of AI, but whether an AI system should be designed, developed, or deployed at all.

The AI RMF does not prescribe risk thresholds or values. Risk tolerance – the level of risk or degree of uncertainty that is acceptable to organizations or society – is context- and use case-specific. Therefore, risk thresholds should be set through policies and norms that can be established by AI system owners, organizations, industries, communities, or regulators (who often are acting on behalf of individuals or societies). Risk thresholds and values are likely to change and adapt over time as policies and norms change or evolve. In addition, different organizations may have different risk thresholds (or tolerance) due to varying organizational priorities and resource considerations. Even within a single organization there can be a balancing of priorities and tradeoffs between technical factors and human values. Emerging knowledge and methods for better informing these decisions are being developed and debated by business, governments, academia, and civil society. To the extent that challenges for specifying risk thresholds or determining values remain unresolved, there may be contexts where a risk management framework is not yet readily applicable for mitigating AI risks and adverse impacts.

The AI RMF provides the opportunity for organizations to specifically define their risk thresholds and then to manage those risks within their tolerances.

4.2.3 Organizational Integration

The AI RMF is neither a checklist nor a compliance mechanism to be used in isolation. It should be integrated within the organization developing and using AI technologies and be incorporated into enterprise risk management; doing so ensures that AI will be treated along with other critical risks, yielding a more integrated outcome and resulting in organizational efficiencies.
