
Governing AI: A Blueprint for the Future

May 25, 2023

Table of contents

Foreword
By Microsoft Vice Chair and President Brad Smith

Part 1: Governing AI: A legal and regulatory blueprint for the future
  Implementing and building upon new government-led AI safety frameworks
  Requiring effective safety brakes for AI systems that control critical infrastructure
  Developing a broad legal and regulatory framework based on the technology architecture for AI
  Promote transparency and ensure academic and nonprofit access to AI
  Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology

Part 2: Responsible by design: Microsoft's approach to building AI systems that benefit society
  Microsoft's commitment to developing AI responsibly
  Operationalizing Responsible AI at Microsoft
  Case study: Applying our Responsible AI approach to the new Bing
  Advancing Responsible AI through company culture
  Empowering customers on their Responsible AI journey

Conclusion


Foreword: How Do We Best Govern AI?
Brad Smith, Vice Chair and President, Microsoft

“Don’t ask what computers can do, ask what they should do.”

That is the title of the chapter on AI and ethics in a book I coauthored in 2019. At the time, we wrote that “this may be one of the defining questions of our generation.” Four years later, the question has seized center stage not just in the world’s capitals, but around many dinner tables.

As people have used or heard about the power of OpenAI’s GPT-4 foundation model, they have often been surprised or even astounded. Many have been enthused or even excited. Some have been concerned or even frightened. What has become clear to almost everyone is something we noted four years ago—we are the first generation in the history of humanity to create machines that can make decisions that previously could only be made by people.

Countries around the world are asking common questions. How can we use this new technology to solve our problems? How do we avoid or manage new problems it might create? How do we control technology that is so powerful?

These questions call not only for broad and thoughtful conversation, but decisive and effective action. This paper offers some of our ideas and suggestions as a company.

These suggestions build on the lessons we’ve been learning based on the work we’ve been doing for several years. Microsoft CEO Satya Nadella set us on a clear course when he wrote in 2016 that “perhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology.”

Since that time, we’ve defined, published, and implemented ethical principles to guide our work. And we’ve built out constantly improving engineering and governance systems to put these principles into practice. Today we have nearly 350 people working on responsible AI at Microsoft, helping us implement best practices for building safe, secure, and transparent AI systems designed to benefit society.

New opportunities to improve the human condition

The resulting advances in our approach have given us the capability and confidence to see ever-expanding ways for AI to improve people’s lives. We’ve seen AI help save individuals’ eyesight, make progress on new cures for cancer, generate new insights about proteins, and provide predictions to protect people from hazardous weather. Other innovations are fending off cyberattacks and helping to protect fundamental human rights, even in nations afflicted by foreign invasion or civil war.

Everyday activities will benefit as well. By acting as a copilot in people’s lives, the power of foundation models like GPT-4 is turning search into a more powerful tool for research and improving productivity for people at work. And for any parent who has struggled to remember how to help their 13-year-old child through an algebra homework assignment, AI-based assistance is a helpful tutor.

In so many ways, AI offers perhaps even more potential for the good of humanity than any invention that has preceded it. Since the invention of the printing press with movable type in the 1400s, human prosperity has been growing at an accelerating rate. Inventions like the steam engine, electricity, the automobile, the airplane, computing, and the internet have provided many of the building blocks for modern civilization. And like the printing press itself, AI offers a new tool to genuinely help advance human learning and thought.


Guardrails for the future

Another conclusion is equally important: it’s not enough to focus only on the many opportunities to use AI to improve people’s lives. This is perhaps one of the most important lessons from the role of social media. Little more than a decade ago, technologists and political commentators alike gushed about the role of social media in spreading democracy during the Arab Spring. Yet five years after that, we learned that social media, like so many other technologies before it, would become both a weapon and a tool—in this case aimed at democracy itself.

Today, we are 10 years older and wiser, and we need to put that wisdom to work. We need to think early on and in a clear-eyed way about the problems that could lie ahead. As technology moves forward, it’s just as important to ensure proper control over AI as it is to pursue its benefits. We are committed and determined as a company to develop and deploy AI in a safe and responsible way. We also recognize, however, that the guardrails needed for AI require a broadly shared sense of responsibility and should not be left to technology companies alone.

When we at Microsoft adopted our six ethical principles for AI in 2018, we noted that one principle was the bedrock for everything else—accountability. This is the fundamental need: to ensure that machines remain subject to effective oversight by people and the people who design and operate machines remain accountable to everyone else. In short, we must always ensure that AI remains under human control. This must be a first-order priority for technology companies and governments alike.

This connects directly with another essential concept. In a democratic society, one of our foundational principles is that no person is above the law. No government is above the law. No company is above the law, and no product or technology should be above the law. This leads to a critical conclusion: people who design and operate AI systems cannot be accountable unless their decisions and actions are subject to the rule of law.

In many ways, this is at the heart of the unfolding AI policy and regulatory debate. How do governments best ensure that AI is subject to the rule of law? In short, what form should new law, regulation, and policy take?

A five-point blueprint for the public governance of AI

Part 1 of this paper offers a five-point blueprint to address several current and emerging AI issues through public policy, law, and regulation. We offer this recognizing that every part of this blueprint will benefit from broader discussion and require deeper development. But we hope this can contribute constructively to the work ahead.

First, implement and build upon new government-led AI safety frameworks. The best way to succeed is often to build on the successes and good ideas of others, especially when one wants to move quickly. In this instance, there is an important opportunity to build on work completed just four months ago by the U.S. National Institute of Standards and Technology, or NIST. Part of the Department of Commerce, NIST has completed and launched a new AI Risk Management Framework.

We offer four concrete suggestions to implement and build upon this framework, including commitments Microsoft is making in response to a recent White House meeting with leading AI companies. We also believe the Administration and other governments can accelerate momentum through procurement rules based on this framework.

Second, require effective safety brakes for AI systems that control critical infrastructure. In some quarters, thoughtful individuals increasingly are asking whether we can satisfactorily control AI as it becomes more powerful. Concerns are sometimes posed regarding AI control of critical infrastructure like the electrical grid, water system, and city traffic flows.

This is the right time to discuss this question. This blueprint proposes new safety requirements that in effect would create safety brakes for AI systems that control the operation of designated critical infrastructure. These fail-safe systems would be part of a comprehensive approach to system safety that would keep effective human oversight, resilience, and robustness top of mind. In spirit, they would be similar to the braking systems engineers have long built into other technologies such as elevators, school buses, and high-speed trains, to safely manage not just everyday scenarios, but emergencies as well.

In this approach, the government would define the class of high-risk AI systems that control critical infrastructure and warrant such safety measures as part of a comprehensive approach to system management. New laws would require operators of these systems to build safety brakes into high-risk AI systems by design. The government would then ensure that operators test high-risk systems regularly to make certain that the system safety measures are effective. And AI systems that control the operation of designated critical infrastructure would be deployed only in licensed AI datacenters that would ensure a second layer of protection through the ability to apply these safety brakes, thereby ensuring effective human control.

Third, develop a broad legal and regulatory framework based on the technology architecture for AI. We believe there will need to be a legal and regulatory architecture for AI that reflects the technology architecture for AI itself. In short, the law will need to place various regulatory responsibilities upon different actors based upon their role in managing different aspects of AI technology.

For this reason, this blueprint includes information about some of the critical pieces that go into building and using new generative AI models. Using this as context, it proposes that different laws place specific regulatory responsibilities on the organizations exercising certain responsibilities at three layers of the technology stack: the applications layer, the model layer, and the infrastructure layer.

This should first apply existing legal protections at the applications layer to the use of AI. This is the layer where the safety and rights of people will most be impacted, especially because the impact of AI can vary markedly in different technology scenarios. In many areas, we don’t need new laws and regulations. We instead need to apply and enforce existing laws and regulations, helping agencies and courts develop the expertise needed to adapt to new AI scenarios.

KY3C: Applying to AI services the “Know Your Customer” concept developed for financial services.
Know your Cloud. Know your Customer. Know your Content.


There will then be a need to develop new law and regulations for highly capable AI foundation models, best implemented by a new government agency. This will impact two layers of the technology stack. The first will require new regulations and licensing for these models themselves. And the second will involve obligations for the AI infrastructure operators on which these models are developed and deployed. The blueprint that follows offers suggested goals and approaches for each of these layers.

In doing so, this blueprint builds in part on a principle developed in recent decades in banking to protect against money laundering and criminal or terrorist use of financial services. The “Know Your Customer”—or KYC—principle requires that financial institutions verify customer identities, establish risk profiles, and monitor transactions to help detect suspicious activity. It would make sense to take this principle and apply a KY3C approach that creates in the AI context certain obligations to know one’s cloud, one’s customers, and one’s content.

In the first instance, the developers of designated, powerful AI models first “know the cloud” on which their models are developed and deployed. In addition, such as for scenarios that involve sensitive uses, the company that has a direct relationship with a customer—whether it be the model developer, application provider, or cloud operator on which the model is operating—should “know the customers” that are accessing it.

Also, the public should be empowered to “know the content” that AI is creating through the use of a label or other mark informing people when something like a video or audio file has been produced by an AI model rather than a human being. This labeling obligation should also protect the public from the alteration of original content and the creation of “deep fakes.” This will require the development of new laws, and there will be many important questions and details to address. But the health of democracy and the future of civic discourse will benefit from thoughtful measures to deter the use of new technology to deceive or defraud the public.
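As a purely illustrative sketch, and not part of the blueprint itself or any particular provenance standard, the following Python example shows one way a “know the content” label could be represented in practice: a small sidecar manifest written next to a generated media file, recording that it was AI-generated together with a digest that lets others detect later alteration. The file naming and manifest fields here are assumptions chosen only for illustration.

```python
import hashlib
import json
from pathlib import Path


def write_provenance_manifest(media_path: str, generator: str) -> Path:
    """Write a sidecar manifest declaring a media file as AI-generated.

    The manifest stores the generator name and a SHA-256 digest of the file,
    so a viewer can show an "AI-generated" label and detect later alteration.
    """
    media = Path(media_path)
    manifest = {
        "ai_generated": True,  # hypothetical field names, for illustration only
        "generator": generator,
        "sha256": hashlib.sha256(media.read_bytes()).hexdigest(),
    }
    manifest_path = media.parent / (media.name + ".provenance.json")
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest_path


def content_is_unaltered(media_path: str) -> bool:
    """Return True if the labeled file still matches the digest in its manifest."""
    media = Path(media_path)
    manifest_path = media.parent / (media.name + ".provenance.json")
    if not manifest_path.exists():
        return False
    recorded = json.loads(manifest_path.read_text())
    return recorded.get("sha256") == hashlib.sha256(media.read_bytes()).hexdigest()


# Example usage: label a generated clip, then confirm it has not been modified.
# manifest = write_provenance_manifest("generated_clip.mp4", generator="example-model")
# assert content_is_unaltered("generated_clip.mp4")
```

A production labeling scheme would of course need cryptographic signing and broad industry adoption rather than a local sidecar file; the sketch is only meant to make the “label plus alteration detection” idea concrete.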

Fourth, promote transparency and ensure academic and nonprofit access to AI. We believe a critical public goal is to advance transparency and broaden access to AI resources. While there are some important tensions between transparency and the need for security, there exist many opportunities to make AI systems more transparent in a responsible way. That’s why Microsoft is committing to an annual AI transparency report and other steps to expand transparency for our AI services.

We also believe it is critical to expand access to AI resources for academic research and the nonprofit community. Basic research, especially at universities, has been of fundamental importance to the economic and strategic success of the United States since the 1940s. But unless academic researchers can obtain access to substantially more computing resources, there is a real risk that scientific and technological inquiry will suffer, including relating to AI itself. Our blueprint calls for new steps, including steps we will take across Microsoft, to address these priorities.

Fifth, pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology. One lesson from recent years is what democratic societies can accomplish when they harness the power of technology and bring the public and private sectors together. It’s a lesson we need to build upon to address the impact of AI on society.

We will all benefit from a strong dose of clear-eyed optimism. AI is an extraordinary tool. But like other technologies, it too can become a powerful weapon, and there will be some around the world who will seek to use it that way. But we should take some heart from the cyber front and the last year and a half in the war in Ukraine. What we found is that when the public and private sectors work together, when like-minded allies come together, and when we develop technology and use it as a shield, it’s more powerful than any sword on the planet.

Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs. Perhaps more than anything, a wave of new AI technology provides an occasion for thinking big and acting boldly. In each area, the key to success will be to develop concrete initiatives and bring governments, respected companies, and energetic NGOs together to advance them. We offer some initial ideas in this report, and we look forward to doing much more in the months and years ahead.

Governing AI within Microsoft

Ultimately, every organization that creates or uses advanced AI systems will need to develop and implement its own governance systems. Part 2 of this paper describes the AI governance system within Microsoft—where we began, where we are today, and how we are moving into the future.

As this section recognizes, the development of a new governance system for new technology is a journey in and of itself. A decade ago, this field barely existed. Today Microsoft has almost 350 employees specializing in it, and we are investing in our next fiscal year to grow this further.

As described in this section, over the past six years we have built out a more comprehensive AI governance structure and system across Microsoft. We didn’t start from scratch, borrowing instead from best practices for the protection of cybersecurity, privacy, and digital safety. This is all part of the company’s comprehensive Enterprise Risk Management (ERM) system, which has become a critical part of the management of corporations and many other organizations in the world today.

When it comes to AI, we first developed ethical principles and then had to translate these into more specific corporate policies. We’re now on version 2 of the corporate standard that embodies these principles and defines more precise practices for our engineering teams to follow. We’ve implemented the standard through training, tooling, and testing systems that continue to mature rapidly. This is supported by additional governance processes that include monitoring, auditing, and compliance measures.

As with everything in life, one learns from experience. When it comes to AI governance, some of our most important learning has come from the detailed work required to review specific sensitive AI use cases. In 2019, we founded a sensitive use review program to subject our most sensitive and novel AI use cases to rigorous, specialized review that results in tailored guidance. Since that time, we have completed roughly 600 sensitive use case reviews. The pace of this activity has quickened to match the pace of AI advances, with almost 150 such reviews taking place in the last 11 months.

All of this builds on the work we have done and will continue to do to advance responsible AI through company culture. That means hiring new and diverse talent to grow our responsible AI ecosystem and investing in the talent we already have at Microsoft to develop skills and empower them to think broadly about the potential impact of AI systems on individuals and society. It also means that much more than in the past, the frontier of technology requires a multidisciplinary approach that combines great engineers with talented professionals from across the liberal arts.

All this is offered in this paper in the spirit that we’re on a collective journey to forge a responsible future for artificial intelligence. We can all learn from each other. And no matter how good we may think something is today, we will all need to keep getting better.

As technology change accelerates, the work to govern AI responsibly must keep pace with it. With the right commitments and investments, we believe it can.

Brad Smith
Vice Chair and President, Microsoft

Part 1

Governing AI: A Legal and Regulatory Blueprint for the Future

Around the world, governments are looking for or developing what in effect are new blueprints to govern artificial intelligence. There, of course, is no single or right approach. We offer here a five-point approach to help governance advance more quickly, based on the questions and issues that are pressing to many. Every part of this blueprint will benefit from broader discussion and require deeper development. But we hope this can contribute constructively to the work ahead.

This blueprint recognizes the many opportunities to use AI to improve people’s lives while also quickly developing new controls, based on both governmental and private initiative, including broader international collaboration. It offers specific steps to:

• Implement and build upon new government-led AI safety frameworks.
• Require effective safety brakes for AI systems that control critical infrastructure.
• Develop a broader legal and regulatory framework based on the technology architecture for AI.
• Promote transparency and ensure academic and public access to AI.
• Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology.

This plan responds in part to the White House’s recent call for commitments from AI companies to ensure AI safety and security, and it includes several specific commitments that Microsoft is offering in response.

1. Implement and build upon new government-led AI safety frameworks.

One of the most effective ways to move quickly is to build on recent advances in governmental work that advance AI safety. This makes far more sense than starting from scratch, especially when there is a recent and strong footing on which to start.

As events have it, just four months ago, the National Institute of Standards and Technology in the United States, or NIST, completed a year and a half of intensive work and launched an important new AI safety initiative. This new AI Risk Management Framework builds on NIST’s years of experience in the cybersecurity domain, where similar frameworks and standards have played a critical role.

We believe the new AI Risk Management Framework provides a strong foundation that companies and governments alike can immediately put into action to ensure the safer use of artificial intelligence. While no single such effort can answer every question, the immediate adoption of this framework will accelerate AI safety momentum around the world. And we can all build upon it in the months ahead.

Part of the U.S. Department of Commerce, NIST developed its new framework based on direction by Congress in the National Artificial Intelligence Initiative Act of 2020. The framework is designed to enable organizations to help manage AI risks and promote the trustworthy and responsible development and use of AI systems. It was developed through a consensus-driven and transparent process involving work by government agencies, civil society organizations, and several technology leaders, including Microsoft.


NIST brings years of experience to the AI risk management space from its years of work developing critical tools to address cybersecurity risks. Microsoft has long experience working with NIST on the cybersecurity front, and it’s encouraging to see NIST apply this expertise to help organizations govern, map, measure, and manage the risks associated with AI. We’re not alone in our high regard for NIST’s approach, as numerous governments, international organizations, and leading businesses have already validated the value of the new AI Risk Management Framework.

Now the question is how to build upon this recent progress so we can all move faster to address AI risks. We believe there are at least four immediate opportunities:

First, Microsoft is committing to the White House, in response to its recent meeting, that we will implement NIST’s AI Risk Management Framework. Microsoft’s internal Responsible AI Standard is closely aligned with the framework already, and we will now work over the summer to implement it so that all our AI services benefit from it.

Second, we are similarly committing that we will augment Microsoft’s existing AI testing work with new steps to further strengthen our engineering practices relating to high-risk AI systems.

Under Microsoft’s Responsible AI Standard, our AI engineering teams already work to identify potential harms, measure their propensity to occur, and build mitigations to address them. We have further developed red teaming techniques using multidisciplinary teams, which were originally developed to identify cybersecurity vulnerabilities, to stress test AI systems with a wide range of expertise, including privacy, security, and fairness.

For high-risk systems, Microsoft is committing that red teaming is conducted before deployment by qualified experts who are independent of the product teams building those systems, adopting a best practice from the financial services industry. We will rely upon these red teams, together with our product teams who are responsible for systematic evaluations of the products that they build, to help us identify, measure, and mitigate potential harms.

In addition to continually monitoring, tracking, and evaluating our AI systems, we will use metrics to measure and understand systemic issues specific to generative AI experiences, such as the extent to which a model’s output is supported by information contained in input sources. (We are releasing the first of these metrics this week as part of our Azure OpenAI Service at Build, our annual developer conference.)
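As a minimal, purely illustrative sketch, and not the metric being released in the Azure OpenAI Service, the following Python example shows one crude way to estimate how much of a model’s output is supported by its input sources: split the output into sentences and score each against the source text by lexical overlap. The function name and the overlap threshold are assumptions made only for this illustration; production groundedness metrics typically rely on trained evaluators rather than word overlap.

```python
import re


def groundedness_score(output_text: str, source_text: str, threshold: float = 0.6) -> float:
    """Fraction of output sentences whose content words mostly appear in the source.

    A crude lexical-overlap proxy for "is the output supported by the input
    sources"; it is meant only to make the idea of such a metric concrete.
    """
    source_words = set(re.findall(r"[a-z0-9']+", source_text.lower()))
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", output_text.strip()) if s]
    if not sentences:
        return 0.0
    supported = 0
    for sentence in sentences:
        words = re.findall(r"[a-z0-9']+", sentence.lower())
        if not words:
            continue
        overlap = sum(1 for w in words if w in source_words) / len(words)
        if overlap >= threshold:
            supported += 1
    return supported / len(sentences)


# Example: a two-sentence summary where only the first sentence is grounded.
source = "The AI Risk Management Framework was released by NIST in January 2023."
summary = "NIST released the AI Risk Management Framework in January 2023. It is legally binding."
print(groundedness_score(summary, source))  # prints 0.5
```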

Third, we believe the Administration can accelerate momentum through an Executive Order that requires vendors of critical AI systems to the U.S. Government to self-attest that they are implementing NIST’s AI Risk Management Framework.

It’s important for governments to move faster, using both carrots and sticks. In the United States, federal procurement mechanisms have repeatedly demonstrated their value in improving the quality of products and advancing industry practice more generally. Building on similar approaches used for key technology priorities like cybersecurity, the U.S. Government could insert requirements related to the AI Risk Management Framework into the federal procurement process for AI systems.
