
AI and Strategic Decision-Making

Communicating trust and uncertainty in AI-enriched intelligence

Megan Hughes, Richard Carter, Amy Harland and Alexander Babuta

April 2024


Foreword
About CETaS
Acknowledgements
Executive Summary
1. Introduction
1.1 The intelligence cycle
1.2 Research methodology
2. AI-enriched Intelligence and Uncertainty
2.1 UK intelligence assessment principles
2.2 Potential risks associated with AI in intelligence analysis
2.3 Challenges to best practice in intelligence assessment
2.4 Building trust in AI systems
3. Integrating AI into Analysis and Assessment Processes
3.1 Opportunities and benefits
3.2 Assurance
3.3 When to communicate AI-enriched intelligence
4. How to Communicate AI-enriched Intelligence to Strategic Decision-Makers
4.1 Balancing accessibility and technical detail
4.2 Training, governance, and oversight
5. Conclusion and Recommendations
About the Authors


Foreword

Advances in artificial intelligence (AI) bring new opportunities and hold exciting potential for both intelligence production and assessment, helping to surface new intelligence insights and boosting productivity. AI is not new to GCHQ or the intelligence assessment community. But the accelerating pace of change is. In an increasingly contested and volatile world, we need to continue to exploit AI to identify threats and emerging risks, alongside our important contribution to ensuring AI safety and security.

Across intelligence production and all-source assessment, AI can help to surface new insights and ensure that our analysts can access, at speed, a far greater range of data and information. We must harness the potential of AI to make sense of the ever-expanding volume of material which can inform our assessments. If we don't, we risk drowning in data and failing to spot emerging risks or trends as a result.

At the same time, advances in AI bring some new challenges for intelligence production and assessment. Questions of bias, robustness, and source validation apply just as much to AI systems as they do to the more traditional sources of insight.

This welcome, groundbreaking report explores some of the ways in which we may need to adapt our intelligence system to successfully integrate AI tools into our work. And it seeks to answer the difficult question of what needs to be in place for AI-enriched insights to be used effectively and wisely in the assessments which inform National Security decisions.

We are grateful to the Alan Turing Institute's Centre for Emerging Technology and Security (CETaS) for helping us explore this important issue, and to the large number of people across Government who have contributed to this research.

Madeleine Alessandri CMG, Chair of the Joint Intelligence Committee

Anne Keast-Butler, Director GCHQ


About CETaS

The Centre for Emerging Technology and Security (CETaS) is a research centre based at The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. The Centre’s mission is to inform UK security policy through evidence-based, interdisciplinary research on emerging technology issues. Connect with CETaS at cetas.turing.ac.uk.

This research was supported by The Alan Turing Institute’s Defence and Security Programme. All views expressed in this report are those of the authors, and do not necessarily represent the views of The Alan Turing Institute or any other organisation.

Acknowledgements

The authors are grateful to all those who took part in a research interview, focus group or exercise for this project, without whom the research would not have been possible. The authors are especially grateful to Sam for his contributions and insights throughout the research, and to Claire and Ann for supporting the project and facilitating stakeholder engagement. The authors would also like to thank Sir David Omand, Rupert Barrett-Taylor, Vivien, Rosie, Tom and Emily for their valuable feedback on an earlier version of this report. Design for this report was led by Michelle Wronski.

This work is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original authors and source are credited. The license is available at: /licenses/by-nc-sa/4.0/legalcode.

Cite this work as: Megan Hughes, Richard Carter, Amy Harland and Alexander Babuta, “AI and Strategic Decision-Making: Communicating trust and uncertainty in AI-enriched intelligence,” CETaS Research Reports (April 2024).


Executive Summary

This report presents the findings of a CETaS research project commissioned by the Joint Intelligence Organisation (JIO) and GCHQ, on the topic of artificial intelligence (AI) and strategic decision-making. The report assesses how AI-enriched intelligence should be communicated to strategic decision-makers in government, to ensure the principles of analytical rigour, transparency, and reliability of intelligence reporting and assessment are upheld. The findings are based on extensive primary research across UK assessment bodies, intelligence agencies, and other government departments, conducted over a seven-month period throughout 2023-24.

‘AI-enriched intelligence’ in this context refers to intelligence insights that have been derived in part or in whole from the use of machine learning analysis or generative AI systems such as large language models.

The research considered:

1. Whether national security decision-makers are sufficiently equipped to assess the limitations and uncertainty inherent in assessments informed by AI-enriched intelligence.

2. When and how the limitations of AI-enriched intelligence should be communicated to national security decision-makers to ensure a balance is struck between accessibility and technical detail.

3. Whether further governance, guidelines, or upskilling may be required to enable national security decision-makers to make high-stakes decisions based on AI-enriched insights.

Key findings from the research are as follows:

1. AI is a valuable analytical tool for all-source intelligence analysts. AI systems can process volumes of data far beyond the capacity of human analysts, identifying trends and anomalies that may otherwise go unnoticed. Choosing not to make use of AI for intelligence purposes therefore risks contravening the principle of comprehensive coverage in intelligence assessment, set out in the Professional Head of Intelligence Assessment Common Analytical Standards. Further, if key patterns and connections are missed, the failure to adopt AI tools could undermine the authority and value of all-source intelligence assessments to government.


2. However, the use of AI exacerbates dimensions of uncertainty inherent in intelligence assessment and decision-making processes. The outputs of AI systems are probabilistic calculations (not certainties) and are currently prone to inaccuracies when presented with incomplete or skewed data. The opaque nature of many AI systems also makes it difficult to understand how AI-derived conclusions have been reached.

3. There is a critical need for careful design, continuous monitoring, and regular adjustment of AI systems used in intelligence analysis and assessment to mitigate the risk of amplifying bias and errors.

4. The intelligence function producing the assessment product remains ultimately responsible for evaluating relevant technical metrics (such as accuracy and error rates) in AI methods used for intelligence analysis and assessment, and all-source intelligence analysts must take into account any limitations and uncertainties when producing their conclusions and judgements. (An illustrative sketch of such metrics follows this list of findings.)

5. National security decision-makers currently require a high level of assurance relating to AI system performance and security to make decisions based on AI-enriched intelligence.

6. In the absence of a robust assurance process for AI systems, national security decision-makers generally exhibited greater confidence in the ability of AI to identify events and occurrences than the ability of AI to determine causality. Decision-makers were more prepared to trust AI-enriched intelligence insights when they were corroborated by non-AI, interpretable intelligence sources.

7. Technical knowledge of AI systems varied greatly among decision-makers. Research participants repeatedly suggested that a baseline understanding of the fundamentals of AI, current capabilities, and corresponding assurance processes, would be necessary for decision-makers to make load-bearing decisions based on AI-enriched intelligence.
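To make findings 2 and 4 concrete, the minimal Python sketch below computes accuracy and error rates from a notional confusion matrix, and shows why a headline accuracy figure can mislead when the underlying data are skewed. All numbers, and the rate helper, are illustrative assumptions rather than results from any system examined in this research.

```python
# Illustrative only: notional confusion-matrix counts for a binary classifier
# applied to a skewed dataset (950 negative cases, 50 positive cases).
tp, fn = 20, 30   # positive cases found / missed (assumed numbers)
tn, fp = 940, 10  # negative cases correctly passed / wrongly flagged (assumed)

def rate(numerator: int, denominator: int) -> float:
    """Return a proportion, guarding against division by zero."""
    return numerator / denominator if denominator else 0.0

accuracy            = rate(tp + tn, tp + tn + fp + fn)  # 0.96: looks strong
false_positive_rate = rate(fp, fp + tn)                 # about 0.01
false_negative_rate = rate(fn, fn + tp)                 # 0.60: most positives missed
precision           = rate(tp, tp + fp)
recall              = 1 - false_negative_rate

print(f"accuracy={accuracy:.2f}  FPR={false_positive_rate:.2f}  "
      f"FNR={false_negative_rate:.2f}  precision={precision:.2f}  recall={recall:.2f}")
# With skewed data, 96% accuracy coexists with missing 60% of the events of
# interest, which is the kind of limitation an analyst would need to caveat.
```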

This report recommends the following actions to embed best practice when communicating AI-enriched intelligence to strategic decision-makers.

1. The Professional Head of Intelligence Assessment (PHIA) should develop guidance for communicating uncertainty within AI-enriched intelligence in all-source assessment. This guidance should outline standardised terminology to be used if articulating AI-related limitations and caveats to decision-makers. Guidance should also be provided on the threshold at which assessments should communicate the use of AI-enriched intelligence to decision-makers. (A purely illustrative sketch of what standardised wording might look like follows this list of recommendations.)


2. A layered approach should be taken by the assessment community when presenting technical information to strategic decision-makers. Assessments in a final intelligence product presented to decision-makers should always remain interpretable to non-technical audiences. However, additional information on system performance and limitations should be available on request for those with more technical expertise.

3. The UK Intelligence Assessment Academy should complete a Training Needs Analysis on behalf of the all-source assessment community to identify the requirement for training for new and existing analysts. The Academy should work with all-source assessment organisations to develop appropriate training in response to the Analysis.

4. Training should be offered to national security decision-makers (and their staff) to build their trust in assessments informed by AI-enriched intelligence. Decision-makers should be given basic briefings on the fundamentals of AI and corresponding assurance processes.

5. Short, optional expert briefings should be offered immediately prior to high-stakes national security decision-making sessions where AI-enriched intelligence underpins load-bearing decisions. These sessions should brief decision-makers on key technical details and limitations, and ensure they are given advance opportunity to consider confidence ratings. These briefings should be jointly coordinated by the JIO and National Security Secretariat and should draw from cross-governmental expertise from the network of Chief Scientific Advisers and relevant Scientific Advisory Councils. Guidance on when to offer briefings should be produced, and the need for briefings should be continuously assessed; as decision-makers become more comfortable with consuming AI-enriched intelligence, the level of desired assurance may reduce, and briefings may eventually become unnecessary.

6. A formal accreditation programme should be developed for AI systems used in intelligence analysis and assessment to ensure models meet minimum policy requirements of robustness, security, transparency, and a record of inherent bias and mitigation. Technical assurance for the application of a system to a specific problem should be devolved to relevant organisations, and each organisation’s assurance process should be accredited. This programme will require dedicated resourcing, bringing together understanding of intelligence assessment standards and processes with technical expertise. PHIA should assist in developing principles and requirements, while technical expertise for accreditation and testing should be drawn from technical authorities in the intelligence community and across government.
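As a concrete and purely hypothetical illustration of recommendation 1, the sketch below maps a notional model score and assurance status to a standardised verbal caveat. The verbal bands loosely echo the style of the PHIA probability yardstick, but the thresholds, phrases and the caveat_for_output function are assumptions made for this sketch, not PHIA guidance.

```python
# Hypothetical sketch of standardised wording for AI-related caveats.
# The bands and phrases below are illustrative assumptions, not PHIA guidance.
ILLUSTRATIVE_BANDS = [
    (0.95, "almost certain"),
    (0.80, "highly likely"),
    (0.55, "likely"),
    (0.40, "realistic possibility"),
    (0.25, "unlikely"),
    (0.10, "highly unlikely"),
    (0.00, "remote chance"),
]

def caveat_for_output(model_probability: float, assured: bool) -> str:
    """Map a notional model score and assurance status to standardised wording."""
    term = next(label for floor, label in ILLUSTRATIVE_BANDS
                if model_probability >= floor)
    assurance_note = ("the system has passed the organisation's assurance process"
                      if assured else
                      "the system has not yet completed an assurance process")
    return (f"Machine-derived judgement: {term} "
            f"(model score {model_probability:.2f}); {assurance_note}.")

print(caveat_for_output(0.87, assured=True))
print(caveat_for_output(0.47, assured=False))
```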


1. Introduction

This report presents the findings of a CETaS research project commissioned by the Joint Intelligence Organisation (JIO) and GCHQ on the topic of artificial intelligence (AI) and strategic decision-making. The research sought to examine the question:

‘How should AI-enriched intelligence be communicated to strategic decision-makers in government, to ensure the principles of analytical rigour, transparency, and reliability of intelligence reporting and assessment are upheld?’

Throughout this report, ‘AI’ is used to refer to machine learning (ML), and the phrase ‘AI-enriched intelligence’ refers to intelligence insights that have been derived in part or in whole from the use of ML analysis, or generative AI systems such as large language models (LLMs).

A key function of the UK intelligence analysis profession is to provide timely and accurate insights to support strategic decision-making. All-source intelligence analysts draw together diverse sources of information and contextualise them for strategic decision-makers (SDMs) across government. This involves drawing on intelligence and other information and adding a layer of professional judgement to form all-source intelligence assessments to support decision-making.1 Analysts draw conclusions from incomplete information whilst highlighting gaps in knowledge and effectively communicating uncertainty.

Assessing and evaluating incomplete and unreliable information is a core responsibility of an intelligence analyst. The decisions taken on the basis of intelligence assessments can be highly consequential and load-bearing – for instance, whether to authorise military activity, diplomatic responses, or domestic public safety measures in the event of national emergencies.

Over the past two decades, there has been a huge growth in the volumes of data potentially available for analysis. Intelligence assessment functions have a significant challenge to identify, process, and analyse these exponentially growing sources and quantities of information.

1 HM Government, About us (Intelligence Analysis), .uk/government/organisations/civil-service-intelligence-analysis-profession/about.


AI has the potential to offer both incremental and transformational improvements to the rigour and speed of intelligence assessments, and has been shown to be a crucial tool for improving productivity and effectiveness in intelligence analysis and assessment.2

In 2020, the Royal United Services Institute’s independent review of AI and UK National Security identified ‘numerous opportunities for the UK national security community’ to use AI to improve efficiency and effectiveness of existing processes, concluding that ‘AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators’. The review identified three specific priorities for ‘Augmented Intelligence’ systems within intelligence analysis:

(i) Natural language processing and audiovisual analysis (such as machine translation, speaker identification, object recognition or video summarisation);

(ii) Filtering and triage of material gathered through bulk collection (a minimal sketch of this kind of triage follows the list);

(iii) Behavioural analytics to derive insights at the individual subject level.
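The second of these priorities can be illustrated with a very small relevance-scoring sketch. The corpus, the analyst priority and the threshold below are invented for illustration, and a real triage pipeline would be far more sophisticated; the point is only that documents are ranked for human review rather than filtered out automatically.

```python
# Minimal, invented sketch of triage: score bulk-collected documents against
# an analyst-stated priority and rank them for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Shipment of dual-use components routed through a third country",
    "Local press review of a cultural festival",
    "Intercepted discussion of procurement for a missile programme",
]
analyst_priority = "procurement of missile components"

vectoriser = TfidfVectorizer()
doc_matrix = vectoriser.fit_transform(documents)          # fit on the corpus
query_vec = vectoriser.transform([analyst_priority])      # embed the priority
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# Rank so analysts read the most relevant material first; items below the
# (arbitrary) threshold are deprioritised for later review, not discarded.
THRESHOLD = 0.1
for score, text in sorted(zip(scores, documents), reverse=True):
    flag = "REVIEW" if score >= THRESHOLD else "defer "
    print(f"{flag} {score:.2f} {text}")
```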

According to one US-based study, an all-source analyst could save more than 45 days a year with the support of AI-enabled systems completing tasks such as transcription and research.3 AI has also been identified as key to maintaining strategic intelligence advantage over increasingly sophisticated adversaries.4 A failure to adopt AI tools could therefore lead to a failure to provide strategic warning.

However, the use of AI-enriched intelligence to inform all-source intelligence assessment is not without risk. AI could both exacerbate known risks in intelligence work such as bias and uncertainty, and make it difficult for analysts to evaluate and communicate the limitations of AI-enriched intelligence. A key challenge for the assessment community will be maximising the opportunities and benefits of AI, while mitigating any risks.

This report considers strategic decision-making in the context of national security and defines strategic decision-making as the process of making key decisions that have a significant impact on national security outcomes.

2 Adam C and Richard Carter, “Large Language Models and Intelligence Analysis,” CETaS Expert Analysis (July 2023); Anna Knack, Richard Carter and Alexander Babuta, “Human-Machine Teaming in Intelligence Analysis: Requirements for developing trust in machine learning systems,” CETaS Research Reports (December 2022); Alexander Babuta, Ardi Janjeva and Marion Oswald, “Artificial Intelligence and UK National Security: Policy Considerations,” RUSI Occasional Papers (April 2020); GCHQ, “Pioneering a New National Security,” (2021), .uk/files/GCHQAIPaper.pdf.

3 Mitchel et al., “The future of intelligence analysis,” The Deloitte Center for Government Insights, (2019).

4 CSIS Technology and Intelligence Task Force, Maintaining the Intelligence Edge (Center for Strategic & International Studies: January 2021).


Such decisions typically include consideration of the potential impact on the safety and prosperity of the public or the country’s global standing in the world. A strategic decision-maker is an individual whose contribution to the process has a material bearing on the outcome. Such decision-makers may be government officials such as senior civil servants (e.g. relevant departmental Director Generals or Permanent Secretaries), or ministers and Secretaries of State attending the National Security Council (e.g. the Foreign Secretary, Defence Secretary or Prime Minister).

This report examines whether, in today’s context of data proliferation and fast-developing AI technology, current practices are sufficient to maintain the rigour, transparency, and reliability demanded by intelligence assessment standards. Uncertainty is not new or unique to AI – it is inherent in all intelligence analysis and assessment. However, AI has the potential to exacerbate uncertainty. The research investigated when and how the limitations of AI-enriched intelligence should be communicated by all-source intelligence analysts to national security SDMs, while ensuring a balance is struck between accessibility and technical detail. Additionally, the research explored whether further governance, guidance, or upskilling may be required – both to enable the effective communication of AI-enriched intelligence within the assessment community, and to enable SDMs to make load-bearing decisions based on judgements informed by AI-enriched insights.

1.1 The intelligence cycle

This section presents a simplified overview of the UK intelligence process to outline the stages at which AI-enriched intelligence may become relevant. The simplified cycle presented here has four core functions: tasking (or direction, whereby requirements for information are set), collection (conducted by the intelligence agencies), all-source analysis and assessment (or processing, conducted by assessment bodies including the Joint Intelligence Organisation), and dissemination of finished products to decision-makers. While this is presented as a four-stage process, all activities may be conducted concurrently, and there is continuous communication and review between each stage. This is illustrated below.


Figure 1: Joint Doctrine Publication 2-00, Intelligence, Counter-intelligence and Security Support to Joint Operations, Ministry of Defence, 2023

AI-enriched intelligence could enter the intelligence cycle either at the collection or processing stage. In either instance, it would be the responsibility of the all-source analysis and assessment function to contextualise the AI-enriched intelligence (alongside all other available information held on the same requirement) and ensure that any limitations in the evidence base are communicated appropriately to SDMs. This report is therefore focused on the analysis and assessment and dissemination stages of the intelligence cycle.
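For readers who prefer a structural view, the following minimal sketch restates the simplified cycle and the two stages at which AI-enriched material may enter. The stage names and one-line descriptions paraphrase the text above; the code itself is illustrative only.

```python
# A compact sketch of the simplified four-stage cycle described above, noting
# where AI-enriched material can enter. Names and descriptions are illustrative.
from enum import Enum

class Stage(Enum):
    TASKING = "Requirements for information are set"
    COLLECTION = "Conducted by the intelligence agencies"
    PROCESSING = "All-source analysis and assessment"
    DISSEMINATION = "Finished products reach decision-makers"

# Per the text above, AI-enriched intelligence may enter at either of these
# stages; the all-source function remains responsible for contextualising it.
AI_ENTRY_POINTS = {Stage.COLLECTION, Stage.PROCESSING}

for stage in Stage:
    marker = " <- possible AI entry point" if stage in AI_ENTRY_POINTS else ""
    print(f"{stage.name:13s} {stage.value}{marker}")
```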

1.2 Research methodology

1.2.1 Aims and research questions

The main research aim was to gather new insight on the factors that shape the degree of confidence SDMs feel when making load-bearing decisions on the basis of AI-enriched intelligence assessment. This report addresses the following research questions:


• RQ1: In what circumstances (if any) is it necessary to communicate and distinguish the use of AI to strategic decision-makers, and at what stage in the reporting chain does the use of AI become unnecessary to communicate?

• RQ2: How should AI-enriched information be communicated to strategic decision-makers to ensure they understand the reliability, confidence and limitations of the intelligence product – and how does this vary across intelligence contexts and types of AI system?

• RQ3: How do we effectively educate strategic decision-makers to make high-stakes decisions based on AI-enriched reporting, and achieve the appropriate level of understanding, trust and confidence in AI systems and their outputs?

• RQ4: What additional governance, oversight and upskilling is required to provide assurances that AI-generated insights are being used appropriately to support senior decision-making in this context?

1.2.2 Methodology

The primary data sources for this study comprised semi-structured interviews and focus groups with stakeholders from assessment bodies across government and the UK intelligence community (UKIC).5 A tabletop exercise was also conducted with a group of senior government officials, to test SDMs’ responses to AI-enriched intelligence in a simulated scenario. This study was conducted over a seven-month period from June 2023 – January 2024. Data collection involved the following core research activities:

• Systematic literature review of academic and grey literature to establish the state-of-the-art in current methodologies, challenges, and perspectives regarding trust in AI. A small number of experts from academia and industry also provided their viewpoints on approaches to developing and implementing trustworthy AI systems in high-stakes environments.

• Semi-structured interviews and focus groups with intelligence analysts, assessment staff, and other government officials. A total of 30 research participants engaged in this phase of the research.

5 The UKIC is defined here as the Security Service (MI5), the Secret Intelligence Service (MI6) and the Government Communications Headquarters (GCHQ).


• Tabletop exercise (TTX) with 16 senior officials from numerous UK Government departments and agencies. The purpose of the TTX was to examine the decision-making process of SDMs when presented with assessments that were notionally based on AI-enriched intelligence in a simulated high-stakes scenario. The scenario used for the TTX centred on the theme of election security, and discussions were framed around fictitious outputs from a notional (but technically plausible) ML classification system (a purely illustrative sketch of such an output is shown below).
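To give a sense of the kind of material the TTX discussions were framed around, the record below is an entirely fictitious example of what a notional classification system might emit. Every field name and value is invented for this sketch and none is drawn from the exercise itself.

```python
# Entirely fictitious example of a notional ML classifier output record in an
# election-security scenario. All field names and values are invented here.
notional_output = {
    "item_id": "POST-104-EXAMPLE",
    "classification": "likely coordinated inauthentic behaviour",
    "model_score": 0.81,  # probabilistic output, not a certainty
    "model_version": "classifier-v0-notional",
    "training_data_caveat": "trained on historical campaigns; may not "
                            "generalise to novel tactics",
    "corroborated_by_non_ai_source": False,
}

for field, value in notional_output.items():
    print(f"{field}: {value}")
```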

This report is narrowly focused on the use of AI in intelligence analysis and assessment to inform strategic decision-making for national security. The following themes are out of scope of this project and are recommended as topics for future research:

• The use of AI to inform operational and tactical decision-making (as opposed to strategic decision-making).

• Communicating uncertainty in AI-enriched intelligence shared by allies and partners outside the UKIC.

• The use of AI-enriched intelligence to justify investigative activity or warrant applications.

• The vulnerabilities of AI systems used within national security to adversarial attacks or tampering.

This report tackles a sensitive and under-researched topic and therefore heavily relies upon primary research. Participants during the TTX may have been subject to the Hawthorne effect, whereby subjects may change their behaviour in response to their awareness of being observed.

The remainder of this report is structured as follows. Section 2 outlines challenges relating to introducing AI into current analysis and assessment practices. Section 3 presents opportunities for AI in intelligence analysis and assessment. Section 4 explores enabling factors for communicating AI-enriched intelligence to strategic decision-makers. Section 5 concludes with a set of recommendations for best practice when communicating AI-enriched intelligence to strategic decision-makers.


2. AI-enriched Intelligence and Uncertainty

This section provides an overview of the Professional Head of Intelligence Assessment (PHIA) Common Analytical Standards for best practice across the UK intelligence assessment community, and the two key reviews which informed the development of contemporary UK intelligence assessment standards: Lord Butler’s 2004 Review of Intelligence on Weapons of Mass Destruction in Iraq;6 and Sir John Chilcot’s subsequent Report of the Iraq Inquiry, published in 2016.7 It also considers how AI-enriched intelligence may pose challenges to existing intelligence assessment standards, and outlines strategies for building trust in AI systems used to inform intelligence assessment.

2.1 UK intelligence assessment principles

2.1.1 Interpreting the Butler and Chilcot principles

The Butler Review and Chilcot Inquiry are landmark evaluations of the intelligence processes and decision-making procedures that led the UK into conflict in Iraq in 2003. The reports sought to understand how and why the strategic decision-making system faltered, and proposed recommendations to avoid future missteps.

The Butler Review found that several key judgements in the Joint Intelligence Committee's (JIC) assessments in the lead-up to the Iraq conflict did not appropriately reflect the limitations of the underlying intelligence.8 The Butler Review emphasised several key principles for effective and robust intelligence analysis, including:

• Access to information
