AI Safety and Automation Bias: The Risks of Human-Machine Interaction

Executive Summary

Automation bias is the tendency for an individual to over-rely on an automated system. It can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information.

Automation bias can endanger the successful use of artificial intelligence by eroding the user's ability to meaningfully control an AI system. As AI systems have proliferated, so too have incidents where these systems have failed or erred in various ways, and human users have failed to recognize or correct these behaviors.

This study provides a three-tiered framework for understanding automation bias by examining how users, technical design, and organizations influence it. It presents case studies on each of these factors, then offers lessons learned and corresponding recommendations.

User Bias: Tesla Case Study

Factors influencing bias:

● User's personal knowledge, experience, and familiarity with a technology.

● User's degree of trust and confidence in themselves and the system.

Lessons learned from case study:

● Disparities between user perceptions and system capabilities contribute to bias and may lead to harm.

Recommendation:

● Create and maintain qualification standards for user understanding. User misunderstanding of a system's capabilities or limitations is a significant contributor to incidents of harm. Since user understanding is critical to safe operation, system developers and vendors must invest in clear communications about their systems.


Technical Design Bias: Airbus and Boeing Design Philosophies Case Study

Factors influencing bias:

● The system's overall design, user interface, and how it provides user feedback.

Lessons learned from case study:

● Even with highly trained users such as pilots, system interfaces contribute to automation bias.

● Different design philosophies have different risks. No single approach is necessarily perfect, and all require clear, consistent communication and application.

Recommendation:

● Value and enforce consistent design and design philosophies that account for human factors, especially for systems likely to be upgraded. When necessary, justify and make clear any departures from a design philosophy to legacy users. Where possible, develop common design criteria, standards, and expectations, and consistently communicate them (either through organizational policy or industry standard) to reduce the risk of confusion and automation bias.

Organizational Policies and Procedure Bias: Army Patriot Missile System vs. Navy AEGIS Combat System Case Study

Factors influencing bias:

● Organizational training, processes, and policies.

Lessons learned from case study:

● Organizations can employ the same tools and technologies in very different ways based on protocols, operations, doctrine, training, and certification. Choices in each of these areas of governance can embed automation biases.

● Organizational efforts to mitigate automation bias can be successful, but mishaps are still possible, especially when human users are under stress.


Recommendation:

● Where autonomous systems are used by organizations, design and regularly review organizational policies appropriate for technical capabilities and organizational priorities. Update policies and processes as technologies change to best account for new capabilities and mitigate novel risks. If there is a mismatch between the goals of the organization and the policies governing how capabilities are used, automation bias and poor outcomes are more likely.

Across these three case studies, it is clear that a "human in the loop" cannot prevent all accidents or errors. Properly calibrating technical and human fail-safes for AI, however, offers the best chance of mitigating the risks of using AI systems.


Table of Contents

Executive Summary
Introduction
What Is Automation Bias?
A Framework for Understanding and Mitigating Automation Bias
Case Studies
Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias
Tesla's Road to Autonomy
Behind the Wheel: Tesla's Autopilot and the Human Element
Case Study 2: How Technical Design Factors Can Induce Automation Bias
The Human-Machine Interface: Airbus and Boeing Design Philosophies
Boeing Incidents
Airbus Incidents
Case Study 3: How Organizations Can Institutionalize Automation Bias
Divergent Organizational Approaches to Automation: Army vs. Navy
Patriot: A Bias Towards the System
AEGIS: A Bias Towards the Human
Conclusion
Authors
Acknowledgments
Endnotes

Introduction

In contemporary discussions about artificial intelligence, a critical but often overlooked aspect is automation bias: the tendency of human users to over-rely on AI systems. Left unaddressed, automation bias can harm, and has harmed, both users of AI and autonomous systems and innocent bystanders, in examples that range from false legal accusations to death. Automation bias, therefore, presents a significant challenge in the real-world application of AI, particularly in high-stakes contexts such as national security and military operations.

Successful deployment of AI systems relies on a complex interdependence between AI systems and the humans responsible for operating them. Addressing automation bias is necessary to ensure successful, ethical, and safe AI deployment, especially when the consequences of overreliance or misuse are most severe. As societies incorporate AI into systems, decision-makers thus need to be prepared to mitigate the risks associated with automation bias.

Automation bias can manifest and be intercepted at the user, technical design, and organizational levels. We provide three case studies that explain how factors at each of these levels can make automation bias more or less likely, derive lessons learned, and highlight possible mitigation strategies to alleviate these complex issues.


What Is Automation Bias?

Automation bias is the tendency for a human user to overly rely on an automated system, reflecting a cognitive bias that emerges from the interaction between a human and an AI system.

When affected by automation bias, users tend to decrease their vigilance in monitoring both the automated system and the task it is performing.1 Instead, they place excessive trust in the system's decision-making capabilities and inappropriately delegate more responsibility to the system than it is designed to handle. In severe instances, users might favor the system's recommendations even when presented with contradictory evidence.

Automation bias most often presents in two ways: as an error of omission, when a human fails to take action because the automation did not alert them (as discussed in the first case study on vehicles); or as an error of commission, when a human follows incorrect directions from the automation (as discussed in the case study on the Patriot Missile System).2 In this analysis, we also discuss an instance where a bias against the automation causes harm (i.e., the third case study on the AEGIS weapons system).

Automation bias does not always result in catastrophic events, but it increases the likelihood of such outcomes. Mitigating automation bias can help to improve human oversight, operation, and management of AI systems and thus mitigate some risks associated with AI.

The challenge of automation bias has only grown with the introduction of progressively more sophisticated AI-enabled systems and tools across different application areas, including policing, immigration, social welfare benefits, consumer products, and militaries (see Box 1). Hundreds of incidents have occurred where AI, algorithms, and autonomous systems were deployed without adequate training for users, clear communication about their capabilities and limitations, or policies to guide their use.3


Box 1. Automation Bias and the UK Post Office Scandal

In a notable case of automation bias, a faulty accounting system employed by the UK Post Office led to the wrongful prosecution of 736 UK sub-postmasters for embezzlement. Although it did not involve an AI system, automation bias and the myth of "infallible systems" played a significant role: users willingly accepted system errors despite substantial evidence to the contrary, favoring the unlikely case that hundreds of postmasters were involved in theft and fraud.4 As one author of an ongoing study into the case highlighted, "This is not a scandal about technological failing; it is a scandal about the gross failure of management."5

While automation bias is a challenging problem, it is a tractable issue that society can tackle throughout the AI development and deployment process. The avenues through which automation bias can manifest, namely at the user, technical, and organizational levels, also represent points of intervention to mitigate automation bias.


A Framework for Understanding and Mitigating Automation Bias

Technology must be fit for purposes, and users must understand those purposes to be able to appropriately control systems. Furthermore, knowing when to trust AI, and when and how to closely monitor AI system outputs, is critical to its successful deployment.6 A variety of factors calibrate trust and reliance in the minds of operators, and they generally fall into one of three categories (though each category can be shaped by the context within which the interaction may occur, such as situations of extreme stress or, conversely, fatigue):7

● factors intrinsic to the human user, such as biases, experience, and confidence in using the system;

● factors inherent to the AI system, such as its failure modes (the specific ways in which it might malfunction or underperform) and how it presents and communicates information; and,

● factors shaped by organizational or regulatory rules and norms, mandatory procedures, oversight requirements, and deployment policies.

Organizations implementing AI must avoid myopically focusing only on the technical "machine" side to ensure the successful deployment of AI. Management of the human aspect of these systems deserves equal consideration, and management strategies should be adjusted according to context.

Recognizing these complexities and potential pitfalls, this paper presents case studies for three controllable factors affecting automation bias (user, technical, organizational) that correspond to the aforementioned factors that shape the dynamics of human-machine interaction (see Table 1).


Table 1. Factors Affecting Automation Bias

Factors | Description | Case Study
User | User's personal knowledge, experience, and familiarity with a technology; user's degree of trust and confidence in themselves and the system | Tesla and driving automation
Technical Design | The system's overall design, the structure of its user interface, and how it provides user feedback | Airbus and Boeing design philosophies
Organization | Organizational processes shaping AI use and reliance | U.S. Army's management and operation of the Patriot Missile System vs. U.S. Navy's management and operation of the AEGIS Combat System

An additional layer of task-specific factors, such as time constraints, task difficulty, workload, and stress, can exacerbate or alternatively reduce automation bias.8 These factors should be duly considered in the design of the system, as well as in training and organizational policies, but are beyond the scope of this paper.


Case Studies

Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias

Individuals bring their personal experiences, and their biases, to their interactions with AI systems.9 Research shows that greater familiarity and direct experience with self-driving cars and autonomous vehicle technologies make individuals more likely to support autonomous vehicle development and to consider them safe to use. Conversely, behavioral science research demonstrates that a lack of technological knowledge can lead to fear and rejection, while having only a little familiarity with a particular technology can result in overconfidence in its capabilities.10 The case of increasingly "driverless" cars illustrates how the individual characteristics and experiences of users can shape their interactions and automation bias. Furthermore, as the case study on Tesla below illuminates, even system improvements designed to mitigate the risks of automation bias may have limited effectiveness in the face of a person's bias.

Tesla’sRoadtoAutonomy

Carshavebecomeincreasinglyautomatedovertime.Manufacturersandengineers

haveintroducedcruisecontrolandaflurryofotheradvanceddriverassistancesystems(ADAS)aimedatimprovingdrivingsafetyandreducingthelikelihoodofhumanerror,alongsideotherfeaturessuchaslanedriftsystemsandblindspotsensors.TheU.S.

NationalHighwayTrafficSafetyAdministrationsuggeststhatfullautomationhasthepotentialto“offertransformativesafetyopportunitiesattheirmaturity,”butcaveatthattheseareafuturetechnology.*Astheymakeclearontheirwebsiteinboldedcapital

letters,carsthatperform“allaspectsofthedrivingtaskwhileyou,asthedriver,areavailabletotakeoverdrivingifrequested...ARENOTAVAILABLEONTODAY’S

VEHICLESFORCONSUMERPURCHASEINTHEUNITEDSTATES.”11Evenifthese

*TheSocietyofAutomotiveEngineers(SAE)(incollaborationwiththeInternationalOrganizationfor

Standardization,orISO)hasestablishedsixlevelsofdrivingautomation,from0to5.Level0,orno

automation,representscarswithoutsystemssuchasadaptivecruisecontrol.Ontheotherendofthe

spectrum,Levels4and5suggestcarsthatmaynotevenrequireasteeringwheeltobeinstalled.Levels

1and2includethosesystemswithincreasinglycompetentdriversupportfeatureslikethosementionedabove.Inallofthesesystems,however,thehumanisdriving,“evenifyourfeetareoffthepedalsand

youarenotsteering.”ItisatLevel3,whereautomationbeginstotakeover,thatthelinebetween“self-driving”and“driverless”becomesfuzzier,withthevehiclerelyinglessonthedriverunlessthevehicle

requeststheirengagement.Levels4and5neverrequirehumanintervention.See“SAELevelsofDrivingAutomation?RefinedforClarityandInternationalAudience,”SAEInternationalBlog,May3,2021,

/blog/sae-j3016-update

.

CenterforSecurityandEmergingTechnology|10

carswereavailable,itisimportanttoconsiderthepossibilitythatwhileautonomy

mighteliminatecertainkindsofaccidentsorhumanerrors(likedistracteddriving),ithasthepotentialtocreatenewones(likeover-trustingautopilot).12
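The footnoted taxonomy reduces, for automation bias purposes, to one operative question: at a given level, who is driving? Below is a minimal sketch of that mapping; the level names and the human_is_driving helper are our own shorthand for illustration, not an SAE artifact.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, paraphrased from the footnote."""
    NO_AUTOMATION = 0           # no driver support features
    DRIVER_ASSISTANCE = 1       # e.g., adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined support; the human is still driving
    CONDITIONAL_AUTOMATION = 3  # system drives; human takes over on request
    HIGH_AUTOMATION = 4         # no human intervention required in defined conditions
    FULL_AUTOMATION = 5         # no human intervention required anywhere

def human_is_driving(level: SAELevel, takeover_requested: bool = False) -> bool:
    """At Levels 0-2 the human is always driving, 'even if your feet are off
    the pedals'; at Level 3 responsibility returns only when the vehicle
    requests engagement; Levels 4 and 5 never require human intervention."""
    if level <= SAELevel.PARTIAL_AUTOMATION:
        return True
    if level == SAELevel.CONDITIONAL_AUTOMATION:
        return takeover_requested
    return False
```

The fuzziness the footnote describes lives entirely at Level 3, where the answer flips depending on whether the vehicle has asked for engagement.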

Studies suggest that ADAS adoption by drivers is often opportunistic, and simply a byproduct of upgrading their vehicles. Drivers learn about the vehicle's capabilities in an ad hoc manner, sometimes just receiving an over-the-air software update that comes with written notes. There are no exams or certifications required for these updates. Studies have also shown that where use of an ADAS system is solely experiential, such as when a driver adopts an autonomous vehicle without prior training, human misuse or misunderstanding of ADAS systems can happen after only a few encounters behind the wheel.13 Furthermore, at least one study found that drivers who were exposed to more capable automated systems first tended to establish a baseline of trust when interacting with other (potentially less capable) automated systems.14 This trust and confidence in ADAS vehicles can manifest as distracted driving, to the point of drivers ignoring warnings, taking longer to react to emergencies, or taking risks they would not take in the absence of automation.15

BehindtheWheel:Tesla’sAutopilotandtheHumanElement

IntheweeksleadinguptothefirstfatalU.S.accidentinvolvingTesla’sAutopilotin

2016,thecompany’sthen-president,JonMcNeill,personallytestedthesystemina

ModelX.Inanemailfollowinghistest,McNeillpraisedthesystem’sseeminglyflawlessperformance,admitting,“IgotsocomfortableunderAutopilotthatIendedupblowingbyexitsbecauseIwasimmersedinemailsorcalls(Iknow,Iknow,notarecommendeduse).”16

Despite marketing that suggests the Tesla Full Self-Driving Capability (FSD) might achieve full autonomy without human intervention, these features currently reside firmly within the suite of ADAS capabilities.17 Investigations into that first fatal accident found that the driver had been watching a movie and had ignored multiple alerts to maintain hands on the wheel when the Autopilot failed to distinguish a white trailer from a bright sky, leading to a collision that killed the driver.18 Since then, there have been a range of incidents involving Tesla's Autopilot suite of software, which includes what is called a "Full Self-Driving Capability." These incidents led the National Highway Traffic Safety Administration (NHTSA) to examine nearly one thousand crashes and launch over 40 investigations into accidents in which Autopilot features were reported to have been in use.19 In its initial investigations, NHTSA found "at least 13 crashes involving one or more fatalities and many more involving serious injuries in which foreseeable driver misuse of the system played an apparent role."20 Also among NHTSA's conclusions was that "Autopilot's design was not sufficient to maintain drivers' engagement."21

InresponsetoNHTSA’sinvestigationandincreasingscrutiny,inDecember2023Tesla

issuedasafetyrecalloftwomillionofitsvehiclesequippedwiththeAutosteerfunctionality.22Initsrecallannouncement,Teslaacknowledgedthat:

“IncertaincircumstanceswhenAutosteerisengaged,theprominenceandscopeofthefeature’scontrolsmaynotbesufficienttopreventdrivermisuseofthe

SAELevel2advanceddriver-assistancefeature.”23

Asapartofthisrecall,Teslasoughttoaddressthedriverengagementproblemwithan

over-the-airsoftwareupdatethataddedmorecontrolsandalertsto“encouragethedrivertoadheretotheircontinuousdrivingresponsibilitywheneverAutosteeris

engaged.”Thatencouragementmanifestedas:

“increasingtheprominenceofvisualalertsontheuserinterface,simplifying

engagementanddisengagementofAutosteer,additionalchecksuponengagingAutosteerand…eventualsuspensionfromAutosteeruseifthedriverrepeatedlyfailstodemonstratecontinuousandsustaineddrivingresponsibilitywhilethe

featureisengaged.”24
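The recall language describes an escalating, strike-based engagement policy. A minimal sketch of that kind of logic appears below; the thresholds, tiers, and names are illustrative assumptions, not Tesla's actual implementation.

```python
# Hypothetical sketch of a strike-based driver-engagement policy like the
# one the recall text describes. All thresholds and alert tiers are
# illustrative assumptions, not Tesla's implementation.
from dataclasses import dataclass

@dataclass
class EngagementMonitor:
    strikes: int = 0
    max_strikes: int = 5   # assumed number of strikes before suspension
    suspended: bool = False

    def on_inattention_detected(self) -> str:
        """Escalate from a prominent visual alert toward feature suspension."""
        if self.suspended:
            return "autosteer_unavailable"
        self.strikes += 1
        if self.strikes >= self.max_strikes:
            self.suspended = True
            return "suspend_autosteer"   # "eventual suspension from Autosteer use"
        if self.strikes >= 3:
            return "audible_alert"       # assumed intermediate escalation tier
        return "prominent_visual_alert"  # "increasing the prominence of visual alerts"
```

The design question the next paragraph raises is whether such in-system escalation, delivered without any training or certification, actually recalibrates driver trust.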

Training or certification was not included with the software update; however, a text summary of the software update was provided for users to optionally review, and videos of users indicate that the instructions were easy to ignore. Users also had the option to ignore the safety features in the update altogether. The efficacy of these specific changes (either individually or in total) is not yet clear. In April 2024, NHTSA launched a new investigation into Tesla's Autosteer and the software update it performed in December 2023, but, as explained earlier, experiential encounters alone can improperly calibrate the trust new drivers place in their autonomous vehicles.25

Case Study 1: Key Takeaways from the User-Level Case Study

● Wider gaps between perceived and actual technology capabilities can lead to, or otherwise exacerbate, automation bias.

● Automation bias will be impacted by the user's level of prior knowledge and experience, which should be of particular concern in safety-critical situations.


In the U.S., drivers are often considered the responsible party in car accidents, particularly when it comes to weighing the role of the driver against the role of the system.26 As David Zipper, Senior Fellow at the MIT Mobility Initiative, explained:

"In the United States, the responsibility for road safety largely falls on the individual sitting behind the wheel, or riding a bike, or crossing the street. American transportation departments, law enforcement agencies, and news outlets frequently maintain that most crashes—indeed, 94 percent of them, according to the most widely circulated statistic—are solely due to human error. Blaming the bad decisions of road users implies that nobody else could have prevented them."27

However, even the most experienced and knowledgeable human users are not free from the risk of overreliance in the face of poor interface and system design, and there is a peculiar dynamic at play with autonomous vehicles: when incidents occur, blame often falls on the software.28 While the software may not be blameless, the combination of the system and inappropriate human use must also be considered in identifying the causes of harm. Therefore, ways of intervening or monitoring to prevent inappropriate use by drivers should be sought out alongside ways of improving the system's technical features and design.

Case Study 2: How Technical Design Factors Can Induce Automation Bias

A review of crashes in the aviation industry demonstrates that even in cases where users are highly trained, actively monitored, possess a thorough understanding of the technology's capabilities and limitations, and can be assured not to misuse or abuse the technology, a poorly designed interface can make automation bias more likely.

Fields dedicated to optimizing these links between the user and the system, such as human factors engineering and UI/UX design, are devoted to integrating and applying knowledge about human capabilities, limitations, and psychology into the design and development of technological systems.29 Physical details, from the size and location of a button to the shape of a lever or selection menu to the color of a flashing light or image, seem small or insignificant. Yet these features can play a pivotal role in shaping human interactions with technology and ultimately determining a system's utility.

The importance of considering human interaction in the design and operation of these systems cannot be overstated: neglecting the human element in design can lead to inefficiencies at best, and unsafe and dangerous conditions at worst. Poorly designed interfaces, characterized by features as simple as drop-down menus lacking clear distinctions, were, for example, at the core of the accidental issuance of a widespread emergency alert in Hawaii that warned of an imminent, inbound ballistic missile attack.30

Design choices, intentionally or not, shape and establish specific behavioral pathways for how humans operate and rely on the systems themselves. In other words, these design choices can directly embed and/or exacerbate certain cognitive biases, including automation bias. These design choices are especially consequential when it comes to hazard alerts, such as visual, haptic, and auditory alarms. The commercial aviation industry illustrates how automation bias can be directly influenced by system designs:

The Human-Machine Interface: Airbus and Boeing Design Philosophies

Automation has been central to the evolution of the airplane since its inception; it took less than ten years from the first powered flight to the earliest iterations of autopilot.31 In the years since, aircraft flight management systems, including those that are AI-enabled, have become successively more capable. Today, a great deal of the routine work of flying a plane is handled by automated systems. This has not rendered pilots obsolete, however.32 On the contrary, pilots must now incorporate the aircraft system's interpretation of and reaction to external conditions before determining the most appropriate response, rather than directly engaging with their surroundings. While, overall, flying has become safer due to automation, automation bias represents an ever-present risk factor.33 As early as 2002, a joint FAA-industry study warned that the significant challenge for the industry would be to manufacture aircraft and design procedures that are less error-prone and more robust to errors involving incorrect human responses after a failure.34

While there are international standards, as well as a general consensus among aircraft manufacturers, that flight crews are ultimately responsible for safe aircraft operation, the two leading commercial aircraft providers in the United States, Airbus and Boeing, are known for their opposite design philosophies.35 The differences between them illustrate different approaches to the automation bias challenge.

In Airbus aircraft, the automated system is designed to insulate and protect pilots and flight crews from human error. The pilot's control is bounded by "hard" limits, designed to allow for manipulation of the flight controls but prohibitive of any changes in altitude or speed, for example, that would lead to structural damage or loss of control of the aircraft (in other words, actions to exceed the manufacturer's defined flight envelope).

In contrast, in Boeing aircraft, the pilot is the absolute and final authority and can use natural actions with the systems to essentially "insist" upon a course of action. Boeing's "soft" limits exist to warn and alert the pilot but can be overridden and disregarded, even if it means the aircraft will exceed the manufacturer's flight envelope.
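The two philosophies can be read as two different filters on pilot input. The sketch below contrasts them in that spirit; the single scalar command, the envelope bounds, and the function names are illustrative assumptions, not either manufacturer's actual flight-control laws.

```python
# Illustrative contrast between "hard" and "soft" envelope limits.
# A single normalized scalar stands in for a full control state; the
# bounds and the print-based alert are assumptions for illustration.

ENVELOPE_MIN, ENVELOPE_MAX = -1.0, 1.0  # assumed flight-envelope bounds

def airbus_style_hard_limit(command: float) -> float:
    """Hard limit: the pilot's input is clamped so the aircraft can never
    be commanded outside the defined flight envelope."""
    return max(ENVELOPE_MIN, min(ENVELOPE_MAX, command))

def boeing_style_soft_limit(command: float, pilot_insists: bool) -> float:
    """Soft limit: exceeding the envelope triggers warnings, but a pilot
    who insists retains final authority over the aircraft."""
    if ENVELOPE_MIN <= command <= ENVELOPE_MAX:
        return command
    print("WARNING: commanded input exceeds the flight envelope")  # stand-in for cockpit alerts
    if pilot_insists:
        return command  # override permitted; envelope may be exceeded
    return max(ENVELOPE_MIN, min(ENVELOPE_MAX, command))
```

Under the hard limit the automation silently wins every disagreement; under the soft limit the human does, which is precisely why the two designs embed different automation-bias risks.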

These design differences may help explain why some airlines only operate single-type fleets; pilots typically stick to one type of aircraft, and cross-training pilots is possible but costly and, therefore, uncommon.36

Table 2 shows an FAA summary of the two design philosophies:

Table 2: Airbus and Boeing Design Philosophies

Airbus
● Automation must not reduce overall aircraft reliability; it should enhance aircraft and systems safety, efficiency, and economy.
● Automation must not lead the aircraft out of the safe flight envelope, and it should maintain the aircraft within the normal flight envelope.

Boeing
● The pilot is the final authority for the operation of the airplane.
● Both crew members are ultimately responsible for the safe conduct of the flight.
● Flight crew tasks, in o
