Executive Summary

Automation bias is the tendency for an individual to over-rely on an automated system. It can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information.

Automation bias can endanger the successful use of artificial intelligence by eroding the user’s ability to meaningfully control an AI system. As AI systems have proliferated, so too have incidents where these systems have failed or erred in various ways, and human users have failed to correct or recognize these behaviors.

This study provides a three-tiered framework to understand automation bias by examining the role of users, technical design, and organizations in influencing automation bias. It presents case studies on each of these factors, then offers lessons learned and corresponding recommendations.
User Bias: Tesla Case Study

Factors influencing bias:
● User’s personal knowledge, experience, and familiarity with a technology.
● User’s degree of trust and confidence in themselves and the system.

Lessons learned from case study:
● Disparities between user perceptions and system capabilities contribute to bias and may lead to harm.

Recommendation:
● Create and maintain qualification standards for user understanding. User misunderstanding of a system’s capabilities or limitations is a significant contributor to incidents of harm. Since user understanding is critical to safe operation, system developers and vendors must invest in clear communications about their systems.
Center for Security and Emerging Technology | 1
Technical Design Bias: Airbus and Boeing Design Philosophies Case Study

Factors influencing bias:
● The system’s overall design, user interface, and how it provides user feedback.

Lessons learned from case study:
● Even with highly trained users such as pilots, system interfaces contribute to automation bias.
● Different design philosophies have different risks. No single approach is necessarily perfect, and all require clear, consistent communication and application.

Recommendation:
● Value and enforce consistent design and design philosophies that account for human factors, especially for systems likely to be upgraded. When necessary, justify and make clear any departures from a design philosophy to legacy users. Where possible, develop common design criteria, standards, and expectations, and consistently communicate them (either through organizational policy or industry standard) to reduce the risk of confusion and automation bias.
Organizational Policies and Procedure Bias: Army Patriot Missile System vs. Navy AEGIS Combat System Case Study

Factors influencing bias:
● Organizational training, processes, and policies.

Lessons learned from case study:
● Organizations can employ the same tools and technologies in very different ways based on protocols, operations, doctrine, training, and certification. Choices in each of these areas of governance can embed automation biases.
● Organizational efforts to mitigate automation bias can be successful, but mishaps are still possible, especially when human users are under stress.
Recommendation:
● Where autonomous systems are used by organizations, design and regularly review organizational policies appropriate for technical capabilities and organizational priorities. Update policies and processes as technologies change to best account for new capabilities and mitigate novel risks. If there is a mismatch between the goals of the organization and the policies governing how capabilities are used, automation bias and poor outcomes are more likely.

Across these three case studies, it is clear that a “human-in-the-loop” cannot prevent all accidents or errors. Properly calibrating technical and human fail-safes for AI, however, offers the best chance of mitigating the risks of using AI systems.
Table of Contents

Executive Summary 1
Introduction 5
What Is Automation Bias? 6
A Framework for Understanding and Mitigating Automation Bias 8
Case Studies 10
Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias 10
Tesla’s Road to Autonomy 10
Behind the Wheel: Tesla’s Autopilot and the Human Element 11
Case Study 2: How Technical Design Factors Can Induce Automation Bias 13
The Human-Machine Interface: Airbus and Boeing Design Philosophies 14
Boeing Incidents 16
Airbus Incidents 17
Case Study 3: How Organizations Can Institutionalize Automation Bias 18
Divergent Organizational Approaches to Automation: Army vs. Navy 19
Patriot: A Bias Towards the System 21
AEGIS: A Bias Towards the Human 22
Conclusion 24
Authors 26
Acknowledgments 26
Endnotes 27
Introduction

In contemporary discussions about artificial intelligence, a critical but often overlooked aspect is automation bias—the tendency of human users to overly rely on AI systems. Left unaddressed, automation bias can and has harmed both AI and autonomous system users and innocent bystanders, in examples that range from false legal accusations to death. Automation bias, therefore, presents a significant challenge in the real-world application of AI, particularly in high-stakes contexts such as national security and military operations.

Successful deployment of AI systems relies on a complex interdependence between AI systems and the humans responsible for operating them. Addressing automation bias is necessary to ensure successful, ethical, and safe AI deployment, especially when the consequences of overreliance or misuse are most severe. As societies incorporate AI into systems, decision-makers thus need to be prepared to mitigate the risks associated with automation bias.

Automation bias can manifest and be intercepted at the user, technical design, and organizational levels. We provide three case studies that explain how factors at each of these levels can make automation bias more or less likely, derive lessons learned, and highlight possible mitigation strategies to alleviate these complex issues.
What Is Automation Bias?

Automation bias is the tendency for a human user to overly rely on an automated system, reflecting a cognitive bias that emerges from the interaction between a human and an AI system.

When affected by automation bias, users tend to decrease their vigilance in monitoring both the automated system and the task it is performing.1 Instead, they place excessive trust in the system’s decision-making capabilities and inappropriately delegate more responsibility to the system than it is designed to handle. In severe instances, users might favor the system’s recommendations even when presented with contradictory evidence.

Automation bias most often presents in two ways: as an error of omission, when a human fails to take action because the automation did not alert them (as discussed in the first case study on vehicles); or as an error of commission, when a human follows incorrect directions from the automation (as discussed in the case study on the Patriot Missile System).2 In this analysis, we also discuss an instance where a bias against the automation causes harm (i.e., the third case study on the AEGIS weapons system).

Automation bias does not always result in catastrophic events, but it increases the likelihood of such outcomes. Mitigating automation bias can help to improve human oversight, operation, and management of AI systems and thus mitigate some risks associated with AI.

The challenge of automation bias has only grown with the introduction of progressively more sophisticated AI-enabled systems and tools across different application areas, including policing, immigration, social welfare benefits, consumer products, and militaries (see Box 1). Hundreds of incidents have occurred where AI, algorithms, and autonomous systems were deployed without adequate training for users, clear communication about their capabilities and limitations, or policies to guide their use.3
Box 1. Automation Bias and the UK Post Office Scandal

In a notable case of automation bias, a faulty accounting system employed by the UK Post Office led to the wrongful prosecution of 736 UK sub-postmasters for embezzlement. Although it did not involve an AI system, automation bias and the myth of “infallible systems” played a significant role—users willingly accepted system errors despite substantial evidence to the contrary, favoring the unlikely case that hundreds of postmasters were involved in theft and fraud.4 As one author of an ongoing study into the case highlighted, “This is not a scandal about technological failing; it is a scandal about the gross failure of management.”5

While automation bias is a challenging problem, it is a tractable issue that society can tackle throughout the AI development and deployment process. The avenues through which automation bias can manifest—namely at the user, technical, and organizational levels—also represent points of intervention to mitigate automation bias.
A Framework for Understanding and Mitigating Automation Bias

Technology must be fit for purpose, and users must understand those purposes to be able to appropriately control systems. Furthermore, knowing when to trust AI and when and how to closely monitor AI system outputs is critical to its successful deployment.6 A variety of factors calibrate trust and reliance in the minds of operators, and they generally fall into one of three categories (though each category can be shaped by the context within which the interaction may occur, such as situations of extreme stress or, conversely, fatigue):7

● factors intrinsic to the human user, such as biases, experience, and confidence in using the system;
● factors inherent to the AI system, such as its failure modes (the specific ways in which it might malfunction or underperform) and how it presents and communicates information; and
● factors shaped by organizational or regulatory rules and norms, mandatory procedures, oversight requirements, and deployment policies.

Organizations implementing AI must avoid myopically focusing only on the technical “machine” side to ensure the successful deployment of AI. Management of the human aspect of these systems deserves equal consideration, and management strategies should be adjusted according to context.

Recognizing these complexities and potential pitfalls, this paper presents case studies for three controllable factors affecting automation bias (user, technical, organizational) that correspond to the aforementioned factors that shape the dynamics of human-machine interaction (see Table 1).
Table 1. Factors Affecting Automation Bias

User
Description: User’s personal knowledge, experience, and familiarity with a technology; user’s degree of trust and confidence in themselves and the system.
Case study: Tesla and driving automation.

Technical Design
Description: The system’s overall design, the structure of its user interface, and how it provides user feedback.
Case study: Airbus and Boeing design philosophies.

Organization
Description: Organizational processes shaping AI use and reliance.
Case study: U.S. Army’s management and operation of the Patriot Missile System vs. U.S. Navy’s management and operation of the AEGIS Combat System.
An additional layer of task-specific factors, such as time constraints, task difficulty, workload, and stress, can exacerbate or alternatively reduce automation bias.8 These factors should be duly considered in the design of the system, as well as training and organizational policies, but are beyond the scope of this paper.
Case Studies

Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias

Individuals bring their personal experiences—and biases—to their interactions with AI systems.9 Research shows that greater familiarity and direct experience with self-driving cars and autonomous vehicle technologies make individuals more likely to support autonomous vehicle development and consider them safe to use. Conversely, behavioral science research demonstrates that a lack of technological knowledge can lead to fear and rejection, while having only a little familiarity with a particular technology can result in overconfidence in its capabilities.10 The case of increasingly “driverless” cars illustrates how the individual characteristics and experiences of users can shape their interactions and automation bias. Furthermore, as the case study on Tesla below illuminates, even system improvements designed to mitigate the risks of automation bias may have limited effectiveness in the face of a person’s bias.
Tesla’s Road to Autonomy

Cars have become increasingly automated over time. Manufacturers and engineers have introduced cruise control and a flurry of other advanced driver assistance systems (ADAS) aimed at improving driving safety and reducing the likelihood of human error, alongside other features such as lane drift systems and blind spot sensors. The U.S. National Highway Traffic Safety Administration suggests that full automation has the potential to “offer transformative safety opportunities at their maturity,” but caveats that this is a future technology.* As the agency makes clear on its website in bolded capital letters, cars that perform “all aspects of the driving task while you, as the driver, are available to take over driving if requested ... ARE NOT AVAILABLE ON TODAY’S VEHICLES FOR CONSUMER PURCHASE IN THE UNITED STATES.”11 Even if these cars were available, it is important to consider the possibility that while autonomy might eliminate certain kinds of accidents or human errors (like distracted driving), it has the potential to create new ones (like over-trusting autopilot).12

* The Society of Automotive Engineers (SAE) (in collaboration with the International Organization for Standardization, or ISO) has established six levels of driving automation, from 0 to 5. Level 0, or no automation, represents cars without systems such as adaptive cruise control. On the other end of the spectrum, Levels 4 and 5 suggest cars that may not even require a steering wheel to be installed. Levels 1 and 2 include those systems with increasingly competent driver support features like those mentioned above. In all of these systems, however, the human is driving, “even if your feet are off the pedals and you are not steering.” It is at Level 3, where automation begins to take over, that the line between “self-driving” and “driverless” becomes fuzzier, with the vehicle relying less on the driver unless the vehicle requests their engagement. Levels 4 and 5 never require human intervention. See “SAE Levels of Driving Automation Refined for Clarity and International Audience,” SAE International Blog, May 3, 2021, /blog/sae-j3016-update.
Studies suggest that ADAS adoption by drivers is often opportunistic, and simply a byproduct of upgrading their vehicles. Drivers learn about the vehicle’s capabilities in an ad-hoc manner, sometimes just receiving an over-the-air software update that comes with written notes. There are no exams or certifications required for these updates. Studies have also shown that where use of an ADAS system is solely experiential, such as when a driver adopts an autonomous vehicle without prior training, human misuse or misunderstanding of ADAS systems can happen after only a few encounters behind the wheel.13 Furthermore, at least one study found that drivers who are exposed to more capable automated systems first tended to establish a baseline of trust when interacting with other (potentially less capable) automated systems.14 This trust and confidence in ADAS vehicles can manifest as distracted driving, to the point of drivers ignoring warnings, taking longer to react to emergencies, or taking risks they would not take in the absence of automation.15
Behind the Wheel: Tesla’s Autopilot and the Human Element

In the weeks leading up to the first fatal U.S. accident involving Tesla’s Autopilot in 2016, the company’s then-president, Jon McNeill, personally tested the system in a Model X. In an email following his test, McNeill praised the system’s seemingly flawless performance, admitting, “I got so comfortable under Autopilot that I ended up blowing by exits because I was immersed in emails or calls (I know, I know, not a recommended use).”16
Despite marketing that suggests the Tesla Full Self-Driving Capability (FSD) might achieve full autonomy without human intervention, these features currently reside firmly within the suite of ADAS capabilities.17 Investigations into that first fatal accident found that the driver had been watching a movie and had ignored multiple alerts to maintain hands on the wheel when the Autopilot failed to distinguish a white trailer from a bright sky, leading to a collision that killed the driver.18 Since then, there have been a range of incidents involving Tesla’s Autopilot suite of software, which includes what is called a “Full Self-Driving Capability.” These incidents led the National Highway Traffic Safety Administration (NHTSA) to examine nearly one thousand crashes and launch over 40 investigations into accidents in which Autopilot features were reported to have been in use.19 In its initial investigations, NHTSA found “at least 13 crashes involving one or more fatalities and many more involving serious injuries in which foreseeable driver misuse of the system played an apparent role.”20 Also among NHTSA’s conclusions was that “Autopilot’s design was not sufficient to maintain drivers’ engagement.”21
In response to NHTSA’s investigation and increasing scrutiny, in December 2023 Tesla issued a safety recall of two million of its vehicles equipped with the Autosteer functionality.22 In its recall announcement, Tesla acknowledged that:

“In certain circumstances when Autosteer is engaged, the prominence and scope of the feature’s controls may not be sufficient to prevent driver misuse of the SAE Level 2 advanced driver-assistance feature.”23

As a part of this recall, Tesla sought to address the driver engagement problem with an over-the-air software update that added more controls and alerts to “encourage the driver to adhere to their continuous driving responsibility whenever Autosteer is engaged.” That encouragement manifested as:

“increasing the prominence of visual alerts on the user interface, simplifying engagement and disengagement of Autosteer, additional checks upon engaging Autosteer and … eventual suspension from Autosteer use if the driver repeatedly fails to demonstrate continuous and sustained driving responsibility while the feature is engaged.”24
Training or certification was not included with the software update; however, a text summary of the software update was provided for users to optionally review, and videos of users indicate that the instructions were easy to ignore. Users also had the option to ignore safety features in the update altogether. The efficacy of these specific changes (either individually or in total) is not yet clear. In April 2024, NHTSA launched a new investigation into Tesla’s Autosteer and the software update it performed in December 2023; as explained earlier, however, experiential encounters alone can improperly calibrate the trust new drivers place in their autonomous vehicles.25
Case Study 1: Key Takeaways from User Level Case Study

● Wider gaps between perceived and actual technology capabilities can lead to, or otherwise exacerbate, automation bias.
● Automation bias will be impacted by the user’s level of prior knowledge and experience, which should be of particular concern in safety-critical situations.
In the U.S., drivers are often considered the responsible party in car accidents, particularly when it comes to the role of the driver and the role of the system.26 As David Zipper, Senior Fellow at the MIT Mobility Initiative, explained:

“In the United States, the responsibility for road safety largely falls on the individual sitting behind the wheel, or riding a bike, or crossing the street. American transportation departments, law enforcement agencies, and news outlets frequently maintain that most crashes—indeed, 94 percent of them, according to the most widely circulated statistic—are solely due to human error. Blaming the bad decisions of road users implies that nobody else could have prevented them.”27

However, even the most experienced and knowledgeable human users are not free from the risk of overreliance in the face of poor interface and system design, and there is a peculiar dynamic at play with autonomous vehicles: When incidents occur, blame often falls on the software.28 While the software may not be blameless, the combination of the system and inappropriate human use must also be considered in identifying the causes of harm. Therefore, ways of intervening or monitoring to prevent inappropriate use by drivers should be sought out alongside ways of improving the system’s technical features and design.
Case Study 2: How Technical Design Factors Can Induce Automation Bias

A review of crashes in the aviation industry demonstrates that even in cases where users are highly trained, actively monitored, possess a thorough understanding of the technology’s capabilities and limitations, and can be assured not to misuse or abuse the technology, a poorly designed interface can make automation bias more likely.

Fields dedicated to optimizing these links between the user and the system, such as human factors engineering and UI/UX design, are devoted to integrating and applying knowledge about human capabilities, limitations, and psychology into the design and development of technological systems.29 Physical details, from the size and location of a button to the shape of a lever or selection menu to the color of a flashing light or image, seem small or insignificant. Yet these features can play a pivotal role in shaping human interactions with technology and ultimately determining a system’s utility.

The importance of considering human interaction in the design and operation of these systems cannot be overstated—neglecting the human element in design can lead to inefficiencies at best, and unsafe and dangerous conditions at worst. Poorly designed interfaces, characterized by features as simple as drop-down menus with a lack of clear distinctions, were, for example, at the core of the accidental issuance of a widespread emergency alert in Hawaii that warned of an imminent, inbound ballistic missile attack.30

Design choices, intentionally or not, shape and establish specific behavioral pathways for how humans operate and rely on the systems themselves. In other words, these design choices can directly embed and/or exacerbate certain cognitive biases, including automation bias. These design choices are especially consequential when it comes to hazard alerts, such as visual, haptic, and auditory alarms. The commercial aviation industry illustrates how automation bias can be directly influenced by system designs:
The Human-Machine Interface: Airbus and Boeing Design Philosophies

Automation has been central to the evolution of the airplane since its inception—it took less than ten years from the first powered flight to the earliest iterations of autopilot.31 In the years since, aircraft flight management systems, including those that are AI-enabled, have become successively more capable. Today, a great deal of the routine work of flying a plane is handled by automated systems. This has not rendered pilots obsolete, however.32 On the contrary, pilots must now incorporate the aircraft system’s interpretation and reaction to external conditions before determining the most appropriate response, rather than directly engaging with their surroundings. While, overall, flying has become safer due to automation, automation bias represents an ever-present risk factor.33 As early as 2002, a joint FAA-industry study warned that the significant challenge for the industry would be to manufacture aircraft and design procedures that are less error-prone and more robust to errors involving incorrect human response after failure.34

While there are international standards as well as a general consensus among aircraft manufacturers that flight crews are ultimately responsible for safe aircraft operation, the two leading commercial aircraft providers in the United States, Airbus and Boeing, are known for their opposite design philosophies.35 The differences between them illustrate different approaches to the automation bias challenge.

In Airbus aircraft, the automated system is designed to insulate and protect pilots and flight crews from human error. The pilot’s control is bounded by “hard” limits, designed to allow for manipulation of the flight controls but prohibitive of any changes in altitude or speed, for example, that would lead to structural damage or loss of control of the aircraft (in other words, actions to exceed the manufacturer’s defined flight envelope).
In contrast, in Boeing aircraft, the pilot is the absolute and final authority and can use natural actions with the systems to essentially “insist” upon a course of action. These “soft” limits exist to warn and alert the pilot but can be overridden and disregarded, even if it means the aircraft will exceed the manufacturer’s flight envelope.

These design differences may help explain why some airlines only operate single-type fleets; pilots typically stick to one type of aircraft, and cross-training pilots is possible but costly and, therefore, uncommon.36

Table 2 shows an FAA summary of the different design philosophies:
Table 2: Airbus and Boeing Design Philosophies

Airbus
● Automation must not reduce overall aircraft reliability; it should enhance aircraft and systems safety, efficiency, and economy.
● Automation must not lead the aircraft out of the safe flight envelope, and it should maintain the aircraft within the normal flight envelope.

Boeing
● The pilot is the final authority for the operation of the airplane.
● Both crew members are ultimately responsible for the safe conduct of the flight.
● Flight crew tasks, in o