




Executive Summary

Large language models (LLMs) have garnered interest due to their remarkable ability to "generate" human-like responses to natural language queries—a threshold that at one time was considered "proof" of sentience—and to perform other time-saving tasks. Indeed, LLMs are regarded by many as a, or the, pathway to general artificial intelligence (GAI)—a hypothesized state where computers match (or even exceed) human skills at most or all tasks. The lure of reaching AI's holy grail through LLMs has drawn investment in the billions of dollars by those focused on this goal. In the United States and Europe especially, big private sector companies have led the way, and their focus on LLMs has overshadowed research on other approaches to GAI, despite LLMs' known downsides such as cost, power consumption, error-prone or "hallucinatory" output, and deficits in reasoning abilities. Should these companies' bets on LLMs fail to deliver on expectations of progress toward GAI, western AI developers may not be positioned to fall back on alternative approaches.

By contrast, China follows a state-driven, diverse AI development program. Like the United States, China invests in LLMs but simultaneously pursues alternative paths to GAI, especially those more explicitly brain-inspired. This report draws on public statements by top Chinese scientists, on their associated research, and on government announcements to document China's multifaceted approach. The Chinese government also sponsors research to infuse "values" into AI intended to guide autonomous behavior, provide AI safety, and ensure China's AI reflects the needs of the people and the state. The report concludes by recommending U.S. government support for alternative general AI programs and closer scrutiny of China's AI research.

Center for Security and Emerging Technology | 1

Introduction: Generative AI and General AI

Achieving general artificial intelligence or GAI—defined as AI that replicates or exceeds most human cognitive skills across a variety of tasks, such as image/video understanding, continual learning, planning, reasoning, skill transfer, and creativity1—is a key strategic goal and the object of intense research efforts both in China and the United States.2
There is vigorous debate in the international scientific community regarding which paths will lead to GAI most quickly and which paths offer the best starts. In the United States, LLMs have dominated the discussion, yet questions remain about their ability to achieve GAI. Since choosing the wrong path can put the United States at a strategic disadvantage, this raises the urgency of examining alternative approaches that other countries may be pursuing.

In the United States, some experts believe the step to GAI will occur through the rollout of new versions of LLMs such as OpenAI's o1, Google's Gemini, Anthropic's Claude, and Meta's Llama.3 Others argue, pointing to persistent problems such as LLM hallucinations, that no amount of compute, feedback, or multimodal data sources will allow LLMs to achieve GAI.4 Still other AI scientists see roles for LLMs in GAI platforms, but not as the only, or even main, component.5

Pondering the question of how GAI can be achieved is important because it touches on the options available to developers pursuing AI's traditional holy grail—human-level intelligence. Is the path—or a path—to GAI a continuation of LLM development, possibly augmented by add-on modules? Or are LLMs a dead end, necessitating other, fundamentally different approaches based on a closer emulation of human cognition and brain function?

Given the success of LLMs, the levels of investment,6 endorsements by well-regarded AI scientists, optimism created by early examples, and the difficulty of reimagining new approaches in the face of models to which companies have made great commitments, it is easy to overlook the risks of relying on a "monoculture" based on a single research paradigm.7 If there are limitations to what LLMs can deliver, then without a sufficiently diversified research portfolio it is unclear how well western companies and governments will be positioned to pursue other solutions that can overcome LLMs' problems or offer alternative pathways to GAI.
A diversified research portfolio is precisely China's approach to its state-sponsored goal of achieving "general artificial intelligence" (通用人工智能).8 This report shows that—in addition to China's known prodigious effort to field ChatGPT-like LLMs9—significant resources are directed in China at alternative pathways to GAI by scientists who have well-founded concerns about the potential of "big data, small task" (大數(shù)據(jù),小任務(wù)) approaches to reach human-level capabilities.10

Accordingly, this paper addresses two questions: What criticisms do Chinese scientists make of LLMs as a route to general AI? And how is China managing LLMs' alleged shortcomings?

The paper begins (section 1) with critiques by prominent non-China AI scientists of large language models and their ability to support GAI. The section provides context for the views of Chinese scientists toward LLMs (section 2) described in Chinese-language sources. Section 3 then cites research that supports China's public-facing claims about the non-viability of LLMs as a path to GAI. In section 4, we assess these claims as a basis for recommendations in section 5 on why China's alternative projects must be taken seriously.

Large Language Models and Their Critics
The term "large language model" captures the way they are built: large neural networks, typically with billions to trillions of parameters, that are trained on natural language, i.e., terabytes of text ingested from the internet and other sources. LLMs, and neural networks (NN) generally, are typologically distinct from "good old fashioned" (GOFAI) symbolic AI that depends on rule-based coding. In addition, today's large models can handle, to different degrees, multimodal inputs and outputs, including images, video, and audio.11

LLMs debuted in 2017, when Google engineers proposed a NN architecture—a transformer—optimized to find patterns in sequences of text by paying "attention" to the co-occurrence relationships between "tokens" (words or parts of words) in the training corpus.12 Unlike human knowledge, the knowledge captured in LLMs is not obtained through interactions with the natural environment but depends on probabilities derived from the positional relationships between the tokens in text sequences. Massive exposure to corpora during training allows the LLM to identify these regularities in the aggregate, which can then be used to generate responses to prompts after the training. Hence, the name of the OpenAI product "GPT" (generative pre-trained transformer).
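The "attention" operation at the heart of the transformer can be made concrete with a minimal sketch (an illustrative reconstruction, not code from the report or any cited system; the function name and toy dimensions are invented for the example): each token's output is a weighted average of all tokens' value vectors, with weights given by a softmax over query-key similarity.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return attention outputs and weights for token matrices Q, K, V.

    scores[i, j] measures how strongly token i "attends" to token j;
    a row-wise softmax turns scores into mixing weights that sum to 1.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights                     # blend the value vectors

# Toy self-attention: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(X, X, X)
```

Stacking many such layers (with learned projections producing Q, K, and V) and training the whole network to predict the next token is, in outline, what "generative pre-trained transformer" refers to.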
The ability of LLMs to "blend" different sources of information (which plays to the strengths of neural networks in pattern matching and uncovering similarities in complex feature spaces) has given rise to applications in diverse areas: text summarization, translation, code generation, and theorem proving.

Yet, it has been hotly debated whether this ability to exploit regularities is sufficient to achieve GAI. Initial enthusiastic reports about the "sentience" of LLMs are increasingly supplemented by reports showing serious deficits in LLMs' ability to understand language and to reason in a human-like fashion.13 Some persistent errors in LLMs, e.g., in basic math,14 appear correctable by add-ons,15 i.e., external programs specialized for remediation of LLM weaknesses. Such a design—of a network of systems specialized in different aspects of cognition—would be more like the brain, which has dedicated modules, e.g., for episodic memory, reasoning, etc., rather than the single process used in LLMs.16

Some scientists hope that increases in complexity will help overcome LLMs' defects. Geoffrey Hinton, crediting the intuition of Ilya Sutskever (OpenAI's former chief scientist, who studied under Hinton), believes scaling will solve some of these problems. In this view, LLMs do "reasoning" by virtue of their ability "to predict the next symbol [since] prediction is a pretty good theory of how the brain is working."17 Indeed, increases in complexity (from GPT-2 through GPT-4) have led to increased performance on various benchmark tasks, such as "theory of mind"18 (inferences about mental states), where deficits were noted for GPT-3.5.19
Other such deficits are harder to address and persist despite increases in model complexity. Specifically, "hallucinations," i.e., LLMs making incorrect claims (a problem inherent to neural networks, which are designed to interpolate and, unlike the brain, do not separate the storage of facts from interpolations), and errors in reasoning have been difficult to overcome,20 with recent studies showing that the likelihood of incorrect/hallucinatory answers increases with greater model complexity.21

In addition, the strategy of increasing model complexity in the hope of achieving novel, qualitatively different "emergent" behaviors that appear once a computational threshold has been crossed has likewise been called into question by research showing that previously noted "emergent" abilities in models were artefacts of the metrics used and not indicative of any qualitative jump in model performance.22 Correspondingly, claims of "emergence" in LLMs have declined in the recent literature, even as model complexities increased.23 Indeed, there is the justified concern that the high performance of LLMs on standardized tests could be ascribed more to the well-known pattern matching prowess of neural networks than to the discovery of new problem-solving strategies.24

Still other critiques of LLMs center on fundamental cognitive and philosophical issues such as the ability to generalize, form deep abstractions, create, self-direct, model time and space, show common sense, reflect on their own output,25 manage ambiguous expressions, unlearn based on new information, evaluate pro and con arguments (in decisions), and grasp nuance.26

While these deficits are discussed in the western research literature, others—such as LLMs' inability to easily acquire knowledge beyond the context window without retraining the base model, or the computational and energy demands of LLMs—have drawn less attention, and most current investment by commercial players in the AI space (e.g., OpenAI, Anthropic) is continuing down this same road. The problem is not only that "we are investing in an ideal future that may not materialize"27 but rather that LLMs have, in Google AI researcher Fran?ois Chollet's words, "sucked the oxygen out of the room. Everyone is doing LLMs."28

Chinese Views of LLMs as a Path to General AI (or Not)
A review of statements by scientists at top Chinese AI research institutes reveals a high degree of skepticism about LLMs' ability to lead, by themselves, to GAI. These views resemble those of international experts, both because the two groups confront the same problems and because China's AI experts interact with their global peers as a matter of course.29 Here follow several Chinese scientists' views on LLMs as a path to general AI.

Tang Jie (唐杰) is a professor of computer science at Tsinghua University, the founder of Zhipu AI (智譜),30 a leading figure in the Beijing Academy of Artificial Intelligence (BAAI),31 and the designer of several indigenous LLMs.32 Despite his success with statistical models, Tang argues that human-level AI requires the models to be "embodied in the real world."33 Although he believes the scaling law (規(guī)模法則)34 "still has a long way to go," it alone does not guarantee GAI will be achieved.35 A more fruitful path would take cues from biology. In his words:

"GAI or machine intelligence based on large models does not necessarily have to be the same as the mechanism of human brain cognition, but analyzing the cognitive mechanisms of the brain may better support the realization of GAI."36
Zhang Yaqin (張亞勤, AKA Ya-Qin Zhang) co-founded Microsoft Research Asia, is the former president of Baidu, and is the founding dean of Tsinghua's Institute for AI Industry Research (智能產(chǎn)業(yè)研究院) and a government advisor. Zhang cites three problems with LLMs, namely, their low computational efficiency, their inability to "truly understand the physical world," and so-called "boundary issues" (邊界問題), i.e., problems stemming from tokenization.37 Zhang believes (echoing Ben Goertzel) that "we need to explore how to combine generative probabilistic models with existing 'first principles' [of the physical world] or real models and knowledge."38
University’sInstituteforIntelligence(人工智能研究院Huangnames
threetoGAI:“informationmodels”basedonbigdatabigcompute,
“embodiedmodels”trainedthroughreinforcementbrainemulation—in
astake.39HuangagreesLLMscalinglawswillcontinueto
operatebut“itisnotonlyecessarytocollectstaticdata,butalsotoobtainprocessmultiplesensoryinformationinrealtime.”40Inview,GAIdependson
integratingstatisticalmodelsbrain-inspiredAIandembodiment,CenterforSecurityEmergingTechnology|6LLMsrepresent“staticemergencebasedonbigdat”是基于大數(shù)據(jù)的靜態(tài)涌現(xiàn).
Brain-inspiredintelligence,bycontrast,isbasedoncomplexdynamics.Embodied
intelligencediffersinthatitgeneratesnewabilitiesbyinteractingthe
environment41Bo徐波,deoftheSchoolofArtificialIntelligenceUniversityofChinese
Xu Bo (徐波), dean of the School of Artificial Intelligence at the University of Chinese Academy of Sciences (UCAS, 中國科學(xué)院大學(xué)人工智能學(xué)院) and director of the Chinese Academy of Sciences (CAS) Institute of Automation (CASIA, 中國科學(xué)院自動(dòng)化研究所),42 and Pu Muming (蒲慕明, AKA Muming Poo), director of CAS's Center for Excellence in Brain Science and Intelligence Technology (腦科學(xué)與智能技術(shù)卓越創(chuàng)新中心),43 believe embodiment and environmental interaction will facilitate LLMs' growth toward GAI. Although the artificial neural networks on which LLMs depend were inspired by biology, they—beyond adding "more neurons, layers and connections"—do not begin to emulate the brain's complexity of neuron types, selective connectivity, and modular structure. In particular:

"Computationally costly backpropagation algorithms … could be improved or even replaced by [biologically] plausible algorithms." These candidates include spike-timing-dependent synaptic plasticity, "neuromodulator-dependent metaplasticity," and "short-term vs. long-term memory storage rules that set the stability of synapses."44
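The spike-timing-dependent plasticity rule named in that quote can be sketched in a few lines (a textbook-style illustration, not code from CASIA or any cited lab; the constants `a_plus`, `a_minus`, and `tau` are assumed illustrative values): a synapse strengthens when the presynaptic spike precedes the postsynaptic one and weakens otherwise, with exponential dependence on the timing difference.

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """One STDP update for a synapse with weight w.

    dt = t_post - t_pre (ms). Pre-before-post (dt > 0) strengthens the
    synapse; post-before-pre (dt <= 0) weakens it, with an exponential
    falloff as the spikes grow further apart in time.
    """
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)   # long-term potentiation
    else:
        w -= a_minus * math.exp(dt / tau)   # long-term depression
    return min(1.0, max(0.0, w))            # keep weight in [0, 1]

# Causal pairing strengthens the synapse; anti-causal pairing weakens it.
w_up = stdp_update(0.5, dt=5.0)
w_down = stdp_update(0.5, dt=-5.0)
```

Unlike backpropagation, this update uses only locally available spike-timing information, which is why such rules are regarded as biologically plausible candidates.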
Zhu Songchun (朱松純, AKA Song-Chun Zhu) is dean of PKU's Institute for Artificial Intelligence and director of the Beijing Institute for General Artificial Intelligence (北京通用人工智能研究院), founded on the premise that big data-based LLMs are a dead end in terms of their ability to emulate human-level cognition.45 Zhu pulls no punches:

"Achieving general artificial intelligence is the original intention and ultimate goal of artificial intelligence research, but continuing to expand the parameter scale based on existing large models cannot achieve general artificial intelligence."

Zhu compares China's LLM achievements to "climbing Mt. Everest" when the real goal is to reach the moon. In his view, LLMs are "inherently uninterpretable, at risk of data leakage, do not have a cognitive architecture, lack causal and mathematical reasoning capabilities, and have other limitations, so they cannot lead to 'general artificial intelligence'."46

Zeng Yi (曾毅), director of CASIA's Brain-inspired Cognitive Intelligence Lab (類腦認(rèn)知智能實(shí)驗(yàn)室) and founding director of its International Research Center for AI Ethics and Governance,47 is building a GAI platform based on time-dependent spiking neural networks. In his words:

"Our brain-inspired cognitive intelligence team firmly believes that only by drawing on the structure of the brain and its intelligent mechanisms, as well as the laws of natural evolution, can we achieve intelligence that is truly meaningful and beneficial to humans."48

Critiques of LLMs by other Chinese AI scientists are legion.
- Shen Xiangyang (沈向洋, AKA Harry Shum or Heung-Yeung Shum), former Microsoft executive VP and director of the Academic Committee of PKU's Institute for Artificial Intelligence, laments that AI research still lacks a "clear understanding of the essence of intelligence." Shen supports a view he attributes to New York University professor emeritus and LLM critic Gary Marcus that "no matter how ChatGPT develops, the current technical route will not be able to bring us real intelligence."49

- Zheng Qinghua (鄭慶華), president of Tongji University and a Chinese Academy of Engineering academician, has stated that LLMs have flaws: they consume too much data and computing resources, are susceptible to catastrophic forgetting, lack logical reasoning capabilities, and do not know when they are right or when they are wrong.50

- Li Wu (李武), director of the State Key Laboratory of Cognitive Neuroscience and Learning at Beijing Normal University, has stated his belief that "current neural networks are relatively specialized and do not conform to the way the human brain works. If you desperately hype the model itself and only focus on the expansion of parameters from billions or tens of billions to hundreds of billions, you will not be able to achieve true intelligence."51

Recognition of the need to supplement LLM research with alternative paths to GAI is evidenced in statements by national and municipal governments.
On May 30, 2023, Beijing's city government—in whose jurisdiction much of China's GAI-oriented LLM research is taking place—issued a statement calling for the development of "large models and other general artificial intelligence technology systems" (系統(tǒng)構(gòu)建大模型等通用人工智能技術(shù)體系).52 Section three has five items (7-11), the first four of which pertain to LLMs (algorithms, training data, evaluation, and a software and hardware system). Item 11 reads "exploring new paths (新路徑) for general artificial intelligence" and calls for:

Developing a basic theoretical system (基礎(chǔ)理論體系) for GAI, autonomous collaboration and decision-making, embodied intelligence, and brain-inspired (類腦) intelligence, supported by a unified theoretical framework, rating and testing standards, and programming languages. Embodied systems (robots) would train in open environments, generalized scenarios, and continuous tasks.

The statement mandates the following: "Support the exploration of brain-inspired intelligence, study the connection patterns, coding mechanisms, information processing and other core technologies of brain neurons, and inspire new artificial neural network modeling and training methods."
Alternatives to LLMs were cited at the national level in 2024, when Wu Zhaohui (吳朝暉, vice minister of China's science and technology ministry and former president of Zhejiang University)53 stated that AI is moving toward "synergy between large and small models" (大小模型協(xié)同), adding that China must "explore the development of GAI in multiple ways" (多路徑地探索通用人工智能發(fā)展). The latter include "embodied intelligence, distributed group intelligence, human-machine hybrid intelligence, enhanced intelligence, and autonomous decision making."54

The following month Beijing's Haidian District government, with jurisdiction over 1,300 AI companies, more than 90 of which are developing big models,55 issued a three-year plan to facilitate research in embodied (具身) AI. The plan defines "embodiment" as "the ability of an intelligent system or machine to interact with the environment in real time through perception and interaction" and is meant to serve as a platform for GAI development. It details plans for humanoid robots facilitated by replicating brain functionality.56
Our analysis of public statements by government institutions and ranking Chinese AI scientists indicates that an influential part of China's AI community shares the concerns and misgivings held by western critics of LLMs and seeks alternative pathways to general artificial intelligence.

What Does the Academic Record Show?
Public statements by scientists are one measure of a nation's approach to GAI. Another is its record of scholarship. Prior reviews of the Chinese literature determined that China is pursuing GAI by multiple means, including generative large language models,57 brain-inspired models,58 and by enhancing cognition through brain-computer interfaces.59 Our present task is to examine the literature for evidence that Chinese scientists—beyond whatever positive features brain-based models may have—are driven to seek alternatives by LLMs' shortcomings.

To that end, we ran keyword searches in Chinese and English for "AGI/GAI + LLM" and their common variants in CSET's Merged Corpus60 for papers published in 2021 or later with primary Chinese authorship. Some 35 documents were found. A separate query and web-based searches recovered 43 more papers.61 15 of the 78 papers were rejected by the study's lead analyst as off topic. The remaining 63 papers were reviewed by the study's subject matter expert, who highlighted the following 24 examples of Chinese research addressing LLM problems that stand in the way of large models achieving the generality associated with GAI.62
1. CAO Boxi (曹博西), HAN Xianpei (韓先培), SUN Le (孫樂), "Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View," arXiv preprint arXiv:2203.12258v1.

2. CHENG Bing (程兵), "Artificial Intelligence Generative Content including ChatGPT Opens a New Big Paradigm Space of Economics and Social Science Research" (以ChatGPT為代表的大語言模型打開了經(jīng)濟(jì)學(xué)和其他社會(huì)科學(xué)研究范式的巨大新空間), China Journal of Econometrics (計(jì)量經(jīng)濟(jì)學(xué)報(bào)) 3, no. 3 (July 2023).

3. CHENG Daixuan (程岱宣), HUANG Shaohan (黃少涵), WEI Furu (韋福如), "Adapting Large Language Models to Domains via Reading Comprehension," arXiv preprint arXiv:2309.09530v4.

4. DING Ning (丁寧), ZHENG Hai-Tao (鄭海濤), SUN Maosong (孫茂松), "Parameter-efficient Fine-tuning of Large-scale Pre-trained Language Models," Nature Machine Intelligence, March 2023.

5. DONG Qingxiu (董青秀), SUI Zhifang (穗志方), LI Lei (李磊), "A Survey on In-context Learning," arXiv preprint arXiv:2301.00234v4 (2024).

6. HUANG Jiangyong (黃江勇), YONG Silong (雍子隆),63 HUANG Siyuan (黃思遠(yuǎn)), "An Embodied Generalist Agent in 3D World," Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria, PMLR 235, 2024.

7. JIN Feihu (金飛虎), ZHANG Jiajun (張家俊), "Unified Prompt Learning Makes Pre-trained Language Models Better Few-shot Learners," IEEE International Conference on Acoustics, Speech and Signal Processing, June 2023.

8. LI Hengli (李珩立), ZHU Songchun (朱松純), ZHENG Zilong (鄭子隆), "DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning," 37th Conference on Neural Information Processing Systems (NeurIPS 2023).

9. LI Jiaqi (李佳琪), ZHENG Zilong (鄭子隆), ZHANG Muhan (張牧涵), "LooGLE: Can Long-Context Language Models Understand Long Contexts?" arXiv preprint arXiv:2311.04939v1 (2023).

10. LI Yuanchun (李元春), ZHANG Yaqin (張亞勤), LIU Yunxin (劉云新), "Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security," arXiv preprint arXiv:2401.05459v2.

11. MA Yuxi (馬煜曦), ZHU Songchun (朱松純), "Brain in a Vat: On Missing Pieces towards Artificial General Intelligence in Large Language Models," arXiv preprint arXiv:2307.03762v1.

12. NI Bolin (尼博琳), PENG Houwen (彭厚文), CHEN, ZHANG Songyang (張宋揚(yáng)), LING Haibin (凌海濱), "Expanding Language-Image Pretrained Models for General Video Recognition," arXiv preprint arXiv:2208.02816v1 (2022).

13. PENG Yujia (彭玉佳), ZHU Songchun (朱松純), "The Tong Test: Evaluating Artificial General Intelligence through Dynamic Embodied Physical and Social Interactions," Engineering 34 (2024).

14. SHEN Guobin (申國斌), ZENG Yi (曾毅), "Brain-inspired Neural Circuit Evolution for Spiking Neural Networks," PNAS 120, no. 39 (2023).

15. TANG Xiaojuan (唐曉娟), ZHU Songchun (朱松純), LIANG Yitao (梁一韜), ZHANG Muhan (張牧涵), "Large Language Models Are In-context Semantic Reasoners Rather than Symbolic Reasoners," arXiv preprint arXiv:2305.14825v2 (2023).

16. WANG Junqi (王俊淇), PENG Yujia (彭玉佳), ZHU Yixin (朱毅鑫), FAN Lifeng (范麗鳳), "Evaluating and Modeling Social Intelligence: A Comparative Study of Human and AI Capabilities," arXiv preprint arXiv:2405.11841v1 (2024).

17. XU Fangzhi (徐方植), LIU Jun (劉軍), Erik Cambria, "Are Large Language Models Really Good Logical Reasoners?" arXiv preprint arXiv:2306.09841v2.

18. XU Zhihao (徐智昊), DAI Qionghai (戴瓊海), FANG Lu (方璐), "Large-scale Photonic Chiplet Taichi Empowers 160-TOPS/W Artificial General Intelligence," Science, April 2024.

19. YUAN Luyao (袁路遙), ZHU Songchun (朱松純), "Communicative Learning: A Unified Learning Formalism," Engineering, March 2023.

20. ZHANG Chi (張馳), ZHU Yixin (朱毅鑫), ZHU Songchun (朱松純), "Human-level Few-shot Concept Induction through Minimax Entropy Learning," Science Advances, April 2024.

21. ZHANG Tielin (張鐵林), XU Bo (徐波), "A Brain-inspired Algorithm that Mitigates Catastrophic Forgetting of Artificial and Spiking Neural Networks at Low Computational Cost," Science Advances, August 2023.

22. ZHANG Yue (章岳), CUI Leyang (崔樂陽), SHI Shuming (史樹明), "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models," arXiv preprint arXiv:2309.01219v2.

23. ZHAO Zhuoya (趙卓雅), ZENG Yi (曾毅), "A Brain-inspired Theory of Mind Spiking Neural Network Improves Multi-Agent Cooperation and Competition," Patterns, August 2023.

24. ZOU Xu (鄒旭), YANG Zhilin (楊植麟), TANG Jie (唐杰), "Controllable Generation from Pre-trained Language Models via Inverse Prompting," arXiv preprint arXiv:2103.10685v3 (2021).
The studies collectively address the range of LLM deficits described in this paper's sections 1 and 2, namely, those associated with theory of mind (ToM) failures; inductive, deductive, and abductive reasoning deficits; problems learning new tasks through analogy to previous tasks; lack of grounding/embodiment; unpredictability of errors and hallucinations; lack of social intelligence; insufficient understanding of real-world input, in particular in video form; difficulty in dealing with long contexts; challenges associated with the need to fine-tune outputs; and cost of operation. Proposed solutions to these problems range from add-on modules, emulating brain structure and processes, more rigorous standards and testing, and real-world embedding, to replacing the computing substrate outright with improved hardware types.
Several prominent Chinese scientists cited in this study's section 2 who made public statements supporting GAI alternatives to large models—including Tang Jie, Zhang Yaqin, Xu Bo, Zhu Songchun, and Zeng Yi—are on the bylines of several of these papers, adding authenticity to their declarations.

In addition, several of China's top institutions and companies engaged in GAI research—including the Beijing Academy of Artificial Intelligence (北京智源人工智能研究院), the Beijing Institute for General Artificial Intelligence (北京通用人工智能研究院), the Chinese Academy of Sciences' Institute of Automation (中國科學(xué)院自動(dòng)化研究所), Peking University (北京大學(xué)), Tsinghua University (清華大學(xué)), the University of Chinese Academy of Sciences (中國科學(xué)院大學(xué)), and Alibaba, ByteDance, Huawei, and Tencent AI Lab—are represented in the selected corpus, in most cases on multiple papers.64

The record of metadata adduced here, and conclusions drawn in prior CSET research,65 support the present study's contention that major elements in China's AI community question LLMs' potential to achieve GAI—through increases in scale or added modalities—and are contemplating or pursuing alternative approaches.

Assessment: Do All Paths Lead to the Buddha?
When LLM-based chatbots first became available, early claims that LLMs might be sentient, i.e., experience feelings and sensations, or even show self-awareness,66 were prevalent and much discussed. Since then, cooler heads have prevailed,67 and the focus has shifted from philosophical speculations about the interior lives of LLMs to more concrete measurements of LLM abilities on key aspects of "intelligent" behavior. For the important question of whether LLMs might be capable of general artificial intelligence, it is far from settled whether consciousness or the capacity for emotions matters to GAI; what is clear is that a GAI system must be able to reason and to separate facts from hallucinations. As things stand, LLMs lack explicit mechanisms that would enable them to perform these core requirements of intelligent behavior. Rather, the hope of LLM enthusiasts is that, somehow, reasoning abilities will "emerge" from LLMs trained to become ever better at predicting the next word in a conversation. Yet, there is no theoretical basis for this belief. To the contrary, research has shown that LLMs' vast text memory has masked deficiencies in reasoning.68

Heuristic attempts to improve reasoning (e.g., chain-of-thought),69 likely the basis for improved performance in OpenAI's new "o1" LLM, and more recent approaches such as "rephrase and respond,"70 "tree-of-thought,"71 or other "thoughts" schemes72 have yielded improvements, but do not solve the underlying problem of the absence of a core "reasoning engine."
By the same token, multiple attempts to fix LLMs' hallucination problem73 have run into dead ends because they fail to address the core problem, which is inherent to LLMs' way of generalizing from training data to new contexts. Indeed, current efforts to improve reasoning abilities and fix hallucinations are a bit like playing "whack-a-mole," but with the moles hiding in a billion-dimensional space where it is uncertain where a mallet blow is intended to land. The resulting systems may be sufficient for situations in which humans can assess the quality of LLM output, e.g., writing cover letters, designing travel itineraries, or creating essays on topics that are perennial favorites of school teachers. Yet, these uses are a far cry from GAI.
The public debates in the western world on the appropriate path to GAI tend to be drowned out by companies with financial interests in promoting their latest LLMs as examples of "humanlike intelligence" or "sparks of artificial general intelligence,"74 even in the face of ever more evident shortcomings of LLMs, as detailed in section 1. The dominance of commercial interests that promote LLMs as the sure path to GAI has negatively affected the willingness of academic researchers in the U.S. to pursue alternative approaches to GAI.75

The situation is different in China. While there are companies in China developing LLMs for commercial purposes, leading Chinese AI scientists and government officials, as detailed in this paper, recognize that LLMs have fundamental limitations that make it important to investigate other approaches to GAI or to supplement LLM performance with "brainlike" approaches. The latter strategy of pursuing "brain inspired" AI has led to breakthroughs in the past, for example, by combining deep learning76—modeled on the brain's sensory processing hierarchy—with reinforcement learning77—modeling how the brain learns strategies from rewards—into "deep reinforcement learning,"78 which, for instance, formed the basis of AlphaGo,79 the first artificial neural network to beat human champions in the game of Go. This difference in research directions may give China an advantage in the race to achieve GAI.

It may be helpful to compare the current situation to how China came to dominate the global market for photovoltaic panels (or, more recently, battery technology and electric vehicles), based on Chinese government decisions around the turn of the millennium to become a world leader in solar power. The ensuing policy decisions and investments to build up the domestic industry and increase the efficiency of panels led to innovation and economies of scale that now have China producing at least 75% of the world's panels. A decision by China to strategically invest in non-LLM-based approaches to GAI80 could repeat this success, albeit in a field of even greater consequence than photovoltaics.

Managing a China First-Mover
Geoffrey Hinton, recent Nobel Prize winner and recipient of a Turing Award for his work on multilayer neural networks—the first AI NN architecture that led to superhuman performance on a range of benchmark tasks in computer vision and other fields—acknowledges "a race, clearly, between China and the U.S., and neither is going to slow down."81 This race to