arXiv:2307.04721v1 [cs.AI] 10 Jul 2023


Large Language Models as General Pattern Machines

Suvir Mirchandani1, Fei Xia2, Pete Florence2, Brian Ichter2, Danny Driess2,3, Montserrat Gonzalez Arenas2, Kanishka Rao2, Dorsa Sadigh1,2, Andy Zeng2

1Stanford University, 2Google DeepMind, 3TU Berlin

https://general-pattern-machines.github.io

Abstract: We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences – from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to richer spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that, without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics – from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.

Keywords: large language models, in-context learning, language for robotics

1 Introduction

Large language models (LLMs) are trained to absorb the myriad of patterns that are woven into the structure of language. They not only exhibit various out-of-the-box capabilities such as generating chains of reasoning [1, 2], solving logic problems [3, 4], and completing math puzzles [5], but have also been applied in robotics, where they can serve as high-level planners for instruction following tasks [6, 7, 8, 9, 10, 11, 12], synthesize programs representing robot policies [13, 14], design reward functions [15, 16], and generalize user preferences [17]. These settings rely on few-shot in-context examples in text prompts that specify the domain and input-output format of their tasks [18, 19], and remain highly semantic in their inputs and outputs.

A key observation of our work – and perhaps contrary to the predominant intuition – is that an LLM's ability to represent, manipulate, and extrapolate more abstract, nonlinguistic patterns may allow them to serve as basic versions of general pattern machines. To illustrate this idea, consider the Abstraction and Reasoning Corpus (ARC) [20], a general AI benchmark that contains collections of 2D grids with patterns that evoke abstract concepts (e.g., infilling, counting, and rotating shapes). Each problem provides a small number of input-output examples, followed by test input(s) for which the objective is to predict the corresponding output. Most methods (based on program synthesis) are manually engineered with domain-specific languages [21, 22, 23, 24] or evaluated on simplified extensions or subsets of the benchmark [25, 26, 27]. End-to-end machine learning methods solve only a handful of test problems [28]; however, our experiments indicate that LLMs prompted in-context in the style of ASCII art (see Fig. 1) can correctly predict solutions for up to 85 (out of 800) problems – exceeding some of the best performing methods to date [21, 22, 24] – without additional model training or fine-tuning.

Fig. 1: LLMs out-of-the-box can complete (highlighted) complex ARC patterns [20] expressed in arbitrary tokens.

Fig. 2: Pre-trained LLMs out-of-the-box may serve as basic versions of general pattern machines that can recognize and complete sequences of numeric or arbitrary (symbolic) tokens expressing abstract problems in robotics and sequential decision-making. Experiments show that, to an extent, LLMs can in-context learn (i) sequence transformations (e.g., to reason over spatial rearrangements of symbols, for dynamics modeling and next-state prediction on downsampled images), (ii) completion of simple functions (e.g., to extrapolate kinesthetic demonstrations), or (iii) meta-patterns to improve return-conditioned policies (e.g., to discover oscillatory behaviors to stabilize a CartPole).

Surprisingly, we find this extends beyond ASCII numbers: when they are replaced with a mapping to randomly sampled tokens in the vocabulary, LLMs can still generate valid solutions. These results suggest an intriguing insight: LLMs may exhibit more general capabilities of representing and extrapolating symbolic patterns, invariant to the specific tokens involved. This is in line with – and complementary to – recent observations that using random or abstract label mappings for in-context classification retains some performance compared to ground-truth labels [29, 30]. We hypothesize that the capabilities that drive pattern reasoning on the ARC may allow general pattern manipulation at various levels of abstraction useful for robotics and sequential decision making [31, 32], wherein a diverse array of problems involve patterns that may be difficult to reason about precisely in words. For example, a procedure for spatially rearranging tabletop objects could be represented using arbitrary tokens (see Fig. 2). As another example, optimizing a trajectory with respect to a reward function can be framed as extrapolating a sequence consisting of state and action tokens with increasing returns.

Orthogonal and complementary to efforts that develop multi-task policies by pre-training on large amounts of robot data [33], or robotics foundation models [34] that can be fine-tuned for downstream tasks [35, 36, 37], our goal is instead to (i) assess the zero-shot capabilities that LLMs may already contain for performing some degree of general pattern manipulation, and (ii) investigate how these abilities can be used in robotics. These capabilities are certainly not sufficient to replace specialized algorithms; nonetheless, they are useful to characterize, and doing so may help inform priorities for training generalist models in robotics.

We assess LLMs as pattern machines categorized into three areas: sequence transformation, sequence completion, and sequence improvement (see Fig. 2). First, we show that LLMs are capable of generalizing certain sequence transformations of increasing complexity with a degree of token invariance, and posit that this can carry over to spatial reasoning capabilities in robotic tasks. Next, we assess LLMs' ability to complete patterns from simple functions (e.g., sinusoids) and show this can be applied to robotic tasks like extending a wiping motion from kinesthetic demonstrations, or drawing patterns on a whiteboard. The combination of in-context sequence transformation and extrapolation further enables LLMs to perform basic forms of sequence improvement. We show that providing reward-labeled trajectories as context, coupled with online interaction, can enable an LLM-based agent to learn to navigate through a small grid, discover a stabilizing CartPole controller, and optimize simple trajectories via human-in-the-loop "clicker" reward training. Code, benchmarks, and videos will be made available at https://general-pattern-machines.github.io.


2 Related Work

Pattern reasoning by prompting pre-trained LLMs with few-shot input-output examples is driven by in-context learning [38, 39]. The examples serve as a form of task specification, where the model is expected to complete further instances of the task by simply predicting what comes next. In-context learning extends the concept of "task prefixes" (predefined task-specific token sequences, e.g., [40]), but with actual task examples swapped in instead. Brown et al. [39] observe that in-context learning improves (in particular, in out-of-distribution generalization) with increasing model size. This is in contrast to scaling models for pre-training + fine-tuning, which has been shown to not necessarily improve OOD generalization on language tasks [41]. Nonetheless, despite compelling OOD generalization abilities, in-context learning still comes at a cost: it continues to lag behind task-specific fine-tuning in terms of absolute performance on benchmarks [38].

In-context learning is explicitly trained for by packing examples from the same task and dataset into the same context buffer that is fed as input to an LLM with an unsupervised autoregressive objective [39], sometimes referred to as meta-training. However, it can also emerge implicitly from training on unsupervised datasets where tokens exhibit a Zipfian distribution [42] on Transformer architectures, but not necessarily with recurrent architectures (e.g., vanilla RNNs or LSTMs) [42]. Other works have shown that in-context learning with Transformers can learn simple function classes on par with least squares [43, 44], and can generalize to a seemingly unbounded number of tasks (when trained on tasks from the same task family) better than multi-task MLPs [45], with Bayesian interpretations of this phenomenon [46, 47].

In-context learning occurs at inference time without gradient updates to the weights of the model, and can be differentiated from in-weights learning, which relies on information stored in the weights of the model during LLM training [48] (and can be useful for completion tasks such as "Abraham Lincoln was born in"). Chan et al. [48] observe that generalization of in-context learning can be characterized as more "exemplar-based" (on the basis of similarity to in-context examples [49]), as opposed to generalization of in-weights learning, which tends to be more "rule-based" (on the basis of minimal features that support category boundaries in the training data [50]). The vast capabilities of LLMs [39, 51, 52, 53, 54] have been driven by a combination of both forms of learning. In this work, we are particularly interested in in-context learning, and (depending on the task) using the semantic priors of numeric tokens (e.g., "0" to "100") to drive new capabilities such as in-context sequence completion (Section 5) and improvement (Section 6).

LLMs have been applied across a number of areas in robotics – most recently in decomposing high-level task domain descriptions in natural language into mid-level step-by-step plans [6, 7, 55, 56, 57, 58], robot code [13, 17, 14, 59], and planning domain definition languages [10]. These methods leverage the semantic priors stored in LLMs to compose new plans or parameterize primitive APIs, but whether LLMs can directly influence control (e.g., at the level of trajectories) in a zero-shot manner remains an open problem. As a reaction to this, we investigate how the pattern reasoning capabilities of LLMs may drive various control tasks, to extend or optimize low-level action sequences. While it is possible to explicitly train models for these capabilities [60, 61, 62, 63], this work instead focuses on the inherent abilities of LLMs out-of-the-box, which may have downstream implications for the role of language pre-training in building generalist embodied AI systems. Our findings may also benefit domains where data collection is expensive or difficult to scale. Closely related to our work is Brooks et al. [64], which uses an LLM to represent a rollout policy and world model in-context, and then uses model-based Q-learning to drive policy improvement across a collection of toy environments with linguistic representations. Our use of LLMs for sequence improvement can be seen as a simplification of in-context policy iteration that supports both learning from demonstrations and in-context RL, driven by the generality of LLMs as pattern machines.

3 Language Models as General Pattern Machines

The capacity of LLMs to act as general pattern machines is driven by their ability to perform in-context learning on sequences of numeric or arbitrary tokens. An LLM typically represents sequence modeling autoregressively, with a decoder-only Transformer [65], by factorizing the probability of a sequence $x$, which is a sequence of symbols $(s_1, \ldots, s_n)$, into the product of conditional probabilities $p(x) = \prod_{i=1}^{n} p(s_i \mid s_1, \ldots, s_{i-1})$. To perform in-context learning, the model can be conditioned with a prompt that provides the initial tokens in the sequence, $s_{1:k} = (s_1, \ldots, s_k)$, and uses the model to complete $s_{k+1:n}$.
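As a concrete illustration of this factorization and of prompting as conditioning, the sketch below scores a token sequence as a sum of conditional log-probabilities and completes a prompt with an off-the-shelf causal LM. GPT-2 (via Hugging Face transformers) is only a stand-in for the much larger models evaluated in this work, and the helper name sequence_log_prob is our own.

    # A minimal sketch of p(x) = prod_i p(s_i | s_1, ..., s_{i-1}) and of in-context
    # prompting with a small off-the-shelf causal LM (a stand-in, not the paper's models).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def sequence_log_prob(text: str) -> float:
        """Sum of conditional log-probabilities log p(s_i | s_1, ..., s_{i-1})."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits                          # (1, n, vocab)
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)   # predictions for s_2..s_n
        token_lp = log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
        return token_lp.sum().item()

    # Condition on a prompt s_{1:k} and let the model complete s_{k+1:n} greedily.
    # (A model this small may not complete the pattern correctly.)
    prompt = "5 3 0 , 3 5 ; 7 6 1 , 6 7 ; 9 2 3 , 2 9 ; 4 8 5 ,"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=4, do_sample=False)
    print(sequence_log_prob(prompt))
    print(tokenizer.decode(out[0, inputs.input_ids.shape[1]:]))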

The adaptability of in-context learning lies in the amount of flexibility that can be packed into $s_{1:k}$ – this prompt sequence can itself contain many sequences, each an input-output pair, and perhaps additional task conditioning [38, 29]. Specifically, a model can in-context learn to complete a prompt which is a set of $N$ examples $s_{1:k} = (x^1, x^2, \ldots, x^N)$, where each $x^i$ is a variable-length sequence $(s^i_1, s^i_2, \ldots, s^i_{m_i})$.

Rather than investigating in-context learning with natural language tasks [39], in this work we are interested in more abstract notions of non-linguistic patterns. The following sections evaluate these capabilities across LLMs, and show how they can be used in robotics. By varying the notion of what each $x^i$ should be, we can characterize in-context pattern learning capabilities into the following three categories (a schematic sketch of the corresponding prompt formats is given after the list):

• Sequence Transformation (Section 4): each $x^1, \ldots, x^{N-1}$ is a sequence-to-sequence input-output pair, i.e., $x^i = (x^i_{\text{input}}, x^i_{\text{output}})$, each subsequence of variable length, and $x^N$ is the query input $(x^N_{\text{input}})$.

• Sequence Completion (Section 5): rather than containing input-output pairs, and rather than containing many examples of different sequences, the prompt $x = (s_1, \ldots, s_k)$ corresponds to discrete samples from a single function, e.g., of the form $s_i = a \cdot \sin(bi)$, which can be extrapolated.

• Sequence Improvement (Section 6): each $x^1, \ldots, x^{N-1}$ is a collection of trajectories (potentially labeled with corresponding total rewards), and $x^N$ prompts the model to "improve" the sequences by inferring a better one, e.g., with least-to-most prompting [66] – this process can be iterative and applied to a variety of formulations, e.g., offline trajectory optimization or online in-context reinforcement learning.
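For concreteness, the sketch below shows one way these three settings can each be serialized into a single flat prompt for an autoregressive LLM; the separators and formats here are illustrative assumptions rather than the exact prompts used in our experiments.

    # Schematic prompt construction for the three settings (formats are illustrative).
    import math

    # (i) Sequence transformation: N-1 input-output pairs, then a query input.
    transformation_prompt = "5 3 0 , 3 5 ; 7 6 1 , 6 7 ; 9 2 3 , 2 9 ; 4 8 5 ,"

    # (ii) Sequence completion: discrete samples s_i = a * sin(b * i) from one function,
    # which the model is asked to extrapolate.
    completion_prompt = ", ".join(str(round(50 + 30 * math.sin(0.2 * i))) for i in range(20))

    # (iii) Sequence improvement: reward-labeled trajectories, prompted with a higher
    # reward so the model infers a better trajectory (least-to-most style).
    improvement_prompt = (
        "reward: 32, trajectory: 1 4 2 0 3; "
        "reward: 57, trajectory: 2 4 4 1 3; "
        "reward: 80, trajectory:"
    )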

4 Sequence Transformation

LLMs are capable of in-context learning the distribution of functions that represent sequence transformations by completing abstract patterns observed among examples of input-output sequences $x^i = (x^i_{\text{input}}, x^i_{\text{output}})$ of arbitrary tokens, each drawn from a fixed alphabet $\mathcal{A}$. For example, suppose that we are given a string of input-output examples such as "5 3 0 , 3 5 ; 7 6 1 , 6 7 ; 9 2 3 , 2 9 ; 4 8 5 ,". Here $\mathcal{A}$ consists of tokens that represent space-prefixed digits 0–9, a comma token to separate inputs from outputs, and a semicolon token to delineate examples from each other. A general pattern machine should infer the completion "8 4" by recognizing that the pattern is to swap the first two tokens, then remove the third.
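As a quick sanity check of the pattern itself (ordinary Python, no LLM involved), the expected completion can be verified directly; the helper name transform is our own.

    # Verify the swap-then-remove pattern behind the example prompt above.
    def transform(tokens):
        s1, s2, s3 = tokens
        return [s2, s1]  # swap the first two tokens, then remove the third

    examples = [(["5", "3", "0"], ["3", "5"]),
                (["7", "6", "1"], ["6", "7"]),
                (["9", "2", "3"], ["2", "9"])]
    assert all(transform(inp) == out for inp, out in examples)
    print(transform(["4", "8", "5"]))  # -> ['8', '4'], i.e., the completion "8 4"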

We use the ARC benchmark [20] to evaluate LLMs on such sequence transformations, where the token patterns are substantially more complex, covering a wide range of abstract spatial tasks: infilling, counting, translating and rotating shapes, etc. Each task comes with several input-output examples (3.3 on average), and 1–3 test inputs that can be represented as 2D grids. Sizes between inputs and outputs may differ and are not provided beforehand, thereby adding to the difficulty of applying standard machine learning algorithms, which typically assume fixed sizes. Autoregressive LLMs can be used for the ARC by flattening the grids and predicting each new output grid item in row-major order, which naturally supports variable-length outputs. While LLMs are not originally trained for rasterizing spatial outputs in this way, we hypothesize that a general pattern machine would be capable of implicitly recognizing the long-range dependencies between rows (using positional encoding as a bias [67]) to pick up patterns that extend across the second dimension.
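A minimal sketch of this row-major serialization is given below; the "input:"/"output:" framing and the separators are our own formatting choices, not necessarily the exact prompt used in our experiments.

    # Flatten ARC-style 2D grids into a flat prompt; each output is predicted
    # token-by-token in row-major order, which supports variable-length outputs.
    def grid_to_str(grid):
        # Space-separated digits keep one grid element per token for typical BPE tokenizers.
        return " , ".join(" ".join(str(v) for v in row) for row in grid)

    def make_arc_prompt(train_pairs, test_input):
        lines = [f"input: {grid_to_str(i)} output: {grid_to_str(o)}" for i, o in train_pairs]
        lines.append(f"input: {grid_to_str(test_input)} output:")
        return "\n".join(lines)

    # Toy example (not an actual ARC task): the transformation swaps the two rows.
    train = [([[0, 1], [1, 0]], [[1, 0], [0, 1]]),
             ([[2, 3], [4, 5]], [[4, 5], [2, 3]])]
    print(make_arc_prompt(train, [[6, 7], [8, 9]]))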

Method                          Total (of 800)
(d3) text-davinci-003           85
(d3) w/ random A                †44 ± 6
(d2) text-davinci-002 [51]      64
(p) PaLM [53, 54]               42
(d1) text-davinci-001 [39]      11
(d1) finetuned                  9
Ainooson et al., 2023 [23]      **130
Kaggle 1st Place, 2022          *64
Xu et al., 2022 [22]            *57
Alford et al., 2021 [24]        35
Ferré et al., 2021 [21]         32

* Reported from [22] out of 160 object-oriented problems.
† Numbers averaged across 5 randomly sampled alphabets.
** Based on brute-force search over a rich hand-designed DSL.

Tab. 1: LLMs out-of-the-box can solve a non-trivial number of problems on the ARC, competitive with the best existing methods using hand-crafted domain-specific languages [21, 24, 22].

Result: ARC benchmark. Our experiments in Table 1 show that LLMs (PaLM and the InstructGPT series, abbreviated d1–d3), prompted with input grids represented as tokens drawn from an alphabet of digits, can correctly infer solutions for up to 85 problems. Surprisingly, this outperforms a number of recent systems [21, 24, 22] based on program synthesis that use manually engineered domain-specific languages (DSLs).


While LLMs have yet to surpass brute-force search [23] that composes functions from a handcrafted API of grid operators, they are perhaps the best-performing generalist method that exists today. (We address the important caveat that parts of the ARC may be present in the training data of LLMs later in this section.)

Observation: consistent tokenization matters. The ARC can be found among the suite of tasks in BIG-Bench [68], but has often been overlooked since many language models appear to perform poorly (near or at zero performance). We observe this occurs due to the formatting of the benchmark, where grid elements are represented as neighboring characters in a string, i.e., "8686" (instead of "8 6 8 6"). While subtle, this difference is enough for certain Byte-Pair Encoding (or SentencePiece) tokenizers [69, 70] (that do not tokenize per digit) to group together multiple grid elements ("8" and "6") into a single token ("86"), which maps to a different token embedding altogether in the vocabulary. This causes inconsistencies with how the patterns are expressed at the token level. For example, given a task expressed in the string "8686, 6868; 7979,", if the LLM tokenizer groups together the pairs of digits 86, 68, and 79, respectively, then the sequential inductive patterns of the task (to swap and repeat individual digits) are lost. A simple workaround is to directly pass token indices or embeddings to the language model, or to use token alphabets unlikely to be grouped by the tokenizer. This workaround generalizes to other pattern manipulation tasks beyond the ARC; in general, it is important to tokenize in a manner that is consistent with the pattern being represented.
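To make the failure mode concrete, the sketch below compares how adjacent versus space-separated digits are split into tokens; it assumes the tiktoken package, and the GPT-2/GPT-3-style r50k_base BPE is only a stand-in for the tokenizers discussed above.

    # Compare tokenization of adjacent vs. space-separated digits under a BPE tokenizer.
    import tiktoken

    enc = tiktoken.get_encoding("r50k_base")  # GPT-2/GPT-3-style BPE (illustrative)

    adjacent = "8686, 6868; 7979,"
    spaced = "8 6 8 6 , 6 8 6 8 ; 7 9 7 9 ,"

    # Adjacent digits may be merged into multi-digit tokens (e.g., "86"), so the
    # swap-and-repeat pattern is no longer expressed consistently at the token level.
    print([enc.decode([t]) for t in enc.encode(adjacent)])

    # Space-separated digits remain individual tokens, preserving the pattern.
    print([enc.decode([t]) for t in enc.encode(spaced)])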

Observation: token mapping invariance. The hypothesis that LLMs can serve as general pattern machines stems from the observation that they can surprisingly still solve a non-trivial number of ARC problems using alphabets $\mathcal{A}$ sampled randomly from the LLM's token vocabulary. For instance, given a particular alphabet {8 → "falls", 6 → "+#", 7 → "Ul", 9 → "Chev", 3 → "慶", 2 → "2010"}, a pattern machine at sufficient proficiency can be expected to complete the prompt "falls +# falls +#, +# falls +# falls; Ul Chev Ul Chev, Chev Ul Chev Ul; 慶 2010 慶 2010," by predicting "2010 慶 2010 慶". For example, text-davinci-003 [51, 39] with the mapping $\mathcal{A}$ = {0 → "offence", 1 → "Subject", 2 → "Lub", 3 → "Fail", 4 → "Chev", 5 → "symb", 6 → "swung", 7 → "Ul", 8 → "escalate", 9 → "Chromebook"} solves 52 ARC problems, and across 5 different random alphabets solves an average of 43.6 problems. Interestingly, we find that token mapping invariance holds to an extent on simple pattern transformations for randomly sampled embeddings as well (i.e., such that the embeddings are not associated with any token in the vocabulary; see Appendix).
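The sketch below illustrates this remapping using the text-davinci-003 alphabet quoted above; the remap helper and the space-separated prompt format are our own.

    # Remap space-separated digit tokens to words sampled from the model vocabulary.
    vocab_sample = ["offence", "Subject", "Lub", "Fail", "Chev",
                    "symb", "swung", "Ul", "escalate", "Chromebook"]
    alphabet = dict(zip("0123456789", vocab_sample))  # 0 -> "offence", ..., 9 -> "Chromebook"

    def remap(prompt: str) -> str:
        # Separator tokens (",", ";") are left unchanged.
        return " ".join(alphabet.get(tok, tok) for tok in prompt.split())

    print(remap("8 6 8 6 , 6 8 6 8 ; 7 9 7 9 ,"))
    # -> "escalate swung escalate swung , swung escalate swung escalate ; Ul Chromebook Ul Chromebook ,"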

The implications of token mapping invariance are two-fold. First, note that it is possible that parts of the ARC (and other static examples of pattern transformations) are present in the training data of an LLM (i.e., due to contamination). Therefore, measuring the performance of LLMs under random alphabets may provide a closer estimate of their true underlying in-context sequence transformation capabilities. (As additional evidence that LLMs' sequence transformation ability is not simply due to memorization, we also provide a new procedurally generated pattern transformation benchmark, which we describe below.)

Second, we hypothesize that the pattern manipulation capabilities implied by token invariance could help drive positive transfer from patterns learned across Internet-scale language data to new modalities or symbolic representations for robot reasoning. As an example of this idea, (i) Fig. 3 (top) shows a grasp (Skittles) detector which outputs target coordinates within a downsampled image (with 6 in-context examples), and (ii) Fig. 3 (bottom) shows spatial rearrangement via predicting simple forward dynamics, where the red bowl moves to the green plate (with 9 in-context examples of downsampled images as inputs and outputs).

Fig. 3: Example LLM prediction as an in-context grasp detector (top) and a simple forward dynamics model (bottom). (Panels show the input, the low-resolution input, the input and output as tokens, and the rendered output.)

The generality of what the arbitrary tokens could represent may allow pattern transformation capabilities – especially as LLMs improve – to be leveraged at various levels of abstraction in robotics (including at the level of pixels or robot joint positions). Incorporating more semantic priors into representations may also boost performance and enable further LLM-driven reasoning (e.g., reducing visual data into more semantic spatial representations).


Function (row 1; k = 2 operations over w = 3 tokens): remove_second(swap(s1, s2), s3)
Example inputs: 5 3 0 | 7 6 1
Example outputs: 3 5 | 6 7

Function (row 2; k = 8 operations over w = 10 tokens): echo(copy(swap(swap(prepend(remove_second(swap(echo(s1 s2)), s3 s4), s5 s6 s7 s8 s9 s10)))))
Example inputs: 6 7 7 8 1 5 9 8 9 | 4 3 0 3 5 0 2 3 8
Example outputs: 1 5 9 8 9 7 7 6 6 | 5 0 2 3 8 3 3 4 4

Tab. 2: Illustrations of transformations in our PCFG benchmark. Row 1 shows a transformation composed of k = 2 operations over w = 3 tokens, and row 2 shows a transformation composed of k = 8 operations over w = 10 tokens. For each transformation function, we show two example inputs and the corresponding outputs.
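To make the construction concrete, below is a sketch of procedurally sampling composed transformations in the spirit of this benchmark. The operation names are borrowed from Table 2, but their semantics and the unary composition are simplifying assumptions of ours; the actual benchmark defines its own operations (over named subsequences) and its own PCFG sampling distribution.

    # Sketch: sample a transformation as a composition of k operations over token lists.
    import random

    def swap(s):          return [s[1], s[0]] + s[2:] if len(s) >= 2 else s  # swap first two tokens (guessed semantics)
    def echo(s):          return s + s[-1:]                                  # repeat the last token (guessed)
    def copy(s):          return s + s                                       # duplicate the sequence (guessed)
    def remove_second(s): return s[:1] + s[2:]                               # drop the second token (guessed)

    OPS = [swap, echo, copy, remove_second]

    def sample_transformation(k, rng):
        ops = [rng.choice(OPS) for _ in range(k)]
        def apply(tokens):
            out = list(tokens)
            for op in ops:
                out = op(out)
            return out
        return apply, [op.__name__ for op in ops]

    rng = random.Random(0)
    f, names = sample_transformation(k=2, rng=rng)
    print(names, f(["5", "3", "0"]))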

