




arXiv:2307.04721v1 [cs.AI] 10 Jul 2023
Large Language Models as General Pattern Machines
Suvir Mirchandani1, Fei Xia2, Pete Florence2, Brian Ichter2, Danny Driess2,3, Montserrat Gonzalez Arenas2, Kanishka Rao2, Dorsa Sadigh1,2, Andy Zeng2
1Stanford University, 2Google DeepMind, 3TU Berlin
https://general-pattern-machines.github.io
Abstract: We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences – from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to more rich spatial patterns found in the Abstract Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that, without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics – from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.
Keywords: large language models, in-context learning, language for robotics
1 Introduction
Large language models (LLMs) are trained to absorb the myriad of patterns that are woven into the structure of language. They not only exhibit various out-of-the-box capabilities such as generating chains of reasoning [1, 2], solving logic problems [3, 4], and completing math puzzles [5], but have also been applied in robotics, where they can serve as high-level planners for instruction following tasks [6, 7, 8, 9, 10, 11, 12], synthesize programs representing robot policies [13, 14], design reward functions [15, 16], and generalize user preferences [17]. These settings rely on few-shot in-context examples in text prompts that specify the domain and input-output format for their tasks [18, 19], and remain highly semantic in their inputs and outputs.
A key observation of our work – and perhaps contrary to the predominant intuition – is that an LLM's ability to represent, manipulate, and extrapolate more abstract, nonlinguistic patterns may allow them to serve as basic versions of general pattern machines. To illustrate this idea, consider the Abstract Reasoning Corpus [20], a general AI benchmark that contains collections of 2D grids with patterns that evoke abstract concepts (e.g., infilling, counting, and rotating shapes). Each problem provides a small number of input-output examples, followed by test input(s) for which the objective is to predict the corresponding output. Most methods (based on program synthesis) are manually engineered with domain-specific languages [21, 22, 23, 24] or evaluated on simplified extensions or subsets of the benchmark [25, 26, 27]. End-to-end machine learning methods only solve a handful of test problems [28]; however, our experiments indicate that LLMs in-context prompted in the style of ASCII art (see Fig. 1) can correctly predict solutions for up to 85 (out of 800) problems – exceeding some of the best performing methods to date [21, 22, 24], without additional model training or fine-tuning.
Fig. 1: LLMs out-of-the-box can complete (highlighted) complex ARC patterns [20] expressed in arbitrary tokens.
Fig. 2: Pre-trained LLMs out-of-the-box may serve as basic versions of general pattern machines that can recognize and complete sequences of numeric or arbitrary (symbolic) tokens expressing abstract problems in robotics and sequential decision-making. Experiments show that, to an extent, LLMs can in-context learn (i) sequence transformations (e.g., to reason over spatial rearrangements of symbols, for dynamics modeling and next state prediction on downsampled images), (ii) completion of simple functions (e.g., to extrapolate kinesthetic demonstrations), or (iii) meta-patterns to improve return-conditioned policies (e.g., to discover oscillatory behaviors to stabilize a CartPole).
Surprisingly, we find this extends beyond ASCII numbers, and that when they are replaced with a mapping to randomly sampled tokens in the vocabulary, LLMs can still generate valid solutions. These results suggest an intriguing insight: that LLMs may exhibit more general capabilities of representing and extrapolating symbolic patterns, invariant to the specific tokens involved. This is in-line with – and complementary to – recent observations that using random or abstract label mappings for in-context classification retains some performance compared to ground-truth labels [29, 30]. We hypothesize that the capabilities that drive pattern reasoning on the ARC may allow general pattern manipulation at various levels of abstraction useful for robotics and sequential decision making [31, 32], wherein a diverse array of problems involve patterns that may be difficult to reason about precisely in words. For example, a procedure for spatially rearranging tabletop objects could be represented using arbitrary tokens (see Fig. 2). As another example, optimizing a trajectory with respect to a reward function can be framed as extrapolating a sequence consisting of state and action tokens with increasing returns.
Orthogonal and complementary to efforts that develop multi-task policies by pre-training on large amounts of robot data [33], or robotics foundation models [34] that can be fine-tuned for downstream tasks [35, 36, 37], our goal is instead to (i) assess the zero-shot capabilities that LLMs may already contain to perform some degree of general pattern manipulation, and (ii) investigate how these abilities can be used in robotics. These capabilities are certainly not sufficient to replace specialized algorithms; nonetheless, they are useful to characterize, and doing so may help inform priorities for training generalist models in robotics.
We assess LLMs as pattern machines categorized into three areas: sequence transformation, sequence completion, and sequence improvement (see Fig. 2). First, we show that LLMs are capable of generalizing certain sequence transformations of increasing complexity with a degree of token invariance, and posit that this can carry over to spatial reasoning capabilities in robotic tasks. Next, we assess LLMs' ability to complete patterns from simple functions (e.g., sinusoids) and show this can be applied to robotic tasks like extending a wiping motion from kinesthetic demonstrations, or drawing patterns on a whiteboard. The combination of in-context sequence transformation and extrapolation further enables LLMs to do basic forms of sequence improvement. We show that providing reward-labeled trajectories as context, coupled with online interaction, can enable an LLM-based agent to learn to navigate through a small grid, discover a stabilizing CartPole controller, and optimize simple trajectories via human-in-the-loop "clicker" reward training. Code, benchmarks, and videos will be made available at https://general-pattern-machines.github.io.
2 Related Work
Pattern reasoning by prompting pre-trained LLMs with few-shot input-output examples is driven by in-context learning [38, 39]. The examples serve as a form of task specification, where the model is expected to complete further instances of the task by simply predicting what comes next. In-context learning extends the concept of "task prefixes" (predefined task-specific token sequences, e.g., [40]), but with actual task examples swapped in instead. Brown et al. [39] observe that in-context learning improves (in particular, in out-of-distribution generalization) from scaling model size. This is in contrast to scaling models for pre-training + fine-tuning, which has been shown to not necessarily improve OOD generalization on language tasks [41]. Nonetheless, despite compelling OOD generalization abilities, in-context learning still comes at a cost, as it continues to lag behind in terms of absolute performance on benchmarks compared to task-specific fine-tuning [38].
In-context learning is explicitly trained for by packing examples from the same task and dataset into the same context buffer that is fed as input to an LLM with an unsupervised autoregressive objective [39], sometimes referred to as meta-training. However, it can also emerge implicitly from training on unsupervised datasets where tokens exhibit a Zipfian distribution [42] on Transformer architectures, but not necessarily with recurrent architectures (e.g., vanilla RNNs or LSTMs) [42]. Other works have shown that in-context learning with Transformers can learn simple function classes on par with least squares [43, 44], and can generalize to a seemingly unbounded number of tasks (when trained on tasks from the same task family) better than multitask MLPs [45], with Bayesian interpretations of this phenomenon [46, 47].
In-context learning occurs during inference without gradient updates to the weights of the model, and can be differentiated from in-weights learning, which relies on information stored in the weights of the model during LLM training [48] (and can be useful for completion tasks such as "Abraham Lincoln was born in"). Chan et al. [48] observe that generalization of in-context learning can be characterized as more "exemplar-based" (on the basis of similarity to in-context examples [49]), as opposed to generalization of in-weights learning, which tends to be more "rule-based" (on the basis of minimal features that support category boundaries in the training data [50]). The vast capabilities of LLMs [39, 51, 52, 53, 54] have been driven by a combination of both forms of learning. In this work, we are particularly interested in in-context learning, and (depending on the task) in using the semantic priors of numeric tokens (e.g., "0" to "100") to drive new capabilities such as in-context sequence completion (Section 5) and improvement (Section 6).
LLMs have been applied across a number of areas in robotics – most recently in decomposing high-level task domain descriptions in natural language into mid-level step-by-step plans [6, 7, 55, 56, 57, 58], robot code [13, 17, 14, 59], and planning domain definition languages [10]. These methods leverage the semantic priors stored in LLMs to compose new plans or parameterize primitive APIs, but whether LLMs can directly influence control (e.g., at the level of trajectories) in a zero-shot manner remains an open problem. As a reaction to this, we investigate how the pattern reasoning capabilities of LLMs may drive various control tasks, to extend or optimize low-level action sequences. While it is possible to explicitly train models for these capabilities [60, 61, 62, 63], this work instead focuses on the inherent abilities of LLMs out-of-the-box, which may have downstream implications for the role of language pre-training in building generalist embodied AI systems. Our findings may also benefit domains where data collection is expensive or difficult to scale. Closely related to our work is Brooks et al. [64], which uses an LLM to represent a rollout-policy and world-model in-context, and then uses model-based Q-learning to drive policy improvement across a collection of toy environments with linguistic representations. Our use of LLMs for sequence improvement can be seen as a simplification of in-context policy iteration that supports both learning from demonstrations and in-context RL, driven by the generality of LLMs as pattern machines.
3 Language Models as General Pattern Machines
The capacity of LLMs to act as general pattern machines is driven by their ability to perform in-context learning on sequences of numeric or arbitrary tokens. An LLM typically represents sequence modeling autoregressively, with a decoder-only Transformer [65], by factorizing the probability of a sequence x, which is a sequence of symbols (s_1, ..., s_n), into the product of conditional probabilities p(x) = ∏_{i=1}^{n} p(s_i | s_1, ..., s_{i−1}). To perform in-context learning, the model can be conditioned with a prompt that provides the initial tokens in the sequence s_{1:k} = (s_1, ..., s_k), and the model is used to complete s_{k+1:n}.
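To make the setup concrete, the sketch below shows greedy autoregressive completion of a prompt s_{1:k} under this factorization. It is a minimal illustration rather than the paper's implementation: `next_token_logprobs` is a hypothetical stand-in for whatever model scores p(s_i | s_1, ..., s_{i−1}).

```python
from typing import Callable, Dict, List

# Hypothetical scoring function: maps a token prefix to log-probabilities over
# the vocabulary for the next token, i.e. log p(s_i | s_1, ..., s_{i-1}).
LogProbFn = Callable[[List[str]], Dict[str, float]]

def complete_greedily(prompt: List[str],
                      next_token_logprobs: LogProbFn,
                      max_new_tokens: int,
                      stop_token: str = ";") -> List[str]:
    """Greedily extend s_{1:k} to s_{1:n}, one token at a time."""
    sequence = list(prompt)
    for _ in range(max_new_tokens):
        logprobs = next_token_logprobs(sequence)      # p(s_i | s_1, ..., s_{i-1})
        next_token = max(logprobs, key=logprobs.get)  # greedy argmax
        sequence.append(next_token)
        if next_token == stop_token:                  # end of one example
            break
    return sequence
```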
The adaptability of in-context learning lies in the amount of flexibility that can be packed into s_{1:k} – this prompt sequence can itself contain many sequences, each an input-output pair, and perhaps additional task conditioning [38, 29]. Specifically, a model can in-context learn to complete a prompt which is a set of N examples s_{1:k} = (x^1, x^2, ..., x^N), where each x^i is a variable-length sequence (s^i_1, s^i_2, ..., s^i_{m_i}).
Rather than investigating in-context learning with natural language tasks [39], in this work we are interested in investigating more abstract notions of non-linguistic patterns. The following sections evaluate these capabilities across LLMs, and show how they can be used in robotics. By varying the notion of what each x^i should be, we can characterize in-context pattern learning capabilities into the following 3 categories (a minimal prompt-format sketch follows the list below).
• Sequence Transformation (Section 4): each x^1, ..., x^{N−1} is a sequence-to-sequence input-output pair, i.e., x^i = (x^i_input, x^i_output), each subsequence of variable length, and x^N is the query input (x^N_input).
• Sequence Completion (Section 5): rather than containing input-output pairs, and rather than containing many examples of different sequences, the prompt x = (s_1, ..., s_k) corresponds to discrete samples from a single function, e.g., of the form s_i = a · sin(bi), which can be extrapolated.
• Sequence Improvement (Section 6): each x^1, ..., x^{N−1} is a collection of trajectories (potentially labeled with corresponding total rewards), and x^N prompts the model to "improve" the sequences by inferring a better one, e.g., with least-to-most prompting [66] – this process can be iterative and applied to a variety of formulations, e.g., offline trajectory optimization or online in-context reinforcement learning.
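As a rough illustration of how these three prompt types differ, the strings below sketch one plausible serialization of each. The exact formats, delimiters, and reward scales here are our own illustrative assumptions, not the paper's prompts.

```python
import math

# Sequence transformation: N-1 input-output pairs, then a query input
# (here the illustrative pattern is to reverse the tokens).
transformation_prompt = "1 2 3, 3 2 1; 4 5 6, 6 5 4; 7 8 9,"

# Sequence completion: discrete samples of a single function s_i = a * sin(b * i),
# serialized as integers and left for the model to extrapolate.
a, b = 50, 0.3  # illustrative amplitude and frequency
samples = [round(a * math.sin(b * i)) for i in range(20)]
completion_prompt = ", ".join(str(s) for s in samples) + ","

# Sequence improvement: return-labeled trajectories (reward, then state/action
# tokens), sorted by increasing return, prompting the model for a better one.
improvement_prompt = (
    "R: 12, s: 3 4, a: 1, s: 3 5, a: 0; "
    "R: 27, s: 3 4, a: 0, s: 2 4, a: 0; "
    "R: 41,"
)
```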
4 Sequence Transformation
LLMs are capable of in-context learning the distribution of functions that represent sequence transformations, by completing abstract patterns observed among examples of input-output sequences x^i = (x^i_input, x^i_output) of arbitrary tokens, each drawn from a fixed alphabet A. For example, suppose that we are given a string of input-output examples such as "5 3 0, 3 5; 7 6 1, 6 7; 9 2 3, 2 9; 4 8 5,". Here A consists of tokens that represent space-prefixed digits 0–9, a comma token to separate inputs from outputs, and a semicolon token to delineate examples from each other. A general pattern machine should infer the completion "8 4" by recognizing that the pattern is to swap the first 2 tokens, then remove the 3rd.
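The sketch below spells out this toy transformation in code: it builds the space-separated prompt from a few example pairs and checks the expected completion by applying the pattern (swap the first two tokens, then drop the third) directly. The helper names are ours, for illustration only.

```python
from typing import List

def swap_then_remove_third(tokens: List[str]) -> List[str]:
    """The target pattern: swap the first two tokens, then drop the third."""
    swapped = [tokens[1], tokens[0]] + tokens[2:]
    return swapped[:2] + swapped[3:]

def build_prompt(example_inputs: List[str], query: str) -> str:
    """Serialize input-output pairs with ',' and ';' delimiters, then the query."""
    pairs = []
    for inp in example_inputs:
        out = swap_then_remove_third(inp.split())
        pairs.append(f"{inp}, {' '.join(out)}")
    return "; ".join(pairs) + f"; {query},"

prompt = build_prompt(["5 3 0", "7 6 1", "9 2 3"], "4 8 5")
print(prompt)                                   # 5 3 0, 3 5; 7 6 1, 6 7; 9 2 3, 2 9; 4 8 5,
print(swap_then_remove_third("4 8 5".split()))  # expected completion: ['8', '4']
```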
We use the ARC benchmark [20] to evaluate LLMs on such sequence transformations, whereby token patterns are substantially more complex, covering a wide range of abstract spatial tasks: infilling, counting, translating and rotating shapes, etc. Each task comes with several input-output examples (3.3 on average), and 1–3 test inputs which can be represented as 2D grids. Sizes between inputs and outputs may differ and are not provided beforehand, thereby adding to the difficulty of applying standard machine learning algorithms, which typically assume fixed sizes. Autoregressive LLMs can be used for the ARC by flattening the grids and predicting each new output grid item in row-major order, which naturally supports variable-length outputs. While LLMs are not originally trained for rasterizing spatial outputs in this way, we hypothesize that a general pattern machine would be capable of implicitly recognizing the long-range dependencies between rows (using positional encoding as a bias [67]) to pick up patterns that extend across the 2nd dimension.
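A minimal sketch of this serialization, under our own formatting assumptions (row-major order, space-separated cells, one row per line), might look like the following; the actual prompts may differ in delimiters and framing.

```python
from typing import List, Tuple

Grid = List[List[int]]

def flatten_grid(grid: Grid) -> str:
    """Serialize a 2D grid in row-major order, one row per line."""
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

def arc_prompt(train_pairs: List[Tuple[Grid, Grid]], test_input: Grid) -> str:
    """Pack ARC input-output examples and a test input into a single prompt."""
    parts = []
    for inp, out in train_pairs:
        parts.append(f"input:\n{flatten_grid(inp)}\noutput:\n{flatten_grid(out)}")
    parts.append(f"input:\n{flatten_grid(test_input)}\noutput:\n")
    return "\n\n".join(parts)

# Tiny made-up example: the "pattern" is to mirror each row left-to-right.
train = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
print(arc_prompt(train, [[5, 6], [7, 8]]))
```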
Method                            Total (of 800)
(d3) text-davinci-003             85
(d3) w/ random A                  †44 ± 6
(d2) text-davinci-002 [51]        64
(p) PaLM [53, 54]                 42
(d1) text-davinci-001 [39]        11
(d1) finetuned                    9
Ainooson et al., 2023 [23]        **130
Kaggle 1st Place, 2022            *64
Xu et al., 2022 [22]              *57
Alford et al., 2021 [24]          35
Ferré et al., 2021 [21]           32
* Reported from [22] out of 160 object-oriented problems.
† Numbers averaged across 5 randomly sampled alphabets.
** Based on brute force search over a rich hand-designed DSL.
Tab. 1: LLMs out-of-the-box can solve a non-trivial number of problems on the ARC, competitive with the best existing methods using hand-crafted domain-specific languages [21, 24, 22].
Result: ARC benchmark. Our experiments in Table 1 show that LLMs (PaLM and the InstructGPT series, in acronyms d1–d3), prompted with input grids represented as tokens drawn from an alphabet of digits, can correctly infer solutions for up to 85 problems. Surprisingly, this outperforms a number of recent systems [21, 24, 22] based on program synthesis that use manually engineered domain-specific languages (DSLs).
While LLMs have yet to surpass brute-force search [23] to compose functions from a handcrafted API of grid operators, LLMs are perhaps the best performing generalist method that exists today. (We address the important caveat that parts of the ARC may be present in the training data of LLMs later in this section.)
Observation: consistent tokenization matters. The ARC can be found among the suite of tasks in BIG-Bench [68], but has often been overlooked since many language models appear to perform poorly (near or at zero performance). We observe this occurs due to the formatting of the benchmark, where grid elements are represented as neighboring characters in a string, i.e., "8686" (instead of "8 6 8 6"). While subtle, this difference is enough for certain Byte-Pair Encoding (or SentencePiece) tokenizers [69, 70] (that do not tokenize per digit) to group together multiple grid elements ("8" and "6") into a single token ("86"), which maps to a different token embedding altogether in the vocabulary. This causes inconsistencies with how the patterns are expressed at the token level. For example, given a task expressed in the string "8686, 6868; 7979,", if the LLM tokenizer groups together pairs of digits 86, 68, 79, respectively, then the sequential inductive patterns of the task (to swap and repeat individual digits) are lost. A simple work-around is to directly pass token indices or embeddings to the language model, or to use token alphabets unlikely to be grouped by the tokenizer. This work-around generalizes to other pattern manipulation tasks beyond the ARC; in general, it is important to tokenize in a manner that is consistent with the pattern being represented.
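The effect is easy to inspect with an off-the-shelf BPE tokenizer. The snippet below uses the tiktoken library as one example (an assumption on our part, not the tokenizer used in the paper) and simply prints how the same grid is split with and without space separation, so the grouping behavior can be seen directly.

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one example BPE vocabulary

for text in ["8686, 6868; 7979,", "8 6 8 6, 6 8 6 8; 7 9 7 9,"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([t]) for t in token_ids]
    # Without separators, multiple grid elements may be merged into one token,
    # breaking the per-element structure of the pattern.
    print(f"{text!r} -> {pieces}")
```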
Observation: token mapping invariance. The hypothesis that LLMs can serve as general pattern machines stems from the observation that they can surprisingly still solve a non-trivial number of ARC problems using alphabets A sampled randomly from the LLM's token vocabulary. For instance, given a particular alphabet {8 → falls, 6 → +#, 7 → Ul, 9 → Chev, 3 → 慶, 2 → 2010}, a pattern machine at sufficient proficiency can be expected to complete the prompt "falls +# falls +#, +# falls +# falls; Ul Chev Ul Chev, Chev Ul Chev Ul; 慶 2010 慶 2010," by predicting "2010 慶 2010 慶". For example, text-davinci-003 [51, 39] with the following mapping A = {0 → offence, 1 → Subject, 2 → Lub, 3 → Fail, 4 → Chev, 5 → symb, 6 → swung, 7 → Ul, 8 → escalate, 9 → Chromebook} solves 52 ARC problems, and across 5 different random alphabets solves an average of 43.6 problems. Interestingly, we find that token mapping invariance holds to an extent on simple pattern transformations for randomly sampled embeddings as well (i.e., such that the embeddings are not associated with any token in the vocabulary; see Appendix).
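A sketch of how such a random alphabet could be applied is shown below. Sampling stand-in tokens and remapping a digit-based prompt is our own illustrative construction; the specific mappings reported above come from the paper's experiments, not from this code.

```python
import random
from typing import Dict, List

def sample_alphabet(digits: List[str], vocabulary: List[str],
                    seed: int = 0) -> Dict[str, str]:
    """Map each digit token to a distinct randomly sampled vocabulary token."""
    rng = random.Random(seed)
    sampled = rng.sample(vocabulary, k=len(digits))
    return dict(zip(digits, sampled))

def remap_prompt(prompt: str, alphabet: Dict[str, str]) -> str:
    """Replace each space-separated digit token; delimiters pass through."""
    return " ".join(alphabet.get(tok, tok) for tok in prompt.split())

# Toy list standing in for tokens drawn from an LLM's vocabulary.
vocab = ["falls", "+#", "Ul", "Chev", "慶", "2010",
         "offence", "symb", "swung", "Lub"]
alphabet = sample_alphabet(list("0123456789"), vocab)
print(remap_prompt("8 6 8 6 , 6 8 6 8 ; 7 9 7 9 ,", alphabet))
```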
The implications of token mapping invariance are two-fold. First, note that it is possible that parts of the ARC (and other static examples of pattern transformations) are present in the training data of an LLM (i.e., due to contamination). Therefore, measuring the performance of LLMs under random alphabets may provide a closer estimate of their true underlying in-context sequence transformation capabilities. (As additional evidence that LLMs' sequence transformation ability is not simply due to memorization, we also provide a new procedurally-generated pattern transformation benchmark, which we describe below.)
Second, we hypothesize that the pattern manipulation capabilities which token invariance implies could help to drive positive transfer from patterns learned across Internet-scale language data to new modalities or symbolic representations for robot reasoning. As an example of this idea, (i) Fig. 3 (top) shows a grasp (Skittles) detector which outputs target coordinates within a downsampled image (with 6 in-context examples), and (ii) Fig. 3 (bottom) shows spatial rearrangement via predicting simple forward dynamics where the red bowl moves to the green plate (with 9 in-context examples of downsampled images as inputs and outputs). The generality of what the arbitrary tokens could represent may allow pattern transformation capabilities – especially as LLMs improve – to be leveraged at various levels of abstraction in robotics (including at the level of pixels or robot joint positions). Incorporating more semantic priors into representations may also boost performance and enable further LLM-driven reasoning (e.g., reducing visual data into more semantic spatial representations).
Fig. 3: Example LLM prediction as an in-context grasp detector (top) and a simple forward dynamics model (bottom). Panels show Input, Input (Low-Res), Input & Output (Tokens), and Output (Rendered).
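A toy construction of how a downsampled image might be serialized into such a prompt is sketched below; the image resolution, token format, and coordinate convention are our own assumptions and may differ from those used for Fig. 3.

```python
import numpy as np

def image_to_tokens(image: np.ndarray, size: int = 8) -> str:
    """Downsample a grayscale image to size x size and serialize as token rows."""
    h, w = image.shape
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    low_res = image[np.ix_(ys, xs)]
    return "\n".join(" ".join(str(int(v)) for v in row) for row in low_res)

def grasp_prompt(examples, query_image: np.ndarray) -> str:
    """Pack (image tokens -> target coordinate) examples, then a query image."""
    parts = [f"input:\n{image_to_tokens(img)}\noutput:\n{y} {x}"
             for img, (y, x) in examples]
    parts.append(f"input:\n{image_to_tokens(query_image)}\noutput:\n")
    return "\n\n".join(parts)

# Toy usage with random images and made-up target coordinates.
rng = np.random.default_rng(0)
demos = [(rng.integers(0, 256, (64, 64)), (3, 5)) for _ in range(2)]
print(grasp_prompt(demos, rng.integers(0, 256, (64, 64))))
```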
Function 1: remove_second(swap(s1, s2), s3)
  Example inputs:  530, 761
  Example outputs: 35, 67
Function 2: echo(copy(swap(swap(prepend(remove_second(swap(echo(s1 s2)), s3 s4), s5 s6 s7 s8 s9 s10)))))
  Example inputs:  677815989, 430350238
  Example outputs: 159897766, 502383344
Tab. 2: Illustrations of transformations in our PCFG benchmark. Row 1 shows a transformation composed of k=2 operations over w=3 tokens, and row 2 shows a transformation composed of k=8 operations over w=10 tokens, respectively. For each transformation function, we show two example inputs and the corresponding outputs.
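To make the benchmark's flavor concrete, the sketch below implements plausible versions of a few of these token-level operations and applies the row-1 transformation; the exact operation semantics in the released benchmark may differ, so treat these definitions as assumptions.

```python
from typing import List

Tokens = List[str]

# Plausible token-level operations of the kind composed by the PCFG benchmark.
def swap(a: Tokens, b: Tokens) -> Tokens:
    """Swap the order of two token groups."""
    return b + a

def remove_second(a: Tokens, b: Tokens) -> Tokens:
    """Keep the first group, drop the second."""
    return a

def echo(a: Tokens) -> Tokens:
    """Repeat the last token."""
    return a + a[-1:]

def prepend(a: Tokens, b: Tokens) -> Tokens:
    """Place group a before group b."""
    return a + b

def copy(a: Tokens) -> Tokens:
    """Duplicate the whole group."""
    return a + a

# Row 1 of Tab. 2: remove_second(swap(s1, s2), s3) over w = 3 tokens.
def row1_transform(tokens: Tokens) -> Tokens:
    s1, s2, s3 = [tokens[0]], [tokens[1]], [tokens[2]]
    return remove_second(swap(s1, s2), s3)

print(row1_transform(list("530")))  # ['3', '5']
print(row1_transform(list("761")))  # ['6', '7']
```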
It may