arXiv:2108.02497v3 [cs.LG] 9 Feb 2023
How to avoid machine learning pitfalls: a guide for academic researchers

Michael A. Lones*

Abstract

This document is a concise outline of some of the common mistakes that occur when using machine learning, and what can be done to avoid them. Whilst it should be accessible to anyone with a basic understanding of machine learning techniques, it was originally written for research students, and focuses on issues that are of particular concern within academic research, such as the need to do rigorous comparisons and reach valid conclusions. It covers five stages of the machine learning process: what to do before model building, how to reliably build models, how to robustly evaluate models, how to compare models fairly, and how to report results.
1 Introduction

It's easy to make mistakes when applying machine learning (ML), and these mistakes can result in ML models that fail to work as expected when applied to data not seen during training and testing [Liao et al., 2021]. This is a problem for practitioners, since it leads to the failure of ML projects. However, it is also a problem for society, since it erodes trust in the findings and products of ML [Gibney, 2022]. This guide aims to help newcomers avoid some of these mistakes. It's written by an academic, and focuses on lessons learnt whilst doing ML research in academia. Whilst primarily aimed at students and scientific researchers, it should be accessible to anyone getting started in ML, and only assumes a basic knowledge of ML techniques. However, unlike similar guides aimed at a more general audience, it includes topics that are of a particular concern to academia, such as the need to rigorously evaluate and compare models in order to get work published. To make it more readable, the guidance is written informally, in a Dos and Don'ts style. It's not intended to be exhaustive, and references (with publicly-accessible URLs where available) are provided for further reading. Since it doesn't cover issues specific to particular academic subjects, it's recommended you also consult subject-specific guidance where available (e.g. Stevens et al. [2020] for medicine). Feedback is welcome, and it is expected that this document will evolve over time. For this reason, if you cite it, please include the arXiv version number (currently v3).
*School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, Scotland, UK. Email: m.lones@hw.ac.uk, Web: http://www.macs.hw.ac.uk/~ml355.
Contents

1 Introduction
2 Before you start to build models
  2.1 Do take the time to understand your data
  2.2 Don't look at all your data
  2.3 Do make sure you have enough data
  2.4 Do talk to domain experts
  2.5 Do survey the literature
  2.6 Do think about how your model will be deployed
3 How to reliably build models
  3.1 Don't allow test data to leak into the training process
  3.2 Do try out a range of different models
  3.3 Don't use inappropriate models
  3.4 Do keep up with recent developments in deep learning
  3.5 Don't assume deep learning will be the best approach
  3.6 Do optimise your model's hyperparameters
  3.7 Do be careful where you optimise hyperparameters and select features
  3.8 Do avoid learning spurious correlations
4 How to robustly evaluate models
  4.1 Do use an appropriate test set
  4.2 Don't do data augmentation before splitting your data
  4.3 Do use a validation set
  4.4 Do evaluate a model multiple times
  4.5 Do save some data to evaluate your final model instance
  4.6 Don't use accuracy with imbalanced datasets
  4.7 Don't ignore temporal dependencies in time series data
5 How to compare models fairly
  5.1 Don't assume a bigger number means a better model
  5.2 Do use statistical tests when comparing models
  5.3 Do correct for multiple comparisons
  5.4 Don't always believe results from community benchmarks
  5.5 Do consider combinations of models
6 How to report your results
  6.1 Do be transparent
  6.2 Do report performance in multiple ways
  6.3 Don't generalise beyond the data
  6.4 Do be careful when reporting statistical significance
  6.5 Do look at your models
7 Final thoughts
8 Acknowledgements
9 Changes
2 Before you start to build models

It's normal to want to rush into training and evaluating models, but it's important to take the time to think about the goals of a project, to fully understand the data that will be used to support these goals, to consider any limitations of the data that need to be addressed, and to understand what's already been done in your field. If you don't do these things, then you may end up with results that are hard to publish, or models that are not appropriate for their intended purpose.
2.1 Do take the time to understand your data

Eventually you will want to publish your work. This is a lot easier to do if your data is from a reliable source, has been collected using a reliable methodology, and is of good quality. For instance, if you are using data collected from an internet resource, make sure you know where it came from. Is it described in a paper? If so, take a look at the paper; make sure it was published somewhere reputable, and check whether the authors mention any limitations of the data. Do not assume that, because a dataset has been used by a number of papers, it is of good quality; sometimes data is used just because it is easy to get hold of, and some widely used datasets are known to have significant limitations (see Paullada et al. [2020] for a discussion of this). If you train your model using bad data, then you will most likely generate a bad model: a process known as garbage in, garbage out. So, always begin by making sure your data makes sense. Do some exploratory data analysis (see Cox [2017] for suggestions). Look for missing or inconsistent records. It is much easier to do this now, before you train a model, rather than later, when you're trying to explain to reviewers why you used bad data.
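
To make this concrete, here is a minimal exploratory sketch in Python with pandas; the file name data.csv and the age column are purely hypothetical stand-ins for your own data:

import pandas as pd

# Load the dataset ("data.csv" is a hypothetical file name).
df = pd.read_csv("data.csv")

# Basic sanity checks: size, column types and summary statistics.
print(df.shape)
print(df.dtypes)
print(df.describe(include="all"))

# Look for missing records.
print(df.isna().sum())

# Look for inconsistent records, e.g. duplicates or impossible values.
print(df.duplicated().sum())
print((df["age"] < 0).sum())  # "age" is an assumed example column

None of this replaces careful, domain-informed inspection, but it is a quick way to surface obvious problems before any model is trained.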
2.2 Don't look at all your data

As you look at data, it is quite likely that you will spot patterns and make insights that guide your modelling. This is another good reason to look at data. However, it is important that you do not make untestable assumptions that will later feed into your model. The "untestable" bit is important here; it's fine to make assumptions, but these should only feed into the training of the model, not the testing. So, to ensure this is the case, you should avoid looking closely at any test data in the initial exploratory analysis stage. Otherwise you might, consciously or unconsciously, make assumptions that limit the generality of your model in an untestable way. This is a theme I will return to several times, since the leakage of information from the test set into the training process is a common reason why ML models fail to generalise. See Don't allow test data to leak into the training process for more on this.
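
One simple way to enforce this, sketched below with scikit-learn, is to split off the test data before any exploratory analysis and then only ever inspect the training portion (the variable names and split size are illustrative):

from sklearn.model_selection import train_test_split

# Partition off the test set immediately; fixing random_state makes the
# split reproducible.
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)

# All exploratory analysis is done on train_df; test_df remains untouched
# until the final evaluation.
print(train_df.describe())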
2.3 Do make sure you have enough data

If you don't have enough data, then it may not be possible to train a model that generalises. Working out whether this is the case can be challenging, and may not be evident until you start building models: it all depends on the signal to noise ratio in the dataset. If the signal is strong, then you can get away with less data; if it's weak, then you need more data. If you can't get more data (and this is a common issue in many research fields), then you can make better use of existing data by using cross-validation (see Do evaluate a model multiple times). You can also use data augmentation techniques (e.g. see Wong et al. [2016] and Shorten and Khoshgoftaar [2019]; for time series data, see Iwana and Uchida [2021]), and these can be quite effective for boosting small datasets, though Don't do data augmentation before splitting your data. Data augmentation is also useful in situations where you have limited data in certain parts of your dataset, e.g. in classification problems where you have fewer samples in some classes than others, a situation known as class imbalance. See Haixiang et al. [2017] for a review of methods for dealing with this; also see Don't use accuracy with imbalanced datasets. Another option for dealing with small datasets is to use transfer learning (see Do keep up with recent developments in deep learning). However, if you have limited data, then it's likely that you will also have to limit the complexity of the ML models you use, since models with many parameters, like deep neural networks, can easily overfit small datasets (see Don't assume deep learning will be the best approach). Either way, it's important to identify this issue early on, and come up with a suitable strategy to mitigate it.
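
For example, stratified k-fold cross-validation in scikit-learn makes fuller use of a small dataset than a single train/test split, and also preserves class proportions in each fold, which helps when classes are imbalanced (the random forest here is just an illustrative choice of model):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# X is the feature matrix and y the class labels (assumed already prepared).
model = RandomForestClassifier(random_state=42)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# Train and evaluate on five different splits, rather than relying on one.
scores = cross_val_score(model, X, y, cv=cv, scoring="balanced_accuracy")
print(scores.mean(), scores.std())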
2.4 Do talk to domain experts

Domain experts can be very valuable. They can help you to understand which problems are useful to solve, they can help you choose the most appropriate feature set and ML model to use, and they can help you publish to the most appropriate audience. Failing to consider the opinion of domain experts can lead to projects which don't solve useful problems, or which solve useful problems in inappropriate ways. An example of the latter is using an opaque ML model to solve a problem where there is a strong need to understand how the model reaches an outcome, e.g. in making medical or financial decisions (see Rudin [2019]). At the beginning of a project, domain experts can help you to understand the data, and point you towards features that are likely to be predictive. At the end of a project, they can help you to publish in domain-specific journals, and hence reach an audience that is most likely to benefit from your research.
2.5 Do survey the literature

You're probably not the first person to throw ML at a particular problem domain, so it's important to understand what has and hasn't been done previously. Other people having worked on the same problem isn't a bad thing; academic progress is typically an iterative process, with each study providing information that can guide the next. It may be discouraging to find that someone has already explored your great idea, but they most likely left plenty of avenues of investigation still open, and their previous work can be used as justification for your work. To ignore previous studies is to potentially miss out on valuable information. For example, someone may have tried your proposed approach before and found fundamental reasons why it won't work (and therefore saved you a few years of frustration), or they may have partially solved the problem in a way that you can build on. So, it's important to do a literature review before you start work; leaving it too late may mean that you are left scrambling to explain why you are covering the same ground or not building on existing knowledge when you come to write a paper.
2.6 Do think about how your model will be deployed

Why do you want to build an ML model? This is an important question, and the answer should influence the process you use to develop your model. Many academic studies are just that, studies, and not really intended to produce models that will be used in the real world. This is fair enough, since the process of building and analysing models can itself give very useful insights into a problem. However, for many academic studies, the eventual goal is to produce an ML model that can be deployed in a real world situation. If this is the case, then it's worth thinking early on about how it is going to be deployed. For instance, if it's going to be deployed in a resource-limited environment, such as a sensor or a robot, this may place limitations on the complexity of the model. If there are time constraints, e.g. a classification of a signal is required within milliseconds, then this also needs to be taken into account when selecting a model. Another consideration is how the model is going to be tied into the broader software system within which it is deployed; this procedure is often far from simple (see Sculley et al. [2015]). However, emerging approaches such as MLOps aim to address some of the difficulties; see Tamburri [2020] for a review, and Shankar et al. [2022] for a discussion of common challenges when operationalising ML models.
3 How to reliably build models

Building models is one of the more enjoyable parts of ML. With modern ML frameworks, it's easy to throw all manner of approaches at your data and see what sticks. However, this can lead to a disorganised mess of experiments that's hard to justify and hard to write up. So, it's important to approach model building in an organised manner, making sure you use data correctly, and putting adequate consideration into the choice of models.
3.1 Don't allow test data to leak into the training process

It's essential to have data that you can use to measure how well your model generalises. A common problem is allowing information about this data to leak into the configuration, training or selection of models (see Figure 1). When this happens, the data no longer provides a reliable measure of generality, and this is a common reason why published ML models often fail to generalise to real world data. There are a number of ways that information can leak from a test set. Some of these seem quite innocuous. For instance, during data preparation, using information about the means and ranges of variables within the whole dataset to carry out variable scaling; in order to prevent information leakage, this kind of thing should only be done with the training data. Other common examples of information leakage are carrying out feature selection before partitioning the data (see Do be careful where you optimise hyperparameters and select features), using the same test data to evaluate the generality of multiple models (see Do use a validation set and Don't always believe results from community benchmarks), and applying data augmentation before splitting off the test data (see Don't do data augmentation before splitting your data). The best thing you can do to prevent these issues is to partition off a subset of your data right at the start of your project, and only use this independent test set once to measure the generality of a single model at the end of the project (see Do save some data to evaluate your final model instance). Be particularly careful if you're working with time series data, since random splits of the data can easily cause leakage and overfitting; see Don't ignore temporal dependencies in time series data for more on this. For a broader discussion of data leakage, see Kapoor and Narayanan [2022].

Figure 1: See Don't allow test data to leak into the training process. [left] How things should be, with the training set used to train the model, and the test set used to measure its generality. [right] When there's a data leak, the test set can implicitly become part of the training process, meaning that it no longer provides a reliable measure of generality.
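
To make the variable scaling example concrete, here is a minimal scikit-learn sketch (the model and parameter choices are illustrative); putting the scaler inside a pipeline ensures it is fitted on the training folds only, never on the held-out data:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Leaky: calling scaler.fit(X) on the whole dataset bakes test-set
# statistics into the training process. Safe: scaling happens inside the
# pipeline, so it is re-fitted on the training portion of each fold.
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipeline, X, y, cv=5)
print(scores)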
3.2 Do try out a range of different models

Generally speaking, there's no such thing as a single best ML model. In fact, there's a proof of this, in the form of the No Free Lunch theorem, which shows that no ML approach is any better than any other when considered over every possible problem [Wolpert, 2002]. So, your job is to find the ML model that works well for your particular problem. There is some guidance on this. For example, you can consider the inductive biases of ML models; that is, the kind of relationships they are capable of modelling. For instance, linear models, such as linear regression and logistic regression, are a good choice if you know there are no important non-linear relationships between the features in your data, but a bad choice otherwise. Good quality research on closely related problems may also be able to point you towards models that work particularly well. However, a lot of the time you're still left with quite a few choices, and the only way to work out which model is best is to try them all. Fortunately, modern ML libraries in Python (e.g. scikit-learn [Varoquaux et al., 2015]), R (e.g. caret [Kuhn, 2015]), Julia (e.g. MLJ [Blaom et al., 2020]) etc. allow you to try out multiple models with only small changes to your code, so there's no reason not to try them all out and find out for yourself which one works best. However, Don't use inappropriate models, and Do use a validation set, rather than the test set, to evaluate them. When comparing models, Do optimise your model's hyperparameters and Do evaluate a model multiple times to make sure you're giving them all a fair chance, and Do correct for multiple comparisons when you publish your results.
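
As a minimal sketch of what this looks like in scikit-learn (the particular models listed are illustrative, not a recommendation), each candidate can be evaluated under identical conditions with only a few lines of code:

from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Candidate models, all with default or near-default settings.
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=42),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
}

# Evaluate each model on the same cross-validation splits of the training
# data; the test set is not touched here.
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")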
3.3 Don't use inappropriate models

By lowering the barrier to implementation, modern ML libraries also make it easy to apply inappropriate models to your data. This, in turn, could look bad when you try to publish your results. A simple example of this is applying models that expect categorical features to a dataset containing numerical features, or vice versa. Some ML libraries allow you to do this, but it may result in a poor model due to loss of information. If you really want to use such a model, then you should transform the features first; there are various ways of doing this, ranging from simple one-hot encodings to complex learned embeddings. Other examples of inappropriate model choice include using a classification model where a regression model would make more sense (or vice versa), attempting to apply a model that assumes no dependencies between variables to time series data, or using a model that is unnecessarily complex (see Don't assume deep learning will be the best approach). Also, if you're planning to use your model in practice, Do think about how your model will be deployed, and don't use models that aren't appropriate for your use case.
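
For instance, a minimal one-hot encoding sketch with pandas (the colour column is a hypothetical example) turns one categorical feature into a set of binary features that numerical models can handle:

import pandas as pd

# A hypothetical categorical feature.
df = pd.DataFrame({"colour": ["red", "green", "blue", "green"]})

# One-hot encode it: one binary column per category.
encoded = pd.get_dummies(df, columns=["colour"])
print(encoded)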
3.4 Do keep up with recent developments in deep learning

Machine learning is a fast-moving field, and it's easy to fall behind the curve and use approaches that other people consider to be outmoded. Nowhere is this more the case than in deep learning. So, whilst deep learning may not always be the best solution (see Don't assume deep learning will be the best approach), if you are going to use deep learning, then it's advisable to try and keep up with recent developments. To give some insight into this, Figure 2 summarises some of the important developments over the years.

Figure 2: See Do keep up with recent developments in deep learning. A rough history of neural networks and deep learning, showing what I consider to be the milestones in their development. For a far more thorough and accurate account of the field's historical development, take a look at Schmidhuber [2015].

Multilayer perceptrons (MLP) and recurrent neural networks (particularly LSTM) have been popular for some time, but are increasingly being replaced by newer models such as convolutional neural networks (CNN) and transformers. CNNs (see Li et al. [2021] for a review) are now the go-to model for many tasks, and can be applied to both image data and non-image data. Beyond the use of convolutional layers, some of the main milestones which led to the success of CNNs include the use of rectified linear units (ReLU), the adoption of modern optimisers (notably Adam and its variants) and the widespread use of regularisation, especially dropout layers and batch normalisation, so give serious consideration to including these in your models. Another important group of contemporary models are transformers (see Lin et al. [2022] for a review). These are gradually replacing recurrent neural networks as the go-to model for processing sequential data, and are increasingly being applied to other data types too, such as images [Khan et al., 2022]. A prominent downside of both transformers and deep CNNs is that they have many parameters and therefore require a lot of data to train them. However, an option for small datasets is to use transfer learning, where a model is pre-trained on a large generic dataset and then fine-tuned on the dataset of interest [Han et al., 2021]. For an extensive, yet accessible, guide to deep learning, see Zhang et al. [2021].
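
To make these ingredients concrete, here is a minimal Keras sketch of a small CNN combining convolutional layers, ReLU activations, batch normalisation, dropout and the Adam optimiser; the layer sizes and the assumed task (32x32 RGB images, 10 classes) are illustrative, not recommendations:

from tensorflow import keras
from tensorflow.keras import layers

# A small CNN for (assumed) 32x32 RGB images and 10 classes.
model = keras.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, padding="same"),
    layers.BatchNormalization(),   # stabilises and regularises training
    layers.Activation("relu"),     # rectified linear units
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same"),
    layers.BatchNormalization(),
    layers.Activation("relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),           # dropout regularisation
    layers.Dense(10, activation="softmax"),
])

# Adam is one of the modern optimisers mentioned above.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])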
3.5 Don't assume deep learning will be the best approach

An increasingly common pitfall is to assume that deep neural networks will provide the best solution to any problem, and consequently fail to try out other, possibly more appropriate, models. Whilst deep learning is great for certain tasks, it is not good at everything; there are plenty of examples of it being out-performed by "old fashioned" machine learning models such as random forests and SVMs. See, for instance, Grinsztajn et al. [2022], who show that tree-based models often outperform deep learners on tabular data. Certain kinds of deep neural network architecture may also be ill-suited to certain kinds of data: see, for example, Zeng et al. [2022], who argue that transformers are not well-suited to time series forecasting. There are also theoretical reasons why any one kind of model won't always be the best choice (see Do try out a range of different models). In particular, a deep neural network is unlikely to be a good choice if you have limited data, if domain knowledge suggests that the underlying pattern is quite simple, or if the model needs to be interpretable. This last point is particularly worth considering …