
Cloud Computing Technology and Applications — Data Processing in the Cloud: Hadoop/MapReduce

Outline
1. MapReduce overview
2. MapReduce programming
3. Introduction to Pig and Hive

MapReduce Overview

What are Hadoop and MapReduce?
- MapReduce was first proposed by Google for processing petabyte-scale data in the cloud; Google processes 20 PB of data per day with it.
- MapReduce is a parallel programming framework specialized for large-scale data: a data-parallel programming model for clusters of commodity machines.
- MapReduce relies on an underlying file system, and MapReduce programs can be conveniently deployed and run on cheap clusters of ten thousand or more machines.
- Hadoop is the largest open-source platform supporting MapReduce, used by Yahoo!, Facebook, Amazon, and others.

What can MapReduce do?
- Google: index building for Google Search; article clustering for Google News; statistical machine translation.
- Yahoo!: index building for Yahoo! Search; spam detection for Yahoo! Mail.
- Facebook: data mining; ad optimization; spam detection.
- In research: analyzing Wikipedia conflicts (PARC); natural language processing (CMU); bioinformatics (Maryland); astronomical image analysis (Washington); ocean climate simulation (Washington); <your application here>.

Design goals of MapReduce
- Scalability: scanning 100 TB on 1 node at 50 MB/s takes 23 days; scanning on a 1000-node cluster takes 33 minutes.
- Cost-efficiency: commodity nodes (cheap, but unreliable); a commodity network; automatic fault tolerance (fewer admins); ease of use (fewer programmers).

A typical Hadoop cluster
- 40 nodes per rack, 1000-4000 nodes per cluster.
- 1 Gbps bandwidth within a rack, 8 Gbps out of the rack, with rack switches feeding an aggregation switch.
- Node specs (Yahoo! terasort): 8 x 2.0 GHz cores, 8 GB RAM, 4 disks (= 4 TB?).
- (Cluster image from /hadoop-data/attachments/HadoopPresentations/attachments/aw-apachecon-eu-2009.pdf)

The main challenges
- Fault tolerance: cheap nodes fail, especially if you have many of them; the mean time between failures for 1 node is 3 years, so the MTBF for 1000 nodes is 1 day. Solution: build fault tolerance into the system.
- Low bandwidth: a commodity network means low bandwidth. Solution: push computation to the data.
- Programming distributed systems is hard. Solution: a data-parallel programming model in which users write "map" and "reduce" functions while the system handles work distribution and fault tolerance.

MapReduce Programming

The core components of Hadoop
- The distributed file system (HDFS): a single namespace for the entire cluster; replicates data 3x for fault tolerance.
- The MapReduce implementation: executes user jobs specified as "map" and "reduce" functions; manages work distribution and fault tolerance.

HDFS: the Hadoop Distributed File System
- Files are split into 128 MB blocks.
- Blocks are replicated across several datanodes (usually 3).
- A single namenode stores metadata (file names, block locations, etc.).
- Optimized for large files and sequential reads; files are append-only.
- (The original slides show a namenode tracking the block replicas of File1 across four datanodes, followed by two diagrams: Hadoop architecture — HDFS; Hadoop architecture — MapReduce.)

The MapReduce programming model
- Data type: key-value records.
- Map function: (K_in, V_in) -> list(K_inter, V_inter)
- Reduce function: (K_inter, list(V_inter)) -> list(K_out, V_out)

Example: WordCount

    def mapper(line):
        for word in line.split():
            output(word, 1)

    def reducer(key, values):
        output(key, sum(values))

WordCount execution
- Input, one line per mapper: "the quick brown fox"; "the fox ate the mouse"; "how now brown cow".
- Map output: (the,1) (quick,1) (brown,1) (fox,1); (the,1) (fox,1) (ate,1) (the,1) (mouse,1); (how,1) (now,1) (brown,1) (cow,1).
- Shuffle & sort groups the values by key; the two reducers then output (brown,2) (fox,2) (how,1) (now,1) (the,3) and (ate,1) (cow,1) (mouse,1) (quick,1).
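The map/shuffle/reduce flow above can be simulated in plain Python. This is a minimal sketch of the model, not Hadoop itself; the `mapper`, `shuffle`, and `reducer` helpers are illustrative names, not Hadoop APIs:

```python
from collections import defaultdict

def mapper(line):
    # Map: (line) -> sequence of (word, 1) pairs
    for word in line.split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle & sort: group all intermediate values by key,
    # as the framework does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reducer(key, values):
    # Reduce: (word, [1, 1, ...]) -> (word, count)
    return (key, sum(values))

lines = ["the quick brown fox",
         "the fox ate the mouse",
         "how now brown cow"]

map_output = [pair for line in lines for pair in mapper(line)]
counts = dict(reducer(k, vs) for k, vs in shuffle(map_output))
print(counts["the"], counts["fox"], counts["brown"])  # 3 2 2
```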

Execution of a MapReduce Program
- The master node controls task execution on many slave nodes and is responsible for scheduling user jobs.
- Mappers are preferentially placed on the same node, or at least the same rack, as their input data: push computation to the data and minimize network use.
- Mappers save their results directly to local disk rather than pushing them to the reducers.
- Reducers then process these intermediate results scattered across the cluster; there are usually more reduce tasks than nodes, and a reducer is automatically restarted if it fails.

An added optimization: the Combiner
- A combiner is a local aggregation function for repeated keys produced by the same map.
- It applies to associative operations like sum, count, and max, and decreases the size of the intermediate data.
- Example: local counting for WordCount:

    def combiner(key, values):
        output(key, sum(values))

WordCount with a combiner
- Same three input lines as before, but the mapper for "the fox ate the mouse" now emits (the,2) once instead of (the,1) twice, so less data is shuffled; the final output is unchanged.

MapReduce fault-tolerance mechanisms
1. If a task crashes: retry it on another node. This is okay for a map because it had no dependencies, and okay for a reduce because the map outputs are on disk. If the same task repeatedly fails, fail the job or ignore that input block (user-controlled). Note: for this and the other fault-tolerance features to work, your map and reduce tasks must be side-effect-free.
2. If a node crashes: relaunch its current tasks on other nodes, and also relaunch any maps the node previously ran, because their output files were lost along with the crashed node.
3. If a task is going slowly (a straggler): launch a second copy of the task on another node, take the output of whichever copy finishes first, and kill the other one. This is critical for performance in large clusters: stragglers occur frequently due to failing hardware, bugs, misconfiguration, etc.
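The combiner's effect can be sketched in plain Python: summing repeated keys inside each mapper shrinks the intermediate data while leaving the final result unchanged. A toy simulation with illustrative helper names:

```python
from collections import Counter, defaultdict

def mapper(line):
    for word in line.split():
        yield (word, 1)

def combine(pairs):
    # Combiner: locally sum repeated keys within ONE mapper's output.
    # Valid because addition is associative.
    local = Counter()
    for key, value in pairs:
        local[key] += value
    return list(local.items())

def reduce_all(pairs):
    # Final reduce over all intermediate pairs.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

lines = ["the quick brown fox", "the fox ate the mouse", "how now brown cow"]

plain    = [p for line in lines for p in mapper(line)]
combined = [p for line in lines for p in combine(mapper(line))]

# The second mapper now ships ("the", 2) once instead of ("the", 1) twice.
print(len(plain), len(combined))  # 13 12
```

With only repeated "the"s in one line the saving is small, but on real text (many repeats per input split) the shuffle volume drops substantially.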

More Examples: Search
- Input: (lineNumber, line) records.
- Output: the lines matching a given pattern.
- Map: if the line matches the pattern, output(line).
- Reduce: the identity function.
- Alternative: no reducer at all (a map-only job).

More Examples: Sort
- Input: (key, value) records.
- Output: the same records, sorted by key.
- Map: performs a local sort first.
- Reduce: merges the results of the map phase to produce a globally sorted output.

An optimization for Sort: several reducers share the work
- Trick: pick a partitioning function h such that k1 < k2 => h(k1) < h(k2).
- Example from the slide: three mappers feed inputs ("pig sheep yak zebra", "aardvark ant", "bee cow elephant") to two reducers partitioned as [A-M] and [N-Z]; the first reducer outputs aardvark, ant, bee, cow, elephant and the second outputs pig, sheep, yak, zebra, so concatenating the reducer outputs yields a globally sorted result.
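The partitioning trick can be sketched in Python: an order-preserving partition function h routes each key range to a fixed reducer, so concatenating the sorted reducer outputs gives a total order. This uses the toy data from the slide; the function names are illustrative:

```python
def h(key):
    # Order-preserving partitioner: k1 < k2 implies h(k1) <= h(k2).
    # Reducer 0 handles [A-M], reducer 1 handles [N-Z].
    return 0 if key[0].lower() <= "m" else 1

map_inputs = [["pig", "sheep", "yak", "zebra"],
              ["aardvark", "ant"],
              ["bee", "cow", "elephant"]]

partitions = [[], []]
for chunk in map_inputs:          # each "mapper" routes its own keys
    for key in chunk:
        partitions[h(key)].append(key)

# Each reducer sorts its own partition; concatenating the outputs
# in partition order yields a globally sorted sequence.
result = sorted(partitions[0]) + sorted(partitions[1])
print(result)
```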

More Examples: Inverted Index
- Input: (filename, text) records.
- Output: for each word, the list of files containing it.
- Map:

    for each word in text.split():
        output(word, filename)

- Combine: uniquify the filenames for each word.
- Reduce:

    def reduce(word, filenames):
        output(word, sort(filenames))
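The inverted-index pseudocode above can be simulated in a few lines of Python, using a set per word to play the combiner's uniquify role. A sketch over the slides' own sample files:

```python
from collections import defaultdict

files = {
    "hamlet.txt": "to be or not to be",
    "12th.txt": "be not afraid of greatness",
}

postings = defaultdict(set)
for filename, text in files.items():
    # Map: emit (word, filename); the set uniquifies, like the combiner.
    for word in text.split():
        postings[word].add(filename)

# Reduce: sort the filename list for each word.
index = {word: sorted(names) for word, names in postings.items()}
print(index["be"])  # ['12th.txt', 'hamlet.txt']
```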

More Examples: Inverted Index, a worked example
- hamlet.txt contains "to be or not to be"; 12th.txt contains "be not afraid of greatness".
- Map output: (to, hamlet.txt) (be, hamlet.txt) (or, hamlet.txt) (not, hamlet.txt) (be, 12th.txt) (not, 12th.txt) (afraid, 12th.txt) (of, 12th.txt) (greatness, 12th.txt).
- Reduce output: afraid, (12th.txt); be, (12th.txt, hamlet.txt); greatness, (12th.txt); not, (12th.txt, hamlet.txt); of, (12th.txt); or, (hamlet.txt); to, (hamlet.txt).

More Examples: Most Popular Words
- Input: (filename, text) records.
- Output: the 100 words occurring in the most files.
- Two-stage solution: Job 1 creates the inverted index, giving (word, list(file)) records; Job 2 maps each (word, list(file)) to (count, word) and sorts these records by count as in the sort job.
- Optimizations: map to (word, 1) instead of (word, file) in Job 1; estimate the count distribution in advance by sampling.

Installing Hadoop
- To install locally, download, unzip, and set JAVA_HOME.
- Details: /core/docs/current/quickstart.html
- Three ways to write jobs: the Java API; Hadoop Streaming (for Python, Perl, etc.); the Pipes API (C++).

WordCount in Java

    public static class MapClass extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

      private final static IntWritable ONE = new IntWritable(1);

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, IntWritable> output,
                      Reporter reporter) throws IOException {
        String line = value.toString();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
          output.collect(new Text(itr.nextToken()), ONE);
        }
      }
    }

WordCount in Java (continued)

    public static class Reduce extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

      public void reduce(Text key, Iterator<IntWritable> values,
                         OutputCollector<Text, IntWritable> output,
                         Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
          sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
      }
    }

WordCount in Java (continued)

    public static void main(String[] args) throws Exception {
      JobConf conf = new JobConf(WordCount.class);
      conf.setJobName("wordcount");
      conf.setMapperClass(MapClass.class);
      conf.setCombinerClass(Reduce.class);
      conf.setReducerClass(Reduce.class);
      FileInputFormat.setInputPaths(conf, args[0]);
      FileOutputFormat.setOutputPath(conf, new Path(args[1]));
      conf.setOutputKeyClass(Text.class);           // out keys are words (strings)
      conf.setOutputValueClass(IntWritable.class);  // values are counts
      JobClient.runJob(conf);
    }

WordCount in Python with Hadoop Streaming

Mapper.py:

    import sys
    for line in sys.stdin:
        for word in line.split():
            print(word.lower() + "\t" + "1")

Reducer.py:

    import sys
    counts = {}
    for line in sys.stdin:
        word, count = line.split("\t")
        counts[word] = counts.get(word, 0) + int(count)
    for word, count in counts.items():
        print(word + "\t" + str(count))
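Hadoop Streaming feeds records to these scripts on stdin and sorts the mapper output before the reducer sees it. The whole pipeline can be emulated in-process for a quick local check; `run_streaming` below is an illustrative stand-in for piping an input file through the mapper script, `sort`, and the reducer script:

```python
def run_streaming(lines, mapper, reducer):
    # Emulate the streaming pipeline: map every input line,
    # sort the intermediate lines (the shuffle), then reduce.
    intermediate = []
    for line in lines:
        intermediate.extend(mapper(line))
    intermediate.sort()
    return reducer(intermediate)

def mapper(line):
    # Same logic as Mapper.py: one "word\t1" line per word.
    return [word.lower() + "\t1" for word in line.split()]

def reducer(sorted_lines):
    # Same logic as Reducer.py: sum the counts per word.
    counts = {}
    for line in sorted_lines:
        word, count = line.split("\t")
        counts[word] = counts.get(word, 0) + int(count)
    return counts

out = run_streaming(["the quick brown fox", "the fox ate the mouse"],
                    mapper, reducer)
print(out["the"], out["fox"])  # 3 2
```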

Pig and Hive

The problem with MapReduce
- MapReduce is great, as many algorithms can be expressed by a series of MR jobs.
- But it is low-level: you must think about keys, values, partitioning, etc.
- Can we capture common "job patterns"?

Introduction to Pig
- Started at Yahoo! Research; now runs about 30% of Yahoo!'s jobs.
- Features: expresses sequences of MapReduce jobs; a data model of nested "bags" of items; relational (SQL) operators (JOIN, GROUP BY, etc.); easy to plug in Java functions; the PigPen development environment for Eclipse.

A simple example
- Suppose you have user data in one file, website data in another, and you need to find the top 5 most visited pages by users aged 18-25.
- Dataflow: Load Users; Load Pages; Filter by age; Join on name; Group on url; Count clicks; Order by clicks; Take top 5.
- Written directly as a MapReduce program, this job is long and complex.
- (Example from /pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt)

In Pig Latin

    Users    = load 'users' as (name, age);
    Filtered = filter Users by age >= 18 and age <= 25;
    Pages    = load 'pages' as (user, url);
    Joined   = join Filtered by name, Pages by user;
    Grouped  = group Joined by url;
    Summed   = foreach Grouped generate group, count(Joined) as clicks;
    Sorted   = order Summed by clicks desc;
    Top5     = limit Sorted 5;
    store Top5 into 'top5sites';

(Example from /pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt)

Translating the Pig script
Notice how naturally the components of the job translate into Pig Latin:

    Load Users      -> Users = load ...
    Filter by age   -> Fltrd = filter ...
    Load Pages      -> Pages = load ...
    Join on name    -> Joined = join ...
    Group on url    -> Grouped = group ...
    Count clicks    -> Summed = ... count() ...
    Order by clicks -> Sorted = order ...
    Take top 5      -> Top5 = limit ...

(Example from /pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt)
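The dataflow the Pig script expresses can be mirrored step for step in plain Python. This sketch uses small invented datasets in place of the 'users' and 'pages' files:

```python
# Invented sample data standing in for the 'users' and 'pages' files.
users = [("amy", 22), ("bob", 17), ("carol", 25), ("dan", 30)]
pages = [("amy", "/a"), ("amy", "/b"), ("carol", "/a"),
         ("bob", "/c"), ("carol", "/b"), ("amy", "/a")]

# filter Users by age >= 18 and age <= 25
filtered = {name for name, age in users if 18 <= age <= 25}

# join Filtered by name, Pages by user
joined = [(user, url) for user, url in pages if user in filtered]

# group Joined by url; foreach group generate count(Joined) as clicks
clicks = {}
for _, url in joined:
    clicks[url] = clicks.get(url, 0) + 1

# order Summed by clicks desc; limit Sorted 5
top5 = sorted(clicks.items(), key=lambda kv: -kv[1])[:5]
print(top5)  # [('/a', 3), ('/b', 2)]
```

Each commented step corresponds to one Pig statement; on a cluster, Pig compiles those statements into MapReduce jobs instead of running them in memory.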

Translating the Pig script into MapReduce jobs
- Pig compiles the same sequence of statements into a pipeline of MapReduce jobs; the original slide overlays Job 1, Job 2, and Job 3 on the statements above (roughly: the load/filter/join steps, then group/count, then order/limit).
- (Example from /pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt)

Introduction to Hive
- Developed at Facebook; used for the majority of Facebook's jobs.
- A "relational database" built on Hadoop: maintains a list of table schemas; offers an SQL-like query language (HQL); can call Hadoop Streaming scripts from HQL; supports table partitioning, clustering, complex data types, and some optimizations.

Creating a Hive table

    CREATE TABLE page_views(
        viewTime     INT,
        userid       BIGINT,
        page_url     STRING,
        referrer_url STRING,
        ip           STRING COMMENT 'User IP address')
    COMMENT 'This is the page view table'
    PARTITIONED BY (dt STRING, country STRING)
    STORED AS SEQUENCEFILE;
