Hadoop - May assignment report (鄧森林, 省分系統(tǒng)集成中心)

Assignment 1: build the Hadoop pseudo-distributed environment.
Assignment 4: start a job and check its status and logs through the JobHistory server.

1.1.3 Architecture planning
Following the course material, the plan is a "1 + 3" platform: one Master machine and three Slave machines, built as four operating systems on VMware virtual machines. To make later debugging easier, the last octet of each IP address is set to an easy-to-remember number (191 for the master, 201 onward for the slaves), and a list maps each username (master1, slave1, slave2, slave3) to its IP.

Network settings
When working with VMware, the virtual network has to be configured first, mainly the virtual network address range and the subnet mask:
Step 1: open VMware Workstation Pro.
Step 2: select NAT mode (configured under VMnet8).
Step 3: set the VMnet8 subnet IP.

Master: switching the login mode (preparation)
Step 1: press Ctrl+Alt+F2.
Step 2: as root, open /etc/inittab with vi.
Step 3: set the default runlevel line to "id:5:initdefault:" (press i to enter INSERT mode, make the change, press Esc to leave INSERT mode, then save and quit). Without this the Linux machine boots into the command-line interface by default.
Linux runlevels:
0: halt
1: single user
2: multi-user without networking
3: multi-user with networking
4: unused (reserved)
5: graphical interface
6: reboot

Modifying the IP address (as root)
Step 1: on the Master, eth0 is the first network card. Because the virtual machine was cloned from an existing one, the old NIC configuration file no longer matches the new virtual hardware, so a new configuration file has to be generated for the new hardware (this is how the eth1 situation arises).
The fields shown by ifconfig:
Link encap:Ethernet --> link type: Ethernet
inet addr: ... --> IPv4 address
inet6 addr: fe80::20c:29ff:fe2f:f890/64 --> IPv6 address
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 --> UP (interface enabled), RUNNING (cable attached), MULTICAST (multicast supported), MTU 1500 bytes, metric 1
RX packets:90 errors:0 dropped:0 overruns:0 frame:0 --> received packets: 90, errors: 0, dropped: 0, overruns: 0, frame errors: 0
TX packets:56 errors:0 dropped:0 overruns:0 carrier:0 --> transmitted packets: 56, errors: 0, dropped: 0, overruns: 0, carrier errors: 0
collisions:0 txqueuelen:1000 --> collisions: 0, transmit queue length: 1000
RX bytes:9041 (8.8 KiB) TX bytes:7383 (7.2 KiB) --> bytes received and sent
Step 2: change the interface from a dynamic (DHCP) address to a static IP.
Step 3: restart the network: service network restart.

Modifying the hostname
Open the hostname configuration with vi, change the name, restart the network with service network restart, and verify with cat.

Modifying the hosts file (to make later passwordless SSH login easier)
Step 1: as root, open /etc/hosts with vi (the report keeps a before/after comparison).
Step 2: insert the hostname/IP entries for all four machines.

Firewall
Step 1: stop the firewall: service iptables stop.
Step 2: check it: service iptables status (and make sure it stays disabled).

SSH keys
Step 1: generate the key pair with ssh-keygen -t rsa (this produces the private key id_rsa and the public key id_rsa.pub).
Step 2: inspect the new .ssh directory:
1) cd .ssh
2) ls -al
Step 3: fix permissions and build authorized_keys:
1) give the .ssh directory mode 700: chmod 700 ~/.ssh
2) run ssh-keygen -t rsa once more
3) create the authorized_keys file: cat id_rsa.pub >> /home/hadoop/.ssh/authorized_keys
4) give authorized_keys mode 600: chmod 600 ~/.ssh/authorized_keys
5) test whether "ssh master1" now succeeds.

JDK installation
sudo mkdir /usr/local/java
Go to the directory holding the JDK archive (this assumes it has already been copied onto the Linux system), copy jdk-8u66-linux-x64.tar.gz into the java directory, then cd /usr/local/java and unpack the archive. Edit the profile with sudo vi /etc/profile (note: sudo is required because the hadoop user has no permission to modify this file), then:
source /etc/profile
echo $CLASSPATH
echo $PATH
Set the default java with update-alternatives:
update-alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_131/bin/java 300
update-alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_131/bin/javac 300
update-alternatives --config java

hadoop-2.7.3.tar.gz installation
Go to the directory holding the Hadoop archive (again assuming it is already on the Linux system), cd /usr/local, unpack the archive, remove the tarball, and rename hadoop-2.7.3 to hadoop to simplify later management:
sudo mv -i hadoop-2.7.3 hadoop

Slave machines
The slave machines are cloned from the master in VMware according to the plan in 1.1.3. When Linux enumerates network cards the first becomes eth0 and the next eth1; after cloning a virtual machine the card often comes up as eth1 and no edit seems to change it, which is troublesome when running HA-heartbeat experiments across several virtual machines.
Steps 1-2: clean up the generated entries in /etc/udev/rules.d/70-persistent-net.rules.
Step 3: query the automatically generated UUID: nmcli con | sed -n '1,2p'
Step 5: edit the interface configuration with vi accordingly.

Passwordless SSH login
Step 1: copy master1's public key to slave1 (and to the other slaves in the same way).

Hadoop follow-up configuration (continuing from 1.2.4)
hadoop-env.sh: locate the export JAVA_HOME line and replace it with the installed JDK path (sudo vi hadoop-env.sh).
yarn-env.sh: locate the export JAVA_HOME line and replace it in the same way (sudo vi yarn-env.sh).
mapred-env.sh: locate the line export JAVA_HOME=/home/y/libexec/jdk1.6.0 and replace it as well (sudo vi mapred-env.sh).
Working directories: create tmp, hdfs and dfs/data under the Hadoop directory:
sudo mkdir tmp
sudo mkdir hdfs
sudo mkdir dfs
and check the result with ls -al.
core-site.xml: locate the file and edit it with sudo vi core-site.xml. Because master1 runs the NameNode (and passwordless SSH login to it is configured), the default filesystem is set to master1:9000; a sketch of the resulting settings follows below.
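A minimal sketch of what core-site.xml ends up containing. Only the fs.defaultFS value (hdfs://master1:9000) is stated in the report itself; pointing hadoop.tmp.dir at the tmp directory created above is an assumption added here for illustration:

  <configuration>
    <property>
      <!-- NameNode RPC address; master1 resolves through /etc/hosts -->
      <name>fs.defaultFS</name>
      <value>hdfs://master1:9000</value>
    </property>
    <property>
      <!-- assumed: base directory for temporary files, reusing the tmp directory created above -->
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop/tmp</value>
    </property>
  </configuration>

Every node then reaches the NameNode through hdfs://master1:9000, which is why the hosts-file entries above matter.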
hdfs-site.xml: cd into the configuration directory and edit it with sudo vi hdfs-site.xml (NameNode/DataNode directories and replication).
mapred-site.xml: only the template ships with the release, so rename it first and then edit it:
mv mapred-site.xml.template mapred-site.xml
yarn-site.xml: edit it with sudo vi yarn-site.xml.
slaves: add the three nodes slave1, slave2 and slave3, and remember to delete the default localhost entry (before: localhost; after: slave1, slave2, slave3).

Copy the configuration prepared on the master terminal to the three slaves. Where permissions got in the way, chmod 766 (and in places chmod 777) was used, followed by source /etc/profile. On the Master the directory permissions changed from
drwxr-xr-x  9 root root 4096 Aug ...
to
drwxrw-rw- 12 root root 4096 May 22 03:41 ...

Slave DataNodes (slave1, slave2, slave3): on the slaves the DataNode storage directory is the one named by the dfs.datanode.data.dir parameter in hdfs-site.xml. Before re-initializing the NameNode, the old data directory was removed and recreated so that the DataNodes would accept the new namespace:
[hadoop@slave2 dfs]$ rm -rf data
[hadoop@slave2 dfs]$ mkdir data

Environment variables: chmod -R 777 was applied where needed, the JAVA_HOME exports in hadoop-env.sh, yarn-env.sh and mapred-env.sh were located and replaced (vi /usr/local/hadoop/etc/hadoop/mapred-env.sh and so on), and HADOOP_HOME plus the related CONFIG variables were added to /etc/profile (vi /etc/profile), followed by source /etc/profile.

Working directories on master1 (tmp, hdfs, dfs/data):
[hadoop@master1 hadoop]$ mkdir tmp hdfs dfs
[hadoop@master1 hadoop]$ ls -al
The listing confirms the new tmp, hdfs and dfs directories next to the existing Hadoop files.

Permissions, start-up and verification
1. Grant full permissions on the whole /usr/local tree: sudo chmod -R 777 /usr/local
2. Start the services: cd /usr/local/hadoop and run the start scripts.
3. Check the Hadoop processes: ps -ef | grep hadoop
hdfs dfsadmin -report then shows the cluster and its three live DataNodes:
Configured Capacity: 55935541248 (52.09 GB)
DFS Used: 98304 (96 KB)
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
and each of the three DataNodes reports DFS Used: 32768 (32 KB), zero cache capacity, and a recent "Last contact" timestamp (Sun May 28 17:18:56 / 17:18:57 PDT).

WordCount example
4. A local examples/wordcount/ subdirectory was created with mkdir -p, and small test files were placed in it; they contain lines such as "I think pmpbox will help".
[hadoop@master1 wordcount]$ pwd
5. Create the input directory on the HDFS filesystem:
cd /usr/local/hadoop
[hadoop@master1 hadoop]$ ./bin/hadoop fs -mkdir /input
[hadoop@master1 hadoop]$ hdfs dfs -ls /
Found 1 items (the new /input directory, owned by hadoop). The same operations can be issued through the hdfs dfs command family.
6. Upload the local files into /input. The copy logs a harmless "17/05/28 17:43:58 WARN hdfs.DFSClient: Caught exception ... at java.lang.Object.wait(Native Method)", after which:
[hadoop@master1 hadoop]$ hdfs dfs -ls /input
Found 2 items (the two uploaded files, owned by hadoop).
For reference, the official quick start does the same with:
$ bin/hdfs dfs -mkdir -p ...
$ bin/hdfs dfs -put etc/hadoop ...
7. Run the example jar from the Hadoop directory (locate it with ls -al):
./bin/hadoop jar ./share/hadoop/mapreduce/sources/hadoop-...sources.jar org.apache.hadoop.examples.WordCount /input /output
The job could not be submitted. A blog post about the same symptom says the YARN configuration items for memory and virtual memory have to be set; in that article the cause was that each Docker container had been given too little memory and CPU to run Hadoop and Hive. Here the submission fails with:
17/05/28 23:45:04 INFO input.FileInputFormat: Total input paths to process : 2
17/05/28 23:45:04 INFO mapreduce.JobSubmitter: number of splits:2
17/05/28 23:45:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-...
org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=768
The stack trace runs through org.apache.hadoop.ipc.Server$Handler.run, org.apache.hadoop.mapreduce.Job$10.run, javax.security.auth.Subject.doAs, org.apache.hadoop.examples.WordCount.main and org.apache.hadoop.util.RunJar.run, with the same InvalidResourceRequestException reported as the root cause.
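For context (this reasoning is not spelled out in the report): in Hadoop 2.7.x the MapReduce ApplicationMaster asks YARN for yarn.app.mapreduce.am.resource.mb, which defaults to 1536 MB, so with a scheduler maximum of only 768 MB the job is rejected before any task runs. A quick way to see which limits are actually configured (a sketch; the path assumes the install location used above):

  # show any scheduler/NodeManager memory limits set in yarn-site.xml;
  # properties that are absent fall back to the YARN defaults
  grep -E -B1 -A2 'memory-mb|allocation-mb' /usr/local/hadoop/etc/hadoop/yarn-site.xml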
Since the 2.7.3 example expects its input under the user directory, the input was first recreated there and the job retried:
step01: ./bin/hadoop fs -mkdir ... (create the user directory)
step02: delete the previously created input: hadoop fs -rmr /input
step03: recreate the input under the user directory: ./bin/hadoop fs -mkdir -p ...
step04: upload the local files: ./bin/hadoop fs -put ...
step05: list them with hdfs dfs -ls; viewing the files shows the test data:
pmpbox ok
pmpbox v1.0
pmpbox online
I think pmpbox will help
Then the example was run again against the new paths:
./bin/hadoop jar ./share/hadoop/mapreduce/sources/hadoop-...sources.jar org.apache.hadoop.examples.WordCount /user/input /user/output
17/05/31 00:15:17 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-...
Exception in thread "main" java.io.IOException: ... Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=768
The same InvalidResourceRequestException appears (same stack trace through the ResourceManager IPC, the job submission and WordCount.main as before), so changing the input path alone does not help: the YARN memory limits themselves have to be raised.
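The fix applied next raises the YARN memory limits in yarn-site.xml. The exact values used in the report are not preserved, so the snippet below is only an illustrative sketch using the standard YARN property names; the maximum allocation simply has to be at least the 1536 MB the ApplicationMaster requests:

  <configuration>
    <property>
      <!-- total memory YARN may hand out on each NodeManager (illustrative value) -->
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>2048</value>
    </property>
    <property>
      <!-- largest single container; must cover the 1536 MB ApplicationMaster request -->
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>2048</value>
    </property>
    <property>
      <!-- smallest container YARN will hand out (illustrative value) -->
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>512</value>
    </property>
    <property>
      <!-- the blog post cited above also suggests relaxing the virtual-memory check -->
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>
  </configuration>

These properties sit alongside whatever yarn-site.xml already contained from the earlier configuration step.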
Modify yarn-site.xml on the master accordingly (cd into the configuration directory and edit it).
step08: copy the updated file to each slave. If the NameNode is still in safe mode, take it out first (./bin/hadoop dfsadmin -safemode leave), then rerun the example:
[hadoop@master1 hadoop]$ ./bin/hadoop jar ./share/hadoop/mapreduce/sources/hadoop-...sources.jar org.apache.hadoop.examples.WordCount /user/input /user/output
17/05/31 00:51:43 INFO ...: Connecting to ResourceManager at ...
17/05/31 00:51:46 INFO input.FileInputFormat: Total input paths to process : 2
17/05/31 00:51:46 WARN hdfs.DFSClient: Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method)  (harmless, logged twice)
17/05/31 00:51:46 INFO mapreduce.JobSubmitter: number of splits:2
17/05/31 00:51:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: ...
17/05/31 00:51:47 INFO impl.YarnClientImpl: Submitted application ...
17/05/31 00:51:48 INFO mapreduce.Job: The url to track the job: ...
17/05/31 00:51:48 INFO mapreduce.Job: Running job: job_1496216703778_0002
17/05/31 00:52:03 INFO mapreduce.Job: Job job_1496216703778_0002 running in uber mode : false
17/05/31 00:52:43 INFO mapreduce.Job: Job job_1496216703778_0002 completed successfully
17/05/31 00:52:44 INFO mapreduce.Job: Counters: 50
File System Counters
  FILE: Number of bytes read=...
  FILE: Number of bytes written=356386
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
  HDFS: Number of bytes read=389
  HDFS: Number of bytes written=137
  HDFS: Number of read operations=9
  HDFS: Number of large read operations=0
  HDFS: Number of write operations=...
This time the WordCount job runs to completion on the cluster (CentOS 6.6, master1, /usr/local/hadoop).
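To close out assignment 4 (checking the job's status and logs through the JobHistory server), the usual follow-up looks like the sketch below. The output path and the job id come from the run above; the history-server script, the mapred job command and the 19888 web port are the standard ones shipped with Hadoop 2.7.3, not details taken from the report:

  # print the WordCount result written by the job
  ./bin/hdfs dfs -cat /user/output/part-r-00000

  # start the MapReduce JobHistory server on master1 if it is not already running
  ./sbin/mr-jobhistory-daemon.sh start historyserver

  # job list, per-job status, counters and logs are then browsable at
  #   http://master1:19888/jobhistory
  # and the finished job can also be queried from the command line:
  ./bin/mapred job -status job_1496216703778_0002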
