Hadoop 1.0.4 Installation Guide

Create: Linou
Last Update:

Version  Person  Date      Comments
1.0      Linou   2013/1/2

Contents
1 Environment Preparation
1.1 Environment Overview
1.2 Create the Hadoop User Account
1.3 Configure Passwordless SSH Login
1.4 Download and Extract the Hadoop Package to the Target Directory
2 Configure Hadoop
2.1 Configure the NameNode: Edit the .xml Files
2.2 Configure hadoop-env.sh
2.3 Configure the masters and slaves Files
2.4 Copy Hadoop to Each Node
3 Format the NameNode
3.1 Format the NameNode
4 Start Hadoop
4.1 Start Hadoop
4.2 Check the Background Processes
5 Tests
5.1 Upload a File to HDFS
5.2 Create a Directory
5.3 Display File Contents
5.4 Delete Directories and Files

1 Environment Preparation

1.1 Environment Overview

This lab uses virtual machines built with Oracle VM VirtualBox. Virtual machine environment:

Machine  IP  Hostname  Memory  Disk
Master   01  master    2048M   30G
slave1   02  slave1    1024M   30G
slave2   03  slave2    1024M   30G
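The nodes are addressed by hostname (master, slave1, slave2) throughout the rest of the guide, so every node must be able to resolve those names. The original does not show that step; as a minimal sketch, assuming example addresses on a private 192.168.56.0/24 network (hypothetical, substitute the real addresses from the table above), each node's /etc/hosts would contain:

# /etc/hosts on every node (addresses are placeholders)
192.168.56.101  master
192.168.56.102  slave1
192.168.56.103  slave2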
1.2 Create the Hadoop User Account

Run the following script on each node:

#!/bin/bash
groupadd hadoop
useradd hadoop -g hadoop
echo hadoop | passwd --stdin hadoop

1.3 Configure Passwordless SSH Login

First, on all nodes, run:

su - hadoop
ssh-keygen -t rsa

Then run the following on each node in turn.

On master:
cd .ssh/
cat id_rsa.pub >> authorized_keys
scp authorized_keys slave1:/home/hadoop/.ssh/

On slave1:
cd .ssh/
cat id_rsa.pub >> authorized_keys
scp authorized_keys slave2:/home/hadoop/.ssh/

On slave2:
cd .ssh/
cat id_rsa.pub >> authorized_keys
scp authorized_keys master:/home/hadoop/.ssh/

Back on master, distribute the completed file to both slaves:
scp authorized_keys slave1:/home/hadoop/.ssh/
scp authorized_keys slave2:/home/hadoop/.ssh/

One problem came up: no matter how the keys were set up, SSH still prompted for a password. The cause turned out to be the permissions on authorized_keys; setting them to 600 fixed it:

chmod 600 authorized_keys

After that, everything worked.
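Before moving on, it is worth confirming that the logins really are passwordless in every direction. The original does not show this check; a minimal sketch, run as the hadoop user, is:

# each command should print the remote hostname without prompting for a password
ssh master hostname
ssh slave1 hostname
ssh slave2 hostname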
1.4 Download and Extract the Hadoop Package to the Target Directory

The configuration is completed on the master node first and then copied out to the other nodes. Copy the extracted package into /u01/app:

cp -r hadoop-1.0.4 /u01/app/

2 Configure Hadoop

2.1 Configure the NameNode: Edit the .xml Files

cd /u01/app/hadoop/conf/

vi core-site.xml

<property>
  <name>fs.default.name</name>
  <value>hdfs://01:9000</value>
</property>

vi hdfs-site.xml

<property>
  <name>dfs.data.dir</name>
  <value>/u01/app/data/hdfs</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

vi mapred-site.xml

<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://01:9001</value>
</property>

2.2 Configure hadoop-env.sh

See the JAVA_HOME sketch after section 2.4.

2.3 Configure the masters and slaves Files

vi masters
01

vi slaves
02
03

2.4 Copy Hadoop to Each Node

scp -r ./hadoop/ 02:/u01/app
scp -r ./hadoop/ 03:/u01/app
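Regarding section 2.2: the one edit hadoop-env.sh normally needs for Hadoop 1.0.4 is an explicit JAVA_HOME, since the start scripts do not reliably inherit it from the environment. The path below is hypothetical; point it at the JDK actually installed on the nodes:

# conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_38   # hypothetical JDK path; adjust per node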
3 Format the NameNode

3.1 Format the NameNode

$ bin/hadoop namenode -format
13/01/02 11:48:49 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = 01/01
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = /repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by hortonfo on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
13/01/02 11:48:49 INFO util.GSet: VM type       = 64-bit
13/01/02 11:48:49 INFO util.GSet: 2% max memory = 17.77875 MB
13/01/02 11:48:49 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/01/02 11:48:49 INFO util.GSet: recommended=2097152, actual=2097152
13/01/02 11:48:49 INFO namenode.FSNamesystem: fsOwner=oracle
13/01/02 11:48:49 INFO namenode.FSNamesystem: supergroup=supergroup
13/01/02 11:48:49 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/01/02 11:48:49 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/01/02 11:48:49 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/01/02 11:48:50 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/01/02 11:48:50 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/01/02 11:48:50 INFO common.Storage: Storage directory /tmp/hadoop-oracle/dfs/name has been successfully formatted.
13/01/02 11:48:50 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at 01/01
************************************************************/
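The log shows the NameNode image being written to /tmp/hadoop-oracle/dfs/name, which is the default location when dfs.name.dir is not set; the configuration above leaves it there. As a hedged suggestion (not part of the original setup), pinning the metadata to a persistent directory in hdfs-site.xml avoids losing it when /tmp is cleaned:

<property>
  <name>dfs.name.dir</name>
  <!-- hypothetical path, chosen to sit next to the dfs.data.dir configured earlier -->
  <value>/u01/app/data/name</value>
</property>

The NameNode would need to be reformatted (or the existing directory migrated) after changing this value.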
4 Start Hadoop

4.1 Start Hadoop

$ bin/start-all.sh
starting namenode, logging to /u01/app/hadoop/libexec/../logs/hadoop-oracle-namenode-01.out
02: starting datanode, logging to /u01/app/hadoop/libexec/../logs/hadoop-oracle-datanode-02.out
03: starting datanode, logging to /u01/app/hadoop/libexec/../logs/hadoop-oracle-datanode-03.out
01: starting secondarynamenode, logging to /u01/app/hadoop/libexec/../logs/hadoop-oracle-secondarynamenode-01.out
starting jobtracker, logging to /u01/app/hadoop/libexec/../logs/hadoop-oracle-jobtracker-01.out
02: starting tasktracker, logging to /u01/app/hadoop/libexec/../logs/hadoop-oracle-tasktracker-02.out
03: starting tasktracker, logging to /u01/app/hadoop/libexec/../logs/hadoop-oracle-tasktracker-03.out

4.2 Check the Background Processes

$ jps
3404 kvstore-2.0.23.jar
3472 ManagedService
3575 ManagedService
9544 NameNode
9820 JobTracker
9734 SecondaryNameNode
9930 Jps
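On the master, jps lists only the NameNode, SecondaryNameNode and JobTracker (the kvstore and ManagedService entries are unrelated local Java processes). The DataNode and TaskTracker daemons live on the slaves; the original does not show that check, but a quick sketch using the passwordless SSH configured earlier is:

# each slave should report a DataNode and a TaskTracker
# (assumes jps is on the PATH for non-interactive shells)
ssh slave1 jps
ssh slave2 jps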
5 Tests

5.1 Upload a File to HDFS

$ hadoop dfs -put test.txt /test
$ hadoop dfs -ls /test
Found 1 items
-rw-r--r--   2 oracle supergroup         17 2013-01-02 12:15 /test/test.txt

5.2 Create a Directory

$ hadoop dfs -ls /user
$ hadoop dfs -mkdir /user/test
$ hadoop dfs -ls /user
Found 1 items
drwxr-xr-x   - oracle supergroup          0 2013-01-02 13:03 /user/test

5.3 Display File Contents

$ hadoop dfs -cat /test/test.txt
hadoop test file

5.4 Delete Directories and Files

$ hadoop dfs -rmr /test/test.txt
Deleted hdfs://01:9000/test/test.txt
$ hadoop dfs -ls /test
$ hadoop dfs -rmr /test
Deleted hdfs://01:9000/test
$ hadoop dfs -ls /test
ls: Cannot access /test: No such file or directory.
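A matching download check, not shown in the original, completes the round trip through HDFS. The scratch directory below is hypothetical and is removed at the end so the next section can re-upload to /test cleanly:

$ hadoop dfs -mkdir /gettest
$ hadoop dfs -put test.txt /gettest
$ hadoop dfs -get /gettest/test.txt /tmp/test_copy.txt   # copy the file back out of HDFS
$ diff test.txt /tmp/test_copy.txt                       # no output means the contents match
$ hadoop dfs -rmr /gettest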
5.5 Test the WordCount Program

$ hadoop dfs -put /home/oracle/test/test.txt /test
$ hadoop jar hadoop-examples-1.0.4.jar wordcount /test/test.txt /test/out
13/01/03 10:47:46 INFO input.FileInputFormat: Total input paths to process : 1
13/01/03 10:47:46 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/01/03 10:47:46 WARN snappy.LoadSnappy: Snappy native library not loaded
13/01/03 10:47:46 INFO mapred.JobClient: Running job: job_201301021214_0002
13/01/03 10:47:47 INFO mapred.JobClient:  map 0% reduce 0%
13/01/03 10:48:00 INFO mapred.JobClient:  map 100% reduce 0%
13/01/03 10:48:12 INFO mapred.JobClient:  map 100% reduce 100%
13/01/03 10:48:17 INFO mapred.JobClient: Job complete: job_201301021214_0002
13/01/03 10:48:17 INFO mapred.JobClient: Counters: 29
13/01/03 10:48:17 INFO mapred.JobClient:   Job Counters
13/01/03 10:48:17 INFO mapred.JobClient:     Launched reduce tasks=1
13/01/03 10:48:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=16164
13/01/03 10:48:17 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/01/03 10:48:17 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/01/03 10:48:17 INFO mapred.JobClient:     Launched map tasks=1
13/01/03 10:48:17 INFO mapred.JobClient:     Data-local map tasks=1
13/01/03 10:48:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=11411
13/01/03 10:48:17 INFO mapred.JobClient:   File Output Format Counters
13/01/03 10:48:17 INFO mapred.JobClient:     Bytes Written=23
13/01/03 10:48:17 INFO mapred.JobClient:   FileSystemCounters
13/01/03 10:48:17 INFO mapred.JobClient:     FILE_BYTES_READ=41
13/01/03 10:48:17 INFO mapred.JobClient:     HDFS_BYTES_READ=137
13/01/03 10:48:17 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=43303
13/01/03 10:48:17 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=23
13/01/03 10:48:17 INFO mapred.JobClient:   File Input Format Counters
13/01/03 10:48:17 INFO mapred.JobClient:     Bytes Read=32
13/01/03 10:48:17 INFO mapred.JobClient:   Map-Reduce Framework
13/01/03 10:48:17 INFO mapred.JobClient:     Map output materialized bytes=41
13/01/03 10:48:17 INFO mapred.JobClient:     Map input records=1
13/01/03 10:48:17 INFO mapred.JobClient:     Reduce shuffle bytes=0
13/01/03 10:48:17 INFO mapred.JobClient:     Spilled Records=6
13/01/03 10:48:17 INFO mapred.JobClient:     Map output bytes=56
13/01/03 10:48:17 INFO mapred.JobClient:     CPU time spent (ms)=3620
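After the job completes, the result can be inspected by listing the output directory and printing the reducer output; with a single reduce task the file is normally named part-r-00000. This follow-up is not shown in the source and is a hedged sketch:

$ hadoop dfs -ls /test/out
$ hadoop dfs -cat /test/out/part-r-00000   # prints each word in test.txt with its count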