Hadoop Cluster Environment Setup
1. Environment and packages

Preparation:
1) Four PCs
2) Operating system: CentOS-7.0-1406-x86_64-DVD.iso
3) Java: jdk-8u121-linux-x64.gz
4) Hadoop: hadoop-2.7.4-x64.tar.gz
5) HBase: hbase-1.2.1-bin.tar.gz

Network plan (the IP addresses are truncated in the source):

Hostname   IP
master     02
slave1     03
slave2     04
slave3     05

Common commands:

systemctl start foo.service             # start a service
systemctl stop foo.service              # stop a service
systemctl restart foo.service           # restart a service
systemctl status foo.service            # show a service's status, running or not
systemctl enable foo.service            # start a service at boot
systemctl disable foo.service           # do not start a service at boot
systemctl is-enabled iptables.service   # check whether a service starts at boot
reboot                                  # reboot the host
shutdown -h now                         # shut down immediately
source /etc/profile                     # apply /etc/profile changes immediately
yum install net-tools

2. Install and configure CentOS

Install CentOS
1) Boot from CentOS-7.0-1406-x86_64-DVD.iso to start the installer.
2) Select "Install CentOS 7" and press Enter.
3) Choose a language. English is the default; Chinese is fine for learning, but use English for a production environment.
4) Configure the network and hostname: hostname master, network enabled, manual IPv4 configuration.
5) Choose the installation destination. Select manual partitioning with standard partitions, click "Click here to create them automatically", click Done, and accept the changes.
6) Set the root password: Jit123
7) Reboot. Installation is complete.

Configure the IP address

Check the current address:
ip addr    (or: ip link)

Configure the IP address and gateway:
# cd /etc/sysconfig/network-scripts   # enter the network configuration directory
find ifcfg-em*                        # locate the NIC configuration file, e.g. ifcfg-em1
vi ifcfg-em1                          # edit the NIC configuration file
(or: vi /etc/sysconfig/network-scripts/ifcfg-em1)

Settings (several values are truncated in the source):
BOOTPROTO=static    # static for a fixed IP, dhcp for a dynamic one
ONBOOT=yes          # bring the interface up at boot
IPADDR=02           # IP address
NETMASK=            # subnet mask
GATEWAY=
DNS1=5

systemctl restart network.service    # restart the network service

Configure hosts
# vi /etc/hosts
Add one "<IP> <hostname>" line per node (the addresses are truncated in the source):
master
slave1
slave2
slave3
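The /etc/hosts mapping above can be sanity-checked with a small script before moving on. This is only a sketch: the 192.0.2.x addresses are placeholders (the real addresses are truncated in the source), and the check runs against an inline sample file rather than the live /etc/hosts.

```shell
#!/bin/sh
# Sketch: verify that every cluster hostname has an entry in a hosts-format
# file. The addresses below are placeholders; on a real node, point
# hosts_file at /etc/hosts instead.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.0.2.2 master
192.0.2.3 slave1
192.0.2.4 slave2
192.0.2.5 slave3
EOF

missing=0
for h in master slave1 slave2 slave3; do
    # match the hostname as a whole word anywhere in the file
    grep -qw "$h" "$hosts_file" || { echo "missing entry: $h"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all cluster hostnames present"
rm -f "$hosts_file"
```

Running the same loop on every node catches the common mistake of editing /etc/hosts on the master only.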
Configure the firewall and NTP:
systemctl status firewalld.service     # check the firewall status
systemctl stop firewalld.service       # stop the firewall
systemctl disable firewalld.service    # do not start the firewall at boot
yum install -y ntp                     # install the ntp service
ntpdate                                # sync the clock from the network (server truncated in the source)

Install and configure the JDK

Remove the bundled JDK
A fresh CentOS install ships with OpenJDK; java -version prints something like:

java version "1.6.0"
OpenJDK Runtime Environment (build 1.6.0-b09)
OpenJDK 64-Bit Server VM (build 1.6.0-b09, mixed mode)

It is best to remove OpenJDK before installing Sun's JDK. First list the installed packages:
rpm -qa | grep java
which shows something like:
java-1.4.2-gcj-compat--40jpp.115
java-1.6.0-openjdk--1.7.b09.el5

Remove them:
rpm -e --nodeps java-1.4.2-gcj-compat--40jpp.115
rpm -e --nodeps java-1.6.0-openjdk--1.7.b09.el5

Other useful queries:
rpm -qa | grep gcj
rpm -qa | grep jdk

If the OpenJDK package source cannot be found, remove it with yum instead:
yum -y remove java java-1.4.2-gcj-compat--40jpp.115
yum -y remove java java-1.6.0-openjdk--1.7.b09.el5

Install the JDK
Upload jdk-8u121-linux-x64.gz to root's home directory, then:
mkdir /home
tar -zxvf jdk-8u121-linux-x64.gz -C /home/
rm -rf jdk-8u121-linux-x64.gz

Copy the JDK to the other hosts:
scp -r /home root@slave1:/home/hadoop
scp -r /home root@slave2:/home/hadoop
scp -r /home root@slave3:/home/hadoop

Configure the JDK environment variables on each host:
vi /etc/profile
Add:
export JAVA_HOME=/home/jdk1.8.0_121
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

source /etc/profile    # apply the changes
java -version          # check the Java version

Create the hadoop user (on every host):
[root@Master1 ~]# groupadd hadoop             # create the hadoop group
[root@Master1 ~]# useradd -g hadoop hadoop    # create the hadoop user in the hadoop group
[root@Master1 ~]# passwd hadoop               # set its password

Configure passwordless SSH

Check the SSH service on each host:
systemctl status sshd.service                 # check the ssh service status
yum install openssh-server openssh-clients    # install ssh; skip if already installed
systemctl start sshd.service                  # start ssh; skip if already running

Generate a key pair on each host (run on every machine):
su - hadoop               # switch to the hadoop user
ssh-keygen -t rsa -P ''   # generate the key pair (press Enter three times), as shown below
(Screenshot: ssh-keygen run as the hadoop user; the key pair is saved to /home/hadoop/.ssh/id_rsa and /home/hadoop/.ssh/id_rsa.pub, followed by the key fingerprint and randomart image.)

On slave1:
cp ~/.ssh/id_rsa.pub ~/.ssh/slave1.id_rsa.pub
scp ~/.ssh/slave1.id_rsa.pub hadoop@master:~/.ssh

On slave2:
cp ~/.ssh/id_rsa.pub ~/.ssh/slave2.id_rsa.pub
scp ~/.ssh/slave2.id_rsa.pub hadoop@master:~/.ssh

On slave3:
cp ~/.ssh/id_rsa.pub ~/.ssh/slave3.id_rsa.pub
scp ~/.ssh/slave3.id_rsa.pub hadoop@master:~/.ssh

On master:
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat slave1.id_rsa.pub >> authorized_keys
cat slave2.id_rsa.pub >> authorized_keys
cat slave3.id_rsa.pub >> authorized_keys
scp authorized_keys hadoop@slave1:~/.ssh
scp authorized_keys hadoop@slave2:~/.ssh
scp authorized_keys hadoop@slave3:~/.ssh

Set the key file permissions on every host:
su - hadoop
chmod 600 ~/.ssh/authorized_keys

Test passwordless login:
ssh slave1    # the first login asks you to type "yes"; if no password prompt follows, the setup succeeded

3. Install and configure Hadoop

Install Hadoop
Upload hadoop-2.7.4.tar.gz to root's home directory, then:
tar -zxvf hadoop-2.7.4.tar.gz -C /home/hadoop
rm -rf hadoop-2.7.4.tar.gz
mkdir /home/hadoop/hadoop-2.7.4/tmp
mkdir /home/hadoop/hadoop-2.7.4/logs
mkdir /home/hadoop/hadoop-2.7.4/hdf
mkdir /home/hadoop/hadoop-2.7.4/hdf/data
mkdir /home/hadoop/hadoop-2.7.4/hdf/name

Configure hadoop-env.sh
Edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows:
# set to the root of your Java installation
export JAVA_HOME=/home/jdk1.8.0_121

Configure yarn-env.sh
# export JAVA_HOME=/home/y/libexec/jdk1.7.0/
export JAVA_HOME=/home/jdk1.8.0_121

Configure slaves
# vi /home/hadoop/hadoop-2.7.4/etc/hadoop/slaves
Delete: localhost
Add:
slave1
slave2
slave3

Configure core-site.xml
# vi /home/hadoop/hadoop-2.7.4/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop-2.7.4/tmp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>Read/write buffer size in bytes; 131072 bytes is 128 KB</description>
  </property>
</configuration>

Configure hdfs-site.xml
# vi /home/hadoop/hadoop-2.7.4/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster1</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-2.7.4/hdf/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-2.7.4/hdf/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Number of block replicas; 1 is enough for pseudo-distributed mode</description>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

Configure mapred-site.xml
cp /home/hadoop/hadoop-2.7.4/etc/hadoop/mapred-site.xml.template /home/hadoop/hadoop-2.7.4/etc/hadoop/mapred-site.xml
vi /home/hadoop/hadoop-2.7.4/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

Configure yarn-site.xml
# vi /home/hadoop/hadoop-2.7.4/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
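Before copying the configs to the slaves, the <name>/<value> pairs can be spot-checked from the shell. A minimal sketch, assuming only sed: the get_prop helper and the inline core-site.xml fragment are illustrative, not part of the original tutorial.

```shell
#!/bin/sh
# Sketch (hypothetical helper): extract a property's <value> from a
# Hadoop *-site.xml file. get_prop FILE NAME prints the value of NAME.
get_prop() {
    # take the lines from <name>NAME</name> to the next </value>,
    # and print the text between the <value> tags
    sed -n "/<name>$2<\/name>/,/<\/value>/s/.*<value>\([^<]*\)<\/value>.*/\1/p" "$1"
}

conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
EOF

fs=$(get_prop "$conf" fs.defaultFS)
echo "fs.defaultFS = $fs"
rm -f "$conf"
```

The same one-liner works against the real files, e.g. get_prop /home/hadoop/hadoop-2.7.4/etc/hadoop/core-site.xml fs.defaultFS, to confirm every node ends up with identical settings.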
Copy Hadoop to the other hosts:
scp -r /home/hadoop/hadoop-2.7.4 hadoop@slave1:/home/hadoop
scp -r /home/hadoop/hadoop-2.7.4 hadoop@slave2:/home/hadoop
scp -r /home/hadoop/hadoop-2.7.4 hadoop@slave3:/home/hadoop

Configure the Hadoop environment variables on each host:
su - root
vi /etc/profile
Add:
export HADOOP_HOME=/home/hadoop/hadoop-2.7.4
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_LOG_DIR=/home/hadoop/hadoop-2.7.4/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR

source /etc/profile    # apply the changes

Format the NameNode:
cd /home/hadoop/hadoop-2.7.4/sbin
hdfs namenode -format

3.5 Start Hadoop
Start HDFS:
cd /home/hadoop/hadoop-2.7.4/sbin
start-all.sh

Check that Hadoop came up by opening the NameNode web UI on port 50070 of the master (the address is truncated in the source):

(Screenshot: NameNode web UI "Overview" page showing the active NameNode, cluster ID, summary, heap usage, and configured capacity/DFS usage.)
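Besides the web UI, the running daemons can be confirmed from the command line. A sketch that scans jps-style output for the processes expected on the master; the canned jps_out string is a sample, and on a live node one would use jps_out=$(jps) instead.

```shell
#!/bin/sh
# Sketch: check jps output for the daemons expected on the master node.
# jps_out is canned sample output; on a real node use: jps_out=$(jps)
jps_out='2212 ResourceManager
1917 NameNode
2078 SecondaryNameNode
2484 Jps'

status=0
for d in NameNode SecondaryNameNode ResourceManager; do
    if printf '%s\n' "$jps_out" | grep -qw "$d"; then
        echo "$d: running"
    else
        echo "$d: NOT running"
        status=1
    fi
done
```

Swapping the daemon list for "DataNode NodeManager" gives the corresponding check for the slave nodes.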

(Screenshot: NameNode web UI "Datanode Information" page listing the data nodes and their capacity, and the YARN applications page.)

Check the processes with jps.
If the master host runs ResourceManager, SecondaryNameNode and NameNode, the start succeeded, for example:
2212 ResourceManager
2484 Jps
1917 NameNode
2078 SecondaryNameNode

If each slave host runs DataNode and NodeManager, the start succeeded, for example:
17153 DataNode
17334 Jps
17241 NodeManager

Stop Hadoop:
# stop-all.sh

4. Install and configure ZooKeeper

4.1 Configure the ZooKeeper environment variables
vi /etc/profile
export ZOOKEEPER_HOME=/home/hadoop/zookeeper-3.4.6
export PATH=$ZOOKEEPER_HOME/bin:$PATH
source /etc/profile

4.2 Configure ZooKeeper
1. Download ZooKeeper from the official site: /apache/zookeeper/zookeeper-3.4.6/
2. Deploy ZooKeeper on slave1, slave2 and slave3 (the example IP addresses are truncated in the source).
3. Upload zookeeper-3.4.6.tar.gz to any one server's root directory and unpack it:
tar zxvf zookeeper-3.4.6.tar.gz -C /home/hadoop
4. Create a zookeeper-data directory under the zookeeper directory, and copy conf/zoo_sample.cfg to zoo.cfg:
cp /home/hadoop/zookeeper-3.4.6/conf/zoo_sample.cfg zoo.cfg
5. Edit zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/hadoop/zookeeper-3.4.6/zookeeper-data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# /doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=slave3:2888:3888
6. Copy the zookeeper directory to the other two servers:
scp -r /home/hadoop/zookeeper-3.4.6 slave2:/home/hadoop
scp -r /home/hadoop/zookeeper-3.4.6 slave3:/home/hadoop
On each server, create a myid file in the zookeeper-data directory whose content matches that host's server.* entry:
the myid on server.1 contains 1
the myid on server.2 contains 2
the myid on server.3 contains 3
7. Start the ZooKeeper service on every node:
cd /home/hadoop/zookeeper-3.4.6
bin/zkServer.sh start
8. Check the cluster status on every server to make sure the cluster came up; the output shows which nodes are followers and which one is the leader:
bin/zkServer.sh status

5. Install and configure HBase

Install HBase
Upload hbase-1.2.1-bin.tar.gz to root's home directory, then:
tar -zxvf hbase-1.2.1-bin.tar.gz -C /home/hadoop
mkdir /home/hadoop/hbase-1.2.1/logs

Configure the HBase environment variables
vi /etc/profile
export HBASE_HOME=/home/hadoop/hbase
export PATH=$PATH:$HBASE_HOME/bin
source /etc/profile

Configure hbase-env.sh
# vi /home/hadoop/hbase-1.2.1/conf/hbase-env.sh
export JAVA_HOME=/home/jdk1.8.0_121
export HBASE_LOG_DIR=${HBASE_HOME}/logs
export HBASE_MANAGES_ZK=false

Configure regionservers
# vi /home/hadoop/hbase-1.2.1/conf/regionservers
Delete
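The myid convention from the ZooKeeper section (each host's myid equals its id in the zoo.cfg server.N lines) can be derived mechanically rather than typed by hand. A sketch over the three server lines shown above; the sed pattern and inline cfg string are illustrative.

```shell
#!/bin/sh
# Sketch: print "hostname id" pairs from zoo.cfg server.N entries,
# i.e. what each host's zookeeper-data/myid file should contain.
cfg='server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=slave3:2888:3888'

# turn "server.N=host:port:port" into "host N"
pairs=$(printf '%s\n' "$cfg" | sed -n 's/^server\.\([0-9][0-9]*\)=\([^:]*\):.*/\2 \1/p')
printf '%s\n' "$pairs"
# On each host one would then write its own id, e.g. on slave1:
#   echo 1 > /home/hadoop/zookeeper-3.4.6/zookeeper-data/myid
```

A mismatch between a host's myid and its server.N line is a common cause of a ZooKeeper node refusing to join the quorum, so deriving both from the same zoo.cfg avoids it.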