Hadoop/Spark Cluster Deployment Manual

1 Software Environment Overview

Component  DashDB01 (Master)  spark01 (Slave)  spark02 (Slave)  spark03 (Slave)
JDK        √                  √                √                √
Hadoop     √ (Master)         √ (Slave)        √ (Slave)        √ (Slave)
Hive       √
Scala      √                  √                √                √
Spark      √ (Master)         √ (Worker)       √ (Worker)       √ (Worker)

2 Installation Package Download Paths

System name / package name / download path:

Spark open-source software /

.tar.gz

.tar.gz

3 Hadoop 2.2 Installation and Configuration

3.1 Cluster Network Environment

The node IP addresses and hostnames are distributed as follows:

IP  HostName  User

DashDB01.yun  vod

spark01.yun  vod

spark02.yun  vod

spark03.yun  vod

3.2 Environment Setup (perform on every machine)

3.2.1 Change the HostName (optional)

vim /etc/sysconfig/network

Set HOSTNAME to the desired name, then reboot the server for the change to take effect:

reboot
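The edit can also be scripted. A minimal sketch that works on a temporary copy of /etc/sysconfig/network instead of the real file (the target name spark01.yun is just one of this cluster's hosts, taken as an example):

```shell
# work on a temp copy of /etc/sysconfig/network (CentOS 6 layout)
netcfg=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=localhost.localdomain\n' > "$netcfg"

# rewrite the HOSTNAME line to the desired node name
sed -i 's/^HOSTNAME=.*/HOSTNAME=spark01.yun/' "$netcfg"

grep '^HOSTNAME=' "$netcfg"
```

On the real file the same sed line (run as root) replaces the manual vim edit; the reboot is still required.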

3.2.2 Set Up the Hosts Mapping File

1. As root, edit the /etc/hosts mapping file and set the IP-address-to-hostname mappings:

vim /etc/hosts

127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4 centos centos.yun
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
4  DashDB01.yun
5  spark01.yun
6  spark02.yun
7  spark03.yun
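The four mappings can be appended in one step. A sketch that writes to a temporary stand-in for /etc/hosts; the IP addresses below are placeholders, since the real addresses are not given in this manual:

```shell
hosts=$(mktemp)   # stand-in for /etc/hosts

# node list as "IP hostname" pairs (IPs here are placeholders)
cat >> "$hosts" <<'EOF'
172.16.158.24 DashDB01.yun
172.16.158.25 spark01.yun
172.16.158.26 spark02.yun
172.16.158.27 spark03.yun
EOF

# sanity check: every cluster hostname appears in the file
for h in DashDB01.yun spark01.yun spark02.yun spark03.yun; do
    grep -q " $h\$" "$hosts" || echo "missing: $h"
done
```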

2. Restart the network service with the following command:

/etc/init.d/network restart

[root@DashDB01 common]# /etc/init.d/network restart
Shutting down interface eth0:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:  Determining if ip address is already in use for device eth0...
                                                           [  OK  ]
[root@DashDB01 common]#

3. Verify that the settings work:

[vod@DashDB01 ~]$ ping spark01.yun
PING spark01.yun 56(84) bytes of data.
64 bytes from spark01.yun: icmp_seq=1 ttl=64 time=2.07 ms
64 bytes from spark01.yun: icmp_seq=2 ttl=64 time=0.299 ms

3.2.3 Operating System Settings

3.2.3.1 Disable the Firewall

The firewall and SELinux must be disabled during the Hadoop installation, or errors will occur.

1. Check the firewall state with service iptables status; output like the following means iptables is running:

[root@hadoop1 hadoop]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target  prot  opt  source     destination
1    ACCEPT  all   --   0.0.0.0/0  0.0.0.0/0   state RELATED,ESTABLISHED
2    ACCEPT  icmp  --   0.0.0.0/0  0.0.0.0/0
3    ACCEPT  all   --   0.0.0.0/0  0.0.0.0/0
4    ACCEPT  tcp   --   0.0.0.0/0  0.0.0.0/0   state NEW tcp dpt:22
5    REJECT  all   --   0.0.0.0/0  0.0.0.0/0   reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
num  target  prot  opt  source     destination
1    REJECT  all   --   0.0.0.0/0  0.0.0.0/0   reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
num  target  prot  opt  source     destination

2. As root, disable iptables (this turns it off at boot; run service iptables stop as well to stop it immediately):

chkconfig iptables off
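Since the firewall has to be off on every node, the per-host commands can be generated in a loop. A dry-run sketch that only prints what would be executed over ssh (the hostnames are the ones from section 3.1):

```shell
# dry run: echo each command instead of running it over ssh
for h in DashDB01.yun spark01.yun spark02.yun spark03.yun; do
    echo "ssh root@$h 'chkconfig iptables off && service iptables stop'"
done
```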

3.2.3.2 Disable SELinux

1. Check whether it is enabled with the getenforce command:

[root@hadoop1 hadoop]# getenforce
Enforcing

2. Edit the /etc/selinux/config file, changing SELINUX=enforcing to SELINUX=disabled; reboot the machine for the change to take effect:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#   targeted - Targeted processes are protected,
#   mls - Multi Level Security protection.
SELINUXTYPE=targeted
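The same change can be made non-interactively. A sketch against a temporary copy of /etc/selinux/config:

```shell
# mimic /etc/selinux/config in a temp file
secfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$secfg"

# flip enforcing -> disabled (on the real file this takes effect after reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$secfg"

grep '^SELINUX=' "$secfg"
```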

3.2.3.3 Install and Configure the JDK

Grant the vod user read/write access to the /usr/lib/java directory:

sudo chmod -R 777 /usr/lib/java

Upload the downloaded package to the /usr/lib/java directory and unpack it:

tar -zxvf jdk-7u55-linux-x64.tar.gz

The unpacked directory looks like this:

[vod@DashDB01 java]$ pwd
/usr/lib/java
[vod@DashDB01 java]$ ll
total 134988
drwxr-xr-x 8 vod admins      4096 Mar 18  2014 jdk1.7.0_55
-rw-r--r-- 1 vod admins 138220064 Oct 14 09:46 jdk-7u55-linux-x64.tar.gz

As root, configure /etc/profile; this setting applies to all users:

vim /etc/profile

Add the following:

export JAVA_HOME=/usr/lib/java/jdk1.7.0_55
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

When done, make it take effect and verify:

source /etc/profile
java -version

[vod@DashDB01 java]$ java -version
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
[vod@DashDB01 java]$ echo $JAVA_HOME
/usr/lib/java/jdk1.7.0_55
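A quick way to confirm the profile additions before logging out is to source them in a subshell. A sketch using a temp file in place of /etc/profile:

```shell
profile=$(mktemp)   # stand-in for the /etc/profile additions
cat > "$profile" <<'EOF'
export JAVA_HOME=/usr/lib/java/jdk1.7.0_55
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
EOF

# source in a subshell so the caller's environment is untouched
(
  . "$profile"
  echo "$JAVA_HOME"
  case ":$PATH:" in
    *":$JAVA_HOME/bin:"*) echo "PATH contains JAVA_HOME/bin" ;;
  esac
)
```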

3.2.3.4 Update OpenSSL

yum update openssl

3.2.3.5 Passwordless SSH Configuration

1. As root, open the sshd configuration file with vim /etc/ssh/sshd_config and enable the following settings:

RSAAuthentication yes
PubkeyAuthentication yes
StrictModes no
AuthorizedKeysFile .ssh/authorized_keys

2. Restart the service after configuring:

service sshd restart

[root@hadoop1 hadoop]# service sshd restart
Stopping sshd:                  [  OK  ]
Starting sshd:                  [  OK  ]
[root@hadoop1 hadoop]#

3. As root on each of the 4 nodes, create the .ssh directory under /home/common:

mkdir .ssh

4. As the vod user on each of the 4 nodes, take ownership of the directory and generate the private/public key pair:

sudo chown -R vod .ssh
ssh-keygen -t rsa

[vod@DashDB01 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/common/.ssh/id_rsa):
/home/common/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/common/.ssh/id_rsa.
Your public key has been saved in /home/common/.ssh/id_rsa.pub.
The key fingerprint is:
6d:a3:33:8c:14:1e:9b:ff:26:44:3d:9b:ac:eb:d6:12 vod@DashDB01.yun

5. On each of the 4 nodes, enter /home/common/.ssh and copy the public key to a per-host name, for example:

cp id_rsa.pub authorized_keys_DashDB01.yun

so that the public keys are named:

authorized_keys_DashDB01.yun
authorized_keys_spark01.yun
authorized_keys_spark02.yun
authorized_keys_spark03.yun

[vod@spark03 ~]$ cd .ssh
[vod@spark03 .ssh]$ ll
total 8
-rw------- 1 vod admins 1675 Oct 13 17:55 id_rsa
-rw-r--r-- 1 vod admins  397 Oct 13 17:55 id_rsa.pub
[vod@spark03 .ssh]$ cp id_rsa.pub authorized_keys_spark03

6. Use scp to send the public keys of the 3 slave nodes (spark01, spark02, spark03) to the /home/common/.ssh folder on the DashDB01.yun node:

scp authorized_keys_spark01 vod@DashDB01.yun:/home/common/.ssh

[vod@spark01 .ssh]$ scp authorized_keys_spark01 vod@DashDB01.yun:/home/common/.ssh
The authenticity of host 'dashdb01.yun' can't be established.
RSA key fingerprint is 76:98:61:09:6a:6b:b6:f3:2e:95:98:b7:08:5c:26:78.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'dashdb01.yun' (RSA) to the list of known hosts.
vod@dashdb01.yun's password:
authorized_keys_spark01                       100%  397     0.4KB/s
[vod@spark01 .ssh]$

The files on the DashDB01.yun node then look like this:

[vod@DashDB01 .ssh]$ ll
total 24
-rw-r--r-- 1 vod admins  398 Oct 13 17:56 authorized_keys_master
-rw-r--r-- 1 vod admins  397 Oct 13 17:59 authorized_keys_spark01
-rw-r--r-- 1 vod admins  397 Oct 13 18:02 authorized_keys_spark02
-rw-r--r-- 1 vod admins  397 Oct 13 18:01 authorized_keys_spark03
-rw------- 1 vod admins 1675 Oct 13 17:52 id_rsa
-rw-r--r-- 1 vod admins  398 Oct 13 17:52 id_rsa.pub
[vod@DashDB01 .ssh]$

7. Save the public key information of all 4 nodes into the authorized_keys file, using commands of the form cat authorized_keys_DashDB01.yun >> authorized_keys:

[vod@DashDB01 .ssh]$ cat authorized_keys_master  >> authorized_keys
[vod@DashDB01 .ssh]$ cat authorized_keys_spark01 >> authorized_keys
[vod@DashDB01 .ssh]$ cat authorized_keys_spark03 >> authorized_keys
[vod@DashDB01 .ssh]$ cat authorized_keys_spark02 >> authorized_keys
[vod@DashDB01 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3Nza... vod@DashDB01.yun
ssh-rsa AAAAB3Nza... vod@spark01.yun
ssh-rsa AAAAB3Nza... vod@spark03.yun
ssh-rsa AAAAB3Nza... vod@spark02.yun
[vod@DashDB01 .ssh]$
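Step 7's four cat commands collapse into one loop. A sketch run against dummy key files in a temp directory (the key material below is fake):

```shell
keydir=$(mktemp -d)
cd "$keydir"

# create dummy per-host public key files, named as in step 5
for h in DashDB01 spark01 spark02 spark03; do
    echo "ssh-rsa FAKEKEYDATA-$h vod@$h.yun" > "authorized_keys_$h.yun"
done

# concatenate all of them into authorized_keys
cat authorized_keys_*.yun >> authorized_keys

wc -l authorized_keys   # one line per node's key
```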

8. Distribute the file to the slave nodes, using scp authorized_keys vod@spark01.yun:/home/common/.ssh and likewise for the other two:

[vod@DashDB01 .ssh]$ scp authorized_keys vod@spark01.yun:/home/common/.ssh
The authenticity of host 'spark01.yun' can't be established.
RSA key fingerprint is f7:de:20:93:44:33:76:2e:bd:0c:29:3d:b0:6f:37:cc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'spark01.yun' (RSA) to the list of known hosts.
vod@spark01.yun's password:
authorized_keys                               100% 1589     1.6KB/s
[vod@DashDB01 .ssh]$ scp authorized_keys vod@spark02.yun:/home/common/.ssh
The authenticity of host 'spark02.yun' can't be established.
RSA key fingerprint is 45:46:e2:ba:a2:7f:08:1d:8b:ba:ed:11:4c:27:ab:0e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'spark02.yun' (RSA) to the list of known hosts.
vod@spark02.yun's password:
authorized_keys                               100% 1589     1.6KB/s
[vod@DashDB01 .ssh]$ scp authorized_keys vod@spark03.yun:/home/common/.ssh
The authenticity of host 'spark03.yun' can't be established.
RSA key fingerprint is 6a:d3:e4:a4:21:52:7b:f7:84:a1:61:f0:3b:0c:89:8b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'spark03.yun' (RSA) to the list of known hosts.
vod@spark03.yun's password:
authorized_keys                               100% 1589     1.6KB/s
[vod@DashDB01 .ssh]$

The .ssh directory on each of the other three machines then contains:

[vod@spark01 .ssh]$ ll
total 20
-rw-r--r-- 1 vod admins 1589 Oct 13 18:04 authorized_keys
-rw-r--r-- 1 vod admins  397 Oct 13 17:56 authorized_keys_spark01
-rw------- 1 vod admins 1675 Oct 13 17:54 id_rsa
-rw-r--r-- 1 vod admins  397 Oct 13 17:54 id_rsa.pub
-rw-r--r-- 1 vod admins  408 Oct 13 17:59 known_hosts

9. On all 4 machines, set the read/write permissions of authorized_keys:

chmod 775 authorized_keys

10. Test that passwordless ssh login works:

[vod@DashDB01 .ssh]$ ssh spark01.yun
Last login: Tue Oct 13 18:06:43 2015 from dashdb01.yun
[vod@spark01 ~]$ exit
logout
Connection to spark01.yun closed.
[vod@DashDB01 .ssh]$ ssh spark02.yun
Last login: Tue Oct 13 18:06:49 2015 from dashdb01.yun
[vod@spark02 ~]$ exit
logout
Connection to spark02.yun closed.
[vod@DashDB01 .ssh]$ ssh spark03.yun
Last login: Tue Oct 13 17:10:23 2015
[vod@spark03 ~]$ exit
logout
Connection to spark03.yun closed.

3.3 Configure Hadoop

3.3.1 Prepare the Hadoop Files

1. Move the hadoop-2.2.0 directory to /usr/local:

cd /home/hadoop/Downloads/
sudo cp -r hadoop-2.2.0 /usr/local

2. Use chown to recursively change the owner of the directory to vod:

sudo chown -R vod /usr/local/hadoop-2.2.0

3.3.2 Create Subdirectories under the Hadoop Directory

As the vod user, create the tmp, name and data directories under the Hadoop directory, making sure they are owned by vod:

cd /usr/local/hadoop-2.2.0
mkdir tmp
mkdir name
mkdir data
ls

[vod@DashDB01 hadoop-2.2.0]$ ll
total 40
drwxr-xr-x 2 vod admins 4096 Oct 14 09:31 bin
drwxr-xr-x 2 vod admins 4096 Oct 14 09:34 data
drwxr-xr-x 3 vod admins 4096 Oct 14 09:31 etc
drwxr-xr-x 2 vod admins 4096 Oct 14 09:31 include
drwxr-xr-x 3 vod admins 4096 Oct 14 09:31 lib
drwxr-xr-x 2 vod admins 4096 Oct 14 09:31 libexec
drwxr-xr-x 2 vod admins 4096 Oct 14 09:34 name
drwxr-xr-x 2 vod admins 4096 Oct 14 09:31 sbin
drwxr-xr-x 4 vod admins 4096 Oct 14 09:31 share
drwxr-xr-x 2 vod admins 4096 Oct 14 09:34 tmp

Configure /etc/profile with vim:

vim /etc/profile

Add the following:

export HADOOP_HOME=/usr/local/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop

Make it take effect with:

source /etc/profile

3.3.3 Configure hadoop-env.sh

1. Open the configuration file:

cd etc/hadoop
sudo vim hadoop-env.sh

2. Add the JDK path and the hadoop/bin path:

export JAVA_HOME=/usr/lib/java/jdk1.7.0_55
export PATH=$PATH:/usr/local/hadoop-2.2.0/bin

3. Source the file and confirm that it takes effect:

source hadoop-env.sh

3.3.4 Configure yarn-env.sh

1. Open the configuration file:

sudo vim yarn-env.sh

2. Add the JDK path:

export JAVA_HOME=/usr/lib/java/jdk1.7.0_55

3. Source the file and confirm that it takes effect:

source yarn-env.sh

3.3.5 Configure core-site.xml

1. Open the configuration file:

sudo vim core-site.xml

2. Configure it as follows:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://4:9000</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://4:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value></value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.hduser.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hduser.groups</name>
<value>*</value>
</property>
</configuration>
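Typos in these XML files are a common source of startup failures, and even a crude balance check catches many of them. A sketch that writes a minimal core-site.xml to a temp directory (the master address DashDB01.yun:9000 is an assumption here, since the real address is elided in this manual) and checks that every <property> is closed:

```shell
confdir=$(mktemp -d)

# minimal example config; the hdfs:// host is a placeholder
cat > "$confdir/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://DashDB01.yun:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
EOF

# crude well-formedness check: opening and closing tags must balance
opens=$(grep -c '<property>' "$confdir/core-site.xml")
closes=$(grep -c '</property>' "$confdir/core-site.xml")
[ "$opens" -eq "$closes" ] && echo "balanced: $opens properties"
```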

3.3.6 Configure hdfs-site.xml

1. Open the configuration file:

sudo vim hdfs-site.xml

2. Configure it as follows:

<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>4:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value></value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value></value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>

3.3.7 Configure mapred-site.xml

1. By default mapred-site.xml does not exist; copy it from the template:

cp mapred-site.xml.template mapred-site.xml

2. Open the configuration file:

sudo vim mapred-site.xml

3. Configure it as follows:

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>4:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>4:19888</value>
</property>
</configuration>

3.3.8 Configure yarn-site.xml

1. Open the configuration file:

sudo vim yarn-site.xml

2. Configure it as follows:

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>:8088</value>
</property>
</configuration>

3.3.9 Configure the slaves File

1. Set the slave nodes:

sudo vim slaves

Change the contents to:

spark01.yun
spark02.yun
spark03.yun
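The slaves file is also a convenient driver for the distribution step in 3.3.10. A dry-run sketch that reads a temp copy of the file and prints the scp commands instead of executing them (the target path /usr/local is taken from this manual; adjust as needed):

```shell
workdir=$(mktemp -d)
printf 'spark01.yun\nspark02.yun\nspark03.yun\n' > "$workdir/slaves"

# dry run: print one scp command per slave listed in the file
while read -r host; do
    echo "scp -r /usr/local/hadoop-2.2.0 vod@$host:/usr/local"
done < "$workdir/slaves"
```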

3.3.10 Distribute the Hadoop Program to All Nodes

1. On the spark01.yun, spark02.yun and spark03.yun machines, create the target directory, then change its ownership and permissions:

sudo chown -R vod
sudo

2. On the DashDB01.yun machine, enter the directory and copy the hadoop folder to the other 3 machines, using:

scp -r

3. Check on the slave nodes that the copy succeeded.

4. On every node, configure /etc/profile with vim:

vim /etc/profile

Add the following:

export HADOOP_HOME=/usr/local/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop

Make it take effect with:

source /etc/profile

3.4 Start Hadoop

3.4.1 Format the NameNode

./bin/hdfs namenode -format

[hadoop@hadoop1 /usr/local/hadoop-2.2.0]$ ls
bin  data  etc  include  lib  libexec  LICENSE.txt  name  NOTICE.txt  README.txt  sbin  share  tmp
[hadoop@hadoop1 /usr/local/hadoop-2.2.0]$ ./bin/hdfs namenode -format

14/09/24 10:12:00 INFO namenode.NameNode: STARTUP_MSG:
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
14/09/24 10:12:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/09/24 10:12:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/09/24 10:12:17 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/09/24 10:12:17 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/09/24 10:12:17 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/09/24 10:12:17 INFO util.GSet: VM type = 64-bit
14/09/24 10:12:18 INFO common.Storage: Storage directory /usr/local/hadoop-2.2.0/name has been successfully formatted.
14/09/24 10:12:18 INFO namenode.FSImage: Saving image file /usr/local/hadoop-2.2.0/name/current/fsimage.ckpt_0000000000000000000 using no compression
14/09/24 10:12:18 INFO namenode.FSImage: Image file of size 198 bytes saved in 0 seconds.
14/09/24 10:12:18 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/09/24 10:12:18 INFO util.ExitUtil: Exiting with status 0
14/09/24 10:12:18 INFO namenode.NameNode: SHUTDOWN_MSG:
SHUTDOWN_MSG: Shutting down NameNode at hadoop1

3.4.2 Start Hadoop

cd /usr/local/hadoop-2.2.0/sbin
./start-all.sh

3.4.3 Verify the Running Processes

Run the jps command.

On DashDB01.yun the running processes are: NameNode, SecondaryNameNode, ResourceManager.

On spark01.yun, spark02.yun and spark03.yun the running processes are: DataNode, NodeManager.
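The jps check is easy to script. A sketch that parses sample jps output, captured here as a string since no daemons run in this sketch, and flags any missing master-side process:

```shell
# sample output in the format `jps` prints on the master node
jps_out='1234 NameNode
1345 SecondaryNameNode
1456 ResourceManager
1567 Jps'

for proc in NameNode SecondaryNameNode ResourceManager; do
    if printf '%s\n' "$jps_out" | grep -qw "$proc"; then
        echo "$proc: running"
    else
        echo "$proc: MISSING"
    fi
done
```

On the slave nodes the expected list would be DataNode and NodeManager instead.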

4 Hive Installation and Configuration

4.1 Copy the Project

Change the ownership and permissions of the folder:

sudo chown -R vod /usr/local/
sudo chmod 775 -R /usr/local/

Configure /etc/profile with vim:

export HIVE_HOME=/usr/local
export PATH=$HIVE_HOME/bin:$PATH
export HIVE_CONF_DIR=$HIVE_HOME/conf

source /etc/profile

4.2 Configure Hive (Using a MySQL Metastore)

Prerequisite: create the hive user in the MySQL database and grant it the required privileges:

mysql> CREATE USER 'hive' IDENTIFIED BY 'mysql';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
mysql> flush privileges;

cd $HIVE_CONF_DIR/
cp hive-default.xml.template hive-site.xml
vim hive-site.xml

Modify the following parameters:

<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://50:3306/hive?createDatabaseIfNotExist=true</value>

<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>

<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>

<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>

Then run:

chmod 775 -R /usr/local/hive-/

4.3 Start HiveServer2 (in the Background)

cd $HIVE_HOME/bin
nohup hive --service hiveserver2 &

Test with netstat -an | grep 10000, or with a JDBC connection.

4.4 Test

Run the hive command to start Hive:

hive> show tables;
OK
Time taken: ...
