RHCA 436 (RH436) Lab Manual
Topics covered: iSCSI and multipath storage, udev, Red Hat HA cluster (Conga, quorum, fencing), rgmanager services, two-node HA with a quorum disk, advanced LVM and HA-LVM, GFS2, plus an appendix.

Prerequisites: nested virtualization. Since VMware Workstation 9 the software-emulated VT mode seems to have been removed, so to nest virtualization the CPU must support EPT. EPT is hardware-assisted memory virtualization (memory virtualization is essentially guest-to-host address translation), and recent Intel CPUs support it. I have no AMD CPU, so I cannot speak for AMD. In short, the CPU needs the VT-x and EPT instruction sets. VMware Workstation, VirtualBox or KVM can all be used. You can reuse the RH401 course instructor machine and configure the DNS service for the lab domain on the server.

iSCSI

An IQN (iSCSI Qualified Name) has the format iqn.YYYY-MM.<reversed domain name>:<target name>. The target is the machine that provides the iSCSI service.

Configure the target on node4:

[root@node4 ~]# yum -y install scsi-target-utils
# carve three partitions on sda (2G, 10G, 10G) and set their type to LVM (8e)
[root@node4 ~]# fdisk -l
Disk /dev/sda5: ... 255 heads, 63 sectors/track, 261 cylinders
Disk /dev/sda6: 10.7 GB, 10742183424 bytes, 255 heads, 63 sectors/track, 1305 cylinders
[root@node4 ~]# pvcreate /dev/sda6
  Writing physical volume data to disk
[root@node4 ~]# vgcreate vgsrv /dev/sda6
[root@node4 ~]# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  vgsrv   1   0   0 wz--n- 10.00g 10.00g
[root@node4 ~]# lvcreate -L 1G -n example vgsrv
[root@node4 ~]# lvs
  LV      VG    Attr   LSize Origin Snap% Move Log Copy% Convert
  example vgsrv -wi-a- 1.00g
[root@node4 ~]# ll /dev/vgsrv/example
lrwxrwxrwx 1 root root 7 Aug 6 03:07 /dev/vgsrv/example -> ../dm-...
[root@node4 ~]# vim /etc/tgt/targets.conf       # define the target (around line 38)
[root@node4 ~]# service tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@node4 ~]# tgt-admin show
No action specified.
[root@node4 ~]# tgt-admin -s
Target 1: ...
    System information:
        Driver: iscsi
    I_T nexus information:
    LUN information:
        LUN: 0
            SCSI ID: ...
            SCSI SN: ...
            Size: 0 MB, Block size: ...        # the controller LUN has no storage behind it, so its size is 0 MB
            Online: Yes
            Readonly: No
            Backing store type: null
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET ...
            SCSI SN: ...
            Online: Yes
            Readonly: No
            Backing store type: ...
            Backing store flags:
    ACL information:

Configure the initiator on node1:

[root@node1 ~]# yum -y install iscsi-initiator-utils
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p <target portal>
[root@node1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: ...example:exampletarget, portal: ...,3260]
Login to [iface: default, target: ...example:exampletarget, portal: ...,3260] successful.

Three ways to check the attached iSCSI LUN:

[root@node1 ~]# fdisk -l /dev/sdb
[root@node1 ~]# ls -l /dev/disk/by-path/
total 0
lrwxrwxrwx 1 root root  9 Aug 6 02:03 pci-0000:00:10.0-scsi-0:0:0:0       -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 6 02:03 pci-0000:00:10.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 6 02:03 pci-0000:00:10.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Aug 6 02:03 pci-0000:00:10.0-scsi-0:0:0:0-part3 -> ../../sda3
[root@node1 ~]# iscsiadm -m session -P 1
    Current Portal: ...,3260,1
    Iface Netdev: <empty>
    SID: 1
    iSCSI Session State: LOGGED_IN
    Internal iscsid Session State: NO CHANGE

In real work it is best to change Iface.Initiatorname to something related to the host name; it makes management much easier. To change it, log out, remove the cached records, stop the initiator, edit the name and rediscover:

[root@node1 ~]# iscsiadm -m node -u
Logging out of session [sid: 1, target: ...example:exampletarget, portal: ...,3260]
Logout of [sid: 1, target: ...example:exampletarget, portal: ...,3260] successful.
[root@node1 ~]# rm -rf ...                       # remove the cached discovery/node records
# stop the iSCSI initiator service, then edit the initiator name
[root@node1 ~]# cat /etc/iscsi/initiatorname.iscsi
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p <target portal>
[root@node1 ~]# iscsiadm -m session -P 1
    Current Portal: ...
    Iface HWaddress: <empty>
    Iface Netdev: <empty>
    SID: ...
    iSCSI Session State: LOGGED_IN
    Internal iscsid Session State: NO CHANGE

backing-store vs direct-store: everywhere on the internet people repeat that direct-store is meant for exporting a whole disk, but nobody mentions that the parameter does not actually take effect inside a virtual machine. If you want to test direct-store, use a physical machine.

# cat /etc/tgt/targets.conf  (excerpt)
    backing-store /dev/sda5

After rediscovery and login from node1, the new LUN appears:

[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p <target portal>:3260
[root@node1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: ...example.node4:sdbtarget, portal: ...]
Login to [iface: default, target: ...example.node4:sdbtarget, portal: ...] successful.

vendor-id "RHCA Inc" and scsi_sn s01 set in the target definition are visible on the initiator side, which helps identify LUNs. initiator-address restricts which clients may attach the LUN; if it is not configured, any machine can attach.

After changing the configuration tgtd must be restarted, but while a LUN is still connected the restart fails:

Stopping SCSI target daemon: initiators still connected    [FAILED]
Starting SCSI target daemon:                               [  OK  ]

So log out on node1, restart tgtd on node4, then log back in:

[root@node1 ~]# iscsiadm -m node -u
Logging out of ...
[root@node1 ~]# iscsiadm -m node -T ...example:exampletarget -l

That is tedious. The client has two init.d scripts, /etc/init.d/iscsi and /etc/init.d/iscsid; iscsi is the one that attaches and detaches the LUNs, so it is enough to stop iscsi on node1, restart tgtd on node4, and start iscsi again:

# node1: stop iscsi                                        [  OK  ]
# node4:
[root@node4 ~]# service tgtd restart
Stopping SCSI target daemon:                               [  OK  ]
Starting SCSI target daemon:                               [  OK  ]
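For reference, the pieces of the target definition discussed above can be collected into a single stanza in /etc/tgt/targets.conf. The sketch below is illustrative only: the IQN and the initiator addresses are placeholders, and the exact spelling of the identification keys follows the lab text (check the targets.conf(5) examples on your system before relying on them):

# /etc/tgt/targets.conf -- minimal sketch (illustrative IQN and addresses)
<target iqn.2013-08.com.example:exampletarget>
    # export the LVM volume created above as LUN 1
    backing-store /dev/vgsrv/example
    # identification strings shown to the initiator (spelling as used in this lab)
    vendor-id RHCA Inc
    scsi_sn s01
    # only these initiators may log in; omit the lines to allow everyone
    initiator-address 192.168.0.1
    initiator-address 192.168.0.2
</target>

After editing the file, make sure no initiator is still logged in, then restart tgtd and verify the result with tgt-admin -s as shown above.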
After adjusting the target and restarting tgtd, tgt-admin -s on node4 shows the LUN again (SCSI SN: s01, Online: Yes, Readonly: No) together with the Account and ACL information, and node1 can attach it.                                                         [  OK  ]

Parted

In real work, a single LUN on professional storage is often far larger than 2 TB, which requires GPT partitioning. fdisk cannot create GPT partitions, so parted must be used. Example layout on the iSCSI disk: partition 1 from 1049 kB to 200 MB, partition 2 from 200 MB to 300 MB.

UDEV

On node4, create an LVM volume while udevadm monitor is running, to watch the events being generated:

[root@node4 ~]# lvcreate -L 3G -n udevtest vgsrv
  Logical volume "udevtest" created
[root@node4 ~]# udevadm monitor
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent
KERNEL [...032952] add /devices/virtual/bdi/253:1
UDEV   [...033720] add /devices/virtual/bdi/253:1
KERNEL [...033798] add /devices/virtual/block/dm-1
KERNEL [...033973] add /devices/virtual/block/dm-1
KERNEL [...034409] ...
UDEV   [...047874] add /devices/virtual/block/dm-1

On node2, inspect the attributes that udev rules can match on:

[root@node2 ~]# udevadm info -a -p /sys/block/sdb
Udevadm info starts with the device specified by the devpath and then walks up the chain of parent devices. It prints for every device found, all possible attributes in the udev rules key format. A rule to match, can be composed by the attributes of the device and the attributes from one single parent device.
    ATTRS{vendor}=="VMware,"
    ATTRS{model}=="VMware Virtual S"
    ATTRS{rev}=="1.0"

On node2, attach the iSCSI LUN configured in the previous chapter and write a rule for it:

[root@node2 ~]# cat /etc/udev/rules.d/80-...
ACTION=="add", SUBSYSTEM=="block", DRIVERS=="sd", ENV{ID_MODEL}=="VIRTUAL-DISK", SYMLINK+="iscsi/NETDISK%n", MODE="0644"

The SUBSYSTEM and DRIVERS values come from udevadm info -a -p /sys/block/sdb; ID_MODEL can be found with udevadm info --export-db | grep -A10 sdb.

[root@node2 ~]# /etc/init.d/iscsi restart
[root@node2 ~]# ll /dev/iscsi/
total 0
lrwxrwxrwx 1 root root 6 Aug 11 05:04 NETDISK  -> ../sdb
lrwxrwxrwx 1 root root 7 Aug 11 05:04 NETDISK1 -> ../sdb1

sysfs was developed to move device information out of /proc and improve on it:

[root@ash6020 sys]# ls
block bus class dev devices firmware fs hypervisor kernel module power

Multipath

Multipath can run in active-active mode (redundancy plus load sharing across both links) or active-standby mode (only one link carries I/O). On node4, keep the target definition (vendor-id RHCA Inc., scsi_sn s01, the initiator-address entries) but publish it through two portals; restart /etc/init.d/iscsi on node1/2/3 and restart tgtd on node4.

On node2, discover the LUN through both portals so it shows up twice, then restart iscsi:

[root@node2 ~]# iscsiadm -m discovery -t sendtargets -p <portal 1>
[root@node2 ~]# iscsiadm -m discovery -t sendtargets -p <portal 2>
# restart iscsi
[root@node2 ~]# ls /dev/disk/by-path/

Install device-mapper-multipath and create /etc/multipath.conf (user_friendly_names can be toggled there). The defaults section applies to every device added to multipath; blacklist lists devices that must not be handled by multipath:

[root@node2 ~]# cat /etc/multipath.conf | egrep -v "^#|^$"
defaults {
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        ...
}

The getuid_callout line simply retrieves the WWID of the device.

[root@node2 ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root ...  7 Aug 12 02:47 1exampletarget -> ../dm-...
crw-rw---- 1 root root 10, 58 Aug 12  2013 control

With user_friendly_names set to yes the map is named mpatha instead:

[root@node2 ~]# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Aug 12  2013 control
lrwxrwxrwx 1 root ...  7 Aug 12 02:58 mpatha -> ../dm-...

To give the map a fixed alias, add a multipaths section:

[root@node2 ~]# cat /etc/multipath.conf | egrep -v "^#|^$"
defaults {
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
}
multipaths {
        multipath {
                wwid  <WWID of the LUN>
                alias rhca-iscsidisk
        }
}
blacklist {
}

The wwid value in multipaths is obtained with /lib/udev/scsi_id --whitelisted --device=... (do not append the --re... option here).

[root@node2 ~]# /etc/init.d/multipathd restart
[root@node2 ~]# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Aug 12  2013 control
lrwxrwxrwx 1 root ...  7 Aug 12 03:02 rhca-iscsidisk -> ../dm-...

# if the dm device behind the alias has not changed, flush the maps with multipath -F and let them be rebuilt
# multipath -ll now shows two paths, one in active status and one in enabled status:
[root@node2 ~]# multipath -ll
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 6:0:0:1 sdb 8:16 active ready
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdc 8:32 active ready
[root@node2 ~]# ll /dev/mapper/
crw-rw---- 1 root root 10, 58 Aug 12  2013 control
lrwxrwxrwx 1 root ...  7 Aug 12 03:05 /dev/mapper/rhca-iscsidisk -> ../dm-...
brw-rw---- 1 root disk 253, 1 Aug 12 03:13 /dev/mapper/rhca-...

If the log shows lines such as "/lib/udev/scsi_id exitted with ..." and the map prints wp=undef, re-check the getuid_callout line in /etc/multipath.conf.
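As a quick reference, the sequence below rebuilds the multipath maps after editing /etc/multipath.conf. It is a sketch that assumes the LUN is visible as /dev/sdb and /dev/sdc, as in the output above:

# fetch the WWID that the multipaths{} alias should reference
/lib/udev/scsi_id --whitelisted --device=/dev/sdb

# flush the existing maps so the new alias / user_friendly_names setting takes effect
multipath -F

# rebuild the maps verbosely and restart the daemon that monitors the paths
multipath -v2
/etc/init.d/multipathd restart

# confirm that both paths are grouped under the alias
multipath -ll
ls -l /dev/mapper/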
Create a filesystem on the multipath device, mount it and test path failover:

[root@node2 ~]# mkfs.ext4 /dev/mapper/rhca-iscsidiskp1
[root@node2 ~]# mount /dev/mapper/rhca-iscsidiskp1 /media
[root@node2 ~]# df -h /media/
Filesystem                      Size  Used  Avail  Use%  Mounted on
/dev/mapper/rhca-iscsidiskp1    ...    18M    ...    2%  /media

Take one path down and watch the system log: iscsi detects that one link is broken and, after the replacement timeout, multipath switches to the other path:

Aug 12 03:28:08 node2 kernel: connection4:0: detected conn error
Aug 12 03:28:09 node2 iscsid: Kernel reported iSCSI connection 4:0 error (1011) state ...
Aug 12 03:28:11 node2 iscsid: cannot make a connection to ...:3260 (-1)
Aug 12 03:28:14 node2 iscsid: cannot make a connection to ...:3260 (-1)
Aug 12 03:28:17 node2 iscsid: cannot make a connection to ...:3260 (-1)
Aug 12 03:30:09 node2 kernel: device-mapper: multipath: Failing path ...
Aug 12 03:30:09 node2 multipathd: rhca-iscsidisk: sdb - directio checker reports path is down
Aug 12 03:30:09 node2 multipathd: checker failed path 8:16 in map rhca-iscsidisk
Aug 12 03:30:09 node2 multipathd: rhca-iscsidisk: remaining active paths: 1

# multipath now shows the primary path as failed/faulty
[root@node2 media]# multipath -ll
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=enabled
| `- 6:0:0:1 sdb 8:16 failed faulty
`-+- policy='round-robin 0' prio=1 status=active
  `- 5:0:0:1 sdc 8:32 active ready

# bring eth2 back up; the path returns to active
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 6:0:0:1 sdb 8:16 active ready
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdc 8:32 active ready

The failover delay is governed by the iscsid timeouts; shorten them for faster switching:

[root@node2 media]# cat /etc/iscsi/iscsid.conf | grep timeout | grep -v ^#
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
[root@node2 media]# sed -i '/timeout/s/=.*$/= 5/g' /etc/iscsi/iscsid.conf
[root@node2 media]# cat /etc/iscsi/iscsid.conf | grep timeout | grep -v ^#
node.session.timeo.replacement_timeout = 5
...

Red Hat HA cluster

In this lab, "Power Control" (fencing) is solved with software rather than a real hardware fence device. The Red Hat HA cluster stack consists of Corosync (membership and messaging), modclusterd, and the ricci/luci pair for management. Conga is the management framework: Luci is a Python web application and Ricci is the agent it talks to. Ricci must run on every node; Luci only needs to be installed on one node.

Prepare the nodes (password-less ssh, and disable ACPI so fencing can power nodes off cleanly):

[root@node1 ~]# ssh-keygen -t rsa
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node3
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node4
[root@node1/2/3/4 ~]# /etc/init.d/acpid stop
[root@node1/2/3/4 ~]# chkconfig acpid off

Install the management node (node4):

[root@node4 ~]# yum -y groupinstall "High Availability Management"
[root@node4 ~]# /etc/init.d/luci start
# after installation the PAM stack for luci includes password-auth
[root@node4 ~]# id -a luci
uid=141(luci) gid=141(luci) ...
# log out of the shell and log in to the Luci web interface as root

Install the cluster nodes:

[root@node1 ~]# yum -y groupinstall 'High Availability' 'Resilient Storage'
[root@node2 ~]# yum -y groupinstall 'High Availability' 'Resilient Storage'
[root@node3 ~]# yum -y groupinstall 'High Availability' 'Resilient Storage'
[root@node1/2/3 ~]# chkconfig ricci on
[root@node1/2/3 ~]# /etc/init.d/ricci start
[root@node1/2/3 ~]# netstat -nltp | grep :11111
# set the ricci user's password
[root@node1/2/3 ~]# echo redhat | passwd --stdin ricci

In Luci, go to Manage Clusters => Create, fill in the Cluster Name and the node names (this lab uses the private network names, node1/2.private.clu...). The password requested is the ricci user's password; "Use the same password for all nodes" can be ticked when every node uses the same ricci password. Also check that port 11111 on each node can be reached (telnet to it). I recommend installing the package groups yourself with groupinstall at first rather than letting Luci download them.

Two common problems when creating the cluster, both visible in /var/log/messages (tailf it):
- Luci does not push the cluster configuration down to some node. Red Hat anticipated this; the fix is to copy /etc/cluster/cluster.conf to node2/3 by hand and then click Join Cluster.
- Communication between Luci and a node fails; again, check the system log.

Quorum: the cluster stays valid while the number of votes is at least the quorum value; below it the cluster loses quorum.

    Quorum = floor(expected votes / 2) + 1

In other words, the quorum is the sum of all node votes divided by two (rounded down) plus one. With three one-vote nodes:

Expected votes: 3
Node votes: 1
Quorum: 2            # 3/2 = 1, plus 1 = 2

Do not give a single node a very high vote count: if that node is lost, the quorum arithmetic can declare the cluster failed even though several other nodes are still available.

[root@node2 ~]# cat /etc/cluster/cluster.conf
<?xml ...?>
  ... nodeid="1" ...

Expected votes: 3
Node votes: ...
Quorum: ...

The quorum-disk mechanism is covered in the next chapter.
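To make the quorum arithmetic concrete, here is a small helper sketch (hypothetical, not part of the lab) that reproduces the floor(expected_votes/2)+1 rule and illustrates why a single high-vote node is dangerous:

#!/bin/bash
# quorum.sh -- compute the CMAN quorum value for a list of per-node votes
# usage: ./quorum.sh 1 1 1        -> expected=3  quorum=2
#        ./quorum.sh 5 1 1 1 1 1  -> expected=10 quorum=6  (losing only the
#                                    5-vote node leaves 5 votes: inquorate,
#                                    even though five nodes are still up)

expected=0
for v in "$@"; do
    expected=$((expected + v))
done

# Quorum = floor(expected_votes / 2) + 1
quorum=$((expected / 2 + 1))

echo "Expected votes: $expected"
echo "Quorum:         $quorum"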
Quorum disk and fencing

Veritas Cluster Server calls this I/O Fencing and combines the quorum disk with the fence function: when node status changes, the nodes race to reserve the coordinator disks; the node that grabs the majority of the disks wins the right to reform the cluster, and the losing node exits.

For KVM guests, fencing works by sending requests through virsh (the fence_virt/fence_xvm mechanism); for VMware Workstation there is a corresponding VMware fence method. Fencing against iSCSI targets relies on SCSI-3 Persistent Reservations (fence_scsi); if the target does not support them, the log reports an error such as: fence_scsi ... [error] failed to create ...

The Red Hat manual says about SCSI Persistent Reservations fencing: when SCSI fencing is used, all nodes in the cluster must register against the same devices, and the reservation controls access to the whole LUN, not to individual partitions:

    Use of fence_scsi with iSCSI storage is limited to iSCSI servers that support SCSI 3 Persistent Reservations. Check with your storage vendor to ensure that your server is compliant with SCSI 3 Persistent Reservation support. Note that the iSCSI server shipped with Red Hat Enterprise Linux does not support Persistent Reservations, so it is not suitable for use with fence_scsi.

Tips on SCSI reservations: SCSI locks are implemented through the SCSI Reservation mechanism, and almost all current disks support it. A SCSI-2 reservation is released with a release or reset-target command; after that, a second host can issue I/O and take this type of SCSI lock. A SCSI-2 reservation is bound to a single initiator path: if HBA1 on host 1 holds the reservation on a LUN, even HBA2 on the same host cannot access that LUN, which is why a SCSI-2 Reservation is also called a Single Path Reservation. SCSI-3 reservations use a PR Key per registrant, are typically used in multipath shared environments, and are therefore called Persistent Reservations. In RHEL 5, fence_scsi only supported the SCSI-2 behaviour. Since the tgt target shipped with RHEL does not support Persistent Reservations, I built the iSCSI target on Ubuntu for this exercise; see the Ubuntu iSCSI configuration in the appendix.

Each cluster node carries one vote in cluster.conf:

<?xml ...?>
  ... votes="1" ...
  ... votes="1" ...
  ... votes="1" ...

(If the cluster refuses the file, it is a cluster.conf syntax error.) When everything is configured correctly, fencing a node reports success:

fence node2 success

Watchdog (optional):

[root@node2 ~]# yum -y install watchdog
[root@node2 ~]# chkconfig watchdog on

Fencing KVM guests with fence_virtd/fence_xvm. On the physical host (desktop26), create the key directory and generate the shared key (on the exam you must produce this key yourself); dd reports:

[root@desktop26 ~]# mkdir /etc/cluster/
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, ... s, ...
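The exact key-generation command is not spelled out above; a common way to produce and distribute a 4 KiB key (a sketch, assuming /dev/urandom as the source and the node names used in this lab) is:

# on the physical host
mkdir -p /etc/cluster
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4   # -> 4+0 records, 4096 bytes

# every guest that may be fenced needs the same key
for n in node1 node2 node3; do
    scp /etc/cluster/fence_xvm.key $n:/etc/cluster/
done

# restart the host-side daemon, then test from a cluster node:
/etc/init.d/fence_virtd restart
# on node1:  fence_xvm -o list      (should list the guests)
#            fence_node node2       (should print "fence node2 success")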

Configure the host-side fence daemon interactively:

[root@desktop26 ~]# fence_virtd -c
Available backends: libvirt 0.1
Available listeners: multicast
Listener modules are responsible for accepting requests from fencing clients.
Listener module [multicast]:
The multicast listener module is designed for use in environments where the guests and hosts may communicate over a network using multicast.
The multicast address is the address that a client will use to send fencing requests to fence_virtd.
Multicast IP Address [...]:
Using ipv4 as family.
Multicast IP Port [1229]:
Setting a preferred interface causes fence_virtd to listen only on that interface. Normally, it listens on all interfaces. In environments where the virtual machines are using the host machine as a gateway, this *must* be set (typically to virbr0). Set to 'none' for no interface.
Interface [none]:
The key file is the shared key information which is used to authenticate fencing requests. The contents of this file must be distributed to each physical host and virtual machine within a cluster.
Key File [/etc/cluster/fence_xvm.key]:
Backend modules are responsible for routing requests to the appropriate hypervisor or management layer.
The libvirt backend module is designed for single desktops or servers. Do not use in environments where virtual machines may be migrated between hosts.
Backend module [libvirt]:
Configuration complete.

=== Begin Configuration ===
backends {
        libvirt {
                uri = "...";
        }
}
listeners {
        multicast {
                port = "1229";
                family = "ipv4";
                address = "...";
                key_file = "...";
        }
}
fence_virtd {
        backend = "libvirt";
        listener = "multicast";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y

[root@desktop26 ~]# chkconfig fence_virtd on

Test fencing from a cluster node:

[root@node1 ~]# fence_node node2
fence node2 success

Rgmanager

A resource group (service) is simply a group made of individual resources. First give the shared disks stable names with udev rules on the nodes:

KERNEL=="sd*", PROGRAM="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="...", SYMLINK+="iscsi/hadisk%n"
KERNEL=="sd*", PROGRAM="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="...", SYMLINK+="iscsi/netdisk%n"

[root@node1 ~]# ll /dev/iscsi/
total 0
lrwxrwxrwx 1 root root 6 Aug 16 11:29 hadisk  -> ../sdb
lrwxrwxrwx 1 root root 7 Aug 16 11:29 hadisk1 -> ../sdb1
lrwxrwxrwx 1 root root 6 Aug 16 11:29 netdisk -> ../sdc

Web service (httpd + floating IP + shared filesystem)

[root@node1/2/3 ~]# yum -y install httpd

Test a floating IP by hand:

[root@node1 ~]# ip addr add <floating IP>/24 dev eth0
[root@node1 ~]# ip add show dev eth0
    link/ether 00:0c:29:99:52:01 brd ff:ff:ff:ff:ff:ff
    inet .../24 brd ... scope global eth0
    inet .../24 scope global secondary eth0
       valid_lft forever preferred_lft forever

(RHCA436 note: starting ricci always seems to cause a little trouble; restart it if it misbehaves.)

Create the web service in Luci and watch rgmanager recover httpd when it is killed:

[root@node2 ~]# clustat
Cluster Status for tempcluster @ Sun Aug 18 10:57:08 2013
Member Status: Quorate
 Member Name          ID   Status
 node1...              1   Online, rgmanager
 node2...              2   Online, Local, rgmanager
 node3...              3   Online, rgmanager
 Service Name          Owner (Last)        State
 service:web           node2...            started

[root@node2 ~]# pkill -9 httpd
[root@node2 ~]# ps -ef | grep http
[root@node2 ~]# clustat
# run clustat repeatedly: the service passes through recovering/starting and, shortly afterwards, is started again
# (on the same node or on another one, depending on the recovery policy)

Prepare the shared storage for the web content:

Creating backing store logical volume... SUCCESS
Defining iSCSI target... SUCCESS
Starting SCSI target daemon:                               [  OK  ]
[root@node1 ~]# partprobe

Test that every resource can be brought up by hand on node1/2/3:

[root@node1 ~]# ip addr add <floating IP>/24 dev eth1
[root@node1 ~]# ip add show eth1
# mount the shared filesystem on /var/www/html/, then:
[root@node1 ~]# service httpd start
Starting httpd: Syntax error on line 292 of ...: DocumentRoot must be a directory
# an SELinux labelling problem on the freshly created filesystem; after fixing the context:
[root@node1 ~]# service httpd restart                      [  OK  ]
[root@node1 ~]# ls -Z /var/www/html/
[root@node1 ~]# elinks -dump ...
RHCA436
[root@node1 ~]# umount /var/www/html/
[root@node1 ~]# ip addr del <floating IP>/24 dev eth1
# repeat the same test on node2/3

Resource start order inside the service is IP -> filesystem -> script. The dynamic behaviour of the resources can be followed in the rgmanager log; the current state is shown by clustat:

[root@node1 ~]# clustat
Cluster Status for tempcluster @ Fri Aug 16 12:36:46 2013
Member Status: Quorate
 Member Name          ID   Status
 node1...              1   Online, Local, rgmanager
 node2...              2   Online, rgmanager
 node3...              3   Online, rgmanager
 Service Name          Owner (Last)        State

# enable the service
[root@node1 ~]# clusvcadm -e web
[root@node1 ~]# clustat
 service:web           node1...            started

# relocate the service to another node, or disable it
[root@node1 ~]# clusvcadm -r web -m <node>
[root@node1 ~]# clusvcadm -d web

Samba service

[root@node2 ~]# yum -y install samba
# create a 128 MB partition for the share; if partprobe does not pick it up, reboot (a known RHEL6 annoyance)
[root@node2 ~]# mkfs.ext4 /dev/iscsi/hadisk2
[root@node2 cifs-export]# touch samba-test
[root@node2 samba]# cat /etc/samba/smb.conf
workgroup = ...
log file = /var/log/samba/log.%m
security = user
load printers = no
cups options = ...
public = yes

# listing of the exported share:
  .            D    0  Fri Aug 16 13:00:47 2013
  ..           D    0  Mon Aug 19 08:16:00 2013
  samba-test        0  Fri Aug 16 13:00:47 2013
  lost+found   D    0  Fri Aug 16 12:57:39 2013
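Before handing the share to rgmanager, it can be verified from a client with smbclient. This is a sketch: the user name is illustrative, and the share is assumed to be the cifs-export directory used above:

# create a Samba account for an existing system user (security = user requires one)
useradd -s /sbin/nologin smbtest        # illustrative user name
smbpasswd -a smbtest

# restart smb and list the exported shares
service smb restart
smbclient -L localhost -U smbtest

# open the share and list its contents (should show samba-test)
smbclient //localhost/cifs-export -U smbtest -c 'ls'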
NFS service

Prepare the export and the floating IP by hand first, exactly as for the web service, then release them again so the cluster can take over:

[root@node1 ~]# fdisk /dev/sda
[root@node1 ~]# partprobe
[root@node1 ~]# mkdir /nfs_data
[root@node1 ~]# mount /dev/sda1 /nfs_data/
[root@node1 ~]# touch /nfs_data/nfs_test
[root@node1 ~]# ip addr add <floating IP>/24 dev eth1
[root@node1 ~]# ip addr show eth1 | grep 172
    inet .../24 brd ... scope global eth1
    inet .../24 scope global secondary eth1
[root@node1 ~]# umount /nfs_data
[root@node1 ~]# ip addr del <floating IP>/24 dev eth1
# repeat the same check on node2/3

The start order of resources inside a service is fixed by rgmanager:

<special ...>
  <attributes .../>
  <child type="lvm"       start="1" .../>
  <child type="fs"        start="2" .../>
  <child type="clusterfs" start="3" .../>
  <child type="netfs"     start="4" .../>
  <child type="nfsexport" start="5" .../>
  <child type="nfsclient" start="6" .../>
  <child type="ip"        start="7" .../>
  <child type="smb"       start="8" .../>
  <child type="script"    start="9" .../>

Two-node HA

To solve the quorum problem of a two-node cluster, Red Hat lets such a cluster run with an expected quorum of 1: when the node count is 2, cman adds the two-node settings to the configuration.

[root@node2 ~]# grep cman /etc/cluster/cluster.conf
[root@node2 ~]# cman_tool
cman_tool: no operation specified
[root@node2 ~]# cman_tool status
Version: 6.2.0
Cluster Id: 32263
Cluster Member: Yes
Nodes: 2
Expected votes: ...
Node votes: ...
Quorum: ...
Flags: 2node
Ports Bound: 0
Node ID: 2
Multicast addresses: ...
Node addresses: ...

LVM: recovering from a bad resize

# create a test volume, format it, mount it on /testresize and create a few files
[root@node1 ~]# lvcreate -L 2G -n resizeme vgsrv
[root@node1 ~]# mkdir /testresize
[root@node1 ~]# touch /testresize/file{0..9}

# simulate a mistake: shrink the volume while it is still mounted
Do you really want to reduce resizeme? [y/n]: y
  Reducing logical volume resizeme to 1.00 GiB
  Logical volume resizeme successfully resized

# it is still mounted at this point; after unmounting it can no longer be mounted
[root@node1 ~]# umount /testresize
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vgsrv-resizeme,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try dmesg | tail or so

LVM keeps an archive of every metadata change (when archiving is enabled in lvm.conf):

[root@node1 ~]# cat /etc/lvm/lvm.conf | grep archive | grep -v '#'
    archive = 1

(If clvmd is not running you may also see: connect() failed on local socket: No such file or directory / Internal cluster locking initialisation failed. WARNING: Falling back to local file-based locking. Volume Groups with the clustered attribute will be inaccessible. This is harmless here.)

The archive entries read like an operation log of the volume group:

  VG Description: Created *before* executing '/sbin/vgs --noheadings -o ...'
  Backup Time: Sun Nov 18 10:11:49 ...
  Description: Created *before* executing 'lvcreate -L 2G -n resizeme ...'
  Backup Time: Tue Feb 26 12:17:12 ...
  Description: Created *before* executing 'lvresize -L 1G /dev/mapper/vgsrv-resizeme'
  Backup Time: Tue Feb 26 12:26:16 ...
  Description: Created *after* executing 'lvresize -L 1G /dev/mapper/vgsrv-resizeme'
  Backup Time: Tue Feb 26 12:26:16 ...

Restoring LVM metadata only restores the block-to-extent mapping of the volume; it does not copy any data. As long as the filesystem blocks have not been overwritten, restoring the metadata taken before the bad resize brings the filesystem back:

[root@node1 ~]# file ...            # the archive entries are plain-text .vg files
[root@node1 ~]# vgcfgrestore -f ....vg vgsrv
# the volume still cannot be mounted until it is deactivated and reactivated
[root@node1 ~]# lvchange -an /dev/mapper/vgsrv-resizeme
[root@node1 ~]# lvchange -ay /dev/mapper/vgsrv-resizeme
# remount: the data is back; the restore only resets the extent map to its earlier state, it does not copy a new volume

LVM snapshots

[root@node4 ~]# lvs
  LV      VG    Attr   LSize Origin Snap% Move Log Copy% Convert
  root    vgsrv -wi-ao ...
  storage vgsrv -wi-ao ...
[root@node4 ~]# lvcreate -h | grep '\-s'
# create a 100M snapshot of the root LV (rounded up to one extent)
[root@node4 ~]# lvcreate -s /dev/vgsrv/root -n rootsnap -L 100M
  Rounding up size to full physical extent 128.00 MiB
  Logical volume "rootsnap" created
[root@node4 ~]# lvs
  LV       VG    Attr   LSize   Origin Snap% Move Log Copy% Convert
  root     vgsrv owi-ao 256.00m
  rootsnap vgsrv swi-a- 128.00m root
  storage  vgsrv -wi-ao ...

# mount the snapshot on /net and compare
[root@node4 ~]# ls /net/
bin boot dev etc home lib64 lost+found media mnt net opt proc root sbin selinux sys tmp usr

# test: create a file in the real root with dd; it does not appear in the snapshot
[root@node4 ~]# ll /file
-rw-r--r--. 1 root root ... Feb 26 14:21 /file
[root@node4 ~]# ll /net/file
ls: cannot access /net/file: No such file or directory

[root@node4 ~]# lvs
  LV       VG    Attr   LSize   Origin Snap% Move Log Copy% Convert
  root     vgsrv owi-ao 256.00m
  rootsnap vgsrv swi-ao 128.00m root
  storage  vgsrv -wi-ao ...
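A common use of such a snapshot is taking a consistent backup of a live filesystem. The sketch below reuses the vgsrv/root naming from above; the mount point and archive path are illustrative:

# create a small snapshot of the origin LV (sized to absorb changes made
# while the backup runs; rounded up to a full extent as shown above)
lvcreate -s /dev/vgsrv/root -n rootsnap -L 100M

# mount it read-only and archive the frozen view
mkdir -p /mnt/rootsnap
mount -o ro /dev/vgsrv/rootsnap /mnt/rootsnap
tar -czf /tmp/root-backup.tar.gz -C /mnt/rootsnap .

# drop the snapshot when done so it stops consuming copy-on-write space
umount /mnt/rootsnap
lvremove -f /dev/vgsrv/rootsnap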
Merging the snapshot back into root:

[root@node4 ~]# lvconvert --merge /dev/vgsrv/rootsnap
  Can't merge over open origin volume
  Can't merge when snapshot is open
  Merging of snapshot rootsnap will start next activation.
[root@node4 ~]# reboot
# after the merge the logical volume has to be re-activated (lvchange -an / lvchange -ay);
# for the root filesystem this effectively means a reboot
[root@node4 ~]# lvs
  LV      VG    Attr   LSize Origin Snap% Move Log Copy% Convert
  root    vgsrv -wi-ao ...
  storage vgsrv -wi-ao ...
# rootsnap is gone: the merge is complete

HA-LVM / clustered LVM (clvmd)

Start clvmd on each node:

[root@node1 ~]# /etc/init.d/clvmd start
Starting clvmd:
Activating VG(s): 4 logical volume(s) in volume group "vgsrv" now active
  clvmd not running on node node3.private.clu...
  clvmd not running on node ...
                                                           [  OK  ]
[root@node1 ~]# chkconfig clvmd on
# do the same on node2/3

Create a new partition on the shared (multipath) device, set its type to 8e (use fdisk -cu), and run partprobe on node1/2/3. Then:

[root@node1 ~]# pvcreate /dev/mapper/clusterstoragep5
[root@node1 ~]# vgcreate firstvg /dev/mapper/clusterstoragep5
[root@node1 ~]# lvcreate -n httplv -L 128M firstvg

# on node2 the new LV is visible immediately
[root@node2 ~]# lvs
  LV       VG      Attr   LSize   Origin Snap% Move Log Copy% Convert
  httplv   firstvg -wi-a- 128.00m
  ...              -wi-ao ...

Format httplv, mount it, put some content on it, fix the SELinux context and unmount again:

[root@node1 ~]# echo "Cluster LVM Test for RHCA436" > /net/index.html
[root@node1 ~]# chcon -R -t httpd_sys_content_t /net
[root@node1 ~]# umount /net

vgs now shows the clustered attribute (the "c" in wz--nc):

[root@node1 ~]# vgs
  VG      #PV #LV #SN Attr   VSize VFree
  firstvg   1   1   0 wz--nc 1.01g ...
  vgsrv     1   4   0 wz--n- ...

Deactivate the LV and disable the web service before handing the volume to the cluster:

[root@node1 ~]# lvchange -an firstvg/httplv
[root@node1 ~]# lvs
  LV       VG      Attr   LSize   Origin Snap% Move Log Copy% Convert
  httplv   firstvg -wi--- 128.00m
  resizeme ...     -wi-ao ...
[root@node1 ~]# clusvcadm -d web

Add an fs resource for the logical volume to the web service in cluster.conf (or through Luci):

<fs device="/dev/firstvg/httplv" fsid="4657" fstype="ext4" mountpoint="/var/www/html/" name="halvm_web" quick_status="on" self_fence="on"/>

[root@node1 ~]# clusvcadm -e web
[root@node1 ~]# clustat
Cluster Status for tmpcluster @ Thu Feb 28 03:55:29 2013
Member Status: Quorate
 Member Name          Status
 ...                  Online, Local, rgmanager
 Service Name         Owner               State
 service:web          ...                 started

GFS2 (preparation)

# reset the lab environment; the lab-* scripts belong to the classroom setup and do not exist in the exam
[root@node4 ~]# lab-setup-targets
[root@node1/2/3 ~]# lab-setup-iscsi

# reinstall the cluster package groups
[root@node1/2/3 ~]# yum -y groupinstall 'High Availability' 'Resilient Storage'
[root@node4 ~]# yum groupinstall 'High Availability Management'

# set the ricci password and start ricci on node1/2/3
[root@node1 ~]# echo redhat | passwd --stdin ricci
Changing password for user ricci.
[root@node1 ~]# /etc/init.d/ricci start
# node4
[root@node4 ~]# chkconfig luci on
[root@node1/2/3 ~]# chkconfig cman on
# enable the services on every node and rely on Luci as little as possible
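Since the same ricci/cman preparation has to be repeated on node1/2/3, it can be scripted from node1 over the password-less ssh set up earlier. A sketch, assuming the node names and the redhat password used in this lab:

# on node1 itself
echo redhat | passwd --stdin ricci
chkconfig ricci on; service ricci start
chkconfig cman on

# repeat on node2/3 over ssh
for n in node2 node3; do
    ssh $n 'echo redhat | passwd --stdin ricci;
            chkconfig ricci on; service ricci start;
            chkconfig cman on'
done

# verify that ricci is listening on every node
for n in node2 node3; do
    ssh $n 'netstat -nltp | grep :11111'
done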

Configure fencing on the physical host again:

[root@desktop26 ~]# fence_virtd -c
Available backends: libvirt 0.1
Available listeners: multicast
Listener modules are responsible for accepting requests from fencing clients.
Listener module [multicast]:
The multicast listener module is designed for use where the guests and hosts may communicate over a network using multicast. ...
Using ipv4 as family.
Multicast IP Port [1229]:
Setting a preferred interface causes fence_virtd to listen only on that interface. Normally, it listens on all interfaces. In environments where the virtual machines are using the host machine as a gateway, this *must* be set (typically to virbr0). Set to 'none' for no interface.
Interface [none]:
The key file is the shared key information which is used to authenticate fencing requests. The contents of this file must be distributed to each physical host and virtual machine within a cluster.
Key File [/etc/cluster/fence_xvm.key]:
Backend modules are responsible for routing requests to the appropriate hypervisor or management layer.
Backend module [...]:
No backend module named libvirt found!
# why does this error appear? Because I typed libvirt...
Configuration complete.

=== Begin Configuration ===
backends {
        libvirt {
                uri = "...";
        }
}
listeners {
        multicast {
                port = "1229";
                family = "ipv4";
                address = "...";
                key_file = "...";
        }
}
fence_virtd {
        backend = "libvirt";
        listener = "multicast";
}
=== End Configuration ===
[root@desktop26 ~]#

# the configuration references the fence key; copy it to every node and start the daemon
[root@desktop26 ~]# scp /etc/cluster/fence_xvm.key node1:/etc/cluster/
[root@desktop26 ~]# scp /etc/cluster/fence_xvm.key node2:/etc/cluster/
[root@desktop26 ~]# scp /etc/cluster/fence_xvm.key node3:/etc/cluster/
[root@desktop26 ~]# /etc/init.d/fence_virtd start
# test against node1:
fence node1 success

GFS2

# create a 2 GB partition and set its type to 8e (steps omitted)
# all three nodes must see it
[root@node1/2/3 ~]# partprobe
# start clvmd on the three nodes and enable LVM cluster locking
[root@node1/2/3 ~]# /etc/init.d/clvmd start
[root@node1/2/3 ~]# chkconfig clvmd on
[root@node1/2/3 ~]# lvmconf --enable-cluster
[root@node2 ~]# pvcreate /dev/mapper/clusterstoragep1
[root@node2 ~]# lvcreate -n nfslv -L 1G nfsvg
# as long as clvmd is running, nfslv is active on every node and nfsvg shows the clustered attribute
[root@node2 ~]# vgs | grep nfsvg
[root@node2 ~]# lvs | grep nfslv
  nfslv nfsvg -wi-a- ...

Format the volume with GFS2. mkfs.gfs2 [options] <device> [block-count]:

  -b   Block size
  -c   Size of quota change file
  -D   Enable debugging code
  -h   Print this help, then exit
  -J   Size of journals
  -j   Number of journals
  -K   Don't try to discard unused blocks
  -O   Don't ask for confirmation
  -p   Lock protocol name
  -q   Don't print anything
  -r   Resource Group size
  -t   Name of the lock table, <clustername>:<lockspace>
  -u   Size of unlinked file
  -V   Print program version information, then exit

[root@node2 ~]# mkfs.gfs2 -t cluster1:locktable1 -j 3 -J 128M /dev/mapper/nfsvg-nfslv
[root@node2 ~]# df -T
/dev/mapper/nfsvg-nfslv  gfs2  ...  ...  ...  38%  ...

The -t option names the locking table; the first part must match the cluster name.

NFS Server (the second service)

Configure the GFS2 filesystem, the export and the floating IP as a second rgmanager service, then try to start it:

[root@node2 ~]# clustat
Cluster Status for cluster1 @ Sun Mar 3 15:49:31 2013
Member Status: Quorate
 Member Name          ID   Status
 node1...              1   Online, rgmanager
 node2...              2   Online, Local, rgmanager
 node3...              3   Online, rgmanager
 Service Name          Owner (Last)        State

[root@node2 ~]# clusvcadm -e nfs_service
# a failure always has a cause: check the log
[root@node2 ~]# tail -100f /var/log/messages
Mar 03 15:52:08 rgmanager [nfsserver] Starting NFS Server nfs_exp
Mar 03 15:52:09 rgmanager [nfsserver] Starting rpc.statd
Mar 03 15:52:09 rgmanager [nfsserver] Starting NFS daemon
Mar 03 15:52:09 rgmanager [nfsserver] Failed to start NFS daemon
Mar 03 15:52:09 rgmanager [nfsserver] Failed to start NFS Server nfs_exp
Mar 03 15:52:09 rgmanager start on nfsserver "nfs_exp" returned 1 (generic error)
Mar 03 15:52:09 rgmanager #68: Failed to start service:nfs_service; return value: 1
Mar 03 15:52:09 rgmanager Stopping service service:nfs_service

Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Local machine trying to enable service:nfs_ser...
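Once the filesystem has been created, a quick way to confirm that GFS2 locking really works across the cluster is to mount it on all three nodes and write from each of them. A sketch, using the device name from this lab and an illustrative mount point (cman and clvmd must already be running):

# run on each of node1/2/3
mkdir -p /mnt/gfs2
mount -t gfs2 /dev/mapper/nfsvg-nfslv /mnt/gfs2

# confirm the filesystem type and the mount options
df -T /mnt/gfs2
mount | grep gfs2

# write from one node, read from another: the file appears everywhere
touch /mnt/gfs2/written-by-$(hostname -s)
ls -l /mnt/gfs2/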
