Parallel Programming
Instructor: Zhang Weizhe (張偉哲)
Computer Network and Information Security Technique Research Center, School of Computer Science and Technology, Harbin Institute of Technology

Programming Using the Message-Passing Paradigm

Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

A Parallel Machine Model
The cluster: a set of nodes connected by an interconnection network. A node can communicate with other nodes by sending and receiving messages over the interconnect; each node is itself a von Neumann computer.

Principles of Message-Passing Programming
- Each processor in a message-passing program runs a separate process (sub-program, task).
- The logical view of a machine supporting the message-passing paradigm consists of p processes, each with its own exclusive address space. All variables are private.
- Each data element must belong to one of the partitions of the space; hence, data must be explicitly partitioned and placed.
- Processes communicate via special subroutine calls.
- All interactions (read-only or read/write) require the cooperation of two processes: the process that has the data and the process that wants to access the data.

SPMD and MPMD
- SPMD (Single Program Multiple Data): the same program runs everywhere; each process only knows and operates on a small part of the data.
- MPMD (Multiple Program Multiple Data): each process performs a different function (input, problem setup, solution, output, display).

Messages
Messages are packets of data moving between processes. The message-passing system has to be told the following information:
- Sending process
- Source location
- Data type
- Data length
- Receiving process(es)
- Destination location
- Destination size

Message Passing
Message-passing programs are often written using the asynchronous or loosely synchronous paradigms. A synchronous communication does not complete until the message has been received; an asynchronous communication completes as soon as the message is on its way.
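The list above can be collected into a plain C record as an illustration of the bookkeeping involved; the type and field names below are invented for this sketch and belong to no real message-passing API:

```c
/* Illustration only: the information a message-passing system must be
   told, gathered into one hypothetical descriptor. */
typedef struct {
    int         sending_process;    /* which process sends            */
    const void *source_location;    /* where the outgoing data live   */
    int         datatype;           /* how the bytes are interpreted  */
    int         data_length;        /* number of elements to transfer */
    int         receiving_process;  /* which process receives         */
    void       *destination;        /* where to place incoming data   */
    int         destination_size;   /* room available at the receiver */
} message_descriptor;
```

Real libraries such as MPI spread exactly these pieces of information across the arguments of their send and receive calls, as the following sections show.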
Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

What is MPI?
- The development of MPI started in April 1992.
- MPI was designed by the MPI Forum (a diverse collection of implementors, library writers, and end users) quite independently of any specific implementation.
- Website: //mpi/

What is MPI?
- MPI defines a standard library for message-passing that can be used to develop portable message-passing programs using either C or Fortran.
- A fixed set of processes is created at program initialization; one process is created per processor:
  mpirun -np 5 program
- Each process knows its personal number (rank).
- Each process knows the number of all processes.
- Each process can communicate with other processes.
- A process cannot create new processes (in MPI-1).

MPI: the Message Passing Interface
The minimal set of MPI routines:
  MPI_Init        Initializes MPI.
  MPI_Finalize    Terminates MPI.
  MPI_Comm_size   Determines the number of processes.
  MPI_Comm_rank   Determines the label of the calling process.
  MPI_Send        Sends a message.
  MPI_Recv        Receives a message.

Starting and Terminating the MPI Library
MPI_Init is called prior to any calls to other MPI routines. Its purpose is to initialize the MPI environment. MPI_Finalize is called at the end of the computation, and it performs various clean-up tasks to terminate the MPI environment. The prototypes of these two functions are:

int MPI_Init(int *argc, char ***argv)
int MPI_Finalize()

MPI_Init also strips off any MPI-related command-line arguments. All MPI routines, data types, and constants are prefixed by "MPI_". The return code for successful completion is MPI_SUCCESS.

Communicators
- A communicator defines a communication domain: a set of processes that are allowed to communicate with each other.
- Information about communication domains is stored in variables of type MPI_Comm.
- Communicators are used as arguments to all message-transfer MPI routines.
- A process can belong to many different (possibly overlapping) communication domains.
- MPI defines a default communicator called MPI_COMM_WORLD which includes all the processes.

Querying Information
The MPI_Comm_size and MPI_Comm_rank functions are used to determine the number of processes and the label of the calling process, respectively. The calling sequences of these routines are as follows:

int MPI_Comm_size(MPI_Comm comm, int *size)
int MPI_Comm_rank(MPI_Comm comm, int *rank)

The rank of a process is an integer that ranges from zero up to the size of the communicator minus one.

Our First MPI Program

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int npes, myrank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    printf("From process %d out of %d, Hello World!\n", myrank, npes);
    MPI_Finalize();
    return 0;
}

Parallel Programming with MPI
- Communication: basic send/receive (blocking), non-blocking, collective
- Synchronization: implicit in point-to-point communication; global synchronization via collective communication
- Parallel I/O (MPI-2)
Basic Sending and Receiving Messages
The basic functions for sending and receiving messages in MPI are MPI_Send and MPI_Recv, respectively. The calling sequences of these routines are as follows:

int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

MPI_Send
- The message to be sent is determined by pointing to the memory block (buffer) which contains the message.
- The triad used to point to the buffer (buf, count, type) is included in the parameters of practically all data-passing functions.
- The processes among which data is passed should belong to the communicator specified in the MPI_Send call.
- The parameter tag is used only when it is necessary to differentiate among the messages being passed; otherwise, an arbitrary integer can be used as the parameter value.

MPI_Recv

Sending and Receiving Messages
- MPI allows specification of wildcard arguments for both source and tag.
- If source is set to MPI_ANY_SOURCE, then any process of the communication domain can be the source of the message.
- If tag is set to MPI_ANY_TAG, then messages with any tag are accepted.
- On the receive side, the message must be of length equal to or less than the length field specified.

MPI Datatypes
  MPI Datatype         C Datatype
  MPI_CHAR             signed char
  MPI_SHORT            signed short int
  MPI_INT              signed int
  MPI_LONG             signed long int
  MPI_UNSIGNED_CHAR    unsigned char
  MPI_UNSIGNED_SHORT   unsigned short int
  MPI_UNSIGNED         unsigned int
  MPI_UNSIGNED_LONG    unsigned long int
  MPI_FLOAT            float
  MPI_DOUBLE           double
  MPI_LONG_DOUBLE      long double
  MPI_BYTE             (untyped bytes)
  MPI_PACKED           (packed data)

Point-to-point Example

Process 0:
#define TAG 999
float a[10];
int dest = 1;
MPI_Send(a, 10, MPI_FLOAT, dest, TAG, MPI_COMM_WORLD);

Process 1:
#define TAG 999
MPI_Status status;
int count;
float b[20];
int sender = 0;
MPI_Recv(b, 20, MPI_FLOAT, sender, TAG, MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_FLOAT, &count);

Non-blocking Communication
In order to overlap communication with computation, MPI provides a pair of functions for performing non-blocking send and receive operations:

int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)

These operations return before the operations have been completed. The function MPI_Test checks whether or not the non-blocking send or receive operation identified by its request has finished:

int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
Non-blocking Communication
The following scheme of combining the computations with the execution of the non-blocking communication operations is possible: start the non-blocking transfers, perform computation that does not depend on the data in transit, and then complete the transfers.
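As a concrete sketch of this scheme (do_local_work stands in for computation that does not touch either buffer; the surrounding setup is assumed, not shown on the slides):

```c
#include <mpi.h>

void do_local_work(void);   /* hypothetical computation, independent of the buffers */

/* Sketch: start both transfers, compute while they are in flight,
   and block only when the communicated data are actually needed. */
void exchange_and_compute(double *sendbuf, double *recvbuf, int n,
                          int dest, int source, MPI_Comm comm)
{
    MPI_Request reqs[2];

    MPI_Isend(sendbuf, n, MPI_DOUBLE, dest,   0, comm, &reqs[0]);
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, source, 0, comm, &reqs[1]);

    do_local_work();                      /* overlaps the communication */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    /* recvbuf is now valid and sendbuf may be reused. */
}
```

Until MPI_Waitall (or MPI_Test/MPI_Wait) reports completion, neither buffer may be safely modified or read.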
Evaluating MPI Program Execution Time
The execution time needs to be known in order to estimate the speedup obtained by the parallel computation. The current moment of program execution can be obtained by means of the following function:

double MPI_Wtime(void)

The accuracy of time measurement can depend on the environment in which the parallel program executes. The following function can be used to determine the current resolution of the timer:

double MPI_Wtick(void)

MPI Collective Communication
- Routines that send message(s) to a group of processes or receive message(s) from a group of processes.
- Potentially more efficient than point-to-point communication.
- Examples: broadcast, reduction, barrier, scatter, gather, all-to-all.

Collective Communication - Broadcast
- Suppose the values of a vector X must be transmitted to all the parallel processes.
- An obvious way is to use the point-to-point MPI communication functions discussed above to perform all the required transmissions.
- Repeating the transmissions in this way sums up the latencies of the individual communication operations; the required data transmissions can be executed with a smaller number of iterations.
- The one-to-all broadcast operation is:
int MPI_Bcast(void *buf, int count, MPI_Datatype datatype, int source, MPI_Comm comm)

Sends data from the root to all other processes in a group.

Collective Communication - Reduce
The all-to-one reduction operation is:
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int target, MPI_Comm comm)

Combines data from all processes in the group, performs an (associative) reduction operation (e.g., SUM, MAX), and returns the result to one process. The basic MPI operation types for data reduction functions include MPI_MAX, MPI_MIN, MPI_SUM, and MPI_PROD.

Collective Communication - Synchronization
The barrier synchronization operation is performed in MPI using:
int MPI_Barrier(MPI_Comm comm)
A barrier operation synchronizes a number of processes.

Collective Communication - Scatter
The corresponding scatter operation is:
int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int recvcount, MPI_Datatype recvdatatype, int source, MPI_Comm comm)
Sends each element of an array on the root to a separate process.

Collective Communication - Gather
Gathering data from all the processes to one process is the reverse of data scattering. The gather operation is performed in MPI using:
int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int recvcount, MPI_Datatype recvdatatype, int target, MPI_Comm comm)
Collects data from a set of processes.

Other Collective Communication
MPI also provides the MPI_Allgather function, in which the data are gathered at all the processes:
int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int recvcount, MPI_Datatype recvdatatype, MPI_Comm comm)

If the result of the reduction operation is needed by all processes, MPI provides:
int MPI_Allreduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

To compute prefix-sums, MPI provides:
int MPI_Scan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
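For instance, in the small sketch below every process contributes its rank: MPI_Allreduce gives every process the grand total, and MPI_Scan (which is inclusive) gives process i the sum over ranks 0 through i:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, total, prefix;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every process contributes its rank; all receive the grand total. */
    MPI_Allreduce(&rank, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* Inclusive prefix sum: process i receives rank 0 + ... + rank i. */
    MPI_Scan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: total = %d, prefix = %d\n", rank, total, prefix);
    MPI_Finalize();
    return 0;
}
```

Run with four processes, every process reports total 6, and the prefixes are 0, 1, 3, and 6 on ranks 0 through 3.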
Other Collective Communication
The all-to-all personalized communication operation is performed by:

int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int recvcount, MPI_Datatype recvdatatype, MPI_Comm comm)

Using this core set of collective operations, a number of programs can be greatly simplified.

Topologies and Embeddings
- MPI allows a programmer to organize processors into logical k-d meshes.
- The processor ids in MPI_COMM_WORLD can be mapped to other communicators (corresponding to higher-dimensional meshes) in many ways.
- The goodness of any such mapping is determined by the interaction pattern of the underlying program and the topology of the machine.
- MPI does not provide the programmer any control over these mappings.
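The simplest such mapping, row-major order onto a 2-D grid, can be written down directly; the helper names below are ours, chosen for illustration:

```c
/* Row-wise (row-major) mapping of ranks onto a grid with `width`
   columns: rank r sits at row r / width, column r % width. */
void rank_to_coords(int rank, int width, int *row, int *col)
{
    *row = rank / width;
    *col = rank % width;
}

int coords_to_rank(int row, int col, int width)
{
    return row * width + col;
}
```

Column-wise, space-filling-curve, and hypercube mappings assign coordinates differently, which is exactly why the quality of a mapping depends on the program's communication pattern.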
Topologies and Embeddings
There are different ways to map a set of processes to a two-dimensional grid: (a) and (b) show a row- and column-wise mapping of these processes; (c) shows a mapping that follows a space-filling curve (dotted line); (d) shows a mapping in which neighboring processes are directly connected in a hypercube.

Creating and Using Cartesian Topologies
We can create Cartesian topologies using the function:

int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims, int *periods, int reorder, MPI_Comm *comm_cart)

This function takes the processes in the old communicator and creates a new communicator with ndims dimensions. Each process can now be identified in this new Cartesian topology by a vector of dimension ndims.

Example: Calculating π
The value of the constant π can be computed by means of the integral π = ∫₀¹ 4/(1+x²) dx. To compute this integral, the method of rectangles can be used for numerical integration. A cyclic scheme can be used to distribute the calculations among the processors; the partial sums that were calculated on different processors then have to be summed.

Calculating π - Sequential Program
#include <stdio.h>

int num_steps = 1000;
double width;

int main(void)
{
    int i;
    double x, pi, sum = 0.0;

    width = 1.0 / (double)num_steps;
    for (i = 1; i <= num_steps; i++) {
        x = (i - 0.5) * width;           /* midpoint of rectangle i */
        sum = sum + 4.0 / (1.0 + x * x);
    }
    pi = sum * width;
    printf("pi = %f\n", pi);
    return 0;
}

MPI Example - Calculating π; Collective Communication - Calculating π
(The parallel versions of this program appeared as code figures on the original slides and are not preserved in this extraction.)

Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

What is PVM?
- The development of PVM started in the summer of 1989 at Oak Ridge National Laboratory (ORNL).
- PVM is a software package that allows a heterogeneous collection of workstations (the host pool) to function as a single high-performance parallel machine (a virtual one).
- PVM, through its virtual machine, provides a simple yet useful distributed operating system.
- It has a daemon running on all computers making up the virtual machine.

PVM Resources
Website:
/pvm/pvm_home.html

Book: PVM: Parallel Virtual Machine. A Users' Guide and Tutorial for Networked Parallel Computing.
Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, Vaidy Sunderam
/pvm3/book/pvm-book.html
How PVM is Designed

Basic PVM Functions
- pvm_mytid(): enrolls the calling process into PVM and generates a unique task identifier if this process is not already enrolled in PVM. If the calling process is already enrolled in PVM, this routine simply returns the process's tid.
  tid = pvm_mytid();
- pvm_spawn(): starts new PVM processes. The programmer can specify the machine architecture and machine name where processes are to be spawned.
  numt = pvm_spawn("worker", 0, PvmTaskDefault, "", 1, &tids[i]);
- pvm_exit(): tells the local pvmd that this process is leaving PVM. This routine should be called by all PVM processes before they exit.
- pvm_addhosts(): adds hosts to the virtual machine. The names should have the same syntax as lines of a pvmd hostfile.
  pvm_addhosts(hostarray, 4, infoarray);
- pvm_delhosts(): deletes hosts from the virtual machine.
  pvm_delhosts(hostarray, 4);

Basic PVM Functions
- pvm_send(): immediately sends the data in the message buffer to the specified destination task. This is a blocking send operation. Returns 0 if successful, < 0 otherwise.
  pvm_send(tids[1], MSGTAG);
- pvm_mcast(): multicasts a message stored in the active send buffer to the tasks specified in tids[]. The message is not sent to the caller even if it is listed in the array of tids.
  pvm_mcast(tids, ntask, msgtag);
- pvm_recv(): blocks the receiving process until a message with the specified tag has arrived from the specified tid. The message is then placed in a new active receive buffer, which also clears the current receive buffer.
  pvm_recv(tid, msgtag);
- pvm_nrecv(): same as pvm_recv, except a non-blocking receive operation is performed. If the specified message has arrived, this routine returns the buffer id of the new receive buffer. If the message has not arrived, it returns 0. If an error occurs, an integer < 0 is returned.
  pvm_nrecv(tid, msgtag);

Basic PVM Functions (group operations)
  pvm_barrier("worker", 5);
  pvm_bcast("worker", msgtag);
  pvm_gather(&getmatrix, &myrow, 10, PVM_INT, msgtag, "workers", root);
  pvm_scatter(&getmyrow, &matrix, 10, PVM_INT, msgtag, "workers", root);
  pvm_reduce(PvmMax, &myvals, 10, PVM_INT, msgtag, "workers", root);

PVM Example: Hello World!

/* hello.c: the parent task */
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int cc, tid;
    char buf[100];

    printf("i'm t%x\n", pvm_mytid());
    cc = pvm_spawn("hello_other", 0, 0, "", 1, &tid);
    if (cc == 1) {
        cc = pvm_recv(-1, -1);
        pvm_bufinfo(cc, 0, 0, &tid);
        pvm_upkstr(buf);
        printf("from t%x: %s\n", tid, buf);
    } else
        printf("can't start hello_other\n");
    pvm_exit();
    return 0;
}

/* hello_other.c: the spawned task */
#include <string.h>
#include <unistd.h>
#include "pvm3.h"

int main(void)
{
    int ptid;
    char buf[100];

    ptid = pvm_parent();
    strcpy(buf, "hello, world from ");
    gethostname(buf + strlen(buf), 64);
    pvm_initsend(PvmDataDefault);
    pvm_pkstr(buf);
    pvm_send(ptid, 1);
    pvm_exit();
    return 0;
}

Setup to Use PVM
- Set PVM_ROOT and PVM_ARCH in your .cshrc file.
- Build PVM for each architecture type.
- Create a .rhosts file on each host listing all the hosts you wish to use.
- Create a $HOME/.xpvm_hosts file listing all the hosts you wish to use prepended by an "&".

Starting PVM
Before we go over the steps to compile and run parallel PVM programs, you should be sure you can start up PVM and configure a virtual machine. On any host on which PVM has been installed you can type
  % pvm
and you should get back a PVM console prompt signifying that PVM is now running on this host. You can add hosts to your virtual machine by typing at the console prompt
  pvm> add hostname
and you can delete hosts (except the one you are on) from your virtual machine by typing
  pvm> delete hostname
If you get the message "Can't Start pvmd", then check the common startup problems section and try again.

Starting PVM
To see what the present virtual machine looks like, you can type
  pvm> conf
To see what PVM tasks are running on the virtual machine, you type
  pvm> ps -a
Of course you don't have any tasks running yet; that's in the next section. If you type "quit" at the console prompt, the console will quit but your virtual machine and tasks will continue to run. At any Unix prompt on any host in the virtual machine, you can type
  % pvm
and you will get the message "pvm already running" and the console prompt. When you are finished with the virtual machine, you should type
  pvm> halt
This command kills any PVM tasks, shuts down the virtual machine, and exits the console. This is the recommended method to stop PVM because it makes sure that the virtual machine shuts down cleanly. You should practice starting and stopping and adding hosts to PVM until you are comfortable with the PVM console. A full description of the PVM console and its many command options is given at the end of this chapter.

Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

PVM and MPI Goals
PVM:
- A distributed operating system
- Portability
- Heterogeneity
- Handling communication failures
MPI:
- A library for writing application programs, not a distributed operating system
- Portability
- High performance
- Heterogeneity
- Well-defined behavior

What is Not Different?
- Portability: source code written for one architecture can be copied to a second architecture, compiled, and executed without modification (to some extent).
- Both support MPMD programs as well as SPMD.
- Interoperability: the ability of different implementations of the same specification to exchange messages.
- Heterogeneity (to some extent).
PVM and MPI are systems designed to provide users with libraries for writing portable, heterogeneous, MPMD programs.

Process Control
The ability to start and stop tasks, to find out which tasks are running, and possibly where they are running.
- PVM contains all of these capabilities; it can spawn/kill tasks dynamically.
- MPI-1 has no defined method to start a new task.
- MPI-2 contains functions to start a group of tasks and to send a kill signal to a group of tasks.

Resource Control
- PVM is inherently dynamic in nature, and it has a rich set of resource control functions:
  - Hosts can be added or deleted
  - Load balancing
  - Task migration
  - Fault tolerance
  - Efficiency
- MPI is specifically designed to be static in nature to improve performance.

Virtual Topology (only for MPI)
- Convenient process naming.
- Naming scheme to fit the communication pattern.
- Simplifies writing of code.
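A minimal sketch of the MPI topology interface in use (all calls below are standard MPI routines; the 2-D periodic grid is our choice of example):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int p, rank, dims[2] = {0, 0}, periods[2] = {1, 1};
    int coords[2], left, right;
    MPI_Comm grid;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    MPI_Dims_create(p, 2, dims);          /* factor p into a 2-D grid     */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid);

    MPI_Comm_rank(grid, &rank);           /* rank may have been reordered */
    MPI_Cart_coords(grid, rank, 2, coords);
    MPI_Cart_shift(grid, 1, 1, &left, &right);  /* neighbours in dim 1    */

    printf("rank %d sits at (%d,%d); row neighbours %d and %d\n",
           rank, coords[0], coords[1], left, right);

    MPI_Finalize();
    return 0;
}
```

Naming neighbours through MPI_Cart_shift, rather than computing ranks by hand, is exactly the convenience the bullet points above describe.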