Note: the PBS script for mpirun and the PBS script for mpiexec are different.

Running a Simple MPI Program


Example of an MPI Program mpitest.c

/* program hello */
/* Adapted from mpihello.f by drs */

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>   /* for gethostname() */

int main(int argc, char **argv)
{
    int rank;
    char hostname[256];

    MPI_Init(&argc, &argv);                /* start up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process, 0..NP-1 */
    gethostname(hostname, 255);            /* name of the node we are running on */
    printf("Hello world!  I am process number: %d on host %s\n", rank, hostname);
    MPI_Finalize();                        /* shut down MPI */
    return 0;
}

Example of MPI Program Execution Using mpiexec

The preferred way to start MPI programs on the Linux cluster is mpiexec. Its main advantage over mpirun is that no setup file has to be sourced when the program is executed (you must still source the setup file before compiling), and the resulting PBS script is much simpler to write. The example MPI program above can be executed with the following PBS script, which uses mpiexec.
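For comparison, here is a minimal sketch of the two launch styles inside a PBS job (the -np and -machinefile flags follow the MPICH 1.2.x mpirun conventions). With mpirun you must hand over the node list and process count yourself; this cluster's mpiexec reads the PBS allocation on its own:

# with mpirun, the process count and node file must be supplied by hand:
set NP = `wc -l < $PBS_NODEFILE`
mpirun -np $NP -machinefile $PBS_NODEFILE ./mpitest

# with mpiexec, the PBS allocation is discovered automatically:
mpiexec ./mpitest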

Sample mpitest.pbs Using mpiexec

#!/bin/csh
#*** The "#PBS" lines must come before any non-blank non-comment lines ***
#PBS -l walltime=30:00,nodes=4:myri:ppn=2

set mycommand = "./mpitest"  # note "./" is used to not rely on $PATH
set myargs    = ""           # arguments for $mycommand
set infiles   = ""           # list of input files
set outfiles  = ""           # list of output files

if ($?PBS_NODEFILE && $?PBS_JOBID) then
    # count the number of processors assigned by PBS
    set NP = `wc -l < $PBS_NODEFILE`
    echo "Running on $NP processors: "`cat $PBS_NODEFILE`
else
    echo "This script must be submitted to PBS with 'qsub -l nodes=X'"
    exit 1
endif

# set up the job and input files, exiting if anything went wrong
cd "$PBS_O_WORKDIR" || exit 1
if ("$infiles" != "") then        # copy any input files to scratch space
    foreach file ($infiles)
        cp -v $file /scratch/ || exit 1
    end
endif
cd /scratch || exit 1

# run the command, saving the exit value
# (in csh the last command's exit status is $status, not $?)
mpiexec $PBS_O_WORKDIR/$mycommand $myargs
set ret = $status

# get the output files
cd "$PBS_O_WORKDIR"
if ("$outfiles" != "") then       # copy any output files back from scratch
    foreach file ($outfiles)
        cp -vf /scratch/`basename $file` $file
    end
endif

echo "Done   " `date`
exit $ret
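A note on the processor count: with "-l nodes=4:myri:ppn=2", PBS writes each assigned host into $PBS_NODEFILE once per ppn slot, so wc -l yields 4 x 2 = 8, matching the "Running on 8 processors" line in the sample run below. You can confirm this from inside a job (illustrative commands, not part of the script):

cat $PBS_NODEFILE        # each host is listed ppn times, e.g. hpc0636 hpc0636 hpc0634 ...
wc -l < $PBS_NODEFILE    # the number of MPI processes the job can start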


Sample Run

To run a job, the first step is to source the setup file:

  hpc-master: source /usr/usc/mpich/1.2.5..12/gm-cdk/setup.csh
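The setup file adjusts your environment to point at this MPICH build. As an optional sanity check (assuming the build keeps its compiler wrappers in a bin directory under the tree sourced above), confirm that the mpicc found first in your PATH comes from it:

  hpc-master: which mpicc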

The second step is to compile the program:

  hpc-master: mpicc -o mpitest mpitest.c
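mpicc is a wrapper that invokes the system C compiler with the MPI include and library flags added. MPICH's wrappers accept a -show option that prints the underlying compile/link command without executing it, which is handy when a build fails:

  hpc-master: mpicc -show -o mpitest mpitest.c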

The third step is to submit the .pbs file to the queue. When the job submission is successful, the qsub command prints the job's ID:

  hpc-master: qsub mpitest.pbs
12240.hpc-pbs.usc.edu
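While the job is waiting in the queue or running, it can be monitored with the standard PBS qstat command, by job ID or by username:

  hpc-master: qstat 12240
  hpc-master: qstat -u nirmal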

The output files for the run are created in the directory where the qsub command was executed. They are named after your PBS script, with an extension of "e" (standard error) or "o" (standard output) followed by the job number. In the example above, mpitest.pbs.e12240 contains the error messages and mpitest.pbs.o12240 contains the program's output. The output can be viewed by listing the files:

  hpc-master: ls -l mpitest.pbs.*
-rw-------    1 nirmal   rds             0 Jul 16 14:59 mpitest.pbs.e12240
-rw-------    1 nirmal   rds          1442 Jul 16 14:59 mpitest.pbs.o12240
  hpc-master: more mpitest.pbs.*
::::::::::::::
mpitest.pbs.e12240
::::::::::::::
::::::::::::::
mpitest.pbs.o12240
::::::::::::::
----------------------------------------
Begin PBS Prologue Wed Dec  8 16:49:24 PST 2004
Job ID:         12240.hpc-pbs.usc.edu
Username:       nirmal
Group:          rds
Nodes:          hpc0632 hpc0633 hpc0634 hpc0636
PVFS: mounting at /scratch
End PBS Prologue Wed Dec  8 16:49:27 PST 2004
----------------------------------------
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
Starting 12240.hpc-pbs.usc.edu Wed Dec 8 16:49:27 PST 2004
Initiated on hpc0636
Running on 8 processors: hpc0636 hpc0636 hpc0634 hpc0634 hpc0633 hpc0633 hpc0632 hpc0632
Hello world!  I am process number: 4 on host hpc0632
Hello world!  I am process number: 3 on host hpc0634
Hello world!  I am process number: 0 on host hpc0635
Hello world!  I am process number: 2 on host hpc0634
Hello world!  I am process number: 6 on host hpc0631
Hello world!  I am process number: 1 on host hpc0635
Hello world!  I am process number: 5 on host hpc0632
Hello world!  I am process number: 7 on host hpc0631
Done    Wed Dec 8 16:49:28 PST 2004
--------------------------------------------------
Begin PBS Epilogue Wed Dec  8 16:49:33 PST 2004
Job ID:         12240.hpc-pbs.usc.edu
Username:       nirmal
Group:          rds
Job Name:       mpitest.pbs
Session:        4020
Limits:         neednodes=4:myri:ppn=2,nodes=4:myri:ppn=2,walltime=00:02:00
Resources:      cput=00:00:00,mem=12648kb,vmem=25752kb,walltime=00:00:04
Queue:          quick
Account:
Nodes:          hpc0632 hpc0633 hpc0634 hpc0636
Killing leftovers...
End PBS Epilogue Wed Dec  8 16:49:35 PST 2004
--------------------------------------------------
