
Finally Learned MPI

March 18, 2009

I have used several codes that rely on MPI, but I had never written one from scratch; I had only added to codes where others had already done the MPI work. To be honest, I had a hard time reading the online documentation. However, a post doc showed me a code (below) that opened my eyes to what is going on. I am posting it in case it could be of some help to others:

To compile, run:
mpicc testmpi.c

To run, type:
mpirun -np 4 ./a.out

The trick is to understand that each of the 4 processes runs the whole code but is assigned a rank. As you can see in the for loop and the print statement, a conditional on rank is how each process knows which part of the work is "its" job. First you have to call MPI_Init and get the rank and the size, where the size comes from the -np 4 passed above. The MPI_Barrier call pauses the program until all the processes have caught up with each other, and the MPI_Reduce call takes the value each process has computed and combines them on the master. As you can see, I am taking each partial sum and "MPI_SUM"-ing them together as "MPI_DOUBLE"s, feeding the result into the variable Sum on the master process, rank 0.

I then make the master processor print the total Sum.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char** argv)
{
    int rank, size;
    long long i;
    const long long N = 1000000000;  /* total number of terms to sum */
    double sum = 0, Sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id: 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes, from -np 4 */

    /* each rank sums only its own slice of [0, N); the long long
       arithmetic avoids overflowing int in (rank + 1)*N */
    for (i = rank*N/size; i < (rank + 1)*N/size; i++) {
        sum += 1.0;
    }
    printf("%d gets sum = %g\n", rank, sum);

    MPI_Barrier(MPI_COMM_WORLD);
    /* combine the partial sums on rank 0 */
    MPI_Reduce(&sum, &Sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Total sum %g!\n", Sum);

    MPI_Finalize();
    return 0;
}
