Saturday, June 21, 2014

MPI commands that will be used in this blog


  In this post I will explain the basic MPI commands used throughout this blog and the accompanying video tutorials.


 MPI_Init( &argc, &argv );

This command is used at the beginning of our code.
It initializes the MPI environment so that parallel communication between processes can begin.

MPI_Finalize();

This command is used at the end of our code.
It terminates the MPI execution environment.



   MPI_Comm_size( MPI_COMM_WORLD, &nprocs );
 where nprocs is an integer.
This command stores the total number of processes in nprocs.



   MPI_Comm_rank( MPI_COMM_WORLD, &myproc );
where myproc is an integer.
This command gives each process an identifying number (its rank), which is different for each process.
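Putting the four commands above together, a minimal sketch might look like this (the variable names nprocs and myproc match the ones used above; running it requires an MPI installation):

```c
/* Minimal MPI sketch: each process prints its rank and the
   total number of processes. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int nprocs, myproc;

    MPI_Init(&argc, &argv);                  /* start parallel communication */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);  /* total number of processes    */
    MPI_Comm_rank(MPI_COMM_WORLD, &myproc);  /* this process's own rank      */

    printf("I am process %d of %d\n", myproc, nprocs);

    MPI_Finalize();                          /* shut down the MPI environment */
    return 0;
}
```

Compile with mpicc and launch with mpirun (for example, mpirun -np 4 ./a.out); each of the four processes prints its own rank.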



   MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
where buf is the initial address of the send buffer
where count is the number of elements in the send buffer (nonnegative integer)
where datatype is the datatype of each send buffer element (e.g. MPI_INT)
where dest is the rank of the destination process (ranks are the numbers each process obtains with MPI_Comm_rank)
where tag is the message tag (integer), used to distinguish the messages sent
where comm is the communicator; in these examples we use MPI_COMM_WORLD
This command lets one process send data to another.


 MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
where buf is the initial address of the receive buffer   OUTPUT
where count is the maximum number of elements in the receive buffer (nonnegative integer)
where datatype is the datatype of each receive buffer element
where source is the rank of the source process (ranks are the numbers each process obtains with MPI_Comm_rank)
where tag is the message tag, used to distinguish the messages received
where comm is the communicator; in these examples we use MPI_COMM_WORLD
where status is the status object   OUTPUT



MPI_Send and MPI_Recv are blocking calls: MPI_Recv does not return until a matching message has arrived, and MPI_Send may not return until the send buffer is safe to reuse.
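As an illustration (a sketch assuming at least two processes, launched with mpirun -np 2), process 0 sends one integer to process 1; the tag and communicator must match on both sides:

```c
/* Sketch of blocking point-to-point communication:
   process 0 sends a single integer to process 1. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myproc, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myproc);

    if (myproc == 0) {
        value = 42;  /* hypothetical payload, just for illustration */
        /* send 1 MPI_INT to rank 1, with tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("process 0 sent %d\n", value);
    } else if (myproc == 1) {
        /* blocks until the matching message from rank 0 arrives */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Note that because both calls block, a pair of processes that each call MPI_Send before MPI_Recv can deadlock; here the send and receive are on different processes, so the exchange completes.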
