TRIQS/mpi 1.3.0
C++ interface to MPI
mpi implements various high-level C++ wrappers around the low-level routines of the MPI C library. It is not intended as a full replacement for the C implementation. Instead, it tries to help the user with the most common tasks like initializing and finalizing an MPI environment or sending data via collective communications.
The following provides detailed reference documentation grouped into logical units.
If you are looking for a specific function, class, etc., try using the search bar in the top left corner.
MPI essentials provide the user with two classes necessary for any MPI program:

- mpi::environment: Calls MPI_Init in its constructor and MPI_Finalize in its destructor. There should be at most one instance in every program and it is usually created at the very beginning of the main function.
- mpi::communicator: Wraps an MPI_Comm object. Besides storing the MPI_Comm object, it also provides some convenient functions for getting the size of the communicator, the rank of the current process or for splitting an existing communicator.

It further contains the convenient function mpi::is_initialized and the static boolean mpi::has_env.
MPI datatypes and operations map various C++ datatypes to MPI datatypes and help the user with registering their own datatypes to be used in MPI communications. Furthermore, it offers tools to simplify the creation of custom MPI operations usually required in MPI_Reduce or MPI_Accumulate calls.
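For example, a user-defined type can be registered by exposing its members as a tuple (a sketch assuming the mpi::mpi_type_from_tie helper; the my_complex struct is made up for the example):

    #include <mpi/mpi.hpp>
    #include <tuple>

    // made-up user-defined type
    struct my_complex {
      double re, im;
    };

    // expose the members as a tuple; found via argument-dependent lookup
    inline auto tie_data(my_complex &x) { return std::tie(x.re, x.im); }

    // register the corresponding MPI datatype
    template <> struct mpi::mpi_type<my_complex> : mpi::mpi_type_from_tie<my_complex> {};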
The following generic collective communications are defined in Collective MPI communication:

- mpi::all_gather
- mpi::all_reduce
- mpi::all_reduce_in_place
- mpi::broadcast
- mpi::gather
- mpi::reduce
- mpi::reduce_in_place
- mpi::scatter

They offer a much simpler interface than their MPI C library analogs. For example, the following broadcasts a std::vector<double> from the process with rank 0 to all others:
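(A minimal snippet; the communicator and root arguments are defaulted and can be omitted.)

    std::vector<double> vec(10);
    mpi::communicator comm;

    // broadcast the vector from rank 0 to all other processes
    mpi::broadcast(vec, comm, 0);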
Compare this with the call to the C library:
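    std::vector<double> vec(10);

    // the raw MPI-C equivalent of the broadcast above
    MPI_Bcast(vec.data(), static_cast<int>(vec.size()), MPI_DOUBLE, 0, MPI_COMM_WORLD);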
Under the hood, the generic mpi::broadcast implementation calls the specialized mpi::mpi_broadcast(std::vector< T >&, mpi::communicator, int). The other generic functions are implemented in the same way. See the "Functions" section in Collective MPI communication to check which datatypes are supported out of the box.
In case your datatype is not supported, you are free to provide your own specialization.
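Since the generic functions dispatch to the specialized mpi_* free functions via argument-dependent lookup, a sketch of such a specialization for a hypothetical user-defined type could look like this (the parameters struct and its member are made up):

    // made-up type whose member is already broadcastable
    struct parameters {
      std::vector<double> values;
    };

    // picked up by the generic mpi::broadcast via argument-dependent lookup
    void mpi_broadcast(parameters &p, mpi::communicator c = {}, int root = 0) {
      mpi::broadcast(p.values, c, root);
    }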
Furthermore, there are several *_range variants of the collective operations (e.g. mpi::broadcast_range) that simplify communicating generic, contiguous ranges.
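As a rough sketch, assuming the signature mpi::broadcast_range(range, communicator, root), broadcasting only part of a vector might look like this:

    std::vector<double> vec(100);
    mpi::communicator comm;

    // broadcast only the first half of the vector (std::span models a contiguous range)
    mpi::broadcast_range(std::span{vec.data(), 50}, comm, 0);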
Lazy MPI communication can be used to provide collective MPI communication for lazy expression types. Most users probably won't need to use this functionality directly.
We refer the interested reader to TRIQS/nda for more details.
Event handling provides the mpi::monitor class which can be used to communicate and handle events across multiple processes.
Example 2: Use monitor to communicate errors shows a simple use case.
Utilities is a collection of various other tools in mpi which do not fit into any of the other categories above.
For users, the most useful of them is probably mpi::check_mpi_call, a wrapper function that checks the error code returned by MPI C library routines and throws an exception in case the code is != MPI_SUCCESS.
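A short usage sketch (assuming check_mpi_call takes the error code and the name of the MPI routine for the error message):

    mpi::communicator comm;

    // throws an exception if MPI_Barrier does not return MPI_SUCCESS
    mpi::check_mpi_call(MPI_Barrier(comm.get()), "MPI_Barrier");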