The purpose of the MPi library is to provide a simplified, C++-style API to the MPI routines for standard types (those for which an MPI type exists) and for composite higher-level objects, in particular the TRIQS arrays and Green’s functions.

The communication routines in the C API of the MPI library require several parameters, as the signature of the reduce operation shows:

int MPI_Reduce(void *sendbuf, void *recvbuf, int count,
               MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)

In principle, all parameters except for the communicator and id of the root process can be determined from the variable or object to be transmitted. In most cases, we use MPI_COMM_WORLD as the communicator, take the id 0 for the root process and use MPI_SUM as the operation.

This allows us to write

int a = 5;
a = mpi::reduce(a);

Such an interface is simpler to use and much less error-prone. For higher-level objects, such as vectors or higher-dimensional arrays, the simplification is even more significant. Take the scatter and gather operations as examples:

int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                void *recvbuf, int recvcount, MPI_Datatype recvtype, int root,
                MPI_Comm comm)
int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype,
               void *recvbuf, int recvcount, MPI_Datatype recvtype, int root,
               MPI_Comm comm)

In order to scatter a (contiguous) multidimensional array across all nodes, apply some operation to it and gather it back on the master, one needs several lines of relatively complex code: the leading dimension of the array has to be sliced, the length and the address of the first element of each slice have to be computed, and finally the MPI C API function has to be called. This can be packaged in the library once and for all.

Using the library these operations look as follows:

nda::array<int, 3> A(8, 8, 8);          // a three-dimensional array
nda::array<int, 3> B = mpi::scatter(A); // each node receives a slice of A
// do something with the corresponding part of A on each node
A = mpi::gather(B);                     // gather the pieces back on the master

All index computations are encapsulated in the MPI library calls.

In this library, we employ metaprogramming techniques for type deduction, as well as a lazy mechanism to avoid unnecessary copies of data.


In this document, we describe the use of the TRIQS MPI library. For more information on MPI itself, see, e.g., the Open MPI web pages or consult the MPI documentation.

Supported functions and types

Currently, the TRIQS MPI library supports the following operations:

broadcast
reduce
scatter
gather

These routines have the same meaning as their corresponding MPI analogues. They work for all ‘basic’ types, i.e. types for which a native MPI-type exists. These are:

int
long
unsigned long
double
std::complex<double>

We also support std::vector<T> for T being a basic type, as well as the types provided by the TRIQS array and TRIQS gf libraries. In addition, the library provides a mechanism to enable MPI support for custom containers based on the array or gf libraries.

Basic usage

In order to create an MPI environment, set up the communicator and broadcast a variable, use the following code block:

int main(int argc, char *argv[]) {

  mpi::environment env(argc, argv);
  mpi::communicator world;

  int a = 5;
  mpi::broadcast(a, world);
}


The declaration of the communicator is optional. If no communicator is passed to the routine, MPI_COMM_WORLD is used by default.

All collective operations have the same signature. They take up to three arguments:

reduce(T const &x, communicator = {}, int root = 0)

Here T can be any supported type. The communicator is optional. By default, the data will be collected on (or transmitted from) the process with id 0.


Support for built-in types is provided by the header mpi/mpi.hpp, while string, pair and vector have their own headers mpi/string.hpp, mpi/pair.hpp and mpi/vector.hpp. Support for array types is provided by nda/mpi.hpp.

MPI example

#include <nda/nda.hpp>
#include <nda/mpi.hpp>
#include <iostream>

using namespace nda;

int main(int argc, char *argv[]) {

  mpi::environment env(argc, argv);
  mpi::communicator world;

  int a = 5;
  a = mpi::reduce(a);

  array<int, 2> A(2, 10);
  A() = 1;

  std::cout << A << std::endl;

  A += world.rank();

  std::cout << A << std::endl;
}

Simple MPI example.