TRIQS/mpi 1.3.0
C++ interface to MPI
Generic and specialized implementations for a subset of collective MPI communications (broadcast, reduce, gather, scatter).
mpi provides several generic collective communication routines as well as specializations for certain common types. The generic functions usually simply forward the call to one of the specializations (mpi_broadcast, mpi_gather, mpi_gather_into, mpi_reduce, mpi_reduce_into, mpi_scatter or mpi_scatter_into) using ADL, but they can also perform some additional checks. It is therefore recommended to always use the generic versions when possible.
Here is a short overview of the available generic functions:
- mpi::broadcast: Calls the specialized mpi_broadcast.
- mpi::gather: Calls the specialized mpi_gather if it is implemented. Otherwise, it calls mpi::gather_into with a default constructed output object.
- mpi::gather_into: Calls the specialized mpi_gather_into.
- mpi::reduce: Calls the specialized mpi_reduce if it is implemented. Otherwise, it calls mpi::reduce_into with a default constructed output object.
- mpi::reduce_in_place: Calls mpi::reduce_into with the same input and output object.
- mpi::reduce_into: Calls the specialized mpi_reduce_into.
- mpi::scatter: Calls the specialized mpi_scatter if it is implemented. Otherwise, it calls mpi::scatter_into with a default constructed output object.
- mpi::scatter_into: Calls the specialized mpi_scatter_into.
In case all processes should receive the result of the MPI operation, one can use the convenience functions mpi::all_gather, mpi::all_gather_into, mpi::all_reduce, mpi::all_reduce_in_place or mpi::all_reduce_into. They forward the given arguments to their "non-all" counterparts with the all argument set to true.
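As a quick orientation, the following minimal sketch uses the generic functions on a few common types. It assumes the library's usual setup outside of this page, i.e. an umbrella header mpi/mpi.hpp and an mpi::environment RAII wrapper, together with a default constructed mpi::communicator (assumed to wrap MPI_COMM_WORLD); adjust these names if your setup differs.

    #include <mpi/mpi.hpp> // umbrella header (assumed); the individual headers are listed below
    #include <iostream>
    #include <vector>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv); // RAII wrapper that initializes/finalizes MPI (assumed name)
      mpi::communicator world;          // default constructed communicator

      // Generic broadcast: forwards to the specialized mpi_broadcast via ADL.
      std::vector<int> v;
      if (world.rank() == 0) v = {1, 2, 3, 4, 5};
      mpi::broadcast(v, world, 0);

      // Generic reduce: the root rank receives the element-wise sum, other ranks get an empty vector.
      auto v_sum = mpi::reduce(v, world, 0);

      // Convenience all-reduce: every rank receives the sum of all ranks.
      int r = mpi::all_reduce(world.rank(), world);
      std::cout << "rank " << world.rank() << ": sum of ranks = " << r << "\n";
    }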
mpi provides various specializations for several types. For example, there are specializations for std::array, std::pair, std::string and std::vector, as well as for types that have a corresponding MPI datatype.
Users are encouraged to implement their own specializations for their custom types or in case a specialization is missing (see e.g. Example 4: Provide custom specializations).
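The sketch below illustrates the ADL mechanism for a hypothetical user type (my_lib::config is not part of the library): a free mpi_broadcast in the type's own namespace is picked up by the generic mpi::broadcast, which here simply forwards the members to the existing specializations.

    #include <mpi/mpi.hpp>
    #include <string>

    namespace my_lib {

      // Hypothetical user type.
      struct config {
        int nsteps = 0;
        double beta = 0.0;
        std::string name;
      };

      // Member-wise broadcast, found via ADL by the generic mpi::broadcast.
      void mpi_broadcast(config &cfg, mpi::communicator c = {}, int root = 0) {
        mpi::broadcast(cfg.nsteps, c, root);
        mpi::broadcast(cfg.beta, c, root);
        mpi::broadcast(cfg.name, c, root);
      }

    } // namespace my_lib

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv); // assumed setup, see the first sketch above
      mpi::communicator world;

      my_lib::config cfg;
      if (world.rank() == 0) cfg = {1000, 10.0, "run1"};
      mpi::broadcast(cfg, world, 0); // dispatches to my_lib::mpi_broadcast
    }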
Furthermore, there are several functions to simplify communicating (contiguous) ranges: mpi::broadcast_range, mpi::gather_range, mpi::reduce_range and mpi::scatter_range. Some of these range functions are more generic than others. Please check the documentation of the specific function for more details.
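A small sketch of the range helpers on contiguous buffers (same setup assumptions as in the first sketch above); note that mpi::broadcast_range expects equally sized ranges on all processes, see its documentation below.

    #include <mpi/mpi.hpp>
    #include <algorithm>
    #include <vector>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      // broadcast_range: the range must already have the same size on every rank.
      std::vector<double> data(100, 0.0);
      if (world.rank() == 0) std::fill(data.begin(), data.end(), 1.0);
      mpi::broadcast_range(data, world, 0);

      // reduce_range: element-wise reduction into an output range on the root
      // (empty output on non-receiving ranks, as in the std::vector overloads).
      std::vector<double> result(world.rank() == 0 ? data.size() : 0);
      mpi::reduce_range(data, result, world, 0);
    }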
Functions

template<typename T>
bool mpi::all_equal (T const &x, communicator c={})
    Checks if a given object is equal across all ranks in the given communicator.

template<typename T>
decltype(auto) mpi::all_gather (T &&x, communicator c={})
    Generic MPI all-gather.

template<typename T1, typename T2>
void mpi::all_gather_into (T1 &&x_in, T2 &&x_out, communicator c={})
    Generic MPI all-gather that gathers directly into an existing output object.

template<typename T>
decltype(auto) mpi::all_reduce (T &&x, communicator c={}, MPI_Op op=MPI_SUM)
    Generic MPI all-reduce.

template<typename T>
void mpi::all_reduce_in_place (T &&x, communicator c={}, MPI_Op op=MPI_SUM)
    Generic MPI all-reduce in place.

template<typename T1, typename T2>
void mpi::all_reduce_into (T1 &&x_in, T2 &&x_out, communicator c={}, MPI_Op op=MPI_SUM)
    Generic MPI all-reduce that reduces directly into an existing output object.

template<typename T>
void mpi::broadcast (T &&x, communicator c={}, int root=0)
    Generic MPI broadcast.

template<std::ranges::sized_range R>
void mpi::broadcast_range (R &&rg, communicator c={}, int root=0)
    Implementation of an MPI broadcast for std::ranges::sized_range objects.

template<typename T>
decltype(auto) mpi::gather (T &&x, communicator c={}, int root=0, bool all=false)
    Generic MPI gather.

template<typename T1, typename T2>
void mpi::gather_into (T1 &&x_in, T2 &&x_out, communicator c={}, int root=0, bool all=false)
    Generic MPI gather that gathers directly into an existing output object.

template<MPICompatibleRange R1, MPICompatibleRange R2> requires (std::same_as<std::remove_cvref_t<std::ranges::range_value_t<R1>>, std::remove_cvref_t<std::ranges::range_value_t<R2>>>)
void mpi::gather_range (R1 &&in_rg, R2 &&out_rg, communicator c={}, int root=0, bool all=false)
    Implementation of an MPI gather for mpi::MPICompatibleRange objects.

template<typename T, std::size_t N>
void mpi::mpi_broadcast (std::array< T, N > &arr, communicator c={}, int root=0)
    Implementation of an MPI broadcast for a std::array.

template<typename T1, typename T2>
void mpi::mpi_broadcast (std::pair< T1, T2 > &p, communicator c={}, int root=0)
    Implementation of an MPI broadcast for a std::pair.

void mpi::mpi_broadcast (std::string &s, communicator c, int root)
    Implementation of an MPI broadcast for a std::string.

template<typename T>
void mpi::mpi_broadcast (std::vector< T > &v, communicator c={}, int root=0)
    Implementation of an MPI broadcast for a std::vector.

template<typename T> requires (has_mpi_type<T>)
void mpi::mpi_broadcast (T &x, communicator c={}, int root=0)
    Implementation of an MPI broadcast for types that have a corresponding MPI datatype.

template<typename T> requires (has_mpi_type<T>)
std::vector< T > mpi::mpi_gather (T const &x, communicator c={}, int root=0, bool all=false)
    Implementation of an MPI gather for types that have a corresponding MPI datatype.

void mpi::mpi_gather_into (std::string const &s_in, std::string &s_out, communicator c={}, int root=0, bool all=false)
    Implementation of an MPI gather for a std::string that gathers directly into an existing output string.

template<typename T>
void mpi::mpi_gather_into (std::vector< T > const &v_in, std::vector< T > &v_out, communicator c={}, int root=0, bool all=false)
    Implementation of an MPI gather for a std::vector that gathers directly into an existing output vector.

template<typename T, MPICompatibleRange R> requires (has_mpi_type<T> && std::same_as<T, std::remove_cvref_t<std::ranges::range_value_t<R>>>)
void mpi::mpi_gather_into (T const &x, R &&rg, communicator c={}, int root=0, bool all=false)
    Implementation of an MPI gather that gathers directly into an existing output range for types that have a corresponding MPI datatype.

template<typename T, std::size_t N>
auto mpi::mpi_reduce (std::array< T, N > const &arr, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Implementation of an MPI reduce for a std::array.

template<typename T1, typename T2>
auto mpi::mpi_reduce (std::pair< T1, T2 > const &p, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Implementation of an MPI reduce for a std::pair.

template<typename T>
auto mpi::mpi_reduce (std::vector< T > const &v, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Implementation of an MPI reduce for a std::vector.

template<typename T> requires (has_mpi_type<T>)
T mpi::mpi_reduce (T const &x, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Implementation of an MPI reduce for types that have a corresponding MPI datatype.

template<typename T1, std::size_t N1, typename T2, std::size_t N2>
void mpi::mpi_reduce_into (std::array< T1, N1 > const &arr_in, std::array< T2, N2 > &arr_out, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Implementation of an MPI reduce for a std::array that reduces directly into an existing output array.

template<typename T1, typename T2>
void mpi::mpi_reduce_into (std::vector< T1 > const &v_in, std::vector< T2 > &v_out, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Implementation of an MPI reduce for a std::vector that reduces directly into a given output vector.

template<typename T> requires (has_mpi_type<T>)
void mpi::mpi_reduce_into (T const &x_in, T &x_out, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Implementation of an MPI reduce that reduces directly into an existing output object for types that have a corresponding MPI datatype.

template<typename T>
void mpi::mpi_scatter_into (std::vector< T > const &v_in, std::vector< T > &v_out, communicator c={}, int root=0)
    Implementation of an MPI scatter for a std::vector that scatters directly into an existing output vector.

template<typename T>
decltype(auto) mpi::reduce (T &&x, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Generic MPI reduce.

template<typename T>
void mpi::reduce_in_place (T &&x, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Generic in place MPI reduce.

template<typename T1, typename T2>
void mpi::reduce_into (T1 &&x_in, T2 &&x_out, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Generic MPI reduce that reduces directly into an existing output object.

template<std::ranges::sized_range R1, std::ranges::sized_range R2>
void mpi::reduce_range (R1 &&in_rg, R2 &&out_rg, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
    Implementation of an MPI reduce for std::ranges::sized_range objects.

template<typename T>
decltype(auto) mpi::scatter (T &&x, mpi::communicator c={}, int root=0)
    Generic MPI scatter.

template<typename T1, typename T2>
void mpi::scatter_into (T1 &&x_in, T2 &&x_out, communicator c={}, int root=0)
    Generic MPI scatter that scatters directly into an existing output object.

template<MPICompatibleRange R1, MPICompatibleRange R2> requires (std::same_as<std::remove_cvref_t<std::ranges::range_value_t<R1>>, std::remove_cvref_t<std::ranges::range_value_t<R2>>>)
void mpi::scatter_range (R1 &&in_rg, R2 &&out_rg, long scatter_size, communicator c={}, int root=0, long chunk_size=1)
    Implementation of an MPI scatter for mpi::MPICompatibleRange objects.
bool mpi::all_equal (T const &x, communicator c = {})
#include <mpi/generic_communication.hpp>
Checks if a given object is equal across all ranks in the given communicator.
It makes two calls to mpi::all_reduce, one with MPI_MIN and the other with MPI_MAX, and compares their results.
Note that MPI_MIN and MPI_MAX need to make sense for the given type T.
T | Type to be checked. |
x | Object to be equality compared. |
c | mpi::communicator. |
Definition at line 297 of file generic_communication.hpp.
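A typical use is a sanity check that all ranks agree on an input parameter before starting a computation (a usage sketch based on the signature above):

    #include <mpi/mpi.hpp>
    #include <stdexcept>

    // Throws on every rank if the parameter is not the same everywhere.
    void check_consistent(double beta, mpi::communicator c) {
      // MPI_MIN and MPI_MAX must be meaningful for the compared type.
      if (!mpi::all_equal(beta, c)) throw std::runtime_error("beta differs across MPI ranks");
    }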
decltype(auto) mpi::all_gather (T &&x, communicator c = {})
#include <mpi/generic_communication.hpp>
Generic MPI all-gather.
It simply calls mpi::gather with all = true.
Definition at line 271 of file generic_communication.hpp.
void mpi::all_gather_into (T1 &&x_in, T2 &&x_out, communicator c = {})
#include <mpi/generic_communication.hpp>
Generic MPI all-gather that gathers directly into an existing output object.
It simply calls mpi::gather_into with all = true.
Definition at line 280 of file generic_communication.hpp.
decltype(auto) mpi::all_reduce (T &&x, communicator c = {}, MPI_Op op = MPI_SUM)
#include <mpi/generic_communication.hpp>
Generic MPI all-reduce.
It simply calls mpi::reduce with all = true.
Definition at line 245 of file generic_communication.hpp.
void mpi::all_reduce_in_place (T &&x, communicator c = {}, MPI_Op op = MPI_SUM)
#include <mpi/generic_communication.hpp>
Generic MPI all-reduce in place.
It simply calls mpi::reduce_in_place with all = true.
Definition at line 254 of file generic_communication.hpp.
void mpi::all_reduce_into (T1 &&x_in, T2 &&x_out, communicator c = {}, MPI_Op op = MPI_SUM)
#include <mpi/generic_communication.hpp>
Generic MPI all-reduce that reduces directly into an existing output object.
It simply calls mpi::reduce_into with all = true.
Definition at line 263 of file generic_communication.hpp.
void mpi::broadcast (T &&x, communicator c = {}, int root = 0)
#include <mpi/generic_communication.hpp>
Generic MPI broadcast.
It calls the specialized mpi_broadcast function.
T | Type to be broadcasted. |
x | Object to be broadcasted (into). |
c | mpi::communicator. |
root | Rank of the root process. |
Definition at line 68 of file generic_communication.hpp.
void mpi::broadcast_range (R &&rg, communicator c = {}, int root = 0)
#include <mpi/ranges.hpp>
Implementation of an MPI broadcast for std::ranges::sized_range objects.
The behaviour of this function is as follows: it calls MPI_Bcast and broadcasts the elements from the input range on the root process to all other processes.
It throws an exception in case a call to the MPI C library fails and it expects that the input range size is equal on all processes.
R | std::ranges::sized_range type. |
rg | Range to be broadcasted (into). |
c | mpi::communicator. |
root | Rank of the root process. |
Definition at line 69 of file ranges.hpp.
decltype(auto) mpi::gather (T &&x, communicator c = {}, int root = 0, bool all = false)
#include <mpi/generic_communication.hpp>
Generic MPI gather.
If there is a specialized mpi_gather for the given type, we call it. Otherwise, we call mpi::gather_into with the given input object and a default constructed output object of type T.
T | Type to be gathered. |
x | Object to be gathered. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the gather. |
Returns the result of the specialized mpi_gather call.
Definition at line 208 of file generic_communication.hpp.
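For example, gathering one scalar per rank produces a std::vector on the root (and an empty vector elsewhere), since the call forwards to the mpi_gather specialization for types with an MPI datatype (usage sketch, same setup assumptions as in the first sketch above):

    #include <mpi/mpi.hpp>
    #include <iostream>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      int x = world.rank() * world.rank();
      auto v = mpi::gather(x, world, 0); // std::vector<int>, empty on non-root ranks

      if (world.rank() == 0)
        for (auto y : v) std::cout << y << " ";
    }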
void mpi::gather_into (T1 &&x_in, T2 &&x_out, communicator c = {}, int root = 0, bool all = false)
#include <mpi/generic_communication.hpp>
Generic MPI gather that gathers directly into an existing output object.
It calls the specialized mpi_gather_into function.
T1 | Type to be gathered. |
T2 | Type to be gathered into. |
x_in | Object to be gathered. |
x_out | Object to be gathered into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the gather. |
Definition at line 235 of file generic_communication.hpp.
void mpi::gather_range (R1 &&in_rg, R2 &&out_rg, communicator c = {}, int root = 0, bool all = false)
#include <mpi/ranges.hpp>
Implementation of an MPI gather for mpi::MPICompatibleRange objects.
The behaviour of this function is as follows: it calls MPI_Gatherv or MPI_Allgatherv to gather the elements from the input ranges on all processes into the output ranges on receiving processes.
This is the inverse operation of mpi::scatter_range. The numbers of elements to be gathered do not have to be equal on all processes.
It throws an exception in case a call to the MPI C library fails and it expects that the output range size on receiving processes equals the number of elements to be gathered.
R1 | mpi::MPICompatibleRange type. |
R2 | mpi::MPICompatibleRange type. |
in_rg | Range to be gathered. |
out_rg | Range to be gathered into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the gather operation. |
Definition at line 278 of file ranges.hpp.
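Since the per-rank input sizes may differ, the output range on the root has to be sized to the total number of gathered elements, e.g. via an all-reduce of the local sizes (a sketch under the same setup assumptions as in the first sketch above):

    #include <mpi/mpi.hpp>
    #include <vector>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      // Each rank contributes a different number of elements.
      std::vector<int> local(world.rank() + 1, world.rank());

      // Total number of elements to be gathered on the root.
      long ntot = mpi::all_reduce(static_cast<long>(local.size()), world);

      std::vector<int> gathered(world.rank() == 0 ? ntot : 0);
      mpi::gather_range(local, gathered, world, 0);
    }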
void mpi::mpi_broadcast (std::array< T, N > &arr, communicator c = {}, int root = 0)
#include <mpi/array.hpp>
Implementation of an MPI broadcast for a std::array.
It calls mpi::broadcast_range with the given array.
T | Value type of the array. |
N | Size of the array. |
arr | std::array to broadcast (into). |
c | mpi::communicator. |
root | Rank of the root process. |
void mpi::mpi_broadcast (std::pair< T1, T2 > &p, communicator c = {}, int root = 0)
#include <mpi/pair.hpp>
Implementation of an MPI broadcast for a std::pair.
It calls the generic mpi::broadcast for the first and second element of the pair.
T1 | Type of the first element of the pair. |
T2 | Type of the second element of the pair. |
p | std::pair to broadcast. |
c | mpi::communicator. |
root | Rank of the root process. |
void mpi::mpi_broadcast (std::string &s, communicator c, int root)  [inline]
#include <mpi/string.hpp>
Implementation of an MPI broadcast for a std::string.
It first broadcasts the size of the string from the root process to all other processes, then resizes the string on all non-root processes and calls mpi::broadcast_range with the (resized) input string.
s | std::string to broadcast (into). |
c | mpi::communicator. |
root | Rank of the root process. |
Definition at line 47 of file string.hpp.
void mpi::mpi_broadcast (std::vector< T > &v, communicator c = {}, int root = 0)
#include <mpi/vector.hpp>
Implementation of an MPI broadcast for a std::vector.
It first broadcasts the size of the vector from the root process to all other processes, then resizes the vector on all non-root processes and calls mpi::broadcast_range with the (resized) input vector.
T | Value type of the vector. |
v | std::vector to broadcast. |
c | mpi::communicator. |
root | Rank of the root process. |
Definition at line 53 of file vector.hpp.
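Because the size is broadcast first, the vector does not need to be pre-sized on non-root ranks (usage sketch; prefer the generic mpi::broadcast, which forwards here):

    #include <mpi/mpi.hpp>
    #include <vector>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      std::vector<double> v; // may be empty on non-root ranks
      if (world.rank() == 0) v = {1.0, 2.0, 3.0};
      mpi::broadcast(v, world, 0); // afterwards v == {1.0, 2.0, 3.0} on every rank
    }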
void mpi::mpi_broadcast (T &x, communicator c = {}, int root = 0)
#include <mpi/generic_communication.hpp>
Implementation of an MPI broadcast for types that have a corresponding MPI datatype.
If mpi::has_env is false or if the communicator size is < 2, it does nothing. Otherwise, it calls MPI_Bcast.
It throws an exception in case the call to the MPI C library fails.
T | Type to be broadcasted. |
x | Object to be broadcasted (into). |
c | mpi::communicator. |
root | Rank of the root process. |
Definition at line 319 of file generic_communication.hpp.
std::vector< T > mpi::mpi_gather (T const &x, communicator c = {}, int root = 0, bool all = false)
#include <mpi/generic_communication.hpp>
Implementation of an MPI gather for types that have a corresponding MPI datatype.
It constructs an output vector, resizes it on receiving ranks to the size of the communicator and calls mpi::mpi_gather_into. On non-receiving ranks the output vector is empty.
T | Type to be gathered. |
x | Object to be gathered. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the gather. |
Returns a std::vector containing the gathered objects.
Definition at line 421 of file generic_communication.hpp.
void mpi::mpi_gather_into (std::string const &s_in, std::string &s_out, communicator c = {}, int root = 0, bool all = false)  [inline]
#include <mpi/string.hpp>
Implementation of an MPI gather for a std::string that gathers directly into an existing output string.
It first all-reduces the sizes of the input strings from all processes. On receiving ranks, the output string is resized to the reduced size in case it does not already have the correct size. On non-receiving ranks, the output string is always unmodified. Then mpi::gather_range is called with the input and (resized) output strings.
s_in | std::string to gather. |
s_out | std::string to gather into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result. |
Definition at line 67 of file string.hpp.
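For example, per-rank strings can be concatenated on the root through the generic mpi::gather_into, which forwards to this overload (usage sketch, same setup assumptions as in the first sketch above):

    #include <mpi/mpi.hpp>
    #include <iostream>
    #include <string>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      std::string part = "rank " + std::to_string(world.rank()) + "; ";
      std::string all_parts;                       // resized internally on the root
      mpi::gather_into(part, all_parts, world, 0); // forwards to mpi_gather_into for std::string

      if (world.rank() == 0) std::cout << all_parts << "\n";
    }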
void mpi::mpi_gather_into (std::vector< T > const &v_in, std::vector< T > &v_out, communicator c = {}, int root = 0, bool all = false)
#include <mpi/vector.hpp>
Implementation of an MPI gather for a std::vector that gathers directly into an existing output vector.
It first all-reduces the sizes of the input vectors from all processes. On receiving ranks, the output vector is resized to the reduced size in case it does not already have the correct size. On non-receiving ranks, the output vector is always unmodified. Then mpi::gather_range is called with the input and (resized) output vector.
T | Value type of the vector. |
v_in | std::vector to gather. |
v_out | std::vector to gather into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result. |
Definition at line 141 of file vector.hpp.
void mpi::mpi_gather_into (T const &x, R &&rg, communicator c = {}, int root = 0, bool all = false)
#include <mpi/generic_communication.hpp>
Implementation of an MPI gather that gathers directly into an existing output range for types that have a corresponding MPI datatype.
If mpi::has_env is false or if the communicator size is < 2, it copies the input object into the range. Otherwise, it calls MPI_Allgather or MPI_Gather.
It throws an exception in case a call to the MPI C library fails and it expects that the range size on receiving processes is equal to the communicator size.
T | Type to be gathered. |
R | MPICompatibleRange type to be gathered into. |
x | Object to be gathered. |
rg | Range to be gathered into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the gather. |
Definition at line 447 of file generic_communication.hpp.
auto mpi::mpi_reduce (std::array< T, N > const &arr, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/array.hpp>
Implementation of an MPI reduce for a std::array.
It constructs the output array with its value type equal to the return type of reduce(std::declval<T>())
and calls mpi::reduce_range with the input and constructed output array.
Note that the output array will always have the same size as the input array, no matter if the rank receives the reduced data or not.
T | Value type of the array. |
N | Size of the array. |
arr | std::array to reduce. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Returns a std::array containing the result of the reduction.
auto mpi::mpi_reduce (std::pair< T1, T2 > const &p, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/pair.hpp>
Implementation of an MPI reduce for a std::pair.
It calls the generic mpi::reduce for the first and second element of the pair separately.
T1 | Type of the first element of the pair. |
T2 | Type of the second element of the pair. |
p | std::pair to be reduced. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Returns a std::pair containing the results of the two reductions.
auto mpi::mpi_reduce (std::vector< T > const &v, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/vector.hpp>
Implementation of an MPI reduce for a std::vector.
It first constructs the output vector with its value type equal to the return type of reduce(std::declval<T>()). On receiving ranks, the output vector is then resized to the size of the input vector. On non-receiving ranks, the output vector is always empty.
It calls mpi::reduce_range with the input and constructed output vector.
T | Value type of the vector. |
v | std::vector to reduce. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Returns a std::vector containing the result of the reduction.
Definition at line 77 of file vector.hpp.
T mpi::mpi_reduce (T const &x, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/generic_communication.hpp>
Implementation of an MPI reduce for types that have a corresponding MPI datatype.
If mpi::has_env is false or if the communicator size is < 2, it returns a copy of the input object. Otherwise, it calls MPI_Allreduce or MPI_Reduce with a default constructed output object.
It throws an exception in case the call to the MPI C library fails.
T | Type to be reduced. |
x | Object to be reduced. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Definition at line 345 of file generic_communication.hpp.
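Reductions with operations other than the default MPI_SUM work the same way, e.g. taking the maximum of a per-rank timing (usage sketch, same setup assumptions as in the first sketch above):

    #include <mpi/mpi.hpp>
    #include <iostream>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      double t_local = 0.1 * world.rank();                     // e.g. a per-rank timing
      double t_max = mpi::all_reduce(t_local, world, MPI_MAX); // every rank receives the maximum

      if (world.rank() == 0) std::cout << "slowest rank took " << t_max << " s\n";
    }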
void mpi::mpi_reduce_into (std::array< T1, N1 > const &arr_in, std::array< T2, N2 > &arr_out, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/array.hpp>
Implementation of an MPI reduce for a std::array that reduces directly into an existing output array.
It calls mpi::reduce_range with the input and output array. The output array must be the same size as the input array on receiving ranks.
T1 | Value type of the array to be reduced. |
N1 | Size of the array to be reduced. |
T2 | Value type of the array to be reduced into. |
N2 | Size of the array to be reduced into. |
arr_in | std::array to reduce. |
arr_out | std::array to reduce into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
void mpi::mpi_reduce_into (std::vector< T1 > const &v_in, std::vector< T2 > &v_out, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/vector.hpp>
Implementation of an MPI reduce for a std::vector that reduces directly into a given output vector.
It first resizes the output vector to the size of the input vector on receiving ranks and then calls mpi::reduce_range with the input and (resized) output vector.
T1 | Value type of the vector to be reduced. |
T2 | Value type of the vector to be reduced into. |
v_in | std::vector to reduce. |
v_out | std::vector to reduce into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Definition at line 100 of file vector.hpp.
void mpi::mpi_reduce_into (T const &x_in, T &x_out, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/generic_communication.hpp>
Implementation of an MPI reduce that reduces directly into an existing output object for types that have a corresponding MPI datatype.
If the addresses of the input and output objects are equal, the reduction is done in place.
If mpi::has_env is false or if the communicator size is < 2, it either does nothing (in place) or copies the input into the output object. Otherwise, it calls MPI_Allreduce or MPI_Reduce (with MPI_IN_PLACE).
It throws an exception in case the call to the MPI C library fails and it is expected that either all or none of the receiving processes choose the in place option.
T | Type to be reduced. |
x_in | Object to be reduced. |
x_out | Object to be reduced into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Definition at line 381 of file generic_communication.hpp.
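The in-place path is what the generic mpi::reduce_in_place and mpi::all_reduce_in_place use: they forward here with the same object as input and output (usage sketch, same setup assumptions as in the first sketch above):

    #include <mpi/mpi.hpp>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      long counter = world.rank();
      mpi::reduce_in_place(counter, world, 0); // the root rank now holds the sum over all ranks
    }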
void mpi::mpi_scatter_into (std::vector< T > const &v_in, std::vector< T > &v_out, communicator c = {}, int root = 0)
#include <mpi/vector.hpp>
Implementation of an MPI scatter for a std::vector that scatters directly into an existing output vector.
It first broadcasts the size of the input vector from the root process to all other processes and resizes the output vector if it does not have the correct size. The size of the output vector is determined with mpi::chunk_length. Then mpi::scatter_range is called with the input and (resized) output vector.
T | Value type of the vector. |
v_in | std::vector to scatter. |
v_out | std::vector to scatter into. |
c | mpi::communicator. |
root | Rank of the root process. |
Definition at line 119 of file vector.hpp.
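For example, the root can distribute a work array as evenly as possible; the output vector is resized internally, so it can start out empty (usage sketch via the generic mpi::scatter_into, same setup assumptions as in the first sketch above):

    #include <mpi/mpi.hpp>
    #include <numeric>
    #include <vector>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      std::vector<int> all_data;
      if (world.rank() == 0) {
        all_data.resize(100);
        std::iota(all_data.begin(), all_data.end(), 0); // 0, 1, ..., 99
      }

      std::vector<int> my_chunk; // resized internally using mpi::chunk_length
      mpi::scatter_into(all_data, my_chunk, world, 0);
    }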
decltype(auto) mpi::reduce (T &&x, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/generic_communication.hpp>
Generic MPI reduce.
If there is a specialized mpi_reduce for the given type, we call it. Otherwise, we call mpi::reduce_into with the given input object and a default constructed output object of type T.
T | Type to be reduced. |
x | Object to be reduced. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Returns the result of the specialized mpi_reduce call.
Definition at line 90 of file generic_communication.hpp.
void mpi::reduce_in_place (T &&x, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/generic_communication.hpp>
Generic in place MPI reduce.
We call mpi::reduce_into with the given object as the input and output argument.
T | Type to be reduced. |
x | Object to be reduced (into). |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Definition at line 117 of file generic_communication.hpp.
void mpi::reduce_into (T1 &&x_in, T2 &&x_out, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/generic_communication.hpp>
Generic MPI reduce that reduces directly into an existing output object.
It calls the specialized mpi_reduce_into function.
T1 | Type to be reduced. |
T2 | Type to be reduced into. |
x_in | Object to be reduced. |
x_out | Object to be reduced into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Definition at line 140 of file generic_communication.hpp.
void mpi::reduce_range (R1 &&in_rg, R2 &&out_rg, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM)
#include <mpi/ranges.hpp>
Implementation of an MPI reduce for std::ranges::sized_range objects.
The behaviour of this function is as follows: it calls MPI_Reduce or MPI_Allreduce to reduce the elements in the input ranges into the output ranges on receiving ranks.
It throws an exception in case a call to the MPI C library fails.
R1 | std::ranges::sized_range type. |
R2 | std::ranges::sized_range type. |
in_rg | Range to be reduced. |
out_rg | Range to be reduced into. |
c | mpi::communicator. |
root | Rank of the root process. |
all | Should all processes receive the result of the reduction. |
op | MPI_Op used in the reduction. |
Definition at line 119 of file ranges.hpp.
decltype(auto) mpi::scatter (T &&x, mpi::communicator c = {}, int root = 0)
#include <mpi/generic_communication.hpp>
Generic MPI scatter.
If there is a specialized mpi_scatter for the given type, we call it. Otherwise, we call mpi::scatter_into with the given input object and a default constructed output object of type T.
T | Type to be scattered. |
x | Object to be scattered. |
c | mpi::communicator. |
root | Rank of the root process. |
Returns the result of the specialized mpi_scatter call.
Definition at line 161 of file generic_communication.hpp.
void mpi::scatter_into (T1 &&x_in, T2 &&x_out, communicator c = {}, int root = 0)
#include <mpi/generic_communication.hpp>
Generic MPI scatter that scatters directly into an existing output object.
It calls the specialized mpi_scatter_into function.
T1 | Type to be scattered. |
T2 | Type to be scattered into. |
x_in | Object to be scattered. |
x_out | Object to be scattered into. |
c | mpi::communicator. |
root | Rank of the root process. |
Definition at line 187 of file generic_communication.hpp.
void mpi::scatter_range (R1 &&in_rg, R2 &&out_rg, long scatter_size, communicator c = {}, int root = 0, long chunk_size = 1)
#include <mpi/ranges.hpp>
Implementation of an MPI scatter for mpi::MPICompatibleRange objects.
The behaviour of this function is as follows: it calls MPI_Scatterv to scatter the input range from the root process to the output ranges on all other processes.
By default, the input range is scattered as evenly as possible from the root process to all other processes in the communicator. To change that, the user can specify a chunk size which is used to divide the number of elements to be scattered into chunks of the specified size. Then, instead of single elements, the chunks are distributed evenly across the processes in the communicator.
It throws an exception in case a call to the MPI C library fails.
R1 | mpi::MPICompatibleRange type. |
R2 | mpi::MPICompatibleRange type. |
in_rg | Range to be scattered. |
out_rg | Range to be scattered into. |
scatter_size | Number of elements to be scattered. |
c | mpi::communicator. |
root | Rank of the root process. |
chunk_size | Size of the chunks to scatter. |
Definition at line 213 of file ranges.hpp.
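A sketch with the default chunk_size = 1 (same setup assumptions as in the first sketch above). The output range on each rank has to hold that rank's share of the scatter_size elements; the sizing below assumes the default even split, i.e. scatter_size / c.size() elements per rank plus one extra element on the first scatter_size % c.size() ranks, which is the split mpi::chunk_length is used for in the std::vector overload.

    #include <mpi/mpi.hpp>
    #include <vector>

    int main(int argc, char *argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      long const n = 40; // number of elements to scatter
      std::vector<double> in(world.rank() == 0 ? n : 0, 1.0);

      // Assumed even split (matching the default behaviour described above):
      // n / size elements per rank, plus one extra on the first n % size ranks.
      long const nranks = world.size(), r = world.rank();
      long const n_local = n / nranks + (r < n % nranks ? 1 : 0);

      std::vector<double> out(n_local);
      mpi::scatter_range(in, out, n, world, 0);
    }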