TRIQS/mpi 1.3.0
C++ interface to MPI
Collective MPI communication

Detailed Description

Generic and specialized implementations for a subset of collective MPI communications (broadcast, reduce, gather, scatter).

The generic functions (mpi::broadcast, mpi::reduce, mpi::scatter, ...) call their more specialized counterparts (e.g. mpi::mpi_broadcast, mpi::mpi_reduce, mpi::mpi_scatter, ...).

mpi provides implementations for several standard types, e.g. std::array, std::pair, std::string and std::vector, as well as for types that have a corresponding MPI datatype, i.e. for which a specialization of mpi::mpi_type has been defined.

Furthermore, there are several functions to simplify communicating generic, contiguous ranges: mpi::broadcast_range, mpi::gather_range, mpi::reduce_in_place_range, mpi::reduce_range and mpi::scatter_range.

Functions

template<typename T >
bool mpi::all_equal (T const &x, communicator c={})
 Checks if a given object is equal across all ranks in the given communicator.
 
template<typename T >
decltype(auto) mpi::all_gather (T &&x, communicator c={})
 Generic MPI all-gather.
 
template<typename T >
decltype(auto) mpi::all_reduce (T &&x, communicator c={}, MPI_Op op=MPI_SUM)
 Generic MPI all-reduce.
 
template<typename T >
void mpi::all_reduce_in_place (T &&x, communicator c={}, MPI_Op op=MPI_SUM)
 Generic MPI all-reduce in-place.
 
template<typename T >
void mpi::broadcast (T &&x, communicator c={}, int root=0)
 Generic MPI broadcast.
 
template<contiguous_sized_range R>
void mpi::broadcast_range (R &&rg, communicator c={}, int root=0)
 Implementation of an MPI broadcast for an mpi::contiguous_sized_range object.
 
template<typename T >
decltype(auto) mpi::gather (T &&x, mpi::communicator c={}, int root=0, bool all=false)
 Generic MPI gather.
 
template<contiguous_sized_range R1, contiguous_sized_range R2>
void mpi::gather_range (R1 &&in_rg, R2 &&out_rg, long out_size, communicator c={}, int root=0, bool all=false)
 Implementation of an MPI gather for an mpi::contiguous_sized_range.
 
template<typename T , std::size_t N>
void mpi::mpi_broadcast (std::array< T, N > &arr, communicator c={}, int root=0)
 Implementation of an MPI broadcast for a std::array.
 
template<typename T1 , typename T2 >
void mpi::mpi_broadcast (std::pair< T1, T2 > &p, communicator c={}, int root=0)
 Implementation of an MPI broadcast for a std::pair.
 
void mpi::mpi_broadcast (std::string &s, communicator c, int root)
 Implementation of an MPI broadcast for a std::string.
 
template<typename T >
void mpi::mpi_broadcast (std::vector< T > &v, communicator c={}, int root=0)
 Implementation of an MPI broadcast for a std::vector.
 
template<typename T >
requires (has_mpi_type<T>)
void mpi::mpi_broadcast (T &x, communicator c={}, int root=0)
 Implementation of an MPI broadcast for types that have a corresponding MPI datatype, i.e. for which a specialization of mpi::mpi_type has been defined.
 
std::string mpi::mpi_gather (std::string const &s, communicator c={}, int root=0, bool all=false)
 Implementation of an MPI gather for a std::string.
 
template<typename T >
auto mpi::mpi_gather (std::vector< T > const &v, communicator c={}, int root=0, bool all=false)
 Implementation of an MPI gather for a std::vector.
 
template<typename T , std::size_t N>
auto mpi::mpi_reduce (std::array< T, N > const &arr, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Implementation of an MPI reduce for a std::array.
 
template<typename T1 , typename T2 >
auto mpi::mpi_reduce (std::pair< T1, T2 > const &p, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Implementation of an MPI reduce for a std::pair.
 
template<typename T >
auto mpi::mpi_reduce (std::vector< T > const &v, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Implementation of an MPI reduce for a std::vector.
 
template<typename T >
requires (has_mpi_type<T>)
T mpi::mpi_reduce (T const &x, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Implementation of an MPI reduce for types that have a corresponding MPI datatype, i.e. for which a specialization of mpi::mpi_type has been defined.
 
template<typename T , std::size_t N>
void mpi::mpi_reduce_in_place (std::array< T, N > &arr, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Implementation of an in-place MPI reduce for a std::array.
 
template<typename T >
void mpi::mpi_reduce_in_place (std::vector< T > &v, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Implementation of an in-place MPI reduce for a std::vector.
 
template<typename T >
requires (has_mpi_type<T>)
void mpi::mpi_reduce_in_place (T &x, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Implementation of an in-place MPI reduce for types that have a corresponding MPI datatype, i.e. for which a specialization of mpi::mpi_type has been defined.
 
template<typename T >
auto mpi::mpi_scatter (std::vector< T > const &v, communicator c={}, int root=0)
 Implementation of an MPI scatter for a std::vector.
 
template<typename T >
decltype(auto) mpi::reduce (T &&x, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Generic MPI reduce.
 
template<typename T >
void mpi::reduce_in_place (T &&x, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Generic in-place MPI reduce.
 
template<contiguous_sized_range R>
void mpi::reduce_in_place_range (R &&rg, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Implementation of an in-place MPI reduce for an mpi::contiguous_sized_range object.
 
template<contiguous_sized_range R1, contiguous_sized_range R2>
void mpi::reduce_range (R1 &&in_rg, R2 &&out_rg, communicator c={}, int root=0, bool all=false, MPI_Op op=MPI_SUM)
 Implementation of an MPI reduce for an mpi::contiguous_sized_range.
 
template<typename T >
decltype(auto) mpi::scatter (T &&x, mpi::communicator c={}, int root=0)
 Generic MPI scatter.
 
template<contiguous_sized_range R1, contiguous_sized_range R2>
requires (std::same_as<std::ranges::range_value_t<R1>, std::ranges::range_value_t<R2>>)
void mpi::scatter_range (R1 &&in_rg, R2 &&out_rg, long in_size, communicator c={}, int root=0, long chunk_size=1)
 Implementation of an MPI scatter for an mpi::contiguous_sized_range.
 

Function Documentation

◆ all_equal()

template<typename T >
bool mpi::all_equal ( T const & x,
communicator c = {} )

#include <mpi/generic_communication.hpp>

Checks if a given object is equal across all ranks in the given communicator.

It requires that a specialized mpi_reduce exists for the given type T and that T is equality comparable as well as default constructible.

It makes two calls to mpi::all_reduce, one with MPI_MIN and the other with MPI_MAX, and compares their results.

Note
MPI_MIN and MPI_MAX need to make sense for the given type T.
Template Parameters
T - Type to be checked.
Parameters
x - Object to be equality compared.
c - mpi::communicator.
Returns
If the given object is equal on all ranks, it returns true. Otherwise, it returns false.

Definition at line 287 of file generic_communication.hpp.
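A minimal usage sketch (assuming the umbrella header mpi/mpi.hpp and the library's mpi::environment RAII wrapper; run with e.g. mpirun -n 4):

```cpp
#include <mpi/mpi.hpp>
#include <iostream>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator comm;

  // the same value on every rank
  int a = 42;
  // a different value on every rank (for comm.size() > 1)
  int b = comm.rank();

  // prints "true false" on every rank when run with more than one process
  std::cout << std::boolalpha << mpi::all_equal(a, comm) << " " << mpi::all_equal(b, comm) << std::endl;
}
```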

◆ all_gather()

template<typename T >
decltype(auto) mpi::all_gather ( T && x,
communicator c = {} )
inline

#include <mpi/generic_communication.hpp>

Generic MPI all-gather.

It simply calls mpi::gather with all = true.

Definition at line 202 of file generic_communication.hpp.

◆ all_reduce()

template<typename T >
decltype(auto) mpi::all_reduce ( T && x,
communicator c = {},
MPI_Op op = MPI_SUM )
inline

#include <mpi/generic_communication.hpp>

Generic MPI all-reduce.

It simply calls mpi::reduce with all = true.

Definition at line 186 of file generic_communication.hpp.

◆ all_reduce_in_place()

template<typename T >
void mpi::all_reduce_in_place ( T && x,
communicator c = {},
MPI_Op op = MPI_SUM )
inline

#include <mpi/generic_communication.hpp>

Generic MPI all-reduce in-place.

It simply calls mpi::reduce_in_place with all = true.

Definition at line 194 of file generic_communication.hpp.

◆ broadcast()

template<typename T >
void mpi::broadcast ( T && x,
communicator c = {},
int root = 0 )

#include <mpi/generic_communication.hpp>

Generic MPI broadcast.

If mpi::has_env is true, this function calls the specialized mpi_broadcast function for the given object, otherwise it does nothing.

Template Parameters
T - Type to be broadcasted.
Parameters
x - Object to be broadcasted.
c - mpi::communicator.
root - Rank of the root process.

Definition at line 76 of file generic_communication.hpp.
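A minimal usage sketch (assuming the umbrella header mpi/mpi.hpp and the library's mpi::environment RAII wrapper):

```cpp
#include <mpi/mpi.hpp>
#include <vector>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator comm;

  // initialize the data on the root process only
  std::vector<int> v;
  if (comm.rank() == 0) v = {1, 2, 3};

  // the std::vector specialization first broadcasts the size, resizes the
  // vector on non-root ranks and then broadcasts the elements, so that
  // afterwards every rank holds {1, 2, 3}
  mpi::broadcast(v, comm, 0);
}
```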

◆ broadcast_range()

template<contiguous_sized_range R>
void mpi::broadcast_range ( R && rg,
communicator c = {},
int root = 0 )

#include <mpi/ranges.hpp>

Implementation of an MPI broadcast for an mpi::contiguous_sized_range object.

If mpi::has_mpi_type is true for the value type of the range, then the range is broadcasted using a simple MPI_Bcast. Otherwise, the generic mpi::broadcast is called for each element of the range.

It throws an exception in case a call to the MPI C library fails and it expects that the sizes of the ranges are equal across all processes.

If the ranges are empty or if mpi::has_env is false or if the communicator size is < 2, it does nothing.

Note
It is recommended to use the generic mpi::broadcast for supported types, e.g. std::vector, std::array or std::string. It is the user's responsibility to ensure that ranges have the correct sizes.
// create a vector on all ranks
auto vec = std::vector<int>(5);
if (comm.rank() == 0) {
// on rank 0, initialize the vector and broadcast the first 3 elements
vec = {1, 2, 3, 0, 0};
mpi::broadcast_range(std::span{vec.data(), 3}, comm);
} else {
// on other ranks, broadcast to the last 3 elements of the vector
mpi::broadcast_range(std::span{vec.data() + 2, 3}, comm);
}
// output result
for (auto x : vec) std::cout << x << " ";
std::cout << std::endl;

Output (with 4 processes):

1 2 3 0 0
0 0 1 2 3
0 0 1 2 3
0 0 1 2 3
Template Parameters
R - mpi::contiguous_sized_range type.
Parameters
rg - Range to broadcast.
c - mpi::communicator.
root - Rank of the root process.

Definition at line 93 of file ranges.hpp.

◆ gather()

template<typename T >
decltype(auto) mpi::gather ( T && x,
mpi::communicator c = {},
int root = 0,
bool all = false )
inline

#include <mpi/generic_communication.hpp>

Generic MPI gather.

If mpi::has_env is true or if the return type of the specialized mpi_gather is lazy, this function calls the specialized mpi_gather function for the given object. Otherwise, it simply converts the input object to the output type mpi_gather would return.

Template Parameters
T - Type to be gathered.
Parameters
x - Object to be gathered.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the gather.
Returns
The result of the specialized mpi_gather call.

Definition at line 169 of file generic_communication.hpp.
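A minimal usage sketch (assuming the umbrella header mpi/mpi.hpp and the library's mpi::environment RAII wrapper):

```cpp
#include <mpi/mpi.hpp>
#include <vector>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator comm;

  // every rank contributes a single element
  std::vector<int> v{comm.rank()};

  // on the root process, w contains {0, 1, ..., comm.size() - 1}
  auto w = mpi::gather(v, comm, 0);
}
```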

◆ gather_range()

template<contiguous_sized_range R1, contiguous_sized_range R2>
void mpi::gather_range ( R1 && in_rg,
R2 && out_rg,
long out_size,
communicator c = {},
int root = 0,
bool all = false )

#include <mpi/ranges.hpp>

Implementation of an MPI gather for an mpi::contiguous_sized_range.

If mpi::has_mpi_type is true for the value type of the input ranges, then the ranges are gathered using a simple MPI_Gatherv or MPI_Allgatherv. Otherwise, each process broadcasts its elements to all other processes, which means that all == true is required in this case.

It throws an exception in case a call to the MPI C library fails and it expects that the sizes of the input ranges add up to the given size of the output range and that the output ranges have the correct size on receiving processes.

If the input ranges are all empty, it does nothing. If mpi::has_env is false or if the communicator size is < 2, it simply copies the input range to the output range.

Note
It is recommended to use the generic mpi::gather for supported types, e.g. std::vector and std::string. It is the user's responsibility to ensure that the ranges have the correct sizes.
// create input and output vectors on all ranks
auto in_vec = std::vector<int>{0, 1, 2, 3, 4};
auto out_vec = std::vector<int>(3 * comm.size(), 0);
// gather the middle elements of the input vectors from all ranks on rank 0
mpi::gather_range(std::span{in_vec.data() + 1, 3}, out_vec, 3 * comm.size(), comm);
// output result
for (auto x : out_vec) std::cout << x << " ";
std::cout << std::endl;

Output (with 2 processes):

0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
1 2 3 1 2 3 1 2 3 1 2 3
Template Parameters
R1 - mpi::contiguous_sized_range type.
R2 - mpi::contiguous_sized_range type.
Parameters
in_rg - Range to gather.
out_rg - Range to gather into.
out_size - Size of the output range on receiving processes (must also be given on non-receiving ranks).
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the gather.

Definition at line 412 of file ranges.hpp.

◆ mpi_broadcast() [1/5]

template<typename T , std::size_t N>
void mpi::mpi_broadcast ( std::array< T, N > & arr,
communicator c = {},
int root = 0 )

#include <mpi/array.hpp>

Implementation of an MPI broadcast for a std::array.

It simply calls mpi::broadcast_range with the input array.

Template Parameters
T - Value type of the array.
N - Size of the array.
Parameters
arr - std::array to broadcast.
c - mpi::communicator.
root - Rank of the root process.

Definition at line 51 of file array.hpp.

◆ mpi_broadcast() [2/5]

template<typename T1 , typename T2 >
void mpi::mpi_broadcast ( std::pair< T1, T2 > & p,
communicator c = {},
int root = 0 )

#include <mpi/pair.hpp>

Implementation of an MPI broadcast for a std::pair.

Simply calls the generic mpi::broadcast for the first and second element of the pair.

Template Parameters
T1 - Type of the first element of the pair.
T2 - Type of the second element of the pair.
Parameters
p - std::pair to broadcast.
c - mpi::communicator.
root - Rank of the root process.

Definition at line 48 of file pair.hpp.

◆ mpi_broadcast() [3/5]

void mpi::mpi_broadcast ( std::string & s,
communicator c,
int root )
inline

#include <mpi/string.hpp>

Implementation of an MPI broadcast for a std::string.

It first broadcasts the size of the string from the root process to all other processes, then resizes the string on all non-root processes and calls mpi::broadcast_range with the (resized) input string.

Parameters
s - std::string to broadcast.
c - mpi::communicator.
root - Rank of the root process.

Definition at line 47 of file string.hpp.

◆ mpi_broadcast() [4/5]

template<typename T >
void mpi::mpi_broadcast ( std::vector< T > & v,
communicator c = {},
int root = 0 )

#include <mpi/vector.hpp>

Implementation of an MPI broadcast for a std::vector.

It first broadcasts the size of the vector from the root process to all other processes, then resizes the vector on all non-root processes and calls mpi::broadcast_range with the (resized) input vector.

Template Parameters
T - Value type of the vector.
Parameters
v - std::vector to broadcast.
c - mpi::communicator.
root - Rank of the root process.

Definition at line 51 of file vector.hpp.

◆ mpi_broadcast() [5/5]

template<typename T >
requires (has_mpi_type<T>)
void mpi::mpi_broadcast ( T & x,
communicator c = {},
int root = 0 )

#include <mpi/generic_communication.hpp>

Implementation of an MPI broadcast for types that have a corresponding MPI datatype, i.e. for which a specialization of mpi::mpi_type has been defined.

It throws an exception in case a call to the MPI C library fails.

Template Parameters
T - Type to be broadcasted.
Parameters
x - Object to be broadcasted.
c - mpi::communicator.
root - Rank of the root process.

Definition at line 219 of file generic_communication.hpp.

◆ mpi_gather() [1/2]

std::string mpi::mpi_gather ( std::string const & s,
communicator c = {},
int root = 0,
bool all = false )
inline

#include <mpi/string.hpp>

Implementation of an MPI gather for a std::string.

It first all-reduces the sizes of the input string from all processes and then calls mpi::gather_range.

Parameters
s - std::string to gather.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result.
Returns
std::string containing the result of the gather operation.

Definition at line 65 of file string.hpp.
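A minimal usage sketch via the recommended generic mpi::gather, which dispatches to this specialization (assuming the umbrella header mpi/mpi.hpp and the library's mpi::environment RAII wrapper):

```cpp
#include <mpi/mpi.hpp>
#include <iostream>
#include <string>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator comm;

  // each rank contributes its rank as a single character
  std::string s(1, static_cast<char>('0' + comm.rank()));

  // concatenate the strings from all ranks on rank 0, e.g. "0123" with 4 processes
  auto res = mpi::gather(s, comm, 0);
  if (comm.rank() == 0) std::cout << res << std::endl;
}
```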

◆ mpi_gather() [2/2]

template<typename T >
auto mpi::mpi_gather ( std::vector< T > const & v,
communicator c = {},
int root = 0,
bool all = false )

#include <mpi/vector.hpp>

Implementation of an MPI gather for a std::vector.

It first all-reduces the sizes of the input vectors from all processes and then calls mpi::gather_range.

Template Parameters
T - Value type of the vector.
Parameters
v - std::vector to gather.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result.
Returns
std::vector containing the result of the gather operation.

Definition at line 126 of file vector.hpp.

◆ mpi_reduce() [1/4]

template<typename T , std::size_t N>
auto mpi::mpi_reduce ( std::array< T, N > const & arr,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )

#include <mpi/array.hpp>

Implementation of an MPI reduce for a std::array.

It simply calls mpi::reduce_range with the given input array and an empty array of the same size.

Template Parameters
T - Value type of the array.
N - Size of the array.
Parameters
arr - std::array to reduce.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.
Returns
std::array containing the result of each individual reduction.

Definition at line 86 of file array.hpp.

◆ mpi_reduce() [2/4]

template<typename T1 , typename T2 >
auto mpi::mpi_reduce ( std::pair< T1, T2 > const & p,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )

#include <mpi/pair.hpp>

Implementation of an MPI reduce for a std::pair.

Simply calls the generic mpi::reduce for the first and second element of the pair.

Template Parameters
T1 - Type of the first element of the pair.
T2 - Type of the second element of the pair.
Parameters
p - std::pair to be reduced.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.
Returns
std::pair<T1, T2> containing the result of each individual reduction.

Definition at line 68 of file pair.hpp.

◆ mpi_reduce() [3/4]

template<typename T >
auto mpi::mpi_reduce ( std::vector< T > const & v,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )

#include <mpi/vector.hpp>

Implementation of an MPI reduce for a std::vector.

It simply calls mpi::reduce_range with the given input vector and an empty vector of the same size.

Template Parameters
T - Value type of the vector.
Parameters
v - std::vector to reduce.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.
Returns
std::vector containing the result of each individual reduction.

Definition at line 88 of file vector.hpp.

◆ mpi_reduce() [4/4]

template<typename T >
requires (has_mpi_type<T>)
T mpi::mpi_reduce ( T const & x,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )

#include <mpi/generic_communication.hpp>

Implementation of an MPI reduce for types that have a corresponding MPI datatype, i.e. for which a specialization of mpi::mpi_type has been defined.

It throws an exception in case a call to the MPI C library fails.

Template Parameters
T - Type to be reduced.
Parameters
x - Object to be reduced.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.
Returns
The result of the reduction.

Definition at line 239 of file generic_communication.hpp.
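A minimal usage sketch via the recommended generic mpi::reduce and mpi::all_reduce, which dispatch to this specialization for types with an MPI datatype (assuming the umbrella header mpi/mpi.hpp and the library's mpi::environment RAII wrapper):

```cpp
#include <mpi/mpi.hpp>
#include <iostream>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator comm;

  // sum of all ranks, received on the root process only
  int sum = mpi::reduce(comm.rank(), comm, 0);

  // maximum rank, received on all processes
  int max = mpi::all_reduce(comm.rank(), comm, MPI_MAX);

  if (comm.rank() == 0) std::cout << sum << " " << max << std::endl;
}
```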

◆ mpi_reduce_in_place() [1/3]

template<typename T , std::size_t N>
void mpi::mpi_reduce_in_place ( std::array< T, N > & arr,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )

#include <mpi/array.hpp>

Implementation of an in-place MPI reduce for a std::array.

It simply calls mpi::reduce_in_place_range with the given input array.

Template Parameters
T - Value type of the array.
N - Size of the array.
Parameters
arr - std::array to reduce.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.

Definition at line 67 of file array.hpp.

◆ mpi_reduce_in_place() [2/3]

template<typename T >
void mpi::mpi_reduce_in_place ( std::vector< T > & v,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )

#include <mpi/vector.hpp>

Implementation of an in-place MPI reduce for a std::vector.

It simply calls mpi::reduce_in_place_range with the given input vector.

Template Parameters
T - Value type of the vector.
Parameters
v - std::vector to reduce.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.

Definition at line 70 of file vector.hpp.

◆ mpi_reduce_in_place() [3/3]

template<typename T >
requires (has_mpi_type<T>)
void mpi::mpi_reduce_in_place ( T & x,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )

#include <mpi/generic_communication.hpp>

Implementation of an in-place MPI reduce for types that have a corresponding MPI datatype, i.e. for which a specialization of mpi::mpi_type has been defined.

It throws an exception in case a call to the MPI C library fails.

Template Parameters
T - Type to be reduced.
Parameters
x - Object to be reduced.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.

Definition at line 265 of file generic_communication.hpp.

◆ mpi_scatter()

template<typename T >
auto mpi::mpi_scatter ( std::vector< T > const & v,
communicator c = {},
int root = 0 )

#include <mpi/vector.hpp>

Implementation of an MPI scatter for a std::vector.

It first broadcasts the size of the vector from the root process to all other processes and then calls mpi::scatter_range.

Template Parameters
T - Value type of the vector.
Parameters
v - std::vector to scatter.
c - mpi::communicator.
root - Rank of the root process.
Returns
std::vector containing the result of the scatter operation.

Definition at line 106 of file vector.hpp.
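A minimal usage sketch via the recommended generic mpi::scatter, which dispatches to this specialization (assuming the umbrella header mpi/mpi.hpp and the library's mpi::environment RAII wrapper):

```cpp
#include <mpi/mpi.hpp>
#include <vector>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator comm;

  // initialize the full vector on the root process only
  std::vector<int> v;
  if (comm.rank() == 0) v = {1, 2, 3, 4, 5, 6};

  // each rank receives its chunk of the vector, split as evenly as possible,
  // e.g. {1, 2, 3} on rank 0 and {4, 5, 6} on rank 1 with 2 processes
  auto chunk = mpi::scatter(v, comm, 0);
}
```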

◆ reduce()

template<typename T >
decltype(auto) mpi::reduce ( T && x,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )
inline

#include <mpi/generic_communication.hpp>

Generic MPI reduce.

If mpi::has_env is true or if the return type of the specialized mpi_reduce is lazy, this function calls the specialized mpi_reduce function for the given object. Otherwise, it simply converts the input object to the output type mpi_reduce would return.

Template Parameters
T - Type to be reduced.
Parameters
x - Object to be reduced.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.
Returns
The result of the specialized mpi_reduce call.

Definition at line 97 of file generic_communication.hpp.

◆ reduce_in_place()

template<typename T >
void mpi::reduce_in_place ( T && x,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )
inline

#include <mpi/generic_communication.hpp>

Generic in-place MPI reduce.

If mpi::has_env is true, this function calls the specialized mpi_reduce_in_place function for the given object. Otherwise, it does nothing.

Template Parameters
T - Type to be reduced.
Parameters
x - Object to be reduced.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.

Definition at line 124 of file generic_communication.hpp.
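A minimal usage sketch (assuming the umbrella header mpi/mpi.hpp and the library's mpi::environment RAII wrapper):

```cpp
#include <mpi/mpi.hpp>
#include <vector>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator comm;

  // every rank contributes the same vector
  std::vector<int> v{1, 2, 3};

  // element-wise sum over all ranks: on the root process, v becomes
  // {N, 2N, 3N} with N = comm.size(); v is unchanged on the other ranks
  mpi::reduce_in_place(v, comm, 0);
}
```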

◆ reduce_in_place_range()

template<contiguous_sized_range R>
void mpi::reduce_in_place_range ( R && rg,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )

#include <mpi/ranges.hpp>

Implementation of an in-place MPI reduce for an mpi::contiguous_sized_range object.

If mpi::has_mpi_type is true for the value type of the range, then the range is reduced using a simple MPI_Reduce or MPI_Allreduce with MPI_IN_PLACE. Otherwise, the specialized mpi_reduce_in_place is called for each element in the range.

It throws an exception in case a call to the MPI C library fails and it expects that the sizes of the ranges are equal across all processes.

If the ranges are empty or if mpi::has_env is false or if the communicator size is < 2, it does nothing.

Note
It is recommended to use the generic mpi::reduce_in_place and mpi::all_reduce_in_place for supported types, e.g. std::vector or std::array. It is the user's responsibility to ensure that ranges have the correct sizes.
// create a vector on all ranks
auto vec = std::vector<int>{0, 1, 2, 3, 4};
// in-place reduce the middle elements only on rank 0
mpi::reduce_in_place_range(std::span{vec.data() + 1, 3}, comm);
// output result
for (auto x : vec) std::cout << x << " ";
std::cout << std::endl;

Output (with 4 processes):

0 1 2 3 4
0 1 2 3 4
0 1 2 3 4
0 4 8 12 4
Template Parameters
R - mpi::contiguous_sized_range type.
Parameters
rg - Range to reduce.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.

Definition at line 155 of file ranges.hpp.

◆ reduce_range()

template<contiguous_sized_range R1, contiguous_sized_range R2>
void mpi::reduce_range ( R1 && in_rg,
R2 && out_rg,
communicator c = {},
int root = 0,
bool all = false,
MPI_Op op = MPI_SUM )

#include <mpi/ranges.hpp>

Implementation of an MPI reduce for an mpi::contiguous_sized_range.

If mpi::has_mpi_type is true for the value type of the range, then the range is reduced using a simple MPI_Reduce or MPI_Allreduce. Otherwise, the specialized mpi_reduce is called for each element in the range.

It throws an exception in case a call to the MPI C library fails and it expects that the sizes of the input ranges are equal across all processes and that they are equal to the size of the output range on receiving processes.

If the input ranges are empty, it does nothing. If mpi::has_env is false or if the communicator size is < 2, it simply copies the input range to the output range.

Note
It is recommended to use the generic mpi::reduce and mpi::all_reduce for supported types, e.g. std::vector or std::array. It is the user's responsibility to ensure that ranges have the correct sizes.
// create input and output vectors on all ranks
auto in_vec = std::vector<int>{0, 1, 2, 3, 4};
auto out_vec = std::vector<int>(in_vec.size(), 0);
// allreduce the middle elements of the input vector to the last elements of the output vector
mpi::reduce_range(std::span{in_vec.data() + 1, 3}, std::span{out_vec.data() + 2, 3}, comm, 0, true);
// output result
for (auto x : out_vec) std::cout << x << " ";
std::cout << std::endl;

Output (with 4 processes):

0 0 4 8 12
0 0 4 8 12
0 0 4 8 12
0 0 4 8 12
Template Parameters
R1 - mpi::contiguous_sized_range type.
R2 - mpi::contiguous_sized_range type.
Parameters
in_rg - Range to reduce.
out_rg - Range to reduce into.
c - mpi::communicator.
root - Rank of the root process.
all - Should all processes receive the result of the reduction.
op - MPI_Op used in the reduction.

Definition at line 226 of file ranges.hpp.

◆ scatter()

template<typename T >
decltype(auto) mpi::scatter ( T && x,
mpi::communicator c = {},
int root = 0 )
inline

#include <mpi/generic_communication.hpp>

Generic MPI scatter.

If mpi::has_env is true or if the return type of the specialized mpi_scatter is lazy, this function calls the specialized mpi_scatter function for the given object. Otherwise, it simply converts the input object to the output type mpi_scatter would return.

Template Parameters
T - Type to be scattered.
Parameters
x - Object to be scattered.
c - mpi::communicator.
root - Rank of the root process.
Returns
The result of the specialized mpi_scatter call.

Definition at line 142 of file generic_communication.hpp.

◆ scatter_range()

template<contiguous_sized_range R1, contiguous_sized_range R2>
requires (std::same_as<std::ranges::range_value_t<R1>, std::ranges::range_value_t<R2>>)
void mpi::scatter_range ( R1 && in_rg,
R2 && out_rg,
long in_size,
communicator c = {},
int root = 0,
long chunk_size = 1 )

#include <mpi/ranges.hpp>

Implementation of an MPI scatter for an mpi::contiguous_sized_range.

If mpi::has_mpi_type is true for the value type of the range, then the range is scattered as evenly as possible across the processes in the communicator using a simple MPI_Scatterv. Otherwise an exception is thrown.

The user can specify a chunk size which is used to divide the input range into chunks of the specified size. The chunks are then distributed as evenly as possible across the processes in the communicator. The size of the input range is required to be a multiple of the given chunk size, otherwise an exception is thrown.

It throws an exception in case a call to the MPI C library fails and it expects that the output ranges have the correct size and that they add up to the size of the input range on the root process.

If the input range is empty on root, it does nothing. If mpi::has_env is false or if the communicator size is < 2, it simply copies the input range to the output range.

Note
It is recommended to use the generic mpi::scatter for supported types, e.g. std::vector. It is the user's responsibility to ensure that the ranges have the correct sizes (mpi::chunk_length can be useful to do that).
// create input and output vectors on all ranks
auto in_vec = std::vector<int>{};
if (comm.rank() == 0) in_vec = {0, 1, 2, 3, 4, 5, 6, 7};
auto out_vec = std::vector<int>(mpi::chunk_length(5, comm.size(), comm.rank()), 0);
// scatter the middle elements of the input vector from rank 0 to all ranks
mpi::scatter_range(std::span{in_vec.data() + 1, 5}, out_vec, 5, comm);
// output result
for (auto x : out_vec) std::cout << x << " ";
std::cout << std::endl;

Output (with 2 processes):

4 5
1 2 3
Template Parameters
R1 - mpi::contiguous_sized_range type.
R2 - mpi::contiguous_sized_range type.
Parameters
in_rg - Range to scatter.
out_rg - Range to scatter into.
in_size - Size of the input range on root (must also be given on non-root ranks).
c - mpi::communicator.
root - Rank of the root process.
chunk_size - Size of the chunks to scatter.

Definition at line 317 of file ranges.hpp.