NanoStructures
1.0
DMFT solver for layered, strongly correlated nanostructures
The MPI singleton is responsible for data exchange between compute nodes via OpenMPI.
#include <openmpi.h>
Public Member Functions

    int getRank () const
        returns the rank of the node.
    int getSize () const
        returns the total number of nodes.
    void send (int target, int source, math::CFunction &f)
        sends a copy of the math::CFunction from node source to node target.
    void send (int target, int source, double &v)
        sends a copy of the double value v from node source to node target.
    void combine (math::CFunction &f)
        assembles the partial information of the individual compute nodes into a full copy on each one.
    void sync (math::CFunction &f, int master)
        distributes the math::CFunction instance f on node getRank()==master to all other nodes.
    void sync (double &v, int master)
        distributes the double value v on node getRank()==master to all other nodes.
    void sync ()
        blocks until all nodes have reached this function call.

Static Public Member Functions

    static OpenMPI & getInstance ()
        returns the only instance of the OpenMPI class.

Protected Attributes

    int m_rank
    int m_size

Static Protected Attributes

    static std::auto_ptr< OpenMPI > m_ptr = std::auto_ptr<OpenMPI>(0)

Friends

    class std::auto_ptr< OpenMPI >
Detailed Description

The MPI singleton is responsible for data exchange between compute nodes via OpenMPI.
First and foremost, the MPI class provides convenience functions for exchanging math::CFunction instances between nodes. Secondly, combine() makes it possible to reassemble a function that has been calculated jointly by the participating compute nodes. Third, it provides barrier functions to synchronize the nodes.
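A minimal usage sketch of the typical call sequence (not part of the generated documentation; it assumes only the members listed on this page and the header shown above):

    #include <openmpi.h>

    int main()
    {
        // first access constructs the singleton and initializes MPI
        mpi::OpenMPI &comm = mpi::OpenMPI::getInstance();

        int rank = comm.getRank();   // id of this compute node
        int size = comm.getSize();   // total number of compute nodes

        // ... per-node work, distributed over `size` nodes ...

        comm.sync();                 // barrier: wait until all nodes arrive here
        return 0;
    }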
Member Function Documentation

void mpi::OpenMPI::combine ( math::CFunction & f )
assembles the partial information of the individual compute nodes into a full copy on each one.

Typically (in this program), the calculation of a discretized function \(f(x_i)\) is parallelized by assigning the evaluations for the individual arguments \(x_i\) to the compute nodes in a round-robin fashion. Compute node \(p\) therefore holds the data points \( f(x_p), f(x_{p+N}), f(x_{p+2N}), \dots \), with \(N\) the total number of compute nodes. This function reassembles the complete function by collecting all partial information on the master node, rebuilding the full function and distributing it to all nodes. The call blocks until the operation is complete.
Parameters
    [in,out]  f  data contribution (in) / storage for assembled data (out)
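An illustrative sketch of this round-robin pattern. The accessors getSize(), getX() and set() on math::CFunction are hypothetical placeholders (its interface is not documented on this page), and evaluate() stands for the per-point calculation:

    #include <openmpi.h>

    void evaluateInParallel(math::CFunction &f)
    {
        mpi::OpenMPI &comm = mpi::OpenMPI::getInstance();
        const int p = comm.getRank();   // this node
        const int N = comm.getSize();   // total number of nodes

        // each node evaluates every N-th data point, starting at its own rank
        for (int i = p; i < f.getSize(); i += N)   // hypothetical f.getSize()
            f.set(i, evaluate(f.getX(i)));         // hypothetical accessors

        // gather the partial results; afterwards every node holds the full f
        comm.combine(f);
    }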
OpenMPI & mpi::OpenMPI::getInstance ( )    [static]
returns the only instance of the OpenMPI class.
If OpenMPI has not been accessed previously, an instance is created. The auto_ptr ensures that the object is properly destructed when the application closes. OpenMPI is initialized when the object is first constructed.
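For illustration (an assumption about usage, not additional API): repeated calls yield the same object, and no explicit setup or teardown call is needed:

    mpi::OpenMPI &a = mpi::OpenMPI::getInstance(); // first call: constructs the object, initializes OpenMPI
    mpi::OpenMPI &b = mpi::OpenMPI::getInstance(); // same instance as a
    // no explicit cleanup: the auto_ptr member destroys the instance at program exit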
int mpi::OpenMPI::getRank ( ) const    [inline]
returns the rank of the node.
int mpi::OpenMPI::getSize ( ) const    [inline]
returns the total number of nodes.
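A small illustration of how rank and size are typically used together (the choice of node 0 as the reporting node is an assumption of this sketch):

    #include <openmpi.h>
    #include <iostream>

    void reportLayout()
    {
        mpi::OpenMPI &comm = mpi::OpenMPI::getInstance();
        if (comm.getRank() == 0)   // let only one node print
            std::cout << "running on " << comm.getSize() << " nodes" << std::endl;
    }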
void mpi::OpenMPI::send ( int target, int source, math::CFunction & f )
sends a copy of the math::CFunction from node source to node target.
If the return value of getRank() equals neither source nor target, the function does nothing. For getRank()==source, a copy of the math::CFunction instance referenced by f is sent to node target. On node target, the contents of the math::CFunction instance referenced by f are replaced with the received data. The call blocks until the operation is complete.
Parameters
    [in]      target  target node
    [in]      source  source node
    [in,out]  f       data to send (source) / storage location (target)
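A hedged example of the calling convention: every node executes the same call, and only source and target take part. It assumes math::CFunction is default-constructible, which is not stated on this page; the node numbers are chosen for illustration only:

    mpi::OpenMPI &comm = mpi::OpenMPI::getInstance();
    math::CFunction f;                         // assumes a default constructor exists
    if (comm.getRank() == 1) {
        // ... fill f on the source node ...
    }
    comm.send(0 /*target*/, 1 /*source*/, f);  // no-op on ranks other than 0 and 1
    // node 0 now holds the copy received from node 1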
void mpi::OpenMPI::send ( int target, int source, double & v )
sends a copy of the double value v from node source to node target.
If the return value of getRank() equals neither source nor target, the function does nothing. For getRank()==source, the double value v is sent to node target. On node target, the double value referenced by v is replaced with the received data. The call blocks until the operation is complete.
Parameters
    [in]      target  target node
    [in]      source  source node
    [in,out]  v       data to send (source) / storage location (target)
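A sketch of the same pattern for a scalar (node numbers and the value are chosen for illustration only):

    mpi::OpenMPI &comm = mpi::OpenMPI::getInstance();
    double value = 0.0;
    if (comm.getRank() == 0)
        value = 42.0;                              // known only on the source node
    comm.send(1 /*target*/, 0 /*source*/, value);  // blocks on nodes 0 and 1
    // node 1 now holds value == 42.0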
void mpi::OpenMPI::sync ( math::CFunction & f, int master )
distributes the math::CFunction instance f on node getRank()==master to all other nodes.
The call blocks until the operation is complete.
Parameters
    [in,out]  f       data to send (master) / storage location (others)
    [in]      master  node with master copy
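A sketch of a typical broadcast, assuming node 0 holds the master copy and that math::CFunction is default-constructible (not stated on this page):

    mpi::OpenMPI &comm = mpi::OpenMPI::getInstance();
    math::CFunction g;                 // assumes a default constructor exists
    if (comm.getRank() == 0) {
        // ... compute g on the master node ...
    }
    comm.sync(g, 0);                   // blocking broadcast from node 0
    // every node now holds an identical copy of g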
void mpi::OpenMPI::sync ( double & v, int master )
distributes the double value v on node getRank()==master to all other nodes.
The call blocks until the operation is complete.
Parameters
    [in,out]  v       data to send (master) / storage location (others)
    [in]      master  node with master copy
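And the scalar counterpart, again assuming node 0 as the master (the value is illustrative only):

    mpi::OpenMPI &comm = mpi::OpenMPI::getInstance();
    double mu = 0.0;
    if (comm.getRank() == 0)
        mu = -1.5;                     // value determined on the master only
    comm.sync(mu, 0);                  // blocking broadcast from node 0
    // mu == -1.5 on every node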