MPI

group MPI

The members of this group are parallelized for distributed memory using the MPI library.

template<typename T, typename Mesh, Cubism::EntityType Entity = Cubism::EntityType::Cell, size_t RANK = 0, typename UserState = Block::FieldState, template<typename> class Alloc = AlignedBlockAllocator>
class CartesianMPI : public Cubism::Grid::Cartesian<T, Mesh, Entity, RANK, UserState, Alloc>
#include <CartesianMPI.h>

Cartesian MPI block (tensor) field.

Cartesian topology composed of block fields (see Field.h) for the specified entity type. In contrast to an individual block field, this class manages a structure of arrays (SoA) memory layout for all blocks in the rank-local Cartesian topology rather than for each block individually. See the Cartesian.h grid section for the non-distributed variant of this class as well as the UserState extension.

Template Parameters
  • T: Field data type

  • Mesh: Mesh type to be associated with fields

  • Entity: Entity type

  • RANK: Rank of (tensor) fields

  • UserState: Field state type (user extension; see the Cartesian.h grid section)

  • Alloc: Allocator for field data
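
A minimal usage sketch follows. The StructuredUniform mesh alias, the constructor argument list (process topology, rank-local block count, cells per block, communicator), and the block-field iteration are assumptions based on the description above, not verified API:

    #include <mpi.h>
    #include <CartesianMPI.h>
    #include <StructuredUniform.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        // A 3D uniform mesh with double-precision data; the defaults of
        // CartesianMPI yield a scalar (RANK = 0) cell-centered field.
        using Mesh = Cubism::Mesh::StructuredUniform<double, 3>;
        using Grid = Cubism::Grid::CartesianMPI<double, Mesh>;
        using MIndex = typename Mesh::MultiIndex; // assumed alias

        // Hypothetical topology: 2x2x2 process grid, 4x4x4 blocks per
        // rank, 16^3 cells per block (argument order is an assumption).
        const MIndex nprocs(2);
        const MIndex nblocks(4);
        const MIndex block_cells(16);
        Grid grid(nprocs, nblocks, block_cells, MPI_COMM_WORLD);

        // Iterate over the rank-local block fields that are managed in
        // the SoA layout described above (iteration interface assumed).
        for (auto bf : grid) {
            // ... initialize or process the block field referenced by bf
        }

        MPI_Finalize();
        return 0;
    }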

class Histogram : public Cubism::Util::Sampler
#include <Histogram.h>

MPI profiling using histograms.

Collects samples for a profiled quantity of interest on individual ranks. Can be used to detect inhomogeneities among MPI ranks.
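
A hypothetical use is sketched below; the constructor taking a communicator and a name, and the Sampler-style seedSample()/collectSample() calls, are assumptions and not verified against the header:

    #include <mpi.h>
    #include <Histogram.h>

    // Sketch: sample the runtime of a compute kernel on every rank so
    // that the resulting per-rank distributions expose load imbalance.
    void profileKernel(MPI_Comm comm)
    {
        Cubism::Util::Histogram histogram(comm, "kernel"); // assumed constructor

        for (int step = 0; step < 100; ++step) {
            histogram.seedSample();            // assumed: start a sample
            // ... rank-local kernel work ...
            histogram.collectSample("kernel"); // assumed: record the sample
        }
        // Consolidation of the per-rank samples into a histogram is
        // assumed to happen at destruction or via an explicit call.
    }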

class Profiler : public Cubism::Util::Sampler
#include <Profiler.h>

Runtime profiler.

Collects runtime samples for a code section enclosed by the push() and pop() methods.
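
The push()/pop() pairing from the description translates to something like the following sketch; only push() and pop() are taken from the description above, while the default constructor and the reporting behavior are assumptions:

    #include <Profiler.h>

    void run()
    {
        Cubism::Util::Profiler prof; // assumed default constructor

        prof.push("rhs");    // start timing the section named "rhs"
        // ... compute right-hand side ...
        prof.pop();          // stop timing; sample is recorded under "rhs"

        prof.push("update"); // sections can be sampled repeatedly ...
        // ... update solution ...
        prof.pop();          // ... accumulating one sample per push/pop pair

        // Reporting of the collected samples is assumed to be provided
        // by the Sampler base class or to occur at destruction.
    }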