MPI
- group MPI
The members of this group are parallelized using the distributed memory model implemented by the MPI library.
- template<typename T, typename Mesh, Cubism::EntityType Entity = Cubism::EntityType::Cell, size_t RANK = 0, typename UserState = Block::FieldState, template<typename> class Alloc = AlignedBlockAllocator>
class CartesianMPI : public Cubism::Grid::Cartesian<T, Mesh, Entity, RANK, UserState, Alloc>
#include <CartesianMPI.h>
Cartesian MPI block (tensor) field.
Cartesian topology composed of block fields (Field.h) for the specified entity type. As opposed to an individual block field, this class manages a structure of arrays (SoA) memory layout for all the blocks in the rank-local Cartesian topology instead of just individual blocks. See the Cartesian.h grid section for a non-distributed variant of this class as well as the UserState extension. A hedged usage sketch is given after the template parameter list below.
- Template Parameters
T: Field data type
Mesh: Mesh type to be associated with fields
Entity: Entity type
RANK: Rank of (tensor) fields
Alloc: Allocator for field data
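The following sketch illustrates how such a grid might be instantiated and used. Only the template parameters match the declaration above; the mesh type, the MultiIndex alias, the constructor arguments (communicator, ranks per dimension, blocks per rank, cells per block) and the block-field iteration are assumptions for illustration and may differ from the actual interface in CartesianMPI.h.

    // Hedged sketch: names marked as assumed or hypothetical are not taken
    // from this documentation section.
    #include <mpi.h>
    #include <CartesianMPI.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        {
            // Assumed mesh type and index alias; the corresponding mesh
            // header is omitted here.
            using Mesh = Cubism::Mesh::StructuredUniform<double, 3>;
            using MIndex = Mesh::MultiIndex;

            // Scalar (RANK = 0) cell-centered grid of double data; the
            // remaining template parameters use the defaults listed above.
            using Grid = Cubism::Grid::CartesianMPI<double, Mesh>;

            // Hypothetical constructor arguments: MPI ranks per dimension,
            // blocks per rank and cells per block.
            const MIndex nprocs(2);
            const MIndex nblocks(4);
            const MIndex block_cells(32);
            Grid grid(MPI_COMM_WORLD, nprocs, nblocks, block_cells);

            // The grid manages one SoA allocation for all rank-local blocks;
            // an assumed range-based iteration visits each block field.
            for (auto &bf : grid) {
                (void)bf; // fill the rank-local block field here
            }
        } // grid is destroyed before MPI_Finalize()
        MPI_Finalize();
        return 0;
    }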
-
template<typename