CubismNova is a C++ template library for solving Partial Differential Equations (PDEs) on structured uniform or stretched grids as well as block-structured or adaptively refined grids (AMR). The library provides data structures for point-wise and stencil operations with support for efficient halo (ghost cell) communication using the Message Passing Interface (MPI). A toolbox with vectorized kernels for common operations such as finite difference operators, WENO reconstruction, interpolation, restriction and prolongation operators, as well as various data compression schemes is available to the user. Extended data structures for use with various time integration schemes are available as well.
The library is a full refactoring of its successful predecessor Cubism (Hejazialhosseini et al. [HRCK12]), which later won the Gordon Bell award in 2013 for a compressible multicomponent flow problem (Rossinelli et al. [RHH+13]). Further optimizations of the same code are presented in Hadjidoukas et al. [HRHK15] and Hadjidoukas et al. [HRW+15]. The refactored library offers easier access for the community by separating high performance computing (HPC) concepts from the library user, who can thus focus on the algorithm design for the problem at hand. It further offers integrated multigrid solvers and compression algorithms to reduce the I/O overhead at scale. Moreover, it takes into account suitable data structures for use with heterogeneous accelerators (Wermelinger et al. [WHH+16]). Apart from compressible multicomponent flow simulations (Šukys et al. [vSukysRW+18], Wermelinger et al. [WRHK18], Rasthofer et al. [RWK+19]), the library is also used for incompressible multiphase flow (Karnakov et al. [KWC+19]) as well as incompressible flow with collective swimmers (Verma et al. [VNK18]).
CubismNova can be downloaded from GitHub:
$ git clone --recurse-submodules https://github.com/cselab/CubismNova
$ cd CubismNova
The library can be compiled using cmake. A working MPI implementation is required and the mpic++ compiler wrapper must be in the PATH environment variable. A debug build can be generated with
$ ./cmake_init.sh debug <install path>
$ cd debug
$ make -j && make test
$ make install
$ cd .. && rm -rf debug
This assumes that your starting directory is the project root of CubismNova.
An optimized build is likewise generated with
$ ./cmake_init.sh release <install path>
$ cd release
$ make -j && make test
$ make install
$ cd .. && rm -rf release
Instead of release you can use any other token except debug. If the <install path> is a system directory, use sudo make install instead.
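If the library is installed into a non-system <install path>, compilers and build systems will not find it automatically. The following sketch makes such a prefix visible; the location used here is only an example assumption, substitute your own <install path>:

```shell
# Example location only (assumption); use your actual <install path>.
INSTALL_PATH="$HOME/opt/CubismNova"

# Let CMake's find_package() and the compiler/linker locate the install.
export CMAKE_PREFIX_PATH="$INSTALL_PATH:$CMAKE_PREFIX_PATH"
export CPATH="$INSTALL_PATH/include:$CPATH"
export LIBRARY_PATH="$INSTALL_PATH/lib:$LIBRARY_PATH"
```

Placing these exports in your shell startup file makes the setting persistent across sessions.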
P. E. Hadjidoukas, D. Rossinelli, B. Hejazialhosseini, and P. Koumoutsakos. From 11 to 14.4 PFLOPs: performance optimization for finite volume flow solver. In Proceedings of the 3rd International Conference on Exascale Applications and Software, EASC '15, 7–12. Edinburgh, Scotland, UK, 2015. University of Edinburgh. URL: http://dl.acm.org/citation.cfm?id=2820083.2820085.
P. E. Hadjidoukas, D. Rossinelli, F. Wermelinger, J. Šukys, U. Rasthofer, C. Conti, B. Hejazialhosseini, and P. Koumoutsakos. High throughput simulations of two-phase flows on Blue Gene/Q. In Parallel Computing: On the Road to Exascale, Proceedings of the International Conference on Parallel Computing, ParCo 2015, 1-4 September 2015, Edinburgh, Scotland, UK, 767–776. 2015. URL: http://dx.doi.org/10.3233/978-1-61499-621-7-767, doi:10.3233/978-1-61499-621-7-767.
B. Hejazialhosseini, D. Rossinelli, C. Conti, and P. Koumoutsakos. High throughput software for direct numerical simulations of compressible two-phase flows. SC Conference, 0:1–12, 2012. doi:10.1109/SC.2012.66.
P. Karnakov, F. Wermelinger, M. Chatzimanolakis, S. Litvinov, and P. Koumoutsakos. A high performance computing framework for multiphase, turbulent flows on structured grids. In Proceedings of the Platform for Advanced Scientific Computing Conference, PASC '19. ACM Press, 2019. URL: https://doi.org/10.1145%2F3324989.3325727, doi:10.1145/3324989.3325727.
U. Rasthofer, F. Wermelinger, P. Karnakov, J. Šukys, and P. Koumoutsakos. Computational study of the collapse of a cloud with 12500 gas bubbles in a liquid. Phys. Rev. Fluids, 4:063602, 2019. URL: https://link.aps.org/doi/10.1103/PhysRevFluids.4.063602, doi:10.1103/PhysRevFluids.4.063602.
D. Rossinelli, B. Hejazialhosseini, P. Hadjidoukas, Costas Bekas, Alessandro Curioni, Adam Bertsch, Scott Futral, Steffen J. Schmidt, Nikolaus A. Adams, and P. Koumoutsakos. 11 PFLOP/s simulations of cloud cavitation collapse. In Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC '13, 3:1–3:13. New York, NY, USA, 2013. ACM. URL: http://doi.acm.org/10.1145/2503210.2504565, doi:10.1145/2503210.2504565.
S. Verma, G. Novati, and P. Koumoutsakos. Efficient collective swimming by harnessing vortices through deep reinforcement learning. Proceedings of the National Academy of Sciences, 115(23):5849–5854, 2018. URL: https://doi.org/10.1073%2Fpnas.1800923115, doi:10.1073/pnas.1800923115.
F. Wermelinger, B. Hejazialhosseini, P. E. Hadjidoukas, D. Rossinelli, and P. Koumoutsakos. An efficient compressible multicomponent flow solver for heterogeneous CPU/GPU architectures. In Proceedings of the Platform for Advanced Scientific Computing Conference, PASC '16, 8:1–8:10. New York, NY, USA, 2016. ACM Press. URL: http://doi.acm.org/10.1145/2929908.2929914, doi:10.1145/2929908.2929914.
F. Wermelinger, U. Rasthofer, P. E. Hadjidoukas, and P. Koumoutsakos. Petascale simulations of compressible flows with interfaces. Journal of Computational Science, 26:217–225, 2018. URL: https://doi.org/10.1016%2Fj.jocs.2018.01.008, doi:10.1016/j.jocs.2018.01.008.
J. Šukys, U. Rasthofer, F. Wermelinger, P. E. Hadjidoukas, and P. Koumoutsakos. Multilevel control variates for uncertainty quantification in simulations of cloud cavitation. SIAM Journal on Scientific Computing, 40(5):B1361–B1390, 2018. URL: https://doi.org/10.1137%2F17m1129684, doi:10.1137/17m1129684.