25–27 Sept 2024
Virtual Meeting
Europe/Berlin timezone

High performance massively parallel direct N-body simulations on large hybrid CPU/GPU clusters.

Speaker

Peter Bercik (Main Astronomical Observatory, National Academy of Sciences of Ukraine)

Description

Theoretical numerical modeling has become a third pillar of science, alongside theory and experiment (in astrophysics, experiment is largely replaced by observation). Numerical modeling allows one to compare theory with experimental or observational data in unprecedented detail, and it also provides theoretical insight into the physical processes at work in complex systems. We are in the midst of a new revolution in parallel processor technologies, and a shift in parallel programming paradigms, that can push today's software to the exaflop/s scale and help us better solve and understand typical multi-scale problems. This revolution in parallel programming has been largely catalyzed by the use of graphics processing units (GPUs) for general-purpose computation, although it is not clear that this will remain the case in the future. GPUs now accelerate a broad range of applications, including computational physics and astrophysics, image and video processing, engineering simulations, and quantum chemistry, to name only a few.

In this work, we present direct astrophysical N-body simulations with up to six million bodies, carried out with our parallel MPI/CUDA code on large hybrid CPU/GPU clusters (JUREAP/Germany and LUMI/Finland) with different types of mixed CPU/GPU hardware. We achieve about one third of the peak GPU performance for this code in a real application scenario with hierarchical individual block time-steps, high-order (4th, 6th, and 8th) Hermite integration schemes, and a realistic core-halo density structure of the modeled stellar systems.
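To make the computational core of such a talk concrete: high-order Hermite integrators need, at every block time-step, the softened Newtonian acceleration of each body and its time derivative (the "jerk"), obtained by direct summation over all other bodies. The CUDA kernel below is a purely illustrative sketch of that O(N^2) evaluation, not the authors' MPI/CUDA production code; the Body layout, the softening parameter eps2, and the launch configuration are assumptions made for this example.

// Minimal sketch: per-body acceleration and jerk by direct summation,
// the quantities consumed by a 4th-order Hermite predictor-corrector step.
#include <cstdio>
#include <cuda_runtime.h>

struct Body { double3 pos, vel; double mass; };   // illustrative layout, not the production code's

__global__ void acc_jerk_kernel(const Body* bodies, double3* acc, double3* jerk,
                                int n, double eps2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    double3 ai = {0.0, 0.0, 0.0};
    double3 ji = {0.0, 0.0, 0.0};
    double3 ri = bodies[i].pos;
    double3 vi = bodies[i].vel;

    for (int k = 0; k < n; ++k) {                 // direct summation over all bodies
        if (k == i) continue;
        double dx  = bodies[k].pos.x - ri.x;
        double dy  = bodies[k].pos.y - ri.y;
        double dz  = bodies[k].pos.z - ri.z;
        double dvx = bodies[k].vel.x - vi.x;
        double dvy = bodies[k].vel.y - vi.y;
        double dvz = bodies[k].vel.z - vi.z;

        double r2     = dx*dx + dy*dy + dz*dz + eps2;   // softened distance squared
        double rinv   = rsqrt(r2);
        double rinv3  = rinv * rinv * rinv;
        double mrinv3 = bodies[k].mass * rinv3;

        // acceleration: a_i += m_k * r_ik / |r_ik|^3
        ai.x += mrinv3 * dx;  ai.y += mrinv3 * dy;  ai.z += mrinv3 * dz;

        // jerk: j_i += m_k * ( v_ik / r^3 - 3 (r_ik . v_ik) r_ik / r^5 )
        double rv = dx*dvx + dy*dvy + dz*dvz;
        double c  = 3.0 * rv * rinv * rinv;             // 3 (r.v) / r^2
        ji.x += mrinv3 * (dvx - c * dx);
        ji.y += mrinv3 * (dvy - c * dy);
        ji.z += mrinv3 * (dvz - c * dz);
    }
    acc[i]  = ai;
    jerk[i] = ji;
}

int main() {
    const int n = 4;                              // tiny toy system for demonstration
    Body h_bodies[n] = {
        {{0, 0, 0}, {0, 0, 0}, 1.0},
        {{1, 0, 0}, {0, 1, 0}, 1e-3},
        {{0, 2, 0}, {-0.7, 0, 0}, 1e-3},
        {{0, 0, 3}, {0, 0.5, 0}, 1e-3}
    };
    Body* d_bodies; double3 *d_acc, *d_jerk;
    cudaMalloc(&d_bodies, n * sizeof(Body));
    cudaMalloc(&d_acc,    n * sizeof(double3));
    cudaMalloc(&d_jerk,   n * sizeof(double3));
    cudaMemcpy(d_bodies, h_bodies, n * sizeof(Body), cudaMemcpyHostToDevice);

    acc_jerk_kernel<<<(n + 127) / 128, 128>>>(d_bodies, d_acc, d_jerk, n, 1e-8);

    double3 h_acc[n];
    cudaMemcpy(h_acc, d_acc, n * sizeof(double3), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("body %d: a = (%g, %g, %g)\n", i, h_acc[i].x, h_acc[i].y, h_acc[i].z);

    cudaFree(d_bodies); cudaFree(d_acc); cudaFree(d_jerk);
    return 0;
}

In the production setting described in the abstract, this loop would be tiled over GPU shared memory and distributed across MPI ranks and many GPUs, with hierarchical block time-steps selecting which bodies are advanced; the minimal version above only illustrates the arithmetic that dominates the reported GPU performance.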

Primary authors

Peter Bercik (Main Astronomical Observatory, National Academy of Sciences of Ukraine)
Rainer Spurzem (University of Heidelberg)
