Authors: Jennifer Faj, Jeremy J. Williams, and Ivy B. Peng (KTH Royal Institute of Technology, Sweden); Urs Ganse, Markus Battarbee, Yann Pfau-Kempf, Leo Kotipalo, and Minna Palmroth (University of Helsinki); and Stefano Markidis (KTH Royal Institute of Technology, Sweden)
Abstract: Vlasiator is a popular and powerful massively parallel code for accurate magnetospheric and solar wind plasma simulations. This work provides an in-depth analysis of Vlasiator's MPI performance using the Integrated Performance Monitoring (IPM) tool. We show that non-blocking MPI point-to-point communication accounts for most of the communication time. The communication topology reveals a large number of MPI messages exchanging data on a six-dimensional grid. We also show that relatively large messages, up to 256 MB, are used in MPI communication. Because Vlasiator is communication-bound, we find that using OpenMP is critical for eliminating intra-node communication. Our results provide important insights for optimizing Vlasiator for the upcoming exascale machines.
Best Poster Finalist (BP): no