SC23 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Research Posters Archive

Exploring Julia as a Unifying End-to-End Workflow Language for HPC on Frontier


Authors: William F. Godoy (Oak Ridge National Laboratory (ORNL)); Pedro Valero-Lara (Oak Ridge National Laboratory (ORNL)); Caira Anderson (Oak Ridge National Laboratory (ORNL); Cornell University); Katrina W. Lee (Oak Ridge National Laboratory (ORNL); University of Texas at Dallas); and Ana Gainaru, Rafael Ferreira da Silva, and Jeffrey S. Vetter (Oak Ridge National Laboratory (ORNL))

Abstract: We evaluate the use of Julia as a single-language and ecosystem paradigm, powered by LLVM, for the development of high-performance computing (HPC) workflow components. A Gray-Scott two-variable diffusion-reaction application using a memory-bound 7-point stencil kernel is run on Frontier, the first exascale supercomputer. We evaluate the feasibility, performance, scaling, and trade-offs of (i) the computational kernel on AMD's MI250X GPUs, (ii) weak scaling up to 4,096 MPI processes/GPUs or 512 nodes, (iii) parallel I/O writes using the ADIOS2 library bindings, and (iv) Jupyter Notebooks for interactive data analysis.
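As an illustration of the components listed above, the following is a minimal Julia sketch of a Gray-Scott 7-point-stencil GPU update written with the AMDGPU.jl package. The field names, coefficients, grid size, and launch configuration are assumptions for illustration only and do not reproduce the poster's actual code.

    using AMDGPU

    # Illustrative 7-point-stencil Gray-Scott update for the u field; the v field
    # uses the analogous expression. Interior points only; boundaries are untouched.
    # Coefficients (Du, F) and the absence of a 1/dx^2 factor are illustrative.
    function grayscott_u!(unew, u, v, Du, F, dt)
        i = workitemIdx().x + (workgroupIdx().x - 1) * workgroupDim().x
        j = workitemIdx().y + (workgroupIdx().y - 1) * workgroupDim().y
        k = workitemIdx().z + (workgroupIdx().z - 1) * workgroupDim().z
        nx, ny, nz = size(u)
        if 1 < i < nx && 1 < j < ny && 1 < k < nz
            @inbounds begin
                lap = u[i-1,j,k] + u[i+1,j,k] + u[i,j-1,k] + u[i,j+1,k] +
                      u[i,j,k-1] + u[i,j,k+1] - 6f0 * u[i,j,k]
                uvv = u[i,j,k] * v[i,j,k] * v[i,j,k]
                unew[i,j,k] = u[i,j,k] + dt * (Du * lap - uvv + F * (1f0 - u[i,j,k]))
            end
        end
        return nothing
    end

    n = 128
    u    = ROCArray(ones(Float32, n, n, n))
    v    = ROCArray(zeros(Float32, n, n, n))
    unew = similar(u)
    threads = (8, 8, 8)                 # workgroup size
    groups  = cld.((n, n, n), threads)  # number of workgroups per dimension
    # Launch conventions (gridsize counted in workgroups vs. workitems) have varied
    # across AMDGPU.jl versions; this assumes gridsize counts workgroups.
    @roc groupsize=threads gridsize=groups grayscott_u!(unew, u, v, 0.2f0, 0.04f0, 1f0)
    AMDGPU.synchronize()

A kernel body of this kind compiles through Julia's LLVM pipeline to AMD GPU code, which is the path whose generated LLVM IR the results below compare against a native HIP implementation.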

We discuss our results, which show that although Julia generates a reasonable LLVM-IR kernel, there is nearly a 50% performance gap relative to native AMD HIP stencil codes on the GPU. We observed near-zero overhead when using the MPI and parallel I/O bindings to system-wide installed implementations. Consequently, Julia emerges as a compelling high-performance, high-productivity workflow composition strategy as measured on Frontier.
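To make the binding-based composition concrete, the following is a minimal sketch of a per-rank parallel write using MPI.jl together with the ADIOS2.jl bindings. The MPI.jl calls are standard; the ADIOS2.jl entry points shown (adios_init_mpi, declare_io, define_variable, put!, perform_puts!) follow the ADIOS2 C/C++ API style and should be read as assumptions about the bindings, as are the array sizes, decomposition, and file name.

    using MPI, ADIOS2

    MPI.Init()
    comm  = MPI.COMM_WORLD
    rank  = MPI.Comm_rank(comm)
    nproc = MPI.Comm_size(comm)

    # Illustrative 1D decomposition of a global field along the last dimension.
    nx, ny, nz_local = 64, 64, 64
    u = rand(Float64, nx, ny, nz_local)

    adios = adios_init_mpi(comm)             # assumed ADIOS2.jl entry point
    io    = declare_io(adios, "gray-scott-io")

    # Global shape, per-rank start offset, and local count (ADIOS2 BP convention).
    shape = (nx, ny, nz_local * nproc)
    start = (0, 0, rank * nz_local)
    count = (nx, ny, nz_local)
    var_u = define_variable(io, "u", eltype(u), shape, start, count)

    engine = open(io, "gray_scott.bp", mode_write)
    put!(engine, var_u, u)
    perform_puts!(engine)
    close(engine)

    MPI.Finalize()

Because MPI.jl and ADIOS2.jl are thin bindings, they can be pointed at the system-wide MPI and ADIOS2 installations on Frontier, which is consistent with the near-zero binding overhead reported above.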


Best Poster Finalist (BP): no


