SC23 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Birds of a Feather

The Future of Benchmarks in Supercomputing


Authors: Sreenivas Sukumar (Hewlett Packard Enterprise (HPE)), Jack Wells (NVIDIA Corporation), Arti Garg (Hewlett Packard Enterprise (HPE))

Abstract: As supercomputing welcomes new workflows that combine simulation, data science, and artificial intelligence in the Exascale era, the goal of this session is to pose, debate, and address the question: "How should the SC community evolve its performance benchmarks?" The session will be organized as presentations and panel discussions with audience participation, inviting active members of the Top500, HPCG, MLPerf, TeraSort, and similar efforts, along with key personnel from industry, academia, and government, to discuss the value of, need for, and interest in a benchmark suite that is inclusive of emerging applications and can guide future supercomputing system design and architecture.

Long Description: Motivation: Most HPC practitioners would agree that “workflows” are the new applications. In the post-Moore’s Exascale era, the supercomputing community has welcomed data science and artificial intelligence practitioners, and scientists have adopted data-driven and machine-learning models into their discovery workflows. These communities have brought in a new class of workloads that requires architectural creativity to deliver flexible consumption models of compute capability and capacity. From running HPC codes for molecular dynamics, fluid dynamics, and weather forecasting, we are now entering the creative phase of implementing digital twins, edge-to-supercomputer pipelines of generative and predictive artificial intelligence, computational steering of autonomous labs, and more.

Need to rethink benchmarks: The traditional benchmarking approach of engineering performance by design, i.e., building supercomputers within power and cost budgets that perform as many floating-point operations per second as possible, has served as a directional compass for many years. It is time to rethink this approach because existing benchmarks are:

• Not inclusive or representative of emerging use cases in data science and artificial intelligence (which need more data throughput, bandwidth, memory capacity, etc.)
• Driving vendors and architects to design bespoke compute architectures that do not address broader community interests
• Leading to a proliferation of community-specific benchmarks (HPL, HPCG, MLPerf, etc.)
• Curbing creativity in processor architecture (mixed-precision arithmetic; data, model, and tensor parallelism; etc.)
• Losing relevance to the point that they are no longer considered competitive or worthwhile (e.g., organizations choosing not to submit results to the Top500 or MLPerf)

We need to rethink and co-design how we challenge scientists, system architects, and vendors with realistic and representative benchmarks that measure and optimize end-to-end workflow performance.
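To make the contrast concrete, here is a minimal Python sketch (not part of the session materials; it assumes only NumPy, and the file names inputs.npy/outputs.npy and matrix sizes are illustrative) comparing the HPL-style view, which times only a dense floating-point kernel, against an end-to-end measurement that also counts data loading, preprocessing, and output, the stages a FLOP/s metric ignores.

import time
import numpy as np

def kernel_flops(n=2048):
    # HPL-style view: time only the dense compute kernel and report FLOP/s.
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    t0 = time.perf_counter()
    c = a @ b  # dense matrix multiply: roughly 2*n**3 floating-point operations
    elapsed = time.perf_counter() - t0
    return (2 * n**3) / elapsed

def workflow_seconds(n=2048, path="inputs.npy"):
    # Workflow view: wall-clock time including I/O, preprocessing, compute, and output.
    t0 = time.perf_counter()
    data = np.load(path)                      # data movement, invisible to a FLOP/s metric
    data = (data - data.mean()) / data.std()  # preprocessing / normalization
    result = data[:n, :n] @ data[:n, :n].T    # the same class of dense compute
    np.save("outputs.npy", result)            # writing results is also part of the workflow
    return time.perf_counter() - t0

if __name__ == "__main__":
    print(f"kernel:   {kernel_flops() / 1e9:.1f} GFLOP/s")
    np.save("inputs.npy", np.random.rand(2048, 2048))  # stand-in input data for the sketch
    print(f"workflow: {workflow_seconds():.2f} s end to end")

A benchmark built around the second measurement would rank systems very differently from one built around the first, which is precisely the co-design question this session raises.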

The Proposed Session: The session will be organized as presentations and panel discussions with audience participation. The speakers will include active members of the Top500, HPCG, MLPerf, TeraSort, and similar efforts, along with key personnel from industry, academia, and government who will draw engineers, scientists, and technologists into conversation.

The session will address the following questions:
• How can we ensure that the next generation of supercomputers is well designed and architected to meet the needs of the community?
• How can we design benchmarks that challenge and inspire computational and computer scientists and engineers, as HPL did with the Top500?
• Which applications and metrics are most relevant to scientists, and how can those metrics be factored into procurement decisions?

The expected outcomes of this session are:
• A better understanding of the challenges and benefits of evolving supercomputing benchmarks.
• A consensus on the need for evolving supercomputing benchmarks.
• A set of recommendations for the future of supercomputing benchmarks.
• A list of volunteers willing to form a working group to spearhead such an effort.

Audience: The target audience for the session is the supercomputing community, including researchers, developers, system architects, and decision-makers.

Conclusion: The proposed session will provide a valuable opportunity to discuss the need for evolving supercomputing benchmarks. It will bring together experts from industry, academia, and government to share their insights and perspectives and to chart a path for the future of supercomputing.



