SC23 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Birds of a Feather

Mixed Feelings about Mixed Precisions

Authors: Hatem Ltaief (King Abdullah University of Science and Technology (KAUST)), Piotr Luszczek (University of Tennessee, Innovative Computing Laboratory), Hartwig Anzt (Innovative Computing Laboratory, University of Tennessee), Harun Bayraktar (NVIDIA Corporation)

Abstract: What if we have been oversolving in computational science and engineering for decades? Are low-precision arithmetic formats only for AI workloads? How can HPC applications exploit mixed-precision hardware features? This BoF invites the HPC community at large to apply mixed precision in their workflows and to discuss its impact on time-to-solution, memory footprint, data motion, and energy consumption. Experts from scientific applications, software libraries, and hardware architectures will briefly provide context on this trending topic, share their own perspectives, and, above all, engage with the audience via a set of questions while gathering feedback to define a roadmap moving forward.

Long Description: Motivated by the quickly changing hardware landscape, which now prominently features low-precision arithmetic support, mixing floating-point precision formats has become a mainstream algorithmic optimization. The trade-offs, however, are important to consider and are not immediately obvious. These optimizations require support from the usual software stack (libraries, middleware, and application codes), strong numerical validation procedures, and ultimately scientific evidence that the applications' key results survive the intermediate precision loss. The emphasis on alternatives to the IEEE 754 floating-point standard emerged in recent years from machine learning's insatiable need for computational power to train deep models: convolutional neural networks, ever-growing transformer models based on attention mechanisms for natural language processing, and human-level models that play open-ended games such as Go, DOTA 2, and Poker or predict the structural properties of folded proteins (recall AlphaGo, AlphaZero, and AlphaFold). A successful training session for these models requires compute in excess of PetaFLOP/s-days and costs millions of dollars in hardware and energy. To reduce the power draw, a drastic reduction in floating-point storage bits is required to reap energy-saving benefits, alongside other tricks of the trade such as sparsification and related approaches based on the lottery ticket hypothesis. These were the driving factors toward lower-precision representations and specialized tensor floating-point units that enable PetaFLOP/s-level performance from a single compute node. Unlike machine learning models, scientific applications are much more demanding in terms of accuracy, but there are already early success stories of exploiting a mix of precisions in HPC.
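One classic way HPC codes mix precisions, discussed in this community, is iterative refinement: perform the expensive linear solve in low precision and recover accuracy by accumulating residual corrections in high precision. The sketch below is illustrative only, not a method endorsed by the BoF organizers; it uses NumPy, with `float32` standing in for the low-precision format and `float64` for the working precision, and the function name `mixed_precision_solve` is our own.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Illustrative mixed-precision iterative refinement for Ax = b.

    The solve runs in float32 (the "fast" precision); residuals are
    computed in float64 so corrections steer x back toward the
    double-precision answer.
    """
    A32 = A.astype(np.float32)
    # Initial solve entirely in low precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))   # correction in float32
        x += d.astype(np.float64)
    return x

# Usage: a well-conditioned random system, where refinement recovers
# close to full double-precision accuracy.
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n)) + 100.0 * np.eye(n)  # diagonally dominant
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
rel_residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

The payoff mirrors the BoF's themes: the O(n^3) factorization cost is paid in the cheaper format (less memory traffic, less energy), while the O(n^2) refinement steps restore accuracy; on ill-conditioned systems the low-precision solve can fail to converge, which is exactly the kind of trade-off the session examines.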
In this BoF, we aim to bring together developers and users alike and provide a forum for exchanging information on established and upcoming techniques in mixed-precision applications, software libraries, and hardware platforms of particular interest to the HPC community. To that end, we limited the expert participation to a few main thrusts: the hardware/vendor perspective, theoretical mathematical foundations, and the software ecosystem. We gathered an inclusive set of moderators representing a variety of community outlooks and geographical locales. In our opinion, this will ensure a rapid and iterative exploration of mixed precision and enhance the interaction with the audience, whom we plan to engage with questions and prompts in an interactive session that leads to a rich experience and encourages follow-up discussions. This is the third event in a series on the topic of mixed precisions, which started at SC22 and continued at ISC23. Our previous exchanges with the audience were enriching and helped foster the interdisciplinary research collaborations required to address the various computational challenges related to mixed precisions.
