SC23 Proceedings

SC Technical Program Archives

Exhibitor Forums

  • Accelerating Data Analytics Using Object-Based Computational Storage in an HPC – Jongryool Kim (SK hynix)
  • Accelerating Scientific Workflows with the NVIDIA Grace Hopper Platform – Mathias Wagner (NVIDIA Corporation)
  • AI Factory: How to Scale and Deploy – Kevin Tubbs (Penguin Solutions)
  • Composability in HPC: A User’s View from the Trenches – Matt Demas (GigaIO) and Brad Hillis (Oak Ridge National Laboratory (ORNL))
  • Compute Express Link (CXL): Advancing Coherent Connectivity – Kurt Lender (Intel Corporation, Compute Express Link (CXL) Consortium)
  • The Cost of Flexibility and Security in Cloud-Based HPC – A Case Study Running EDA Workloads with Confidential Computing Technology – Mengmei Ye and Derren Dunn (IBM T. J. Watson Research Center)
  • Cost-Effective LLM Inference Solution Using SK hynix's AiM (Accelerator-in-Memory) – Yongkee Kwon (SK hynix Inc.)
  • CXL-Based Memory Disaggregation for HPC and AI Workloads – Hokyoon Lee and Jungmin Choi (SK hynix)
  • A Deep Dive into the Latest NVIDIA HPC Software – Jeff Larkin (NVIDIA Corporation)
  • Defining the Quantum Accelerated Supercomputer – Alex McCaskey (NVIDIA Corporation)
  • Digital Twins for Science – Tom Gibbs (NVIDIA Corporation)
  • Ethernet-Based Interconnect, the Critical Crossroads for HPC and AI Networking at Scale – Eric Eppe (Eviden)
  • Exploring Converged HPC and AI on the Groq AI Inference Accelerator – Tobias Becker (Groq Inc.)
  • From Bugs to Breakthroughs: Harnessing HPC Software Debuggers for Success – Bill Burns (Perforce Software Inc.)
  • From Stencils to Tensors: Running 3D Finite Difference Seismic Imaging on the Groq AI Inference Accelerator – Tobias Becker (Groq Inc.)
  • Hybrid Quantum-HPC at LRZ – Laura Schulz (Leibniz Supercomputing Centre)
  • Large Scale Accelerated Rendering on 10K Ray Tracing Enabled Nodes – Roba Binyahib and David DeMarle (Intel Corporation)
  • Next Arm Processor FUJITSU-MONAKA and Its Technologies – Toshio Yoshida (Fujitsu Ltd.)
  • NVMe Over CXL (NVMe-oC): An Ultimate Optimization of Host-Device Data Movement – Bernard Shung and San Chang (Wolley, Inc.)
  • Overcoming the Cost of Data Movement in AI Inference Accelerators – Arun Iyengar (Untether AI)
  • Scaling Up to 32 GPUs in a Single Node Without Changing a Single Line of Code – John Ihnotic (GigaIO)
  • Strong Scaling of State-of-the-Art LLM Inference with Groq Software-Scheduled Deterministic Networks – Igor Arsovski (Groq Inc.)
  • Supercluster-Scale ML Training with Oracle Cloud Infrastructure – Kevin Jorissen (Oracle)

