

The SC Papers program is the leading venue for presenting high-quality original research, groundbreaking ideas, and compelling insights on future trends in high performance computing, networking, storage, and analysis. Attend presentations of peer-reviewed technical papers on a wide range of topics over three inspiring days.


Papers Schedule
Tuesday–Thursday, November 14–16, 2023


Papers Co-Chair
Ümit V. Çatalyürek, Amazon Web Services, Georgia Institute of Technology

Papers Co-Chair
Karen Devine, Sandia National Laboratories, ret.

Paper submissions open March 1, 2023.

Paper Submissions

MAR 30, 2023

Abstract Submissions Close

APR 6, 2023 (No Extensions)

Full Paper Submissions Close

APR 20, 2023

AD (mandatory)/AE (optional) Due

MAY 22–26, 2023

Review/Rebuttal Period

JUN 16, 2023

Notifications Sent

AUG 26, 2023

Final Paper Due

How to Submit

What Is a Paper?

The SC Papers program presents high-quality original research, groundbreaking ideas, and compelling insights on future trends in high performance computing, networking, storage, and analysis. Technical papers are peer-reviewed, and an Artifact Description is mandatory for all papers submitted to SC.



Preparing Your Submission

A paper submission has three components: the paper itself, an Artifact Description Appendix (AD), and an Artifact Evaluation Appendix (AE). The Artifact Description Appendix, or explanation of why there is no artifact description, is mandatory. The Artifact Evaluation Appendix is optional.


Only papers that have not previously been published in peer-reviewed venues are eligible for submission to SC. For example, papers pre-posted to arXiv, institutional repositories, or personal websites (but not published in peer-reviewed venues) remain eligible for SC submission.

Papers that were published in a workshop are eligible if they have been substantially enhanced (i.e., at least 30% new material).

Paper Format

  • Submissions are limited to 10 two-column pages (U.S. letter, 8.5″ × 11″), excluding the bibliography, using the ACM proceedings template. LaTeX users should use the “sigconf” option (the “review” option is recommended but not required); Word authors should use the “Interim Layout”.
  • AD and AE appendices are automatically generated and do not count against the 10 pages.
  • Authors of accepted papers may provide supplemental material with their final version of the paper (e.g., additional proofs, videos, or images).
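The formatting rules above can be sketched as a LaTeX skeleton. This is a minimal, hypothetical example rather than an official template: “sigconf” and “review” are the acmart class options named in the guidance, and “anonymous” is assumed here as the standard acmart option for suppressing author identities in double-blind submissions (verify against the current ACM template documentation).

```latex
% Minimal sketch of a submission skeleton using the ACM acmart class.
% "sigconf" is required per the guidance above; "review" is the
% recommended-but-optional review layout; "anonymous" suppresses
% author identities (assumed acmart option -- verify against the
% current ACM template documentation).
\documentclass[sigconf,review,anonymous]{acmart}

\begin{document}

\title{Your Paper Title}
% Author blocks remain in the source for the camera-ready version,
% but are hidden in the PDF while "anonymous" is set.
\author{Anonymous Author(s)}

\begin{abstract}
Abstract text here.
\end{abstract}

\maketitle

\section{Introduction}
Body text here.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
```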

Reproducibility Initiative

Reproducible science is essential, and SC continues to innovate in this area. AD/AE Appendices are integrated into the review process and considered at every stage of paper review. They are auto-generated from author responses to a standard form embedded in the SC online submission system. The Artifact Description Appendix (or an explanation of why there is no Artifact Description Appendix) is mandatory; the Artifact Evaluation Appendix is optional.

Learn more about the Reproducibility Initiative.

Paper Review Process

Papers are peer-reviewed by a committee of experts. Each paper will have three to four reviews. The peer review process is double-blind for the paper and double-open for the Appendices. Paper reviewers do not have access to the names of authors. Appendices reviewers and authors will know each other’s names. While Papers Committee members are named on the SC23 Planning Committee page, the names of the individuals reviewing each paper are not made available to the paper authors. Learn more about the SC double-blind review policy.

Papers that do not respect the submission guidelines will be rejected immediately without review. Examples include papers that violate the double-blind requirement, exceed the page limit, or omit the mandatory Artifact Description appendix.

From an author’s perspective, the following are the key steps:

  1. Authors submit a title, abstract, and other metadata.
  2. Authors submit their full paper.
  3. Within two weeks of submitting their paper, authors complete an AD/AE form describing their computational artifacts (or lack of computational artifacts) and, optionally, how they evaluated their computational results.
  4. The paper is reviewed, and reviews are distributed to the authors.
  5. Authors prepare a rebuttal.
  6. Reviewers consider the rebuttal.
  7. Paper decisions are made in mid-June; some papers may be shepherded for further changes.
  8. Authors of accepted papers prepare the final version of their paper.


Submissions will be considered on any topic related to high performance computing within the areas below. Authors must indicate a primary area from the choices on the submissions form and are strongly encouraged to indicate a secondary area.

Small-scale studies – including single-node studies – are welcome as long as the paper clearly conveys the work’s contribution to high performance computing.


Algorithms

The development, evaluation, and optimization of scalable, general-purpose, high performance algorithms.

Topics include:

  • Algorithms for discrete and combinatorial optimization
  • Algorithms for hybrid and heterogeneous systems with accelerators
  • Algorithms for numerical methods and algebraic systems
  • Data-intensive parallel algorithms
  • Energy- and power-efficient algorithms
  • Fault-tolerant algorithms
  • Graph and network algorithms
  • Load balancing and scheduling algorithms
  • Machine learning algorithms
  • Uncertainty quantification methods
  • Other high performance computing algorithms


Applications

The development and enhancement of algorithms, parallel implementations, models, software, and problem-solving environments for specific applications that require high performance resources.

Topics include:

  • Bioinformatics and computational biology
  • Computational earth and atmospheric sciences
  • Computational materials science and engineering
  • Computational astrophysics/astronomy, chemistry, and physics
  • Computational fluid dynamics and mechanics
  • Computation and data enabled social science
  • Computational design optimization for aerospace, energy, manufacturing, and industrial applications
  • Computational medicine and bioengineering
  • Irregular applications including graphs, network science, and text/pattern matching
  • Improved models, algorithms, performance, or scalability of specific applications and their software
  • Use of uncertainty quantification, statistical, and machine-learning techniques to improve a specific HPC application
  • Other high performance applications

Architecture & Networks

All aspects of high performance hardware including the optimization and evaluation of processors and networks.

Topics include:

  • Architectural support for programming languages or software development
  • Architectures to support extremely heterogeneous composable systems (e.g., chiplets)
  • Design-space exploration / performance projection for future systems
  • Evaluation and measurement on testbed or production hardware systems
  • Hardware acceleration of containerization and virtualization mechanisms for HPC
  • Interconnect technologies, topology, switch architecture, optical networks, software-defined networks
  • I/O architecture/hardware and emerging storage technologies
  • Memory systems: caches, memory technology, non-volatile memory, memory system architecture (to include address translation for cores and accelerators)
  • Multi-processor architecture and micro-architecture (e.g., reconfigurable, vector, stream, dataflow, GPUs, and custom/novel architecture)
  • Network protocols, quality of service, congestion control, collective communication
  • Power-efficient design and power-management strategies
  • Resilience, error correction, high availability architectures
  • Scalable and composable coherence (for cores and accelerators)
  • Secure architectures, side-channel attacks, and mitigation
  • Software/hardware co-design, domain specific language support

Clouds & Distributed Computing

Cloud and system software architecture, configuration, optimization and evaluation, support for parallel programming on large-scale systems or building blocks for next-generation HPC architectures.

Topics include:

  • Convergence of HPC, cloud, edge, and other distributed computing resources
  • Analysis of cost, performance, and reliability of HPC, cloud, and edge facilities
  • Systems, models, and languages that facilitate distributed applications, such as workflow systems, task-oriented systems, functions-as-a-service, and service-oriented computing
  • Systems, models, and languages for big data, streaming data, and in-situ data analysis on clouds and distributed systems
  • Integration and management of high performance computing hardware (such as accelerators, complex memories, advanced networks) in clouds and distributed systems
  • Scheduling, load balancing, resource provisioning, resource management, cost efficiency, fault tolerance, and reliability for clouds
  • Green clouds, energy efficiency, power management
  • Self-configuration, management, monitoring, and introspection
  • Security, sharing, auditing, and identity management
  • Virtualization, containerization, and other technologies for isolation and portability
  • Case studies of scalable distributed applications that span facilities

Data Analytics, Visualization, & Storage

All aspects of data analytics, visualization, storage, and storage I/O related to HPC systems. Submissions on work done at scale are highly favored.

Topics include:

  • Cloud-based analytics at scale
  • Databases and scalable structured storage for HPC
  • Data mining, analysis, and visualization for modeling and simulation
  • Data reduction/compression on HPC and clouds for simulation, and experimental data
  • Design and optimization of integrated workflows for visual analytics
  • Ensemble analysis and visualization
  • I/O performance tuning, benchmarking, and middleware
  • In situ data processing and visualization
  • Next-generation storage systems and media
  • Parallel file, object, key-value, campaign, and archival systems
  • Provenance, metadata, and data management
  • Reliability and fault tolerance in HPC storage
  • Scalable storage, metadata, namespaces, and data management
  • Storage tiering, both entirely on-premise tiering and tiering between on-premise and cloud
  • Storage innovations using machine learning, such as predictive tiering and failure prediction
  • Storage networks
  • Scalable cloud, multi-cloud, and hybrid storage
  • Storage systems for data-intensive computing
  • Visual analytics for monitoring and optimizing supercomputing systems and applications
  • Visual analytics for interpreting and tuning machine learning models at scale

Machine Learning (ML) with HPC

The development and enhancement of algorithms, systems, and software for scalable machine learning utilizing high performance computing technology. This area primarily addresses the use of HPC to improve ML, rather than the use of ML to improve technologies covered by other areas; papers addressing the latter should be submitted to the respective areas.

Topics include:

  • HPC for ML
  • Data parallelism and model parallelism
  • Efficient hardware for machine learning
  • Hardware-efficient training and inference
  • Performance modeling of machine learning applications
  • Scalable optimization methods for machine learning
  • Scalable hyper-parameter optimization
  • Scalable neural architecture search
  • Scalable IO for machine learning
  • Systems, compilers, and languages for machine learning at scale
  • Testing, debugging, and profiling machine learning applications
  • Visualization for machine learning at scale

Performance Measurement, Modeling, & Tools

Novel methods and tools for measuring, evaluating, and/or analyzing performance for large-scale systems.

Topics include:

  • Analysis, modeling, or simulation methods for performance
  • Methodologies, metrics, and formalisms for performance analysis and tools
  • Novel and broadly applicable performance optimization techniques
  • Performance studies of HPC hardware and software subsystems such as processor, network, memory, accelerators, and storage
  • Scalable tools and instrumentation infrastructure for measurement, monitoring, and/or visualization of performance
  • System-design tradeoffs between performance and other metrics (e.g., performance and resilience, performance and security)
  • Workload characterization and benchmarking techniques

Post-Moore Computing

Technologies that continue the scaling of supercomputing performance beyond the limits of Moore’s law, including system architecture, programming frameworks, system software, and applications.

Topics include:

  • Hardware specialization and taming extreme heterogeneity
  • Beyond von-Neumann computer architectures
  • Special purpose computing (e.g., Anton or GRAPE)
  • Quantum computing
  • Neuromorphic and brain-inspired computing
  • Probabilistic, stochastic computing, and approximate computing
  • Novel post-CMOS device technologies and advanced packaging technologies for heterogeneous integration (evaluated in a supercomputing systems or application context)
  • Superconducting electronics for supercomputing
  • Programming models and programming paradigms for post-Moore systems
  • Tools for modeling, simulating, emulating, or benchmarking post-Moore and post-CMOS devices and systems

Programming Frameworks & System Software

Operating system, runtime system, technologies, and software building blocks that enable management of hardware resources and support parallel programming for large-scale systems.

Topics include:

  • Compiler analysis/optimization, program verification, and program transformation/synthesis to enhance cross-platform portability, maintainability, result reproducibility, resilience, etc. (e.g., combined static and dynamic analysis methods, testing, formal methods)
  • Parallel programming languages, libraries, models, notations, application frameworks, and runtime systems
  • System software, and programming language and compilation techniques for reducing energy and data movement (e.g., precision allocation, use of approximations, tiling)
  • Solutions for parallel-programming challenges (e.g., support for global address spaces, interoperability, memory consistency, determinism, reproducibility, race detection, work stealing, or load balancing)
  • Tools and frameworks for parallel program development (e.g., debuggers and integrated development environments)
  • Approaches for enabling adaptive and introspective system software
  • OS and runtime system enhancements for attached and integrated accelerators
  • Interactions among the OS, runtime, compiler, middleware, and tools
  • Parallel/networked file system integration with the OS and runtime
  • Resource management, job scheduling, system interoperations and energy-aware techniques for large-scale systems
  • Runtime and OS management of complex memory hierarchies

State of the Practice

All aspects of the pragmatic practices of HPC, including operational IT infrastructure, services, facilities, large-scale application executions and benchmarks. Papers are expected to capture experiences and ongoing practice relating to modern computing centers or HPC-related software. Papers do not need to cover novel research or developments, but they are expected to offer novel insights and lessons for HPC architects, developers, administrators, or users.

Topics include:

  • Bridging of cloud data centers and supercomputing centers
  • Energy and power efficiency of HPC and data centers
  • Comparative system benchmarking over a wide spectrum of workloads
  • Containers at scale: performance and overhead
  • Deployment experiences of large-scale hardware and software infrastructures and facilities
  • Facilitation of “big data” associated with supercomputing
  • Infrastructural policy issues, especially international experiences
  • Long-term infrastructure management experiences
  • Pragmatic resource management strategies and experiences
  • Monitoring and operational data analytics
  • Procurement, technology investment and acquisition best practices
  • Quantitative results of education, training, and dissemination activities
  • Software engineering best practices for HPC
  • User support experiences with large-scale and novel machines
  • Reproducibility of data

Conflict of Interest, Plagiarism, & AI-Generated Text

Conflict of Interest

Please be aware of, and adhere to, these SC Conference guidelines regarding potential conflicts of interest and disclosure.

A potential conflict of interest occurs when a person is involved in making a decision that:

  • Could result in that person, a close associate of that person, or that person’s company or institution receiving significant financial gain, such as a contract or grant, or
  • Could result in that person, or a close associate of that person, receiving significant professional recognition, such as an award or the selection of a paper, work, exhibit, or other type of submitted presentation.

Program Committee members will be given the opportunity to list potential conflicts of interest during each program’s review process. Program Committee chairs and area chairs will make every effort to avoid assignments that have a potential COI.

According to SC conference guidelines, you have a conflict of interest with the following:

  • Your PhD advisors, post-doctoral advisors, PhD students, and post-doctoral advisees;
  • Family relations by blood or marriage, or equivalent (e.g., a partner);
  • People with whom you collaborated in the past five years. Collaborators include: co-authors on an accepted/rejected/pending research paper; co-PIs on an accepted/pending grant; those who fund your research; researchers whom you fund; or researchers with whom you are actively collaborating;
  • Close personal friends or others with whom you believe a conflict of interest exists;
  • People who were employed by, or a student at, your primary institution(s) in the past five years, or people who are active candidates for employment at your primary institution(s).

Note that “service” collaborations, such as writing a DOE, NSF, or DARPA report, or serving on a program committee, or serving on the editorial board of a journal, do not inherently create a COI.

Other situations can create COIs, and you should contact the Technical Program Chairs for questions or clarification on any of these issues.


Plagiarism

Please review the ACM guidelines on identifying plagiarism.

AI-Generated Text

The use of artificial intelligence (AI)–generated text in an article shall be disclosed in the acknowledgements section of any paper submitted to SC. The sections of the paper that use AI-generated text shall include a citation to the AI system used to generate the text.

Double-Blind Review

This document aims to help authors, reviewers, and Papers Chairs understand the double-blind review process that the SC Conference Series has adopted. Please contact us with any questions or comments.

Guidance for Authors

If you are an author, you should write your paper so as not to disclose your identity or the identities of your co-authors. The following guidelines are best practices for “blinding” a submission in a way that should not weaken it or the presentation of its ideas. These guidelines are broken up into the major submission and review phases: while writing (before submitting), at submission time, and during the rebuttal process.

These practices were distilled from McKinley (2015) and Snodgrass (2007).

While Writing

  • Do not use your name or your co-authors’ names, affiliations, funding sources, or acknowledgments in the heading or body of the document. It is absolutely fine and encouraged to use the name of the machine you are working on and describe it.
  • Do not eliminate self-references to your published work that are relevant and essential to a proper review of your paper solely in an attempt to anonymize your submission. Instead, write self-references in the third person. Recall that the goal and spirit of double-blind review is to create uncertainty about authorship, which is sufficient to realize most of its benefits.
  • To reference your unpublished work, use anonymous citations. From Snodgrass (2007): “The authors developed … [1]” where the reference [1] appears as, “[1] Anonymous (omitted due to double-blind review).” You will have a chance to explain these references to the non-conflicted Papers Chair or their designee(s); see At Submission Time, below. See the FAQ for more examples.

At Submission Time

  • At submission time, you will be asked to declare conflicts of interest you may have with program committee members. You will also have the option to upload a list of conflicts. Reviewers will be asked separately to verify declared conflicts.
  • Because of the double-blind process, there is no limit on the number of submissions by reviewers. (Track Co-Chairs are subject to limits.) However, there is a limit of four accepted papers per reviewer.

During the Review Period

You are not forbidden from disseminating your work via talks or technical reports. However, you should not try to directly or otherwise unduly influence program committee members who may be reviewing your paper.

During the Rebuttal Period

During the rebuttal period, authors should still assume double-blind review. Therefore, authors should not disclose their identities in their rebuttal to the reviewers. However, as with the original submission, authors will have the option of entering identity-revealing information in a separate part of the rebuttal form that will, by default, be visible only to non-conflicted Chairs, or their designee(s) in the case of conflicts.

Guidance for Papers Chairs & Reviewers

The following is a set of guidelines for the Papers Co-Chairs or Area Co-Chairs (hereafter, “Chair”) and reviewers (i.e., Papers Committee members). Generally speaking, the procedures draw inspiration from the three principles suggested by Snodgrass:

“The first is that authors should not be required to go to great lengths to blind their submissions. The second is that comprehensiveness of the review trumps blinding efficacy. The final principle … is that [editors and chairs] retain flexibility and authority in managing the reviewing process.”

Before the Submissions Deadline

  • Correctly identifying conflicts of interest (COIs) is one of the most important procedural aspects of double-blind review. Therefore, before the Papers submissions deadline, Chairs and reviewers should log into the review system to verify and upload their conflicts of interest. This process can be time consuming, so please plan accordingly.
  • Reviewers will be given a list of submitting authors from all tracks who have indicated they have a conflict with that reviewer. The authors will be listed separately from any potential co-authors. This will enable the reviewer to point out to the Papers chair(s) any declared conflicts that are potentially spurious.
  • During paper bidding, reviewers should let their Chair know if they suspect a conflict with a submission and what they believe is the nature of the conflict.

During the Review Period

  • A reviewer may accidentally discover the identities of the authors during the review. (For instance, he or she might be checking references to determine the novelty of the submission and discover a technical report with the same content.) In this case, the reviewer should disclose this discovery to their Chair. Such incidents do not necessarily “violate” the double-blind policy, and the reviewer may continue to review the paper. The spirit of double-blind reviewing is that reviewers should not actively try to discover who the authors of a submission are.
  • A reviewer who thinks he or she knows an author’s identity should not reveal their suspicion in their review or during discussions with other reviewers.
  • SC Papers follows a “double-blind until accept” procedure. That is, author identities remain hidden until the review committee has determined all of the accepted papers. For rejected papers, author identities remain hidden even after rejection.
  • Reviewers who feel that knowing an author’s name or affiliations is necessary to review a submission can make their case to their Chair at any time during the review process.
  • Reviewers who wish to ask colleagues to help with reviews must clear these requests with the Chair first and take steps to ensure that the colleague understands the double-blind policy. In any case, a reviewer is responsible for representing their reviews fully.

During the Program Committee Meeting

Chairs should observe and manage conflicts as they would in a single-blind review. For instance, they should avoid discussing a paper until all of the paper’s conflicted reviewers have left the room.

Upon Acceptance


If your paper is selected, at least one author must register for the Technical Program in order to attend the SC Conference and present the paper.

For an accepted paper to be included in the proceedings, an author must present the paper at the conference in person.


All accepted papers will be listed in the online SC Schedule.

Papers are archived in the ACM Digital Library and IEEE Xplore; members of SIGHPC or subscribers to the archives may access the full papers without charge. This publication contains the full text of all Papers and their Artifact Description appendices presented at the SC Conference.


Schedule & Location

Paper presentations will be held Tuesday–Thursday, November 14–16, 2023. Paper sessions are 30 minutes. Day, time, and location for each paper session will be published in the online SC Schedule by September.


Papers are assigned either a classroom or a theater room equipped with standard AV facilities:

  • Projector
  • Microphone and podium
  • Wireless lapel microphone or wireless handheld microphone
  • Projection screen


Best Paper (BP), Best Student Paper (BSP), and Best Reproducibility Advancement (BRA) nominations are made during the review process and are highlighted in the online SC Schedule. BP, BSP, and BRA winners are selected by a committee that attends the corresponding paper presentations, and winners are announced at the Thursday Awards Ceremony.

Reproducibility Initiative

SC has been a leader in tangible progress towards scientific rigor, through its pioneering practice of enhanced reproducibility of accepted papers. The SC23 initiative builds on this success by continuing the practice of using appendices to enhance scientific rigor and transparency.

The Reproducibility Initiative impacts technical papers and their submission and peer review. All paper submitters should review the information on the Reproducibility Initiative page, including the guidelines for AD/AE Appendices & Badges.

Submissions Closed

Create an account in the online submission system and complete the form. A sample form can be viewed before signing in.

If you have questions about Paper submissions, please contact the program committee.


Dates & Deadlines

Submission, application, and nomination deadlines for all programs and awards, the housing open date, the early registration deadline, and more – all in one place.
