May 5-8, 2025
Chicago, IL

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for the event to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to find out more information.

This schedule is displayed in Central Daylight Time (UTC -5).

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.

Thursday, May 8
 

9:00am CDT

Creating Reproducible Performance Optimization Pipelines with Spack and Ramble - Doug Jacobsen, Google LLC
Thursday May 8, 2025 9:00am - 9:20am CDT
In this talk, I will present how we use Spack at Google, both with customers and internally. This portion will include an overview of our public cache, along with our use of Spack for benchmarking.

Additionally, this talk will cover Ramble, an open-source tool we developed on top of Spack's infrastructure. I will discuss how we use Ramble and Spack together to construct complex end-to-end performance benchmarking studies for HPC and AI applications, and how these tools can be used to create reproducible experiments that are shareable with external users.
Speakers

Doug Jacobsen

HPC Software Engineer, Google Cloud
Salon E-G

9:20am CDT

Democratizing Access to Optimized HPC Software Through Build Caches - Stephen Sachs & Heidi Poxon, AWS
Thursday May 8, 2025 9:20am - 9:40am CDT
This talk presents our implementation of a build cache of pre-optimized HPC applications using Spack. By implementing architecture-specific enhancements for both x86 and ARM platforms during the build process, we created a set of stacks of optimized software accessible through build caches. Using application builds from the cache, users can reduce compute resource requirements without requiring specialized tuning expertise.
We'll demonstrate how teams can quickly deploy HPC clusters using these stacks and discuss the substantial advantages compared to building from source. We'll present comparisons to traditional builds, showing significant time-to-solution improvements. This work represents a step toward enabling the HPC community to focus on scientific discovery rather than software compilation and tuning.
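The build-cache workflow described above can be illustrated with a short Spack environment sketch; the mirror name, URL, and package choice below are placeholders, not the actual AWS cache:

```yaml
# spack.yaml -- sketch of an environment that consumes a pre-built binary cache
spack:
  specs:
    - gromacs target=neoverse_v1   # example of an architecture-specific, pre-optimized spec
  mirrors:
    hpc-cache: https://example-bucket.s3.amazonaws.com/spack-cache   # placeholder URL
```

With such an environment, recent Spack versions can pull binaries directly from the mirror (e.g. `spack install --use-buildcache only`), skipping source builds entirely.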
Speakers

Heidi Poxon

Principal Member of Technical Staff, AWS

Stephen Sachs

Principal HPC Application Engineer, AWS
Dr. Stephen Sachs is a Principal HPC Application Engineer on the HPC Performance Engineering team at AWS. With over 15 years of domain-specific experience, he specializes in application optimization and cloud-based HPC solutions. Previously, he worked as an Application Analyst at...
Salon E-G

9:40am CDT

Spack, Containers, CMake: The Good, The Bad & The Ugly in the CI & Distribution of the PDI Library - Julien Bigot, CEA
Thursday May 8, 2025 9:40am - 10:00am CDT
The PDI data interface is a library that supports loose coupling of simulation codes with data handling libraries: the simulation code is annotated in a library-agnostic way, and data management through external libraries is described in a YAML "data handling specification tree". Access to each data handling tool or library (HDF5, NetCDF, Python, compiled functions, Dask/Deisa, libjson, MPI, etc.) is provided through a dedicated plugin. Testing, packaging and distributing PDI is a complex problem, as each plugin comes with its own dependencies, some of which are typically not provided by supercomputer administrators. In the last five years, we have managed to devise solutions to test & validate, package & distribute the library and its plugins, largely based on Spack.

In this talk, we will describe PDI, the specific problems we encounter, and how we tackled them with a mix of CMake, Spack, and containers. We specifically focus on the creation of a large family of Spack-based container images used as test environments for the library, and on the efforts deployed to ensure easy installation on the wide range of supercomputers our downstream applications rely on.
Speakers

Julien Bigot

Permanent Research Scientist, CEA
Julien is a permanent computer scientist at Maison de la Simulation at CEA. He leads the Science of Computing team. His research focuses on programming models for high-performance computing. He is especially interested in the question of separation of concerns between the simulated...
Salon E-G

10:00am CDT

Spack Deployment Story at LBNL/UC Berkeley - Abhiram Chintangal, Lawrence Berkeley National Lab
Thursday May 8, 2025 10:00am - 10:20am CDT
The High-Performance Computing Services group at Lawrence Berkeley National Laboratory delivers extensive computing resources to Berkeley Lab and the University of California at Berkeley, supporting approximately 4,000 users and nearly 600 research projects across diverse scientific disciplines.

Over the past year and a half, we have modernized our primarily manual software build process using Spack, enabling us to meet the growing application and workflow demands of the HPC software stack.

This presentation will highlight how we leverage Spack’s features—such as environments, views, and module sets—to meet our specific needs and requirements. Additionally, we will discuss how, over the past year, our Spack pipeline, integrated with ReFrame (a testing framework), has enabled our larger infrastructure team to efficiently plan and execute large-scale OS migrations across multiple scientific clusters in a short timeframe.
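As a rough illustration of the features mentioned above, a single spack.yaml can declare an environment, a merged view, and a named module set; the paths and package choices here are hypothetical, not LBNL's actual configuration:

```yaml
# spack.yaml -- sketch combining an environment, a view, and a module set
spack:
  specs:
    - openmpi
    - hdf5 +mpi
  view: /opt/sw/views/default        # merged prefix users can add to PATH
  modules:
    default:                         # a named module set
      enable: [lmod]
      lmod:
        core_compilers: [gcc@12.3.0]
```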
Speakers

Abhiram Chintangal

Site Reliability Engineer, Lawrence Berkeley National Lab
Abhiram is a Systems Engineer with over nine years of experience specializing in meeting the computational and IT demands of scientific labs. He has a deep understanding of the complexities of software in the data-driven landscape of modern science and recognizes its critical role...
Salon E-G

10:45am CDT

Lessons Learned from Developing and Shipping Advanced Scientific Compressors with Spack - Robert Underwood, Argonne National Laboratory
Thursday May 8, 2025 10:45am - 11:05am CDT
Modern scientific applications increasingly produce extremely large volumes of data, while the scalability of I/O systems has not increased at the same rate. Lossy data compression has helped many applications address these limitations, but to meet the needs of the most demanding applications, specialized compression pipelines are needed. The FZ project helps users and compression scientists collaborate to meet the I/O needs of exascale applications by making it easier to implement custom compression tools and integrate them with applications. To fulfill the complex needs of this diverse ecosystem of software and systems, the FZ project uses Spack to manage the complexity of developing, distributing, and deploying specialized compression pipelines for its developers and users.

This talk is given from the perspective of someone who has tried nearly every new Spack feature in the last five years and who maintains over 50 packages. It tells the story of how the FZ project tackled that complexity with Spack, and where Spack can grow to meet its future challenges, along with tips and tricks we've learned along the way.
Speakers

Robert Underwood

Assistant Computer Scientist, Argonne National Laboratory
Assistant Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory focusing on data and I/O for large-scale scientific apps including AI for Science using lossy compression techniques and data management. Robert developed LibPressio, which...
Salon E-G

11:05am CDT

Challenges Mixing Spack-Optimized Hardware Accelerator Libraries on Personal Scientific Computers - Pariksheet Nanda, University of Pittsburgh
Thursday May 8, 2025 11:05am - 11:25am CDT
Personal computing devices sold today increasingly include AI hardware accelerators such as neural processing units and graphics cards with compute capability. However, scientific libraries packaged for laptop and desktop computers focus first on broad instruction set compatibility. Hardware-optimized libraries and behaviors can nevertheless be selected at runtime, as Intel MPI's widely used environment variables demonstrate. This session discusses the unique use case of the R package system for vendor-neutral hardware acceleration using SYCL and Kokkos. The goal is to allow scientific package developers to quickly and easily write vendor-independent accelerator code, with deep control and tuning capabilities, that exploits hardware acceleration as well on laptop and desktop machines as on HPC clusters. Although R is specifically discussed, ideas from this session translate to Python and other high-level language packages used in scientific computing. Additionally, this session raises technical challenges in using Kokkos directly, as well as Apptainer for continuous integration, that would greatly benefit from early-stage feedback from audience members at this conference.
Speakers

Pariksheet Nanda

Postdoctoral Fellow, University of Pittsburgh
Pariksheet first learned about Spack from his university research HPC supervisor, who returned from Supercomputing and told him about the "cool new project we need to start using"; he has been hooked ever since. When not working on research manuscripts, he enjoys reading and writing...
Salon E-G

11:25am CDT

An Opinionated-Default Approach to Enhance Spack Developer Experience - Kin fai Tse, The Hong Kong University of Science and Technology
Thursday May 8, 2025 11:25am - 11:45am CDT
Despite Spack's strengths as a feature-rich HPC package manager generating fast executables for HPC apps, its adoption remains limited partly due to a steep learning curve and its perception as primarily a sysadmin tool.

We propose a set of opinionated defaults that help new users quickly adopt best practices with guaranteed reproducibility and architecture compatibility. The approach draws from conventions used in popular userspace Python package managers like pip and conda, which have proven effective.

Unlike Python's binary distributions, Spack is a source-distribution system, so compilation errors are a common challenge. We experimented with smoke-testing compatibility across compilers, libraries, and x86_64 architectures, and encoded the results as conflict rules in the defaults; this practice helps avoid many common build failures.

We successfully deployed this approach on x86_64 platforms with substantially different purposes (DL vs. HPC), demonstrating its transferability and showing that current Spack features are sufficient for the implementation. Additional DX enhancements will be discussed. The defaults are available as an open-source repository.
Speakers

Kin fai Tse

IT Manager (Research Computing), The Hong Kong University of Science and Technology
Dr. Kin Fai TSE oversees DGX cluster operations and HPC migration at HKUST. After his Physics Ph.D., he led MLOps at a voicebot startup (2021). Co-founding Flying Milktea (2022), he built a marketplace with ~2-week onboarding for new interns. He was lead coach for...
Salon E-G

11:45am CDT

Developing and Managing Data Acquisition Software Using Spack - Eric Flumerfelt, Fermi National Accelerator Laboratory
Thursday May 8, 2025 11:45am - 12:05pm CDT
The Data Acquisition systems of particle physics experiments regularly push the boundaries of high-throughput computing, demanding low-latency collection of data from thousands of devices, collating data into time-sliced events, processing these events and making trigger decisions, and writing the selected data streams to disk. To accomplish these tasks, the DAQ Engineering and Operations department at Fermilab leverages multiple software libraries and builds reusable DAQ frameworks on top. These libraries must be delivered in well-defined bundles and are thoroughly tested for compatibility and functionality before being deployed to live detectors. We have several techniques used to ensure that a consistent set of dependencies can be delivered and re-created at need. We must also support active development of DAQ software components, ideally in an environment as close as possible to that of the detectors. This development often occurs across multiple packages which have to be built in concert and features tested in a consistent and reproducible manner.
I will present our scheme for accomplishing these goals using Spack environments, bundle packages, and GitHub Actions-based CI.
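Multi-package, in-concert development of the kind described above maps naturally onto Spack environments with the `spack develop` mechanism; the following is a hedged sketch, with package names and paths that are illustrative rather than the department's actual configuration:

```yaml
# spack.yaml -- sketch of an environment for concurrent work on a DAQ package
spack:
  specs:
    - artdaq@develop
  develop:
    artdaq:
      spec: artdaq@develop
      path: /home/user/src/artdaq   # local checkout, built in place
```

Running `spack install` in such an environment rebuilds the developed package from the local checkout, so feature branches can be tested against a consistent, reproducible dependency set.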
Speakers

Eric Flumerfelt

Computational Physics Developer, Fermi National Accelerator Laboratory
I have been developing data acquisition systems at Fermilab since 2014. I have worked with a number of particle physics experiments, from small test-beam experiments which run for two weeks to large international collaborations.
Salon E-G

1:35pm CDT

From Complexity to Efficiency: Spack’s Impact on NSM Supercomputers - Samir Shaikh, Harshitha Ugave, Centre for Development of Advanced Computing (C-DAC)
Thursday May 8, 2025 1:35pm - 1:55pm CDT
The National Supercomputing Mission (NSM) advances India’s research by providing HPC infrastructure across institutions. However, managing software on diverse HPC systems is challenging due to hardware variations, dependencies, and version control.

Spack, a flexible package manager, addresses these issues by enabling seamless software deployment and dependency management across clusters. This study examines Spack’s implementation on 17 NSM HPC systems, improving software availability and consistency.

Spack simplifies this through customized installations, automated dependency handling, and reproducible builds, ensuring compatibility.

Implementation involved a centralized repository, automated builds, user training, software optimization, and continuous refinement. This improved research productivity, reduced support overhead, and standardized environments.

Key benefits include reproducibility, faster issue resolution, and better collaboration. Future plans involve expanding Spack repositories, integrating containers, automating updates, and training. This presentation covers our implementation, challenges, and best practices.
Speakers

Samir Shaikh

Scientist, Centre for Development of Advanced Computing (C-DAC)
Samir Shaikh is an HPC specialist at C-DAC, Pune, optimizing large-scale workloads, parallel computing, and system architecture. As a Scientist C, he enhances HPC performance for AI/ML, scientific computing, and NSM supercomputers. An IIT Guwahati M.Tech graduate, he has contributed...
Salon E-G

1:55pm CDT

Deploying a Large HPC Software Stack - Challenges and Experiences - Jose Gracia, HLRS, University Stuttgart
Thursday May 8, 2025 1:55pm - 2:15pm CDT
We aim to use Spack to deploy a large software stack at a German national HPC center. In this talk, we will give some background related to the size of the software stack, its deployment frequency, and constraints arising from the operational environment. Next, we will briefly outline some of the challenges and obstacles that we encountered such as configuration issues, interaction with Cray Programming Environment, and unexpected outcomes of the concretizer. We end the talk with a current status and next steps.
Speakers

Jose Gracia

Senior Researcher, HLRS, University Stuttgart
Together with his group, José Gracia does research into topics related to scalable programming models such as new approaches to MPI or task-based programming models and their interoperability at scale. He also works on performance analysis tools, characterization of application performance...
Salon E-G

2:15pm CDT

Building and Maintaining OSS on Fugaku: RIKEN’s Experience with Spack - Yuchi Otsuka, RIKEN R-CCS
Thursday May 8, 2025 2:15pm - 2:35pm CDT
Fugaku, Japan’s flagship supercomputer, serves a diverse range of scientific disciplines, requiring extensive open-source software (OSS) support. However, managing OSS on Fugaku presents unique challenges due to its A64FX-based Arm architecture and Fujitsu’s proprietary compilers and libraries. Our team has been leveraging Spack to efficiently manage and maintain OSS. In this talk, we will share our experience using Spack on Fugaku, highlighting how it has enabled a robust and up-to-date OSS environment. We will discuss the practical benefits of Spack, including streamlined software deployment and simplified package management, and reflect on lessons learned from maintaining software in a large-scale HPC system. By sharing our insights, we aim to contribute to the broader Spack community and reinforce its role as a key tool for HPC software management.
Speakers

Yuchi Otsuka

Technical Scientist, RIKEN R-CCS
I have a long-standing background in computational condensed-matter physics research and have been involved in managing and maintaining OSS on Fugaku since 2022. My role is to ensure a robust and up-to-date OSS environment on Fugaku to support a wide range of scientific applicati...
Salon E-G

2:35pm CDT

Using Spack to Build and Maintain a Facility-Specific Programming Environment - Nicholas Sly, Lawrence Livermore National Laboratory
Thursday May 8, 2025 2:35pm - 2:55pm CDT
This talk recounts the trials and tribulations of using Spack to construct, build, and maintain a facility-specific programming environment at LLNL, and describes working with the Spack developers to ensure that Spack can do what it claims it can do in a large production environment.
Speakers

Nick Sly

Scientist, Lawrence Livermore National Laboratory
Salon E-G

2:55pm CDT

Aurora PE: Rethinking Software Integration in the Exascale Era - Sean Koyama, Argonne National Laboratory
Thursday May 8, 2025 2:55pm - 3:15pm CDT
The exascale Aurora supercomputer at the Argonne Leadership Computing Facility posed numerous challenges during its development due to its novel scale. One such challenge was in creating a scalable and maintainable scientific software environment. Typical software deployment methods failed to scale and were difficult to maintain over time, necessitating a new way of thinking about software integration. In this talk we present our work on the Aurora Programming Environment, a bespoke scientific programming environment which optimizes for scale and leverages Spack for its strengths in reproducibility, automation, and multiplicative build combinations. We discuss details of the containerized build process and read-only image deployment strategy as well as existing pain points and workarounds. We also examine the future possibilities that our approach opens up, including tightly integrated CI/CD flows and portable containerized access to the PE. We believe this approach is generalizable and may benefit facilities where traditional software integration methods fall short of their needs.
Speakers

Sean Koyama

Systems Integration Admin, Argonne National Laboratory
Sean Koyama is a Systems Integration Administrator at the Argonne National Laboratory's Leadership Computing Facility. Sean integrates scientific software stacks into the user environments on ALCF machines, including Aurora, the ALCF's exascale supercomputer. Their work includes developing...
Salon E-G

3:40pm CDT

Driving Continuous Integration and Developer Workflows with Spack - Richard Berger, Los Alamos National Laboratory
Thursday May 8, 2025 3:40pm - 4:00pm CDT
Spack makes it easy to install dependencies for our software on multiple HPC platforms. However, there is little guidance on how to structure Spack environments for larger projects, share common Spack installations with code teams, and utilize them effectively for continuous integration and development.

This presentation will share some of the lessons learned from deploying chained Spack installations for multiple code teams at LANL on various HPC platforms both on site and on other Tri-Lab systems, how to structure such deployments for reusability and upgradability, and make them deployable even on air-gapped systems. It will also show how we utilize Spack's build facilities to drive CMake-based projects on GitLab for continuous integration, without having to replicate build configuration logic in GitLab files, while giving developers an easy-to-follow workflow for recreating CI runs in various configurations.
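Chained installations of the kind described above are configured through Spack's upstreams mechanism; a minimal sketch, with a placeholder name and path:

```yaml
# upstreams.yaml -- sketch pointing a user's Spack instance at a shared team installation
upstreams:
  team-shared:
    install_tree: /projects/shared/spack/opt/spack   # placeholder path
```

Packages already present in the upstream install tree are reused rather than rebuilt, which is what makes shared, chained deployments practical for multiple code teams.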
Speakers

Richard Berger

Scientist, Los Alamos National Laboratory
Richard is a research software engineer in the Applied Computer Science Group (CCS-7) at Los Alamos National Laboratory (LANL) with a background in Mechatronics, high-performance computing, and software engineering. He is currently contributing to the core development of LAMMPS, FleCSI...
Salon E-G

4:00pm CDT

Implementing a Security Conscious Build Configuration Relay with a Shared Build Cache - Chris White, Lawrence Livermore National Laboratory
Thursday May 8, 2025 4:00pm - 4:20pm CDT
In large-scale software development efforts, effective communication between projects is essential to ensure consistency, reproducibility, and efficiency. This presentation explores strategies to improve coordination among software teams by leveraging Continuous Integration (CI) for relaying crucial build configurations while maintaining security for proprietary project sources. We will demonstrate best practices for sharing build configurations with upstream projects without exposing proprietary code.

A key focus will be optimizing the use of Spack, particularly in reducing the number of Spack package repositories used across multiple teams, which simplifies maintenance, hardens builds, and avoids duplication. Additionally, we will highlight the benefits of heavily integrating Spack CI to generate build caches, which reduces rebuild times and enhances software portability. By adopting these approaches, teams can achieve better collaboration, streamlined workflows, and improved software sustainability.
Speakers

Chris White

WSC DevOps Coordinator, Lawrence Livermore National Laboratory
Chris White is the WSC DevOps Coordinator at Lawrence Livermore National Laboratory. He advises multi-disciplinary teams on software best practices with a focus on unifying complex DevOps workflows across multiple teams. Chris specializes in improving collaboration while ensuring...
Salon E-G

4:20pm CDT

Spack-Based WEAVE Environment at LLNL - Lina Muryanto, Lawrence Livermore National Security, LLC
Thursday May 8, 2025 4:20pm - 4:50pm CDT
The WEAVE team at LLNL has created a Spack-based virtual environment, accessible to the Livermore Computing community, that provides a rich set of open-source tools for building workflows for any HPC application, along with commonly used Python packages and several widely used ML and AI packages.
The goal is to provide a stable, well tested environment that users can activate and use directly across Livermore Computing's vast array of machines/OSes and networks.
We also provide the capability for users to create a local environment based on the WEAVE environment.

Using Spack allows us to install the same set of software across different platforms in LC. It also allows us to use the same Spack environment file to recreate the exact same ecosystem across networks within the Lab.
We leverage GitLab CI as our DevOps platform to automate Spack installs, create test environments, run tests, and deploy the environment.
We also leverage the LLNL Nexus Repository to sync our build files across networks within the lab.

The WEAVE team has also implemented the "WEAVE Badging Program", through which the community can request that a tool be integrated into the WEAVE environment.
Speakers

Lina Muryanto

Software Engineer, Lawrence Livermore National Security, LLC
Lina joined LLNL in 2018 as a DevOps engineer for the ESGF Project. In 2021, Lina joined the SD program and joined the WEAVE team in 2022. She has implemented CI/CD from scratch. Lina is passionate about achieving high software quality and reliability through software test development...
Salon E-G

4:40pm CDT

DevOps for Monolithic Repositories Using Spack - Phil Sakievich, Sandia National Laboratories
Thursday May 8, 2025 4:40pm - 5:00pm CDT
In the realm of large software projects, the choice between a monolithic repository and several distributed repositories presents significant trade-offs that can impact development efficiency, collaboration, and maintainability. Monolithic repositories, while offering centralized management and streamlined dependency handling, can become unwieldy as project size increases. Conversely, distributed repositories provide modularity and flexibility but may lead to challenges in integration and version control. This presentation will delve into the ongoing research conducted by Sandia National Laboratories, where researchers are exploring innovative solutions to harness the strengths of both repository models through the use of Spack, a package manager designed for scientific computing. We will outline the methodology employed in this exploration, highlighting the performance trade-offs identified thus far, including aspects such as build times, dependency resolution, and ease of collaboration. Attendees will gain insights into the implications of repository structure on software development practices and the potential for hybrid approaches to optimize project outcomes.
Speakers

Phil Sakievich

Senior Computer Scientist R&D, Sandia National Laboratories
Phil comes from a high-performance computing and fluid mechanics background. He became involved with Spack during the Exascale Computing Project and is the author of the Spack-Manager project. Phil is an active member of the Spack technical steering committee and currently leads several...
Salon E-G
 