May 5-8, 2025
Chicago, IL

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for the event to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to find out more information.

This schedule is automatically displayed in Central Daylight Time (UTC-5). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date."

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.

Thursday, May 8
 

8:30am CDT

Registration & Badge Pick-Up
Thursday May 8, 2025 8:30am - 4:00pm CDT
Ballroom Meeting Foyer

9:00am CDT

Breaking Charliecloud News - Reid Priedhorsky, Los Alamos National Laboratory
Thursday May 8, 2025 9:00am - 9:20am CDT
This session will cover late-breaking developments in Charliecloud, such as recent/upcoming new features, notable bugs, and requests for feedback. The specific agenda is TBD.

LA-UR-25-22140
Speakers
Reid Priedhorsky

Scientist, Los Alamos National Laboratory
I am a staff scientist at Los Alamos National Laboratory. Prior to Los Alamos, I was a research staff member at IBM Research. I hold a Ph.D. in computer science from the University of Minnesota and a B.A., also in computer science, from Macalester College. My work focuses on large-scale...
Thursday May 8, 2025 9:00am - 9:20am CDT
Illinois River

9:00am CDT

Creating Reproducible Performance Optimization Pipelines with Spack and Ramble - Doug Jacobsen, Google LLC
Thursday May 8, 2025 9:00am - 9:20am CDT
In this talk, I will present how we use Spack at Google, both with customers and internally. This portion will include an overview of our public cache, along with our use of Spack for benchmarking.

Additionally, I will discuss Ramble, an open-source tool we developed on top of Spack's infrastructure. The talk will show how we use Ramble and Spack together to construct complex end-to-end performance benchmarking studies for HPC and AI applications, and how this process can create reproducible experiments that are shareable with external users.
Speakers
Doug Jacobsen

HPC Software Engineer, Google Cloud
Thursday May 8, 2025 9:00am - 9:20am CDT
Salon E-G

9:00am CDT

Broader Kokkos Ecosystem
Thursday May 8, 2025 9:00am - 10:20am CDT
1. kokkos-fft Updates – Yuuichi Asahi, CEA (10 minutes)
kokkos-fft implements local interfaces between Kokkos and de facto standard FFT libraries, including fftw, cufft, hipfft (rocfft), and oneMKL. We aim to provide numpy.fft-like interfaces adapted for Kokkos; a key concept is "as easy as numpy, as fast as vendor libraries". In the talk, we will introduce the basic APIs and typical use cases. We will also present future development plans. (See the first sketch after this session's agenda.)

2. Fortran Porting Wish List for Kokkos – Matthew Norman, Oak Ridge National Laboratory (10 minutes)
This presentation covers the beginnings of the Yet Another Kernel Launcher (YAKL) C++ portability library, its evolution alongside Kokkos, the use of Kokkos in its current form, and the remaining issues to resolve before YAKL can be retired in favor of Kokkos. The primary outstanding issues are the inclusion of arbitrary lower bounds for Fortran-like View behavior, and the ability to back Views with a pool allocator for cheap, frequent device allocation and deallocation, so that Views can be created and destroyed locally where needed rather than living for the global lifetime of a simulation. This may improve readability and reduce the memory high-water mark in simulations. A few performance-related issues will be covered as well, mainly limited to MDRangePolicy and parallel_for register usage.

3. Custom Layout and Tiling for Multi-Dimensional Data – Cedric Chevalier & Gabriel Dos Santos, CEA (10 minutes)
Performance optimizations for exascale HPC applications primarily rely on fine-tuning implementations, requiring comprehensive knowledge of heterogeneous hardware architectures that domain experts often lack. One of Kokkos' biggest successes is tying the memory layout of multi-dimensional arrays to the execution backend. It allows the exploitation of coalescence or cache, depending on the hardware. Here, we propose to go further and design custom tiled layouts that are generic for C++23's std::mdspan. Instead of running tile algorithms on flat data, like Kokkos' mdrange, we want to explore how running flat algorithms on tiled data performs. On CPU, the first experimental results with std::mdspan on a naive dense matrix multiplication demonstrate that, by replacing standard layouts with our proposed solution, we achieve an average speedup of over 2.2x, with peak performance improvements of up to 7.8x. Then, we will discuss how external indexing can improve efficiency. We will present how to exploit it with Kokkos' mdrange algorithm, and how it can behave on GPU.

4. Runtime Auto-Tuning for Kokkos Applications with APEX – Kevin Huck, University of Oregon (10 minutes)
Traditional GPU programming with libraries like CUDA or HIP requires tuning parameters exposed to the user, such as block sizes or the number of teams. Kokkos likewise exposes portable parameters to the Kokkos user. How can Kokkos application programmers easily tune these parameters for their application's deployment on any given Kokkos backend, without incurring large overheads? In particular, how do we ensure the tuning itself is portable across platforms? We propose online (i.e., runtime) autotuning, using the APEX Kokkos Tools connector to tune exposed parameters. Specifically, we discuss the Kokkos Tools Tuning Interface, tuning contexts, variable definition, the APEX runtime auto-tuning library utilizing Kokkos Tools, and distributed Kokkos auto-tuning. Applying our auto-tuning approaches to Kokkos sample kernels on Perlmutter and Frontier, we have obtained promising performance results. These results suggest Kokkos online auto-tuning is beneficial for production applications; we invite Kokkos users to try these features and Kokkos developers to contribute.

5. Unifying the HPC Ecosystem with std::execution – Mikael Simberg, Swiss National Supercomputing Centre (20 minutes)
Asynchronous programming models are becoming increasingly essential for fully leveraging modern hardware. In the C++ ecosystem, projects typically provide ad-hoc and varying interfaces, making interoperability difficult. Recently approved for C++26, the std::execution library promises to unify the ecosystem by providing a standard, composable interface for asynchronous operations. This talk briefly introduces the motivation and design principles of std::execution, and shares our experiences at CSCS using it prior to standardization in various projects, including Kokkos, HPX, and more. We'll discuss challenges, successes, and opportunities encountered while adopting std::execution. (See the second sketch after this session's agenda.)

6. PyKokkos: Performance Portability for Python Developers – Milos Gligoric, The University of Texas at Austin (20 minutes)
Kokkos is a programming model for writing performance portable applications for all major high performance computing platforms. It provides abstractions for data management and common parallel operations, allowing developers to write portable high performance code with minimal knowledge of architecture-specific details. Kokkos is implemented as a heavily-templated C++ library. However, C++ is not ideal for rapid prototyping and quick algorithmic exploration. An increasing number of developers use Python for scientific computing, machine learning, and data analytics. In this talk, I will present a new Python framework, PyKokkos, for writing performance portable applications entirely in Python. PyKokkos provides Kokkos-like abstractions that are easier to use and more concise than the C++ interface. We implemented PyKokkos by building a translator from a subset of Python to C++ Kokkos and bridging necessary function calls via automatically generated Python bindings. I will also cover our recent work on automatic kernel fusion with the goal of optimizing PyKokkos applications. The talk will also cover our experience developing PyKokkos, its current limitations, and future plans.
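
For illustration — a minimal, hedged sketch of the numpy-style API described in talk 1. The KokkosFFT::fft entry point and <KokkosFFT.hpp> header follow the kokkos-fft project's public documentation, but treat the exact signatures as assumptions rather than material from the talk.

```cpp
// Hedged sketch of kokkos-fft's numpy.fft-like interface. Assumption: the
// KokkosFFT::fft(exec, in, out) entry point dispatches to fftw/cufft/
// hipfft/oneMKL depending on the enabled Kokkos backend.
#include <Kokkos_Core.hpp>
#include <KokkosFFT.hpp>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    using exec_space = Kokkos::DefaultExecutionSpace;
    const int n = 128;

    // Complex input/output signals, analogous to 1-D numpy arrays.
    Kokkos::View<Kokkos::complex<double>*> x("x", n);
    Kokkos::View<Kokkos::complex<double>*> x_hat("x_hat", n);

    // Fill x with a simple test signal on the device.
    Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
      x(i) = Kokkos::complex<double>(double(i % 8), 0.0);
    });

    // Analogous to x_hat = numpy.fft.fft(x).
    KokkosFFT::fft(exec_space(), x, x_hat);
    Kokkos::fence();
  }
  Kokkos::finalize();
  return 0;
}
```

And for talk 5, a minimal sender pipeline in the std::execution model. The names below follow the stdexec reference implementation (namespaces stdexec and exec); C++26 will provide the same facilities under std::execution.

```cpp
// Hedged sketch of the C++26 std::execution (P2300) model, spelled with
// the stdexec reference implementation's names.
#include <stdexec/execution.hpp>
#include <exec/static_thread_pool.hpp>
#include <cstdio>

int main() {
  exec::static_thread_pool pool(4);   // a scheduler backed by 4 threads
  auto sched = pool.get_scheduler();

  // Work is described as a lazy, composable pipeline of senders...
  auto work = stdexec::schedule(sched)
            | stdexec::then([] { return 6; })
            | stdexec::then([](int x) { return x * 7; });

  // ...and runs only when a consumer awaits its completion.
  auto [answer] = stdexec::sync_wait(std::move(work)).value();
  std::printf("%d\n", answer);  // prints 42
  return 0;
}
```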
Speakers
Kevin Huck

Senior Research Associate, University of Oregon
Kevin Huck is a Senior Research Associate in the Oregon Advanced Computing Institute for Science and Society (OACISS) at the University of Oregon. He is interested in the unique problems of performance analysis of large HPC applications as well as automated methods for diagnosing...
Cedric Chevalier

Research Scientist, CEA
Cédric Chevalier is a research scientist at CEA in France. He is interested in developing libraries for HPC simulation codes, particularly in Linear Algebra and Mesh/Graph partitioning. His work at CEA is driven by providing practical ways to exploit newer hardware, use new programming...
Gabriel Dos Santos

PhD Student, CEA
PhD student working on the management of data-structure representations on heterogeneous architectures for exascale-class HPC workloads, with a strong background in performance optimization, CPU microarchitectures, and vectorization.
Matthew Norman

Climate Scientist, Oak Ridge National Laboratory
Matt Norman leads the Advanced Computing for Life Sciences and Engineering group in the Oak Ridge Leadership Computing Facility (OLCF). He works with weather and climate simulation, urban and wind turbine simulation, PDE discretizations for the Navier-Stokes Equations, GPU acceleration...
Mikael Simberg

HPC Application Engineer, Swiss National Supercomputing Centre
Mikael Simberg holds a master's degree in operations research and computer science from Aalto University in Finland. He joined the Swiss National Supercomputing Centre in 2017 where he works as a software developer helping scientific projects make the best use of modern hardware through...
Milos Gligoric

Associate Professor, The University of Texas at Austin
Milos Gligoric is an Associate Professor in Electrical and Computer Engineering at The University of Texas at Austin where he holds the Archie W. Straiton Endowed Faculty Fellowship in Engineering. His research interests are in software engineering, especially in designing techniques...
Yuuichi Asahi

Research Scientist, CEA
His recent interests are HPC and AI with NVIDIA, AMD and Intel GPUs. He has a rich experience in GPU programming models including CUDA, HIP, SYCL, Kokkos, OpenMP, OpenACC, thrust, stdpar, and senders/receivers. For exascale computing, he is highly interested in improving performance...
Thursday May 8, 2025 9:00am - 10:20am CDT
Salon A-C

9:20am CDT

Democratizing Access to Optimized HPC Software Through Build Caches - Stephen Sachs & Heidi Poxon, AWS
Thursday May 8, 2025 9:20am - 9:40am CDT
This talk presents our implementation of a build cache of pre-optimized HPC applications using Spack. By implementing architecture-specific enhancements for both x86 and ARM platforms during the build process, we created a set of optimized software stacks accessible through build caches. Using application builds from the cache, users can reduce compute resource requirements without needing specialized tuning expertise.
We'll demonstrate how teams can quickly deploy HPC clusters using these stacks and discuss the substantial advantages compared to building from source. We'll present comparisons to traditional builds, showing significant time-to-solution improvements. This work represents a step toward enabling the HPC community to focus on scientific discovery rather than software compilation and tuning.
Speakers
Heidi Poxon

Principal Member of Technical Staff, AWS
Stephen Sachs

Principal HPC Application Engineer, AWS
Dr. Stephen Sachs is a Principal HPC Application Engineer on the HPC Performance Engineering team at AWS. With over 15 years of domain-specific experience, he specializes in application optimization and cloud-based HPC solutions. Previously, he worked as an Application Analyst at...
Thursday May 8, 2025 9:20am - 9:40am CDT
Salon E-G

9:20am CDT

Deploying AI Chatbot Assistants with Charliecloud - Jemma Stachelek, Los Alamos National Laboratory
Thursday May 8, 2025 9:20am - 10:00am CDT
Additional Authors: Tolulope Olatunbosun, Phil Romero & Mike Mason, Los Alamos National Laboratory

Retrieval Augmented Generation (RAG) systems improve the response relevance of LLMs (Large Language Models) by limiting the context to a document corpus. RAG systems have seen broad deployment as document summarization engines and AI chatbots. However, deploying these systems often assumes a privileged and “cloudy” environment with multi-container orchestration (e.g., Docker Compose) and unfettered internet access to pull resources (e.g., software, data, and models) on the fly. As an alternative, we leveraged Charliecloud’s NVIDIA GPU support to deploy a RAG chatbot in an unprivileged HPC environment where resources are pre-staged. We demonstrate the deployment of AI chatbots using Charliecloud across a variety of hardware and software versions.

LA-UR-25-21968
Speakers
Jemma Stachelek

Scientist, Los Alamos National Laboratory
Thursday May 8, 2025 9:20am - 10:00am CDT
Illinois River

9:40am CDT

Spack, Containers, CMake: The Good, The Bad & The Ugly in the CI & Distribution of the PDI Library - Julien Bigot, CEA
Thursday May 8, 2025 9:40am - 10:00am CDT
The PDI data interface is a library that supports loose coupling of simulation codes with data handling libraries: the simulation code is annotated in a library-agnostic way, and data management through external libraries is described in a YAML "data handling specification tree". Access to each data handling tool or library (HDF5, NetCDF, Python, compiled functions, Dask/Deisa, libjson, MPI, etc.) is provided through a dedicated plugin. Testing, packaging, and distributing PDI is a complex problem, as each plugin comes with its own dependencies, some of which are typically not provided by supercomputer administrators. Over the last five years, we have devised solutions to test & validate and to package & distribute the library and its plugins, largely based on Spack.

In this talk, we will describe PDI, the specific problems we encountered, and how we tackled them with a mix of CMake, Spack, and containers. We specifically focus on the creation of a large family of Spack-based container images used as test environments for the library, and on the efforts deployed to ensure easy installation on the wide range of supercomputers our downstream applications rely on.
Speakers
Julien Bigot

Permanent Research Scientist, CEA
Julien is a permanent computer scientist at Maison de la Simulation at CEA. He leads the Science of Computing team. His research focuses on programming models for high-performance computing. He is especially interested in the question of separation of concerns between the simulated...
Thursday May 8, 2025 9:40am - 10:00am CDT
Salon E-G

10:00am CDT

Using Charliecloud to Wrap HTCondor Worker Nodes - Oliver Freyermuth, University of Bonn (Germany)
Thursday May 8, 2025 10:00am - 10:20am CDT
This talk will present a setup that uses Charliecloud to spawn virtual HTCondor compute nodes inside jobs submitted to a SLURM cluster. The actual containers are distributed via CernVM-FS and mounted without privileges at the HPC site using cvmfsexec. The spawned HTCondor nodes integrate into a larger overlay batch system to run high-throughput compute jobs from the Worldwide LHC Computing Grid community.

Charliecloud makes this setup very portable thanks to its lightweight design, minimal system dependencies, and simplicity of use. CernVM-FS, which is optimized for distributing large numbers of small files of which only a few might be accessed, proves an ideal fit for distributing directory-format container images. In combination with HTCondor, which focuses on optimizing total throughput and easily handles large numbers of jobs, compute resources can be used opportunistically by integrating them, fully unprivileged, into an overlay batch system. The workloads themselves can again use unprivileged containers to enable user-defined software stacks.
Speakers
Oliver Freyermuth

Research Scientist for IT Operations and High Throughput Computing, University of Bonn (Germany)
Thursday May 8, 2025 10:00am - 10:20am CDT
Illinois River

10:00am CDT

Spack Deployment Story at LBNL/UC Berkeley - Abhiram Chintangal, Lawrence Berkeley National Lab
Thursday May 8, 2025 10:00am - 10:20am CDT
The High-Performance Computing Services group at Lawrence Berkeley National Laboratory delivers extensive computing resources to Berkeley Lab and the University of California at Berkeley, supporting approximately 4,000 users and nearly 600 research projects across diverse scientific disciplines.

Over the past year and a half, we have modernized our primarily manual software build process using Spack, enabling us to meet the growing application and workflow demands of the HPC software stack.

This presentation will highlight how we leverage Spack’s features—such as environments, views, and module sets—to meet our specific needs and requirements. Additionally, we will discuss how, over the past year, our Spack pipeline, integrated with ReFrame (a testing framework), has enabled our larger infrastructure team to efficiently plan and execute large-scale OS migrations across multiple scientific clusters in a short timeframe.
Speakers
Abhiram Chintangal

Site Reliability Engineer, Lawrence Berkeley National Lab
Abhiram is a Systems Engineer with over nine years of experience specializing in meeting the computational and IT demands of scientific labs. He has a deep understanding of the complexities of software in the data-driven landscape of modern science and recognizes its critical role...
Thursday May 8, 2025 10:00am - 10:20am CDT
Salon E-G

10:20am CDT

Coffee Break
Thursday May 8, 2025 10:20am - 10:45am CDT

10:45am CDT

Lessons Learned from Developing and Shipping Advanced Scientific Compressors with Spack - Robert Underwood, Argonne National Laboratory
Thursday May 8, 2025 10:45am - 11:05am CDT
Modern scientific applications increasingly produce extremely large volumes of data while the scalability of I/O systems has not increased at the same rate. Lossy data compression has helped many applications address these limitations, but to meet the needs of the most demanding applications, specialized compression pipelines are needed. The FZ project helps users and compression scientists collaborate to meet the I/O needs of exascale applications by making it easier to implement custom compression tools and integrate them with applications. However, to fulfill the complex needs of this diverse ecosystem of software and systems, the FZ project uses Spack to manage the complexity of developing, distributing, and deploying specialized compression pipelines to meet the needs of its developers and users.

Told from the perspective of someone who has tried nearly every new Spack feature in the last five years and who maintains over 50 packages, this talk tells the story of how the FZ project tackled that complexity with Spack, and where Spack can grow to meet its future challenges, along with tips and tricks we've learned along the way.
Speakers
Robert Underwood

Assistant Computer Scientist, Argonne National Laboratory
Assistant Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory focusing on data and I/O for large-scale scientific apps including AI for Science using lossy compression techniques and data management. Robert developed LibPressio, which...
Thursday May 8, 2025 10:45am - 11:05am CDT
Salon E-G

10:45am CDT

Charliecloud + Gitlab-CI: Building and Using System-Representative Base Containers - Nick Sly, Lawrence Livermore National Laboratory
Thursday May 8, 2025 10:45am - 11:25am CDT
Charliecloud is used in conjunction with GitLab CI to build out a matrix of system-representative containers that can be used to build target-system-compatible binaries and to automate building and testing of production codes on NNSA lab machines. This presentation covers the method of generating the base containers as well as a couple of use cases where they have proven helpful.
Speakers
Nick Sly

Scientist, Lawrence Livermore National Laboratory
Thursday May 8, 2025 10:45am - 11:25am CDT
Illinois River

10:45am CDT

Tuning and Performance
Thursday May 8, 2025 10:45am - 12:05pm CDT
1. Leveraging the C Configuration Space and Tuning Library (CCS) in Kokkos Tools - Brice Videau, Argonne National Laboratory (20 minutes)
Online autotuning of runtimes and applications presents untapped opportunities to increase HPC application performance and efficiency. During ECP, in order to exploit this potential, the autotuning working group at Argonne National Laboratory and the Kokkos team co-designed the Kokkos Tools tuning API and the C Configuration Space and Tuning Library (CCS). The Kokkos Tools tuning API provides a framework to plug tuners into Kokkos and expose tuning regions to them, while the CCS library offers an API both to capture Kokkos configuration spaces and to implement tuners that optimize them. This effort led to the creation of the CCS Kokkos connector, a Kokkos tool that leverages both APIs to offer a baseline tuner for Kokkos regions. In this presentation, we will present the results of this collaboration from the perspective of CCS, the abstractions it offers, and how they map to the Kokkos tuning model. We will describe the capabilities of the CCS library and how it fulfills the goal of offering a standard interface to bridge the gap between tuners and applications/runtimes. We will also discuss perspectives and future work around the CCS Kokkos connector.

2. Bottlenecks in High-Dimensional Simulations - Nils Schild, Max Planck Institute for Plasma Physics (20 minutes)
The Vlasov-Maxwell system, which describes the motion of charged particles in a plasma state using a particle distribution function, is based on a 6-D phase space defined through configuration and velocity coordinates.
Considering an Eulerian grid for this system with only 32^6 degrees of freedom, the distribution function already requires 8.5 GB of memory. This implies that high-resolution simulations can only be executed on large compute clusters.
In this talk, we focus on two aspects of the open-source code BSL6D, which solves a reduced version of the Vlasov-Maxwell system. The shared-memory parallelization based on Kokkos applies a stencil algorithm to data that is non-contiguous in memory in order to reduce memory requirements. The inter-node communication bottleneck poses a challenge due to the large ratio of halo domain to compute domain. Finally, we discuss the advantages of RAII-managed MPI communicators for distributed domains, which simplify the implementation of parallel algorithms with distributed-memory concepts. (See the first sketch after this session's agenda.)

3. Accelerating SPECFEM++ with Explicit SIMD and Cache-Optimized Layouts - Rohit Kakodkar, Princeton University (20 minutes)
SPECFEM++ is a suite of computational tools based on the spectral element method, used to simulate wave propagation through heterogeneous media. The project aims to unify the legacy SPECFEM codes - three separate Fortran packages (SPECFEM2D, SPECFEM3D, and SPECFEM3D_globe) - into a single C++ package. This new package aims to deliver optimal performance across different architectures by leveraging the Kokkos library. In this presentation, I will outline our efforts to enhance CPU performance using explicit SIMD types (Kokkos::Experimental::simd). High vectorization throughput can be challenging, particularly because the data involved in spectral element assembly is not always organized in a cache-friendly way. To address this, we have implemented a strategy that prefetches the data into cache-optimized scratch views of SIMD types before executing the SIMD operations. Additionally, we have optimized data layouts using custom-defined tiled layouts that improve cache locality. As a result of these optimizations, we have achieved approximately a 2.5x speed-up compared to auto-vectorized implementations. (See the second sketch after this session's agenda.)

4. Managing Kokkos Callbacks for Benchmarking, Profiling, and Unit Testing - Maarten Arnst & Romin Tomasetti, University of Liège (20 minutes)
Many Kokkos functions have instrumentation hooks defined within the framework of Kokkos::Tools. These instrumentation hooks allow Kokkos::Tools as well as third-party tracing, profiling and testing tools to register callbacks to monitor and interact with the runtime behavior of the program. In this presentation, we will describe several utilities that we have designed to help manage such callbacks. We have implemented a manager class that can register function objects that can listen to such callbacks. And we have implemented several such function objects, such as an event recorder, an event counter, and a kernel timer that uses event stream synchronization markers on device backends. We will illustrate these utilities through their use in benchmarking, profiling, and unit testing of a Kokkos-based finite-element code.
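
For illustration — a generic sketch of the RAII-managed MPI communicator pattern mentioned in talk 2 (not BSL6D's actual code): the duplicate is created on construction and freed on destruction, so a communicator's lifetime follows the scope of the domain that owns it.

```cpp
// Generic RAII wrapper for an MPI communicator (illustration only).
#include <mpi.h>
#include <utility>

class ScopedComm {
  MPI_Comm comm_ = MPI_COMM_NULL;
public:
  explicit ScopedComm(MPI_Comm base) { MPI_Comm_dup(base, &comm_); }
  ~ScopedComm() {
    if (comm_ != MPI_COMM_NULL) MPI_Comm_free(&comm_);
  }
  ScopedComm(const ScopedComm&) = delete;             // no accidental copies
  ScopedComm& operator=(const ScopedComm&) = delete;
  ScopedComm(ScopedComm&& o) noexcept
      : comm_(std::exchange(o.comm_, MPI_COMM_NULL)) {}
  MPI_Comm get() const { return comm_; }
};

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  {
    // A per-domain communicator that cannot leak; it must be destroyed
    // (here, by scope exit) before MPI_Finalize.
    ScopedComm halo(MPI_COMM_WORLD);
    int rank = -1;
    MPI_Comm_rank(halo.get(), &rank);
  }
  MPI_Finalize();
  return 0;
}
```

And for talk 3, a minimal explicit-SIMD kernel, assuming Kokkos::Experimental::simd mirrors the ISO C++ std::experimental::simd design (broadcast arithmetic plus element-aligned loads and stores); the header and tag names below are assumptions.

```cpp
// Hedged sketch of explicit SIMD with Kokkos::Experimental::simd.
#include <Kokkos_Core.hpp>
#include <Kokkos_SIMD.hpp>

void axpy(double alpha, const double* x, double* y, int n) {
  using simd_t = Kokkos::Experimental::simd<double>;
  using tag    = Kokkos::Experimental::element_aligned_tag;
  constexpr int width = simd_t::size();
  // Assumes n is a multiple of the vector width; a real kernel would
  // handle the remainder separately.
  for (int i = 0; i < n; i += width) {
    simd_t xv, yv;
    xv.copy_from(x + i, tag());   // explicit vector load
    yv.copy_from(y + i, tag());
    yv = alpha * xv + yv;         // broadcast multiply + add
    yv.copy_to(y + i, tag());     // explicit vector store
  }
}
```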
Speakers
Brice Videau

Computer Scientist, Argonne National Laboratory
Brice is a computer scientist, co-leading the performance engineering team at Argonne Leadership Computing Facility. Brice's research topics include heterogeneous programming models, system software, auto-tuning, code generation, and code transformation.
Maarten Arnst

Associate professor, University of Liege
Associate Professor at University of Liege.
Nils Schild

PhD student, Max Planck Institute for Plasma Physics
After studying physics and working on solvers for sparse eigenvalue problems in quantum mechanics at the University of Bayreuth, he moved to the Max Planck Institute for Plasma Physics in Garching (Germany). During his Ph.D., he started implementing the software BSL6D, a solver for...
Rohit Kakodkar

Research Software Engineer II, Princeton University
Rohit is a Research Software Engineer in Princeton University's Research Computing department. He is focused on rewriting SPECFEM, a spectral element solver designed to simulate wave propagation through heterogeneous media. SPECFEM is extensively used within the computational seismology...
Romin Tomasetti

PhD student, University of Liège
PhD student at University of Liège.
Thursday May 8, 2025 10:45am - 12:05pm CDT
Salon A-C

11:05am CDT

Challenges Mixing Spack-Optimized Hardware Accelerator Libraries on Personal Scientific Computers - Pariksheet Nanda, University of Pittsburgh
Thursday May 8, 2025 11:05am - 11:25am CDT
Personal computing devices sold today increasingly include AI hardware accelerators such as neural processing units and graphics cards with compute capability. However, scientific libraries packaged for laptop and desktop computers focus first on broad instruction-set compatibility. Yet hardware-optimized libraries and behaviors can be applied at runtime, as is widely done via Intel MPI environment variables. This session discusses the unique use case of the R package system for vendor-neutral hardware acceleration using vendor-agnostic SYCL / Kokkos. The goal is to allow scientific package developers to quickly and easily write vendor-independent accelerator code, with deep control and tuning capabilities, that uses hardware acceleration as well on laptop/desktop machines as on HPC clusters. Although R is specifically discussed, ideas from this session translate to Python and other high-level language packages used in scientific computing. Additionally, this session raises technical challenges in directly using Kokkos, as well as Apptainer for continuous integration, that would greatly benefit from early-stage feedback from audience members at this conference.
Speakers
Pariksheet Nanda

Postdoctoral Fellow, University of Pittsburgh
Pariksheet first learned about Spack from his university research HPC supervisor, who returned from Supercomputing and told him about the "cool new project we need to start using"; he has been hooked ever since. When not working on research manuscripts, he enjoys reading and writing...
Thursday May 8, 2025 11:05am - 11:25am CDT
Salon E-G

11:25am CDT

An Opinionated-Default Approach to Enhance Spack Developer Experience - Kin fai Tse, The Hong Kong University of Science and Technology
Thursday May 8, 2025 11:25am - 11:45am CDT
Despite Spack's strengths as a feature-rich HPC package manager generating fast executables for HPC apps, its adoption remains limited partly due to a steep learning curve and its perception as primarily a sysadmin tool.

We propose a set of opinionated defaults that help new users quickly adopt best practices with guaranteed reproducibility and architecture compatibility. The approach draws from conventions used in popular userspace Python package managers like pip and conda, which have proven effective.

Unlike Python packages, Spack distributes software as source, so compilation errors are a common challenge. We experimented with smoke-testing compatibility across compilers, libraries, and x86_64 architectures. The results are encoded as conflict rules in the defaults; this practice helps avoid many common build failures.

We successfully deployed this approach on x86_64 platforms with substantially different purposes (DL vs. HPC), demonstrating its transferability and showing that current Spack features are sufficient for the implementation. Additional DX enhancements will be discussed. The defaults are available as an open-source repository.
Speakers
Kin fai Tse

IT Manager (Research Computing), The Hong Kong University of Science and Technology
Dr. Kin Fai Tse oversees DGX cluster operations and HPC migration at HKUST. After his Physics Ph.D., he led MLOps at a voicebot startup (2021). Co-founding Flying Milktea (2022), he built a marketplace with ~2-week onboarding for new interns. He was lead coach for...
Thursday May 8, 2025 11:25am - 11:45am CDT
Salon E-G

11:25am CDT

Maintaining the Debian Charliecloud Package - Peter Wienemann, Independent
Thursday May 8, 2025 11:25am - 12:05pm CDT
Charliecloud has been in the Debian archive since the early development days of Charliecloud. It was initially packaged by Lucas Nussbaum at the end of 2017/beginning of 2018. The speaker joined the packaging effort in January 2018 and has contributed continuously since then. This talk will give a brief introduction to Debian and describe how its tool set was useful for improving the Debian Charliecloud package and feeding improvements back into the upstream project. The information flow from upstream authors to package maintainers has also been exemplary; this presentation will provide a few examples showing this fruitful interplay.
Speakers
Peter Wienemann

Independent
Thursday May 8, 2025 11:25am - 12:05pm CDT
Illinois River

11:45am CDT

Developing and Managing Data Acquisition Software Using Spack - Eric Flumerfelt, Fermi National Accelerator Laboratory
Thursday May 8, 2025 11:45am - 12:05pm CDT
The Data Acquisition systems of particle physics experiments regularly push the boundaries of high-throughput computing, demanding low-latency collection of data from thousands of devices, collation of data into time-sliced events, processing of these events to make trigger decisions, and writing of the selected data streams to disk. To accomplish these tasks, the DAQ Engineering and Operations department at Fermilab leverages multiple software libraries and builds reusable DAQ frameworks on top. These libraries must be delivered in well-defined bundles and are thoroughly tested for compatibility and functionality before being deployed to live detectors. We use several techniques to ensure that a consistent set of dependencies can be delivered and re-created as needed. We must also support active development of DAQ software components, ideally in an environment as close as possible to that of the detectors. This development often occurs across multiple packages, which have to be built in concert and whose features must be tested in a consistent and reproducible manner.
I will present our scheme for accomplishing these goals using Spack environments, bundle packages, and Github Actions-based CI.
Speakers
Eric Flumerfelt

Computational Physics Developer, Fermi National Accelerator Laboratory
I have been developing data acquisition systems at Fermilab since 2014. I have worked with a number of particle physics experiments, from small test-beam experiments which run for two weeks to large international collaborations.
Thursday May 8, 2025 11:45am - 12:05pm CDT
Salon E-G

12:05pm CDT

Lunch (Provided for Attendees)
Thursday May 8, 2025 12:05pm - 1:35pm CDT
Atrium

1:35pm CDT

Key Charliecloud Innovation - Kubernetes - Angelica Loshak, Los Alamos National Laboratory
Thursday May 8, 2025 1:35pm - 1:55pm CDT
Kubernetes automates container deployment and management across environments. HPC users can benefit from Kubernetes to support the increasing demand for novel workflows, especially in AI. Kubernetes' declarative approach allows users to schedule, scale, and maintain metrics on containers while supporting multiple container runtimes. Charliecloud can enhance HPC workloads when integrated with Kubernetes. However, Kubernetes only supports container runtimes that implement the Container Runtime Interface (CRI), which Charliecloud does not. To address this, we developed a prototype CRI-compatible server for Charliecloud, allowing Kubernetes to manage pods and to create, start, and track Charliecloud containers. Despite Kubernetes expecting certain features that Charliecloud does not use, such as network namespaces, we show that the two systems can still communicate effectively. Our implementation requires 700 lines of new code, fewer than 50 lines of modification to Charliecloud, and no changes to Kubernetes. This demonstrates that Kubernetes and Charliecloud are compatible tools, advancing scientific workflows that require large compute power.

LA-UR-24-28252
Speakers
Angelica Loshak

Student, Los Alamos National Laboratory
Thursday May 8, 2025 1:35pm - 1:55pm CDT
Illinois River

1:35pm CDT

From Complexity to Efficiency: Spack’s Impact on NSM Supercomputers - Samir Shaikh & Harshitha Ugave, Centre for Development of Advanced Computing (C-DAC)
Thursday May 8, 2025 1:35pm - 1:55pm CDT
The National Supercomputing Mission (NSM) advances India’s research by providing HPC infrastructure across institutions. However, managing software on diverse HPC systems is challenging due to hardware variations, dependencies, and version control.

Spack, a flexible package manager, addresses these issues by enabling seamless software deployment and dependency management across clusters. This study examines Spack’s implementation on 17 NSM HPC systems, improving software availability and consistency.

Spack simplifies this through customized installations, automated dependency handling, and reproducible builds, ensuring compatibility.

Implementation involved a centralized repository, automated builds, user training, software optimization, and continuous refinement. This improved research productivity, reduced support overhead, and standardized environments.

Key benefits include reproducibility, faster issue resolution, and better collaboration. Future plans involve expanding Spack repositories, integrating containers, automating updates, and training. This presentation covers our implementation, challenges, and best practices.
Speakers
Samir Shaikh

Scientist, Centre for Development of Advanced Computing (C-DAC)
Samir Shaikh is an HPC specialist at C-DAC, Pune, optimizing large-scale workloads, parallel computing, and system architecture. As a Scientist C, he enhances HPC performance for AI/ML, scientific computing, and NSM supercomputers. An IIT Guwahati M.Tech graduate, he has contributed...
Thursday May 8, 2025 1:35pm - 1:55pm CDT
Salon E-G

1:35pm CDT

Algorithms
Thursday May 8, 2025 1:35pm - 3:15pm CDT
1. Gyselalib++: A Portable, Kokkos-Based Library for Exascale Gyrokinetic Simulations - Etienne Malaboeuf, CINES (10 minutes)
The development of fusion energy in magnetic confinement devices relies heavily on simulations of plasma behavior. Gyselalib++ is a new open-source C++ library under active development by a European distributed and multidisciplinary team of physicists, mathematicians, and computer scientists at EPFL, CEA/IRFM, Maison de la Simulation, IPP Garching, and CINES. Gyselalib++ is itself built on top of PDI, DDC and Kokkos and provides mathematical tools for gyrokinetic semi-Lagrangian codes for tokamak plasma simulations. This presentation will introduce the library, its design and the rationale behind its development, and will highlight its key features. It will showcase how the choice of Kokkos made it possible to achieve high performance on modern hardware with performance portability over a wide range of hardware, and will explain the need to introduce DDC to improve development safety. We will discuss feedback from this experience, analyze our successes and the limitations of the approach, especially when it comes to performance, performance portability, and programmability of the code by a highly diverse team in terms of background.

2. Expression Templates with Kokkos for Lattice QCD - Travis Whyte, Jülich Supercomputing Centre (10 minutes)
Lattice quantum chromodynamics (QCD) is a first-principles approach to studying the interaction of quarks and gluons. The calculation of observables in lattice QCD requires many different operations between multidimensional arrays of various ranks. In this talk, I will describe an implementation of expression templates using Kokkos that allows lattice QCD practitioners to implement linear algebra operations simply, while avoiding temporaries, for views of arbitrary rank. This abstraction has the potential to promote high productivity in the development process. The performance of various benchmarks on different architectures will also be discussed. (A generic sketch of the expression-template pattern follows this session's agenda.)

3. Bridging Parallel Communication and On-Node Computation with Kokkos - Evan Suggs, Tennessee Technological University (20 minutes)
Although MPI and Kokkos have long been used together, there have been no well-defined methods for integrating them effectively; the only approach is to pass the underlying Kokkos View buffers to MPI functions.
This causes several major pain points: handling non-contiguous Views, asynchronous operations in both models, and how MPI interacts with Kokkos profiling. Kokkos Comm is an experimental MPI interface for the Kokkos C++ performance portability ecosystem that aims to address these concerns and improve the productivity of Kokkos users.
Currently, Kokkos Comm provides point-to-point and collective operations, handling of non-contiguous Views, and Kokkos Tools profiling. Kokkos Comm also aims to be a springboard for new and improved features that go beyond MPI and Kokkos, allowing Kokkos to work with MPI, stream-triggered MPIs, and other non-MPI communication libraries (e.g., NCCL and RCCL). This presentation will cover the Kokkos Comm API, conversion of existing code, best practices, how Kokkos Comm can help address common issues in Kokkos/MPI, and upcoming additions to Kokkos Comm, such as persistent communication and device-initiated communication.

4. Integration of PETSc, Kokkos Core, and Kernels for Performance Portability in the Age of Accelerators - Junchao Zhang, Argonne National Laboratory (20 minutes)
PETSc, the Portable, Extensible Toolkit for Scientific Computation, provides an extensive suite of scalable parallel solvers for linear and nonlinear equations, ordinary differential equation (ODE) integrators, and optimization algorithms. Widely adopted in both industry and academia, PETSc historically achieved performance portability through the C programming language and the Message Passing Interface (MPI) programming model. It used single-threaded MPI processes for both shared and distributed memory systems. This strategy had served us very well in the microprocessor age. However, the recent proliferation of accelerator-based architectures, particularly graphics processing units (GPUs), has posed new challenges to this performance portability. To address these challenges, we have integrated PETSc with the Kokkos ecosystem, specifically Kokkos-Core and Kokkos-Kernels. In this presentation, we describe our integration approach, highlight our experiences—both effective strategies and encountered challenges—and outline future developments aimed at further enhancing performance portability across evolving computational architectures.

5. Parallel Sweep Algorithms for Cartesian and Honeycomb Grids - Ansar Calloo, CEA (20 minutes)
The linear Boltzmann transport equation (BTE) is the governing equation for expressing the behaviour of neutral particles in a system such as a nuclear reactor. The BTE can be solved for the flux of particles using deterministic methods, whereby the equation is discretised in the phase space of its fundamental variables. This discrete equation is then usually solved using source iteration. In this talk, we will present how the sweep algorithm, which is based upon a wavefront pattern, has been optimised in the context of SMP for CPU, along with some preliminary results on GPU. The goal is to show how to adapt the sweep algorithm to be efficient on new supercomputer architectures. We will briefly introduce DONUT (Discrete Ordinates NeUtron Transport), a modern C++ miniapp for solving the BTE based on discrete ordinates and discontinuous Galerkin discretisations for Cartesian and honeycomb grids.
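
For illustration — a generic, self-contained sketch of the expression-template pattern referenced in talk 2. All names are invented for the sketch; a Kokkos version would hold Views and evaluate the fused assignment loop inside Kokkos::parallel_for.

```cpp
// Minimal expression templates: lazy elementwise arithmetic, no temporaries.
#include <cstddef>
#include <cstdio>
#include <vector>

// A node representing "lhs + rhs"; evaluation is deferred to operator[].
template <class L, class R>
struct AddExpr {
  const L& lhs;
  const R& rhs;
  double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
  std::size_t size() const { return lhs.size(); }
};

struct Field {
  std::vector<double> data;
  explicit Field(std::size_t n, double v = 0.0) : data(n, v) {}
  double operator[](std::size_t i) const { return data[i]; }
  std::size_t size() const { return data.size(); }

  // Assigning any expression evaluates it in a single fused loop.
  template <class E>
  Field& operator=(const E& expr) {
    for (std::size_t i = 0; i < size(); ++i) data[i] = expr[i];
    return *this;
  }
};

// Unconstrained for brevity; a real library restricts these overloads to
// its own field/expression types (e.g., via a common base or a concept).
template <class L, class R>
AddExpr<L, R> operator+(const L& a, const R& b) { return {a, b}; }

int main() {
  Field a(4, 1.0), b(4, 2.0), c(4, 3.0), out(4);
  out = a + b + c;              // builds a tree, evaluates in one loop
  std::printf("%g\n", out[0]);  // prints 6
  return 0;
}
```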
Speakers
Ansar Calloo

Research engineer, CEA
Ansar obtained his PhD in deterministic neutron transport at CEA. For the past fifteen years, he has been working on improving simulations for reactor physics applications, first at EDF R&D, then CEA. His research interests involve nuclear reactor modelling and numerical methods to solve...
Etienne Malaboeuf

HPC Engineer, CINES/CEA
I focus on improving the performance of projects related to real-time and high-performance computing, while providing various forms of support to researchers using French supercomputers. I have worked on numerical simulation software in an HPC context, on supercomputers and on game...
Evan Suggs

Staff Researcher, Tennessee Technological University
Evan Drake Suggs is a Research Scientist at Tennessee Technological University in Cookeville, Tennessee. In 2023, Suggs graduated with a Master's degree in Data Science from the University of Tennessee at Chattanooga and presented his thesis work on MPI+Kokkos using the ExaMPI implementation...
Junchao Zhang

Principal Specialist, Research Software Engineering, Argonne National Laboratory
Junchao Zhang is a software developer at Argonne. He currently works on the Portable, Extensible Toolkit for Scientific Computation (PETSc) project. Before joining PETSc, he was an MPICH developer at Argonne and developed the MPI Fortran 2008 binding and MPI tool interface of MPI-3.0...
Travis Whyte

Postdoc, Jülich Supercomputing Centre
I graduated from Baylor University with a Ph.D. in Physics, focusing on algorithmic improvements for lattice QCD simulations. Since then, I have continued to work in the field, focusing on improving iterative solvers, scattering simulations and HPC software development.
Thursday May 8, 2025 1:35pm - 3:15pm CDT
Salon A-C

1:55pm CDT

BEE: Orchestrating Workflows with Containerized Applications Leveraging Charliecloud - Krishna Chilleri, Los Alamos National Laboratory
Thursday May 8, 2025 1:55pm - 2:15pm CDT
Build and Execution Environment (BEE) is a workflow orchestration system designed to build containerized HPC applications and orchestrate workflows across HPC and cloud systems. BEE integrates with existing tools and infrastructure in the scientific computing ecosystem, making it an ideal choice for large-scale scientific simulations. The use of these tools and standards allows for efficient management of the provisioning, scheduling, and monitoring of individual tasks, as well as providing flexibility, portability, and reproducibility of workflows. This presentation will highlight how BEE leverages Charliecloud—as one of its container runtimes—to facilitate unprivileged builds, pull images from registries, and run containerized applications.

LA-UR-25-22166
Speakers
Krishna Chilleri

Student, Los Alamos National Laboratory
Thursday May 8, 2025 1:55pm - 2:15pm CDT
Illinois River

1:55pm CDT

Deploying a Large HPC Software Stack - Challenges and Experiences - Jose Gracia, HLRS, University Stuttgart
Thursday May 8, 2025 1:55pm - 2:15pm CDT
We aim to use Spack to deploy a large software stack at a German national HPC center. In this talk, we will give some background on the size of the software stack, its deployment frequency, and the constraints arising from the operational environment. Next, we will briefly outline some of the challenges and obstacles that we encountered, such as configuration issues, interaction with the Cray Programming Environment, and unexpected outcomes from the concretizer. We end the talk with the current status and next steps.
Speakers
Jose Gracia

Senior Researcher, HLRS, University Stuttgart
Together with his group, José Gracia does research into topics related to scalable programming models such as new approaches to MPI or task-based programming models and their interoperability at scale. He also works on performance analysis tools, characterization of application performance...
Thursday May 8, 2025 1:55pm - 2:15pm CDT
Salon E-G

2:15pm CDT

Building and Maintaining OSS on Fugaku: RIKEN’s Experience with Spack - Yuchi Otsuka, RIKEN R-CCS
Thursday May 8, 2025 2:15pm - 2:35pm CDT
Fugaku, Japan’s flagship supercomputer, serves a diverse range of scientific disciplines, requiring extensive open-source software (OSS) support. However, managing OSS on Fugaku presents unique challenges due to its A64FX-based Arm architecture and Fujitsu’s proprietary compilers and libraries. Our team has been leveraging Spack to efficiently manage and maintain OSS. In this talk, we will share our experience using Spack on Fugaku, highlighting how it has enabled a robust and up-to-date OSS environment. We will discuss the practical benefits of Spack, including streamlined software deployment and simplified package management, and reflect on lessons learned from maintaining software in a large-scale HPC system. By sharing our insights, we aim to contribute to the broader Spack community and reinforce its role as a key tool for HPC software management.
Speakers
Yuchi Otsuka

Technical Scientist, RIKEN R-CCS
I have a long-standing background in computational condensed-matter physics research and have been involved in managing and maintaining OSS on Fugaku since 2022. My role is to ensure a robust and up-to-date OSS environment on Fugaku to support a wide range of scientific applications...
Thursday May 8, 2025 2:15pm - 2:35pm CDT
Salon E-G

2:15pm CDT

Charliecloud Office Hours - Reid Priedhorsky, Los Alamos National Laboratory
Thursday May 8, 2025 2:15pm - 3:15pm CDT
Members of the Charliecloud team will be available for office hours to listen to feedback/suggestions, answer questions, and/or help debug issues. 

LA-UR-25-22140
Speakers
Reid Priedhorsky

Scientist, Los Alamos National Laboratory
I am a staff scientist at Los Alamos National Laboratory. Prior to Los Alamos, I was a research staff member at IBM Research. I hold a Ph.D. in computer science from the University of Minnesota and a B.A., also in computer science, from Macalester College. My work focuses on large-scale...
Thursday May 8, 2025 2:15pm - 3:15pm CDT
Illinois River

2:35pm CDT

Using Spack to Build and Maintain a Facility-Specific Programming Environment - Nicholas Sly, Lawrence Livermore National Laboratory
Thursday May 8, 2025 2:35pm - 2:55pm CDT
This talk covers the trials and tribulations of using Spack to construct, build, and maintain a facility-specific programming environment at LLNL, and our work with the Spack developers to ensure that Spack can do what it claims in a large production environment.
Speakers
Nick Sly

Scientist, Lawrence Livermore National Laboratory
Thursday May 8, 2025 2:35pm - 2:55pm CDT
Salon E-G

2:55pm CDT

Aurora PE: Rethinking Software Integration in the Exascale Era - Sean Koyama, Argonne National Laboratory
Thursday May 8, 2025 2:55pm - 3:15pm CDT
The exascale Aurora supercomputer at the Argonne Leadership Computing Facility posed numerous challenges during its development due to its novel scale. One such challenge was creating a scalable and maintainable scientific software environment. Typical software deployment methods failed to scale and were difficult to maintain over time, necessitating a new way of thinking about software integration. In this talk we present our work on the Aurora Programming Environment, a bespoke scientific programming environment which optimizes for scale and leverages Spack for its strengths in reproducibility, automation, and multiplicative build combinations. We discuss details of the containerized build process and read-only image deployment strategy, as well as existing pain points and workarounds. We also examine the future possibilities that our approach opens up, including tightly integrated CI/CD flows and portable containerized access to the PE. We believe this approach is generalizable and may benefit facilities where traditional software integration methods fall short of needs.
Speakers
Sean Koyama

Systems Integration Admin, Argonne National Laboratory
Sean Koyama is a Systems Integration Administrator at the Argonne National Laboratory's Leadership Computing Facility. Sean integrates scientific software stacks into the user environments on ALCF machines, including Aurora, the ALCF's exascale supercomputer. Their work includes developing...
Thursday May 8, 2025 2:55pm - 3:15pm CDT
Salon E-G

3:15pm CDT

Coffee Break
Thursday May 8, 2025 3:15pm - 3:40pm CDT

3:40pm CDT

Driving Continuous Integration and Developer Workflows with Spack - Richard Berger, Los Alamos National Laboratory
Thursday May 8, 2025 3:40pm - 4:00pm CDT
Spack makes it easy to install dependencies for our software on multiple HPC platforms. However, there is little guidance on how to structure Spack environments for larger projects, share common Spack installations with code teams and utilize them in an effective way for continuous integration and development.

This presentation will share some of the lessons learned from deploying chained Spack installations for multiple code teams at LANL on various HPC platforms both on site and on other Tri-Lab systems, how to structure such deployments for reusability and upgradability, and make them deployable even on air-gapped systems. It will also show how we utilize Spack's build facilities to drive CMake-based projects on GitLab for continuous integration, without having to replicate build configuration logic in GitLab files, while giving developers an easy-to-follow workflow for recreating CI runs in various configurations.
Speakers
Richard Berger

Scientist, Los Alamos National Laboratory
Richard is a research software engineer in the Applied Computer Science Group (CCS-7) at Los Alamos National Laboratory (LANL) with a background in Mechatronics, high-performance computing, and software engineering. He is currently contributing to the core development of LAMMPS, FleCSI...
Thursday May 8, 2025 3:40pm - 4:00pm CDT
Salon E-G

3:40pm CDT

Panel Discussion to be Announced
Thursday May 8, 2025 3:40pm - 5:00pm CDT
Salon A-C

4:00pm CDT

Implementing a Security Conscious Build Configuration Relay with a Shared Build Cache - Chris White, Lawrence Livermore National Laboratory
Thursday May 8, 2025 4:00pm - 4:20pm CDT
In large-scale software development efforts, effective communication between projects is essential to ensure consistency, reproducibility, and efficiency. This presentation explores strategies to improve coordination among software teams by leveraging Continuous Integration (CI) for relaying crucial build configurations while maintaining security for proprietary project sources. We will demonstrate best practices for sharing build configurations with upstream projects without exposing proprietary code.

A key focus will be optimizing the use of Spack, particularly reducing the number of Spack package repositories used across multiple teams, which will simplify maintenance, harden builds, and avoid duplication. Additionally, we will highlight the benefits of heavily integrating Spack CI to generate build caches, which reduce rebuild times and enhance software portability. By adopting these approaches, teams can achieve better collaboration, streamlined workflows, and improved software sustainability.
Speakers
Chris White

WSC DevOps Coordinator, Lawrence Livermore National Laboratory
Chris White is the WSC DevOps Coordinator at Lawrence Livermore National Laboratory. He advises multi-disciplinary teams on software best practices with a focus on unifying complex DevOps workflows across multiple teams. Chris specializes in improving collaboration while ensuring...
Thursday May 8, 2025 4:00pm - 4:20pm CDT
Salon E-G

4:20pm CDT

Spack-Based WEAVE Environment at LLNL - Lina Muryanto, Lawrence Livermore National Security, LLC
Thursday May 8, 2025 4:20pm - 4:50pm CDT
The WEAVE team at LLNL has created a Spack-based virtual environment (accessible to the Livermore Computing community) with a rich set of open-source tools for creating workflows for any HPC application, along with commonly used Python packages and several commonly used ML and AI packages.
The goal is to provide a stable, well-tested environment that users can activate and use directly across Livermore Computing's vast array of machines, OSes, and networks.
We also provide the capability for users to create a local environment based on the WEAVE environment.

Using Spack allows us to install the same set of software across different platforms in LC. It also allows us to use the same Spack environment file to recreate the exact same ecosystem across networks within the Lab.
We leverage GitLab CI as our DevOps platform to automate Spack installs, create test environments, run tests, and deploy the environment.
We also leverage the LLNL Nexus Repository to sync our build files across networks within the lab.

The WEAVE team has also implemented the "WEAVE Badging Program", through which the community can request that a tool be integrated into the WEAVE environment.
Speakers
Lina Muryanto

Software Engineer, Lawrence Livermore National Security, LLC
Lina joined LLNL in 2018 as a DevOps engineer for the ESGF Project. In 2021, Lina joined the SD program and joined the WEAVE team in 2022. She has implemented CI/CD from scratch. Lina is passionate about achieving high software quality and reliability through software test development...
Thursday May 8, 2025 4:20pm - 4:50pm CDT
Salon E-G

4:40pm CDT

DevOps for Monolithic Repositories Using Spack - Phil Sakievich, Sandia National Laboratories
Thursday May 8, 2025 4:40pm - 5:00pm CDT
In the realm of large software projects, the choice between a monolithic repository and several distributed repositories presents significant trade-offs that can impact development efficiency, collaboration, and maintainability. Monolithic repositories, while offering centralized management and streamlined dependency handling, can become unwieldy as project size increases. Conversely, distributed repositories provide modularity and flexibility but may lead to challenges in integration and version control. This presentation will delve into the ongoing research conducted by Sandia National Laboratories, where researchers are exploring innovative solutions to harness the strengths of both repository models through the use of Spack, a package manager designed for scientific computing. We will outline the methodology employed in this exploration, highlighting the performance trade-offs identified thus far, including aspects such as build times, dependency resolution, and ease of collaboration. Attendees will gain insights into the implications of repository structure on software development practices and the potential for hybrid approaches to optimize project outcomes.
Speakers
Phil Sakievich

Senior Computer Scientist R&D, Sandia National Laboratories
Phil comes from a high-performance computing and fluid mechanics background. He became involved with Spack during the Exascale Computing Project and is the author of the Spack-Manager project. Phil is an active member of the Spack technical steering committee and currently leads several...
Thursday May 8, 2025 4:40pm - 5:00pm CDT
Salon E-G
 