Einstein’s theory of General Relativity revolutionised Physics over a century ago. Despite this, the number of known analytical solutions to its equations, particularly in the dynamical, strong-field case, is very small. Numerical relativity is often the only tool that can be used to investigate this regime.
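For reference, the equations in question are the Einstein field equations, which relate the curvature of spacetime to its matter and energy content:

\[ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} . \]

Because they form a coupled, nonlinear system of partial differential equations for the metric, closed-form solutions exist only in highly symmetric situations, which is why dynamical strong-field problems are tackled numerically.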
I will describe Quokka, a new AMR code for astrophysics aimed at problems in star formation and galaxy formation. Quokka is a Newtonian radiation hydrodynamics code based on a method-of-lines formulation, with support for particles, chemistry, self-gravity, and (soon) magnetic fields. We use AMReX's unique capabilities to enable many of these physics features, from AMReX's OpenBC Poisson solver for self-gravity to AMReX-Astro Microphysics for cell-by-cell chemistry network integration with symbolic code generation. I will talk about our current code development, focused on particles and a constrained-transport implementation of magnetohydrodynamics, and briefly mention applications in simulating galactic winds and the formation of the first stars.
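To give a flavor of the AMReX machinery involved, the sketch below shows a single-level, fully periodic Poisson solve for the gravitational potential (solving del^2 phi = 4 pi G rho) using AMReX's MLPoisson and MLMG classes. This is only an illustrative sketch, not Quokka's production solver, which uses AMReX's OpenBC capability for isolated (non-periodic) boundaries; the grid sizes and setup here are arbitrary.

// Illustrative single-level Poisson solve with AMReX's MLPoisson + MLMG.
// Not Quokka's actual self-gravity implementation.
#include <AMReX.H>
#include <AMReX_Geometry.H>
#include <AMReX_MultiFab.H>
#include <AMReX_MLPoisson.H>
#include <AMReX_MLMG.H>

int main (int argc, char* argv[])
{
    amrex::Initialize(argc, argv);
    {
        using namespace amrex;

        // Problem domain: a 64^3 unit cube, periodic in all directions.
        Box domain(IntVect(AMREX_D_DECL(0,0,0)), IntVect(AMREX_D_DECL(63,63,63)));
        BoxArray grids(domain);
        grids.maxSize(32);
        DistributionMapping dmap(grids);
        RealBox rb({AMREX_D_DECL(0.,0.,0.)}, {AMREX_D_DECL(1.,1.,1.)});
        Geometry geom(domain, rb, 0, {AMREX_D_DECL(1,1,1)});  // Cartesian, periodic

        // rhs holds 4*pi*G*rho (filled elsewhere); phi is the potential.
        MultiFab rhs(grids, dmap, 1, 0);
        MultiFab phi(grids, dmap, 1, 1);
        rhs.setVal(0.0);
        phi.setVal(0.0);

        // Set up the Poisson operator and solve with geometric multigrid.
        MLPoisson linop({geom}, {grids}, {dmap});
        linop.setDomainBC({AMREX_D_DECL(LinOpBCType::Periodic,
                                        LinOpBCType::Periodic,
                                        LinOpBCType::Periodic)},
                          {AMREX_D_DECL(LinOpBCType::Periodic,
                                        LinOpBCType::Periodic,
                                        LinOpBCType::Periodic)});
        linop.setLevelBC(0, nullptr);  // no physical BC data needed when fully periodic

        MLMG mlmg(linop);
        mlmg.setVerbose(1);
        mlmg.solve({&phi}, {&rhs}, 1.e-10, 0.0);
    }
    amrex::Finalize();
}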
In this talk I discuss the role of the SUNDIALS (SUite of Nonlinear and DIfferential/ALgebraic equation Solvers) package in three different AMReX-based applications. In this context, SUNDIALS provides support for the overall temporal integration scheme of the model equations. We are particularly interested in the multirate infinitesimal (MRI) schemes, in which different physical processes can be advanced using different time steps. Our initial application is combustion, where we show that we are able to achieve increased accuracy over our traditional spectral deferred corrections approach with greater computational efficiency. Our second application is micromagnetic memory and storage devices, where we demonstrate improved efficiency by leveraging the non-stiff nature of the computationally expensive demagnetization processes. Our final application is low-power ferroelectric transistors, where we also effectively leverage the non-stiff nature of the computationally expensive Poisson process. In each case we show significant computational savings over our baseline codes.
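At a high level, the MRI schemes treat a system whose right-hand side splits into slow and fast parts,

\[ y' = f^{S}(t,y) + f^{F}(t,y), \]

advancing the slow term with a large step H while the fast dynamics are resolved inside each slow stage by integrating an auxiliary fast problem with much smaller steps h << H. This is only the generic idea behind multirate infinitesimal methods; the specific schemes and couplings used in each application are described in the talk.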
The adaptive mesh and particle capabilities of AMReX have been used to implement a wide range of numerical methods, targeting combustion, plasma physics, earth systems modeling, cosmology, and more. This talk will show how they can also be used to implement a quite different type of algorithm: ExaEpi, an agent-based model (ABM) for the spread of respiratory diseases. ABMs are valuable because they provide a fundamental and natural description of the system and are able to capture emergent phenomena. However, their use in forecasting and control is limited by the difficulty of calibrating, and quantifying the uncertainty associated with, a large number of parameters. By leveraging AMReX, ExaEpi can help address these limitations by enabling many large ensembles to run quickly on exascale compute facilities.
The WarpX project is advancing the modeling and simulation of a wide range of physics applications, including particle accelerators and nuclear fusion, through high-performance scientific computing. This talk will provide a comprehensive update on WarpX, focusing on our experiences with the AMReX software framework. We will highlight key achievements and significant milestones, share lessons learned, and discuss the challenges we anticipate ahead. The presentation will emphasize the critical role of AMReX in enabling high performance and portability, and provide insight into the challenges faced and the solutions developed.
The phase field (PF) method is used to simulate a wide range of problems in mechanics, from crack propagation to topology optimization. As a diffuse-interface method, PF requires AMR to be computationally feasible, yet few PF codes leverage block-structured AMR. Alamo is an AMReX-based code designed to solve phase field equations with implicit elastic solves. It features a variety of phase field methods, material models, and a strong-form nonlinear elastic solver based on AMReX's MLMG. In this talk, we give a high-level overview of some of the applications of Alamo, including deflagration of solid rocket propellant, topology optimization of mechanical structures, phase field fracture, microstructure evolution, and solid-fluid interaction.
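As a schematic example of the class of equations involved (a generic non-conserved phase field model, not one of Alamo's specific formulations), an order parameter \eta evolves by gradient descent on a free energy functional:

\[ \frac{\partial \eta}{\partial t} = -L\,\frac{\delta F}{\delta \eta}, \qquad F[\eta] = \int \left( f(\eta) + \frac{\kappa}{2}\,|\nabla \eta|^{2} \right) dV , \]

where the gradient-energy coefficient \kappa spreads the interface over a finite width. Resolving that diffuse interface is what drives the computational cost, and why block-structured refinement around the interface pays off.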
The past decade has seen a rapid increase in the development and use of Python-based open-source libraries for data-driven methods and machine learning. The widespread adoption of these libraries across scientific and engineering disciplines highlights their growing prominence in accelerating simulations. This talk presents an in-situ (in-memory/online) workflow, requiring minimal modifications, that is designed to let mature AMReX codes leverage the rich Python ecosystem. The workflow enables language-interoperable data transfer through runtime coupling of AMReX and pyAMReX via the Multiple Program Multiple Data (MPMD) mode of the Message Passing Interface (MPI). This capability is demonstrated through multiscale modeling of granular flows, which involves coupling low-fidelity continuum and high-fidelity particle-based methods. The computational intractability of a straightforward coupling between the low- and high-fidelity methods is addressed using adaptively (on-the-fly) evolving neural network ensembles, implemented with PyTorch Distributed Data Parallel, as a surrogate for the expensive high-fidelity solver. The scalability of the current approach across multiple GPUs will also be discussed.
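The MPMD coupling relies on a standard MPI mechanism: when two executables (for example, an AMReX solver and a Python driver built on pyAMReX) are launched as a single MPMD job, each program can discover its application number and split off its own communicator while still exchanging data through the world communicator. The following generic C++ sketch shows that mechanism only; it is not the AMReX/pyAMReX-specific API used in the workflow, and the program names in the comment are hypothetical.

// Generic MPI MPMD pattern, e.g. for a launch such as
//   mpirun -np 8 ./amrex_solver : -np 2 python driver.py
// Each program identifies itself via MPI_APPNUM and splits its own communicator.
#include <mpi.h>
#include <cstdio>

int main (int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    void* appnum_ptr = nullptr;
    int   flag = 0;
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum_ptr, &flag);
    int appnum = flag ? *static_cast<int*>(appnum_ptr) : 0;

    // All ranks belonging to the same program end up in app_comm; cross-program
    // exchange (e.g., shipping field data to the ML driver) still goes through
    // MPI_COMM_WORLD.
    MPI_Comm app_comm;
    MPI_Comm_split(MPI_COMM_WORLD, appnum, 0, &app_comm);

    int world_rank = 0, app_rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_rank(app_comm, &app_rank);
    std::printf("world rank %d is rank %d of application %d\n",
                world_rank, app_rank, appnum);

    MPI_Comm_free(&app_comm);
    MPI_Finalize();
    return 0;
}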
To comprehensively investigate multiphysics coupling in spintronic devices, GPU acceleration is essential to address the spatial and temporal disparities inherent in micromagnetic simulations. Beyond traditional numerical methods, machine learning (ML) offers a powerful approach to replace and accelerate computationally expensive routines, particularly in evaluating demagnetization fields. Leveraging AMReX and Python-based ML workflows, we developed an open-source micromagnetics tool that integrates ML-driven surrogate models to enhance computational efficiency. By replacing costly demagnetization field calculations with neural-network-based approximations, the framework significantly accelerates simulations while maintaining accuracy. In addition to supporting key magnetic interactions (Zeeman, demagnetization, anisotropy, exchange, and Dzyaloshinskii-Moriya), the tool is validated on µMAG standard problems, widely accepted DMI benchmarks, and skyrmion-based applications. This ML-accelerated approach improves computational performance and enables large-scale, data-driven micromagnetics simulations, advancing the study of spintronic and electronic systems.
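The demagnetization field that the surrogate model replaces is the nonlocal piece of the effective field: the magnetization of every cell contributes to the field at every other cell, schematically

\[ \mathbf{H}_{\mathrm{demag}}(\mathbf{r}) = -\int \mathsf{N}(\mathbf{r}-\mathbf{r}')\,\mathbf{M}(\mathbf{r}')\,d^{3}r' , \]

where \mathsf{N} is the demagnetization tensor. Direct evaluation scales quadratically with the number of cells (FFT-based convolution is the usual remedy), which is why a learned approximation of this term is attractive.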
We introduce a hybrid HPC–ML framework for efficient modeling of magnon–photon interactions. The HPC component uses an explicit FDTD leap-frog Maxwell–LLG solver (second-order accurate), solving Maxwell’s equations in nonmagnetic regions and adding the LLG equation where ferromagnets are present. Parallelization leverages AMReX’s MPI+X model for multicore CPUs or GPUs, partitioning the domain among MPI ranks. Data collected from nine points in the ferromagnet feed a Long Expressive Memory (LEM) encoder–decoder, trained with a composite loss function (reconstruction, prediction, and physics) and guided by Curriculum Learning. During training, we begin with shorter sequences, no physics enforcement, and a higher learning rate, then move to longer sequences, physics constraints, and a lower rate. Using just 1 ns of high-fidelity simulation data, the ML surrogate accurately predicts the magnetic-field evolution and matches frequency responses (13–18 GHz) under various DC biases. With physics constraints included, errors remain low even for longer sequences. The model reproduces transmission spectra and captures both primary and dual resonances (1800–2200 Oe) with high precision, achieving errors below 2.5% and demonstrating robust spatial and spectral generalization.
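For reference, the LLG equation solved in the ferromagnetic regions takes, in one common (Gilbert) form,

\[ \frac{\partial \mathbf{M}}{\partial t} = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}} + \frac{\alpha}{M_{s}}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t} , \]

with \gamma the gyromagnetic ratio, \alpha the Gilbert damping, and \mathbf{H}_{\mathrm{eff}} the effective field fed back from the Maxwell update; it is this coupled magnetization dynamics that the LEM surrogate learns to advance.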
High-fidelity simulations of high-speed, compressible flows require accurately capturing complex flow features. AMReX, an exascale block-structured adaptive mesh refinement (AMR) framework, enables high-resolution simulations for a range of applications. This talk explores two key challenges in high-speed flows: moving bodies and liquid-gas flows. For moving bodies, a ghost-cell method is developed within the Compressible Navier-Stokes (CNS) framework of AMReX to compute fluxes on moving embedded boundary (EB) faces. A third-order least-squares formulation improves wall velocity gradient accuracy, enhancing skin friction coefficient estimation. The method is validated using inviscid and viscous test cases. For liquid-gas flows, an all-Mach multiphase algorithm solves the compressible flow equations using an unsplit volume-of-fluid (VOF) method with piecewise linear interface calculation (PLIC) for liquid-gas interface reconstruction. Simulations include a liquid jet in supersonic crossflow and spray atomization with acoustic excitation.
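To illustrate the flavor of a ghost-cell treatment at a moving wall (a generic construction, not necessarily the exact formulation used in this work): for a slip wall moving with velocity \mathbf{u}_w and outward normal \mathbf{n}, the ghost velocity mirrors the interior velocity about the wall,

\[ \mathbf{u}_g = \mathbf{u}_i - 2\left[(\mathbf{u}_i - \mathbf{u}_w)\cdot\mathbf{n}\right]\mathbf{n} , \]

so that the relative normal velocity vanishes at the embedded boundary face; viscous walls additionally constrain the tangential component, and a higher-order reconstruction such as the least-squares formulation mentioned above is needed to recover accurate wall gradients from the surrounding cut cells.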
Wind energy plays a crucial role in meeting the electricity demands of the U.S.; however, high maintenance costs highlight the need for accurate predictions of unsteady loading caused by turbine layout and off-design wind conditions. Existing design tools often neglect fluid-structure interactions that drive costly fatigue loads, prompting the research community to leverage high-performance computing (HPC) solvers to study these effects. Unfortunately, such tools remain too complex and costly for industrial applications, particularly due to challenges in grid generation and setup. To address this, CDI is developing a Cartesian-based hybrid solver that integrates an incompressible vorticity-based far-field formulation with a compressible primitive variable solver in the near field. The framework is built on the AMReX library, enabling block-structured mesh refinement for efficient computation. This talk will explore both the computational and mathematical aspects of coupling these two solvers, highlighting advancements in predictive modeling for wind turbine aerodynamics.
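For context, the incompressible far-field formulation evolves vorticity rather than primitive variables; in its simplest constant-density form the governing equation is the vorticity transport equation,

\[ \frac{\partial \boldsymbol{\omega}}{\partial t} + (\mathbf{u}\cdot\nabla)\boldsymbol{\omega} = (\boldsymbol{\omega}\cdot\nabla)\mathbf{u} + \nu \nabla^{2}\boldsymbol{\omega} . \]

A standard motivation for this choice is that vorticity is concentrated in the blade wakes, so the far field is nearly irrotational and can be treated efficiently, while the near-field compressible solver retains the primitive-variable form.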
AMR-Wind is a high-fidelity computational-fluid-dynamics solver for simulating wind farm flow physics. The solver enables predictive simulations of the atmospheric boundary layer and wind plants by leveraging a block-structured, adaptive-mesh, incompressible-flow solver. AMR-Wind is designed for scalability on high-performance computing systems, with an emphasis on performance portability for graphics processing units (GPUs). These flow solver capabilities and performance characteristics were enabled using the AMReX library. In this talk, we present AMR-Wind, its capabilities, and its performance characteristics. We detail the numerical implementation and the verification and validation efforts, and demonstrate AMR-Wind for large-eddy simulations of wind farm physics. A demonstration simulation is presented for a 12-turbine wind farm operating in a turbulent atmospheric boundary layer with realistic wake interactions. We also discuss AMR-Wind in the wider context of the ExaWind suite of codes, including (1) its integration as a background solver to Nalu-Wind with overset methods to perform geometry-resolved simulations of wind turbines and (2) its use coupled to a mesoscale weather simulation code, ERF.
Load balancing is an evergreen topic in HPC mesh-and-particle codes like AMReX. Appropriate, fast, and effective load balancing strategies are critical to ensure AMReX codes make the best use of available computational resources and maximize the size and type of simulations that can be performed. I have begun investigating potential short- and long-term advancements in load balancing that may be of use to the AMReX community. In this talk, I will give an overview of the current load balancing options in AMReX, as well as an algorithmic investigation performed last summer by NERSC summer interns that found potential improvements in our current Knapsack and SFC algorithms. Then, I will present an overview of possible next steps and long-term advancements, and where I believe these advances would be most helpful. Hopefully, this will generate a discussion around which investigations would be most helpful to the AMReX community, to determine the best targets for this summer’s interns and beyond.
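To make the algorithms concrete, the knapsack strategy is, at heart, a greedy bin-packing heuristic: assign each box, in decreasing order of estimated cost, to whichever rank currently carries the least work. The self-contained sketch below illustrates only that idea; it is not AMReX's actual DistributionMapping implementation, and the per-box costs are hypothetical.

// Greedy "knapsack"-style load balancing sketch: boxes with estimated costs are
// assigned, heaviest first, to the rank that currently has the least total work.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <queue>
#include <vector>

std::vector<int> knapsack_assign (const std::vector<double>& cost, int nranks)
{
    // Visit boxes in decreasing order of cost.
    std::vector<int> order(cost.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return cost[a] > cost[b]; });

    // Min-heap of (accumulated load, rank).
    using Load = std::pair<double,int>;
    std::priority_queue<Load, std::vector<Load>, std::greater<Load>> ranks;
    for (int r = 0; r < nranks; ++r) { ranks.push({0.0, r}); }

    std::vector<int> owner(cost.size(), -1);
    for (int ibox : order) {
        auto [load, r] = ranks.top();
        ranks.pop();
        owner[ibox] = r;                     // give the box to the lightest rank
        ranks.push({load + cost[ibox], r});  // and update that rank's load
    }
    return owner;
}

int main ()
{
    std::vector<double> cost = {5.0, 1.0, 3.0, 2.0, 4.0, 2.0};  // hypothetical per-box costs
    auto owner = knapsack_assign(cost, 2);
    for (std::size_t i = 0; i < cost.size(); ++i) {
        std::printf("box %zu (cost %.1f) -> rank %d\n", i, cost[i], owner[i]);
    }
    return 0;
}

The SFC alternative instead orders the boxes along a space-filling curve and cuts the curve into contiguous segments of roughly equal total cost, trading some load balance for better spatial locality and communication patterns.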
This research develops a framework to simulate multiphase flows with surface tension. Two open-source codes are coupled to achieve this goal: the Interface Reconstruction Library (IRL), a library implementing volume-of-fluid (VOF) schemes, and PeleLM, a reacting Navier-Stokes solver. In addition to the coupling, surface tension is implemented using the continuum surface force (CSF) model along with an improved height-function technique. The coupling produces spurious errors in the volume fraction field; these are corrected through geometric considerations, which improves numerical stability and accuracy. With the developed framework, multiple validation simulations are conducted: (i) translations and rotations of Zalesak’s disk, (ii) three-dimensional deformation of a spherical droplet, (iii) a stationary circular droplet with surface tension, and (iv) an oscillating elliptical droplet. Quantitative comparisons show that the shape errors for Zalesak’s disk and the 3D deformation case are comparable to those of other solvers. The pressure inside the stationary droplet is maintained within 3.6% of the theoretical value, and the oscillation period of the elliptical droplet is within 6.7% of the linear-theory prediction.
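In the CSF model, surface tension enters the momentum equation as a volumetric force concentrated at the interface,

\[ \mathbf{F}_{\sigma} = \sigma \,\kappa\, \nabla \alpha , \]

where \sigma is the surface tension coefficient, \alpha the liquid volume fraction, and \kappa the interface curvature estimated by the height-function technique; the accuracy of \kappa largely controls the spurious currents quantified by the stationary-droplet test.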