Grand Challenge Seminars


M&IC and the Deputy Director for Science and Technology are pleased to announce the revival of the Grand Challenge Seminar Series.


Using Data Driven Models to Learn Molecular Interactions for Novel Countermeasure Design

Eleventh in the series was presented by Jonathan Allen

Automated chemical synthesis from combinatorial chemistry libraries offers the potential to exploit new regions of chemical space to counter novel and emerging biological threats. Experimentally driven design and physics-based simulation of novel therapeutics remain too costly to systematically explore the candidate makeable chemical space. Machine learning (ML) and artificial intelligence (AI) approaches are being used to build models that balance predictive accuracy with computational efficiency to meet the requirements of searching an exceptionally large chemical space. ML and AI models in drug discovery face a fundamental challenge: these data-driven models are asked to make predictions on parts of chemical space, and for disease targets, where the model has little or no training data. Through cross-target learning of 3D spatial relationships and chemical interactions, the goal is to learn a base molecular interaction model that can be re-trained and tuned with smaller targeted learning examples from selected small-molecule and disease-target datasets. New methods are presented for learning from experimentally derived data, augmented with molecular simulations, to guide small-molecule design. These methods are integrated into a computational design loop that proposes new small-molecule designs that can be synthesized and experimentally tested.

Watch it here: https://llnlfed.webex.com/webappng/sites/llnlfed/recording/2173b45e7c31103cbefe005056818699/playback

Machine Learning and Multi-Fidelity Modeling of Laser-Driven Particle Acceleration

Tenth in the series was presented by Blagoje Djordjević

Computer models of intense, laser-driven ion acceleration require expensive particle-in-cell (PIC) simulations that may struggle to capture all the multi-scale, multi-dimensional physics involved at reasonable cost. As a short-pulse laser impinges on a solid target, the system rapidly evolves from radiation-hydrodynamic conditions to fully kinetic conditions, ending in hybrid-like circumstances that severely complicate efforts to model and understand the physics at play, let alone optimize the particle beams and radiation generated. A multi-fidelity approach is explored to ameliorate this deficiency by incorporating physical trends and phenomena at different levels of fidelity. As the base framework for this study, an ensemble of approximately 10,000 1D PIC simulations was generated to buttress separate ensembles of hundreds of higher-fidelity 1D and 2D simulations. Using transfer learning with deep neural networks, one can reproduce the fidelity of more complex physics at a much smaller cost. The networks trained in this fashion can in turn act as surrogate models for the simulations themselves, allowing quick and efficient exploration of the parameter space of interest. These surrogate models are useful for exploring more complex schemes, such as pulse shaping, material studies, and different acceleration mechanisms. This work was done in synergy with ongoing LDRD and FES efforts to develop and optimize high-repetition-rate laser systems for particle and light source applications.
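The multi-fidelity idea can be illustrated with a toy example: fit a cheap correction that maps a plentiful low-fidelity model onto a handful of expensive high-fidelity samples. The sketch below uses a Kennedy-O'Hagan-style linear correction on made-up functions; it is purely illustrative and does not represent the talk's actual PIC simulations or deep-neural-network surrogates.

```python
import numpy as np

# Toy setup (illustrative assumption, not the talk's actual models):
# a cheap low-fidelity model and a scarce, expensive high-fidelity one.
def f_lo(x):          # low-fidelity: captures the overall trend
    return np.sin(2 * np.pi * x)

def f_hi(x):          # high-fidelity: trend plus an extra correction
    return np.sin(2 * np.pi * x) + 0.3 * x

# Only a handful of high-fidelity "simulations" are affordable.
x_hi = np.linspace(0.0, 1.0, 5)

# Kennedy-O'Hagan-style correction: f_hi(x) ~= rho * f_lo(x) + (a + b*x).
# Solve for (rho, a, b) by least squares on the high-fidelity points.
A = np.column_stack([f_lo(x_hi), np.ones_like(x_hi), x_hi])
coef, *_ = np.linalg.lstsq(A, f_hi(x_hi), rcond=None)
rho, a, b = coef

def surrogate(x):
    """Multi-fidelity surrogate: corrected low-fidelity model."""
    return rho * f_lo(x) + a + b * x

# Check accuracy everywhere, not just at the training points.
x_test = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(surrogate(x_test) - f_hi(x_test)))
print(f"rho={rho:.3f}, b={b:.3f}, max error={err:.2e}")
```

The deep-learning version described in the talk replaces the linear correction with a network pre-trained on the large 1D ensemble and fine-tuned on the small higher-fidelity ensembles, but the economics are the same: many cheap runs, few expensive ones.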

Watch it here: https://llnlfed.webex.com/webappng/sites/llnlfed/recording/22fe4e3807dc103c8fcf00505681d077/playback

Simulating Electron Dynamics in HPC: Systems, Programs, Libraries and Practices

Ninth in the series was presented by Alfredo A. Correa

To explain the properties of condensed matter systems (solids and liquids) at the atomic level, combining the theories of electromagnetism and quantum mechanics is fundamental. The quantum mechanical aspect of the problem makes the accurate simulation of even small systems of atoms exceptionally computationally intensive, making them an ideal target for HPC. The challenge is even greater when the simulation involves explicit time-dependent electronic excitations; we will illustrate results achieved with HPC allocations involving the combined dynamics of electrons and ions beyond the adiabatic Born-Oppenheimer approximation. As graphical processing units (GPUs) have become ubiquitous in supercomputers and clusters, some programming paradigms must be adjusted in order to simulate these systems. We apply these new principles to the electronic structure problem in a new atomistic code called INQ, designed from scratch to run in parallel on multiple GPUs. INQ is a compact code that implements density functional theory (DFT) and time-dependent density functional theory (TDDFT) for quantum electrons in a plane-wave basis, using modern code design features and techniques. In TDDFT simulations on GPU-based supercomputers, INQ achieves excellent performance: it can handle hundreds to thousands of atoms at a cost of one second or less per time step, and it scales to utilize thousands of GPUs.

Watch it here: https://hpc.llnl.gov/sites/default/files/correa-Grand-Challenge.mp4

Complex, Convoluted, Yet Consistent: Protein Induced Membrane Remodeling

Eighth in the series was presented by Tim Carpenter

Proteins embedded in cell membranes are the targets of ~70% of all FDA-approved drugs. A single family of membrane proteins, the GPCRs, accounts for almost one third of all drug targets. Cell membranes are highly complicated environments, made up of many different types of constituent lipid molecules. Experimentally, varying the types of lipids in the membrane surrounding a protein is known to modulate the protein's behavior. However, when designing or testing novel therapeutics, the influence of the membrane is often ignored; 50% of the environment is overlooked. Studies carried out to optimize therapeutics can thus be misleading, since the protein may respond differently depending on the properties of the membrane. Using novel multiscale simulations and a sophisticated membrane model, we demonstrate just how complex and intricate the protein–lipid interactions are. Indeed, the property-altering interactions between proteins and lipids are reciprocal: while lipids can induce changes in protein behavior, the protein can also drive changes in lipid properties or reorganization of the lipids. The systems acclimatize and adjust to external stimuli to provide a more consistent, reliable environment for the protein to perform its function. This fine-tuning adaptation further highlights the symbiotic nature of the membrane and the protein: they should not be considered separate elements but treated as a single multicomponent drug target.

Watch it here: https://llnlfed.webex.com/recordingservice/sites/llnlfed/recording/0171678932e0103b9f5f00505681d077/playback?rcdKey=4832534b0000000422ff29ce28c78e27cd8f18b8d6bef4eb60d05f0b1c5678daa8a065ef92767f98&timeStamp=1666303652666&reviewId=262461147

Supercomputer Revelations of Grain Boundaries and Metal Plasticity

Seventh in the series was presented by Tomas Oppelstrup

Macroscopic metals are poly-crystals, meaning that they consist of many crystal grains each with its own orientation of the atomic lattice. The grain sizes and orientations depend strongly on how the metal was formed and processed. The boundaries between the crystal grains have a strong impact on the deformation response and strength of the metal. Only recently has it become even remotely feasible to address metal strength or deformation response with direct numerical simulation of all participating atoms. Such simulations allow us to perceive how the discrete nature of the atoms and the crystal lattice they form define deformation mechanisms and the overall material response. Using resources awarded by the Computing Grand Challenge program we are conducting very large-scale molecular dynamics simulations to investigate the interplay between the grain boundaries and the bulk material in the interior of the crystal grains during plastic deformation.

The plasticity of poly-crystals is greatly influenced by the grain boundaries, which can act as obstacles to the motion of dislocations, the line-like defects that act as carriers of plasticity. The resulting strengthening effect is described by the Hall-Petch relation. A different strengthening mechanism, referred to as dislocation starvation, has also been demonstrated for small single crystals, where the bulk material is essentially dislocation-free and plasticity is accommodated by dislocations nucleating at free surfaces. In this seminar I will present results from our molecular dynamics simulations demonstrating the dynamic competition between dislocation multiplication in the bulk and dislocation nucleation at grain boundaries. The simulations reveal grain-size and deformation-rate regimes where plasticity and strength are governed by grain boundary dislocation nucleation.
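For reference, the Hall-Petch relation states that yield strength grows as grain size shrinks: sigma_y = sigma_0 + k_y / sqrt(d), where sigma_0 is the friction stress and k_y the Hall-Petch coefficient. A minimal sketch of this scaling, using hypothetical placeholder constants rather than values for any particular metal:

```python
import math

# Hall-Petch relation: sigma_y = sigma_0 + k_y / sqrt(d).
# The constants below are illustrative placeholders, not measured values.
sigma_0 = 50.0   # friction stress, MPa (hypothetical)
k_y = 0.15       # Hall-Petch coefficient, MPa * m^0.5 (hypothetical)

def yield_strength(d_m):
    """Predicted yield strength (MPa) for grain size d_m in meters."""
    return sigma_0 + k_y / math.sqrt(d_m)

# Halving the grain size repeatedly raises the predicted strength.
for d_um in (100.0, 10.0, 1.0):
    d = d_um * 1e-6  # convert micrometers to meters
    print(f"d = {d_um:6.1f} um -> sigma_y = {yield_strength(d):7.1f} MPa")
```

At very small grain sizes and high deformation rates, regimes like the grain-boundary-nucleation behavior described in the talk can depart from this simple inverse-square-root scaling.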

HPC-Enabled Asteroid Detection System in the Era of LSST

Sixth in the series was presented by Nathan Golovich

The Vera C. Rubin Observatory, home to the Legacy Survey of Space and Time (LSST), will make the largest contribution to planetary defense to date, discovering asteroids in multiple photometric filters to enable compositional inference and providing enough independent detections to fit orbits for the majority of the dangerous asteroids that remain undiscovered. Even so, the LSST will fall short of the 2005 congressional mandate to discover and characterize 90% of dangerous asteroids 140 m and larger; when the mandate's deadline passed in 2020, only ~35% of them had been discovered. To address this shortfall, NASA plans to launch an infrared space mission (the Near-Earth Object Surveillance Mission), and there has been discussion of extending observations at the Rubin Observatory for an additional few years. Together, these proposals would cost nearly one billion dollars. Our LDRD seeks to develop a computational approach that could unlock the same gains at a fraction of the cost, using existing and planned survey observations and LLNL computing resources. Supported by the Computing Grand Challenge, we have developed such an approach and obtained initial results that demonstrate the potential to achieve this lofty goal. In this talk I will discuss our 2021 results and our plans for continuing work in 2022.

Watch it here: https://llnlfed.webex.com/recordingservice/sites/llnlfed/recording/b72b8ec477f3103ab5ff00505681a68b/playback

Metal Strength in Atomistic Detail

Fifth in the series was presented by Vasily Bulatov

Relentless growth of HPC capabilities is making possible atomistic simulations on previously unthinkable scales of microns and milliseconds. I will briefly review our ongoing work, in which we are taking advantage of Grand Challenge allocations to run large-scale molecular dynamics simulations aiming to understand the origins and probe the limits of metal strength.

Watch it here: https://llnlfed.webex.com/llnlfed/lsr.php?RCID=27fcec257b2b28d68c8ba4b485d44e53

High Performance Computing Accelerates Drug Discovery to Combat Cancer

Fourth in the series was presented by Yue Yang

Cancer is a leading cause of mortality worldwide, accounting for 10 million deaths per year. Doctors and scientists keep seeking better drugs to care for people with cancer. It takes a long development and approval process—typically 10 years—for a drug to go from an idea in the lab to an approved drug that a doctor can prescribe. The first part of the process is preclinical research, during which a candidate molecule is discovered or designed, and then modified for best efficacy (e.g., killing cancer cells) and safety (not harming other cells or causing side effects). Computational modeling has been used to accelerate preclinical discovery; however, models may be limited in speed and accuracy. In this talk I will review how we used our recent Grand Challenge allocation for high-performance computing to accelerate cancer drug discovery.

First Principles Calculations of Atomic Nuclei and Their Interactions

Third in the series was presented by Kostas Kravvaris

Atomic nuclei are the heart of matter, the fuel of stars, and a unique doorway to explore some of the most fundamental laws of the universe. An overarching goal of nuclear physics is to arrive at a comprehensive understanding of atomic nuclei and their interactions, and to use this understanding to accurately predict nuclear properties that are difficult to measure, or simply inaccessible to experiment. This effort requires significant computing power and has benefited immensely from current hybrid high performance computing architectures. In this talk I will review recent Grand Challenge calculations of nuclear properties relevant to fundamental physics and applications, present ongoing efforts for quantifying their uncertainties, and discuss the application of quantum computers as the eventual next step in computing atomic nuclei and their interactions.

Watch it here (internal only): https://llnlfed.webex.com/llnlfed/lsr.php?RCID=32d63a00f09681c2848516721769ac90

Origins of Matter

Second in the series was given by Pavlos Vranas

It has been well established that protons and neutrons are not elementary particles. Instead, they are composites made of constituent particles called quarks and gluons. Quarks and gluons are elementary, and their interactions are described by the theory of Quantum Chromodynamics (QCD). Calculations of QCD are important in revealing the structure and interactions of the proton, neutron, and other nuclear particles. Separately, it is also well established that an unknown substance permeates our Universe and, among other things, holds the galaxies together, with a mass density about five times larger than that of the visible Universe. It has been termed Dark Matter. A Dark Matter theory developed at LLNL suggests that it is similar to QCD. Both of these theories can only be solved by numerical simulation using a discrete space-time, the Lattice, on the fastest supercomputers available. The Grand Challenge program and the LLNL supercomputers have advanced this frontier into a world-leading effort.

Watch it here: https://hpc.llnl.gov/sites/default/files/vranas-Grand-Challenge.mp4

Earthquake Ground Motion Simulations on Sierra and Lassen with SW4

Our first talk was given by Artie Rodgers

SW4 is a summation-by-parts finite difference code for simulating seismic motions in 3D Earth models. Porting of SW4 to Sierra and Lassen with RAJA under the Institutional Center of Excellence project enabled faster, larger and more finely resolved simulations. This talk will highlight some of the science that was made possible by these advances and executed in an FY2019 Computing Grand Challenge allocation. Further enhancements to SW4 are being made under the EQSIM DOE Exascale Computing Project.

Watch it here: https://hpc.llnl.gov/sites/default/files/rodgers-Grand-Challenge.mp4