Grand Challenge Program

Simulated strength of shaking from a magnitude 7.0 Hayward Fault earthquake showing peak ground velocity (colorbar) and seismograms (blue) at selected locations (triangles). Rodgers/Sjogreen/Petersson

The Computing Grand Challenge Program allocates significant quantities of institutional computational resources to LLNL researchers to perform cutting-edge research on the LC capability computers.

Grand Challenge Seminar Series 

M&IC and the Deputy Director for Science and Technology are pleased to announce the revival of the Grand Challenge Seminar Series.

Grand Challenge #13 Recipient: Earthquake Simulations

The first talk, given by Artie Rodgers, was entitled "Earthquake Ground Motion Simulations on Sierra and Lassen with SW4."

SW4 is a summation-by-parts finite difference code for simulating seismic motions in 3D Earth models. Porting SW4 to Sierra and Lassen with RAJA under the Institutional Center of Excellence project enabled faster, larger, and more finely resolved simulations. The talk highlighted some of the science made possible by these advances and executed under an FY2019 Computing Grand Challenge allocation. Further enhancements to SW4 are being made under the EQSIM DOE Exascale Computing Project.
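To give a flavor of the finite-difference approach that SW4 applies (in far more sophisticated, summation-by-parts form) to 3D seismic wave propagation, here is a minimal sketch of a 1D scalar wave equation advanced with an explicit centered stencil. Every name and parameter below is a pedagogical choice, not taken from SW4:

```python
import numpy as np

# Illustrative only: 1D scalar wave equation u_tt = c^2 u_xx solved with a
# second-order explicit finite-difference stencil. Seismic codes like SW4
# use far more elaborate schemes on 3D grids, but the update has the same
# basic shape.

nx, nt = 201, 400          # grid points, time steps
dx, c = 1.0, 1.0           # grid spacing, wave speed
dt = 0.5 * dx / c          # time step obeying the CFL stability limit

u_prev = np.zeros(nx)      # displacement at time t - dt
u_curr = np.zeros(nx)      # displacement at time t
u_curr[nx // 2] = 1.0      # point "source" in the middle of the domain

r2 = (c * dt / dx) ** 2
for _ in range(nt):
    # Centered stencil: u_next = 2*u - u_prev + r2 * discrete_laplacian(u)
    lap = u_curr[:-2] - 2.0 * u_curr[1:-1] + u_curr[2:]
    u_next = np.empty_like(u_curr)
    u_next[1:-1] = 2.0 * u_curr[1:-1] - u_prev[1:-1] + r2 * lap
    u_next[0] = 0.0        # fixed (reflecting) boundaries
    u_next[-1] = 0.0
    u_prev, u_curr = u_curr, u_next

# For a stable scheme the peak amplitude stays bounded over the whole run.
peak = np.abs(u_curr).max()
```

With the CFL-respecting time step chosen above, the explicit update remains stable; taking `dt` larger than `dx / c` would make it blow up.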

The talk is currently available internally at:

Grand Challenge #14 Recipient: Origins of Matter

Second in the series was Pavlos Vranas' "Origins of Matter."

It has been well established that protons and neutrons are not elementary particles. Instead, they are composites of constituent particles called quarks and gluons. Quarks and gluons are elementary, and their interactions are described by the theory of Quantum Chromodynamics (QCD). QCD calculations are important in revealing the structure and interactions of the proton, the neutron, and the other nuclear particles. Separately, it is also well established that an unknown substance permeates our Universe and, among other things, holds the galaxies together, with a mass density about five times larger than that of our visible Universe. It has been termed Dark Matter. A Dark Matter theory developed at LLNL suggests that it is similar to QCD. Both theories can be solved only by numerical simulation on a discrete space-time, the lattice, using the fastest supercomputers available. The Grand Challenge program and the LLNL supercomputers have advanced this frontier into a world-leading effort.
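As a toy illustration of the lattice idea, the sketch below discretizes Euclidean time for the quantum harmonic oscillator and samples its path integral with the Metropolis algorithm. Lattice QCD applies the same strategy on a 4D space-time lattice with vastly more complex fields; all parameters here are pedagogical choices, not from any production code:

```python
import math
import random

# Toy lattice "field theory": the Euclidean path integral of a harmonic
# oscillator, discretized onto N time slices with spacing a and sampled
# by Metropolis Monte Carlo. Units with mass = frequency = 1.

random.seed(0)
N, a = 64, 0.5
x = [0.0] * N                        # field value on each lattice site

def action_delta(i, new):
    """Change in the Euclidean action if site i is set to `new`."""
    old = x[i]
    left = x[(i - 1) % N]            # periodic boundary conditions
    right = x[(i + 1) % N]
    def local_action(v):
        kinetic = ((v - left) ** 2 + (right - v) ** 2) / (2.0 * a)
        potential = a * 0.5 * v * v
        return kinetic + potential
    return local_action(new) - local_action(old)

for _ in range(2000):                # Metropolis sweeps over the lattice
    for i in range(N):
        proposal = x[i] + random.uniform(-1.0, 1.0)
        dS = action_delta(i, proposal)
        if dS < 0.0 or random.random() < math.exp(-dS):
            x[i] = proposal          # accept with probability min(1, e^-dS)

mean_x2 = sum(v * v for v in x) / N  # <x^2>, a simple lattice observable
```

Averaging `mean_x2` over many configurations would estimate the ground-state width of the oscillator; production lattice QCD codes measure far richer observables the same way, at enormously larger computational cost.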

This talk is currently available internally at:

Institutional Computing Grand Challenge program celebrates its 15th campaign

Research projects ranging in scope from multiscale simulations of warm dense plasmas to modeling tumor cell dynamics in the bloodstream were among those allocated time on Laboratory supercomputers under the recently announced Institutional Unclassified Computing Grand Challenge Awards. The 15th Annual Computing Grand Challenge campaign awarded over 87,000 node-hours per week to projects such as these that address compelling, large-scale problems, push the envelope of capability computing, and advance science. “The diversity and quality of this year’s proposals reflect the scientific breadth and excellence of LLNL’s computational science community. These activities are part of what makes the Lab such an exciting place to work,” said Bruce Hendrickson, Computation Associate Director.

Teams with winning proposals will be allocated time on Quartz, a 3.7-petaFLOP/s machine; Ruby, a 6.0-petaFLOP/s machine; and Lassen, a ~23-petaFLOP/s machine. Quartz, Ruby, and Lassen are systems dedicated to unclassified research through the Laboratory’s Multiprogrammatic & Institutional Computing (M&IC) program. High-performance computers generally consist of thousands of cores: Quartz has 2,976 nodes with 36 cores each, for a total of 107,136 cores; Ruby has 1,512 nodes with 56 cores each, for a total of 84,672 cores; and Lassen has 788 nodes, each with 44 cores and 4 GPUs, for a total of 34,672 cores and 3,152 GPUs. Codes that have been written or modified to use the powerful GPUs on Lassen have seen a large increase in performance over CPU-only platforms.
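The machine figures above can be sanity-checked with back-of-the-envelope arithmetic, shown here for Quartz (the numbers are the article's; the derived per-core figure is only a rough estimate):

```python
# Total core count is nodes x cores-per-node, and dividing the quoted peak
# speed by the core count gives a rough per-core peak. Quartz figures from
# the article; the per-core number is an illustrative derived estimate.

quartz_nodes = 2976
quartz_cores_per_node = 36
quartz_peak_pflops = 3.7

total_cores = quartz_nodes * quartz_cores_per_node           # 107,136 cores
per_core_gflops = quartz_peak_pflops * 1e6 / total_cores     # ~34.5 GF/s per core
```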

“The computing allocations announced today in the Computing Grand Challenge program directly support continuing excellence in our computing, simulation, and data science core competency,” said Pat Falcone, Deputy Director for Science & Technology. “Execution of these research projects with the allocated compute time will extend our capabilities as well as deliver important scientific discoveries.” 

Project proposals were reviewed by at least three internal referees and one external referee. Criteria used in selecting the projects included: quality and potential impact of the proposed science and/or engineering, impact of the proposed use of Grand Challenge computing resources, ability to effectively utilize a high-performance institutional computing infrastructure, quality and extent of external collaborations, and alignment with the Laboratory's strategic vision. Allocations were awarded in two categories, Tier 1 and Tier 2; Tier 1 projects receive a higher allocation and a higher priority.

Over the last 20 years, high-performance computing resources dedicated to unclassified research have increased more than 400,000-fold, from 72 gigaFLOPS in 1997 to almost 34 petaFLOPS today. To put that in perspective, only seven countries in the world possess more computing resources than the Laboratory makes available for unclassified computing.

See the chart for allocations awarded under the Computing Grand Challenge program.

Grand Challenge Projects/Allocations

For those with internal site access, all 15 years of Grand Challenge project titles and PIs are available on our Grand Challenge Project subpages.

Grand Challenge Utilization Data

For those with internal site access, many Grand Challenge project titles and allocations are in our Utilization Data area.