Grand Challenge Program

Simulated strength of shaking from a magnitude 7.0 Hayward Fault earthquake showing peak ground velocity (colorbar) and seismograms (blue) at selected locations (triangles). Rodgers/Sjogreen/Petersson

The Computing Grand Challenge Program allocates significant quantities of institutional computational resources to LLNL researchers to perform cutting-edge research on the LC capability computers.

Call for Proposals: The 17th Annual Grand Challenge

The Computing Directorate has issued a call for proposals (see Newsline article for download) for projects requiring significant unclassified computing resource allocations (greater than 25,000 node-hours/year) on institutional capability systems for up to one year.

This call for proposals to use high performance computing (HPC) resources is open to all Laboratory scientists and engineers. Current grand challenge principal investigators must reapply to be considered for continued computer time. Projects can request allocations on Lassen, Quartz, Ruby, or some combination of these systems.

Approximately 15 to 20 proposals will be selected to receive these significant allocations. To be considered, proposals must address a compelling, Grand-Challenge-scale, mission-related problem that pushes the envelope of scale and methodology in capability computing while promising unprecedented scientific and/or engineering discoveries. Collaborators (either academic or industrial) are encouraged. A successful project would be expected to receive high-level recognition from mission sponsors, the computing community, and the scientific community at large.

Note that the Computing Grand Challenge program does not award any funds or cause funds to be awarded; proposals selected for award through this call are expected to be fully funded, either institutionally or externally.

Proposals must be submitted via e-mail by Wednesday, August 31, 2022, to computing-grand-challenge [at] in PDF or MS Word format. This is a firm deadline; proposals received after this date may not be accepted. All proposals must adhere to the proposal content and length guidelines (see internal Newsline article for download), which have changed significantly from previous years. Acknowledgement of receipt of your proposal will be sent within two business days; if you do not receive this acknowledgement, please resend your proposal. Grand Challenge computing allocations awarded under this program will be announced December 19, 2022.

Grand Challenge #16: Awardees


Research projects ranging in scope from simulations of nanostructured targets being hit by high-power laser pulses to probing the fundamental theories of physics as they relate to dark matter were among those allocated time on Laboratory supercomputers under the recently announced Institutional Unclassified Computing Grand Challenge Awards.

The 16th Annual Computing Grand Challenge campaign awarded over 87,000 node-hours per week to projects such as these that address compelling, large-scale problems, push the envelope of capability computing, and advance science. “As we have seen year after year, the range and quality of these proposals illustrate the breadth and excellence of LLNL’s computational science community. These projects will enable exciting science and are part of what makes the Lab such a rewarding place to work,” said Bruce Hendrickson, Computation associate director.

Teams with winning proposals will be allocated time on Quartz, a 3.7 petaFLOP/s machine; Ruby, a 6.0 petaFLOP/s machine; and Lassen, a ~23 petaFLOP/s machine. Quartz, Ruby, and Lassen are systems dedicated to unclassified research through the Laboratory’s Multiprogrammatic & Institutional Computing (M&IC) program. High performance computers generally consist of thousands of cores: the Quartz system has 2,976 nodes, each with 36 cores, for a total of 107,136 cores; the Ruby system has 1,480 nodes, each with 56 cores, for a total of 84,672 cores; while Lassen has 788 nodes, each with 44 cores and 4 GPUs, for a total of 30,096 cores and 3,152 GPUs. Codes that have been written or modified to use the powerful GPUs on Lassen have seen a large increase in performance over CPU-only platforms.
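The aggregate figures above follow directly from multiplying node counts by per-node resources. A minimal sketch of that arithmetic (the helper function is illustrative, not part of any LLNL tooling), using the Quartz and Lassen figures quoted above:

```python
def cluster_totals(nodes, cores_per_node, gpus_per_node=0):
    """Return (total cores, total GPUs) for a cluster of identical nodes."""
    return nodes * cores_per_node, nodes * gpus_per_node

# Quartz: 2,976 nodes x 36 cores each (CPU-only system)
quartz_cores, _ = cluster_totals(2976, 36)

# Lassen: 788 nodes, each with 4 GPUs
_, lassen_gpus = cluster_totals(788, 44, gpus_per_node=4)

print(quartz_cores)  # 107136 cores, matching the total quoted above
print(lassen_gpus)   # 3152 GPUs
```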

“Our Computing Grand Challenge program is a critical factor in the continuing excellence of our computing, simulation, and data science core competency,” said Pat Falcone, deputy director for Science & Technology. “The computing allocations announced today will support key research efforts and, I am sure, will result in important scientific discoveries.”

Project proposals were reviewed by a minimum of one external and three internal referees. Criteria used in selecting the projects included: quality and potential impact of proposed science and/or engineering, impact of proposed utilization of Grand Challenge computing resources, ability to effectively utilize a high-performance institutional computing infrastructure, quality and extent of external collaborations, and alignment with the Laboratory strategic vision. Allocations were awarded in two categories, Tier 1 and Tier 2. Tier 1 projects receive a higher allocation and a higher priority.

Since 1997, high performance computing resources dedicated to unclassified research have increased more than 10,000-fold, from 72 gigaFLOPS then to almost 34 petaFLOPS today. To put that in perspective, only seven countries in the world possess more computing resources than the Laboratory makes available for unclassified computing.
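The growth factor can be checked with a quick unit conversion (1 petaFLOPS = 10^6 gigaFLOPS); this snippet only verifies the arithmetic behind the figures quoted above:

```python
GIGA_PER_PETA = 1_000_000  # 1 petaFLOPS = 1e6 gigaFLOPS

flops_1997 = 72                    # unclassified capacity in 1997, gigaFLOPS
flops_now = 34 * GIGA_PER_PETA     # ~34 petaFLOPS, expressed in gigaFLOPS

growth = flops_now / flops_1997
print(growth > 10_000)  # True: comfortably more than a 10,000-fold increase
```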

See the chart for allocations awarded under the Computing Grand Challenge program.

Grand Challenge Projects/Allocations

For those with internal site access, all 15 years of Grand Challenge project titles and PIs are available on our Grand Challenge Awardees page.

Grand Challenge Utilization Data

For those with internal site access, many Grand Challenge project titles and allocations are in our Utilization Data area.