Grand Challenge Program

Simulated strength of shaking from a magnitude 7.0 Hayward Fault earthquake showing peak ground velocity (colorbar) and seismograms (blue) at selected locations (triangles). Rodgers/Sjogreen/Petersson

The Computing Grand Challenge Program allocates significant quantities of institutional computational resources to LLNL researchers to perform cutting-edge research on the LC capability computers.


Grand Challenge #16: Awardees


Research projects ranging in scope from simulations of nanostructured targets being hit by high-power laser pulses to probing the fundamental theories of physics as they relate to dark matter were among those allocated time on Laboratory supercomputers under the recently announced Institutional Unclassified Computing Grand Challenge Awards.

The 16th Annual Computing Grand Challenge campaign awarded over 87,000 node-hours per week to projects such as these that address compelling, large-scale problems, push the envelope of capability computing, and advance science. “As we have seen year after year, the range and quality of these proposals illustrate the breadth and excellence of LLNL’s computational science community. These projects will enable exciting science and are part of what makes the Lab such a rewarding place to work,” said Bruce Hendrickson, Computation associate director.

Teams with winning proposals will be allocated time on Quartz, a 3.7-petaFLOP/s machine; Ruby, a 6.0-petaFLOP/s machine; and Lassen, a ~23-petaFLOP/s machine. Quartz, Ruby, and Lassen are systems dedicated to unclassified research through the Laboratory’s Multiprogrammatic & Institutional Computing (M&IC) program. High-performance computers generally consist of thousands of cores: the Quartz system has 2,976 nodes, each with 36 cores, for a total of 107,136 cores; the Ruby system has 1,480 nodes, each with 56 cores, for a total of 84,672 cores; and Lassen has 788 nodes, each with 44 cores and 4 GPUs, for a total of 30,096 cores and 3,152 GPUs. Codes that have been written or modified to use the powerful GPUs on Lassen have seen a large increase in performance over CPU-only platforms.
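The system totals above follow from simple per-node arithmetic (total cores = nodes × cores per node; total GPUs = nodes × GPUs per node). A minimal sketch checking two of the quoted figures, using only the numbers given in the text:

```python
# Quartz: 2,976 nodes x 36 cores per node
quartz_total_cores = 2976 * 36
print(quartz_total_cores)  # 107136, matching the 107,136 quoted above

# Lassen: 788 nodes x 4 GPUs per node
lassen_total_gpus = 788 * 4
print(lassen_total_gpus)  # 3152, matching the 3,152 quoted above
```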

“Our Computing Grand Challenge program is a critical factor in the continuing excellence of our computing, simulation, and data science core competency,” said Pat Falcone, deputy director for Science & Technology. “The computing allocations announced today will support key research efforts and, I am sure, will result in important scientific discoveries.”

Project proposals were reviewed by a minimum of one external and three internal referees. Criteria used in selecting the projects included: quality and potential impact of proposed science and/or engineering, impact of proposed utilization of Grand Challenge computing resources, ability to effectively utilize a high-performance institutional computing infrastructure, quality and extent of external collaborations, and alignment with the Laboratory strategic vision. Allocations were awarded in two categories, Tier 1 and Tier 2. Tier 1 projects receive a higher allocation and a higher priority.

Over the last 20 years, high performance computing resources dedicated to unclassified research have increased more than 10,000-fold from 72 gigaFLOPS in 1997 to almost 34 petaFLOPS today. To put that in perspective, only seven countries in the world possess more computing resources than the Laboratory makes available for unclassified computing.
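The "more than 10,000-fold" figure can be verified directly from the two endpoints given (72 gigaFLOPS and ~34 petaFLOPS); this is illustrative arithmetic only:

```python
# Growth from 72 gigaFLOPS (1997) to roughly 34 petaFLOPS today
giga = 1e9
peta = 1e15
fold_increase = (34 * peta) / (72 * giga)
print(round(fold_increase))  # ~472,222-fold, comfortably "more than 10,000-fold"
```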

See the chart for allocations awarded under the Computing Grand Challenge program.

Grand Challenge Projects/Allocations

For those with internal site access, all 15 years of Grand Challenge project titles and PIs are available on our Grand Challenge Awardees page.

Grand Challenge Utilization Data

For those with internal site access, many Grand Challenge project titles and allocations are in our Utilization Data area.