Grand Challenge Program


The Computing Grand Challenge Program allocates significant institutional computational resources to LLNL researchers performing cutting-edge research on the LC capability computers.

Institutional Computing Grand Challenge program celebrates its 19th campaign

Research projects ranging in scope from simulations characterizing the atmospheres of distant planets to campaigns that combine machine learning methods with large-scale simulations for drug discovery were among those allocated time on Laboratory supercomputers under the recently announced Institutional Unclassified Computing Grand Challenge Awards. The 19th annual Computing Grand Challenge campaign awarded more than 80,000 node-hours per week to projects that address compelling, large-scale problems, push the envelope of capability computing, and advance science. “As in past years, the diversity and quality of this year’s proposals showcase the scientific breadth and excellence of LLNL’s computational science community,” said Bruce Hendrickson, Principal Associate Director for Computing. “These activities are part of what makes the Lab such an exciting place to work.”

Teams with winning proposals will be allocated time on Dane, a 7.0-petaFLOP/s machine; Ruby, a 6.0-petaFLOP/s machine; Lassen, a 23-petaFLOP/s machine; and eventually Tuolumne, a 289-petaFLOP/s machine. Dane, Ruby, and Lassen are dedicated to unclassified research through the Laboratory’s Multiprogrammatic & Institutional Computing (M&IC) program. Tuolumne is the newest addition to the M&IC program and a smaller version of the El Capitan exascale system; Grand Challenge projects currently running on Lassen will migrate to it in the coming months. High-performance computers generally comprise thousands of cores: Dane has 1,496 nodes with 112 cores each, for a total of 167,552 cores; Ruby has 1,512 nodes with 56 cores each, for a total of 84,672 cores; and Lassen has 788 nodes, each with 44 cores and 4 GPUs, for a total of 34,672 cores and 3,152 GPUs. Codes written or modified to exploit the powerful GPUs on Lassen, and now Tuolumne, have seen large performance gains over CPU-only platforms.
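The system totals above follow directly from nodes × per-node counts; as an illustrative sketch (not part of the program documentation), the arithmetic for Dane’s cores and Lassen’s GPUs works out as:

```python
# Illustrative arithmetic only: total = nodes x per-node count.
# Node and per-node figures are the Dane and Lassen numbers quoted above.
dane_nodes, dane_cores_per_node = 1496, 112
lassen_nodes, lassen_gpus_per_node = 788, 4

dane_cores = dane_nodes * dane_cores_per_node    # 167,552 cores
lassen_gpus = lassen_nodes * lassen_gpus_per_node  # 3,152 GPUs

print(f"Dane: {dane_cores:,} cores")
print(f"Lassen: {lassen_gpus:,} GPUs")
```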

“The allocations granted through the Computing Grand Challenge program are pivotal in fostering innovation and advancing our scientific frontiers,” said Pat Falcone, Deputy Director for Science & Technology. “By empowering researchers with these critical resources, we both enhance our computational capabilities and pave the way for groundbreaking discoveries.”

Project proposals were reviewed by a minimum of three internal referees and one external referee. Criteria used in selecting the projects included:

  • Quality and potential impact of the proposed science and/or engineering

  • Impact of the proposed utilization of Grand Challenge computing resources

  • Ability to effectively utilize a high-performance institutional computing infrastructure

  • Quality and extent of external collaborations

  • Alignment with the Laboratory’s strategic vision

Allocations were awarded in two categories, Tier 1 and Tier 2; Tier 1 projects receive a larger allocation and a higher priority.

Since 1997, high-performance computing resources dedicated to unclassified research have increased more than four-million-fold, from 72 gigaFLOPS then to more than 325 petaFLOPS today. To put that in perspective, only seven countries in the world possess more computing resources than the Laboratory makes available for unclassified computing.

See the chart for allocations awarded under the Computing Grand Challenge program.

Grand Challenge Proposal Information (future proposals due in August)

How it works

Grand Challenge allocations are awarded annually through a competitive proposal process. The majority of allocations are "Tier 2" awards (25k-50k node-hours), with a smaller number of larger "Tier 1" awards (100k-200k node-hours) given to the highest-ranked proposals that describe a compelling need for the additional resources. Lists of all current and past awardees can be found here.

Who can apply

Any LLNL employee can apply! Grand Challenge awards are given solely on the basis of technical merit and anticipated impact. The awards are for compute time only, so applicants must have suitable funding in place to cover their effort in performing the proposed simulation campaigns. Timeline for Grand Challenge 2026 coming soon!

Grand Challenge resources are intended to:

  • Enable highly visible computational science through high impact publications

  • Demonstrate novel and/or cutting-edge use of HPC (e.g. new codes, new algorithms, new workflows)

  • Enhance/maintain LLNL’s reputation as a leader in HPC and computational science

Grand Challenge resources are NOT intended to:

  • Supplement programmatic or directorate compute resources

  • Provide long-term support for projects that don’t have external visibility

Grand Challenge Contact

hill134 [at] llnl.gov (Judy Hill)
Grand Challenge Program Leader

rhoden1 [at] llnl.gov (Nicole Rhoden)
Grand Challenge Coordinator

M&IC Contacts

springmeyer1 [at] llnl.gov (Becky Springmeyer)
M&IC Program Director

tomaschke1 [at] llnl.gov (Greg Tomaschke)
M&IC Deputy Program Director