Grand Challenge Review Process

All proposals will be evaluated by two to three internal reviewers and at least one external reviewer. Proposals are assigned numeric scores in six categories:

  • Significance and impact of science
  • Significance and impact of computational approach
  • Quality of HPC research plan
  • Quality/extent of external collaborations
  • Alignment with Laboratory S&T strategic vision
  • Past Grand Challenge performance

The first three categories are weighted most heavily. Authors are strongly encouraged to be specific about the need for, and the likely outcomes of, a single-year simulation campaign. A detailed utilization plan for HPC resources at the Tier 2 level is essential. Projects that score highly overall and request consideration for Tier 1 awards will have their Tier 1 justifications scrutinized by the full Grand Challenge committee. There is no penalty for not requesting a Tier 1 award.

The following scoring guidance is provided to reviewers to improve scoring consistency across the committee; it is reproduced here for reference only. Note that reviewers will penalize proposals that make grandiose but unrealistic claims.

Grand Challenge Scoring Guidance for Reviewers

Significance and impact of expected result – what level of scientific importance and visibility will be attached to the results emerging from calculations proposed for the Computing Grand Challenge (CGC) period?

5.0: Highest visibility: groundbreaking science, results probably published in Nature or Science, possibly attracts the attention of the popular press

4.0: Exceptional: exceptional science, results probably published in Physical Review Letters or equivalent, possibly as the cover

3.0: Excellent: excellent science, likely to yield one or more strong technical publications, possibly in PRL

2.0: Very Good: interesting science, likely to yield a technical publication

1.0: Marginal: unclear if results will be of significant interest

0.0: Poor: computing resources would be better invested elsewhere

 

Significance and impact of the computational approach – what level of attention will be paid to the algorithms/methodology proposed, independent of scale? How will this work advance the field?

5.0: Highest visibility: project showcases new, cutting-edge HPC capabilities or algorithms; demonstrates a potentially seminal HPC technique at extreme scale; potential/recent Gordon Bell Prize finalist

4.0: Exceptional: project demonstrates innovative use of HPC resources in a way that is publishable independently of the underlying application; will likely result in a high-quality peer-reviewed HPC submission at the level of Supercomputing or IPDPS (venues with acceptance rates comparable to PRL)

3.0: Excellent: project is a strong example of applying established state-of-the-art codes or algorithms to new problems

2.0: Very Good: project uses standard codes or algorithms in a routine way

1.0: Marginal: computational approach could use improvement

0.0: Poor: code/method has serious problems and should be improved before attempting Grand Challenge-scale applications

 

Quality of HPC research plan – to what extent does the proposed utilization of Grand Challenge computing resources (at the Tier 2 award level) enable something special?

5.0: Highest value: near-perfect use of Grand Challenge resources; codes make exceptional and/or novel use of parallel hardware; research plan absolutely maximizes the bang for the buck of a Tier 2 award.

4.0: Exceptional value: compelling use of Grand Challenge resources; codes have demonstrated ability to use parallel hardware very well; research plan is well thought out and makes very good use of Tier 2 award.

3.0: Excellent value: reasonable use of Grand Challenge resources; codes have demonstrated ability to use parallel hardware well; research plan is sufficient and makes reasonable use of Tier 2 award.

2.0: Some value: research plan lacks detail; some concerns about ability of codes to make efficient use of requested parallel hardware.

1.0: Marginal value: research plan is incomplete or has critical gaps; serious concerns about ability of codes to make efficient use of requested parallel hardware.

0.0: Poor value: Grand Challenge resources would be wasted on this project.

 

Quality/extent of external collaborations

2.0: Active external collaboration with recognized domain experts brings high visibility to project

1.0: Some external collaboration exists (but not of sufficient stature to enhance visibility); possibility of students and postdocs working with LLNL

0.0: No collaboration to speak of

 

Alignment with Laboratory S&T strategic vision – is this an area that the Laboratory cares about? Will this area generate a new funding source for the Laboratory?

2.0: LLNL cares about this area; of direct benefit to the Lab’s S&T mission

1.0: More or less of interest; somewhat related to the S&T mission

0.0: Little or no overlap with the Lab’s S&T mission

 

Performance on previous Computing Grand Challenge awards commensurate with the level of resources*

-1.0: Poor performance on previous allocation

0.0: Acceptable performance on previous Tier 1 or Tier 2 allocation(s) or no previous allocation

*This criterion measures past performance and is used only as a penalty, by applying a negative value between 0.0 and -1.0. For example, to penalize a current proposal by 0.5 point based on past GC projects, assign a score of -0.5.
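
For illustration only, here is a minimal sketch in Python of how a reviewer's per-category scores might be tallied against the rubric above. The document does not prescribe how category scores are combined; a simple sum is assumed here, and the category names, ranges, and the total_score function are hypothetical conveniences, not an official scoring formula.

    # Minimal sketch of tallying one reviewer's scores under the rubric above.
    # ASSUMPTION: category scores are simply summed; the document does not
    # state a combination rule. Category names and ranges below are
    # hypothetical labels for the six rubric categories.

    CATEGORY_RANGES = {
        "science_significance":       (0.0, 5.0),
        "computational_significance": (0.0, 5.0),
        "hpc_research_plan":          (0.0, 5.0),
        "external_collaborations":    (0.0, 2.0),
        "strategic_alignment":        (0.0, 2.0),
        "past_gc_performance":        (-1.0, 0.0),  # penalty only
    }

    def total_score(scores: dict) -> float:
        """Sum per-category scores after checking each against its rubric range."""
        total = 0.0
        for category, (lo, hi) in CATEGORY_RANGES.items():
            value = scores[category]
            if not lo <= value <= hi:
                raise ValueError(f"{category}={value} outside rubric range [{lo}, {hi}]")
            total += value
        return total

    # Example: a strong proposal carrying a 0.5-point past-performance penalty.
    example = {
        "science_significance": 4.0,
        "computational_significance": 3.0,
        "hpc_research_plan": 4.0,
        "external_collaborations": 2.0,
        "strategic_alignment": 2.0,
        "past_gc_performance": -0.5,
    }
    print(total_score(example))  # 14.5

Under this summation assumption, the maximum attainable total is 19.0, and the past-performance criterion can only reduce a total, by at most one point.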