How Do I Review Thee? Let Me Count the Ways: A Comparison of Research Grant Proposal Review Criteria Across US Federal Funding Agencies

Evaluating grant proposals: lessons from using metrics as screening device

Journal of Data and Information Science, 2023

This study examines the effects of using publication-based metrics for initial screening in the application process for project leaders. The key questions are whether a formal policy affects the allocation of funds to researchers with better publication records and how the previous academic performance of principal investigators relates to future project results. Design/methodology/approach: We compared two competitions, before and after the policy raised the publication threshold for principal investigators. We analyzed 9,167 papers published by 332 winners in physics and the social sciences and humanities (SSH), and 11,253 publications resulting from the funded projects. Findings: We found that among physicists, even in the first period, grants tended to be allocated to prolific authors publishing in high-quality journals. In contrast, the SSH grantees had been less prolific in publishing internationally in both periods; however, in the second period, the selection of grant recipients improved, with grants going to authors who were more productive in both the quantity and the quality of their publications. There was no evidence that this better selection of grant recipients resulted in better publication records during the life of the grant. Originality: This study contributes to the discussion of formal policies that rely on metrics for the evaluation of grant proposals. The Russian case shows that such a policy may profoundly change the supply side of applicants, especially in disciplines that are less suited to metric-based evaluation. Despite the criticism directed at metrics, they may be a useful additional instrument in academic systems where professional expertise is corrupted and prevents the allocation of funds to prolific researchers.
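As a concrete illustration of this kind of metric-based screening, the sketch below applies a hypothetical publication threshold to applicant records and compares the screened cohort with the full pool; the threshold, field names, and data are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch of a formal metric-based screening step.
# All thresholds and records are invented.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    papers_5yr: int           # publications in the last five years
    top_quartile_papers: int  # of those, papers in top-quartile journals

def passes_threshold(a: Applicant, min_papers: int = 5) -> bool:
    """Formal eligibility screen: at least `min_papers` recent publications."""
    return a.papers_5yr >= min_papers

applicants = [
    Applicant("A", papers_5yr=12, top_quartile_papers=7),
    Applicant("B", papers_5yr=3, top_quartile_papers=1),
    Applicant("C", papers_5yr=6, top_quartile_papers=2),
]

eligible = [a for a in applicants if passes_threshold(a)]
for label, group in (("all applicants", applicants), ("after screening", eligible)):
    mean_top = sum(a.top_quartile_papers for a in group) / len(group)
    print(f"{label}: n={len(group)}, mean top-quartile papers={mean_top:.1f}")
```

Raising `min_papers` changes who applies at all, which is the supply-side effect the study highlights.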

Studying grant decision-making: a linguistic analysis of review reports

Scientometrics, 2018

Peer and panel review are the dominant forms of grant decision-making, despite their serious weaknesses, as shown by many studies. This paper contributes to the understanding of the grant selection process through a linguistic analysis of review reports. In this way we reconstruct several aspects of the evaluation and selection process: which dimensions of a proposal are discussed during the process and how, and what distinguishes successful from unsuccessful applications? We combine the linguistic findings with interviews with panel members and with bibliometric performance scores of applicants. The former provide context, and the latter help to interpret the linguistic findings. The analysis shows that the performance of the applicant and the content of the proposed study are assessed with the same categories, suggesting that the panelists do not, in practice, distinguish between past performance and promising new research ideas. The analysis also suggests that ...

Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices

PLoS ONE, 2012

Background: Peer review of grant applications has been criticized as lacking reliability. Studies showing poor agreement among reviewers supported this possibility but usually focused on reviewers' scores and failed to investigate reasons for disagreement. Here, our goal was to determine how reviewers rate applications by investigating reviewer practices and grant assessment criteria.

Criteria for assessing grant applications: a systematic review

Palgrave Communications, 2020

Criteria are an essential component of any procedure for assessing merit. Yet, little is known about the criteria peers use to assess grant applications. In this systematic review we therefore identify and synthesize studies that examine grant peer review criteria in an empirical and inductive manner. To facilitate the synthesis, we introduce a framework that classifies what is generally referred to as ‘criterion’ into an evaluated entity (i.e., the object of evaluation) and an evaluation criterion (i.e., the dimension along which an entity is evaluated). In total, the synthesis includes 12 studies on grant peer review criteria. Two-thirds of these studies examine criteria in the medical and health sciences, while studies in other fields are scarce. Few studies compare criteria across different fields, and none focus on criteria for interdisciplinary research. We conducted a qualitative content analysis of the 12 studies and thereby identified 15 evaluation criteria and 30 evaluated entities, as well as the relations between them. Based on a network analysis, we determined the following main relations between the identified evaluation criteria and evaluated entities. The aims and outcomes of a proposed project are assessed in terms of the evaluation criteria originality, academic relevance, and extra-academic relevance. The proposed research process is evaluated both on the content level (quality, appropriateness, rigor, coherence/justification), as well as on the level of description (clarity, completeness). The resources needed to implement the research process are evaluated in terms of the evaluation criterion feasibility. Lastly, the person and personality of the applicant are assessed from a ‘psychological’ (motivation, traits) and a ‘sociological’ (diversity) perspective. Furthermore, we find that some of the criteria peers use to evaluate grant applications do not conform to the fairness doctrine and the ideal of impartiality. Grant peer review could therefore be considered unfair and biased. Our findings suggest that future studies on criteria in grant peer review should focus on the applicant, include data from non-Western countries, and examine fields other than the medical and health sciences.
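The entity-criterion relations listed in this abstract can be read as a small bipartite network. The sketch below encodes them that way and counts, per evaluated entity, how many evaluation criteria attach to it; the edge list paraphrases the relations stated above, and the code is an illustration rather than the authors' analysis.

```python
# Bipartite entity-criterion network built from the relations the review reports.
from collections import defaultdict

edges = [
    ("aims and outcomes", "originality"),
    ("aims and outcomes", "academic relevance"),
    ("aims and outcomes", "extra-academic relevance"),
    ("research process (content)", "quality"),
    ("research process (content)", "appropriateness"),
    ("research process (content)", "rigor"),
    ("research process (content)", "coherence/justification"),
    ("research process (description)", "clarity"),
    ("research process (description)", "completeness"),
    ("resources", "feasibility"),
    ("applicant", "motivation"),
    ("applicant", "traits"),
    ("applicant", "diversity"),
]

by_entity = defaultdict(set)
for entity, criterion in edges:
    by_entity[entity].add(criterion)

# Degree counts show which evaluated entities attract the most criteria.
for entity, criteria in sorted(by_entity.items(), key=lambda kv: -len(kv[1])):
    print(f"{entity}: {len(criteria)} criteria -> {sorted(criteria)}")
```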

Grant writing and grant peer review as questionable research practices

F1000Research, 2021

A large part of governmental research funding is currently distributed through the peer review of project proposals. In this paper, we argue that such funding systems incentivize and even force researchers to violate five moral values, each of which is central to commonly used scientific codes of conduct. Our argument complements existing epistemic arguments against peer-review project funding systems and, accordingly, strengthens the mounting calls for reform of these systems.

Measuring bias, burden and conservatism in research funding processes

F1000Research

Background: Grant funding allocation is a complex process that in most cases relies on peer review. A recent study identified a number of challenges associated with the use of peer review in the evaluation of grant proposals. Three important issues identified were bias, burden, and conservatism, and the work concluded that further experimentation and measurement are needed to assess the performance of funding processes. Methods: We have conducted a review of international practice in the evaluation and improvement of grant funding processes in relation to bias, burden, and conservatism, based on a rapid evidence assessment and interviews with research funding agencies. Results: The evidence gathered suggests that funders' efforts so far to measure these characteristics systematically have been limited. However, there are some examples of measures and approaches that could be developed and more widely applied. Conclusions: The majority of the literature focuses primarily on the appl...

A new approach to grant review assessments: score, then rank

Research integrity and peer review, 2023

Background: In many grant review settings, proposals are selected for funding on the basis of summary statistics of review ratings. Challenges of this approach (including the presence of ties and an unclear ordering of funding preference among proposals) could be mitigated if rankings such as top-k preferences or paired comparisons, which are local evaluations that enforce ordering across proposals, were also collected and incorporated in the analysis of review ratings. However, analyzing ratings and rankings simultaneously has not been done until recently. This paper describes a practical method for integrating rankings and scores and demonstrates its usefulness for making funding decisions in real-world applications. We first present the application of our existing joint model for rankings and ratings, the Mallows-Binomial, in obtaining an integrated score for each proposal and generating the induced preference ordering. We then apply this methodology to several theoretical "toy" examples of rating and ranking data, designed to demonstrate specific properties of the model. We then describe an innovative protocol for collecting rankings of the top six proposals as an add-on to typical peer review scoring procedures, and provide a case study using actual peer review data to exemplify the output and show how the model can appropriately resolve judges' evaluations. For the theoretical examples, we show how the model uses rankings to provide a preference order for equally rated proposals, how it orders proposals using ratings and only partial rankings (and how the results differ from a ratings-only approach), and how it handles judges who provide internally inconsistent ratings/rankings or outlier scores. Finally, we discuss how, using real-world panel data, this method can provide information about funding priority with a level of accuracy and in a format well suited to research funding decisions. Conclusions: A methodology is provided to collect and employ both rating and ranking data in peer review assessments of proposal quality, highlighting several advantages over methods relying on ratings alone. The method leverages available information to distill reviewer opinion into a useful output for making informed funding decisions and is general enough to be applied to settings such as the NIH panel review process.
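To make the integration of ratings and rankings concrete, below is a small brute-force sketch in the spirit of a Mallows-Binomial joint fit: ratings are modeled as Binomial(M, p) with lower scores better, rankings as a Mallows model around a consensus order, and the consensus is chosen by maximizing the joint likelihood over all orderings. It is an illustration under simplifying assumptions (full rankings from every judge, a coarse grid for the Mallows scale parameter, invented data), not the authors' implementation.

```python
# Toy brute-force fit of a Mallows-Binomial-style joint model for ratings
# and rankings. Illustrative only; all data are invented.
import itertools
import math

M = 9  # ratings are integer scores 0..M, modeled as Binomial(M, p); lower = better

ratings = {"P1": [2, 3, 4], "P2": [4, 2, 3], "P3": [5, 6, 5], "P4": [6, 5, 7]}
rankings = [["P1", "P2", "P3", "P4"],   # per-judge full rankings, best first
            ["P2", "P1", "P3", "P4"],
            ["P1", "P2", "P4", "P3"]]
proposals = list(ratings)

def pava(values, weights):
    """Weighted isotonic (non-decreasing) fit via pool-adjacent-violators."""
    blocks = []
    for v, w in zip(values, weights):
        blocks.append([v, w, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    return [v for v, _, c in blocks for _ in range(c)]

def kendall(pi, sigma):
    """Number of pairwise disagreements between two full rankings."""
    pos = {x: i for i, x in enumerate(sigma)}
    a = [pos[x] for x in pi]
    return sum(a[i] > a[j] for i in range(len(a)) for j in range(i + 1, len(a)))

def binom_loglik(xs, p):
    """Binomial log-likelihood; constant binomial coefficients omitted."""
    p = min(max(p, 1e-6), 1 - 1e-6)
    return sum(x * math.log(p) + (M - x) * math.log(1 - p) for x in xs)

def mallows_loglik(sigma, theta):
    n = len(sigma)
    log_psi = sum(math.log(sum(math.exp(-k * theta) for k in range(j)))
                  for j in range(1, n + 1))
    return sum(-theta * kendall(r, sigma) - log_psi for r in rankings)

best = None
for sigma in itertools.permutations(proposals):
    # Binomial part: quality parameters must be non-decreasing along sigma
    # (best proposal = lowest expected score), enforced with PAVA.
    means = [sum(ratings[p]) / len(ratings[p]) / M for p in sigma]
    fitted = pava(means, [len(ratings[p]) for p in sigma])
    ll = sum(binom_loglik(ratings[p], q) for p, q in zip(sigma, fitted))
    # Mallows part: grid-search the consensus-strength parameter theta.
    ll += max(mallows_loglik(sigma, t) for t in (x / 10 for x in range(1, 51)))
    if best is None or ll > best[0]:
        best = (ll, sigma)

print("Integrated preference order (best first):", best[1])
```

With these invented data, P1 and P2 receive identical mean ratings, so a ratings-only rule cannot order them; the rankings, which favor P1 for two of the three judges, break the tie, mirroring the tie-resolution property described above.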

The Keys to Preparing Successful Research Grant Proposals

This article seeks to demystify the competitive grant recommendation process of scientific peer review panels. The National Research Initiative Competitive Grants Program (NRICGP), administered by the U.S. Department of Agriculture Cooperative State Research, Education, and Extension Service (USDA-CSREES), serves as the focus of the article. It provides a brief background on the NRICGP and discusses the application process, the scientific peer review process, guidelines for grant writing, and ways to interpret reviewer comments if a proposal is not funded. The essentials of good grant writing discussed here are transferable to other USDA competitive grant programs.

Peer review of grant applications in biology and medicine. Reliability, fairness, and validity

Scientometrics, 2009

This paper examines the peer review procedure of a national science funding organization (the Swiss National Science Foundation) by means of the three most frequently studied criteria: reliability, fairness, and validity. The analyzed data consist of 496 applications for project-based funding in biology and medicine from the year 1998. Overall reliability is found to be fair, with an intraclass correlation coefficient of 0.41 and sizeable differences between biology (0.45) and medicine (0.20). Multiple logistic regression models reveal only scientific performance indicators as significant predictors of the funding decision, while all potential sources of bias (gender, age, nationality, and academic status of the applicant, requested amount of funding, and institutional surrounding) are non-significant predictors. Bibliometric analysis provides evidence that the decisions of a public funding organization for basic project-based research are in line with the future publication success of applicants. The paper also argues for an expansion of approaches and methodologies in peer review research, by increasingly focusing on process rather than outcome and by including a more diverse set of methods, e.g., content analysis. Such an expansion will be necessary to advance peer review research beyond the abundantly treated questions of reliability, fairness, and validity.
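For readers unfamiliar with the agreement statistic cited here, the short sketch below computes a one-way random-effects intraclass correlation, the kind of inter-rater reliability measure the paper reports (0.41 overall); the data are invented, and the exact ICC variant used in the study may differ.

```python
# Minimal one-way random-effects ICC. Rows = applications, columns = raters.
# Invented data for illustration.
def icc_oneway(scores):
    n, k = len(scores), len(scores[0])   # applications, raters per application
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(row) / k for row in scores]
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(scores, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

panel = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 3], [3, 3, 4]]
print(f"ICC(1) = {icc_oneway(panel):.2f}")
```

Values near 0.4, as reported in the paper, indicate only fair agreement among reviewers.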