Peering at peer review revealed high degree of chance associated with funding of grant applications
Related papers
Peer review of grant applications in biology and medicine. Reliability, fairness, and validity
Scientometrics, 2009
This paper examines the peer review procedure of a national science funding organization (Swiss National Science Foundation) by means of the three most frequently studied criteria: reliability, fairness, and validity. The analyzed data consist of 496 applications for project-based funding in biology and medicine from the year 1998. Overall reliability is found to be fair, with an intraclass correlation coefficient of 0.41 and sizeable differences between biology (0.45) and medicine (0.20). Multiple logistic regression models reveal only scientific performance indicators as significant predictors of the funding decision, while all potential sources of bias (gender, age, nationality, and academic status of the applicant, requested amount of funding, and institutional surroundings) are non-significant predictors. Bibliometric analysis provides evidence that the decisions of a public funding organization for basic project-based research are in line with the future publication success of applicants. The paper also argues for an expansion of approaches and methodologies in peer review research by focusing increasingly on process rather than outcome and by including a more diverse set of methods, e.g., content analysis. Such an expansion will be necessary to advance peer review research beyond the abundantly treated questions of reliability, fairness, and validity.
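The reliability statistic quoted above can be made concrete. Below is a minimal sketch, assuming a hypothetical (applications x reviewers) score matrix, of the one-way random-effects intraclass correlation, ICC(1); it illustrates the statistic itself, not the paper's actual analysis.

```python
# Illustrative ICC(1) for inter-reviewer reliability.
# The score matrix is hypothetical, not data from the study.
import numpy as np

def icc_oneway(scores: np.ndarray) -> float:
    """ICC(1) for an (applications x reviewers) score matrix."""
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)
    # Between- and within-application mean squares (one-way ANOVA)
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical scores: 6 applications, 2 reviewers each, on a 1-6 scale
scores = np.array([[5, 4], [2, 3], [6, 5], [3, 3], [4, 2], [5, 6]])
print(f"ICC(1) = {icc_oneway(scores):.2f}")
```

A value near 0.41, as reported above, indicates only fair agreement: a substantial share of score variance is attributable to reviewers rather than to the applications themselves.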
Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices
PLoS ONE, 2012
Background: Peer review of grant applications has been criticized as lacking reliability. Studies showing poor agreement among reviewers supported this possibility but usually focused on reviewers' scores and failed to investigate reasons for disagreement. Here, our goal was to determine how reviewers rate applications, by investigating reviewer practices and grant assessment criteria.
What do we know about grant peer review in the health sciences?
F1000Research, 2017
Background: Peer review decisions award >95% of academic medical research funding, so it is crucial to understand how well they work and if they could be improved. Methods: This paper summarises evidence from 105 relevant papers identified through a literature search on the effectiveness and burden of peer review for grant funding. Results: There is a remarkable paucity of evidence about the overall efficiency of peer review for funding allocation, given its centrality to the modern system of science. From the available evidence, we can identify some conclusions around the effectiveness and burden of peer review. The strongest evidence around effectiveness indicates a bias against innovative research. There is also fairly clear evidence that peer review is, at best, a weak predictor of future research performance, and that ratings vary considerably between reviewers. There is some evidence of age bias and cronyism. Good evidence shows that the burden of peer review is high and th...
A new approach to peer review assessments: Score, then rank
Research Square, 2022
Background: In many peer review settings, proposals are selected for funding on the basis of some summary statistic, such as the mean, median, or percentile, of review scores. There are numerous challenges to working with scores. These include low inter-rater reliability, epistemological differences, susceptibility to varying levels of leniency or harshness among reviewers, and the presence of ties. A different approach that is able to mitigate some of these issues would be to additionally collect rankings, such as top-k preferences or paired comparisons, and incorporate them in the analysis of review scores. Rankings and paired comparisons are scale-free and can enforce demarcation between proposals by design. However, analyzing scores and rankings simultaneously has not been done until recently, due to the lack of tools for principled modeling. Methods: We first introduce an innovative protocol for collecting rankings among top-quality proposals. This rankings collection is done as an add-on to the typical peer review procedures focused on scores and does not require reviewers to rank all proposals. We then present statistical methodology for obtaining an integrated score for each proposal, and from the integrated scores an induced preference ordering, that captures both types of peer review inputs: scores and rankings. Our statistical methodology allows the collected rankings to differ from the score-implied rankings; this feature is essential when the two quality assessments disagree, which, as we find empirically, often happens in peer review. We illustrate how our method quantifies the uncertainty in order to better understand reviewer preferences among similarly scored proposals. Results: Using artificial "toy" examples and real peer review data, we demonstrate that incorporating top-k rankings into scores allows us to better learn when reviewers can distinguish between proposals. We also examine the robustness of this system to partial rankings, inconsistencies between ratings and rankings, and outliers. Finally, we discuss how, using panel data, this method can provide information about funding priority at a level of accuracy and in a format well suited to the types of decisions research funders make. Conclusions: Gathering both rating and ranking data and using integrated scores and their induced preference ordering can have many advantages over methods relying on ratings alone, leveraging more information to accurately distill reviewer opinion into a useful output for the most informed funding decision.
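To make the score-then-rank protocol concrete, here is a deliberately simplified toy sketch. It standardizes each reviewer's scores to absorb leniency or harshness, then adds a Borda-style bonus computed from hypothetical top-k rankings; the Borda step is a crude stand-in for the paper's statistical model, and all data below are invented.

```python
# Toy sketch of combining review scores with top-k rankings.
# Borda points here substitute for the paper's integrated-score model.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(1, 10, size=(4, 8)).astype(float)  # 4 reviewers x 8 proposals
# Hypothetical top-3 rankings per reviewer (proposal indices, best first)
rankings = [[2, 5, 0], [5, 2, 7], [2, 7, 5], [5, 0, 2]]

# Step 1: per-reviewer standardization removes scale (leniency) differences
z = (scores - scores.mean(axis=1, keepdims=True)) / scores.std(axis=1, keepdims=True)
mean_z = z.mean(axis=0)

# Step 2: Borda points from the top-k rankings (k - position, best gets k)
borda = np.zeros(scores.shape[1])
for r in rankings:
    for pos, p in enumerate(r):
        borda[p] += len(r) - pos

# Step 3: integrated score = standardized rating plus a small ranking bonus
integrated = mean_z + 0.1 * borda
order = np.argsort(-integrated)
print("funding order (best first):", order)
```

Because the ranking bonus only touches proposals reviewers chose to rank, it refines the ordering among the top-scored, closely tied proposals, which is exactly where summary statistics of scores are least informative.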
2018
In 2009, RAND Europe conducted a literature review to assess the effectiveness and efficiency of peer review for grant funding. This report presents an update to that review to reflect new literature on the topic, and adds case studies exploring peer review practice at six international funders. The report was produced with funding from the Canadian Institutes of Health Research. It will be of interest to government officials dealing with research funding policy, research funders including governmental and charitable funders, research institutions, researchers, and research users. Although the case studies focus on biomedical and health research, the literature review takes a broader scope, and the findings are likely to be of relevance to wider research fields.
Criteria for assessing grant applications: a systematic review
Palgrave Communications, 2020
Criteria are an essential component of any procedure for assessing merit. Yet, little is known about the criteria peers use to assess grant applications. In this systematic review we therefore identify and synthesize studies that examine grant peer review criteria in an empirical and inductive manner. To facilitate the synthesis, we introduce a framework that classifies what is generally referred to as ‘criterion’ into an evaluated entity (i.e., the object of evaluation) and an evaluation criterion (i.e., the dimension along which an entity is evaluated). In total, the synthesis includes 12 studies on grant peer review criteria. Two-thirds of these studies examine criteria in the medical and health sciences, while studies in other fields are scarce. Few studies compare criteria across different fields, and none focus on criteria for interdisciplinary research. We conducted a qualitative content analysis of the 12 studies and thereby identified 15 evaluation criteria and 30 evaluated entities, as well as the relations between them. Based on a network analysis, we determined the following main relations between the identified evaluation criteria and evaluated entities. The aims and outcomes of a proposed project are assessed in terms of the evaluation criteria originality, academic relevance, and extra-academic relevance. The proposed research process is evaluated both on the content level (quality, appropriateness, rigor, coherence/justification), as well as on the level of description (clarity, completeness). The resources needed to implement the research process are evaluated in terms of the evaluation criterion feasibility. Lastly, the person and personality of the applicant are assessed from a ‘psychological’ (motivation, traits) and a ‘sociological’ (diversity) perspective. Furthermore, we find that some of the criteria peers use to evaluate grant applications do not conform to the fairness doctrine and the ideal of impartiality. Grant peer review could therefore be considered unfair and biased. Our findings suggest that future studies on criteria in grant peer review should focus on the applicant, include data from non-Western countries, and examine fields other than the medical and health sciences.
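As an illustration of the network view described above, the short networkx sketch below builds a bipartite graph between evaluated entities and evaluation criteria; the edge list paraphrases the main relations summarized in the abstract and is not the study's full dataset.

```python
# Bipartite criterion network: evaluated entities <-> evaluation criteria.
# Edges paraphrase the relations named in the abstract (illustrative only).
import networkx as nx

edges = [
    ("aims/outcomes", "originality"),
    ("aims/outcomes", "academic relevance"),
    ("aims/outcomes", "extra-academic relevance"),
    ("research process", "quality"),
    ("research process", "rigor"),
    ("research process", "clarity"),
    ("resources", "feasibility"),
    ("applicant", "motivation"),
    ("applicant", "diversity"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Which evaluated entities attract the most distinct criteria?
entities = {e for e, _ in edges}
for node in sorted(entities, key=G.degree, reverse=True):
    print(node, G.degree(node))
```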
Research Evaluation
Evaluation for the allocation of project-funding schemes devoted to sustaining academic research often undergoes changes to the rules for ex-ante selection, which are supposed to improve the capability of peer review to select the best proposals. How do modifications of the rules produce a more accountable evaluation result? Do the changes suggest an improved alignment with the program's intended objectives? The article addresses these questions by investigating Research Project of National Interest, an Italian collaborative project-funding scheme for academic curiosity-driven research, through a case study design that provides a description of how the changes to the ex-ante evaluation process were implemented in practice. The results show that when government tries to steer the peer-review process by imposing an increasing number of rules to structure the debate among peers and make it more accountable, peer-review practices remain largely impervious to change.