Virtual and Peer Reviews of Grant Applications at the Agency for Healthcare Research and Quality

Studying the Study Section: How Group Decision Making in Person and via Videoconferencing Affects the Grant Peer Review Process. WCER Working Paper No. 2015-6

2015

One of the cornerstones of the scientific process is securing funding for one's research. A key mechanism by which funding outcomes are determined is the scientific peer review process. Our focus is on biomedical research funded by the U.S. National Institutes of Health (NIH). NIH spends $30.3 billion on medical research each year, and more than 80% of NIH funding is awarded through competitive grants that go through a peer review process (NIH, 2015). Advancing our understanding of this review process by investigating variability among review panels and the efficiency of different meeting formats has enormous potential to improve scientific research throughout the nation. NIH's grant review process is a model for other federal research funders, including the National Science Foundation and the U.S. Department of Education's Institute of Education Sciences. It involves panel meetings in which collaborative decision making is an outgrowth of socially mediated cognitive tasks, including summarization, argumentation, evaluation, and critical discussion with other panel members of the perceived scientific merit of proposals. Investigating how grant review panels function thus allows us not only to better understand processes of collaborative decision making within a group of distributed experts (Brown et al., 1993), that is, within a community of practice (Lave & Wenger, 1991), but also to gain insight into the effect of peer review discussions on outcomes for funding scientific research.

Theoretical Framework

A variety of research has investigated how the peer review process influences reviewers' scores, including the degree of inter-rater reliability among reviewers and across panels and the impact of discussion on changes in reviewers' scores. In addition, educational theories of distributed cognition, communities of practice, and the sociology of science frame the peer review process as a collaborative decision-making task involving multiple, distributed experts. The following sections review each of these bodies of literature.

Studying the Study Section: How Group Decision Making in Person and via Videoconferencing Affects the Grant Peer Review Process

Grant peer review is a foundational component of scientific research. In the context of grant review meetings, the review process is a collaborative, socially mediated, locally constructed decision-making task. The current study examines how collaborative discussion affects reviewers' scores of grant proposals, how different review panels score the same proposals, and how the discourse practices of videoconference panels differ from in-person panels. Methodologically, we created and videotaped four "constructed study sections," recruiting biomedical scientists with U.S. National Institutes of Health (NIH) review experience and an NIH scientific review officer. These meetings provide a rich medium for investigating the process and outcomes of such authentic collaborative tasks. We discuss implications for research into the peer review process as well as for the broad enterprise of federally funded scientific research.

Videoconferencing in Peer Review: Exploring Differences in Efficiency and Outcomes

Technology-mediated communication, such as teleconference and videoconference, has been found to affect group decision-making processes compared to face-to-face settings. Scientific peer review panels offer a site of authentic, collaborative decision making among expert scientists, yet no research has examined the impact of videoconferencing on such decision-making practices. We assigned real, de-identified grant applications submitted to the National Institutes of Health (NIH) to four panels of experienced NIH reviewers, one of which met via videoconference. The videoconference panel was slightly more efficient than the face-to-face panels, but the outcomes of their decision making (i.e., the scores assigned to grant applications) did not differ. However, preliminary analyses suggest there are differences in the nature of the collaborative discussion among reviewers between the two meeting formats. We discuss implications for research into technology-mediated collaborative decision making, as well as for the scientific grant peer review process broadly.

What do we know about grant peer review in the health sciences? An updated review of the literature and six case studies

2018

In 2009, RAND Europe conducted a literature review to assess the effectiveness and efficiency of peer review for grant funding. This report updates that review to reflect new literature on the topic and adds case studies exploring peer review practice at six international funders. The report was produced with funding from the Canadian Institutes of Health Research and will be of interest to government officials dealing with research funding policy, research funders (both governmental and charitable), research institutions, researchers, and research users. Although the case studies focus on biomedical and health research, the literature review takes a broader scope, and the findings are likely to be relevant to wider research fields.

Engaging people with lived experience in the grant review process

BMC Medical Ethics

People with lived experience are individuals who have first-hand experience of the medical condition(s) being considered. The value of including the viewpoints of people with lived experience in health policy, health care, and health systems research has been recognized at many levels, including by funding agencies. However, little guidance and few established best practices exist for including non-academic reviewers in the grant review process.

Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices

PLOS ONE, 2012

Background: Peer review of grant applications has been criticized as lacking reliability. Studies showing poor agreement among reviewers supported this possibility, but they usually focused on reviewers' scores and failed to investigate the reasons for disagreement. Here, our goal was to determine how reviewers rate applications by investigating reviewer practices and grant assessment criteria.

Grant Peer Review: Improving Inter-Rater Reliability with Training

PLOS ONE, 2015

This study developed and evaluated a brief training program for grant reviewers that aimed to increase inter-rater reliability, rating-scale knowledge, and effort spent reading the grant review criteria. Enhancing reviewer training may improve the reliability and accuracy of research grant proposal scoring and funding recommendations. Seventy-five public health professors from U.S. research universities watched the training video we produced and assigned scores to proposal summary descriptions keyed to the National Institutes of Health scoring criteria. For both novice and experienced reviewers, the training video increased scoring accuracy (the percentage of scores that matched the true rating-scale values), inter-rater reliability, and the amount of time spent reading the review criteria compared to the no-video condition. The increase in reliability for experienced reviewers is notable because it is commonly assumed that reviewers, especially those with experience, understand the grant review rating scale well. The findings suggest that both experienced and novice reviewers who had not received this type of training may not adequately understand the definition and meaning of each value on the rating scale, and that experienced reviewers may overestimate their knowledge of the scale. The results underscore the benefits of and need for specialized peer reviewer training.
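To make the two measures in this abstract concrete, the sketch below computes scoring accuracy (the share of assigned scores that match the intended "true" scale value for each proposal description) and a one-way intraclass correlation, ICC(1), one common index of inter-rater reliability. This is a minimal, hypothetical illustration, not the study's analysis code: the ratings are invented, and the study may have used a different reliability estimator.

```python
# Illustrative sketch only: hypothetical ratings, not data from the study.
import numpy as np

def scoring_accuracy(ratings: np.ndarray, true_values: np.ndarray) -> float:
    """Fraction of all assigned scores that equal the intended true value.

    ratings: (n_proposals, n_raters) matrix of integer scores.
    true_values: (n_proposals,) intended scale value for each description.
    """
    return float(np.mean(ratings == true_values[:, None]))

def icc1(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1) via the classic ANOVA decomposition."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-proposal and within-proposal mean squares.
    msb = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return float((msb - msw) / (msb + (k - 1) * msw))

# Hypothetical example: 4 proposal descriptions, 5 raters, NIH 1-9 scale.
true_values = np.array([2, 4, 6, 8])
ratings = np.array([
    [2, 2, 3, 2, 1],
    [4, 5, 4, 4, 3],
    [6, 6, 7, 5, 6],
    [8, 7, 8, 9, 8],
])
print(f"accuracy = {scoring_accuracy(ratings, true_values):.2f}")
print(f"ICC(1)   = {icc1(ratings):.2f}")
```

On this reading, the training intervention would correspond to higher values of both quantities in the video condition than in the no-video condition.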

The CTSA External Reviewer Exchange Consortium (CEREC): Engagement and efficacy

Journal of Clinical and Translational Science, 2019

Introduction: Many institutions evaluate applications for local seed funding by recruiting peer reviewers from their own institutional community. Smaller institutions, however, often face difficulty locating qualified local reviewers who are not in conflict with the proposal. As a larger pool of reviewers may be accessed through a cross-institutional collaborative process, nine Clinical and Translational Science Award (CTSA) hubs formed a consortium in 2016 to facilitate reviewer exchanges. Data were collected to evaluate the feasibility and preliminary efficacy of the consortium. Methods: The CTSA External Reviewer Exchange Consortium (CEREC) has been supported by a custom-built web-based application that facilitates the process and tracks the efficiency and productivity of the exchange. Results: All nine of the original CEREC members remain actively engaged in the exchange. Between January 2017 and May 2019, CEREC supported the review process for 23 individual calls for proposals....

What do we know about grant peer review in the health sciences?

F1000Research, 2017

Background: Peer review decisions award >95% of academic medical research funding, so it is crucial to understand how well they work and if they could be improved. Methods: This paper summarises evidence from 105 relevant papers identified through a literature search on the effectiveness and burden of peer review for grant funding. Results: There is a remarkable paucity of evidence about the overall efficiency of peer review for funding allocation, given its centrality to the modern system of science. From the available evidence, we can identify some conclusions around the effectiveness and burden of peer review. The strongest evidence around effectiveness indicates a bias against innovative research. There is also fairly clear evidence that peer review is, at best, a weak predictor of future research performance, and that ratings vary considerably between reviewers. There is some evidence of age bias and cronyism. Good evidence shows that the burden of peer review is high and th...