
What does 'significance' look like? Assessing the assessment process in competitive grants schemes

2006

This paper focuses on the writer's experiences from 2002 to 2004 as the sole Education member of the Australian Research Council committee that assesses applications across the 'social, behavioural and economic sciences'. It is drawn from a wider analysis of how judgments about research quality are produced across different spheres of education research activity, paying particular attention to the characteristics of those who judge, their explicit and implicit criteria, and the textual markers of 'quality' that they work with. The article considers how the explicit categories for scoring; the characteristics of those appointed to the committee; the history of dominant research traditions; and a slippage between the categories 'significance', 'national benefit' and 'national research priorities' all influence the judgments and scores that eventuate from assessors and produce the final rankings.

Managing grant publication mandates: an interoperable, implementation model

2013

How do we measure performance? How do we report it? For universities, performance can be measured in a variety of ways: the number of students enrolled, the number of graduates, thesis completions, research grant funding obtained, research outputs in the form of publications, prestige attained by staff and the institution as a whole, and reputation. Some of these performance measures are easily quantifiable; others, such as prestige and reputation, are less so. And of course performance measurement regimes change with time, such that what was considered an appropriate measure at one time may be deemed no longer relevant or even desirable. For example, publication of conference papers in proceedings is now deemed less desirable than publication in A* journals, largely as a result of issues arising from the ERA process. This changing dynamic could also be said to apply to the current effort in Australia to measure performance with regard to research grants and related published research outputs, arising from the introduction of federally supported mandates. It is a swiftly changing landscape, requiring transformative thinking and process change. The case of the University of Wollongong and its efforts to implement a new grants reporting and performance management regime may be typical, if not representative.
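As an illustration of the kind of linkage such an interoperable reporting model has to maintain, the minimal sketch below joins grants to their publications to flag mandate non-compliance. All type and field names are hypothetical; this is not the University of Wollongong's schema, just a plausible shape for the problem the abstract describes.

```python
from dataclasses import dataclass, field

# Hypothetical minimal record types for linking grants to their outputs,
# the kind of join an interoperable mandate-reporting system must support.
@dataclass
class Grant:
    grant_id: str
    funder: str
    mandate_open_access: bool  # does the funder mandate deposit?

@dataclass
class Publication:
    doi: str
    grant_ids: list = field(default_factory=list)
    deposited_in_repository: bool = False

def non_compliant(grants, publications):
    """Publications tied to a mandated grant but not yet deposited."""
    mandated = {g.grant_id for g in grants if g.mandate_open_access}
    return [p for p in publications
            if not p.deposited_in_repository
            and mandated.intersection(p.grant_ids)]

# Illustrative usage with invented identifiers.
grants = [Grant("G1", "ARC", True), Grant("G2", "Industry", False)]
pubs = [Publication("10.1000/xyz", ["G1"]), Publication("10.1000/abc", ["G2"])]
print([p.doi for p in non_compliant(grants, pubs)])  # -> ['10.1000/xyz']
```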

Criteria for assessing grant applications: a systematic review

Palgrave Communications, 2020

Criteria are an essential component of any procedure for assessing merit. Yet, little is known about the criteria peers use to assess grant applications. In this systematic review we therefore identify and synthesize studies that examine grant peer review criteria in an empirical and inductive manner. To facilitate the synthesis, we introduce a framework that classifies what is generally referred to as ‘criterion’ into an evaluated entity (i.e., the object of evaluation) and an evaluation criterion (i.e., the dimension along which an entity is evaluated). In total, the synthesis includes 12 studies on grant peer review criteria. Two-thirds of these studies examine criteria in the medical and health sciences, while studies in other fields are scarce. Few studies compare criteria across different fields, and none focus on criteria for interdisciplinary research. We conducted a qualitative content analysis of the 12 studies and thereby identified 15 evaluation criteria and 30 evaluated entities, as well as the relations between them. Based on a network analysis, we determined the following main relations between the identified evaluation criteria and evaluated entities. The aims and outcomes of a proposed project are assessed in terms of the evaluation criteria originality, academic relevance, and extra-academic relevance. The proposed research process is evaluated both on the content level (quality, appropriateness, rigor, coherence/justification), as well as on the level of description (clarity, completeness). The resources needed to implement the research process are evaluated in terms of the evaluation criterion feasibility. Lastly, the person and personality of the applicant are assessed from a ‘psychological’ (motivation, traits) and a ‘sociological’ (diversity) perspective. Furthermore, we find that some of the criteria peers use to evaluate grant applications do not conform to the fairness doctrine and the ideal of impartiality. Grant peer review could therefore be considered unfair and biased. Our findings suggest that future studies on criteria in grant peer review should focus on the applicant, include data from non-Western countries, and examine fields other than the medical and health sciences.
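The entity–criterion framework lends itself to a simple bipartite representation. The sketch below uses entity and criterion names drawn from the abstract, but the relation list is a hypothetical, abridged subset for illustration, not the review's full 15-by-30 network.

```python
from collections import defaultdict

# Hypothetical subset of the entity-criterion relations described in the
# abstract: each pair links an evaluated entity (what is judged) to an
# evaluation criterion (the dimension it is judged on).
relations = [
    ("aims and outcomes", "originality"),
    ("aims and outcomes", "academic relevance"),
    ("aims and outcomes", "extra-academic relevance"),
    ("research process", "rigor"),
    ("research process", "clarity"),
    ("resources", "feasibility"),
    ("applicant", "motivation"),
    ("applicant", "diversity"),
]

# Group criteria by entity, mimicking the review's network-style synthesis.
criteria_by_entity = defaultdict(list)
for entity, criterion in relations:
    criteria_by_entity[entity].append(criterion)

for entity, criteria in criteria_by_entity.items():
    print(f"{entity}: {', '.join(criteria)}")
```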

The post-award effort of managing and reporting on funded research: a scoping review

F1000Research, 2023

Introduction: Reporting on research is a standard requirement of post-award management, and is increasingly required for 'compliance' and to show the impact of funding decisions. The demand for information on research is growing; however, approaches to reporting and post-award management appear inconsistent. Altogether, this can lead to a perception of unnecessary effort and inefficiency that impacts on research activity. Identifying this effort is crucial if organisations and Higher Education Institutions (HEIs) are to better streamline and support their processes. Here, we review the 'effort' and processes in post-award management, and explore current practices and the purposes of reporting on research. We also identify where effort is perceived as unnecessary or improvements are needed, using previous reports of solutions to inform recommendations for funders and HEIs. Methods: We conducted a scoping review of the relevant research and grey literature. Electronic searches of databases, and manual searches of journals and funder websites, resulted in inclusion of 52 records and 11 websites. Information on HEI and funder post-award management processes was extracted, catalogued, and summarised to inform discussion. Results: Post-award management is a complex process that serves many purposes but requires considerable effort, particularly in the set-up and reporting of research. Perceptions of unnecessary effort stem from inefficiencies in compliance, data management and reporting approaches, and there is evidence of needed improvement in mechanisms of administrative support, research impact assessment, monitoring, and evaluation. Solutions should focus on integrating digital systems to reduce duplication, streamlining reporting methods, and improving administrative resources in HEIs. Conclusions: Funders and HEIs should work together to support a more efficient post-award management process. The value of research information, and how it is collected and used, can be …

Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices

PLoS ONE, 2012

Background: Peer review of grant applications has been criticized as lacking reliability. Studies showing poor agreement among reviewers supported this possibility but usually focused on reviewers' scores and failed to investigate reasons for disagreement. Here, our goal was to determine how reviewers rate applications by investigating reviewer practices and grant assessment criteria.

Competitive grants in the new millennium : a global workshop for designers and practitioners - proceedings

2000

Sponsored by the Brazilian Ministry of Agriculture and Food Supply through the Brazilian Agricultural Research Corporation (EMBRAPA), the World Bank through AKIS, and the Inter-American Development Bank. The workshop provided a forum for the exchange of experiences in the design and implementation of competitive grant programs for research and extension (CGPs). This was achieved through the participation of over 60 designers and practitioners from fifteen competitive grant programs in Africa, Asia, Latin America, and the United States of America. Representatives of the World Bank and the Inter-American Development Bank also shared their perspectives, as did a number of the international centers of the Consultative Group on International Agricultural Research (CGIAR). This report provides a brief outline of concerns and lessons learned from common experiences in competitive programs from country, regional, and donor perspectives. The full proceedings will be available on the Internet when finalized.

Past performance as predictor of successful grant applications: A case study

2009

Competitive allocation of research funding is a major mechanism within the science system. It is fundamentally based on the idea of peer review. Peer review is central in project selection, as peers are considered to be in a unique position to identify and select the best and most innovative researchers and research projects. In this study, we assess the practice of peer-review based project selection. The basic question is: do the best researchers get the funding?
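A question like "do the best researchers get the funding?" is commonly operationalised by comparing past-performance indicators of successful and unsuccessful applicants. The sketch below is a minimal illustration with invented numbers and a simple mean/median comparison; it is not the study's actual method or data.

```python
from statistics import mean

# Hypothetical past-performance records (publication counts) for applicants;
# the grouping and the numbers are illustrative, not data from the study.
funded = [12, 30, 8, 22, 17]
rejected = [9, 5, 25, 7, 11]

# Compare average past output of funded vs. rejected applicants. If peer
# review reliably selects "the best", the funded group should dominate.
print(f"funded mean:   {mean(funded):.1f}")
print(f"rejected mean: {mean(rejected):.1f}")

# Overlap check: how many rejected applicants out-performed the funded median?
funded_median = sorted(funded)[len(funded) // 2]
better_rejected = sum(1 for x in rejected if x > funded_median)
print(f"rejected applicants above funded median: {better_rejected}")
```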

Surveys of current status in biomedical science grant review: funding organisations' and grant reviewers' perspectives

BMC Medicine, 2010

Background: The objectives of this research were (a) to describe the current status of grant review for biomedical projects and programmes from the perspectives of international funding organisations and grant reviewers, and (b) to explore funders' interest in developing uniform requirements for grant review aimed at making the processes and practices of grant review more consistent, transparent, and user friendly. Methods: A survey was sent to a convenience sample of 57 international public and private organisations that award grants for biomedical research. Nine participating organisations then emailed a random sample of their external reviewers an invitation to participate in a second electronic survey. Results: A total of 28 of 57 (49%) organisations in 19 countries responded. Organisations reported these problems as frequent or very frequent: declined review requests (16), late reports, administrative burden, difficulty finding new reviewers (4), and reviewers not following guidelines (4). The administrative burden of the process was reported to have increased over the past 5 years. In all, 17 organisations supported the idea of uniform requirements for conducting grant review and for formatting grant proposals. A total of 258/418 (62%) reviewers responded from 22 countries. Of those, 48% (123/258) said their institutions encouraged grant review, yet only 7% (17/258) were given protected time and 74% (192/258) received no academic recognition for this. Reviewers rated these factors as extremely or very important in deciding to review proposals: 51% (131/258) desire to support external fairness, 47% (120/258) professional duty, 46% (118/258) relevance of the proposal's topic, 43% (110/258) wanting to keep up to date, 40% (104/258) desire to avoid suppression of innovation. Only 16% (42/258) reported that guidance from funders was very clear. In all, 85% (220/258) had not been trained in grant review and 64% (166/258) wanted this. Conclusions: Funders reported a growing workload of biomedical proposals that is getting harder to peer review. Just under half of grant reviewers take part for the good of science and professional development, but many report a lack of academic and practical support and clear guidance. Around two-thirds of funders supported the development of uniform requirements for the format and peer review of proposals to help ease the current situation.