Do open citations inform the qualitative peer-review evaluation in research assessments? An analysis of the Italian National Scientific Qualification

Academics evaluating academics: a methodology to inform the review process on top of open citations

ArXiv, 2021

In the past, several works have investigated ways of combining quantitative and qualitative methods in research assessment exercises. In this work, we introduce a methodology to explore whether citation-based metrics, calculated using only open bibliographic and citation data, can yield insights into how the human peer review of research assessment exercises is conducted. To understand whether, and which, metrics provide relevant information, we propose using a series of machine learning models to replicate the decisions of the committees of the research assessment exercises.
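The committee-replication idea in this abstract can be sketched in a few lines. The following is an illustrative toy, not the paper's actual models or data: the metric, the threshold-based "model", and every value below are hypothetical. It fits a single-feature decision stump on a citation metric and reports how often the stump agrees with hypothetical committee outcomes.

```python
# Minimal sketch (hypothetical data): replicating binary committee
# decisions (qualified / not qualified) from an open citation metric
# with a single-feature decision stump, then reporting the agreement
# rate between the stump and the committee.

def fit_stump(values, labels):
    """Find the threshold on one metric that best reproduces the labels."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(values)):
        preds = [v >= t for v in values]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy data: citation counts (e.g. from open citation data) and the
# corresponding committee outcomes.
citations = [3, 10, 25, 40, 5, 60, 12, 33]
qualified = [False, False, True, True, False, True, False, True]

threshold, agreement = fit_stump(citations, qualified)
print(threshold, agreement)  # prints 25 1.0 on this toy data
```

A high agreement rate would suggest the metric carries information about how the committee decides; the paper's methodology generalizes this idea to richer models and multiple open metrics.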

Indicators as judgment devices: An empirical study of citizen bibliometrics in research evaluation

A researcher's number of publications has been a fundamental merit in the competition for academic positions since the late 18th century. Today, the simple counting of publications has been supplemented with a whole range of bibliometric indicators, which supposedly measure not only the volume of research but also its impact. In this study, we investigate how bibliometrics are used for evaluating the impact and quality of publications in two specific settings: biomedicine and economics. Our study exposes the various metrics used in external evaluations of candidates for academic positions at Swedish universities. Moreover, we show how different bibliometric indicators, both explicitly and implicitly, are employed to assess and rank candidates. Our findings contribute to a further understanding of bibliometric indicators as 'judgment devices' that are employed in evaluating individuals and their published works within specific fields. We also show how 'expertise' in using bibliometrics for evaluative purposes is negotiated at the interface between domain knowledge and skills in using indicators. In line with these results, we propose that the use of metrics we report is best described as a form of 'citizen bibliometrics', an underspecified term which we build upon in the article.

Open bibliographic data and the Italian National Scientific Qualification: Measuring coverage of academic fields

Quantitative Science Studies

The importance of open bibliographic repositories is widely accepted by the scientific community. For evaluation processes, however, there is still some skepticism: Even if large repositories of open access articles and free publication indexes exist and are continuously growing, assessment procedures still rely on proprietary databases, mainly due to the richness of the data available in these proprietary databases and the services provided by the companies that offer them. This paper investigates the status of open bibliographic data in three of the most used open resources, namely Microsoft Academic Graph, Crossref, and OpenAIRE, evaluating their potential as substitutes for proprietary databases in academic evaluation processes. We focused on the Italian National Scientific Qualification (NSQ), the Italian process for University Professor qualification, which uses data from commercial indexes, and investigated similarities and differences between research areas, discipl...

Refrain from adopting the combination of citation and journal metrics to grade publications, as used in the Italian national research assessment exercise (VQR 2011–2014)

Scientometrics, 2016

Predicting the long-term impact of a scientific article is a challenging task, which bibliometricians address by resorting to a proxy whose reliability increases with the breadth of the citation window. In national research assessment exercises using metrics, the citation window is necessarily short, but in some cases it is sufficient to advise the use of simple citation counts. For the Italian VQR 2011-2014, the choice was instead made to adopt a linear weighted combination of citation and journal metric percentiles, with weights differentiated by discipline and year. Given the strategic importance of the exercise, whose results inform the allocation of a significant share of resources for the national academic system, we examined whether the predictive power of the proposed indicator is stronger than that of the simple citation count. The results show the opposite, for all disciplines in the sciences and for citation windows above two years.
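The indicator criticized in this abstract, a linear weighted combination of two percentile indicators, can be sketched as follows. This is an illustrative toy, not the VQR's actual formula: the weights, disciplines, and values below are hypothetical placeholders for the discipline- and year-specific weights the exercise used.

```python
# Illustrative sketch (hypothetical weights, not the official VQR ones):
# a linear weighted combination of a citation percentile and a journal
# metric percentile, with the weight varying by discipline and year.
#   score = w * citations_pct + (1 - w) * journal_pct

WEIGHTS = {
    # (discipline, year) -> weight on the citation percentile, in [0, 1].
    ("physics", 2011): 0.8,   # older papers: citations weigh more
    ("physics", 2014): 0.3,   # recent papers: journal metric weighs more
}

def vqr_like_score(citations_pct, journal_pct, discipline, year, default_w=0.5):
    """Combine two percentile indicators into a single weighted score."""
    w = WEIGHTS.get((discipline, year), default_w)
    return w * citations_pct + (1 - w) * journal_pct

# Example: a 2011 physics paper at the 90th citation percentile,
# published in a 60th-percentile journal.
score = vqr_like_score(90.0, 60.0, "physics", 2011)
print(score)  # approximately 84.0 (0.8 * 90 + 0.2 * 60)
```

The paper's question is whether such a composite predicts long-term impact better than the citation percentile alone; its answer, for the sciences and citation windows above two years, is that it does not.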

Individual Bibliometric Assessment @ University of Vienna: From Numbers to Multidimensional Profiles

ArXiv, 2016

This paper shows how bibliometric assessment can be implemented at the individual level. This has been done successfully at the University of Vienna by the Bibliometrics and Publication Strategies Department of the Vienna University Library. According to the department's philosophy, bibliometrics is not only a helpful evaluation instrument that complements the peer review system; it is also meant as a compass for researchers in the "publish or perish" dilemma, helping them to increase general visibility and to optimize publication strategies. The individual assessment comprises an interview with the researcher under evaluation, the elaboration of a bibliometric report of the researcher's publication output, the discussion and validation of the obtained results with the researcher under evaluation, as well as further optional analyses. The produced bibliometric reports are provided to the researchers themselves and inform them about the quantitative aspec...

The role of citation networks to explain academic promotions: an empirical analysis of the Italian national scientific qualification

Scientometrics

The aim of this paper is to study the role of citation network measures in the assessment of scientific maturity. Referring to the case of the Italian national scientific qualification (ASN), we investigate whether there is a relationship between citation network indices and the results of the researchers' evaluation procedures. In particular, we want to understand whether network measures can enhance the prediction accuracy of the results of the evaluation procedures beyond basic performance indices. Moreover, we want to highlight which citation network indices prove most relevant in explaining the ASN results, and whether the quantitative indices used in the assessment of citation-based disciplines can replace citation network measures in non-citation-based disciplines. Data concerning Statistics and Computer Science disciplines are collected from different sources (ASN, Italian Ministry of University and Research, and Scopus) and processed in order to calculate the citation-based measures us...

Bibliometric evaluation vs. informed peer review: Evidence from Italy

Research Policy, 2015

A relevant question for the organization of large-scale research assessments is whether bibliometric evaluation and informed peer review, where reviewers know where the work was published, yield similar results. If so, less costly bibliometric evaluation might, at least partly, replace informed peer review, or bibliometric evaluation could reliably monitor research in between assessment exercises. We draw on our experience of evaluating Italian research in Economics, Business and Statistics, where almost 12,000 publications dated 2004-2010 were assessed. A random sample from the available population of journal articles shows that informed peer review and bibliometric analysis produce similar evaluations of the same set of papers. Whether because of independent convergence in assessment, or because of the influence of bibliometric information on the community of reviewers, the implication for the organization of these exercises is that the two approaches are substitutes.

Professional standards in bibliometric research evaluation? A meta-evaluation of European assessment practice 2005–2019

PLOS ONE, 2020

Despite growing demand for practicable methods of research evaluation, the use of bibliometric indicators remains controversial. This paper examines performance assessment practice in Europe: first, identifying the most commonly used bibliometric methods and, second, identifying the actors who have defined widespread practices. The framework of this investigation is Abbott's theory of professions, and I argue that indicator-based research assessment constitutes a potential jurisdiction for both individual experts and expert organizations. This investigation was conducted using a search methodology that yielded 138 evaluation studies from 21 EU countries, covering the period 2005 to 2019. Structured content analysis revealed the following findings: (1) Bibliometric research assessment is most frequently performed in the Nordic countries, the Netherlands, Italy, and the United Kingdom. (2) The Web of Science (WoS) is the dominant database used for public research assessment in Europe. (3) Expert organizations invest in the improvement of WoS citation data and set technical standards with regard to data quality. (4) Citation impact is most frequently assessed with reference to international scientific fields. (5) The WoS classification of science fields retained its function as a de facto reference standard for research performance assessment. A detailed comparison of assessment practices between five dedicated organizations and other individual bibliometric experts suggests that corporate ownership and limited access to the most widely used citation databases have had a restraining effect on the development and diffusion of professional bibliometric methods during this period.

The Impact of the national assessment exercises on self-citation rate and publication venue: An empirical investigation on the engineering academic sector in Italy

The aim of the present study is to provide an overview of the research activity, in terms of productivity and impact, of the Italian public universities and their academic staff in the engineering sector (EngSec), comprising about seven thousand researchers. Beyond the increased bibliometric performance of this research area over the last decade, attention is focused on the possible feedback on publication habits induced by regulations which progressively introduced strong competitive elements into the research world, such as evaluation exercises, covering both individual researchers' careers and the funding system. The effects of this feedback are also explored from the point of view of the self-citation phenomenon, which is said to increasingly affect the research world at large. The engineering sector is analyzed both as a whole and decomposed into cultural areas to give a more comprehensive view of this important segment of academia. In ten years, the productivity of the sector in terms of Scopus documents has roughly doubled, with a consistent shift from conference papers to journal articles. The impact showed a similar trend, with self-citations exhibiting an evident peak in 2013.