A classification of the use of research indicators

Scientometrics Analysis for Research Performance Evaluation: Methodological Issues at Meso and Micro Levels

Pakistan Journal of Information Management and Libraries, 2016

Newly introduced scientometric indices have gained much popularity and have been widely discussed. The application of such indices and their validity for Research Performance Evaluation (RPE) have been examined in various contexts and at various aggregate levels. Several methodological concerns have been raised regarding the application of these indices for RPE purposes. This study aims to describe the methodological issues faced and lessons learned from investigations carried out on engineering research data in Malaysia, using a scientometric approach at the meso and micro levels. This scientometric case study employed a set of newly introduced RPE indices along with traditional metrics. The unit of analysis was Malaysian engineering research. At the meso level, twelve Malaysian universities were selected, while at the micro level, the hundred most productive Malaysia-related researchers were chosen. The data were retrieved from Web of Science (WoS) for a period of ten years (2001-2010) and limited t...

Evaluating research institutions: The potential of the success-index

Scientometrics, 2013

Like the h-index and other indicators, the success-index is a recent indicator that makes it possible to identify, within a general group of papers, those of greater citation impact. This indicator implements field-normalization at the level of the single paper and can therefore be applied to multidisciplinary groups of articles. It is also very practical for normalizations aimed at achieving so-called size-independency. Thanks to these (and other) properties, this indicator is particularly versatile when evaluating the publication output of entire research institutions. This paper exemplifies the potential of the success-index by means of several practical applications: (i) comparison of groups of researchers within the same scientific field but affiliated with different universities, (ii) comparison of different departments of the same university, and (iii) comparison of entire research institutions. A sensitivity analysis highlights the success-index's robustness. Empirical results suggest that the success-index may be conveniently extended to large-scale assessments, i.e., those involving a large number of researchers and research institutions.
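As a rough illustration of the idea behind the success-index, the sketch below counts a paper as "successful" when its citations reach a paper-specific, field-normalized comparison term; how that comparison term is built (e.g., from same-field, same-year reference papers) is part of the authors' methodology and is simply taken as an input here. The function name and sample numbers are hypothetical, not from the paper.

```python
# Hedged sketch of the success-index idea: a paper counts toward the index when
# its citation count reaches its own field-normalized comparison term CT_i.
# The thresholds are assumed to be supplied by some field-normalization procedure.

def success_index(citations, thresholds):
    """Number of papers whose citation count meets its own comparison term CT_i."""
    if len(citations) != len(thresholds):
        raise ValueError("one comparison term per paper is required")
    return sum(1 for c, ct in zip(citations, thresholds) if c >= ct)

if __name__ == "__main__":
    cites = [14, 9, 3, 0, 22]           # hypothetical per-paper citation counts
    cts = [10.2, 10.2, 4.5, 4.5, 18.0]  # hypothetical field/year comparison terms
    print(success_index(cites, cts))    # -> 2
```

Size-independent variants, as mentioned in the abstract, could then be obtained by dividing this count by the number of papers in the group under evaluation.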

Evaluation of h and h-type Indices for Research Performance at Researcher's Level

Information Development, 2016

Various new performance evaluation indices for impact have been proposed and studied in various contexts. This study carried out a quantitative evaluation of the case of Malaysian engineering researchers at the micro level using a scientometric approach. To understand the behavior of these new developments, a comparative performance evaluation is carried out of the h-index and a set of h-type indices, along with publication and citation metrics. Findings are compared with earlier major studies. We examined institutional h-index and researchers' h-index scores and found no relationship between them. Exploratory Factor Analysis is employed to examine the validity of the categorization and to study the underlying dimensions of the studied metrics and indices. The inter-correlation among the h-index, its variants, and traditional metrics is probed in detail. The h-, q-, and g-indices, along with publication and citation counts, load on the 'quantity of the productive core', while the R-index showed equal loading on both cores. For the case of Malaysian engineering researchers, two conspicuous findings are observed for total citations and the g-index: both are associated with the first component, named 'quantity of the productive core'. Our findings strengthen the point that citation count has a strong association with the 'quantity of the productive core' and cannot be used as the sole impact-evaluation measure.
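To make the index families compared in this study concrete, below is a minimal sketch (not taken from the paper) of how the h-, g-, and R-indices can be computed from a researcher's per-paper citation counts; the function names and the sample data are illustrative assumptions.

```python
# Illustrative implementations of three of the indices discussed above,
# computed from a hypothetical list of per-paper citation counts.

def h_index(citations):
    """Largest h such that at least h papers have h or more citations."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have at least g^2 citations
    (this simple version caps g at the number of papers)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank ** 2:
            g = rank
    return g

def r_index(citations):
    """Square root of the total citations received by the h-core papers."""
    cites = sorted(citations, reverse=True)
    h = h_index(cites)
    return sum(cites[:h]) ** 0.5

if __name__ == "__main__":
    sample = [25, 19, 12, 8, 6, 4, 3, 1, 0]  # hypothetical researcher
    print(h_index(sample), g_index(sample), round(r_index(sample), 2))  # -> 5 8 8.37
```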

Are productivity, impact and visibility indicators appropriate for measuring the quality of research conducted in universities?

The outcome of scientific activity conducted by universities materializes in a number of ways, including publications, research projects, the attainment of scientific repute, and the PhD theses awarded. This communication contains the preliminary results of a research project designed to evaluate the quality of Spanish public universities' scientific activity on the grounds of a wide range of indicators that take into consideration most of the aspects involved in this activity. The objective was to determine the relationship between bibliometric indicators of productivity, impact and visibility and indicators of scientific repute, external funding and researcher training capacity in Spanish public universities. The study was based on a review of Spanish public universities' scientific activity in 2002-2006. The relationships between indicators were computed in terms of the determination coefficient (R²), which measures the percentage of data variability that can be explained by such associations. The results revealed that bibliometric indicators are only weakly related to other measures of scientific activity relevant to university research.
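For reference, the determination coefficient used in this study is the standard R² of a simple linear fit, i.e., the squared Pearson correlation between two indicator series. A minimal sketch follows, with hypothetical example data rather than figures from the paper.

```python
# Hedged sketch: R^2 between a bibliometric indicator and another measure of
# scientific activity. The example series below are invented for illustration.

def r_squared(x, y):
    """R^2 of a simple linear fit of y on x (squared Pearson correlation)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return (cov * cov) / (var_x * var_y)

if __name__ == "__main__":
    productivity = [120, 85, 200, 150, 60]    # e.g., publications per university
    funding = [3.1, 2.0, 4.8, 2.9, 1.5]       # e.g., external funding (M EUR)
    print(round(r_squared(productivity, funding), 3))
```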

Key Performance Indicators (KPI) for Researchers at Different Levels & Strategies to Achieve it

2023

Key performance indicators (KPIs) are quantitative and qualitative metrics that play a pivotal role in guiding and evaluating research endeavours, fostering accountability, and enhancing the overall quality and impact of academic work. KPIs play a critical role in shaping the trajectory of academic researchers' careers. They provide a structured way to measure and enhance research productivity, impact, and collaboration, thereby contributing to the advancement of knowledge and the overall enrichment of the academic community. It is academically interesting to know the KPIs at the PhD scholars' level, post-doctorate scholars' level, and research supervisors' level. Methodology/Approach: The exploratory research method is adopted to analyse, compare, evaluate, interpret, and create KPIs at different academic research levels. The information is collected from scholarly articles using listed keywords with the help of search engines such as Google.com and Google Scholar, organizational websites, and AI tools such as ChatGPT and Bard. Using this relevant information, KPIs at different research levels are obtained. These KPIs are further analysed using the ABCD analysis framework. Findings/Result: Many KPIs are identified and listed at the PhD scholars' level, post-doctorate scholars' level, and research supervisors' level. Using the ABCD analysis framework, these KPIs are analysed and evaluated. It is believed that the KPIs identified systematically in this research will serve as guiding policies for academic researchers at the PhD scholars' level, post-doctorate scholars' level, and research supervisors' level. Originality/Value: For the first time, key performance indicators (KPIs) are identified systematically and presented using the exploratory research method. It is believed that, as in business organizations in other industries, these key indicators are expected to serve as guiding principles to enhance the academic research productivity of higher education and research institutions at the PhD scholars' level, post-doctorate scholars' level, and research supervisors' level.

On a New Index for Research Assessment

Journal of Information Science Theory and Practice, 2021

We introduce a new research assessment measure, called the research excellence index. The measure, which we denote by RE-index, accurately assesses the research performance of a researcher. The methodology used in deriving the RE-index tackles many of the flaws of popular research performance indicators such as publication counts, citation counts, and the h and g indices. A dataset is introduced, which takes advantage of the wide coverage of Scopus and the Library of Congress and, at the same time, deals with the Scopus database depth problem. For an academic publication x, prestige-type and length scores are assigned, and if x is published in an academic periodical publication J, the stature of J is identified through a quartile score. The three scores are used to assign a value score to every academic publication, and cited academic publications are given citation scores that cover both cases of including and excluding self-citations. The foregoing scores are used to derive another set of scores measuring the combined qualitative and quantitative aspects of the creative work, citations of creative work, informative work, and citations of informative work of a researcher. The scores take co-authorship into consideration. From these scores, two versions of the RE-index for a researcher are derived, covering the cases of including and excluding self-citations. The new measure is calculated for two mathematicians.