P-score: a reputation bibliographic index that complements citation counts
Related papers
The Ranking of Researchers by Publications and Citations
2016
Researcher-level metrics assess a researcher's publications and the number of citations each publication receives. This paper empirically tests 28 two-variable metrics, 26 of which are new in this paper, constructed as geometric means of eight one-variable metrics. The 54 highest-ranked researchers in RePEc are considered, 13 of whom are Nobel prize winners. One new one-variable metric, the number of citations of the 10th most cited publication, is introduced. Characteristics of the eight one-variable metrics are examined, illustrating why two-variable metrics are needed. The 54 researchers are ranked on all 36 metrics. The lowest sum of ranks for the 13 Nobel prize winners occurs for metric c1, the number of citations of the most highly cited publication. The 13 Nobel prize winners rank on average 5.3 places higher on w than on h, suggesting a need to be widely cited that is not captured by the h-index. The metric nc, the square root of the product of the number of publications and the c...
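The one- and two-variable metrics the abstract compares can be illustrated with a short sketch (a minimal illustration, not the paper's exact implementation; the metric names `h_index`, `c_k`, and `geometric_mean` are my own labels for the quantities described):

```python
import math

def h_index(citations):
    """h-index: the largest h such that h papers each have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def c_k(citations, k=1):
    """Citation count of the k-th most cited publication (c1, c10, ...)."""
    ranked = sorted(citations, reverse=True)
    return ranked[k - 1] if len(ranked) >= k else 0

def geometric_mean(x, y):
    """A two-variable metric formed as the geometric mean of two one-variable metrics."""
    return math.sqrt(x * y)

record = [50, 30, 12, 8, 4, 1]   # citation counts of one researcher's papers
print(h_index(record))           # 4
print(c_k(record, 1))            # 50: the metric c1 from the abstract
```

Here the h-index is 4 even though one paper has 50 citations, which is why the paper argues that metrics such as c1 capture being widely cited in a way h does not.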
The Pagerank-Index: Going beyond Citation Counts in Quantifying Scientific Impact of Researchers
Quantifying and comparing the scientific output of researchers has become critical for governments, funding agencies and universities. Comparison by reputation and direct assessment of contributions to the field is no longer possible, as the number of scientists increases and traditional boundaries between scientific fields become blurred. The h-index is often used for comparing scientists, but has several well-documented shortcomings. In this paper, we introduce a new index for measuring and comparing the publication records of scientists: the pagerank-index (symbolised as π). The index uses a version of the pagerank algorithm and the citation networks of papers in its computation, and is fundamentally different from the existing variants of the h-index because it considers not only the number of citations but also the actual impact of each citation. We adopt two approaches to demonstrate the utility of the new index. Firstly, we use a simulation model of a community of authors, in which we create various 'groups' of authors that differ in their inherent publication habits, to show that the pagerank-index is fairer than the existing indices in three distinct scenarios: (i) when authors try to 'massage' their index by publishing papers in low-quality outlets primarily to self-cite other papers; (ii) when authors collaborate in large groups in order to obtain more authorships; (iii) when authors spend most of their time producing genuine but low-quality publications that would inflate their index. Secondly, we undertake two real-world case studies: (i) the evolving author community of quantum game theory, as defined by Google Scholar; (ii) a snapshot of the high energy physics (HEP) theory research community in arXiv. In both case studies, we find that the lists of top authors vary significantly when the h-index and the pagerank-index are used for comparison.
We show that in both cases, authors who have collaborated in large groups and/or published less impactful papers tend to be favoured by the h-index, whereas the pagerank-index highlights authors who have made a relatively small number of definitive contributions, or written papers that served to highlight links between diverse disciplines, or typically worked in smaller
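The pagerank computation the abstract refers to can be sketched as a standard power iteration over the citation graph (a minimal illustration under my own assumptions, not the authors' exact formulation; the full π index additionally apportions each paper's score among its co-authors, which is omitted here):

```python
def pagerank(links, damping=0.85, iterations=100):
    """Power-iteration PageRank over a citation graph.

    links: dict mapping each paper to the list of papers it cites.
    A citation from a highly ranked paper contributes more to the cited
    paper's score than a citation from an obscure one.
    """
    nodes = set(links) | {p for cited in links.values() for p in cited}
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in nodes}
        for src in nodes:
            cited = links.get(src, [])
            if cited:
                share = damping * rank[src] / len(cited)
                for dst in cited:
                    new[dst] += share
            else:
                # dangling paper (cites nothing): spread its mass evenly
                for p in nodes:
                    new[p] += damping * rank[src] / n
        rank = new
    return rank

# Two papers cite B; B cites nothing, yet ends up with the highest score.
scores = pagerank({'A': ['B'], 'C': ['B']})
```

This is the sense in which the index weighs "the actual impact of each citation" rather than treating all citations equally, as a raw count or the h-index does.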