Distinguishing citation quality for journal impact assessment

A comprehensive examination of the relation of three citation-based journal metrics to expert judgment of journal quality

Journal of Informetrics, 2016

The academic and research policy communities have seen a long debate concerning the merits of peer review and quantitative citation-based metrics in the evaluation of research. Some have called for replacing peer review with metrics for certain evaluation purposes, while others have called for the use of peer review informed by metrics. Whatever one's position, a key question is the extent to which peer review and quantitative metrics agree. In this paper we study the relation between three journal metrics, source normalized impact per paper (SNIP), raw impact per paper (RIP), and the Journal Impact Factor (JIF), and human expert judgment. Using the journal rating system produced by the Excellence in Research for Australia (ERA) exercise, we examine the relationship over a set of more than 10,000 journals categorized into 27 subject areas. We analyze the relationship in terms of correlation, the distribution of the metrics over the rating tiers, and ROC analysis. Our results show that SNIP consistently has stronger agreement with the ERA rating, followed by RIP and then JIF, along every dimension measured. The fact that SNIP agrees more strongly than RIP indicates that the increase in agreement is due to SNIP's database citation potential normalization factor. Our results suggest that SNIP may be a better choice than RIP or JIF for evaluating journal quality in situations where agreement with expert judgment is an important consideration.
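
As a concrete illustration of the ROC analysis mentioned in this abstract, a minimal Python sketch follows. It assumes a binary reduction of the expert rating (top tier vs. lower tier) and uses scikit-learn's roc_auc_score; the journal values are invented, not taken from the study.

    # Minimal sketch of ROC-based agreement between a citation metric and
    # expert ratings. Values are hypothetical; the study used ERA tiers
    # over 10,000+ journals.
    from sklearn.metrics import roc_auc_score

    # 1 = rated in a top tier by the expert panel, 0 = lower tier.
    expert_top_tier = [1, 1, 0, 1, 0, 0, 1, 0]

    # Hypothetical SNIP values for the same eight journals.
    snip = [2.4, 1.9, 0.8, 2.1, 1.1, 0.6, 1.7, 0.9]

    # AUC of 1.0 means the metric ranks every top-tier journal above every
    # lower-tier one; 0.5 is chance-level agreement.
    print(roc_auc_score(expert_top_tier, snip))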

Evaluating Journal Quality and the Association for Information Systems Senior Scholars’ Journal Basket via Bibliometric Measures: Do Expert Journal Assessments Add Value?

2015

Consistent with Straub and Anderson (2010), we recognize that a journal’s quality and a journal’s impact, reputation, and influence are not necessarily equivalent. Similarly, an underlying nomology likely exists, largely unknown and unresearched, in which key factors of quality (e.g., rigor of the review process, caution with respect to editorial oversight, accuracy of content, etc.) predict journal impact or influence (Straub and Anderson 2010). However, given the complex and unknown nature of this nomology, and following extant practice in scientometrics research, we follow Straub and Anderson in simply equating journal quality with journal impact and reputation for pragmatic purposes. On this basis, we categorize the various methods of assessing journal quality through this lens into three methodological approaches: expert assessment, citation analyses, and non-validated approaches. We review these approaches to better establish the foundation for our choice to …

Assessing Publication Impact Through Citation Data

SAA Archaeological Record, 2006

Within academia, there is a growing movement to document journal quality as a means of evaluating research impact, particularly for the purpose of tenure and promotion evaluations. Indeed, this paper originates out of research I conducted for my tenure ...

Evaluating Journal Quality: A Review of Journal Citation Indicators and Ranking in Business and Management

Evaluating the quality of academic journals is becoming increasingly important within the context of research performance evaluation. Traditionally, journals have been ranked by peer review lists, such as that of the Association of Business Schools (UK), or through their journal impact factor (JIF). However, several new indicators have been developed, such as the h-index, SJR, SNIP, and the Eigenfactor, which take different factors into account and therefore have their own particular biases. In this paper we evaluate these metrics both theoretically and through an empirical study of a large set of business and management journals. We show that even though the indicators appear highly correlated, they in fact lead to large differences in journal rankings. We contextualise our results in terms of the UK's large-scale research assessment exercise (the RAE/REF) and particularly the ABS journal ranking list. We conclude that no one indicator is superior, but that the h-index (which incorporates the productivity of a journal) and SNIP (which aims to normalize for field effects) may be the most effective at the moment.
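
To make two of these indicators concrete, a hedged Python sketch follows (citation counts and rankings are invented): it computes a journal's h-index and the Spearman rank correlation between two indicator-based rankings, the statistic typically behind "highly correlated" claims.

    from scipy.stats import spearmanr

    def h_index(citations):
        # Largest h such that h papers each have at least h citations.
        cited = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(cited, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3

    # Rank correlation between two hypothetical journal rankings (e.g., by
    # JIF and by SNIP): a high rho can coexist with reordered journals.
    rho, _ = spearmanr([1, 2, 3, 4, 5, 6], [2, 1, 3, 5, 4, 6])
    print(rho)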

Impact factor and other standardized measures of journal citation: A perspective

Indian Journal of Dental Research, 2009

The impact factor has been widely used as a glory quotient. Despite its limitations, this citation metric is widely used to reflect scientific merit and standing in one's field. Apart from the impact factor, other bibliometric indicators are also available but are not as popular among decision makers; these include the immediacy index and cited half-life. The impact factor itself is affected by a wide range of sociological and statistical factors. This paper discusses the limitations of the impact factor, with suggestions on how it can be used and how it should not be used. It also discusses how other bibliometric indicators can be used to assess the quality of publications.
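
For reference, a small Python sketch of the standard definitions behind two of these indicators follows; the counts are hypothetical, and the formulas are the conventional ones rather than anything specific to this paper.

    def impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
        # Two-year impact factor: citations received this year to items
        # published in the previous two years, per citable item.
        return cites_to_prev_two_years / citable_items_prev_two_years

    def immediacy_index(cites_to_current_year, items_current_year):
        # Citations this year to items published this year, per item.
        return cites_to_current_year / items_current_year

    # e.g., 300 citations in 2009 to 2007-2008 papers over 150 citable items:
    print(impact_factor(300, 150))   # 2.0
    print(immediacy_index(40, 100))  # 0.4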

Caveats for the use of citation indicators in research and journal evaluations

Journal of the American Society for Information Science and Technology, 2008

Ageing of publications, percentage of self-citations, and impact vary from journal to journal within fields of science. The assumption that citation and publication practices are homogeneous within specialties and fields of science is invalid. Furthermore, the delineation of fields and specialties is fuzzy. Institutional units of analysis and persons may move between fields or span different specialties. The match between the citation index and institutional profiles varies among institutional units and nations, and the respective matches may heavily affect how the units are represented. Non-ISI journals are increasingly cornered into "transdisciplinary" Mode-2 functions, with the exception of specialist journals publishing in languages other than English. An "externally cited impact factor" can be calculated for these journals. The citation impact of non-ISI journals is demonstrated using Science and Public Policy as the example.
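
A hedged sketch of the "externally cited impact factor" idea: count only citations that arrive from journals covered by the index (excluding self-citations) and normalize by the journal's citable items. The function and data below are illustrative; the paper's exact operationalization may differ.

    def externally_cited_impact(citation_pairs, journal, indexed, items):
        # citation_pairs: iterable of (citing_journal, cited_journal).
        external = sum(
            1
            for citing, cited in citation_pairs
            if cited == journal and citing in indexed and citing != journal
        )
        return external / items

    cites = [("JASIST", "SPP"), ("Scientometrics", "SPP"), ("SPP", "SPP")]
    # Two external citations over 50 citable items -> 0.04
    print(externally_cited_impact(cites, "SPP", {"JASIST", "Scientometrics"}, 50))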

Evaluating Journal Quality and the Association for Information Systems Senior Scholars' Journal Basket Via Bibliometric Measures: Do Expert Journal Assessments Add Value?

MIS Quarterly, 2013

Information systems journal rankings and ratings help scholars focus their publishing efforts and are widely used surrogates for judging the quality of research. Over the years, numerous approaches have been used to rank IS journals, such as citation metrics, school lists, acceptance rates, and expert assessments. However, the results of these approaches often conflict due to a host of validity concerns. In the current scientometric study, we make significant strides toward correcting for these limitations in the ranking of mainstream IS journals. We compare expert rankings to bibliometric measures such as the ISI Impact Factor™, the h-index, and social network analysis metrics. Among other findings, we conclude that bibliometric measures provide very similar results to expert-based methods in determining a tiered structure of IS journals, thereby suggesting that bibliometrics can be a complete, less expensive, and more efficient substitute for expert assessment. We also find strong support for seven of the eight journals in the Association for Information Systems Senior Scholars' "basket" of journals. A cluster analysis of our results indicates a two-tiered separation in the quality of the highest-quality IS journals, with MIS Quarterly, Information Systems …
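
As an illustration of the cluster-analysis step, a minimal Python sketch follows: one-dimensional k-means over a bibliometric score can surface a tiered structure. The scores are invented, and the study's actual variables and method details are not reproduced here.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical composite bibliometric scores for six journals.
    scores = np.array([[5.1], [4.8], [2.1], [1.9], [1.7], [1.5]])

    # Two clusters -> a two-tiered grouping of the journals.
    tiers = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
    print(tiers)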

The drivers of citations in management science journals

2010

The number of citations is becoming an increasingly popular index for measuring the impact of a scholar's research or the quality of an academic department. One obvious question is: what factors influence the number of citations that a paper receives? This study investigates the number of citations received by papers published in six management science journals. It considers factors that relate to the author(s), the article itself, and the journal. The results show that the strongest factor is the journal itself, but other factors are also significant, including the length of the paper, the number of references, the status of the first author's institution, and the type of paper, especially whether it is a review. Overall, this study provides some insight into the determinants of a paper's impact, which is helpful for particular stakeholders in making important decisions.
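
The modelling idea can be sketched as a regression of (log-transformed) citation counts on article-level features; everything below, including the feature set and data, is illustrative rather than the paper's actual specification.

    import numpy as np

    # Columns: intercept, page count, number of references, is_review (0/1).
    X = np.array([
        [1, 12,  40, 0],
        [1, 30, 120, 1],
        [1,  8,  25, 0],
        [1, 22,  80, 0],
        [1, 35, 150, 1],
    ])
    citations = np.array([10, 95, 4, 30, 120])

    # Least-squares fit on log(1 + citations) to tame the skew.
    coef, *_ = np.linalg.lstsq(X, np.log1p(citations), rcond=None)
    print(dict(zip(["intercept", "pages", "refs", "review"], coef.round(3))))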

A Survey of Impact and Citation Indices: Limitations and Issues

Research projects can be evaluated by evaluating the research publications produced through those projects. Research publications are evaluated using impact factors and citation indices. Several citation indices have been proposed to assess the value of a research publication or the research impact of an author or a journal. In this paper, an extensive survey is conducted to evaluate the majority of the citation indices. Using examples, we demonstrate some of the limitations and problems with ...