The Justification of Journal Rankings – A Pilot Study
Related papers
Mediterranean Journal of Social & Behavioral Research, 2021
The use of the impact factor (IF) in the scientific and academic world is not new; it is a phenomenon that has gained widespread recognition and use. In modern higher education, however, there is a trend of evaluating academics based on the impact factor of the journals in which their scholarly work is published. This trend is gradually shifting the paradigm from assessing research content to assessing the publication venue, which does not align with the original purpose of the IF as conceived by Garfield in 1955. One question that has continued to agitate the minds of concerned academics is whether the IF of journals is a dependable measure of research quality. This paper attempts to address that problem. Based on a thorough literature search and filtering, several problems with the use of the IF as a measure of research quality are discussed, along with their implications. Recommendations are also made that aim to provide a way forward in higher education.
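For context on what is being debated here: Garfield's two-year impact factor is simply a ratio — citations received in year Y to items published in the two preceding years, divided by the number of citable items from those years. A minimal sketch with hypothetical numbers (not real journal data):

```python
def two_year_impact_factor(citations_in_year, citable_items):
    """Garfield-style two-year journal impact factor.

    citations_in_year: citations received in year Y to articles
        published in years Y-1 and Y-2.
    citable_items: number of citable articles published in Y-1 and Y-2.
    """
    return citations_in_year / citable_items

# Hypothetical journal: 460 citations in 2021 to 200 articles from 2019-2020.
print(two_year_impact_factor(460, 200))  # 2.3
```

Note that the ratio describes the journal as a whole, not any individual article in it — which is precisely the objection raised above.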
Scientific evaluation of the scholarly publications
Journal of Pharmacology and Pharmacotherapeutics, 2013
The worthiness of any scientific journal is measured by the quality of the articles published in it. The impact factor (IF) is one popular tool that gauges the quality of a journal in terms of the citations received by its published articles. It is usually assumed that journals with a high IF carry meaningful, prominent, and high-quality research. Since the IF assesses the whole journal rather than a single contribution, the evaluation of individual authors should not be influenced by the IF of the journal. The h index, g index, m quotient, and c index are alternatives for judging the quality of an author; they address shortcomings of the IF by considering the number of citations received by an author, active years of publication, length of academic career, and citations received for recent articles. Quality being the most desirable aspect in evaluating an author's work over the active research phase, various indices have attempted to accommodate the relevant variables, yet each index has its own merits and demerits. We review the available indices, identify their fallacies, and, to correct these, propose the Original Research Performance Index (ORPI) for evaluating an author's original work, which can also account for the bias arising from self-citations, gift authorship, inactive phases of research, and the length of non-productive periods in research.
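Of the alternatives listed, the h index has the simplest operational definition: an author has index h if h of their papers have at least h citations each. A minimal sketch, assuming the per-paper citation counts are already known (the example author is hypothetical):

```python
def h_index(citations):
    """h = largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Hypothetical author with six papers:
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```

The g index, m quotient, and c index each adjust this basic idea (for highly cited papers, career length, and recency respectively), which is why no single index settles the evaluation question.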
A Bibliometric Analysis of Faculty Research Performance Assessment Methods
Journal of the Korean Society For Information Management, 2011
Effective assessment of faculty research performance should consider both the quality and the quantity of faculty research. This study analyzed methods for evaluating faculty research output by comparing rankings of Library and Information Science (LIS) faculty produced by publication counts, citation counts, and the research performance assessment guidelines employed by Korean universities. The results indicated that faculty rankings based on publication counts differed significantly from those based on citation counts. Additionally, faculty rankings measured by university guidelines correlated more strongly with rankings based on publication counts than with rankings based on citation counts, while differences among university guidelines did not significantly affect the faculty rankings. The findings suggest the need for bibliometric indicators that reflect the quality as well as the quantity of research output.
PLOS ONE, 2017
The scientific foundation for the criticism of the use of the Journal Impact Factor (JIF) in evaluations of individual researchers and their publications was laid between 1989 and 1997 in a series of articles by Per O. Seglen. His basic work has since influenced initiatives such as the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for research metrics, and The Metric Tide review on the role of metrics in research assessment and management. Seglen, however, studied the publications of only 16 senior biomedical scientists. We investigate whether his main findings still hold when the same methods are applied to a much larger group of Norwegian biomedical scientists with more than 18,000 publications. Our results support and add new insights to Seglen's basic work.
Rigor, Impact and Prestige: A Proposed Framework for Evaluating Scholarly Publications
Innovative Higher Education, 2012
As publication pressure has increased in the world of higher education, more journals, books, and other publication outlets have emerged. It is therefore critical to develop clear criteria for effectively evaluating the quality of publication outlets. Without such criteria, funding agencies and promotion committees are forced to guess at how to evaluate a scholar's portfolio. In this article, we explore the perils of evaluating journals based on a single quantitative measure (e.g., the Impact Factor of the Institute for Scientific Information). We then discuss key considerations for evaluating scholarship, organized around three main criteria: rigor, impact, and prestige. We conclude with examples of how these criteria can be applied in evaluating scholarship.
What are we really evaluating when we rank journals: Comparisons of views
2003
This paper examines differences in academics' perceptions of how journals should be evaluated in terms of their prestige, contribution to theory, contribution to practice, and contribution to teaching. Comparisons are made between individual and institutional weightings, across regions, and by whether an individual works at an institution offering a PhD/DBA. Some differences were identified, suggesting that the evaluative criteria used to rank journals may be influenced by employment situations.
Journal Publications, Indexing and Academic Excellence: Have We Chosen the Right Path?
Journal communication has been revolutionized by the evolution of online and open access publication. Even the older, traditional journals have adopted the online mode for paper submission, peer review, and editorial procedures before a research communication is published. Most journals now have both an online and a print version, either freely available or available by subscription. Journal publishing currently attracts a great deal of criticism. This review elaborates on the current status of journal publication, indexing, the impact factor, authorship, and criteria for the assessment of academic excellence.
What Are We Measuring When We Evaluate Journals?
Journal of Marketing Education, 2005
This article reports two studies examining issues related to journal rankings. Study 1 examines the consistency between journal rankings reported in past studies; it finds that while there is consistency across these studies, the consistency does not always hold outside the top-ranked journals. Study 2 explores whether individuals believe that the weighting of four underlying evaluative criteria (prestige, contribution to theory, contribution to practice, and contribution to teaching) should vary, based on
Quality vs. Quantity: Novel ways to evaluate academic output
The evaluation of scientific output plays a key role in the allocation of research funds and academic positions. Decisions are often based on quality indicators for academic journals, and over the years a handful of scoring methods have been proposed for this purpose. Discussing the most prominent methods (the de facto standards), we show that they do not distinguish quality from quantity at the article level. The systematic bias we find is analytically tractable and implies that the methods are manipulable. We introduce modified methods that correct for this bias and use them to provide rankings of economics journals. Our methodology is transparent; our results are replicable.
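The quality-versus-quantity distinction can be made concrete with a toy example (hypothetical numbers; this does not reproduce the paper's corrected methods): a score based on total citations rewards sheer volume, while a per-article score is invariant to how many articles a journal publishes.

```python
# Two hypothetical journals with identical per-article quality:
journal_a = [6, 6, 6, 6]          # 4 articles, 6 citations each
journal_b = [6, 6, 6, 6] * 5      # 20 articles, 6 citations each

def total_citations(cites):
    """Size-dependent score: grows with the number of articles."""
    return sum(cites)

def per_article(cites):
    """Size-independent score: average citations per article."""
    return sum(cites) / len(cites)

print(total_citations(journal_a), total_citations(journal_b))  # 24 120
print(per_article(journal_a), per_article(journal_b))          # 6.0 6.0
```

A size-dependent score can thus be inflated simply by publishing more articles of the same quality — the kind of manipulability at the article level that the abstract refers to.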