On the relation between the WoS impact factor, the Eigenfactor, the SCImago Journal Rank, the Article Influence Score and the journal h-index
Related papers
Comparison of SCImago journal rank indicator with journal impact factor
FASEB Journal, 2008
The application of currently available sophisticated algorithms of citation analysis allows the "quality" of citations to be incorporated in the evaluation of scientific journals. We sought to compare the newly introduced SCImago journal rank (SJR) indicator with the journal impact factor (IF). We retrieved relevant information from the official Web sites hosting the above indices and their source databases. The SJR indicator is an open-access resource, while the journal IF requires a paid subscription. The SJR indicator (based on Scopus data) lists considerably more journal titles, published in a wider variety of countries and languages, than the journal IF (based on Web of Science data). Both indices divide citations to a journal by the journal's articles over a specific time period. However, contrary to the journal IF, the SJR indicator attributes different weight to citations depending on the "prestige" of the citing journal, without the influence of journal self-citations; prestige is estimated by applying the PageRank algorithm to the network of journals. In addition, the SJR indicator includes the total number of documents of a journal in the denominator of the relevant calculation, whereas the journal IF includes only "citable" articles (mainly original articles and reviews). A 3-year period is analyzed by both indices, but with different approaches. Regarding the top 100 journals in the 2006 journal IF ranking, the median absolute change in ranking position under the SJR indicator is 32 (1st quartile: 12; 3rd quartile: 75). Although further validation is warranted, the novel SJR indicator poses as a serious alternative to the well-established journal IF, mainly due to its open-access nature, larger source database, and assessment of the quality of citations. —Falagas, M. E.
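The contrast the abstract draws between the two indices can be reduced to two choices: which documents go in the denominator, and whether each citation counts equally or is weighted by the citing journal's prestige. A minimal sketch, with invented journal names, counts, and prestige values (the real SJR derives prestige from a PageRank-style iteration over the whole journal network):

```python
# Toy contrast of the two scoring styles described above.
# All numbers are invented for illustration.

# Citations received by journal J, broken down by citing journal
citations_to_J = {"A": 120, "B": 40, "C": 10}
citable_items_J = 85   # IF denominator: "citable" articles and reviews only
all_documents_J = 110  # SJR-style denominator: every document

# IF-style score: every citation counts equally
if_style = sum(citations_to_J.values()) / citable_items_J

# SJR-style score: each citation weighted by the citing journal's
# PageRank-like prestige (these weights are made up)
prestige = {"A": 0.9, "B": 0.4, "C": 0.1}
sjr_style = sum(n * prestige[j] for j, n in citations_to_J.items()) / all_documents_J

print(round(if_style, 3), round(sjr_style, 3))  # → 2.0 1.136
```

The larger denominator and the sub-1.0 prestige weights both pull the SJR-style value below the IF-style value here, which is one mechanical reason the two rankings can diverge.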
Validation of Journal Impact Metrics of Web of Science and Scopus
Pakistan Journal of Information Management & Libraries, 2016
Citation-based metrics are widely used to assess the impact of research published in journals. This paper presents the results of a study verifying the accuracy of the data and calculations behind the journal impact metrics reported in Web of Science (WoS) and Scopus, for three journals of information and library science. Data collected from the journals' websites were compared with data from the two citation databases. The study manually calculated the Journal Impact Factor (JIF) and the Impact per Publication (IPP) according to the formulas given in the databases. Data were also collected from Google Scholar for comparison. The study found discrepancies between the two sets of bibliometric values, i.e., those presented in WoS and Scopus and those calculated in this study. The commercial databases presented inflated measures based on fabricated or erroneous data. The study is of practical importance to researchers, universities, and research-financing bodies that treat these bibliometric indicators as a tool for measuring performance and evaluating research quality as well as researchers.
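The two formulas the study recomputed by hand are both simple ratios; the difference is the citation window (two years for the JIF, three for the IPP). A hedged sketch with hypothetical counts:

```python
# Sketch of the two published formulas; all counts are hypothetical.

def jif(cites_prev2: int, citable_prev2: int) -> float:
    """2-year Journal Impact Factor: citations in year Y to items
    published in Y-1 and Y-2, divided by citable items from Y-1 and Y-2."""
    return cites_prev2 / citable_prev2

def ipp(cites_prev3: int, citable_prev3: int) -> float:
    """Impact per Publication: the same ratio over a 3-year window (Y-1..Y-3)."""
    return cites_prev3 / citable_prev3

print(jif(150, 100))  # → 1.5
print(ipp(210, 140))  # → 1.5
```

Because both numerator and denominator come straight from database counts, discrepancies of the kind the study reports must originate in the counts themselves rather than in the arithmetic.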
Correlation between the Journal Impact Factor and three other journal citation indices
Scientometrics, 2010
To determine the degree of correlation among journal citation indices that reflect the average number of citations per article, the most recent journal ratings were downloaded from the websites publishing four journal citation indices: the Institute for Scientific Information's journal impact factor index, Eigenfactor's article influence index, SCImago's journal rank index, and Scopus' trend line index. Correlations were determined for each pair of indices, using ratings from all journals that could be identified as having been rated on both indices. Correlations between the six possible pairings of the four indices were tested with Spearman's rho. Within each of the six possible pairings, the prevalence of identifiable errors was examined in a random selection of 10 journals and among the 10 most discordantly ranked journals on the two indices. The number of journals that could be matched within each pair of indices ranged from 1,857 to 6,508. Paired ratings for all journals showed strong to very strong correlations, with Spearman's rho values ranging from 0.61 to 0.89, all p < 0.001. Identifiable errors were more common among scores for journals that had very discordant ranks on a pair of indices. These four journal citation indices were significantly correlated, providing evidence of convergent validity (i.e., they reflect the same underlying construct of average citability per article in a journal). Discordance in the ranking of a journal on two indices was in some cases due to an error in one index.
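Spearman's rho, the statistic used throughout this comparison, correlates the rank orders of two score lists rather than the scores themselves. A minimal sketch with invented per-journal scores, assuming no tied values (ties require average ranks):

```python
# Spearman's rho between two hypothetical per-journal index scores.
# Assumes no tied values; ties would need average ranks.
def spearman_rho(x, y):
    n = len(x)
    # Map each score to its rank (1 = highest), descending order
    rank = lambda v: {val: i + 1 for i, val in enumerate(sorted(v, reverse=True))}
    rx, ry = rank(x), rank(y)
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n * n - 1))

impact_factor     = [9.1, 4.3, 2.8, 1.9, 0.7]  # invented scores
article_influence = [3.2, 1.1, 1.5, 0.6, 0.2]

print(spearman_rho(impact_factor, article_influence))  # → 0.9
```

A rho of 0.9 means the two indices order the journals almost identically even though their raw scales differ, which is exactly the kind of convergence the study reports.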
The journal relative impact: an indicator for journal assessment
Scientometrics, 2011
This paper presents the journal relative impact (JRI), an indicator for the scientific evaluation of journals. The JRI takes into account in its calculation the different citation cultures of the Web of Science subject categories. The JRI is calculated with a variable citation window, defined according to the time each subject category requires for the maturation of citations. The type of document considered in each subject category depends on its outputs in relation to the citations. The scientific performance of each journal is considered in relation to each subject category it belongs to, allowing comparison of the scientific performance of journals from different fields. The results obtained show that the JRI can be used for assessing the scientific performance of a given journal, and that the SJR and SNIP should be used to complement the information provided by the JRI. The JRI presents good features such as stability over time and predictability.
E-International Journal of Scientific Research, 2011
Publication in scientific academic journals is a key criterion for appointment, tenure, and promotion in many universities in developed countries. Most universities weigh publications according to the quality or impact of the journal. Traditionally, journal quality has been assessed through the ISI Journal Impact Factor (JIF), the SCImago Journal Rank, the Eigenfactor, and many more. However, these metric systems remain unfamiliar or out of reach for many universities in underdeveloped and developing countries. This paper proposes an alternative metric system, the World Electronic Journals Impact Factor (WEJ Impact Factor). This metric is intended as an alternative mechanism for journals that do not find a place in ISI/Thomson Reuters, Scopus, and other databases. The WEJ Impact Factor is an open-access electronic journal metric that uses Google Scholar citations and the contribution factor of the journal, based on data from the E-International Scientific Research Journal Consortium. The WEJ Impact Factor is calculated from the contribution factor and the citation factor. Journals are categorized under Arts and Humanities, Science, Social Sciences, and Multidisciplinary before the impact factor is assigned. This simple new method of computing an impact factor gives worldwide electronic journals a fair chance to understand their impact based on quantity, quality, and context.
Annual Journal Citation Indices: A Comparative Study
Journal of Scientometric Research, 2016
We study the statistics of citations made to the indexed science journals in the Journal Citation Reports during the period 2004-2013, using different measures that quantify the impact of the journals. To our surprise, we find that the apparently uncorrelated measures, even when defined in an arbitrary manner, show strong correlations. This holds over all the years considered. Impact factor being one of these measures, the present work raises the question of whether it is actually the nearly perfect index it is often claimed to be. In addition, we study the distributions of the different indices, which also behave similarly.
Journal Impact Factor, Eigenfactor, Journal Influence and Article Influence
2012
This paper examines the practical usefulness of two new journal performance metrics, namely the Eigenfactor score, which may be interpreted as measuring "Journal Influence", and the Article Influence score, using the Thomson Reuters ISI Web of Science (hereafter ISI) data for 2009 for the 200 most highly cited journals in each of the Sciences and Social Sciences, and compares them with two existing ISI metrics, namely Total Citations and the 5-
Weighted Impact Factor (WIF) for assessing the quality of scientific journals
ArXiv, 2015
Nowadays the impact factor is the most significant indicator for journal evaluation. The impact factor calculation uses the number of all citations to a journal, regardless of the prestige of the citing journals; however, scientific units (a paper, researcher, journal, or scientific organization) cited by journals with a high impact factor, or by researchers with a high Hirsch index, are more important than objects cited by journals without an impact factor or by unknown researchers. This paper offers a weighted impact factor for obtaining more accurate journal rankings, one that considers not only the quantity of citations but also the quality of the citing journals. Correlation coefficients among different indicators for journal evaluation were analysed: impact factors by Thomson Scientific, weighted impact factors offered by different researchers, averages and medians of the impact factors of all citing journals, and 5-year impact factors.
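The core idea of a weighted impact factor is that each citation is scaled by the citing journal's own impact factor before being summed. A minimal sketch with invented numbers; published WIF proposals differ in how (or whether) they normalize the weights, so this is only the unnormalized form:

```python
# Sketch of the weighted-IF idea: scale each citation by the citing
# journal's own impact factor. All numbers are invented.
citing = [  # (citing journal's IF, citations it gave to the target journal)
    (8.0, 30),
    (2.5, 50),
    (0.0, 20),  # a citing journal without an impact factor contributes nothing
]
citable_items = 50

plain_if = sum(n for _, n in citing) / citable_items
weighted_if = sum(f * n for f, n in citing) / citable_items

print(plain_if, weighted_if)  # → 2.0 7.3
```

Note how the 20 citations from the IF-less journal vanish from the weighted score while counting fully in the plain one; that asymmetry is precisely what the weighting is designed to capture.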
In this paper, we take Nature as a case study to present a new method for calculating and analyzing the structure of the Impact Factor (IF) using the Web of Science (WoS) database and the definition of IF. We calculated Nature's IF for 2013 using data retrieved on 21 March 2014 from the WoS database and comprehensively analyzed its structural features, thereby identifying the contributions to Nature's IF for 2013 of different document types, highly cited papers, highly cited authors, different institutions, and different countries and regions. The results show: (a) the calculated value of Nature's IF for 2013 is 41.002 according to data retrieved in March of 2014; (b) two types of document, article and review, made prominent contributions to Nature's IF for 2013, with contribution values of 35.338 and 3.179, respectively; (c) the institutions that made the greatest contributions to Nature's IF for 2013 are mainly located in the USA and England, with the first three highly cited institutions being Harvard University, Stanford University, and Washington University; (d) the contributions of different countries and regions to Nature's IF for 2013 differ greatly, with the combined contribution rate of the USA and England being 70.4 percent.
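The decomposition described above works because every document type shares the same IF denominator, so the per-type contributions sum exactly to the journal's IF. A sketch with illustrative counts (not Nature's actual 2013 data, and the denominator here assumes the standard citable-items definition):

```python
# Decomposing an impact factor by document type. Each type's contribution
# is its share of the citation numerator over the shared denominator,
# so the contributions sum to the total IF. Counts are illustrative.
citations_by_type = {"article": 70000, "review": 6300, "other": 5000}
citable_items = 1982  # articles + reviews published in the two prior years

total_if = sum(citations_by_type.values()) / citable_items
contrib = {t: c / citable_items for t, c in citations_by_type.items()}

# The per-type contributions reconstruct the total exactly
assert abs(sum(contrib.values()) - total_if) < 1e-9
print({t: round(v, 3) for t, v in contrib.items()})
```

The same additivity holds for any other partition of the numerator (by institution, country, or highly cited paper), which is what lets the study quote percentage contributions such as the 70.4 percent figure.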