The FASEB Journal • Life Sciences Forum: Comparison of SCImago journal rank indicator with journal impact factor
Related papers
Comparison of SCImago journal rank indicator with journal impact factor
FASEB Journal, 2008
The application of currently available sophisticated algorithms of citation analysis allows the "quality" of citations to be incorporated in the evaluation of scientific journals. We sought to compare the newly introduced SCImago journal rank (SJR) indicator with the journal impact factor (IF). We retrieved the relevant information from the official Web sites hosting the above indices and their source databases. The SJR indicator is an open-access resource, while the journal IF requires a paid subscription. The SJR indicator (based on Scopus data) lists considerably more journal titles, published in a wider variety of countries and languages, than the journal IF (based on Web of Science data). Both indices divide the citations received by a journal by the number of articles the journal published during a specific time period. However, contrary to the journal IF, the SJR indicator assigns different weights to citations depending on the "prestige" of the citing journal, without the influence of journal self-citations; prestige is estimated by applying the PageRank algorithm to the network of journals. In addition, the SJR indicator includes the total number of documents of a journal in the denominator of the relevant calculation, whereas the journal IF includes only "citable" articles (mainly original articles and reviews). A 3-yr period is analyzed in both indices, but with different approaches. Regarding the top 100 journals in the 2006 journal IF ranking order, the median absolute change in their ranking position with the use of the SJR indicator is 32 (1st quartile: 12; 3rd quartile: 75). Although further validation is warranted, the novel SJR indicator stands as a serious alternative to the well-established journal IF, mainly because of its open-access nature, larger source database, and assessment of the quality of citations. -Falagas, M. E.,
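To make the weighting idea concrete, here is a minimal sketch of PageRank-style journal prestige in Python. The citation counts and damping factor are hypothetical illustrations; the actual SJR computation uses different normalization and damping details.

```python
import numpy as np

# Toy citation matrix: C[i, j] = citations from journal i to journal j.
# Hypothetical counts for three journals; the diagonal (self-citations)
# is zeroed out, mirroring SJR's exclusion of journal self-citations.
C = np.array([
    [0.0, 10.0, 2.0],
    [5.0,  0.0, 8.0],
    [1.0,  4.0, 0.0],
])
np.fill_diagonal(C, 0.0)

# Row-normalize so each row is the distribution of outgoing citations.
W = C / C.sum(axis=1, keepdims=True)

d = 0.85                        # damping factor, as in classic PageRank
n = C.shape[0]
prestige = np.full(n, 1.0 / n)  # start from uniform prestige

# Power iteration: a journal is prestigious if prestigious journals cite it.
for _ in range(100):
    prestige = (1 - d) / n + d * (W.T @ prestige)

print(prestige / prestige.sum())  # relative prestige per journal
```

A citation from a high-prestige journal thus counts for more than one from a low-prestige journal, which is exactly the "quality of citations" idea the abstract describes.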
Validation of Journal Impact Metrics of Web of Science and Scopus
Pakistan Journal of Information Management & Libraries, 2016
Citation-based metrics are widely used to assess the impact of research published in journals. This paper presents the results of a research study verifying the accuracy of the data and calculations behind the journal impact metrics presented in Web of Science (WoS) and Scopus, in the case of three journals of information and library science. Data collected from the websites of the journals were compared with data from the two citation databases. The study manually calculated the Journal Impact Factor (JIF) and the Impact per Publication (IPP) in accordance with the formulas given in the databases. Data were also collected from Google Scholar to draw a comparison. The study found discrepancies between the two sets of data and bibliometric values, i.e., between the values presented in WoS and Scopus and those calculated in this study. The commercial databases presented inflated measures based on fabricated or erroneous data. The study is of practical importance to researchers, universities, and research financing bodies that treat these bibliometric indicators as a tool for measuring performance and for the assessment and evaluation of the quality of research and researchers.
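For reference, the two metrics the study recalculates by hand follow the standard window-based form. In the notation below (ours, not the databases'), C_y(k) is the number of citations received in year y by items published in year k, and N_k is the number of citable items published in year k:

\[ \mathrm{JIF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}, \qquad \mathrm{IPP}_y = \frac{C_y(y-1) + C_y(y-2) + C_y(y-3)}{N_{y-1} + N_{y-2} + N_{y-3}} \]

The JIF uses a 2-year citation window and the IPP a 3-year window; discrepancies of the kind the study reports arise when the citation counts or the counts of citable items in these formulas differ from the raw data on the journals' websites.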
Impact factor and other standardized measures of journal citation: A perspective
Indian Journal of Dental Research, 2009
The impact factor of journals has been widely used as a glory quotient. Despite its limitations, this citation metric is widely used to reflect scientific merit and standing in one's field. Apart from the impact factor, other bibliometric indicators are also available but are not as popular among decision makers; these include the immediacy index and the cited half-life. The impact factor itself is affected by a wide range of sociological and statistical factors. This paper discusses the limitations of the impact factor, with suggestions on how it can be used and how it should not be used. It also discusses how the other bibliometric indicators can be used to assess the quality of publications.
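Both of these additional indicators have simple definitions. Using the same notation as above, the immediacy index for year y measures how quickly a journal's articles are cited:

\[ \mathrm{Immediacy}_y = \frac{C_y(y)}{N_y} \]

that is, citations received in year y by items published in that same year, per item. The cited half-life is the median age of the articles cited in the JCR year: the number of years, counting back from the current year, that account for half of all citations the journal received in that year.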
SJR and SNIP: two new journal metrics in Elsevier's Scopus
2010
Journal citation measures were originally developed as tools in the study of the scientific-scholarly communication system [1, 2]. But they soon found their way into journal management by publishers and editors, into library collection management, and then into broader use in research management and the assessment of research performance. Typical questions addressed with journal citation measures in all these domains are listed in Table 1. Table 1 also presents important points that users of journal citation measures should take into account. Many authors have underlined the need to correct for differences in citation characteristics between subject fields, so that users can be sure that differences are due only to citation impact and not to field-specific citation practices.
Four alternatives to the journal impact factor (IF) indicator are compared to find out their similarities. Together with the IF, the SCImago journal rank indicator (SJR), the Eigenfactor™ score, the Article Influence™ score, and the journal h-index of 77 journals from more than ten fields were collected. The results show that although these indicators are calculated with different methods, and even from different databases, they are strongly correlated with the WoS IF and with each other. These findings corroborate results published by several colleagues and show the feasibility of using free alternatives to the Web of Science for evaluating scientific journals.
Bibliometric Rankings of Journals Based on the Thomson Reuters Citations Database
Journal of Reviews on Global Economics, 2015
Virtually all rankings of journals are based on citations, including self-citations by journals and individual academics. The gold standard for bibliometric rankings based on citations data is the widely used Thomson Reuters Web of Science (2014) citations database, which publishes, among others, the celebrated Impact Factor. However, there are numerous bibliometric measures, also known as research assessment measures, based on the Thomson Reuters citations database, and they do not all seem to have been collected in a single source. The purpose of this paper is to present, define, and compare the 16 best-known Thomson Reuters bibliometric measures in a single source. It is important that the existing bibliometric measures be presented in any ranking paper, as alternative bibliometric measures based on the Thomson Reuters citations database can and do produce different rankings, as has been documented in a number of papers in the bibliometrics literature.
Annual Journal Citation Indices: A Comparative Study
Journal of Scientometric Research, 2016
We study the statistics of citations made to the science journals indexed in the Journal Citation Reports during the period 2004-2013, considering different measures that quantify the impact of the journals. To our surprise, we find that the apparently unrelated measures, even when defined in an arbitrary manner, show strong correlations. This holds over all the years considered. Since the impact factor is one of these measures, the present work raises the question of whether it is actually the nearly perfect index it is often claimed to be. In addition, we study the distributions of the different indices, which also behave similarly.
Correlation between the Journal Impact Factor and three other journal citation indices
Scientometrics, 2010
To determine the degree of correlation among journal citation indices that reflect the average number of citations per article, the most recent journal ratings were downloaded from the websites publishing four journal citation indices: the Institute for Scientific Information's journal impact factor index, Eigenfactor's article influence index, SCImago's journal rank index, and Scopus' trend line index. Correlations were determined for each pair of indices, using ratings from all journals that could be identified as having been rated on both indices. Correlations between the six possible pairings of the four indices were tested with Spearman's rho. Within each of the six possible pairings, the prevalence of identifiable errors was examined in a random selection of 10 journals and among the 10 most discordantly ranked journals on the two indices. The number of journals that could be matched within each pair of indices ranged from 1,857 to 6,508. Paired ratings for all journals showed strong to very strong correlations, with Spearman's rho values ranging from 0.61 to 0.89, all p < 0.001. Identifiable errors were more common among scores for journals that had very discordant ranks on a pair of indices. These four journal citation indices were significantly correlated, providing evidence of convergent validity (i.e., they reflect the same underlying construct of average citability per article in a journal). Discordance in the ranking of a journal on two indices was in some cases due to an error in one index.
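A minimal sketch of the kind of pairwise test reported above, using SciPy's Spearman rank correlation; the paired ratings are hypothetical, not data from the study:

```python
from scipy.stats import spearmanr

# Hypothetical paired ratings for the same eight journals on two indices
# (e.g., journal impact factor vs. SJR); values are illustrative only.
impact_factor = [31.2, 9.8, 5.4, 3.1, 2.7, 1.9, 1.2, 0.8]
sjr = [14.5, 4.2, 3.9, 1.8, 1.1, 1.3, 0.7, 0.4]

# spearmanr ranks both lists and correlates the ranks, which is why it
# suits indices that use different scales but should order journals alike.
rho, p_value = spearmanr(impact_factor, sjr)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3g}")
```

Because Spearman's rho compares rankings rather than raw values, two indices can correlate strongly even when their absolute scores differ by an order of magnitude.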
Nuclear Medicine Review. Central & Eastern Europe, 2012
Despite its widespread acceptance in the scientific world, the impact factor (IF) has recently been criticized on many accounts, including the lack of quality assessment of citations, the influence of self-citation, and English-language bias. In the current study, we evaluated three indices of journal scientific impact for nuclear medicine journals: the impact factor (IF), the Eigenfactor score (ES), and the SCImago journal rank indicator (SJR). Overall, 13 nuclear medicine journals are indexed in both ISI and SCOPUS and 7 in SCOPUS only. Self-citations, citations to non-English articles, citations to non-citable items, and citations to review articles contribute prominently to the IFs of some journals; these effects are detected to some extent by ES and SJR. Given the several shortcomings of the IF, considering all three indices when judging the quality of nuclear medicine journals would be a better strategy.
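A back-of-the-envelope illustration of how self-citations can inflate a 2-year IF, with hypothetical numbers (not taken from any of the journals studied):

```python
# Hypothetical counts for one journal over a 2-year citation window.
citations_total = 600   # citations in year y to items from y-1 and y-2
self_citations = 150    # of these, citations the journal makes to itself
citable_items = 200     # articles and reviews published in y-1 and y-2

if_with_self = citations_total / citable_items                        # 3.00
if_without_self = (citations_total - self_citations) / citable_items  # 2.25
print(if_with_self, if_without_self)
```

Here a quarter of the citations are self-citations, and removing them drops the IF from 3.00 to 2.25; ES and SJR exclude or dampen such citations, which is why they can flag journals whose IF is propped up this way.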
E-International Journal of Scientific Research, 2011
Publication in scientific academic journals is a key criterion for appointment, tenure, and promotion in many universities in developed countries. Most universities weigh publications according to the quality or impact of the journal. Traditionally, journal quality has been assessed through the ISI Journal Impact Factor (JIF), the SCImago Journal Rank, the Eigenfactor, and many more. However, this metric system remains novel or unattainable for many universities in underdeveloped and developing countries. This paper proposes an alternative metric system: the World Electronic Journals Impact Factor (WEJ impact factor). This metric will be an alternative mechanism for those journals that do not find a place in the ISI/Thomson Reuters, SCOPUS, and other databases. The WEJ impact factor is an open-access electronic journal metric that uses Google Scholar citations and the contribution factor of the journal, based on data from the E-International Scientific Research Journal Consortium; it is calculated from the contribution factor and the citation factor. Journals are categorized under Arts and Humanities, Science, Social Sciences, and Multidisciplinary before the impact factor is assigned. This simple new method of computing an impact factor gives electronic journals worldwide a fair chance to understand their impact factor in terms of quantity, quality, and context.