The Citation Wake of Publications Detects Nobel Laureates' Papers

How Citation Boosts Promote Scientific Paradigm Shifts and Nobel Prizes

PLoS ONE, 2011

Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-get-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why such cascades happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the "boosting effect" of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying "boost factor" is also useful for discovering scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract.
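
As a toy illustration of how such a boost might be quantified (the paper's exact definition of the boost factor differs; the function and variable names here are hypothetical), one can compare an earlier paper's average citation rate after the landmark publication to its rate before:

```python
# Hypothetical sketch: a simple "boost factor" for an earlier paper,
# defined here as the ratio of its mean citation rate after a landmark
# publication to its mean rate before. Names are illustrative only.

def boost_factor(yearly_citations, landmark_year, window=3):
    """yearly_citations: dict mapping year -> citations received that year."""
    before = [yearly_citations.get(y, 0)
              for y in range(landmark_year - window, landmark_year)]
    after = [yearly_citations.get(y, 0)
             for y in range(landmark_year + 1, landmark_year + 1 + window)]
    rate_before = sum(before) / window
    rate_after = sum(after) / window
    return rate_after / rate_before if rate_before > 0 else float("inf")

# Example: citations to a 1995 paper, with a landmark paper published in 2000.
cites = {1996: 4, 1997: 5, 1998: 3, 1999: 4, 2001: 12, 2002: 15, 2003: 14}
print(boost_factor(cites, landmark_year=2000))  # ~3.4x boost
```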

Citation analysis and the development of science: A case study using articles by some Nobel prize winners

Journal of the Association for Information Science and Technology, 2013

Using citation data of articles written by some Nobel Prize winners in physics, we show that concave, convex, and straight curves represent different types of interactions between old ideas and new insights. These cases illustrate different diffusion characteristics of academic knowledge, depending on the nature of the knowledge in the new publications. This work adds to the study of the development of science and links this development to citation analysis.
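
A minimal sketch of how such curve types could be told apart, assuming the curves are cumulative citation counts per year (the authors' classification is analytical; this second-difference heuristic is our own illustration):

```python
# Illustrative heuristic (not the authors' method): classify a cumulative
# citation curve as concave, convex, or straight from the sign of the
# mean second difference of its yearly values.

def classify_curve(cumulative, tol=0.1):
    second_diffs = [cumulative[i + 1] - 2 * cumulative[i] + cumulative[i - 1]
                    for i in range(1, len(cumulative) - 1)]
    mean_d2 = sum(second_diffs) / len(second_diffs)
    if mean_d2 > tol:
        return "convex"    # citations accelerating: delayed recognition
    if mean_d2 < -tol:
        return "concave"   # citations decelerating: impact fading
    return "straight"      # steady citation accrual

print(classify_curve([2, 5, 9, 14, 20]))     # convex
print(classify_curve([10, 18, 24, 28, 30]))  # concave
```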

Universality of citation distributions: Toward an objective measure of scientific impact

Proceedings of the National Academy of Sciences, 2008

We study the distributions of citations received by a single publication within several disciplines, spanning broad areas of science. We show that the probability that an article is cited c times has large variations between different disciplines, but all distributions are rescaled onto a universal curve when the relative indicator c_f = c/c_0 is considered, where c_0 is the average number of citations per article for the discipline. In addition, we show that the same universal behavior occurs when citation distributions of articles published in the same field, but in different years, are compared. These findings provide a strong validation of c_f as an unbiased indicator of citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h-index suitable for comparing scientists working in different fields.
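
A minimal sketch of the rescaling, with invented field data: dividing each article's citation count c by the field average c_0 yields the relative indicator c_f, which the paper shows collapses the per-field distributions onto one curve:

```python
# Sketch of the rescaling described above: c_f = c / c_0, where c_0 is
# the average number of citations per article in the field. The field
# data below are invented for illustration.

fields = {
    "physics": [120, 30, 4, 0, 55, 11],
    "biology": [300, 80, 10, 2, 140, 28],
}

for field, citations in fields.items():
    c0 = sum(citations) / len(citations)   # field average c_0
    c_f = [c / c0 for c in citations]      # relative indicator c_f
    print(field, [round(x, 2) for x in c_f])
```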

The Citation Merit of Scientific Publications

PLoS ONE, 2012

We propose a new method to assess the merit of any set of scientific papers in a given field based on the citations they receive. Given a field and a citation impact indicator, such as the mean citation or the h-index, the merit of a given set of n articles is identified with the probability that a randomly drawn set of n articles from a given pool of articles in that field has a lower citation impact according to the indicator in question. The method allows for comparisons between sets of articles of different sizes and fields. Using a dataset acquired from Thomson Scientific that contains the articles published in the periodical literature in the period 1998-2007, we show that the novel approach yields rankings of research units different from those obtained by a direct application of the mean citation or the h-index.
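
A Monte Carlo sketch of this merit measure under the mean-citation indicator (the dataset and trial count are illustrative): estimate the probability that a random same-size set drawn from the field pool scores lower than the set being evaluated:

```python
import random

# Monte Carlo sketch of the merit measure described above: the probability
# that a randomly drawn set of n articles from the field pool has a lower
# indicator value (here, mean citations) than the set under evaluation.
# All data are invented for illustration.

def merit(article_set, field_pool, indicator=lambda s: sum(s) / len(s),
          trials=20_000, rng=random.Random(42)):
    n = len(article_set)
    target = indicator(article_set)
    lower = sum(indicator(rng.sample(field_pool, n)) < target
                for _ in range(trials))
    return lower / trials

field_pool = [0, 0, 1, 2, 2, 3, 5, 8, 13, 21, 34, 55, 89]
unit = [8, 21, 34]
print(merit(unit, field_pool))  # merit in [0, 1]; higher = better
```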

Methods to account for citation inflation in research evaluation

Research Policy, 2019

Quantitative research evaluation requires measures that are transparent, relatively simple, and free of disciplinary and temporal bias. We document and provide a solution to a hitherto unaddressed temporal bias, citation inflation, which arises from the basic fact that scientific publication is steadily growing at roughly 4% per year. Moreover, the total production of citations grows by a factor of 2 every 12 years, meaning that the real value of a citation depends on when it was produced. Consequently, failing to convert nominal citation values into real citation values produces significant mis-measurement of scientific impact. To address this problem, we develop a citation deflator method, outline the steps to generalize and implement it using the Web of Science portal, and analyze a large set of researchers from biology and physics to demonstrate how two common evaluation metrics, total citations and the h-index, can differ by a remarkable amount depending on whether the underlying citation counts are deflated or not. In particular, our results show that the scientific impact of prior generations is likely to be significantly underestimated when citations are not deflated, often by 100% or more of the nominal value. Our study therefore points to the need for a systemic overhaul of the counting methods used in evaluating citation impact, especially for researchers, journals, and institutions, whose records can span several decades and thus several doubling periods.
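
A simplified deflator sketch based on the doubling time quoted above; the paper builds its deflator from measured yearly citation output, so the closed-form exponential here is an assumption:

```python
# Sketch of a citation deflator using the doubling time quoted in the
# abstract (total citation production doubles roughly every 12 years).
# A citation earned in year y is converted to "real" citations of a base
# year by dividing by the growth factor 2 ** ((y - base_year) / 12).
# The paper's deflator is empirical; this exponential form is a
# simplifying assumption.

DOUBLING_TIME = 12.0  # years, per the abstract

def real_citations(citations_by_year, base_year=2000):
    return sum(c / 2 ** ((year - base_year) / DOUBLING_TIME)
               for year, c in citations_by_year.items())

# A 2015 citation counts for less, in base-2000 terms, than a 1990 one.
print(real_citations({1990: 10, 2015: 10}))  # ~17.8 + ~4.2 = ~22 real citations
```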

Methods for Measuring the Citations and Productivity of Scientists across Time and Discipline

Publication statistics are ubiquitous in the ratings of scientific achievement, with citation counts and paper tallies factoring into an individual's consideration for postdoctoral positions, junior faculty positions, and tenure. Citation statistics are designed to quantify individual career achievement, both at the level of a single publication and over an individual's entire career. While some academic careers are defined by a few significant papers (possibly out of many), other academic careers are defined by the cumulative contribution made by the author's publications to the body of science. Several metrics have been formulated to quantify an individual's publication career, yet none of these metrics account for collaboration group size or the time dependence of citation counts. In this paper we normalize publication metrics in order to achieve a universal framework for analyzing and comparing scientific achievement across both time and discipline. We study the publication careers of individual authors over the 50-year period 1958-2008 within six high-impact journals, among them CELL and Science. Using the normalized metrics (i) "citation shares" to quantify scientific success and (ii) "paper shares" to quantify scientific productivity, we compare the career achievement of individual authors within each journal, where each journal represents a local arena for competition. We uncover quantifiable statistical regularity in the probability density function of scientific achievement in all journals analyzed, which suggests that a fundamental driving force underlying scientific achievement is the competitive nature of scientific advancement.
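
A simplified sketch of such normalizations (the paper's exact definitions differ): divide a paper's citations by its number of coauthors and by the mean citations of its journal-year cohort, removing group-size and age effects:

```python
# Simplified sketch of the normalizations described above (the paper's
# exact definitions differ): a "citation share" divides a paper's
# citations by the number of coauthors and by the mean citations of its
# journal-year cohort; a "paper share" splits paper credit equally.

def citation_share(citations, n_authors, journal_year_mean):
    return citations / (n_authors * journal_year_mean)

def paper_share(n_authors):
    return 1.0 / n_authors

# A 5-author paper with 50 citations in a cohort averaging 25 citations:
print(citation_share(50, 5, 25.0))  # 0.4 normalized citation shares
print(paper_share(5))               # 0.2 paper shares
```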

Quality versus quantity in scientific impact

Journal of Informetrics, 2015

Citation metrics are becoming pervasive in the quantitative evaluation of scholars, journals, and institutions. More than ever before, hiring, promotion, and funding decisions rely on a variety of impact metrics that cannot disentangle quality from quantity of scientific output and are biased by factors such as discipline and academic age. Biases affecting the evaluation of single papers are compounded when one aggregates citation-based metrics across an entire publication record. It is not trivial to compare the quality of two scholars who, during their careers, have published at different rates, in different disciplines, and in different periods of time. We propose a novel solution based on the generation of a statistical baseline specifically tailored to the academic profile of each researcher. Our method can decouple the roles of quantity and quality of publications to explain how a certain level of impact is achieved. The method is flexible enough to allow for the evaluation of, and fair comparison among, arbitrary collections of papers (scholar publication records, journals, and entire institutions) and can be extended to simultaneously suppress any source of bias. We show that our method can capture the quality of the work of Nobel laureates irrespective of number of publications, academic age, and discipline, even when traditional metrics indicate low impact in absolute terms. We further apply our methodology to almost a million scholars and over six thousand journals to measure the impact that cannot be explained by the volume of publications alone.
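
One way to read this baseline construction, sketched with invented pools (the authors' null model may differ in detail): for each of a scholar's papers, sample a random citation count from same-year papers, then rank the scholar's actual h-index against the resulting null distribution:

```python
import random

# Hedged sketch of a scholar-tailored statistical baseline: for each of a
# scholar's papers, draw a random citation count from the pool of papers
# published in the same year, and compute the h-index of the sampled
# record. Repeating this yields a null distribution against which the
# scholar's actual h-index can be ranked. Pools here are invented.

def h_index(citations):
    return sum(c >= rank for rank, c in
               enumerate(sorted(citations, reverse=True), start=1))

def baseline_percentile(record, pools, trials=10_000, rng=random.Random(0)):
    """record: list of (year, citations); pools: year -> citation counts."""
    actual = h_index([c for _, c in record])
    null = sum(h_index([rng.choice(pools[y]) for y, _ in record]) < actual
               for _ in range(trials))
    return null / trials

pools = {2010: [0, 1, 2, 5, 9, 20], 2012: [0, 0, 3, 6, 14, 40]}
record = [(2010, 9), (2010, 20), (2012, 14)]
print(baseline_percentile(record, pools))  # fraction of null records beaten
```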

Thoughts on uncitedness: Nobel laureates and Fields medalists as case studies

Journal of the American Society for Information Science and Technology, 2011

Nobel laureates and Fields medalists have a rather large fraction (10% or more) of uncited publications. This is the case for (in total) 75 examined researchers from the fields of mathematics (Fields medalists), physics, chemistry, and physiology or medicine (Nobel laureates). We study several indicators for these researchers, including the h-index, total number of publications, average number of citations per publication, the number (and fraction) of uncited publications, and their interrelations. The most remarkable result is a positive correlation between the h-index and the number of uncited articles. We also present a Lotkaian model, which partially explains the empirically found regularities.
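
A toy Lotkaian simulation (an illustration in the spirit of, not identical to, the authors' model): drawing citations from a heavy-tailed distribution, authors with more papers accumulate both a higher h-index and more uncited papers, which by itself produces the reported positive correlation:

```python
import random

# Toy Lotka-style simulation (an assumption, not the authors' model):
# citation counts follow a heavy-tailed distribution, so productive
# authors gain both a higher h-index and more uncited papers.

rng = random.Random(1)

def draw_citations():
    # Floor of a Pareto(alpha=1) draw minus 1: heavy tail, ~50% uncited.
    return int(1.0 / (1.0 - rng.random())) - 1

def h_index(cs):
    return sum(c >= r for r, c in enumerate(sorted(cs, reverse=True), 1))

for n_papers in (20, 100, 500):
    cs = [draw_citations() for _ in range(n_papers)]
    print(n_papers, "papers -> h =", h_index(cs),
          "uncited =", sum(c == 0 for c in cs))
```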

Statistical Regularities in the Rank-Citation Profile of Scientists

Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c_i(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c_i(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c_i(r) profiles, our results demonstrate the utility of the β_i scaling parameter in conjunction with h_i for quantifying individual publication impact. We show that the total number of citations C_i tallied from a scientist's N_i papers scales as C_i ∝ h_i^(1+β_i). Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
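
A minimal construction of the rank-citation profile and its h-index crossing (fitting the β_i tail exponent is omitted; the data are invented):

```python
# Illustrative construction of a rank-citation profile c_i(r): a
# scientist's papers sorted by citations in decreasing rank order. The
# h-index is the largest rank r with c_i(r) >= r, i.e. where the profile
# crosses the diagonal. Fitting the beta_i exponent in the scaling
# C_i ~ h_i^(1 + beta_i) is left out of this sketch.

def rank_citation_profile(citations):
    return sorted(citations, reverse=True)  # c_i(r) for r = 1, 2, ...

def h_from_profile(profile):
    return sum(c >= r for r, c in enumerate(profile, start=1))

papers = [3, 50, 12, 0, 7, 22, 1, 9]
profile = rank_citation_profile(papers)
print(profile)                  # [50, 22, 12, 9, 7, 3, 1, 0]
print(h_from_profile(profile))  # h = 5
print(sum(papers))              # total citations C_i
```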

Impact vitality: an indicator based on citing publications in search of excellent scientists

"Rons, N. and Amez, L. (2009). Impact vitality: an indicator based on citing publications in search of excellent scientists. Research Evaluation, 18(3), 233-241. PDF/DOI: http://arxiv.org/abs/1307.7035, or http://dx.doi.org/10.3152/095820209X470563 This paper contributes to the quest for an operational definition of 'research excellence' and proposes a translation of the excellence concept into a bibliometric indicator. Starting from a textual analysis of funding program calls aimed at individual researchers and from the challenges for an indicator at this level in particular, a new type of indicator is proposed. The Impact Vitality indicator [RONS & AMEZ, 2008] reflects the vitality of the impact of a researcher's publication output, based on the change in volume over time of the citing publications. The introduced metric is shown to posses attractive operational characteristics and meets a number of criteria which are desirable when comparing individual researchers. The validity of one of the possible indicator variants is tested using a small dataset of applicants for a senior full time Research Fellowship. Options for further research involve testing various indicator variants on larger samples linked to different kinds of evaluations."