Author sequence and credit for contributions in multiauthored publications
Related papers
Contemporary biomedical research is performed by increasingly large teams. Consequently, an increasingly large number of individuals are being listed as authors in bylines, which complicates the proper attribution of credit and responsibility to individual authors. Typically, more importance is given to the first and last authors, while it is assumed that the others (the middle authors) have made smaller contributions. However, this may not properly reflect the actual division of labor, because some authors other than the first and last may have made major contributions. In practice, research teams may differentiate the main contributors from the rest by using partial alphabetical authorship (i.e., by listing middle authors alphabetically, while maintaining a contribution-based order for more substantial contributions). In this paper, we use partial alphabetical authorship to divide the authors of all biomedical articles in the Web of Science published over the 1980–2015 period into three groups: primary authors, middle authors, and supervisory authors. We operationalize middle authors as those listed in alphabetical order in the middle of an author list; primary and supervisory authors are those listed before and after the alphabetical sequence, respectively. We show that alphabetical ordering of middle authors is frequent in biomedical research and that the prevalence of this practice is positively correlated with the number of authors in the byline. We also find that, for articles with 7 or more authors, the average proportions of primary, middle, and supervisory authors are independent of team size, with more than half of the authors being middle authors. This suggests that the growth of author lists is not due to an increase in secondary contributions (or middle authors) but, rather, to equivalent increases in all types of roles and contributions (including many primary authors and many supervisory authors). Nevertheless, we show that the relative contribution of alphabetically ordered middle authors to the overall production of knowledge in the biomedical field has greatly increased over the last 35 years.
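To illustrate how such an operationalization might look in practice, the sketch below splits a byline into primary, middle, and supervisory authors by locating the longest alphabetically ordered run of names. The function name, the tie-breaking rules, and the example byline are assumptions for illustration, not the authors' published procedure.

```python
def split_byline(authors):
    """Split an author byline into (primary, middle, supervisory) groups.

    Middle authors are taken to be the longest run of consecutive names
    in alphabetical order; authors before the run are primary, authors
    after it supervisory. This is one plausible reading of the paper's
    operationalization, not its exact algorithm.
    """
    n = len(authors)
    best_start, best_len = 0, 0
    i = 0
    while i < n:
        j = i
        # Extend the alphabetical run as far as it goes.
        while j + 1 < n and authors[j].lower() <= authors[j + 1].lower():
            j += 1
        if j - i + 1 > best_len:
            best_start, best_len = i, j - i + 1
        i = j + 1
    # Require at least two alphabetically ordered names to call it a run.
    if best_len < 2:
        return authors, [], []
    primary = authors[:best_start]
    middle = authors[best_start:best_start + best_len]
    supervisory = authors[best_start + best_len:]
    return primary, middle, supervisory


# Hypothetical byline: two primary authors, an alphabetical middle block,
# and one supervisory (last) author.
byline = ["Zhang", "Kim", "Alvarez", "Brown", "Chen", "Dubois", "Anderson"]
print(split_byline(byline))
# (['Zhang', 'Kim'], ['Alvarez', 'Brown', 'Chen', 'Dubois'], ['Anderson'])
```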
Citation Indexes Accounting for Authorship Order in Coauthored Research—Review and New Proposal
Science & Technology Libraries, 2016
Research article authorship confers credit and has important academic, social, and financial implications. After a paper is published, credit is harvested via citations. Currently, authorship order is not taken into account by the bibliometric citation indexes used in citation databases. This means that one gets the same measure of credit for a single- or first-authored article as for a multiauthored publication in which one is a middle author among other coauthors. This may not be appropriate, and it matters because (1) it influences scientific performance assessment, which is extremely important at the time of recruitment or allocation of grants; and (2) it may affect scientific legacy. This work aims to remind the scientific community of the need to change the current assessment paradigm by providing a review of the main bibliometric indexes and authorship credit models proposed in the literature, as well as an original scientific performance evaluation method suggested by the authors.
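One family of authorship credit models discussed in this literature assigns rank-dependent weights, for example harmonic counting, in which the i-th of N authors receives credit proportional to 1/i. The sketch below illustrates that general idea as a minimal example; it is not necessarily the specific evaluation method proposed in this paper.

```python
def harmonic_credit(n_authors):
    """Harmonic authorship credit: the i-th of n authors gets (1/i) / H_n,
    where H_n is the n-th harmonic number, so the shares sum to 1."""
    h_n = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / i) / h_n for i in range(1, n_authors + 1)]


# For a 4-author paper the first author gets the largest share and the
# shares decrease with byline position.
print([round(c, 2) for c in harmonic_credit(4)])  # [0.48, 0.24, 0.16, 0.12]
```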
Can authors’ position in the ascription be a measure of dominance?
Scientometrics, 2019
Authorship and authorship ordering in the byline have been gaining interest in the recent past. Unlike authorship of single-author papers, which is an open book, authorship of multi-author papers has been murky in certain aspects, especially in ordering. Authorship implies responsibility, so should the order in the byline not reflect each author's degree of responsibility on the basis of their contribution? It has not been that way: the middle and last positions, in particular, remain muddled, whereas the position of first author is undisputed because of that author's relative contribution or responsibility. On this note, I propose the dominance index (DI) and dominance co-efficiency (DC) of a scientist or an author. These measures are based on the number of times as first author, the total number of multi-authored papers, and the total number of co-authors. The average number of times as first author over the multi-authored papers is the dominance index, and the number of times as first author divided by the mean number of co-authors gives the dominance co-efficiency. These measures are put to the test on the faculty members of a university department to show how the indexes behave. The data sets are collected from their bio-data and supplemented from the SCOPUS database. The results obtained seem quite tenable, but the usage of these indexes is left to the discretion of evaluators. Some pertinent implications and questions are discussed, such as the significance of a paper, the omission of single-authored papers, the role of the corresponding author, the practice of noblesse oblige, the effect of team size, the regulation of authorship, and ordering in the byline. Further, an alternative measure of dominance is also given, on the condition (sine qua non) that the order of authors in the byline strictly reflects their relative contributions. In this case, the dominance index is a measure of an author's standing or prominence among his or her co-authors, based on the author's rank or position in the ascription (byline) of all the co-authored papers of significance, and the dominance co-efficiency is the product of a paper of significance and the dominance index. A negative or low DI or DC score does not mean that a scientist's research contribution is low or insignificant; conversely, a high DI or DC score does not necessarily mean that his or her scientific contribution is excellent or important.
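Read literally, the abstract defines these measures from three counts: the number of papers on which a scientist appears as first author, the scientist's total number of multi-authored papers, and the number of co-authors on those papers. The sketch below is one possible translation of that verbal definition into code; the exact formulas used in the paper may differ.

```python
def dominance_measures(papers, author):
    """Compute a dominance index (DI) and dominance co-efficiency (DC)
    for `author` from a list of bylines (each a list of names).

    DI: share of the author's multi-authored papers on which they are first.
    DC: first-author count divided by the mean number of co-authors per paper.
    Both follow a literal reading of the abstract, not the paper's exact formulas.
    """
    multi = [p for p in papers if author in p and len(p) > 1]
    if not multi:
        return 0.0, 0.0
    first = sum(1 for p in multi if p[0] == author)
    mean_coauthors = sum(len(p) - 1 for p in multi) / len(multi)
    di = first / len(multi)
    dc = first / mean_coauthors if mean_coauthors else 0.0
    return di, dc


# Hypothetical record: first author on 2 of 3 multi-authored papers.
papers = [["Rao", "Lee", "Park"], ["Kim", "Rao"], ["Rao", "Kim", "Lee", "Park"]]
print(dominance_measures(papers, "Rao"))  # (0.666..., 1.0)
```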
A scientometrics law about co-authors and their ranking. The co-author core
2012
Rather than "measuring" a scientist's impact through the number of citations that his or her published work has generated, isn't it more appropriate to consider his or her value through the performance of his or her scientific network, as illustrated by his or her co-author role, thus focussing on joint publications and their impact through citations? Hence, on one hand, this paper very briefly examines bibliometric laws, like the h-index and the subsequent debate about co-authorship effects, and on the other hand, proposes a measure of collaborative work through a new index. Based on data about the publication output of a specific research group, a new bibliometric law is found. Let a co-author C have written J (joint) publications with one or several colleagues. Rank all the co-authors of that individual according to their number of joint publications, giving a rank r to each co-author, starting with r = 1 for the most prolific. It is empirically found that a very simple relationship holds between the number of joint publications J by co-authors and their rank of importance, i.e. J ∝ 1/r. Thereafter, in the same spirit as the Hirsch core, one can define a "co-author core" and introduce indices operating on an author. It is emphasized that the new index has quite a different (philosophical) perspective than the h-index: in the present case, one focusses on "relevant" persons rather than on "relevant" publications. Although the numerical discussion is based on one case, there is little doubt that the law can be verified in many other situations. Therefore, variants and generalizations could later be produced in order to quantify co-author roles in temporary or long-lasting stable teams, and lead to criteria for funding, career measurement, or even career strategies.
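By analogy with the Hirsch core, one natural way to read the proposed "co-author core" is as the largest rank r such that the r-th most frequent co-author shares at least r joint publications with the author. The sketch below follows that analogy; it is an interpretation of the abstract rather than the paper's exact definition, and the example data are hypothetical.

```python
from collections import Counter

def coauthor_core(bylines, author):
    """Rank an author's co-authors by number of joint publications and
    return (ranked list, core size). The core size is the largest rank r
    whose co-author shares at least r joint papers (a Hirsch-style analogy;
    one possible reading of the abstract's "co-author core")."""
    joint = Counter()
    for byline in bylines:
        if author in byline:
            joint.update(name for name in byline if name != author)
    ranked = joint.most_common()          # [(co-author, J), ...] sorted by J
    core = 0
    for rank, (_, j) in enumerate(ranked, start=1):
        if j >= rank:
            core = rank
    return ranked, core


# Hypothetical output with joint-publication counts roughly following J ∝ 1/r.
bylines = ([["Doe", "Smith"]] * 6 + [["Doe", "Jones"]] * 3
           + [["Doe", "Patel"]] * 2 + [["Doe", "Wu"]])
ranked, core = coauthor_core(bylines, "Doe")
print(ranked)  # [('Smith', 6), ('Jones', 3), ('Patel', 2), ('Wu', 1)]
print(core)    # 2
```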
Scientific authorship. Part 2. History, recurring issues, practices, and guidelines
Mutation Research/Reviews in Mutation Research, 2005
One challenge for most scientists is avoiding and resolving issues that center on authorship and the publishing of scientific manuscripts. While trying to place the research in proper context, impart new knowledge, follow proper guidelines, and publish in the most appropriate journal, the scientist often must deal with multi-collaborator issues such as authorship allocation, trust and dependence, and the resolution of publication conflicts. Most guidelines regarding publications, commentaries, and editorials have evolved from the ranks of editors in an effort to diminish the issues that faced them as editors. For example, the Ingelfinger rule attempts to prevent duplicate publication of the same study. This paper provides a historical overview of commonly encountered scientific authorship issues, a comparison of opinions on these issues, and the influence of various organizations and guidelines with regard to these issues. For example, a number of organizations provide guidelines for author allocation; however, a comparison shows that these guidelines differ on who should be an author, rules for ordering authors, and the level of responsibility for coauthors. Needs that emerge from this review are (a) more controlled studies on authorship issues, (b) increased awareness of, and buy-in to, consensus views by non-editor groups, e.g., managers, authors, reviewers, and scientific societies, and (c) editors who express a greater understanding of authors' dilemmas and exhibit greater flexibility. Also needed are occasions (e.g., an international congress) when editors and others (managers, authors, etc.) can directly exchange views, develop consensus approaches and solutions, and seek agreement on how to resolve authorship issues. Open dialogue is healthy, and it is essential for scientific integrity to be protected so that younger scientists can confidently follow the lead of their predecessors.
The elephant in the room: multi-authorship and the assessment of individual researchers
2013
When a group of individuals creates something, credit is usually divided among them. Oddly, that does not apply to scientific papers. The most commonly used measure of the performance of individual researchers is the h-index, which does not correct for multi-authorship: each author claims full credit for each paper and each ensuing citation. This mismeasure of achievement is fuelling a flagrant increase in multi-authorship. Several alternatives to the h-index have been devised, and one of them, the individual h-index (hI), is logical, intuitive, and easily calculated. Correcting for multi-authorship would end gratuitous authorship and allow proper attribution and unbiased comparisons.
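The abstract does not spell out the formula for hI; one widely cited variant (due to Batista et al.) divides h² by the total number of authors across the papers in the h-core. The sketch below implements the plain h-index and that variant; treating it as this paper's exact definition is an assumption.

```python
def h_index(citations):
    """Plain h-index: largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

def individual_h_index(papers):
    """One common variant of the individual h-index (hI): h**2 divided by the
    total number of authors over the h-core papers (assumption: the paper may
    use a different multi-authorship correction).

    `papers` is a list of (citations, n_authors) tuples."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h = sum(1 for i, (c, _) in enumerate(ranked, start=1) if c >= i)
    total_authors = sum(n for _, n in ranked[:h])
    return h * h / total_authors if total_authors else 0.0


# Hypothetical record: h = 3; the three h-core papers have 2, 4, and 3 authors.
papers = [(10, 2), (7, 4), (3, 3), (1, 5)]
print(h_index([c for c, _ in papers]))   # 3
print(individual_h_index(papers))        # 1.0
```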