peer review – Techdirt

Stories filed under: "peer review"

Is Your Google Scholar Profile Looking A Bit Empty? Need To Bulk Up Your Citations? Simple – Buy Some

from the that's-not-how-it-works dept

Techdirt has been reporting on the rotten state of academic publishing for more than ten years. Abuses include publishers willing to publish anything for a fee, and the sale of nonsense papers so that they can be used to bulk up an academic’s CV. But the world moves on, and Nature has a report about a new way to boost the number of citations claimed by a researcher: simply buy them. This latest academic publishing scam was discovered as a result of an investigation carried out by Yasir Zaki, a computer scientist at New York University Abu Dhabi, and his team. Nature explains:

In their sting operation, Zaki and his colleagues created a Google Scholar profile for a fictional scientist and uploaded 20 made-up studies that were created using artificial intelligence.

The team then approached a company, which they found while analysing suspicious citations linked to one of the authors in their data set, that seemed to be selling citations to Google Scholar profiles. The study authors contacted the firm by e-mail and later communicated through WhatsApp. The company offered 50 citations for $300 or 100 citations for $500. The authors opted for the first option, and 40 days later 50 citations from studies in 22 journals — 14 of which are indexed by the scholarly database Scopus — were added to the fictional researcher’s Google Scholar profile.

The rise of preprints as an alternative to traditional academic publishing has made this kind of fraud easier, Zaki’s research suggests. Preprints are simple to generate, and aren’t generally peer-reviewed, so it is easy to write them to order and slip in bogus citations.

But the larger problem is the way academics are evaluated when applying for jobs or being considered for promotion. An important metric is often an academic’s citation count. As part of their study, Zaki and his colleagues surveyed 574 researchers working at the ten highest-ranked universities in the world. The responses indicated that of those universities that look at citation counts when evaluating scientists, more than 60% obtain this data from Google Scholar.

There will always be unscrupulous researchers who try to game the system of academic evaluations, and others willing to help for a fee. As many scholars have been arguing for years, the real solution to all these abuses is not to tackle them piecemeal, but to change the entire system of academic appraisals. Another benefit of doing so would be to break the stranglehold that journals with high “impact factors” have on the scholarly world. That would allow an open access title to compete for papers based on its merits, not on its perceived importance for career progression.

Follow me @glynmoody on Mastodon and on Bluesky.

Filed Under: academic publishing, citations, google scholar, impact factors, nature, open access, peer review, preprints, whatsapp
Companies: google

PLOS ONE Topic Pages: Peer-Reviewed Articles That Are Also Wikipedia Entries: What's Not To Like?

from the good-for-the-academic-career,-too dept

It is hard to imagine life without Wikipedia. That’s especially the case if you have school-age children, who now turn to it by default for information when working on homework. Less well-known is the importance of Wikipedia for scientists, who often use its pages for reliable explanations of basic concepts:

Physicists — researchers, professors, and students — use Wikipedia daily. When I need the transition temperature for a Bose-Einstein condensate (prefactor and all), or when I want to learn about the details of an unfamiliar quantum algorithm, Wikipedia is my first stop. When a graduate student sends me research notes that rely on unfamiliar algebraic structures, they reference Wikipedia.

That’s from a blog post on the open access publisher Public Library of Science (PLOS) Web site. It’s an announcement of an interesting new initiative to bolster the number of physicists contributing to Wikipedia by writing not just new articles for the online encyclopedia, but peer-reviewed ones. The additional element aims to ensure that the information provided is of the highest quality — not always the case for Wikipedia articles, whatever their other merits. As the PLOS post explains, the new pages have two aspects:

A peer-reviewed ‘article’ in [the flagship online publication] PLOS ONE, which is fixed, peer-reviewed openly via the PLOS Wiki and citable, giving information about that particular topic.

That finalized article is then submitted to Wikipedia, which becomes a living version of the document that the community can refine, build on, and keep up to date.

The two-pronged approach of these “Topic Pages” has a number of benefits. It means that Wikipedia gains high-quality, peer-reviewed articles, written by experts; scientists just starting out gain an important new resource with accessible explanations of often highly-technical topics; and the scientists writing Topic Pages can add them to their list of citable publications — an important consideration for their careers, and an added incentive to produce them.

Other PLOS titles such as PLOS Computational Biology and PLOS Genetics have produced a few Topic Pages previously, but the latest move represents a major extension of the idea. As the blog post notes, PLOS ONE is initially welcoming articles on topics in quantum physics, but over time it plans to expand to all of physics. Let’s hope it’s an idea that catches on and spreads across all academic disciplines, since everyone gains from the approach — not least students researching their homework.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: peer review, science, wikipedia
Companies: plos

Move By Top Chinese University Could Mean Journal Impact Factors Begin To Lose Their Influence

from the and-no-bad-thing,-either dept

The so-called “impact factors” of journals play a major role in the academic world. And yet people have been warning about their deep flaws for many years. Here, for example, is Professor Stephen Curry, a leading advocate of open access, writing on the topic back in 2012:

I am sick of impact factors and so is science.

The impact factor might have started out as a good idea, but its time has come and gone. Conceived by Eugene Garfield in the 1970s as a useful tool for research libraries to judge the relative merits of journals when allocating their subscription budgets, the impact factor is calculated annually as the mean number of citations to articles published in any given journal in the two preceding years.
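The calculation Curry describes boils down to simple arithmetic. As a minimal sketch (the function name and the citation counts below are made up for illustration):

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Mean number of citations received this year by articles a journal
    published in the two preceding years -- Garfield's classic definition."""
    if articles_prev_two_years == 0:
        raise ValueError("journal published nothing in the two-year window")
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 2,450 citations in the current year to the
# 980 articles it published over the previous two years.
print(impact_factor(2450, 980))  # → 2.5
```

Note that this is a property of the *journal*, not of any individual paper in it, which is one root of the objections that follow.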

The rest of that article and the 233 comments that follow it explain in detail why impact factors are a problem, and why they need to be discarded. The hard part is coming up with other ways of gauging the influence of people who write in high-profile publications — one of the main reasons why many academics cling to the impact factor system. A story in Nature reports on a bold idea from a top Chinese university in this area:

One of China’s most prestigious universities plans to give some articles in newspapers and posts on major social-media outlets the same weight as peer-reviewed publications when it evaluates researchers.

It will work like this:

articles have to be original, written by the researcher and at least 1,000 words long; they need to be picked up by major news outlets and widely disseminated through social media; and they need to have been seen by a large number of people. The policy requires an article to be viewed more than 100,000 times on WeChat, China’s most popular instant-messaging service, or 400,000 times on news aggregators such as Toutiao. Articles that meet the criteria will be considered publications, alongside papers in peer-reviewed journals.

The university has also established a publication hierarchy, with official media outlets such as the People’s Daily considered most important, regional newspapers and magazines occupying a second tier, and online news sites such as Sina, NetEase or Sohu ranking third.
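As a rough sketch, the reported criteria amount to a single predicate. The function name and parameters below are my own; only the thresholds come from the article:

```python
def counts_as_publication(words, original, by_author,
                          views_wechat=0, views_news_aggregator=0):
    """Sketch of the reported policy: the article must be original,
    written by the researcher, and at least 1,000 words long, and must
    have been viewed more than 100,000 times on WeChat or more than
    400,000 times on a news aggregator such as Toutiao."""
    return (
        original
        and by_author
        and words >= 1000
        and (views_wechat > 100_000 or views_news_aggregator > 400_000)
    )

# A 1,200-word original piece with 150,000 WeChat views qualifies.
print(counts_as_publication(1200, True, True, views_wechat=150_000))
```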

One of the advantages of this idea is that it recognizes that publishing in non-academic titles can be just as valid as appearing in conventional peer-reviewed journals. It also has the big benefit of encouraging academics to communicate with the public — something that happens too rarely at the moment. That, in its turn, might help experts learn how to explain their often complex work in simple terms. At the same time, it would allow non-experts to hear about exciting new ideas straight from the top people in the field, rather than mediated through journalists, who may misunderstand or distort various aspects.

However, there are clear risks, too. For example, there is a danger that newspapers and magazines will be unwilling to accept articles about difficult work, or from controversial academics. Equally, mediocre researchers who hew to the government line may benefit from the increased exposure, perhaps even being promoted ahead of other, more independent-minded academics. Those are certainly issues. But what’s interesting here is not just the details of the policy itself, but the fact that it was devised and is being tried in China. That’s another sign that the country is increasingly a leader in many areas, and no longer a follower.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: academic research, china, impact factors, journals, open access, peer review

Stupid Patent Of The Month: Elsevier Patents Online Peer Review

from the my-peers-think-this-is-stupid dept

On August 30, 2016, the Patent Office issued U.S. Patent No. 9,430,468, titled “Online peer review and method.” The owner of this patent is none other than Elsevier, the giant academic publisher. When it first applied for the patent, Elsevier sought very broad claims that could have covered a wide range of online peer review. Fortunately, by the time the patent actually issued, its claims had been narrowed significantly. So, as a practical matter, the patent will be difficult to enforce. But we still think the patent is stupid, invalid, and an indictment of the system.

Before discussing the patent, it is worth considering why Elsevier might want a government-granted monopoly on methods of peer review. Elsevier owns more than 2,000 academic journals. It charges huge fees and sometimes imposes bundling requirements, whereby universities that want certain high-profile journals must buy a package including other publications. Universities, libraries, and researchers are increasingly questioning whether this model makes sense. After all, universities usually pay the salaries of both the researchers who write the papers and the referees who conduct peer review. Elsevier’s business model has been compared to a restaurant where the customers bring the ingredients, do all the cooking, and then get hit with a $10,000 bill.

The rise in wariness of Elsevier’s business model correlates with the rise in popularity and acceptance of open access publishing. Dozens of universities have adopted open access policies mandating or recommending that researchers make their papers available to the public, either by publishing them in open access journals or by archiving them after publication in institutional repositories. In 2013, President Obama mandated that federally funded research be made available to the public no later than a year after publication, and it’s likely that Congress will lock that policy into law.

Facing an evolving landscape, Elsevier has sought other ways to reinforce its control of publishing. The company has tried to stop researchers from sharing their own papers in institutional repositories, and has entered an endless legal battle with the rogue repositories Sci-Hub and LibGen. Again and again, when confronted with the changing face of academic publishing, Elsevier resorts to takedowns and litigation rather than reevaluating or modernizing its business model.

Elsevier recently acquired SSRN, the beloved preprints repository for the social sciences and humanities. There are early signs that it will be a poor steward of SSRN. Together, the SSRN acquisition and this month’s stupid patent present a troubling vision of Elsevier’s new strategy: if you can’t control the content anymore, then assert control over the infrastructure of scholarly publishing itself.

Elsevier filed its patent application on June 28, 2012. The description of the invention is lengthy, but is essentially a description of the process of peer review, but on a computer. For example, it includes a detailed discussion of setting up user accounts, requiring new users to pass a CAPTCHA test, checking to see if the new user’s email address is already associated with an account, receiving submissions, reviewing submissions, sending submissions back for corrections, etc, etc, etc.

The patent departs slightly from typical peer review in its discussion of what it calls a “waterfall process.” This is “the transfer of submitted articles from one journal to another journal.” In other words, authors who are rejected by one journal are given an opportunity to immediately submit somewhere else. The text of the patent suggests that Elsevier believed that this waterfall process was its novel contribution. But the waterfall idea was not new in 2012. The process had been written about since at least 2009 and is often referred to as “cascading review.”

The patent examiner rejected Elsevier’s application three times. But, taking advantage of the patent system’s unlimited do-overs, Elsevier amended its claims by adding new limitations and narrowing the scope of its patent. Eventually, the examiner granted the application. The issued claims include many steps. Some of these steps, like “receive an author-submitted article,” would be quite hard to avoid. Others are less essential. For example, the claims require automatically comparing a submission to previously published articles and using that data to recommend a particular journal as the best place to send the submission. So it would be an exaggeration to suggest the patent locks up all online peer review.
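To make the claimed step concrete, here is a minimal sketch of recommending a journal by comparing a submission’s text with previously published articles, using simple bag-of-words cosine similarity. This is a generic illustration of the idea, not Elsevier’s actual method, and every name in it is hypothetical:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend_journal(submission, published_by_journal):
    """Score each journal by the mean text similarity between the
    submission and that journal's previously published articles,
    and return the best match."""
    def bag(text):
        return Counter(text.lower().split())
    sub = bag(submission)
    scores = {
        journal: sum(cosine(sub, bag(article)) for article in articles) / len(articles)
        for journal, articles in published_by_journal.items()
    }
    return max(scores, key=scores.get)

# A quantum-physics submission matches the physics journal, not the biology one.
print(recommend_journal(
    "quantum entanglement experiment",
    {"Physics Letters": ["quantum entanglement measured in experiment"],
     "Cell Biology": ["protein folding in the cell membrane"]},
))  # → Physics Letters
```

The point of the sketch is how unremarkable the step is: routine text similarity plus an argmax, of the kind that long predates the 2012 filing.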

We hope that Elsevier will not be aggressive in its own interpretation of the patent’s scope. Unfortunately, its early statements suggest it does take an expansive view of the patent. For example, an Elsevier representative tweeted: “There is no need for concern regarding the patent. It’s simply meant to protect our own proprietary waterfall system from being copied.” But the waterfall system, aka cascading peer review, was known years before Elsevier filed its patent application. It cannot claim to own that process.

Ultimately, even though the patent was narrowed, it is still a very bad patent. It is similar to Amazon’s patent on white-background photography, where narrowed but still obvious claims were allowed. Further, Elsevier’s patent would face a significant challenge under Alice v. CLS Bank, where the Supreme Court ruled that abstract ideas do not become eligible for a patent simply because they are implemented on a generic computer. To our dismay, the Patent Office did not even raise Alice v. CLS Bank, even though that case was handed down more than two years before this patent issued. Elsevier’s patent is another illustration of why we still need fundamental patent reform.

Reposted from the EFF’s Stupid Patent of the Month series.

Filed Under: patents, peer review, stupid patents
Companies: elsevier

Large-Scale Peer-Review Fraud Leads To Retraction Of 64 Scientific Papers

from the time-to-fix-the-real-problem dept

Techdirt has written numerous articles about an important move in academic publishing towards open access. By shifting the funding of production costs from the readers to the researchers’ institutions it is possible to provide free online access to everyone while ensuring that high academic standards are maintained. An important aspect of that, both for open access and traditional publishing, is peer review, which is designed to ensure that the most important papers are brought forward, and that they are checked and improved as they pass through the publication process. Given that pivotal role, the following story in The Washington Post is both shocking and troubling:

> One of the world’s largest academic publishers, Springer, has retracted 64 articles from 10 of its journals after discovering that their reviews were linked to fake e-mail addresses. The announcement comes nine months after 43 studies were retracted by BioMed Central (one of Springer’s imprints) for the same reason.

To put those numbers in context, a specialized site that tracks this and similar malpractice in the academic world, Retraction Watch, reports that the total number of papers withdrawn because of fake reviews is 230 in the past three years.

It’s not known exactly how the reviews of the 64 articles involved were faked, or by whom. But there are plenty of other cases that indicate ways in which the peer review system is being subverted. These range from the obvious ones like researchers reviewing their own papers or suggesting people they know as suitable reviewers, to more devious approaches, including the use of companies providing “specialist” services. As the Committee on Publication Ethics (COPE) wrote in its statement on “inappropriate manipulation of peer review processes”:

> While there are a number of well-established reputable agencies offering manuscript-preparation services to authors, investigations at several journals suggests that some agencies are selling services, ranging from authorship of pre-written manuscripts to providing fabricated contact details for peer reviewers during the submission process and then supplying reviews from these fabricated addresses. Some of these peer reviewer accounts have the names of seemingly real researchers but with email addresses that differ from those from their institutions or associated with their previous publications, others appear to be completely fictitious.
>
> We are unclear how far authors of the submitted manuscripts are aware that the reviewer names and email addresses provided by these agencies are fraudulent. However, given the seriousness and potential scale of the investigation findings, we believe that the scientific integrity of manuscripts submitted via these agencies is significantly undermined.

The Washington Post article goes on to discuss various policies that publishers are beginning to put in place in an attempt to prevent fakes from undermining the peer review system. But the real problem lies not in the publishing process, but in the way that academic careers are judged and advanced. Currently, too great an emphasis is placed on how many papers a researcher has published, and whether they are in “prestigious” journals, where “prestigious” is generally defined using the highly-unsatisfactory “impact factor,” supposedly a measure of academic influence. This creates an enormous “pressure to publish,” which inevitably leads to some people cutting corners.

The best way to address the growing problem of fake reviews is to adopt better, more inclusive ways of evaluating academics and their work, and thus move beyond today’s fixation on publishing papers in high impact-factor titles. While that thorny issue remains unaddressed, the great revolution in knowledge production and dissemination that open access potentially enables will remain incomplete and even compromised.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: fraud, open access, peer review, research, retractions, scientific research

DailyDirt: Problems With Peer Reviewed Publications

from the urls-we-dig-up dept

Peer review isn’t exactly a sexy topic, but it’s an essential part of academic publishing — and it may need to change a bit to keep up with the times. Peer review is typically a thankless chore that is distributed among academics working in a network of related fields, and sometimes personal politics can enter into the process if the subject matter is obscure enough. Misconduct in peer review doesn’t usually get the same kind of coverage as various journalistic scandals (e.g. Rolling Stone, Buzzfeed), but the damage done can be even more significant to society.

After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.

Filed Under: academia, journals, media, natural language processing, peer review, publications, retractions, rubriq, scandals

DailyDirt: Peer Reviewed Publications Are Everywhere

from the urls-we-dig-up dept

Peer reviewed publications have been under some additional scrutiny lately, as some of the practices of peer review aren’t quite as honest and reliable as they once might have been. Fortunately, there are some solutions that create alternatives to the peer review process that involve opening up the content for more reviewers to study, question and verify results. Having reliable information more widely available to the public sounds like it can’t go wrong, but it’s not easy to build a reputation on a small database of preprints. However, as more and more significant results come from unlikely people, the process of peer review will need to adapt and account for unexpected authors.

After you’ve finished checking out those links, if you have some spare change (or more) and would like to support Techdirt, take a look at our Daily Deals for cool gadgets and other awesome stuff.

Filed Under: academia.edu, academic journals, arxiv.org, frontiers for young minds, media, paywall, peer review, plos, preprints, publications, publishers, pubpeer, science

Is It Acceptable For Academics To Pay For Privatized, Expedited Peer Review?

from the bumps-along-the-way dept

Academic publishing is going through a turbulent time, not least because of the rise of open access, which disrupts the traditional model in key ways. But in one respect, open access is just like the old-style academic publishing it is replacing: it generally employs peer review to decide whether papers should be accepted, although there are some moves to open up peer review too. As this story from Science makes clear, commercial publishers are innovating here as well, although not always in ways that academics like:

> An editor of Scientific Reports, one of Nature Publishing Group’s (NPG’s) open-access journals, has resigned in a very public protest of NPG’s recent decision to allow authors to pay money to expedite peer review of their submitted papers.

According to the Science article, there are now several companies making millions of dollars from this kind of privatized, expedited peer review. Here’s more about Research Square, the one employed by NPG:

> “We have about 100 employees with Ph.D.s,” says Research Square’s CEO, Shashi Mudunuri. That small army of editors recruits scientists around the world as reviewers, guiding the papers through the review process. The reviewers get paid $100 for each completed review. The review process itself is also streamlined, using an online “scorecard” instead of the traditional approach of comments, questions, and suggestions.

Authors pay $750 to NPG, and are guaranteed a review within three weeks or they get their money back. Research Square seems to be flourishing:

> So far, Mudunuri says, the company has about 1400 active reviewers who have scored 920 papers. The company pulled in $20 million in revenue last year.

Still, the question has to be whether this leads to key benefits of the peer review process being lost. After all, the system is not just about accepting or rejecting papers. The NPG editor who resigned, Professor Mark Maslin, is quoted as saying:

> “Deep consideration and a well thought out review is much more important than its speed. I have had brilliant reviews which have considerably improved my papers and I really appreciated all the time taken.”

The other issue is that the expedited, paid-for route is discriminatory:

> “My objections are that it sets up a two-tiered system and instead of the best science being published in a timely fashion it will further shift the balance to well-funded labs and groups,” Mark Maslin, a biogeographer at University College London, tells ScienceInsider. “Academic Publishing is going through a revolution and we should expect some bumps along the way. This was just one that I felt I could not accept.”

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: class system, fast lane, journals, peer review, prioritized review, research, science, two tier system

DailyDirt: Science With (And Without) Verification

from the urls-we-dig-up dept

The scientific method has undoubtedly advanced the growth of knowledge, but with the enormous amount of data that can be collected now, it can be difficult to turn all that information into reliable and understandable facts. On the other hand, science is also pushing the boundaries of what can possibly be measured — but can we still call it science if we’re proposing unknowable multiverses and spatial dimensions that can never be explored? Almost anyone can publish their crazy ideas — and sometimes those sketchy papers submitted to arxiv.org lead to successful work proving an infinite number of twin primes. Do the crackpots outnumber the “real” scientists? Does it matter?

If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.

Filed Under: arxiv, cosmology, crackpots, experiments, journals, knowledge, multiverses, peer review, physics, prime numbers, replication, science, scientific method, verification

The Open Science Peer Review Oath

from the transparent,-reproducible-and-responsible dept

Open access is about making academic research more widely available, particularly when it is publicly funded. But there is a broader open science movement that seeks to make the entire scientific process — from initial experiments to the final dissemination of results — transparent, and thus reproducible. One crucial aspect of that complete process is peer review, whereby experts in a field provide advice about the quality of new research, either to editors prior to a paper being published in a journal, or more directly, by reviewing work publicly online. Recognizing the importance of this step for the integrity and validity of the scientific process, a group has drawn up what it calls the “Open Science Peer Review Oath“:

> We have formulated an oath that codifies the role of reviewers in helping to ensure that the science they review is sufficiently open and reproducible; it includes guidelines not just on how to review professionally, but also on how to support transparent, reproducible and responsible research, while optimising its societal impact and maximising its visibility.

The Oath’s 17 components include commitments to act fairly and ethically, for example, the following:

> While reviewing this manuscript:
>
> i) I will sign my review in order to be able to have an open dialogue with you
>
> ii) I will be honest at all times
>
> v) I will not unduly delay the review process
>
> vi) I will not scoop research that I had not planned to do before reading the manuscript
>
> vii) I will be constructive in my criticism
>
> x) I will try to assist in every way I ethically can to provide criticism and praise that is valid, relevant and cognisant of community norms

It also includes actions specifically designed to foster science that is truly open:

> xi) I will encourage the application of any other open science best practices relevant to my field that would support transparency, reproducibility, re-use and integrity of your research
>
> xiii) I will check that the data, software code and digital object identifiers are correct, and the models presented are archived, referenced, and accessible
>
> xiv) I will comment on how well you have achieved transparency, in terms of materials and methodology, data and code access, versioning, algorithms, software parameters and standards, such that your experiments can be repeated independently
>
> xv) I will encourage deposition with long-term unrestricted access to the data that underpin the published concept, towards transparency and re-use
>
> xvi) I will encourage central long-term unrestricted access to any software code and support documentation that underpin the published concept, both for reproducibility of results and software availability

Although the framing of an “Oath” for open science peer review may sound rather over the top — slightly pompous, even — it rightly underlines the seriousness with which peer review ought to be conducted. It remains to be seen what kind of response it receives from the wider scientific community, and whether it becomes a fixed element of the open science movement.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: knowledge, open access, open science, peer review, sharing