Stories filed under: "academia"
Plagiarism Is Fine
from the plagiarize-this dept
There’s plenty of hypocrisy and bad faith to go around in the ridiculous Claudine Gay plagiarism scandal. While Gay’s accusers are right that she technically violated Harvard’s plagiarism rules by copying phrases either without quotation marks or without required attribution, they don’t actually care about plagiarism, only about “scalping” Gay. What’s more, their own plagiarism accusations have already started biting them back. And while Gay’s defenders are right that her offenses were comically trivial, because she copied mere banalities, Harvard students are punished severely for doing exactly the same thing. In fact, some of Gay’s defenders probably did the punishing.
A pox on both their houses. Plagiarism is fine, plagiarism rules are stupid, and the plagiarism police should mind their own business.
Everyone “knows” plagiarism is bad, but no one can provide a coherent explanation why. Some people say plagiarism defrauds the reader. Give me a break. Readers don’t care, or if they do, it’s only because they’ve been browbeaten into believing plagiarism is wrong. Others say plagiarism is like stealing. But no one owns ideas, and no one should own the words we use to express them, either.
I’ll be blunt. The plagiarism police are just intellectual landlords, demanding rent in the form of attribution. And plagiarism rules are just a sneaky way for authors to claim de facto ownership of ideas, while cloaking themselves in false virtue. When the plagiarism police cry, “J’accuse!,” we should respond with a raspberry.
Don’t get me wrong, I’m not opposed to attribution. In fact, attribution is great, so long as it’s voluntary, rather than mandatory. Authors should absolutely attribute expressions and ideas, when they think it will help readers, or even just to honor an author they admire. But authors shouldn’t be required to attribute, unless they think it’s deserved. Let us cite out of love, rather than obligation.
Some people worry that eliminating plagiarism rules will harm disadvantaged authors, who often don’t get the credit they deserve. I doubt it. For one thing, plagiarism rules have existed for at least 2000 years. If they were going to protect disadvantaged authors, they would have done it by now. For another, plagiarism rules actually create a “Matthew Effect,” in which the most prominent authors get all the credit, and the disadvantaged authors get ignored. Why not adopt attribution norms that encourage citation of deserving disadvantaged authors instead of undeserving privileged ones?
You probably think I’m joking. I’m not. And I can prove it. I’ve published scholarly articles arguing that plagiarism rules are unjustified, authorizing plagiarism of myself, providing a “plagiarism license,” advocating a “right of reattribution,” offering to reattribute my own articles (please claim one!), using essay mills, plagiarizing every word (I stole the idea from Jonathan Lethem), proposing to teach law students how to plagiarize efficiently (in the practice of law, if you aren’t plagiarizing, you’re committing malpractice), and using AI to reflect on the legitimacy of plagiarism norms. I’m dead serious. Well, as serious as I get, anyway.
Think about it. We want to believe plagiarism rules protect original expressions and ideas. But AI shows us that most of what we produce is generic banalities. Why treat them like spun gold, rather than the chaff they really are?
We’ve now spent weeks debating how to interpret and apply plagiarism rules. If anything comes out of this idiotic “scandal,” I hope it’s the realization that, when it comes to plagiarism norms, the juice definitely isn’t worth the squeeze. We should just admit they’re a waste of time and abandon them. We should stop punishing authors for “stealing” clichés. And we should especially stop punishing students “for their own good.” Plagiarism is also a way of learning, so we should encourage it whenever it helps students learn more effectively and efficiently.
By the way, every word of this op-ed is plagiarized. Or maybe it isn’t. I’m not telling, because it doesn’t matter.
Filed Under: academia, academics, attribution, claudine gay, ideas, plagiarism
If A College Is Going To Make COVID-19 Contact Tracing Apps Mandatory, They Should At Least Be Secure
from the tracer-round dept
One of the more frustrating aspects of the ongoing COVID-19 pandemic has been the frankly haphazard manner in which too many folks are tossing around ideas for bringing it all under control without fully thinking things through. I’m as guilty of this as anyone, desperate as I am for life to return to normal. “Give me the option to get a vaccine candidate even though it’s in phase 3 trials,” I have found myself saying more than once, each time immediately realizing how stupid and selfish it would be to not let the scientific community do its work and do it right. Challenge trials, some people say, should be considered. There’s a reason we don’t do that, actually.
And contact tracing. While contact tracing can be a key part of slowing the spread of a virus as infectious as COVID-19, how we contact trace is immensely important. As with many problems we encounter these days, there is this sense that we should just throw technology at the problem. We can contact trace through our connected phones, after all. Except there are privacy concerns. We can use dedicated apps on our phones for this as well, except this is all happening so fast that it’s a damn-near certainty that there are going to be mistakes made in those apps.
This is what Albion College in Michigan found out recently. Albion told students two weeks prior to on-campus classes resuming that they would be required to use Aura, a contact tracing app. The app collects a ton of real-time and personal data on students in order to pull off the tracing.
Aura, however, goes all in on real-time location-tracking instead, as TechCrunch reports. The app collects students’ names, location, and COVID-19 status, then generates a QR code containing that information. The code either comes up “certified” if the data indicates a student has tested negative, or “denied” if the student has a positive test or no test data. In addition to tracking students’ COVID-19 status, the app will also lock a student’s ID card and revoke access to campus buildings if it detects that a student has left campus “without permission.”
TechCrunch used a network analysis tool to discover that the code was not generated on a device but rather on a hidden Aura website—and that TechCrunch could then easily change the account number in the URL to generate new QR codes for other accounts and receive access to other individuals’ personal data.
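What TechCrunch describes is a classic insecure direct object reference (IDOR): the only thing protecting a student’s record is a guessable account number in a URL. As a rough illustration (the routes, names, and data below are invented for this sketch, not Aura’s actual code), the vulnerable pattern and its fix look something like this:

```python
# Minimal Flask sketch of the IDOR class of bug TechCrunch describes.
# All routes, names, and data here are hypothetical, for illustration only.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only-not-for-production"

FAKE_DB = {
    "1001": {"name": "Student A", "covid_status": "negative"},
    "1002": {"name": "Student B", "covid_status": "positive"},
}

# Vulnerable pattern: anyone who can increment a number in a URL
# can read anyone else's record.
@app.route("/qr/<account_id>")
def qr_vulnerable(account_id):
    record = FAKE_DB.get(account_id)
    if record is None:
        abort(404)
    return jsonify(record)  # never checks who is asking

# Safer pattern: derive the account from the authenticated session,
# never from a client-supplied identifier.
@app.route("/qr/me")
def qr_safe():
    account_id = session.get("account_id")  # set server-side at login
    if account_id is None or account_id not in FAKE_DB:
        abort(403)
    return jsonify(FAKE_DB[account_id])

if __name__ == "__main__":
    app.run()
```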
It gets worse. One Albion student was able to discover that the app’s source code also included security keys for Albion’s servers. Using those, other researchers into the app found that they could gain access to all kinds of data from the app’s users, including test results and personal identifying information.
Now, Aura’s developers fixed these security flaws…after the researchers brought them to light and after the school had made the use of the app mandatory. If anyone would like to place a bet that these are the only two privacy and security flaws in this app, then they must certainly not like having money very much.
To be clear, plenty of other schools are trying to figure out how to use technology to contact trace as well. And there’s probably a use for technology in all of this, with an acceptable level of risk versus the benefit of bringing this awful pandemic under control.
But going off half-cocked isn’t going to help. In fact, it’s only going to make the public less trustful of contact tracing attempts in the future, which is the last thing we need.
Filed Under: academia, contact tracing, covid-19, mandatory, security, students
Companies: albion college, aura
Mass Biometric Scanning Of Students Is COVID-19's Latest Dystopian Twist
from the a-whole-new-Big-Brother-program-for-troubled-teens dept
COVID-19 has disrupted almost everything. Most schools in the United States wrapped up the 2019-2020 school year with zero students in their buildings, hoping to slow the spread of the virus. Distance learning is the new normal — something deployed quickly with little testing, opening up students to a host of new problems, technical glitches, and in-home surveillance.
Zoom replaced classrooms, online products replaced teachers, and everything became a bit more dystopian, adding to the cloud of uncertainty ushered in by the worldwide spread of a novel virus with no proven cure, treatment, or vaccine.
Schools soon discovered Zoom and other attendance-ensuring options might be a mistake as miscreants invaded virtual classrooms, distributing sexual and racist content to unsuspecting students and teachers. These issues have yet to be solved as schools ease back into Distance Learning 2.0.
Then there’s the problem with tests. Teachers and administrators have battled cheating students as long as testing has existed. Now that tests are being taken outside of heavily controlled classrooms, software is stepping in to do the monitoring. That’s a problem. It’s pretty difficult to invade someone’s privacy in a public school, where students give up a certain amount of their rights to engage in group learning.
Now that learning is taking place in students’ homes, schools and their software providers seem to feel this same relinquishment of privacy should still be expected, even though the areas they’re now encroaching on have historically been considered private places. As the EFF reports, testing is now being overseen by Professor Big Brother and his many, many eyes. All of this is in place just to keep students from cheating on tests:
Recorded patterns of keystrokes and facial recognition supposedly confirm whether the student signing up for a test is the one taking it; gaze-monitoring or eye-tracking is meant to ensure that students don’t look off-screen too long, where they might have answers written down; microphones and cameras record students’ surroundings, broadcasting them to a proctor, who must ensure that no one else is in the room.
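To give a concrete sense of what “recorded patterns of keystrokes” means in practice, here is a toy sketch of keystroke dynamics, the technique of comparing the timing between a user’s keypresses against an enrolled profile. Real proctoring products are far more elaborate and proprietary; this sketch, with invented numbers and a made-up threshold, just shows the underlying principle:

```python
# Toy illustration of keystroke dynamics: compare inter-key timing
# intervals against an enrolled profile. The feature choice, threshold,
# and numbers are invented for illustration.
from statistics import mean

def timing_profile(key_timestamps):
    """Inter-key intervals (seconds), the basic keystroke-dynamics feature."""
    return [b - a for a, b in zip(key_timestamps, key_timestamps[1:])]

def matches_profile(enrolled, observed, tolerance=0.05):
    """Crude check: mean absolute difference of intervals under a threshold."""
    n = min(len(enrolled), len(observed))
    if n == 0:
        return False
    diff = mean(abs(e - o) for e, o in zip(enrolled[:n], observed[:n]))
    return diff <= tolerance

# Example: the same typist, slightly slower on the second attempt.
enrolled = timing_profile([0.00, 0.18, 0.35, 0.55])
observed = timing_profile([0.00, 0.20, 0.39, 0.60])
print(matches_profile(enrolled, observed))  # True
```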
Mass biometric surveillance has finally come home. Like the equally intrusive productivity software employers inflict on work-at-home employees, this proctoring software treats people’s private spaces like classrooms. And what would be upsetting enough if aimed at adults is here being aimed at minors. Unlike company employees, who may at least have the option to pull the plug on their employment rather than be subjected to this, students generally don’t have that kind of flexibility. For many, it’s either this school (and its spyware) or nothing.
All this data and content being gathered on minors is subject to almost no public oversight. The software deployed by schools demands tons of personal info from students before it can even be used. And that’s just the beginning of the invasive data collection. As the EFF points out, some school software collects additional info, such as computer and software details. Others log URLs visited and how long students stay on certain sites or web pages.
It’s unclear what happens to all this information proctoring companies are gathering on minors. But there doesn’t appear to be anything preventing them from using this information however they please.
Some companies, like ProctorU, have no time limits on retention. Some of this information they share with third parties. And when student data is provided to the proctoring company by an educational institution, students are often left without a clear way to request that their data be deleted because they aren’t considered the data’s “owner.”
Even if you can ignore the dystopian mass biometric collection targeting minors, you’re left with the logistical issues. Some schools/software may think they’ve caught a cheater when all they’ve really “caught” is someone struggling with a less-than-ideal internet connection or dealing with compatibility issues. Or maybe the perceived “cheating” is nothing more than uncooperative siblings wandering into rooms where testing is taking place.
Granted, controlling off-campus testing to limit cheating is a noble goal. It’s also an impossible one. Schools haven’t eliminated cheating in environments they completely control. The tradeoff here doesn’t appear to be acceptable. Students are being asked to give up a whole lot of privacy in exchange for minimal gains in remote testing integrity.
Filed Under: academia, biometrics, covid-19, dystopia, scanning, students
Companies: proctoru
It's Time For The Academic World To See The Positive Side Of Negative Results
from the go-on,-take-a-chance dept
Techdirt has written many times about the need to move from traditional academic publishing to open access. There are many benefits, including increasing the reach and impact of research, and allowing members of the public to read work that they have often funded, without needing to pay again. But open access is not a panacea; it does not solve all the problems of today’s approach to spreading knowledge. In particular, it suffers from the same serious flaw that afflicts traditional titles: a tendency to focus on success, and to draw a veil of silence over failure. As a new column in Nature puts it:
Scientists have become so accustomed to celebrating only success that we’ve forgotten that most technological advances stem from failure. We all want to see our work saving lives or solving world hunger, and I think the collective bias towards finding positive results in the face of failure is a dangerous motivation.
That’s true, though hardly a new insight. People have been pointing it out for years. But the fact that it still needs to be said shows how little progress has been made in this regard. For example, back in 2015, Stephen Curry, a professor of structural biology at London’s Imperial College, wrote a column in the Guardian entitled “On the importance of being negative”, which explains why negative results matter:
Their value lies in mapping out blind alleys, warning other investigators not to waste their time or at least to tread carefully. The only trouble is, it can be hard to get them published.
Curry noted that Elsevier was aiming to address that problem with the launch of the catchily-named journal “New Negatives in Plant Science”, which was “a platform for negative, unexpected or controversial results”. Unfortunately, looking at the journal’s Web page today, we read: “The Publisher has decided to discontinue the journal New Negatives in Plant Science.” Maybe papers about negative results were simply a bit, well, negative for many people. Undaunted, Cambridge University Press (CUP) is launching its own title in this space:
Experimental Results will offer a place where researchers can publish standalone experimental results “regardless of whether those results are novel, inconclusive, negative or supplementary to other published work.” The journal will also publish the outcome of attempts to reproduce previously published experiments, including those that dispute past findings.
Some journals publish full-paper negative or inconclusive results, but published stand-alone results are a rarity, said CUP.
That’s a welcome move, because the academic world effectively discards huge quantities of knowledge, often hard-won, about things that don’t work, don’t reproduce the results of others, or are simply unclear. Those may be messy and less glamorous than the big successes that hit the headlines and win prizes, but they are valuable nonetheless.
It’s instructive to compare the world of academic publishing with what happens in Silicon Valley. There, failure is celebrated as proof that entrepreneurs have been willing to try new things, and acknowledged as a valuable learning experience. It’s added to CVs with pride, not glossed over like some shameful secret. It’s time to bring some of that enthusiastic willingness to take risks to the rigorous but rather timid world of academia — and to reward it accordingly.
Follow me @glynmoody on Twitter, Diaspora, or Mastodon.
Filed Under: academia, failure, journals, open access, research
Big Boost For Open Access As Wellcome And Bill & Melinda Gates Foundation Back EU's 'Plan S'
from the no-embargoes,-and-cc-by dept
Back in September, Techdirt wrote about the oddly-named ‘Plan S’, which was nonetheless an important step forward for open access in Europe. As we remarked then, the hope was that others would support the initiative, and that has now happened, with two of the biggest names in the science funding world signing up to the approach:
To ensure that research findings are shared widely and are made freely available at the time of publication, Wellcome and the Bill & Melinda Gates Foundation have today (Monday) joined cOAlition S and endorsed the principles of Plan S.
An article in Nature on the move notes that Wellcome gave out $1.4 billion in grants in 2016-17, while the Gates Foundation spent $4.7 billion in 2017, although not all of that was on science. So the backing of these two organizations is a massive vote of confidence in Plan S and its requirements. Wellcome has also unveiled its new, more stringent open access policy, which makes a number of important changes, including the following:
All Wellcome-funded research articles must be made freely available through PubMed Central (PMC) and Europe PMC at the time of publication. We previously allowed a six-month embargo period. This change will make sure that the peer-reviewed version is freely available to everyone at the time of publication.
This move finally rectifies one of the biggest blunders by academic funding organizations: allowing publishers to impose an embargo — typically six or even 12 months — before publicly-funded research work was freely available as open access. There was absolutely no reason to allow this. After all, the funding organizations could simply have said to publishers: “if you want to publish work we paid for, you must follow our rules”. But in a moment of weakness, they allowed themselves to be bamboozled by publishers, granting an unnecessary monopoly on published papers, and slowing down the dissemination of research.
All articles must be published under a Creative Commons attribution licence (CC-BY). We previously only required this licence when an article processing charge (APC) was paid. This change will make sure that others — including commercial entities and AI/text-data mining services — can reuse our funded research to discover new knowledge.
Although a more subtle change, it’s an important one. It establishes unequivocally that anyone, including companies, may build on research financed by Wellcome. In particular, it explicitly allows anyone to carry out text and data mining (TDM), and to use papers and their data for training machine-learning systems. That’s particularly important in the light of the EU’s stupid decision to prevent companies in Europe from carrying out either TDM or training of machine-learning systems on material to which they do not have legal access, unless they pay an additional licensing fee to publishers. This pretty much guarantees that the EU will become a backwater for AI compared to the US and China, where no such obstacles are placed in the way of companies.
Like Plan S, Wellcome’s open access policy no longer supports double-dipping “hybrid journals”, which charge researchers who want to release their work as open access, but also require libraries to take out full-price subscriptions for journals that include these freely-available articles. An innovative aspect of the new policy is that it will require some research to be published as preprints in advance of formal publication in journals:
Where there is a significant public health benefit to preprints being shared widely and rapidly, such as a disease outbreak, these preprints must be published:
before peer review
on an approved platform that supports immediate publication of the complete manuscript under a CC-BY licence.
That’s eminently sensible — in the event of public health emergencies, you want the latest research to be out there in the hands of health workers as soon as possible. It’s also a nice boost for preprints, which are rapidly emerging as an important way of sharing knowledge.
The Gates Foundation has said that it will update its open access policy, which in any case is already broadly in line with the principles of Plan S, over the next 12 months. Even without that revision, the latest announcement by these two funding heavyweights is highly significant, and is likely to make the argument for similar organizations around the world to align their open access policies with Plan S hard to resist. We can therefore probably expect more to join cOAlition S and help bring the world closer to the long-cherished dream of full open access to the world’s research, with no embargoes, and under a permissive CC-BY license.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: academia, academic research, creative commons, eu, open access, philanthropy, plan s
Companies: bill and melinda gates foundation, wellcome trust
Top Academic Publisher Kowtows To China: Censors Thousands Of Papers, Denies It Is Censorship
from the comments-that-insult-our-intelligence dept
It’s no secret that the Chinese authorities wish to have control over every aspect of life in China, including what people say and do online. Here they are laying down what academic papers people can read, as reported by a new story in the New York Times:
One of the world’s largest academic publishers was criticized on Wednesday for bowing to pressure from the Chinese government to block access to hundreds of articles on its Chinese website.
Springer Nature, whose publications include Nature and Scientific American, acknowledged that at the government’s request, it had removed articles from its mainland site that touch on topics the ruling Communist Party considers sensitive, including Taiwan, Tibet, human rights and elite politics.
The publisher defended its decision, saying that only 1 percent of its content was inaccessible in mainland China.
And if you think that its comment is ridiculous — “only” one percent is over 7000 articles — wait till you read what Springer said in its official statement on the move, reported by the Fresno Bee:
“This action is deeply regrettable but has been taken to prevent a much greater impact on our customers and authors and is in compliance with our published policy,” the statement said. “This is not editorial censorship and does not affect the content we publish or make accessible elsewhere in the world.”
According to Springer, it is not really censoring articles in China, because people outside can still read them. That insults both Chinese researchers, whom Springer clearly thinks don’t count, and our intelligence.
What makes Springer’s pusillanimity even more reprehensible is that another leading academic publisher was also told to censor articles in China, but took a different course of action. Back in August, Cambridge University Press (CUP) was ordered by the Chinese authorities to censor 300 articles from its journal China Quarterly. Initially, like Springer, it complied, but came to its senses a couple of days later:
It said the academic leadership of the university had reviewed the publisher’s decision and agreed to reinstate the blocked content with immediate effect to “uphold the principle of academic freedom on which the university’s work is founded”.
If Springer fails to do the same, researchers will be justified in concluding that, unlike CUP, it does not uphold that principle of academic freedom. In which case, they may decide to publish their future work elsewhere.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: academia, academic papers, censorship, china
Companies: cambridge university press, springer nature
Scientific Publishers Want Upload Filter To Stop Academics Sharing Their Own Papers Without Permission
from the where-there's-a-gate,-there's-got-to-be-a-gatekeeper dept
Back in March of this year, Techdirt wrote about ResearchGate, a site that allows its members to upload and share academic papers. Although the site says it is the responsibility of the uploaders to make sure that they have the necessary rights to post and share material, it’s clear that millions of articles on ResearchGate are unauthorized copies according to the restrictive agreements that publishers typically impose on their authors. As we wrote back then, it was interesting that academic publishers were fine with that, but not with Sci-Hub posting and sharing more or less the same number of unauthorized papers.
Somewhat belatedly, the International Association of Scientific Technical and Medical Publishers (STM) has now announced that it is not fine with authors sharing copies of their own papers on ResearchGate without asking permission. In a letter to the site from its lawyers (pdf), the STM is proposing what it calls “a sustainable way to grow and to continue the important role you play in the research ecosystem”. Here’s what it wants ResearchGate (“RG”) to do:
RG’s users could continue “claiming”, i.e. agreeing to make public or uploading documents in the way they may have become accustomed to with RG’s site. An automated system, utilizing existing technologies and ready to be implemented by STM members, would indicate if the version of the article could be shared publicly or privately. If publicly, then the content could be posted widely. If privately, then the article would remain available only to the co-authors or other private research groups consistent with the STM Voluntary Principles. In addition, a message could be sent to the author showing how to obtain rights to post the article more widely. This system could be implemented within 30-60 days and could then handle this “processing” well within 24 hours.
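For concreteness, the kind of “automated system” STM describes would amount to a policy lookup at upload time. Below is a minimal sketch of that logic; the policy table, version labels, and function names are invented for illustration and are not anything STM or ResearchGate has published:

```python
# Hypothetical sketch of STM's proposed upload check: on upload, look up
# the article version against publisher policy and decide whether it may
# be shared publicly, shared privately, or held pending permission.
from enum import Enum

class Decision(Enum):
    PUBLIC = "share publicly"
    PRIVATE = "co-authors and private research groups only"
    BLOCKED = "message author on how to obtain rights"

# Invented stand-in for publishers' sharing rules, keyed by article
# version (preprint, accepted manuscript, published version) and the
# journal's business model.
POLICY = {
    ("preprint", "any"): Decision.PUBLIC,
    ("accepted", "subscription"): Decision.PRIVATE,
    ("published", "subscription"): Decision.BLOCKED,
    ("published", "open_access"): Decision.PUBLIC,
}

def check_upload(version: str, journal_model: str) -> Decision:
    """Return what the filter would allow for this uploaded article."""
    return POLICY.get((version, journal_model),
                      POLICY.get((version, "any"), Decision.BLOCKED))

print(check_upload("published", "subscription"))  # Decision.BLOCKED
```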
In other words, an upload filter, of exactly the kind proposed by the European Commission in its new Copyright Directive. There appears to be a concerted push by the copyright industry to bring in upload filters where it can, either through legislation, as in the EU, or through “voluntary” agreements, as with ResearchGate. Although the lawyer’s letter is couched in the politest terms, it leaves no doubt that if ResearchGate refuses to implement STM’s helpful suggestion, things might become less pleasant. It concludes:
On behalf of STM, I urge you therefore to consider this proposal. If you fail to accede to this proposal by 22 September 2017, then STM will be leaving the path open for its individual members to follow up with you separately, whether individually or in groups sharing a similar interest and approach, as they may see fit.
What this latest move shows is that publishers aren’t prepared to allow academics to share even their own papers without permission. It underlines that, along with fat profits, what the industry is most concerned about in this struggle is control. Academic publishers will graciously allow ResearchGate to exist, but only if they are recognized unequivocally as the gatekeeper.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: academia, copyright, knowledge, papers, sharing
Companies: researchgate, sci-hub
As Predicted, Elsevier's Attempt To Silence Sci-Hub Has Increased Public Awareness Massively
from the now,-what-do-we-call-that-again? dept
Last month, Techdirt wrote about the growing interest in Sci-Hub, which provides free access to research papers — more than 47,000,000 of them at the time of writing. As Mike noted then, Elsevier’s attempt to make the site go away by suing it has inevitably produced a classic Streisand Effect, whereby many more people know about it as a direct result. That was first pointed out by Mike Taylor in a short post, where he listed a few publications that had written about Sci-Hub. This week, David Rosenthal has produced a kind of update, listing many more posts on the subject that have appeared in the last month alone.
Rosenthal’s list includes an article entitled “Should All Research Papers Be Free?” that was published in Sunday’s edition of The New York Times. It’s probably the most significant contribution to spreading the word about Sci-Hub more widely, but it doesn’t really add much to the debate. By contrast, another post mentioned by Rosenthal, found on Inside Higher Ed’s site and written by the college librarian Barbara Fister, may lack the impact of The New York Times news analysis, but it does make some genuinely novel observations about what is going on here.
Fister notes that alongside people who don’t have access to the articles they need because they are not affiliated with a well-funded Western library with the right subscriptions, many researchers turn to Sci-Hub because accessing articles has become a complicated and inconvenient process. As she says:
> For many folks, Sci-Hub is simply a more convenient library that doesn’t make you mess around with logins and interlibrary loans. Hey, we’re busy. Paywalls are a pain.
Techdirt has written about this aspect before, and about the growing evidence that piracy greatly diminishes once good, easy-to-use legal services become available. However, as Fister rightly says, the current situation is awkward for the people who are supposed to be overseeing those unsatisfactory legal services:
> Librarians are in a nasty spot. Sometimes I wonder if we can even call ourselves librarians anymore. We feel we are virtually required to provide access to whatever researchers in our local community ask for while restricting access from anyone outside that narrowly-defined community of users. Instead of curators, we’re personal shoppers who moonlight as border guards. This isn’t working out well for anyone. Unaffiliated researchers have to find illegal work-arounds, and faculty who actually have access through libraries are turning to the black market for articles because it seems more efficient than contacting their personal shopper, particularly when the library itself doesn’t figure in their work flow. In the meantime, all that money we spend on big bundles of articles (or on purchasing access to articles one at a time when we can’t afford the bundle anymore) is just a really high annual rent. We can’t preserve what we don’t own, and we don’t curate because our function is to get what is asked for.
Not only is lack of open access soul-destroying for librarians longing to take a more active and creative role in the provision of information to the academics they support, it comes with a hidden financial cost that has been overlooked:
> It is labor — lots of labor — to maintain link resolvers, keep [academic journal] license agreements in order, and deal with constant changes in subscription contents. We [librarians] have to work a lot harder to be publishers’ border guards than people realize.
That’s a hugely important point that I have not seen made elsewhere. Alongside all the other tasks that traditional publishers push onto the academic community — notably, writing articles and refereeing them for free — there is another major burden, imposed specifically on librarians, who are forced to spend much of their time acting as the publishers’ hated “border guards.” That’s a tragic waste of their highly-specialized skills, and not what they were employed to do.
This extra cost is yet another reason to move rapidly to open access publishing for all academic work, which would not only legalize Sci-Hub, but make it superfluous.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: academia, copyright, open access, research, sci-hub, streisand effect
Companies: elsevier, sci-hub
DailyDirt: Problems With Peer Reviewed Publications
from the urls-we-dig-up dept
Peer review isn’t exactly a sexy topic, but it’s an essential part of academic publishing — and it may need to change a bit to keep up with the times. Peer review is typically a thankless chore that is distributed among academics working in a network of related fields, and sometimes personal politics can enter into the process if the subject matter is obscure enough. Misconduct in peer review doesn’t usually get the same kind of coverage as various journalistic scandals (e.g., Rolling Stone, BuzzFeed, etc.), but the damage done can be even more significant to society.
- Peer review processes aren’t free of corruption — with some third party agencies offering services that fake reviews or try to improve a paper’s odds of being published in other unsavory ways. The publication system for scientific work doesn’t seem to have a great way to deal with this issue besides retracting (instead of correcting) articles published in error. Dozens of papers have been retracted by BioMed Central recently, but the problems with peer review appear to be much more widespread. [url]
- Some scientific papers are published for a fee — with absolutely no quality control whatsoever. The impressively-titled International Journal of Advanced Computer Technology accepted a paper that consisted of nothing but “Get me off your fucking mailing list” repeated over and over. [url]
- Paying for expedited peer review sounds sketchy, right? Rubriq’s peer-review service promised a review within 3 weeks or your money back — but perhaps these kinds of services should be subject to yet another round of reviews. [url]
- Can publishers try to automate the detection of fake papers and poorly-reviewed articles before they turn into embarrassing mistakes? Artificial intelligence just isn’t that good, but perhaps software will make it harder for people to detect shady predatory publications. [url]
After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.
Filed Under: academia, journals, media, natural language processing, peer review, publications, retractions, rubriq, scandals
Shameful: American Society Of Civil Engineers Issues DMCA Notices Against Academics For Posting Their Own Research
from the for-the-encouragement-of-learning dept
As we’ve pointed out many times in the past, the originally stated purpose of copyright law was to encourage the sharing of scientific knowledge for the purpose of learning. The first copyright act in the US was actually entitled “for the encouragement of learning.” Yet, as copyright law has evolved, it’s frequently been used to make learning much more difficult. Just a few months ago, we covered how publishing giant Elsevier had started to demand that academics who had published their own research on Academia.edu take down those works. As we noted then, while big journal publishers often demand that academics hand over their copyright in order to get published, they usually would either grant an exception for an academic to post their own work, or at least look the other way when the academics would do so. And many, many academics obviously decided to post their own papers to the web.
As TorrentFreak reports, the American Society of Civil Engineers has taken it up a level, hiring one of the more well-known copyright enforcement companies out there, Digimarc, to go around issuing DMCA takedown notices against academics uploading their own works:
The publisher has hired the piracy protection firm Digimarc to police the internet for articles that are posted in the wild. As a result, universities all across the globe were targeted with takedown notices, which were also sent to Google in some cases.
The list of rogue researchers is long, and includes professors from MIT, Stanford, Northwestern University, University of Washington, UC Berkeley, University of Michigan, University of Wisconsin–Madison and many international universities.
Yes, basically, ASCE has declared that its own academic authors are a bunch of pirates. If you’re a civil engineer, now is the time to start looking seriously at publishing alternatives beyond the ASCE. Declaring war on the academics who provide all of your content for free just seems like a bad idea.
TorrentFreak notes that it appears some universities have resisted these takedown demands. Stanford, MIT and UC Berkeley still have the works in question up. Other schools, however, have quickly caved in. University of Wisconsin-Madison and University of Texas-Austin appear to have pulled down the works. Because you can’t support the progress of science if your damn academics are giving away their works for free… instead of making everyone pay hundreds of thousands of dollars a year for access to basic knowledge and research.
Filed Under: academia, copyright, dmca, papers, research, takedowns
Companies: asce, digimarc