Wherein The Copia Institute Updates The Copyright Office On The First Amendment Problems With The DMCA

from the rights-of-the-roundtable dept

A few years ago the Copyright Office commenced several studies on the DMCA. One, on Section 1201, resulted in a report to Congress and some improvements to the triennial rulemaking process. But the other study, on Section 512, had been quiet for a while. That changed earlier this year, when the Copyright Office announced it was hosting another roundtable hearing to solicit further input. What the Copyright Office wanted to know in particular was how recent developments in US and international law should inform the recommendations it may issue as a result of this study.

The Copia Institute had already submitted two rounds of comments, and both Mike and I had separately given testimony at the hearing held in San Francisco. This new hearing was a good chance to remind the Copyright Office of the First Amendment concerns with the DMCA we had already warned them about, many of which are just as worrying today, if not more so.

One significant, overarching problem is the way the DMCA results in such severe consequences for speech, speakers, and platforms themselves based on the mere accusation of infringement. Such an effect is nearly unique in American law: in most instances, sanction cannot follow unless and until a court has found there to be actual liability. In fact, when it comes to affecting speech interests, the First Amendment expressly forbids punishing speakers or speech before a court has found specific instances of speech unlawful. To do otherwise (punishing speech, or, worse, punishing a speaker before they've even had a chance to make any wrongful speech) is prior restraint, and it is not constitutional. Yet in the DMCA context this sort of punishment happens all the time. And since the last roundtable hearing it has only gotten worse.

Several things are making it worse. One is that Section 512(f) remains toothless, thanks to the Supreme Court refusing to review the Ninth Circuit's decision in Lenz v. Universal. Section 512(f) is the provision in the DMCA that is supposed to deter, and punish, those who send invalid takedown notices. Invalid takedown notices force the removal of speech that may be perfectly lawful, because a platform puts its safe harbor at risk if it doesn't remove the targeted material. Unfortunately, in the wake of Lenz it has been functionally impossible for those whose speech has been removed to hold the senders of these invalid notices liable for the harm they caused. And it's not as though affected speakers have other options for remedying their injury.

Also, it is not only the sort of notices at issue in Lenz that have been affecting speakers and speech. An important thing to remember is that the DMCA actually provides for four different kinds of safe harbors. We most often discuss the Section 512(c) safe harbor, which is for platforms that store content "at the direction of users." Section 512(c) describes the "takedown notices" that copyright holders need to send these platforms to get that user-stored content removed. But the service providers that instead use the safe harbor at Section 512(a) aren't required to accept these sorts of takedown notices, which makes sense, because there's nothing for them to take down. These providers are generally all-purpose ISPs, including broadband ISPs, and customers cut off from one have all too few alternatives to turn to. All the user expression these providers handle is inherently transient, because their sole job is to deliver it to where it's going, not store it.

And yet these sorts of providers are also required, like any other platform that uses any of the other safe harbors, to comply with Section 512(i) and have a policy to terminate repeat infringers. The question, of course, is how they are supposed to know whether one of their users actually is a repeat infringer. And that's where recent case law has gotten especially troubling from a First Amendment standpoint.

The issue is that, while there are plenty of problems with Section 512(c) takedown notices, the sorts of notices being sent to 512(a) service providers are even uglier. Take the notices Rightscorp sent in the BMG v. Cox case, the first in an expanding line of cases pushing 512(a) service providers like Cox to lose their safe harbor for not holding these mere allegations of infringement against their users and terminating them from their services. These notices are often duplicative, voluminous beyond any reasonable measure, extortionate in their demands, and reflective of completely invalid copyright claims. And yet, so far, the courts have not seemed to care.

As we noted at the roundtable, the court in Cox ultimately threw out all the infringement claims of an entire plaintiff because it wasn't clear that the plaintiff even owned the relevant copyrights, despite Rightscorp having sent Cox numerous notices claiming that it did. But instead of finding that these deficiencies in the notices justified the ISP's suspicions about the merit of the other notices it had received, the court still held it against the ISP that it hadn't automatically credited all the claims in all those other notices, despite ample reason to be dubious about them. Worse, the court faulted the ISP not just for refusing to automatically believe the infringement notices it had received but for not acting upon them to terminate users who had accumulated too many. As we and other participants flagged at the hearing, there are significant problems with this reasoning. One relates to the very idea that termination of a user is ever an appropriate or constitutional reaction, even if the user actually is infringing copyright. Since the last hearing the Supreme Court has announced in Packingham v. North Carolina that being cut off from the Internet in this day and age is unconstitutional. (As someone else at the roundtable this time pointed out, if it isn't OK to kick someone off the Internet for being a sex offender, it is even less likely to be OK to kick someone off the Internet for merely infringing copyright.)

Second, the Cox court ran square into the crux of the First Amendment problem with the DMCA: it forces ISPs to act against their users based on unadjudicated allegations of infringement. It's bad enough that legitimate speech gets taken down by unadjudicated claims in the 512(c) notice-and-takedown context, but conditioning a platform's safe harbor on preventing a person from ever speaking online again, simply because they've received too many allegations of infringement, presents an even bigger problem. Especially since, as we pointed out, it opens the door for would-be censors to game the system: simply make as many unfounded accusations of infringement as you want against the speaker you don't like (accusations no one will ever be able to effectively sanction you for making), and the platform will have no choice but to kick that speaker off its service in order to protect its safe harbor.

There is yet another major problem underlying this, and every other, aspect of the DMCA's operation: there is no way to tell on its face whether user speech is actually infringing. Is there actually a copyright? If so, who owns it? Is there a license that permitted the use? What about fair use? A provider that gets an infringement notice will have no way to accurately assess the answers to these questions, which is why it's so problematic that providers are forced to presume every allegation is meritorious, since so many won't be.

But the roundtable also hit on another line of cases suffering from this same problem of infringement never being facially apparent. In Mavrix v. LiveJournal the Ninth Circuit considered the moderation LiveJournal was doing (moderation allowed, and encouraged, by CDA Section 230) to have potentially waived its safe harbor. The problem with the court's decision was that it construed the way LiveJournal screened user-supplied content as converting that content from content stored "at the direction of users" into LiveJournal's own content, and several roundtable participants pointed out that this reading was not a good one. In fact, it's a terrible one if you want to ensure that platforms remain motivated, and able, to perform the screening functions Congress wanted them to perform when it passed Section 230. Because there's a more general concern: if various provisions of the DMCA suddenly turn out to be gotchas that cost platforms their safe harbor whenever, in the process of screening content, they happen to see some that might be infringing, platforms won't be able to keep doing that screening. Perhaps this is not a full-on First Amendment problem, but it still affects online expression and the ability of platforms to enable it.

Filed Under: 512f, 512i, censorship, copyright, dmca, dmca 512, free speech, takedowns
Companies: bmg, cox, rightscorp