cda – Techdirt
Please Take A Moment To Celebrate How A Very Different Supreme Court Saved The Internet 25 Years Ago
from the can-we-please-not-have-to-do-this-again dept
The terrible, awful, no good, horrible plans to regulate the internet keep coming faster and furiouser these days. So, it’s worth remembering a time back when Congress passed one of the worst laws about the internet: the Communications Decency Act. Yes, these days we talk about the CDA more reverently, but that’s only because we’re talking about the one part of it that wasn’t declared unconstitutional: Section 230. Section 230, of course, was never even supposed to be a part of the CDA in the first place. It was crafted by then-Representatives Chris Cox and Ron Wyden as an alternative approach to the ridiculousness that was coming out of Senator James Exon in the Senate.
But, you know, this is Congress, and rather than just do the right thing, it mashed the two approaches together in one bill and figured God or the courts would sort it out. And, thankfully, the courts did sort it out. Twenty-five years ago this week, the court decided Reno v. ACLU, dumped the entire CDA (minus Section 230) as blatantly unconstitutional, and, in effect, saved the internet.
Jared Schroeder and Jeff Kosseff wrote up a nice article about the 25th anniversary of the Reno decision that is well worth reading.
When faced with the first significant case about online expression, justices went in a completely different direction than Congress, using the Reno case to confer the highest level of protections on online expression.
The case started when a broad coalition of civil liberties groups, business interests, and others, including the American Civil Liberties Union, American Library Association, Planned Parenthood Federation of America, and Microsoft, sued. A three-judge panel in Philadelphia struck down much of the law, and the case quickly moved to the Supreme Court.
The federal government tried to justify these restrictions partly by pointing to a 1978 opinion in which the court allowed the FCC to sanction a radio station that broadcast George Carlin’s “seven dirty words.” Justices dismissed these arguments. They saw something different in the internet and rejected attempts to apply weaker First Amendment protections to the internet. Justices reasoned the new medium was fundamentally different from the scarce broadcast spectrum.
“This dynamic, multifaceted category of communication includes not only traditional print and news services, but also audio, video, and still images, as well as interactive, real-time dialogue,” Justice John Paul Stevens wrote. “Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.”
The article has a lot more details about the case, and why it’s still relevant. Also, how the messages from that ruling are still useful today as we are, once again, facing many attempts to regulate the internet.
The precedent’s relevance isn’t in the case’s dated facts or romanticized predictions. Its enduring value is in the idea the internet should generally be protected from government control. Without the Supreme Court’s lucid and fervent defense of online free speech, regulators, legislators, and judges could have more easily imposed their values on the internet.
There’s a lot more in that article, but go read it… on this very internet that would have been a very, very different place without that ruling.
Filed Under: 1st amendment, cda, communications decency act, internet, reno, reno v. aclu
Companies: aclu
Section 230 and Criminal Law; DOJ 230 Workshop Review, Part II
from the don't-break-the-internet dept
In Part I of this series on the Department of Justice’s February 19 workshop, Section 230 – Nurturing Innovation or Fostering Unaccountability? (archived video and agenda), we covered why Section 230 is important, how it works, and how panelists proposed to amend it.
Here, Part II covers how Section 230 intersects with criminal law, especially around child sexual abuse material (CSAM). Part III will ask what’s really driving DOJ, and explore how to get tough on CSAM without amending Section 230 or banning encryption.
Section 230 Has Never Stopped Enforcement of Most Criminal Laws
The second panel in particular focused on harms that either are already covered by federal criminal law (like CSAM) or arguably should be (like revenge porn). So it’s worth reiterating two things up front:
- Section 230’s protections for websites have always excluded federal criminal law.
- Section 230 has never stopped state or local prosecutors from enforcing state criminal laws against the users responsible for harmful conduct online.
Plaintiff’s lawyer Carrie Goldberg repeatedly mentioned Herrick v. Grindr. Her client Matthew Herrick sued Grindr for failing to stop his ex-boyfriend from repeatedly creating fake Grindr profiles of Herrick, each claiming he had a rape fantasy, and using these profiles to send over 1,200 men to attempt to rape him. Both state criminal law and federal harassment law already cover such conduct. In fact, contrary to Goldberg’s claims that law enforcement did nothing to help her client, Herrick’s ex was arrested in 2017 and charged with stalking, criminal impersonation, making a false police report, and disobeying a court order.
On the same panel, Yiota Souras, Senior Vice President and General Counsel of the National Center for Missing and Exploited Children, acknowledged that Section 230 didn’t stop federal prosecutors from charging executives of Backpage.com. Indeed, the former CEO pleaded guilty literally one day after President Trump signed FOSTA-SESTA — the first legislation to amend Section 230 since the law was enacted in 1996. Souras claimed that the only reason other sites haven’t rushed to fill the gap left by Backpage (in hosting ads for child sex trafficking) was the deterrence effect of the new law.
Correction Notice: This post originally misattributed the above to Prof. Mary Anne Franks, rather than Yiota Souras.
But since FOSTA-SESTA was enacted nearly two years ago, not a single prosecution has been brought under the new law. By contrast, the DOJ managed to actually shut down Backpage.com and prosecute its former CEO, Carl Ferrer. Ferrer is now awaiting sentencing and could face up to five years in prison plus a $250,000 fine. (You can read his plea bargain if you’re interested.) Meanwhile, the two other arrested Backpage executives are continuing to fight their legal case, in which there is increasing evidence that the Justice Department is trying to railroad them into a guilty plea by misrepresenting their efforts to help stop trafficking as evidence they were helping to promote it. It’s a messy case, but with one criminal plea under pre-existing law and zero prosecutions under the new law, it’s hard to argue that the new law accounts for all of the deterrence value Souras ascribes to it.
The Role of States and State Criminal Law
Nebraska Attorney General Doug Peterson said state AGs wanted only one tiny tweak to Section 230: adding state criminal law to the list of exceptions to Section 230’s protections. (The National Association of Attorneys General has been pushing this idea for nearly a decade.) It may sound moderate: after all, since Section 230 doesn’t bar enforcement of federal criminal law, why stop the application of state criminal law? But, as Prof. Goldman noted, there’s a world of difference between the two.
The AGs’ proposal would create four distinct problems:
- Section 230 has ensured that we have a consistent national approach to using criminal law to police how websites and Internet services operate. But if website operators could be charged under any state or local law, you’d have a crazy-quilt of inconsistent state laws. Every state and locality in America could regulate the entire Internet.
- Most scholars agree that federal criminal law has become far too broad, but compared to any one state’s body of criminal law, it’s narrow and tailored. State criminal law includes an almost endless array of offenses, from panhandling to disturbing the peace. Few people would argue that such laws should be applied on the Internet — yet, if Section 230 were amended to allow prosecution under all state laws, creative prosecutors could charge just about any website with just about anything.
- In particular, half the states in the country still criminalize defamation, so opening the door to the enforcement of state criminal law means making websites liable for defamation committed by users — the very thing Section 230 was most specifically intended to prevent. Yes, criminal cases involve a higher burden of proof, but they also carry stiffer penalties. And if websites faced criminal penalties whenever users complained about other users’ speech, the chilling effects would be enormous: any potentially sensitive or objectionable speech would be censored before anyone even complains. Politicians would be in a particularly privileged position, able to silence their critics simply by threatening to have criminal charges filed. Think Trump on steroids — for every politician in America (and anyone else who could get prosecutors to file a criminal complaint, or at least threaten to do so).
- These laws weren’t written for the Internet and don’t reflect the difficult balancing that would have to be done to answer the critical questions: exactly when would a website be responsible for each of the potentially billions of pieces of content it hosts? What kind of knowledge is required? The example of Italian prosecutors charging a Google executive with criminal cyberbullying simply because Google was too slow to take down a video of students taunting an autistic classmate illustrates just how high the stakes could be (never mind that the charges were ultimately overturned by the Italian Supreme Court).
There’s no need to open this can of worms. If the problem is that we don’t have a law for something like revenge porn, we should have that debate — but in Congress, not in every state legislature or town hall. A new federal criminal law could be enforced without amending Section 230.
But if the problem is that federal law enforcement lacks the resources to enforce existing criminal law — again, this is absolutely true for CSAM — the obvious answer would be to enlist state prosecutors in the fight. In fact, the U.S. Attorney General can already designate state prosecutors as “special attorneys” under 28 U.S.C. § 543. Section 230 wouldn’t stop them from prosecuting websites, because Section 230(e)(1) preserves the enforceability of federal criminal law regardless of who’s doing the enforcing. The fact that you’ve almost certainly never heard of this provision ought to make clear that this has never really been about getting state prosecutors more engaged — and make you question the state AGs’ motives. (The same goes for formalizing this process by amending specific federal criminal laws to allow state prosecutors to enforce them.)
We proposed using Section 543 in the SESTA-FOSTA debate back in 2017, but the idea was dismissed out of hand. As a practical matter, it would require state prosecutors to operate in federal court — and thus, in many cases, to learn new practice rules. But that can’t possibly be what’s stopping them from getting involved in CSAM cases.
In Part III, we’ll ask what’s really driving DOJ here. Hint: it’s not really about “protecting the children.”
Filed Under: bill barr, cda, content moderation, csam, defamation, internet, law, publishing, section 230
Why Section 230 Matters And How Not To Break The Internet; DOJ 230 Workshop Review, Part I
from the don't-break-the-internet dept
Festivus came early this year — or perhaps two months late. The Department of Justice held a workshop Wednesday: Section 230 – Nurturing Innovation or Fostering Unaccountability? (archived video and agenda). This was perhaps the most official “Airing of Grievances” we’ve had yet about Section 230. It signals that the Trump administration has declared war on the law that made the Internet possible.
In a blistering speech, Trump’s embattled Attorney General, Bill Barr, blamed the 1996 law for a host of ills, especially the spread of child sexual abuse material (CSAM). That proved a major topic of discussion among panelists. Writing in Techdirt three weeks ago, TechFreedom’s Berin Szóka analyzed draft legislation that would use Section 230 to force tech companies to build in backdoors for the U.S. government in the name of stopping CSAM — and predicted that Barr would use this workshop to lay the groundwork for that bill. While Barr never said the word “encryption,” he clearly drew the connection — just as Berin predicted in a shorter piece just before Barr’s speech. Berin’s long Twitter thread summarized the CSAM-230 connection the night beforehand and continued throughout the workshop.
This piece ran quite long, so we’ve broken it into three parts:
- This post, on why Section 230 is important, how it works, and how panelists proposed to amend it.
- Part two, discussing how Section 230 has never applied to federal criminal law, but a host of questions remain about new federal laws, state criminal laws and more.
- Part three, which will be posted next week, discussing what’s really driving the DOJ. Are they just trying to ban encryption? And can we get tough on CSAM without amending Section 230 or banning encryption?
Why Section 230 Is Vital to the Internet
The workshop’s unifying themes were “responsibility” and “accountability.” Critics claim Section 230 prevents stopping bad actors online. Actually, Section 230 places responsibility and liability on the correct party: whoever actually created the content, be it defamatory, harassing, generally awful, etc. Section 230 has never prevented legal action against individual users — or against tech companies for content they themselves create (or for violations of federal criminal law, as we discuss in Part II). But Section 230 does ensure that websites won’t face a flood of lawsuits for every piece of content they publish. One federal court decision (ultimately finding the website responsible for helping to create user content and thus not protected by Section 230) put this point best:
Websites are complicated enterprises, and there will always be close cases where a clever lawyer could argue that something the website operator did encouraged the illegality. Such close cases, we believe, must be resolved in favor of immunity, lest we cut the heart out of section 230 by forcing websites to face death by ten thousand duck-bites, fighting off claims that they promoted or encouraged — or at least tacitly assented to — the illegality of third parties.
Several workshop panelists talked about “duck-bites” but none really explained the point clearly: One duck-bite can’t kill you, but ten thousand might. Likewise, a single lawsuit may be no big deal, at least for large companies, but the scale of content on today’s social media is so vast that, without Section 230, a large website might face far more than ten thousand suits. Conversely, litigation is so expensive that even one lawsuit could well force a small site to give up on hosting user content altogether.
A single lawsuit can mean death by ten thousand duck-bites: an extended process of appearances, motions, discovery, and, ultimately, either trial or settlement that can be ruinously expensive. The most cumbersome, expensive, and invasive part may be “discovery”: if the plaintiff’s case turns on a question of fact, they can force the defendant to produce that evidence. That can mean turning a business inside out — and protracted fights over what evidence you do and don’t have to produce. The process can easily be weaponized, especially by someone with a political ax to grind.
Section 230(c)(1) avoids all of that by allowing courts to dismiss lawsuits without defendants having to go through discovery or argue difficult questions of First Amendment case law or the potentially infinite array of potential causes of action. Some have argued that we don’t need Section 230(c)(1) because websites should ultimately prevail on First Amendment grounds or that the common law might have developed to allow websites to prevail in court. The burden of litigating such cases at the scale of the Internet — i.e., for each of the billions and billions of pieces of content created by users found online, or even the thousands, hundreds or perhaps even dozens of comments that a single, humble website might host — would be impossible to manage.
As Profs. Jeff Kosseff and Eric Goldman explained on the first panel, Congress understood that websites wouldn’t host user content if the law imposed on them the risk of even a few duck bites per posting. But Congress also understood that, if websites faced increased liability for attempting to moderate harmful or objectionable user content on their sites, they’d do less content moderation — and maybe none at all. That was the risk created by Stratton Oakmont, Inc. v. Prodigy Services Co. (1995): Whereas CompuServe had, in 1991, been held not responsible for user content because it did not attempt to moderate user content, Prodigy was held responsible because it did.
Section 230 solved both problems. And it was essential that, the year after Congress enacted Section 230, a federal appeals court in Zeran v. America Online, Inc. construed the law broadly. Zeran ensured that Section 230 would protect websites generally against liability for user content — essentially, it doesn’t matter whether plaintiffs call websites “publishers” or “distributors.” Pat Carome, a partner at WilmerHale and lead defense counsel in Zeran, deftly explained the road not taken: If AOL had a legal duty as a “distributor” to take down content anyone complained about, anything anyone complained about would be taken down, and users would lose opportunities to speak at all. Such a notice-and-takedown system just won’t work at the scale of the Internet.
Why Both Parts of Section 230 Are Necessary
Section 230(c)(1) says simply that “No provider or user of an interactive computer service [content host] shall be treated as the publisher or speaker of any information provided by another information content provider [content creator].” Many Section 230 critics, especially Republicans, have seized upon this wording, insisting that Facebook, in particular, really is a “publisher” and so should be held “accountable” as such. This misses the point of Section 230(c)(1), which is to abolish the publisher/distributor distinction as irrelevant.
Miami Law Professor Mary Anne Franks proposed scaling back, or repealing, 230(c)(1) but leaving 230(c)(2)(A), which shields “good faith” moderation practices. She claimed this section is all that tech companies need to continue operations as “Good Samaritans.”
But as Prof. Goldman has explained, you need both parts of Section 230 to protect Good Samaritans: (c)(1) protects decisions to publish or not to publish broadly, while (c)(2) protects only proactive decisions to remove content. Roughly speaking, (c)(1) protects against complaints that content should have been taken down or taken down faster, while (c)(2) protects against complaints that content should not have been taken down or that content was taken down selectively (or in a “biased” manner).
Moreover, (c)(2) turns on an operator’s “good faith,” which they must establish to prevail on a motion to dismiss. That question of fact opens the door to potentially ruinous discovery — many duck-bites. A lawsuit can usually be dismissed via Section 230(c)(1) for relatively trivial legal costs (say, <$10k). But relying on a common law or 230(c)(2)(A) defense — as opposed to a statutory immunity — means having to argue both issues of fact and harder questions of law, and thus could raise that cost to easily ten times or more. Having to spend, say, $200k to win even a groundless lawsuit creates an enormous “nuisance value” to such claims — which, in turn, encourages litigation for the purpose of shaking down companies to settle out of court.
Class action litigation increases legal exposure for websites significantly: Though fewer in number, class actions are much harder to defeat because plaintiffs’ lawyers are generally sharp and intimately familiar with how to wield maximum pressure to settle through the legal system. This is a largely American phenomenon and helps to explain why Section 230 is so uniquely necessary in the United States.
Imagining Alternatives
The final panel discussed “alternatives” to Section 230. FTC veteran Neil Chilson (now at the Charles Koch Institute) hammered a point that can’t be made often enough: it’s not enough to complain about Section 230; instead, we have to evaluate specific proposals to amend Section 230 and ask whether they would make users better off. Indeed! That requires considering the benefits of Section 230(c)(1) as a true immunity that allows websites to avoid the duck-bites of the litigation (or state/local criminal prosecution) process. Here are a few proposed alternatives, focused on expanding civil liability. Part II (to be posted later today) will discuss expanding state and local criminal liability.
Imposing Size Caps on 230’s Protections
Critics of Section 230 often try to side-step startup concerns by suggesting that any 230 amendments preserve the original immunity for smaller companies. For example, Sen. Hawley’s Ending Support For Internet Censorship Act would make 230 protections contingent upon FTC certification of the company’s political neutrality if the company had 30 million active monthly U.S. users, more than 300 million active monthly users worldwide, or more than $500 million in global annual revenue.
Julie Samuels, Executive Director of Tech:NYC, warned that such size caps would “create a moat around Big Tech,” discouraging the startups she represents from growing. Instead, a size cap would only further incentivize startups to get acquired by Big Tech before they lose immunity. Prof. Goldman noted two reasons why it’s tricky to distinguish between large and small players on the Internet: (1) several smaller companies are among the top 15 U.S. services — e.g., Craigslist, Wikipedia, and Reddit — with small staffs but large footprints; and (2) some enormous companies, e.g., Cloudflare and IBM, rarely deal with user-generated content, yet would still face all of the obligations that apply to companies with a bigger user-generated footprint. You don’t have to feel sorry for IBM to see the problem for users: laws like Hawley’s could drive such companies to get out of the business of hosting user-generated content altogether, deciding that it’s too marginal to be worth the burden.
Holding Internet Services Liable for Violating their Terms of Service
Goldberg and other panelists proposed amending Section 230 to hold Internet services liable for violating their terms of service agreements. Usually, when breach of contract or promissory estoppel claims are brought against services, they involve post or account removals. Courts almost always reject such claims on 230(c)(1) grounds as indirect attempts to hold the service liable as a publisher for those decisions. After all, Congress clearly intended to encourage websites to engage in content moderation, and removing posts or accounts is critical to how social media keep their sites usable.
What Goldberg really wants is liability for failing to remove the type of content that sites explicitly disallow in their terms (e.g., harassment). But such liability would simply cause Internet services to make their terms of service less specific — and some might even stop banning harassment altogether. Making sites less willing to remove (or ban) harmful content is precisely the “moderator’s dilemma” that Section 230 was designed to avoid.
Conversely, some complain that websites’ terms of service are too vague — especially Republicans, who complain that, without more specific definitions of objectionable content, websites will wield their discretion in politically biased ways. But it’s impossible for a service to foresee all of the types of awful content its users might create. If websites have to be more specific in their terms of service, they’d have to update those terms constantly, and if they could be sued for failing to remove every piece of content they say they prohibit… that’s a lot of angry ducks. The tension between these two complaints should be clear. Section 230, as written, avoids this problem by simply protecting website operators from having to litigate these questions.
Finally, in general, contract law requires a plaintiff to prove both breach and damages/harm. But with online content, damages are murky. How is one harmed by a violation of a TOS? It’s unclear exactly what Goldberg wants. If she’s simply saying Section 230 should be interpreted, or amended, not to block contract actions based on supposed TOS violations, most of those are going to fail in court anyway for lack of damages. But if they allow a plaintiff to get a foot in the door, to survive an initial motion to dismiss based on some vague theory of alleged harm, even having to defend against lawsuits that will ultimately fail creates a real danger of death-by-duck-bites.
Compounding the problem — especially if Goldberg is really talking about writing a new statute — is the possibility that plaintiffs’ lawyers could tack on other, even flimsier causes of action. These should be dismissed under Section 230, but, again, more duck-bites. That’s precisely the issue raised by Patel v. Facebook, where the Ninth Circuit allowed a lawsuit under Illinois’ biometric privacy law to proceed based on a purely technical violation of the law (failure to deliver the exact form of required notice for the company’s facial recognition tool). The Ninth Circuit concluded that such a violation, even if it amounted only to “intangible damages,” was sufficient to confer standing on plaintiffs to sue as a class without requiring individual damage showings by each member of the class. We recently asked the Supreme Court to overrule the Ninth Circuit, but it declined to take the case, leaving open the possibility that plaintiffs can get into federal court without alleging any clear damages. The result in Patel, as one might imagine, was a quick settlement by Facebook in the amount of $500 million shortly after the petition for certiorari was denied, given that total statutory damages available to the class would have amounted to many billions. Even the biggest companies can be duck-bitten into massive settlements.
Limiting Immunity to Traditional Publication Torts
Several panelists claimed Section 230(c)(1) was intended to cover only traditional publication torts (defamation, libel, and slander) and that, over time, courts have wrongly broadened the immunity’s coverage. But there’s just no evidence for this revisionist account: Prof. Kosseff found none after exhaustive research on Section 230’s legislative history for his definitive book. Otherwise, as Carome noted, Congress wouldn’t have needed to include the statute’s other, non-defamation exceptions, like intellectual property and federal criminal law.
Anti-Conservative Bias
Republicans have increasingly fixated on one overarching complaint: that Section 230 allows social media and other Internet services to discriminate against them, and that the law should require political neutrality. (Given the ambiguity of that term and the difficulty of assessing patterns at the scale of the content available on today’s Internet, in practice this requirement would actually mean giving the administration the power to force websites to favor it.)
The topic wasn’t discussed much during the workshop, but, according to multiple reports from participants, it dominated the ensuing roundtable. That’s not surprising, given that the roundtable featured only guests invited by the Attorney General. The invite list isn’t public and the discussion was held under Chatham House rules, but it’s a safe bet that it was a mix of serious (but generally apolitical) Section 230 experts and the Star Wars cantina freak show of right-wing astroturf activists who have made a cottage industry out of extending the Trumpist persecution complex to the digital realm.
TechFreedom has written extensively on the unconstitutionality of inserting the government into the exercise of editorial discretion by website operators. Just for example, read our statement on Sen. Hawley’s proposed legislation on regulating the Internet and Berin’s 2018 Congressional testimony on the idea (and Section 230, at that shit-show of a House Judiciary hearing that featured Diamond and Silk). Also read our 2018 letter to Jeff Sessions, Barr’s predecessor, on the unconstitutionality of attempting to coerce websites in how they exercise their editorial discretion.
Conclusion
Section 230 works by ensuring that duck-bites can’t kill websites (though federal criminal prosecution can, as Backpage.com discovered the hard way — see Part II). This avoids both the moderator’s dilemma (being more liable if you try to clean up harmful content) and that websites might simply decide to stop hosting user content altogether. Without Section 230(c)(1)’s protection, the costs of compliance, implementation, and litigation could strangle smaller companies even before they emerge. Far from undermining “Big Tech,” rolling back Section 230 could entrench today’s giants.
Several panelists pooh-poohed the “duck-bites” problem, insisting that each of those bites involves real victims on the other side. That’s fair, to a point. But again, Section 230 doesn’t prevent anyone from holding responsible the person who actually created the content. Prof. Kate Klonick (St. John’s Law) reminded the workshop audience of “Balk’s law”: “THE INTERNET IS PEOPLE. The problem is people. Everything can be reduced to this one statement. People are awful. Especially you, especially me. Given how terrible we all are it’s a wonder the Internet isn’t so much worse.” Indeed, as Prof. Goldman noted, however much new technologies might aggravate specific problems, better technologies are essential to facilitating better interaction. We can’t hold back the tide of change; the best we can do is try to steer the Digital Revolution in better directions. And without Section 230, innovation in content moderation technologies would be impossible.
For further reading, we recommend the seven principles we drafted last summer with a group of leading Section 230 experts. Several panelists referenced them at the workshop, but they didn’t get the attention they deserved. Signed by 27 other civil society organizations across the political spectrum and 53 academics, the principles represent the best starting point yet offered for how to think about Section 230.
Next up, in Part II, how Section 230 intersects with the criminal law. And, in Part III… what’s really driving the DOJ, banning encryption, and how to get tough on CSAM.
Filed Under: bill barr, cda, content moderation, csam, internet, law, publishing, section 230
Ted Cruz Demands A Return Of The Fairness Doctrine, Which He Has Mocked In The Past, Due To Misunderstanding CDA 230
from the grandstanding-idiocy dept
Remember the Fairness Doctrine? It was an incredibly silly FCC policy, in force from 1949 to 1987, requiring broadcasters to give some form of “equal time” to “the other side” of controversial matters of public interest. It’s a dumb idea because most issues have a lot more than two sides, and simply pitting two arguments against one another tends to do little to elucidate actual truth — but does tend to get people to dig in more. However, despite the fact that the fairness doctrine was killed more than 30 years ago, Republicans* regularly claim that it’s about to be brought back.
* Our general policy is not to focus on political parties, unless it’s a necessary part of the story, and in this case it is. If you look at people freaking out about the supposed return of the fairness doctrine (which is not returning) it is always coming from Republicans, stirring up their base and claiming that Democrats are trying to bring back the fairness doctrine to silence the Rush Limbaughs and Sean Hannitys of the world.
But that’s why it’s so bizarre that Ted Cruz has taken to the pages of Fox News… to incorrectly claim that the fairness doctrine applies to the internet based on his own tortured (i.e. dead wrong) reading of Section 230 of the Communications Decency Act. We already discussed how wrong Cruz was about CDA 230 in his questions to Mark Zuckerberg (while simultaneously noting how ridiculous Zuck’s responses were).
In his Fox News op-ed, Cruz argues that if a platform is “non-neutral” it somehow loses CDA 230 protections:
Section 230 of the Communications Decency Act (CDA) states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This is a good provision. It means that, for example, if you run a blogging platform and someone posts a terrorist threat in the comments section, you’re not treated as the person making the threat. Without Section 230, many social media networks could be functionally unable to operate.
In order to be protected by Section 230, companies like Facebook should be “neutral public forums.” On the flip side, they should be considered to be a “publisher or speaker” of user content if they pick and choose what gets published or spoken.
This is Cruz only reading Section (c)(1) of CDA 230, and totally ignoring the part right after it that says:
No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
There’s plenty of case law that has made it clear that moderating content on your platform doesn’t make you liable under CDA 230. The very point of this section was to encourage exactly this kind of moderation. Indeed, this part of the CDA was added directly in response to the infamous Stratton Oakmont v. Prodigy case, where Prodigy was found liable for certain posts in part because it moderated other messages.
Now, you could claim that Cruz is not misreading the law — but rather that he’s advocating a return to the days when that Stratton Oakmont ruling was law. After all, he says “Facebook should be ‘neutral public forums.'” But that’s both an impossible standard (what the hell is “neutral” in this context anyway?) and basically a call for a return to the fairness doctrine.
Republicans, who have spent years freaking out about the fairness doctrine, should be really pissed off at Cruz for basically demanding not just a return of the fairness doctrine, but demanding it for all online platforms and setting it at an impossible standard.
And, of course, this is the same Cruz who has railed against the fairness doctrine itself in the past.
“You know, the Obama FCC has invoked the Fairness Doctrine a number of times with sort of wistful glances to the past. Nostalgia,” he said. “You know if I had suggested years ago that the Obama administration would send government observers into the newsrooms of major media organizations, that claim would have been ridiculed. And yet that is exactly what the FCC did.”
Amusingly, this was right after he railed against the Obama FCC for pushing for net neutrality.
So… to sum up Ted Cruz’s views on the internet: net neutrality is evil and an attack on free speech, but platform “neutrality” is necessary. How does that work? Oh, and the fairness doctrine is censorship, but Facebook needs to engage in a form of the fairness doctrine or face stifling civil liability.
It’s almost as if Ted Cruz has no idea what the fuck he’s talking about concerning internet regulations, free speech, neutrality and fairness — but does know that if he hits on certain buzzwords, he’s sure to fire up his base.
Filed Under: cda, fairness doctrine, section 230, ted cruz
Companies: facebook
Amended Complaint Filed Against Backpage… Now With SESTA/FOSTA
from the because-of-course dept
What a weird week for everyone promoting FOSTA/SESTA as being necessary to take down Backpage.com. After all, last Friday, before FOSTA/SESTA was signed into law, the FBI seized Backpage and all its servers, and indicted a bunch of execs there (and arrested a few of them). The backers of FOSTA/SESTA even tried to take credit for the shutting down of the site, despite the fact that the law they “wrote” wasn’t actually the law yet. Separately, as we pointed out, after the bill was approved by Congress, but before it was signed into law, two separate courts found that Backpage was not protected by CDA 230 in civil suits brought by victims of sex trafficking.
On Wednesday, President Trump finally signed the bill despite all of the reasons we were told it was necessary already proven to be untrue (and many of the concerns raised by free speech advocates already proven true). And, on Thursday, in the civil case in Massachusetts (the first to rule that Backpage wasn’t protected by CDA 230 for ads where it helped create illegal content), an amended complaint was filed, this time with FOSTA/SESTA included. Normally, this wouldn’t make any sense, but thanks to the unconstitutional retroactive clause in FOSTA/SESTA it could possibly apply (assuming the judge ignores the constitutional problems).
From the amended complaint:
In March 2018, Congress passed the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (“FOSTA”), and the President signed it into law on April 11, 2018. Pub. L. No. 115-___, ___ Stat. ___ (2018) (codified at, inter alia, 47 U.S.C. § 230). FOSTA specifically states, among its legislative findings, that Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. § 230, “was never intended to provide legal protection to websites that . . . facilitate traffickers in advertising the sale of unlawful sex with sex trafficking victims,” and that “websites that promote and facilitate prostitution have been reckless in allowing the sale of sex trafficking victims and have done nothing to prevent the trafficking of children and victims of force, fraud, and coercion.” FOSTA § 2(1)-(2). Accordingly, Congress passed FOSTA to “clarify that section 230 of [the CDA] does not prohibit the enforcement against providers and users of interactive computer services of Federal and State criminal and civil law relating to sexual exploitation or sex trafficking.” … FOSTA amended, inter alia, Section 230(e) of the CDA to provide that “[n]othing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit . . . any claim in a civil action brought under section 1595 of title 18, United States Code, if the conduct underlying the claim constitutes a violation of section 1591 of that title.” Id. § 4(a). FOSTA also provides that its amendment to Section 230(e) “shall apply regardless of whether the conduct alleged occurred, or is alleged to have occurred, before, on, or after [FOSTA’s] date of enactment.” … The effect of FOSTA is to ensure that website operators like Backpage can be held civilly liable to their victims for their violations of federal criminal law.
And thus, the retroactive clause is already in play. Assuming Backpage continues to fight this, you have to imagine it will note the serious constitutional problems with retroactive clauses like the one in FOSTA/SESTA.
But that, of course, depends on Backpage being around to fight this, and the company is gone thanks to the DOJ action. Oh, and apparently the company and its CEO have accepted plea deals to plead guilty to certain charges (though many other execs have pleaded not guilty).
Still, expect to see other civil lawsuits attempt to use the FOSTA/SESTA retroactive clause in the very near future.
Filed Under: cda, fosta, section 230, sesta
Companies: backpage
Despite Repeated Evidence That It's Unnecessary And Damaging, Trump Signs SESTA/FOSTA
from the because-of-course dept
This was no surprise, but as everyone expected, yesterday President Trump signed SESTA/FOSTA into law leading to the usual excitement from the bill’s supporters — despite the fact that events of the past couple weeks have proved them all wrong. The bill’s supporters repeatedly insisted that SESTA/FOSTA was necessary to stop one company, Backpage.com, because (they falsely insisted) CDA 230 made the site completely immune. Except, that’s clearly not true. In the two weeks since the bill was approved by Congress, two separate courts declared Backpage not protected by CDA 230 and (more importantly) the DOJ seized the whole damn site and indicted most of the company’s execs — all without SESTA/FOSTA.
And, on top of that, many, many sites have already shut down or modified how they do business because of SESTA/FOSTA, proving that the bill is clearly impacting free expression online — just as tons of civil liberties experts warned. And that’s not even touching on the very real concerns of those involved in sex work about how SESTA/FOSTA literally puts their lives in danger — and how it makes it that much more difficult to actually rescue victims of sex trafficking.
As usual, Professor Eric Goldman has a pretty thorough summary of the situation, and notes that there are still a bunch of open questions — including the inevitable constitutional challenges to the bill. The retroactive clause (saying it applies to things that happened prior to the bill being signed) is so obviously unconstitutional that even the Justice Department warned that it would doom the bill if not fixed (which Congress dutifully ignored). But, to me, there’s a bigger question: whether or not a First Amendment challenge could knock out SESTA/FOSTA in the same way that one got most of the original Communications Decency Act tossed out more than 20 years ago (CDA 230 was all that survived of the original CDA).
I am also curious whether or not we will see any reaction from those who promoted and supported SESTA for the past year or so, when the rates of sex trafficking don’t decrease, but the ability to rescue such victims does decline. Somehow, I get the feeling they’ll have moved on and forgotten all of this. And that’s because, for most of them, “stopping sex trafficking” was a convenient excuse for trying to attack the internet.
Filed Under: cda, fosta, free speech, section 230, sesta, trump
Techdirt Podcast Episode 157: The Worst Of Both Worlds – SESTA & FOSTA Together
from the sum-of-its-parts dept
It wasn’t very long ago that we last discussed SESTA on the podcast, but now that the House has voted to approve its version of the bill with SESTA tacked on, it’s unfortunately time to dig into the issues again. So this week we’re joined by returning guest Emma Llansó from the Center for Democracy and Technology and, for the first time, law professor Eric Goldman to talk about why the combination of SESTA and FOSTA has resulted in the worst of both worlds.
Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: cda, emma llanso, eric goldman, fosta, podcast, section 230, sesta
Looking Forward To Next 20 Years Of A Post-Reno Internet
from the your-free-internet dept
Earlier this week, we wrote a little bit about the 20th anniversary of a key case in internet history, Reno v. ACLU, and its important place in internet history. Without that ruling, the internet today would be extraordinarily different — perhaps even unrecognizable. Mike Godwin, while perhaps best known for making sure his own obituary will mention Hitler, also played an important role in that case, and wrote up the following about his experience with the case, and what it means for the internet.
The internet we have today could have been very different, more like the over-the-air broadcast networks that still labor under broad federal regulatory authority while facing declining relevance.
But 20 years ago this week, the United States made a different choice when the U.S. Supreme Court handed down its 9-0 opinion in Reno v. American Civil Liberties Union, the case that established how fundamental free-speech principles like the First Amendment apply to the internet.
I think of Reno as “my case” because I’d been working toward First Amendment protections for the internet since my first days as a lawyer—the first staff lawyer for the Electronic Frontier Foundation (EFF), which was founded in 1990 by software entrepreneur Mitch Kapor and Grateful Dead lyricist John Perry Barlow. There are other lawyers and activists who feel the same possessiveness about the Reno case, most with justification. What we all have in common is the sense that, with the Supreme Court’s endorsement of our approach to the internet as a free-expression medium, we succeeded in getting the legal framework more or less right.
We had argued that the internet—a new, disruptive and, to some large extent, unpredictable medium—deserved not only the free-speech guarantees of the traditional press, but also the same freedom of speech that each of us has as an individual. The Reno decision established that our government has no presumptive right to regulate internet speech. The federal government and state governments can limit free speech on the internet only in narrow types of cases, consistent with our constitutional framework. As Chris Hansen, the brilliant ACLU lawyer and advocate who led our team, recently put it: “We wanted to be sure the internet had the same strong First Amendment standards as books, not the weaker standards of broadcast television.”
The decision also focused on the positive benefits this new medium had already brought to Americans and to the world. As one of the strategists for the case, I’d worked to frame this part of the argument with some care. I’d been a member of the Whole Earth ‘Lectronic Link (the WELL) for more than five years and of many hobbyist computer forums (we called them bulletin-board systems or “BBSes”) for a dozen years. In these early online systems—the precursors of today’s social media like Facebook and Twitter—I believed I saw something new, a new form of community that encompassed both shared values and diversity of opinion. A few years before Reno v. ACLU—when I was a relatively young, newly minted lawyer—I’d felt compelled to try to figure out how these new communities work and how they might interact with traditional legal understandings in American law, including the “community standards” relevant to obscenity law and broadcasting law.
When EFF, ACLU and other organizations, companies, and individuals came together to file a constitutional challenge to the Communications Decency Act that President Bill Clinton signed as part of the Telecommunications Act of 1996, not everyone on our team saw this issue the way I did, at the outset. Hansen freely admits that “[w]hen we decided to bring the case, none of [ACLU’s lead lawyers] had been online, and the ACLU did not have a website.” Hansen had been skeptical of the value of including testimony about what we now call “social media” but more frequently back then referred to as “virtual communities.” As he puts it:
“I proposed we drop testimony about the WELL — the social media site — on the grounds that the internet was about the static websites, not social media platforms where people communicate with each other. I was persuaded not to do that, and since I was monumentally wrong, I’m glad I was persuaded.”
Online communities turned out to be vastly more important than many of the lawyers first realized. The internet’s potential to bring us together meant just as much as the internet’s capacity to publish dissenting, clashing and troubling voices. Justice John Paul Stevens, who wrote the Reno opinion, came to understand that community values were at stake, as well. In early sections of his opinion, Justice Stevens dutifully reasons through traditional “community standards” law, as would be relevant to obscenity and broadcasting cases. He eventually arrives at a conclusion that acknowledges that a larger community is threatened by broad internet-censorship provisions:
“We agree with the District Court’s conclusion that the CDA places an unacceptably heavy burden on protected speech, and that the defenses do not constitute the sort of ‘narrow tailoring’ that will save an otherwise patently invalid unconstitutional provision. In Sable, 492 U. S., at 127, we remarked that the speech restriction at issue there amounted to ‘burn[ing] the house to roast the pig.’ The CDA, casting a far darker shadow over free speech, threatens to torch a large segment of the Internet community.”
The opinion’s recognition of “the Internet community” paved the way for the rich and expressive, but also divergent and sometimes troubling, internet speech and expression we have today.
Which leaves us with the question: now that we’ve had two decades of experience under a freedom-of-expression framework for the internet—one that has informed not just how we use the internet in the United States but also how other voices around the world use it—what do we now need to do to promote “the Internet community”?
In 2017, not everyone views the internet as an unalloyed blessing. Most recently, we’ve seen concern about whether Google facilitates copyright infringement, whether Twitter’s political exchanges are little more than “outrage porn” and whether Facebook enables “hate speech.” U.K. Prime Minister Theresa May, who is almost exactly the same age I am, seems to view the internet primarily as an enabler of terrorism.
Even though we’re now a few decades into the internet revolution, my view is that it’s still too early to make the call that the internet needs more censorship and government intervention. Instead, we need more protection of the free expression and online communities that we’ve come to expect. Part of that protection may come from some version of the network neutrality principles currently being debated at the Federal Communications Commission, although it may not be the version in place under today’s FCC rules.
In my view, there are two additional things the internet community needs now. The first is both legal and technological guarantees of privacy, including through strong encryption. The second is universal access—including for lower-income demographics and populations in underserved areas and developing countries—that would enable everyone to participate fully, not just as consumers but as contributors to our shared internet. For me, the best way to honor the 40th anniversary of Reno v. ACLU will be to make sure everybody is here on the internet to celebrate it.
Mike Godwin (mnemonic@gmail.com) is a senior fellow at R Street Institute. He formerly served as staff counsel for the Electronic Frontier Foundation and as general counsel for the Wikimedia Foundation, which operates Wikipedia.
Filed Under: cda, cda 230, first amendment, free speech, internet, reno v. aclu
How The ACLU's Fight To Protect 'Indecent' Speech Saved The Internet From Being Treated Like Broadcast TV
from the early-adopters-FTW dept
The ACLU is celebrating twenty years of making the internet better. On June 26th, 1997, the ACLU prevailed in Reno v. ACLU, with the Supreme Court striking down the anti-indecency portions of the 1996 Communications Decency Act (CDA).
As can be gathered by the law’s name, it was written from a position of morality and panic — the fear that the internet’s connectivity would drown the nation’s youth in easily-accessible porn. And yet, the law survives today as one of the most important factors in the internet’s speedy growth, thanks to Section 230, which prevents service providers and social media platforms from being held civilly responsible for users’ posts and actions.
But it might not have been that way. In 1996, the ACLU didn’t even have a website of its own and most legislators had nothing more than bill sponsors’ parades of horribles to go on. So, for the children, the CDA criminalized “obscene or indecent” material if it could be viewed by minors.
It was another case of legislators “knowing” what was indecent when they saw it. But even under that wholly subjective standard, the government spent most of its time shrugging.
During the various internet censorship cases the ACLU brought, we asked the government to identify speech in each category, and they were largely unable to do so. For example, they said that an online photo on Playboy’s website of a topless woman was not harmful to minors, but a virtually identical photo on Penthouse’s website was.
The ACLU’s website was born from this legal battle. In order to show standing, the ACLU had to publish something the government might consider “indecent.” It chose a Supreme Court decision declaring George Carlin’s famous “Seven Words You Can’t Say on TV” monologue “indecent.” The entire monologue was included in the decision’s appendix. The ACLU posted the decision and asked readers to guess which words the Supreme Court had found indecent. Obviously, it ended up with far more than seven words, which was enough to give it standing to challenge the CDA provision.
The plan worked. The ACLU took its challenge all the way to the Supreme Court and won. If it hadn’t, the internet would be as boring and lifeless as the blandest of network TV offerings. That’s the standard legislators were hoping to apply to the world’s greatest communication platform: the same rules the FCC applies to broadcast TV. The Supreme Court struck down this damaging provision, recognizing the enormous potential of the web and the threat posed to it by “think of the children” legislation.
The record demonstrates that the growth of the Internet has been and continues to be phenomenal. As a matter of constitutional tradition, in the absence of evidence to the contrary, we presume that governmental regulation of the content of speech is more likely to interfere with the free exchange of ideas than to encourage it. The interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship.
The ACLU’s site has a long interview with Chris Hansen, who led the ACLU’s litigation. It’s well worth reading, especially considering what the web might have become if no one had stepped up to defend “indecent” speech.
Filed Under: cda, first amendment, free speech, indecency, internet, obscenity, reno v. aclu
Companies: aclu
Also Turning 20 Years Old Today: John Perry Barlow's Declaration Of The Independence Of Cyberspace
from the still-a-work-in-progress dept
Earlier today, we wrote about how 20 years ago today, the Communications Decency Act became law (most importantly, Section 230, rather than the rest of it, which was dropped as unconstitutional). Of course, at the time, everyone was mostly focused on the unconstitutional parts trying to outlaw lots of smut online. It was partly that signing (which itself was part of the larger Telecommunications Reform Act) that inspired an apparently fairly drunk John Perry Barlow to pen his now quite famous, and still regularly quoted, Declaration of the Independence of Cyberspace. A snippet:
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.
Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.
Of course, many have attacked its words, and these days, it — like Stewart Brand’s famed “information wants to be free” quote (which is much longer and more nuanced than most people think) — is more often referenced by people who hold it up for the sake of mockery, and to talk about how times have changed, or need to change.
And yet, there are (and remain) some very important concepts in that “dashed off” statement, and Barlow still stands by them today, even as think tanks (or, rather, laugh factories) like ITIF (who brought you brilliant ideas like “SOPA”) pretend he no longer supports it.
The Declaration was not a statement of inevitability, but rather a notice that things are different online. And they are. We’ve seen this over and over again — from back then and continuously up through today. So many of the disputes that we run into are about this very different nature of the internet from the physical world. Borders are not easily marked online, though people have tried. Artificial property restrictions make much less sense when there is no physical scarcity, and digital abundance allows anyone to simply make their own copy. Questions about jurisdiction and power remain. Self-organizing communities continue to show up. Some work better than others. Some work for a time and fail. Other experiments show up to replace them.
And, yes, of course, there have been many attempts to either move existing laws into the internet world, or to craft new ones for that purpose. At the same time, many big corporations have stepped in as well, where their own terms of service often act as a type of constitution. Some of these work better than others. The little tiny good law tucked deep into the horrible law of the CDA, has actually been a key element in protecting much of what Barlow spoke about.
But, as Barlow notes today, it takes a lot of work to keep the system moving in the right direction, and it’s something we cannot and should not take for granted:
Barlow admits that what he describes as the “immune system” of the Internet isn’t exactly automatic. It requires effort on the part of activists like himself. “It wasn’t a slam dunk and it isn’t now. I wouldn’t have started the EFF and the Freedom of the Press Foundation” if it were, he says. But he nonetheless believes that there is a kind of inexorable direction of the Internet’s political influence toward individual liberty.
The technology and innovation continue to make things possible, but what happens next depends on what people do with it.
Filed Under: cda, cda 230, declaration of independence of cyberspace, internet rights, john perry barlow, jurisdiction