
Two Dogmas Of The Free Speech Panic

from the not-to-be-dogmatic... dept

Antonio García Martínez recently invited me on his podcast, The Pull Request. I was thrilled. Antonio is witty, charming, and intimidatingly brilliant (he was a PhD student in physics at Berkeley, and it shows). We did the episode, and we had a great time. But we never got to an important topic—Antonio’s take on free speech and the Internet.

In April, Antonio released a piece on his Substack, Freeze peach and the Internet, in which he asserts the existence of a “‘content moderation’ regime that is utterly re-defining speech in liberal societies.” That “regime” wants, Antonio contends, to “arbitrate truth and regulate online behavior for the sake of some supposed greater good.” It is opposed by those who still support freedom of speech. Antonio believes that the “regime” and its opponents are locked in an epic battle, and that we all must pick a side.

I’m not sure what to make of some of Antonio’s claims. We’re told, for instance, that “freedom of reach is freedom of speech”—which sounds like a nod to the New Left’s call, in the 1960s and 70s, to seize “the means of communication.” But then we’re told that “Twitter isn’t obligated to give you reach if user interest in your speech is low.” So Antonio is not demanding reach equality. “It’s simply not the case,” he says, “that freedom of speech is some legal binary switched between an abstract allow/not-allow state.” Maybe, then, the point is that we must think about the effects of algorithmic amplification. Who is ignoring or attacking that point, I do not know.

At any rate, a general critique of Antonio’s article this post is not.

In 1951 Willard Van Orman Quine, one of the great analytic philosophers of the twentieth century, wrote a short paper called “Two Dogmas of Empiricism.” Quine put to the torch two key assumptions made by the logical positivists, a philosophical school popular in the first half of the century. Antonio, in his piece, promotes two key assumptions commonly made by those who fear “Big Tech censorship.” If Mike Masnick can riff on Arrow’s impossibility theorem to explain why content moderation is so difficult, I figure I can riff on Quine’s “dogmas” paper to explore two ways in which the fears of online “censorship” by private platforms are overblown. As we’re about to see, in fact, Quine’s work can teach us something valuable about content moderation.

Antonio’s first dogma is the belief that either you’re for free speech, or you’re not—you’re for the censors and the would-be arbiters of truth. His second is the belief that Twitter is the “public square,” and that the state of the restrictions there is the proper gauge of the state of free speech in our nation as a whole. With apologies to H.L. Mencken, these dogmas are clear, simple, and wrong.

Dogma #1: Free Speech: With Us or Against Us

AGM insists that the debate about content moderation boils down to a single overriding divide. “The real issue,” he says—the issue “the consensus pro-censorship crowd will never directly address”—is this:

Do you think freedom of speech includes the right to say and believe obnoxious stupid shit that’s almost certainly false, or do you feel platforms have the responsibility to arbitrate truth and regulate online behavior for the sake of some supposed greater good?

That’s it. “If you think” that “dumb and even offensive speech” is “protected speech,” you’re “on the Elon [Musk] side of this debate.” Otherwise, you think that “platforms should be putting their fingers on the scales,” and you’re therefore on “the anti-Elon” side. As if to add an exclamation point, Antonio declares: “Some countries have real free speech, and some countries have monarchs on their coins.” (I’ve seen it said, in a similar vein, that all anyone “really” cares about is “political censorship,” and that _that_’s the key issue the “consensus pro-censorship crowd” won’t grapple with.)

Antonio presents a nice, neat dividing line. There’s the stuff no one likes—Antonio points to dick pics, beheading videos, child sexual abuse material, and hate speech that incites violence—and then there’s people’s opinions. All the talk of content moderation is just obfuscation—an elaborate effort to hide this clear line. “Quibbling over the precise content policy in the pro-content moderation view,” Antonio warns, “is just haggling over implementation details, and essentially ceding the field to that side of the debate.”

The logical positivists, too, wanted some nice, neat lines. Bear with me.

Like most philosophers, the LPs wanted to know what we can know. One reason arguments often go in circles, or bog down in confusion, is that humans make a lot of statements that aren’t so much wrong as simply meaningless. Many sentences don’t connect to anything in the real world over which a productive argument can be had. (Extreme example: “the Absolute enters into, but is itself incapable of, evolution and progress.”) The LPs wanted to separate the wheat (statements of knowledge) from the chaff (metaphysical gobbledygook, empty emotive utterances, tribal call signs, etc.). To that end, they came up with something called the verification principle.

In 1936 a brash young thinker named A.J. Ayer—the AGM of early twentieth century philosophy—published a crisp and majestic but (as Ayer himself later admitted) often mistaken book, Language, Truth & Logic, in which he set forth the verification principle in its most succinct form. Can observation of the world convince us of the likely truth or falsity of a statement? If so, the statement can be verified. And “a sentence,” Ayer argued, “says nothing unless it is empirically verifiable.” That’s it.

Problem: mathematics and formal logic seem to reveal useful—indeed, surprising—things about the world, but without adhering to the verification principle. In the LPs’ view, though, this was just a wrinkle. They postulated a distinction between good, juicy “synthetic” statements that can be verified, and drab old “analytic” statements that, according to (young) Ayer, are just games we play with definitions. (“A being whose intellect was infinitely powerful would take no interest in logic and mathematics. For he would be able to see at a glance everything that his definitions implied[.]”)

So the LPs had two dogmas: that a sentence either does or does not refer to immediate experience, and that a sentence can be analytic or synthetic. But as Quine explained in his paper, these pat categories are rubbish. He addressed the latter dogma first, raising a number of problems with it that aren’t worth getting into here. (For one thing, definitions are set by human convention; their “correct” use is open to empirical debate.) He then took aim at the verification principle—or, as he put it, the “dogma of reductionism”—itself.

The logical positivists went wrong, Quine observed, in supposing “that each statement, taken in isolation from its fellows, can admit of confirmation or infirmation.” It’s “misleading to speak of the empirical content of an individual statement,” he explained, because statements “face the tribunal of sense experience not individually but only as a corporate body.” There aren’t two piles of statements—those that can be verified and those that can’t. Rather, “the totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even pure mathematics and logic,” is a continuous “man-made fabric.” As we learn new things, “truth values have to be redistributed over some of our statements. Re-evaluation of some statements entails re-evaluation of others.” Our knowledge is not a barrel of apples that we go through, apple-by-apple, keeping the ripe ones and tossing the rotten. It is, in the words of philosopher Simon Blackburn, a “jelly of belief,” the whole of which “quiver[s] in reaction to ‘recalcitrant’ or surprising experience.”

See how this ties into content moderation? Steve Bannon was booted from Twitter because he said: “I’d put [Anthony Fauci’s and Christopher Wray’s] heads on pikes. Right. I’d put them at the two corners of the White House. As a warning to federal bureaucrats: Either get with the program or you’re gone.” Is this just an outlandish opinion—some “obnoxious stupid shit that’s almost certainly false”—or is it an incitement to violence? Why is this statement different from, say, “I’d put Gentle’s and Funshine’s heads on pikes . . . as a warning to the other Care Bears”?

When Donald Trump told the January 6 rioters, “We love you. You’re very special,” was that political speech? Or was it sedition? As with “heads on pikes,” the statement itself won’t answer that question for you. The same problem arises when Senate candidate Eric Greitens invites you to go “RINO hunting,” or when a rightwing pundit announces that the Constitution is “null and void.” And who says we must look at each piece of content in isolation? Say the Oath Keepers are prevalent on your platform. They’re not planning an insurrection right now; they’re just riling each other up and getting their message out and recruiting. Is this just (dumb) political speech? Or is it more like a slowly developing beheading video? (If a platform says, “Don’t care where you go, guys, but you can’t stay here,” is it time to put monarchs on our coins?)

Similar issues arise with harassment. Doxxing, deadnaming, coordinated pile-ons, racist code words, Pepe memes—all present line-drawing issues that can’t be resolved with appeals to a simple divide between bad opinions and bad behavior. In each instance, we have no choice but to “quibbl[e] over the precise content policy.” Disagreement will reign, moreover, because each of us will enter the debate with a distinct set of political, cultural, contextual, and experiential priors. To some people, Jordan Peterson deadnaming Elliot Page is obviously harassment. To others (including, I confess, myself), his doing so pretty clearly falls within the rough-and-tumble of public debate. But that disagreement is not, at bottom, about that individual piece of content; it’s about the entire panoply of clashing priors.

It’s great that we have acerbic polemicists like Antonio. I’m glad that he’s out there pushing his conception of freedom and decrying safety-ism. (He’s on his strongest footing, I suppose, when he complains about the labeling, “fact-checking,” and blocking of Covid claims.) I hope that he and his swashbuckling ilk never stop defending “our American birthright of constant and cantankerous rebellion against the status quo.” But it’s just not true that there’s a free speech crowd and a pro-censorship crowd and nothing in between. Content moderation is complicated and difficult, and people’s views about it sit on a continuum.

Dogma #2: The Public Square, Website-by-Website

Antonio’s other dogma is the view—held by many—that Twitter is in some meaningful sense the “public square.” Antonio has some pointed criticisms for those who believe that “Twitter isn’t the public forum, and as such shouldn’t be treated with the sacrosanct respect we typically imbue anything First Amendment-related.”

As the second part of that sentence suggests, AGM gets to his destination by an idiosyncratic route. He seems to think that, in other people’s minds, the public square is where solemn and civilized discussion of public issues occurs. But as Antonio points out, there’s never been such a place. We’re Americans; we’ve always hashed things out by shouting at each other. Today, one of the places where we shout at each other is on Twitter. Ergo, in Antonio’s mind, Twitter is the public square.

I don’t get it. “Everyone invoking some fusty idea of ‘debate’ or even a healthy ‘marketplace of ideas,’” Antonio writes, “is citing bygone utopias that never were, and never will be.” Who is this “everyone”? Anyway, just because there’s a place where debate occurs does not mean that that place is the “public square.” In 2019 Antonio was saying that we should break up Facebook because it has a “stranglehold” on “attention.” So why isn’t it the public square? Perhaps it’s both Twitter and Facebook? But then what about Substack—where AGM published his piece? What about the many podcast platforms that carry his conversations? What about Rumble and TikTok? Heck, what about Techdirt? The “public square”—if we really must go about trying to precisely define such a thing—is not Twitter but the Internet.

Antonio appeals to the “conditions our democracy was born in.” The “vicious, ribald, scabrous, offensive, and often violent tumult of the Founders’ era,” he notes, “makes modern Twitter look like a Mormon picnic by comparison.” This begs the question. Look at what Americans are saying on the Internet as a whole; it’s as vicious, ribald, scabrous, offensive, and violent as you please. If what matters is that our discourse resemble that of the founding era, we can rest easy. Ben Franklin’s brother used his publication, The New-England Courant, to rail against smallpox inoculation; modern anti-vaxxers use Gab to similar effect. James Callender used newspapers and pamphlets to viciously (but often accurately) attack Adams, Hamilton, and Jefferson; Matt Taibbi and Glenn Greenwald use newsletters and podcasts to viciously (but at times accurately) attack Joe Biden and Hillary Clinton. In his Porcupine’s Gazette, William Cobbett cried, “Professions of impartiality I shall make none”; the website American Greatness boasts about being called “a hotbed of far-right Trumpist nationalism.” Plus ça change . . .

Antonio says that we need “unfettered debate” in a “public square” that we “shar[e]” with “our despised political enemies.” Surveying the Internet, I’d say we have exactly that.

Now, I don’t deny that there’s a swarm of activists, researchers, academics, columnists, politicians, and government officials—not to mention the tech companies themselves—that make up what journalist Joe Bernstein calls “Big Disinfo.” Not surprisingly, the old gatekeepers of information, along with those who once benefited from greater information gatekeeping, are upset that social media allows information to bypass gates. “That the most prestigious liberal institutions of the pre-digital age are the most invested in fighting disinformation,” Bernstein submits, “reveals a lot about what they stand to lose, or hope to regain.” Indeed.

But so what? There’s a certain irony here. The people most convinced that our elite institutions are inept and crumbling are also the ones most concerned that those institutions will take over the Internet, throttle speech, and (toughest of all) reshape opinion—all, presumably, without violating the First Amendment. Are the forces of Big Disinfo really that competent? Please.

Antonio and I are both fans of Martin Gurri, whose 2014 book The Revolt of the Public is basically a long meditation on why Antonio’s “content-moderation regime” can’t succeed. “A curious thing happens to sources of information under conditions of scarcity,” Gurri proposes. “They become authoritative.” Thanks to the Internet, however, we are living through an unprecedented information explosion. When there’s information abundance, no claim is authoritative. Many claims must compete with each other. All claims (but especially elite claims) are questioned, challenged, and ridiculed. (In this telling, our current tumult is more vicious, ribald, etc., than that of the founding era.) Unable to shut down competing claims, elites can’t speak with authority. Unable to speak with authority, they can’t shut down competing claims.

Short of an asteroid strike, World War III, the rise of a thoroughgoing despotism, or some kind of Butlerian jihad, the flow of information can’t be stopped.

Filed Under: antonio garcia martinez, content moderation, free reach, free speech, public square

Musk, Twitter, Why The First Amendment Can’t Resolve Content Moderation (Part I)

from the let's-see-how-that-works-in-practice dept

“Twitter has become the de facto town square,” proclaims Elon Musk. “So, it’s really important that people have both the reality and the perception that they’re able to speak freely within the bounds of the law.” When pressed by TED’s Chris Anderson, he hedged: “I’m not saying that I have all the answers here.” Now, after buying Twitter, his position is less clear: “I am against censorship that goes far beyond the law.” Does he mean either position literally?

Musk wants Twitter to stop making contentious decisions about speech. “[G]oing beyond the law is contrary to the will of the people,” he declares. Just following the First Amendment, he imagines, is what the people want. Is it, though? The First Amendment is far, far more absolutist than Musk realizes.

Remember the neo-Nazis with burning torches screaming “the Jews will not replace us!”? The First Amendment required Charlottesville to allow that demonstration. Some of the marchers were arrested and prosecuted for committing acts of violence. One even killed a bystander with his car. The First Amendment permits the government to punish violent conduct but—contrary to what Musk believes—almost none of the speech associated with it.

The Constitution protects “freedom for the thought that we hate,” as Justice Oliver Wendell Holmes declared in a 1929 dissent that has become the bedrock of modern First Amendment jurisprudence. In most of the places where we speak, the First Amendment does not set limits on what speech the host, platform, proprietor, station, or publication may block or reject. The exceptions are few: actual town squares, company-owned towns, and the like—but not social media, as every court to decide the issue has held.

Musk wants to treat Twitter as if it were legally a public forum. A laudable impulse—and of course Musk has every legal right to do that. But does he really want to? His own statements indicate not. And on a practical level, it would not make much sense. Allowing anyone to say anything lawful, or even almost anything lawful, would make Twitter a less useful, less vibrant virtual town square than it is today. It might even set the site on a downward spiral from which it never recovers.

Can Musk have it both ways? Can Twitter help ensure that everyone has a soapbox, however appalling their speech, without alienating both users and the advertisers who sustain the site? Twitter is already working on a way to do just that—by funding Bluesky—but Musk doesn’t seem interested. Nor does he seem interested in other technical and institutional improvements Twitter could make to address concerns about arbitrary content moderation. None of these reforms would achieve what seems to be Musk’s real goal: politically neutral outcomes. We’ll discuss all this in Part II.

How Much Might Twitter’s Business Model Change?

A decade ago, a Twitter executive famously described the company as “the free speech wing of the free speech party.” Musk may imagine returning to some purer, freer version of Twitter when he says “I don’t care about the economics at all.” But in fact, increasing Twitter’s value as a “town square” will require Twitter to continue striking a careful balance between what individual users can say and the kind of environment that makes so many people want to use the site so regularly.

User Growth. A traditional public forum (like Lee Park in Charlottesville) is indifferent to whether people choose to use it. Its function is simply to provide a space for people to speak. But if Musk didn’t care how many people used Twitter, he’d buy an existing site like Parler or build a new one. He values Twitter for the same reason any network is valuable: network effects. Digital markets have always been ruled by Metcalfe’s Law: the value of a network is proportional to the square of the number of nodes in the network.

No, not all “nodes” are equal. Twitter is especially popular among journalists, politicians and certain influencers. Yet the site has only 39.6 million active daily U.S. users. That may make Twitter something like ten times larger than Parler, but it’s only one-seventh the size of Facebook—and only the world’s fifteenth-largest social network. To some in the “very online” set, Twitter may seem like everything, but 240 million Americans age 13+ don’t use Twitter every day. Quadrupling Twitter’s user base would make the site still only a little more than half as large as Facebook, but Metcalfe’s law suggests that would make Twitter roughly sixteen times more impactful than it is today.
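The arithmetic behind that claim is worth making explicit. Here is a minimal sketch in Python; the user figure comes from the paragraph above, while the function name and the proportionality constant k are illustrative assumptions, not anything from Twitter’s actual internals.

```python
# Metcalfe's Law: a network's value scales with the square of its node count.
# The figures below are illustrative assumptions, not actual platform metrics.

def metcalfe_value(users: float, k: float = 1.0) -> float:
    """Relative network value under Metcalfe's Law: V = k * n**2."""
    return k * users ** 2

twitter_today = 39.6e6                  # daily active U.S. users cited above
twitter_quadrupled = 4 * twitter_today

ratio = metcalfe_value(twitter_quadrupled) / metcalfe_value(twitter_today)
print(f"Quadrupling the user base multiplies value by {ratio:.0f}x")  # 16x
```

Because value grows quadratically under this model, user growth pays off disproportionately, which is why growth would matter to Musk even if revenue didn’t.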

Of course, trying to maximize user growth is exactly what Twitter has been doing since 2006. It’s a much harder challenge for Twitter than for Facebook or other sites premised on existing connections. Getting more people engaged on Twitter requires making them comfortable with content from people they don’t know offline. Twitter moderates harmful content primarily to cultivate a community where the timid can express themselves, where moms and grandpas feel comfortable, too. Very few Americans want to be anywhere near anything like the Charlottesville rally—whether offline or online.

User Engagement. Twitter’s critics allege the site highlights the most polarizing, sensationalist content because it drives engagement on the site. It’s certainly possible that a company less focused on its bottom line might change its algorithms to focus on more boring content. Whether that would make the site more or less useful as a town square is the kind of subjective value judgment that would be difficult to justify under the First Amendment if the government attempted to legislate it.

But maximizing Twitter’s “town squareness” means more than maximizing “time on site”—the gold standard for most sites. Musk will need to account for users’ willingness to actually engage in dialogue on the site.

https://twitter.com/ARossP/status/1519062065490673670

Short of leaving Twitter altogether, overwhelmed and disgusted users may turn off notifications for “mentions” of them, or limit who can reply to their tweets. As Aaron Ross Powell notes, such a response “effectively turns Twitter from an open conversation to a set of private group chats the public can eavesdrop on.” It might be enough, if Musk truly doesn’t care about the economics, for Twitter to be a place where anything lawful goes and users who don’t like it can go elsewhere. But the realities of running a business are obviously different from those of traditional, government-owned public fora. If Musk wants to keep or grow Twitter’s user base, and maintain high engagement levels, there are myriad considerations he’ll need to account for.

Revenue. Twitter makes money by making users comfortable with using the site—and advertisers comfortable being associated with what users say. This is much like the traditional model of any newspaper. No reputable company would buy ads in a newspaper willing to publish everything lawful. These risks are much, much greater online. Newspapers carefully screen both writers before they’re hired and content before it’s published. Digital publishers generally can’t do likewise without ruining the user experience. Instead, users help a mixture of algorithms and human content moderators flag content potentially toxic to users and advertisers.
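To make that division of labor concrete, here is a minimal, hypothetical sketch in Python of a flag-and-triage pipeline. The classifier, thresholds, and routing rules are invented for illustration; no platform’s actual system is this simple.

```python
from dataclasses import dataclass

# Hypothetical triage: user reports plus an automated score route content to
# auto-removal, human review, or no action. All names and thresholds here
# are illustrative assumptions, not any platform's actual rules.

@dataclass
class Post:
    text: str
    user_reports: int = 0

def toxicity_score(post: Post) -> float:
    """Stand-in for an ML classifier; returns a score in [0.0, 1.0]."""
    flagged_terms = ("beheading", "dox")      # toy keyword list
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits + 0.1 * post.user_reports)

def triage(post: Post) -> str:
    score = toxicity_score(post)
    if score >= 0.9:
        return "auto-remove"    # high-confidence policy violation
    if score >= 0.5 or post.user_reports >= 3:
        return "human-review"   # ambiguous: needs a moderator's judgment
    return "no-action"

print(triage(Post("check out my vacation photos")))       # no-action
print(triage(Post("a beheading video", user_reports=5)))  # auto-remove
```

The point of the sketch is the hybrid structure: cheap automated signals decide the easy cases, and human judgment is reserved for the ambiguous middle.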

Even without going as far as Musk says he wants to, alternative “free speech” platforms like Gab and Parler have failed to attract any mainstream advertisers. By taking Twitter private, Musk could relieve pressure to maximize quarterly earnings. He might be willing to lose money, but the lenders financing roughly half the deal definitely aren’t. The interest payments on their loans could exceed Twitter’s 2021 earnings before interest, taxes, depreciation, and amortization. How will Twitter support itself?

Protected Speech That Musk Already Wants To Moderate

As Musk’s analysts examine whether the purchase is really worth doing, the key question they’ll face is just what it would mean to cut back on content moderation. Ultimately, Musk will find that the First Amendment just doesn’t offer the roadmap he thinks it does. Indeed, he’s already implicitly conceded that by saying he wants to moderate certain kinds of content in ways the First Amendment wouldn’t allow.

Spam. “If our twitter bid succeeds,” declared Musk in announcing his takeover plans, “we will defeat the spam bots or die trying!” The First Amendment, if he were using it as a guide for moderation, would largely thwart him.

Far from banning spam, as Musk proposes, the 2003 CAN-SPAM Act merely requires email senders to, most notably, include unsubscribe options, honor unsubscribe requests, and accurately label both subject and sender. Moreover, the law defines spam narrowly: “the commercial advertisement or promotion of a commercial product or service.” Why such a narrow approach?

Even unsolicited commercial messages are protected by the First Amendment so long as they’re truthful. But because truthful commercial speech receives only “intermediate scrutiny,” it’s easier for the government to justify regulating it. Thus, courts have also upheld the ability of public universities to block commercial solicitations.

But, as courts have noted, “the more general meaning” of “spam” “does not (1) imply anything about the veracity of the information contained in the email, (2) require that the entity sending it be properly identified or authenticated, or (3) require that the email, even if true, be commercial in character.” Check any spam folder and you’ll find plenty of messages that don’t obviously qualify as commercial speech, which the Supreme Court has defined as speech which does “no more than propose a commercial transaction.”

Some emails in your spam folder come from non-profits, political organizations, or other groups. Such non-commercial speech is fully protected by the First Amendment. Some messages you signed up for may inadvertently wind up in your spam filter; plaintiffs regularly sue when their emails get flagged as spam. When it’s private companies like ISPs and email providers making such judgments, the case is easy: the First Amendment broadly protects their exercise of editorial judgment. Challenges to public universities’ email filters have been brought by commercial spammers, so the courts have dodged deciding whether email servers constitute public fora. These courts have implied, however, that if such taxpayer-funded email servers were public fora, email filtering of non-commercial speech would have to be content- and viewpoint-neutral, which may be impossible.

Anonymity. After declaring his intention to “defeat the spam bots,” Musk added a second objective of his plan for Twitter: “And authenticate all real humans.” After an outpouring of concern, Musk qualified his position:

Authentication is important, but so is anonymity for many. A balance must be struck.

— Elon Musk (@elonmusk) May 1, 2022

Whatever “balance” Musk has in mind, the First Amendment doesn’t tell him how to strike it. Authentication might seem like a content- and viewpoint-neutral way to fight tweet-spam, but it implicates a well-established First Amendment right to anonymous and pseudonymous speech.

Fake accounts plague most social media sites, but they’re a bigger problem for Twitter since, unlike Facebook, it’s not built around existing offline connections, and Twitter doesn’t even try to require users to use their real names. A 2021 study estimated that “between 9% and 15% of active Twitter accounts are bots” controlled by software rather than individual humans. Bots can have a hugely disproportionate impact online. They’re more active than humans and can coordinate their behavior, as that study noted, to “manufacture fake grassroots political support, promote terrorist propaganda and recruitment, manipulate the stock market, and disseminate rumors and conspiracy theories.” Given Musk’s concerns about “cancel culture,” he should recognize online harassment, especially harassment targeting employers and intimate personal connections, as a way that lawful speech can be wielded against lawful speech.
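The “coordination” the study describes is, at bottom, a behavioral signal that software can look for. Below is a toy sketch, assuming a hypothetical log of (account, timestamp, text) tuples, of one such signal: many distinct accounts posting identical text within a short window. Real detectors combine far more features, such as posting cadence, account age, and follower graphs.

```python
from collections import defaultdict

def coordinated_accounts(posts, window_seconds=60, min_accounts=5):
    """Flag accounts that post identical text within a short time window.

    posts: iterable of (account_id, unix_timestamp, text) tuples.
    Returns the set of account_ids exhibiting the coordination signal.
    """
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))

    suspects = set()
    for entries in by_text.values():
        entries.sort()  # order each identical-text group by timestamp
        for i in range(len(entries)):
            j = i
            # widen the window until posts fall outside window_seconds
            while j < len(entries) and entries[j][0] - entries[i][0] <= window_seconds:
                j += 1
            batch = {acct for _, acct in entries[i:j]}
            if len(batch) >= min_accounts:
                suspects |= batch
    return suspects

# Example: five sockpuppets blasting the same text within a minute.
posts = [(f"bot{i}", 1_000 + i, "Buy $XYZ now!!!") for i in range(5)]
print(sorted(coordinated_accounts(posts)))  # all five accounts flagged
```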

When Musk talks about “authenticating” humans, it’s not clear what he means. Clearly, “authentication” means more than simply requiring captchas to make it harder for machines to create Twitter accounts; those have been shown to be defeatable by spambots. Surely, he doesn’t mean making real names publicly visible, as on Facebook. After all, pseudonymous publications have always been a part of American political discourse. Presumably, Musk means Twitter would, instead of merely requiring an email address, somehow verify and log the real identity behind each account. This isn’t really a “middle ground”: pseudonyms alone won’t protect vulnerable users from governments, Twitter employees, or anyone else who might be able to access Twitter’s logs. However well such logs are protected, the mere fact of collecting such information would necessarily chill speech by those concerned about being persecuted for their speech. Such authentication would clearly be unconstitutional if a government were to do it.

“Anonymity is a shield from the tyranny of the majority,” ruled the Supreme Court in McIntyre v. Ohio Elections Comm’n (1995). “It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society.” As one lower court put it, “the free exchange of ideas on the Internet is driven in large part by the ability of Internet users to communicate anonymously.”

We know how these principles apply to the Internet because Congress has already tried to require websites to “authenticate” users. The Child Online Protection Act (COPA) of 1998 required websites to age-verify users before they could access material that could be “harmful to minors.” In practice, this meant providing a credit card, which supposedly proved the user was likely an adult. Courts blocked the law and, after a decade of litigation, the U.S. Court of Appeals for the Eighth Circuit finally struck it down in 2008. The court held that “many users who are not willing to access information non-anonymously will be deterred from accessing the desired information.” The Supreme Court let that decision stand. The United Kingdom now plans to implement its own version of COPA, but First Amendment scholars broadly agree: age verification and user authentication are constitutional non-starters in the United States.

What kind of “balance” might the First Amendment allow Twitter to strike? Clearly, requiring all users to identify themselves wouldn’t pass muster. But suppose Twitter required authentication only for those users who exhibit spambot-like behavior—say, coordinating tweets with other accounts that behave like spambots. This would be different from COPA, but would it be constitutional? Probably not. Courts have explicitly recognized a right to send non-commercial spam (unsolicited messages). For example: “were the Federalist Papers just being published today via e-mail,” warned the Virginia Supreme Court in striking down a Virginia anti-spam law, “that transmission by Publius would violate the statute.”

Incitement. In his TED interview, Musk readily agreed with Anderson that “crying fire in a movie theater” “would be a crime.” No metaphor has done more to sow confusion about the First Amendment. It comes from the Supreme Court’s 1919 Schenck decision, which upheld the conviction of the head of the U.S. Socialist Party for distributing pamphlets criticizing the military draft. Advocating obstructing military recruiting, held the Court, constituted a “clear and present danger.” Justice Oliver Wendell Holmes mentioned “falsely shouting fire in a theatre” as a rhetorical flourish to drive the point home.

But Holmes revised his position just months later when he dissented in a similar case, Abrams v. United States. “[T]he best test of truth,” he wrote, “is the power of the thought to get itself accepted in the competition of the market.” That concept guides First Amendment decisions to this day—not _Schenck_’s vivid metaphor. Musk wants the open marketplace of ideas Holmes lauded in _Abrams_—yet also, somehow, _Schenck_’s much lower standard.

In Brandenburg v. Ohio (1969), the Court finally repudiated Schenck: the First Amendment does not “permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Thus, a Klansman’s openly racist speech and calls for a march on Washington were protected by the First Amendment. The Brandenburg standard has proven almost impossible to satisfy when speakers are separated from their listeners in both space and time. Even the Unabomber Manifesto wouldn’t qualify—which is why The New York Times and The Washington Post faced no legal liability when they agreed to publish the essay back in 1995 (to help law enforcement stop the serial mail-bomber).

Demands that Twitter and other social media remove “harmful” speech—such as COVID misinformation—frequently invoke Schenck. Indeed, while many expect Musk will reinstate Trump on Twitter, his embrace of Schenck suggests the opposite: Trump could easily have been convicted of incitement under _Schenck_’s “clear and present danger” standard.

Self-Harm. Musk’s confusion over incitement may also extend to its close cousin: speech encouraging, or about, self-harm. Like incitement, “speech integral to criminal conduct” isn’t constitutionally protected, but, also like incitement, courts have defined that term so narrowly that the vast majority of content that Twitter currently moderates under its suicide and self-harm policy is protected by the First Amendment.

William Francis Melchert-Dinkel, a veteran nurse with a suicide fetish, claimed to have encouraged dozens of strangers to kill themselves and to have succeeded at least five times. Using fake profiles, Melchert-Dinkel entered into fake suicide pacts (“i wish [we both] could die now while we are quietly in our homes tonite:)”), invoked his medical experience to advise hanging over other methods (“in 7 years ive never seen a failed hanging that is why i chose that”), and asked to watch his victims hang themselves. He was convicted of violating Minnesota’s assisted suicide law in two cases, but the Minnesota Supreme Court voided the statute’s prohibitions on “advis[ing]” and “encourag[ing]” suicide. Only for providing “step-by-step instructions” on hanging could Melchert-Dinkel ultimately be convicted.

In another case, the Massachusetts Supreme Judicial Court upheld the manslaughter conviction of Michelle Carter; “she did not merely encourage the victim,” her boyfriend, also age 17, “but coerced him to get back into the truck, causing his death” from carbon monoxide poisoning. Like Melchert-Dinkel, Carter provided specific direction on completing suicide, and, “knowing the victim was inside the truck and that the water pump was operating — … she could hear the sound of the pump and the victim’s coughing — [she] took no steps to save him.”

Such cases are the tiniest tip of a very large iceberg of self-harm content. With nearly one in six teens intentionally hurting themselves annually, researchers found 1.2 million Instagram posts in 2018 containing “one of five popular hashtags related to self-injury: #cutting, #selfharm, #selfharmmm, #hatemyself and #selfharmawareness.” More troubling, the rate of such posts nearly doubled across that year. Unlike suicide or assisted suicide, self-harm, even by teenagers, isn’t illegal, so even supplying direct instructions about how to do it would be constitutionally protected speech. With the possible exception of direct user-to-user instructions about suicide, the First Amendment would require a traditional public forum to allow all this speech. It wouldn’t even allow Twitter to restrict access to self-harm content to adults—for the same reasons COPA’s age-gating requirement for “harmful-to-minors” content was unconstitutional.

Trade-Offs in Moderating Other Forms of Constitutionally Protected Content

So it’s clear that Musk doesn’t literally mean Twitter users should be able to “speak freely within the bounds of the law.” He clearly wants to restrict some speech in ways that the government could not in a traditional public forum. His invocation of the First Amendment likely refers primarily to moderation of speech considered by some to be harmful—which the government has very limited authority to regulate. Such speech presents one of the most challenging content moderation issues: how a business should balance a desire for free discourse with the need to foster the environment that the greatest number of people will want to use for discourse. That has to matter to Musk, however much money he’s willing to lose supporting a Twitter that alienates advertisers.

Hateful & Offensive Speech. Two leading “free speech” networks moderate, or even ban, hateful or otherwise offensive speech. “GETTR defends free speech,” the company said in January after banning former Blaze TV host Jon Miller, “but there is no room for racial slurs on our platform.” Likewise, Gab bans “doxing,” the exposure of someone’s private information with the intent to encourage others to harass them. These policies clearly aren’t consistent with the First Amendment: hate speech is fully protected by the First Amendment, and so is most speech that might colloquially be considered “harassment” or “bullying.”

In Texas v. Johnson (1989), the Supreme Court struck down a ban on flag burning: “if there is a bedrock principle underlying the First Amendment, it is simply that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.” In Matal v. Tam (2017), the Supreme Court reaffirmed this principle and struck down a prohibition on offensive trademark registrations: “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express the thought that we hate.”

Most famously, in 1978, the American Nazi Party won the right to march down the streets of Skokie, Illinois, a majority-Jewish town where ten percent of the population had survived the Holocaust. The town had refused to issue a permit to march. Displaying the swastika, Skokie’s lawyers argued, amounted to “fighting words”—which the Supreme Court had ruled, in 1942, could be forbidden if they had a “direct tendency to cause acts of violence by the persons to whom, individually, the remark is addressed.” The Illinois Supreme Court disagreed: “The display of the swastika, as offensive to the principles of a free nation as the memories it recalls may be, is symbolic political speech intended to convey to the public the beliefs of those who display it”—not “fighting words.” Even the revulsion of “the survivors of the Nazi persecutions, tormented by their recollections … does not justify enjoining defendants’ speech.”

Protection of “freedom for the thought we hate” in the literal town square is sacrosanct. The American Civil Liberties Union lawyers who defended the Nazis’ right to march in Skokie were Jews as passionately committed to the First Amendment as was Justice Holmes (post-_Schenck_). But they certainly wouldn’t have insisted the Nazis be invited to join in a Jewish community day parade. Indeed, the Court has since upheld the right of parade organizers to exclude messages they find abhorrent.

Does Musk really intend Twitter to host Nazis and white supremacists? Perhaps. There are, after all, principled reasons for not banning speech, even in a private forum, just because it is hateful. But there are unavoidable trade-offs. Musk will have to decide what balance will optimize user engagement and keep advertisers (and those financing his purchase) satisfied. It’s unlikely that those lines will be drawn entirely consistent with the First Amendment; at most, it can provide a very general guide.

Harassment & Threats. Often, users are banned by social media platforms for “threatening behavior” or “targeted abuse” (e.g., harassment, doxing). The first category may be easier to apply, but even then, a true public forum would be sharply limited in which threats it could restrict. “True threats,” explained the Court in Virginia v. Black (2003), “encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals.” But courts split on whether the First Amendment requires that a speaker have the subjective intent to threaten the target, or if it suffices that a reasonable recipient would have felt threatened. Maximal protection for free speech means a subjective requirement, lest the law punish protected speech merely because it might be interpreted as a threat. But in most cases, it would be difficult—if not impossible—to establish subjective intent without the kind of access to witnesses and testimony courts have. These are difficult enough issues even for courts; content moderators will likely find it impossible to adhere strictly, or perhaps even approximately, to First Amendment standards.

Targeted abuse and harassment policies present even thornier issues; what is (or should be) prohibited in this area remains among the most contentious aspects of content moderation. While social media sites vary in how they draw lines, all the major sites “[go] far beyond,” as Musk put it, what the First Amendment would permit a public forum to proscribe.

Mere offensiveness does not suffice to justify restricting speech as harassment; such content-based regulation is generally unconstitutional. Many courts have upheld harassment laws insofar as they target not speech but conduct, such as placing repeated telephone calls to a person in the middle of the night or physically stalking someone. Some scholars argue instead that the consistent principle across cases is that proscribable harassment involves an unwanted physical intrusion into a listener’s private space (whether their home or a physical radius around the person) for the purposes of unwanted one-on-one communication. Either way, neatly and consistently applying legal standards of harassment to content moderation would be no small lift.

Some lines are clear. Ranting about a group hatefully is not itself harassment, while sending repeated unwanted direct messages to an individual user might well be. But Twitter isn’t the telephone network. Line-drawing is more difficult when speech is merely about a person, or occurs in the context of a public, multi-party discussion. Is it harassment to be the “reply guy” who always has to have the last word on everything? What about tagging a person in a tweet about them, or even simply mentioning them by name? What if tweets about another user are filled with pornography or violent imagery? First Amendment standards protect similar real-world speech, but how many users want to be party to such conversations?

Again, Musk may well want to err on the side of more permissiveness when it comes to moderation of “targeted abuse” or “harassment.” We all want words to keep their power to motivate; that remains their most important function. As the Supreme Court said in 1949: “free speech… may indeed best serve its high purpose when it induces a condition of unrest … or even stirs people to anger. Speech is often provocative and challenging. It may strike at prejudices and preconceptions and have profound unsettling effects as it presses for the acceptance of an idea.”

But Musk’s goal is ultimately, in part, to attract users and keep them engaged. To do that, Twitter will have to moderate some content that the First Amendment would not allow the government to punish. Content moderators have long struggled with how to balance these competing interests. The only certainty is that this is, and will continue to be, an extremely difficult tightrope to walk—especially for Musk.

Obscenity & Pornography. Twitter already allows pornography involving consenting adults. Yet even this is more complicated than simply following the First Amendment. On the one hand, child sexual abuse material (CSAM), like obscenity, falls entirely outside the First Amendment’s protection. All social media sites ban CSAM (and all mainstream sites proactively filter for, and block, it). On the other hand, nonconsensual pornography involving adults isn’t obscene, and therefore is protected by the First Amendment. Some courts have nonetheless upheld state “revenge porn” laws, but those laws are actually much narrower than Twitter’s flat ban (“You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.”).

Critical to the Vermont Supreme Court’s decision to uphold the state’s revenge porn law were two features that made the law “narrowly tailored.” First, it required intent to “harm, harass, intimidate, threaten, or coerce the person depicted.” Such an intent standard is a common limiting feature of speech restrictions upheld by courts. Yet none of Twitter’s policies turn on intent. Again, it would be impossible to meaningfully apply intent-based standards at the scale of the Internet and outside the established procedures of courtrooms. Intent is a complex inquiry unto itself; content moderators would find it nearly impossible to make these decisions with meaningful accuracy. Second, the Vermont law excluded “[d]isclosures of materials that constitute a matter of public concern,” and those “made in the public interest.” Twitter does have a public-interest exception to its policies, yet, Twitter notes:

At present, we limit exceptions to one critical type of public-interest content—Tweets from elected and government officials—given the significant public interest in knowing and being able to discuss their actions and statements.

It’s unlikely that Twitter would actually allow public officials to post pornographic images of others without consent today, simply because they were public officials. But to “follow the First Amendment,” Twitter would have to go much further than this: it would have to allow anyone to post such images, in the name of the “public interest.” Is that really what Musk means?

Gratuitous Gore. Twitter bans depictions of “dismembered or mutilated humans; charred or burned human remains; exposed internal organs or bones; and animal torture or killing.” All of these are protected speech. Violence is not obscenity, the Supreme Court ruled in Brown v. Entertainment Merchants Association (2011), and neither is animal cruelty, ruled the Court in U.S. v. Stevens (2010). Thus, the Court struck down a California law barring the sale of “violent” video games to minors and requiring that they be labeled “18,” and a federal law criminalizing “crush videos” and other depictions of the torture and killing of animals.

The Illusion of Constitutionalizing Content Moderation

The problem isn’t just that the “bounds of the law” aren’t where Musk may think they are. For many kinds of speech, identifying those bounds and applying them to particular facts is a far more complicated task than any social media site is really capable of.

It’s not as simple as whether “the First Amendment protects” certain kinds of speech. Only three things we’ve discussed fall outside the protection of the First Amendment altogether: CSAM, non-expressive conduct, and speech integral to criminal conduct. In other cases, speech may be protected in some circumstances, and unprotected in others.

Musk is far from the only person who thinks the First Amendment can provide clear, easy answers to content moderation questions. But invoking First Amendment concepts without doing the kind of careful analysis courts do in applying complex legal doctrines to facts means hiding the ball: it conceals subjective value judgments behind an illusion of faux-constitutional objectivity.

This doesn’t mean Twitter couldn’t improve how it makes content moderation decisions, or that it couldn’t come closer to doing something like what courts do in sussing out the “bounds of the law.” Musk would want to start by considering Facebook’s initial efforts to create a quasi-judicial review of the company’s most controversial, or precedent-setting, moderation decisions. In 2018, Facebook funded the creation of an independent Oversight Board, which appointed a diverse panel of stakeholders to assess complaints. The Board has issued 23 decisions in little more than a year, including one on Facebook’s suspension of Donald Trump for posts he made during the January 6 storming of the Capitol, expressing support for the rioters.

Trump’s lawyers argued the Board should “defer to the legal principles of the nation state in which the leader is, or was governing.” The Board responded that its “decisions do not concern the human rights obligations of states or application of national laws, but focus on Facebook’s content policies, its values and its human rights responsibilities as a business.” The Oversight Board’s charter makes this point very clear. Twitter could, of course, tie its policies to the First Amendment and create its own oversight board, chartered with enforcing the company’s adherence to First Amendment principles. But by now, it should be clear how much more complicated that would be than it might seem. While constitutional protection of speech is clearly established in some areas, new law is constantly being created on the margins—by applying complex legal standards to a never-ending kaleidoscope of new fact patterns. The complexities of these cases keep many lawyers busy for years; it would be naïve to presume that an extra-judicial board will be able to meaningfully implement First Amendment standards.

At a minimum, any serious attempt at constitutionalizing content moderation would require hiring vastly more humans to process complaints, make decisions, and issue meaningful reports—even if Twitter did less content moderation overall. And Twitter’s oversight board would have to be composed of bona fide First Amendment experts. Even then, the board’s decisions might later be undercut by actual court decisions involving similar facts. This doesn’t mean that attempting to hew to the First Amendment is a bad idea; in some areas, it might make sense. But it will be far more difficult than Musk imagines.

In Part II, we’ll ask what principles, if not the First Amendment, should guide content moderation, and what Musk could do to make Twitter more of a “de facto town square.”

Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.

Filed Under: 1st amendment, business models, community, content moderation, harassment, incitement, protected speech, public square, self-harm, spam
Companies: twitter

Florida Presents Its Laughable Appeal For Its Unconstitutional Social Media Content Moderation Law

from the disney-exempt! dept

Now that Texas has signed its unconstitutional social media content moderation bill into law, the action shifts back to Florida’s similar law that was already declared unconstitutional in an easy decision by the district court. Florida has filed its opening brief in its appeal before the 11th Circuit and… it’s bad. I mean, really, really bad. Embarrassingly bad. I mean, this isn’t a huge surprise, since their arguments in the district court were also bad. But now that they’ve had a judge smack them down fairly completely, including in terribly embarrassing oral arguments, you’d think that maybe someone would think to try to lawyer better? Though, I guess, you play with the hand you’re dealt, and Florida gave its lawyers an unconstitutionally bad hand.

Still, I’d expect at least marginally better lawyering than the kind commonly found on Twitter or in our comments. It starts out bad and gets worse. First off, the brief claims it has been proven that social media platforms “arbitrarily discriminate against disfavored speakers,” and it uses a really bad example.

The record in this appeal leaves no question that social media platforms arbitrarily discriminate against disfavored speakers, including speakers in Florida. The record is replete with unrebutted examples of platforms suppressing user content for arbitrary reasons. E.g., App.891 (Doc.106-1 at 802) (Facebook censoring The Babylon Bee, a Florida-based media company, for obviously satirical content). When caught, platforms frequently cast these decisions off as “mistakes.” E.g., App.1693 (Doc.106-5 at 201). But systematic examinations show that platforms apply their content standards differently to content and speakers that express different views but are otherwise similarly situated, all while publicly claiming to apply those standards fairly. See App.999, 1007, 1183 (Doc.106-2 at 14, 22; Doc.106-3 at 17). There are many examples in the Appendix, and even that list is hardly exhaustive.

Except that at scale, tons of mistakes are made, so yes, many of these are mistakes. Others may not be, but it is up to the platform to determine who breaks its rules. Much more importantly, it is entirely within the rights of private companies to moderate as they see fit and to interpret their own terms of service. So even if there were proof of “discrimination” here (and there is not), it’s not against the law.

From there it just gets silly:

Undoubtedly, social media is “the modern public square.” Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017). In S.B. 7072 (the “Act”)…

Generally speaking, citing Packingham as support for a plan to force private actors to host speech shows you have totally misunderstood Packingham and are either too ignorant or too disingenuous to take seriously. Packingham is about preventing the government from passing laws that strip people of full internet access. It does not mean that any private company has to provide access to anyone.

The argument that Florida’s law is not pre-empted by Section 230 is nonsense. Section 230 is clear that no state law can contradict it by imposing liability on private website operators (or users) for the actions of their users. But that’s exactly what Florida’s law does.

As the District Court tacitly acknowledged, the only part of that statute that could possibly preempt the Act is Section 230(c)(2). But that provision serves only to absolve platforms of liability when they remove in good faith content that is “objectionable” within the meaning of Section 230(c)(2). That leaves myriad ways in which the Act can apply consistently with Section 230(c)(2). For example, the Act and Section 230 can peacefully coexist when a social media platform fails to act in “good faith,” when the Act does not regulate the removal or restriction of content, or when a platform removes unobjectionable material.

This is disingenuous to the point of being downright wrong, and it completely ignores the interplay between 230(c)(1) and 230(c)(2) and, notably, the fact that nearly every lawsuit regarding moderation has said that (c)(1) protects all moderation choices, whether or not they are in “good faith.” And Section 230 clearly also pre-empts any state attempt to impose liability for moderation that is protected by (c)(1). Florida’s lawyers just ignore this. Which is kind of stunning. It’s not like the lawyers for NetChoice and CCIA are going to ignore it too. And they can point to dozens upon dozens of cases that prove Florida wrong.

The 1st Amendment argument is even worse:

Plaintiffs are also unlikely to succeed on their claim that the Act violates the First Amendment on its face. Most of the Act is directed at ensuring that social media platforms host content in a transparent fashion. For example, the Act requires non-controversial, factual disclosures, and disclosure requirements have long coexisted with the First Amendment. Even the portions of the Act that regulate the manner in which platforms host speech are consistent with the First Amendment. When properly analyzed separately from the Act’s other provisions (and from the extraneous legislative statements on which the District Court primarily relied), these requirements parallel other hosting regulations that the Supreme Court has held are consistent with the First Amendment. E.g., Rumsfeld v. FAIR, Inc., 547 U.S. 47, 63 (2006). The Act’s hosting regulations prevent the platforms from silencing others. They leave platforms free to speak for themselves, create no risk that a user’s speech will be mistakenly attributed to the platforms, and intrude on no unified speech product of any platform. These requirements are little different from traditional regulation of common carriers that has long been thought consistent with the First Amendment.

The reliance on Rumsfeld v. FAIR is quite silly, and the few people who have brought it up tend to look quite silly as well. This is not even remotely similar to the Rumsfeld situation, which was very narrow and very specific, and cannot be stretched to cover an entire social media platform. And to just toss in the idea that social media is a common carrier, when these platforms do not meet (at all) the classification of a common carrier and have never been deemed one, is just boldly stupid.

There’s more, of course, but those are the basics. You never know how a court is going to decide, and perhaps Florida will draw a confused and persuadable judge (there are, unfortunately, a few of those out there). But this is a really weak argument and seems unlikely to stand.

Filed Under: 1st amendment, common carrier, content moderation, florida, free speech, packingham, pruneyard, public square, section 230, social media

Conservatives Want Common Carriage. They're Not Going to Like It.

from the that's-not-what-common-carrier-means dept

Tue, Jun 8th 2021 10:43am - Kir Nuthi

From calls to break up Big Tech to Florida’s latest anti-tech law, one thing is clear: America’s lawmakers and bureaucrats are looking to regulate the online world. Building on the momentum of the Facebook Oversight Board’s recent ruling on President Trump and Justice Thomas’s concurrence in Biden v. Knight Institute, alternative proposals like common carriage are gaining traction among conservative lawmakers looking for new regulatory solutions.

More and more conservatives critique social media by arguing that websites like Facebook, Twitter, and Google are effectively the modern public square, and therefore shouldn’t have moderation practices built to balance online safety and free speech. So it’s only natural that a proposal like common carriage gained traction during the Trump presidency and has not lost momentum since. Just look at Sen. Hagerty’s 21st Century FREE Speech Act.

Some conservative critics think treating these sites as common carriers ticks many of their boxes: less content moderation, less alleged anti-conservative bias, and more regulation of America’s tech companies. But they’re wrong. Not only is it an unconstitutional solution, but its design to work around First Amendment jurisprudence will almost certainly make the internet worse, not better, for conservatives. Common carriage would inch the internet toward an online ecosystem devoid of family-friendly options and teeming with the worst humanity can offer, including the very content conservatives hate: pornography, indecency, and profanity.

An attempt at common carriage regulation is unlikely to succeed in court: social media simply doesn’t fit the criteria necessary for this centuries-old designation. Derived from common law, common carriage was a way for the entire public to receive and transport goods and services deemed essential. When America started its own classification of common carriage in the 1800s, the principle of nondiscrimination was at the forefront of the discussion. American courts identify industries and businesses as common carriers if they do not distinguish between customers or decide what they will and will not carry.

Nondiscrimination is a central feature of traditional common carriers, but it is not a feature of social media. Unlike the railroads and communications companies of the Gilded Age, social media relies on the ability to contextualize and discriminate between different pieces of content to provide useful information to users. Content moderation is at the center of that, giving websites the ability to balance free expression and online safety, to maximize both, and to make the internet somewhere we want to spend time. Concerned parents shouldn’t have to wade through expletives, references to violence, and sexual content just to connect with their friends and family or to protect their kids online.
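To make that contrast concrete, here is a toy sketch of the editorial filtering a social feed performs and a common carrier, by definition, could not. Everything in it (the fields, rules, and thresholds) is hypothetical, not any platform’s actual system:

```python
# Toy illustration of a feed that discriminates between posts.
# All fields, rules, and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reports: int      # user reports for abusive content
    relevance: float  # predicted interest for this viewer, 0..1

def feed(posts: list[Post], max_reports: int = 3) -> list[Post]:
    # A common carrier would have to carry everything, first come,
    # first served. A social feed instead discriminates: it drops
    # heavily reported posts and ranks the rest by relevance.
    allowed = [p for p in posts if p.reports < max_reports]
    return sorted(allowed, key=lambda p: p.relevance, reverse=True)

posts = [
    Post("harassing rant", reports=12, relevance=0.9),
    Post("family photos", reports=0, relevance=0.7),
    Post("job posting", reports=0, relevance=0.4),
]

for p in feed(posts):
    print(p.text)  # prints the photos, then the job posting; the rant is gone
```

That filtering and ranking is the product. Strip it out, as common carriage would, and what remains is the unfiltered firehose described below.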

The ability to moderate is a feature, not a bug, of social media. This is not a matter of transporting goods and services from California to New York; in fact, it’s not really a matter of transporting anything. Rather than transporting data like telecommunications businesses, social media hosts content. These sites offer a space online where content is posted and preserved in perpetuity as part of internet history, more like a museum than a railroad. Ensuring a curated collection of high-quality posts is therefore a key part of their business model, rather than simply serving as a conduit of communication.

This is a matter of private forums and businesses with constitutional protections from government action under the Bill of Rights. Social media sites, like any private businesses, have First Amendment rights that prevent the government from coming in and forcing them to host speech they disagree with.

Placing social media under common carriage regulations would violate their First Amendment rights and would make content moderation effectively impossible, as the incentive to maintain online safety disappears. And taking away websites’ ability to moderate what gets posted could easily leave the internet rampant with unwanted, offensive, and disgusting content, rendering many services unsafe for use at work or with family. That would only run counter to the founding values and family principles that conservatives seek to protect.

By doing their utmost to ensure websites aren’t allowed to remove lawful-but-awful content, conservatives may feel like they’re fighting to defend the principles of free speech. Instead, they are stifling the free speech rights of media companies and exposing everyday Americans who just want to connect with family, friends, and coworkers to the worst aspects of the internet.

Conservatives, like all Americans, have the right to voice their concerns about the decisions made by social media platforms, and they should do so. But they shouldn’t mistakenly support actions that would put American families and kids in harm’s way online and that would undermine free expression and free enterprise. No matter how it’s sliced or diced, common carriage classification would force social media, and the internet writ large, to become a cesspool of filth, devoid of either conservative or family-friendly values. Treating social media like common carriers could lead to a staggering increase in exactly the content conservatives actively work to mitigate: online harassment, the proliferation of pornography, and other explicit material that undermines the conservative commitment to family values.

Social media relies on the ability to discriminate between user-generated posts to succeed; these sites deliberately do not treat themselves as neutral transporters of information or services the way a common carrier would. Given their longstanding practice of content moderation and their lack of a natural monopoly, courts are simply unlikely to categorize social media platforms as common carriers. And we should be wary of categorizing websites serving users of all ages as common carriers, lest they fill up with offensive content that even adults don’t want to engage with.

By turning to common carriage in their crusade against alleged anti-conservative bias, conservatives might not like the result: an internet that buries the best it offers while proliferating the worst.

Kir Nuthi is the Public Affairs Manager at NetChoice and a Contributor at Young Voices.

Filed Under: common carriage, conservatives, content moderation, family values, public square
Companies: facebook, google, twitter

Court Tosses Dennis Prager's Silly Lawsuit Against YouTube, Refuses His Request For Preliminary Injunction

from the insufficient-everything dept

You will recall that conservative commentator Dennis Prager sued YouTube late last year because he didn’t like how the site applied its “restricted mode” to several of his Prager University videos. The whole lawsuit was a mess to begin with, resting on Prager’s claims that YouTube violated federal and state laws by silencing his speech as a conservative and by falsely advertising YouTube as a place for free and open speech. At the same time that YouTube asked the court to toss this canard, Prager sought a preliminary injunction to keep YouTube from operating its own site as it saw fit. In support of its motion to dismiss the suit, YouTube’s Alice Wu offered the court a declaration that more or less showed every single one of Prager’s claims, especially his central claim of censorship of conservatives, to be as wrong as it could possibly be.

Now, mere weeks later, the court has agreed, penning a full-throated dismissal order that essentially takes Prager’s legal team to task for failing to make anything resembling a valid claim. We’ll start with the court’s response to Prager’s First Amendment claims, which rest on the assertion that YouTube is somehow a public forum as a matter of law, rather than a privately run website.

In their motion to dismiss, Defendants argue that Plaintiff’s complaint should be dismissed because (1) the Communications Decency Act (“CDA”), 47 U.S.C. § 230(c), bars all of Plaintiff’s causes of action except Plaintiff’s First Amendment claim, Mot. at 8–13; (2) the First Amendment bars all of Plaintiff’s causes of action, id. at 13–15; and (3) Plaintiff’s complaint fails to sufficiently plead any causes of action. Id. at 15–24. The Court finds that Plaintiff’s complaint should be dismissed for failure to state any federal claims, and therefore declines to address Defendants’ other arguments for dismissal. The Court first addresses Plaintiff’s federal causes of action, and then addresses together Plaintiff’s state law claims.

And further, on the matter of whether YouTube is a public forum under the jurisdiction of the First Amendment.

Plaintiff does not point to any persuasive authority to support the notion that Defendants, by creating a “video-sharing website” and subsequently restricting access to certain videos that are uploaded on that website, Compl. ¶¶ 35, 41–46, have somehow engaged in one of the “very few” functions that were traditionally “exclusively reserved to the State.”

That “very few” functions standard is backed by caselaw, and the court points out that none of it applies to an entity like YouTube or to the services it provides. The court therefore says this is not a valid claim. It then moves on to Prager’s claims that YouTube violated the Lanham Act by falsely advertising the site as a platform for open and diverse speech.

Although the section of Plaintiff’s complaint dedicated to the Lanham Act does not identify any specific representations made by Defendants, see Compl. ¶¶ 115–19, Plaintiff’s opposition to Defendants’ motion to dismiss points to a handful of discrete alleged instances of false advertising by Defendants. Opp. at 24. In particular, Plaintiff identifies (1) YouTube’s suggestion that some of Plaintiff’s videos are “inappropriate”; (2) YouTube’s policies and guidelines for regulating video content; (3) YouTube’s statement that “voices matter” and that YouTube is “committed to fostering a community where everyone’s voice can be heard”; (4) YouTube’s statement on its “Official Blog” that YouTube’s “mission” is to “give people a voice” in a “place to express yourself” and in a “community where everyone’s voice can be heard,” and that YouTube is “one of the largest and most diverse collections of self-expression in history” that gives “people opportunities to share their voice and talent no matter where they are from or what their age or point of view”; and (5) Defendants’ representations in the terms of the agreements between Plaintiff and Defendants that Defendants seek to “help you grow,” “discover what works best for you,” and “giv[e] you tools, insights and best practices for using your voice and videos.” Id. (citing Compl. ¶¶ 3, 11, 14, 28, 104, 112). The Court agrees with Defendants that Plaintiff has failed to allege sufficient facts to support a Lanham Act false advertising claim based on any of these representations.

The ruling goes on to dismantle each of the specific claims Prager eventually made in opposition to YouTube’s motion to dismiss. Frankly, it’s a pretty thorough drubbing of whoever put together Prager’s legal documents and claims.

Finally, after all of that, the court summarily dismisses Prager’s claims under California state law because all of his federal claims have already been dismissed earlier in the order. When a plaintiff in a case like this completely fails on his or her federal claims, the federal court typically declines to consider the state claims, and the court explains that this is what it is doing here. It then refuses to issue the preliminary injunction, making Prager the loser on every request he made before the court.

For the foregoing reasons, the Court GRANTS Defendants’ motion to dismiss Plaintiff’s federal causes of action with leave to amend, DISMISSES Plaintiff’s state law claims with leave to amend, and DENIES Plaintiff’s motion for a preliminary injunction without prejudice. Should Plaintiff elect to file an amended complaint curing the deficiencies identified herein, Plaintiff shall do so within thirty days of this Order. Failure to meet this thirty-day deadline or failure to cure the deficiencies identified herein will result in a dismissal with prejudice of the deficient claims. Plaintiff may not add new causes of action or parties without leave of the Court or stipulation of the parties pursuant to Federal Rule of Civil Procedure 15. IT IS SO ORDERED.

Now, Prager can write up a better lawsuit and try all of this again within 30 days if he chooses, but I wouldn’t recommend it. There is no YouTube liberal conspiracy against Prager and his conservative ilk. His facts are wrong, his insinuations about whether liberals or conservatives more often have their videos placed in restricted mode are untrue, and if he insists on wasting the court’s time with any of this any further, then I will insist that we all agree that he’s a legal dunce.

IT IS SO ORDERED.

Filed Under: cda 230, dennis prager, discrimination, first amendment, free speech, intermediary liability, public square, restricted, youtube
Companies: google, prageru, youtube

Supreme Court Says You Can't Ban People From The Internet, No Matter What They've Done

from the good-to-see dept

Going all the way back to 2002 (and many times after that), we’ve talked about courts struggling with whether or not it’s okay to ban people from the internet after they’ve committed a crime. The question comes up in many different cases, but most prevalently in cases involving child predators. While courts have struggled with this issue for years, it’s only now that the Supreme Court has weighed in and said you cannot ban someone from the internet, even if they’re convicted of horrific crimes (in this case, sex crimes against a minor). The case is Packingham v. North Carolina, and the Supreme Court had to determine whether it violated the First Amendment’s free speech clause and the Fourteenth Amendment’s due process clause to make it a felony for convicted sex offenders to visit social media sites like Facebook and Twitter, as a North Carolina law did.

In this case, Lester Packingham was convicted of a sex offense back in 2002. In 2010, he went on Facebook to brag about getting a traffic ticket dismissed, using his middle name as his last name. A local police officer saw the post and connected the dots to figure out that the poster “J.R. Gerard” was actually Lester Gerard Packingham, and charged him with violating the NC law barring sex offenders from using social media. Various state courts went back and forth, with the NC Supreme Court eventually holding that the law was “constitutional in all respects.” The Supreme Court of the United States, however, did not agree.

The ruling is interesting on a number of levels. It cites, pretty directly, EFF’s amicus brief, noting just how important and central to our lives sites like Facebook have become.

While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace, the “vast democratic forums of the Internet” in general, Reno v. American Civil Liberties Union, 521 U. S. 844, 868 (1997), and social media in particular. Seven in ten American adults use at least one Internet social networking service. Brief for Electronic Frontier Foundation et al. as Amici Curiae 5–6. One of the most popular of these sites is Facebook, the site used by petitioner leading to his conviction in this case. According to sources cited to the Court in this case, Facebook has 1.79 billion active users. Id., at 6. This is about three times the population of North America.

Social media offers “relatively unlimited, low-cost capacity for communication of all kinds.” Reno, supra, at 870. On Facebook, for example, users can debate religion and politics with their friends and neighbors or share vacation photos. On LinkedIn, users can look for work, advertise for employees, or review tips on entrepreneurship. And on Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. Indeed, Governors in all 50 States and almost every Member of Congress have set up accounts for this purpose. See Brief for Electronic Frontier Foundation 15–16. In short, social media users employ these websites to engage in a wide array of protected First Amendment activity on topics “as diverse as human thought.”

The opinion, written by Justice Kennedy, recognizes that the internet is a vast and changing place, and notes that the Court does need to proceed with caution, but that the caution must run in the direction of protecting constitutional rights:

This case is one of the first this Court has taken to address the relationship between the First Amendment and the modern Internet. As a result, the Court must exercise extreme caution before suggesting that the First Amendment provides scant protection for access to vast networks in that medium.

And then, the opinion dives right in and says that the law is obviously a violation of the First Amendment for not being “narrowly tailored.” Again, while there are a few limited exceptions to the First Amendment, they are very narrow, and the Supreme Court has shown little to no interest in expanding them:

Even making the assumption that the statute is content neutral and thus subject to intermediate scrutiny, the provision cannot stand. In order to survive intermediate scrutiny, a law must be “narrowly tailored to serve a significant governmental interest.” … In other words, the law must not “burden substantially more speech than is necessary to further the government’s legitimate interests.” …

And this law is not, at all, narrowly tailored. Once again, SCOTUS leans heavily on EFF’s amicus brief to point out how overly broad this NC law is:

It is necessary to make two assumptions to resolve this case. First, given the broad wording of the North Carolina statute at issue, it might well bar access not only to commonplace social media websites but also to websites as varied as Amazon.com, Washingtonpost.com, and Webmd.com. See post, at 6–9; see also Brief for Electronic Frontier Foundation 24–27; Brief for Cato Institute et al. as Amici Curiae 10–12, and n. 6. The Court need not decide the precise scope of the statute. It is enough to assume that the law applies (as the State concedes it does) to social networking sites “as commonly understood,” that is, websites like Facebook, LinkedIn, and Twitter….

From there, the opinion notes that a state clearly could bar more specific conduct with narrowly tailored laws that do not broadly target speech:

Second, this opinion should not be interpreted as barring a State from enacting more specific laws than the one at issue. Specific criminal acts are not protected speech even if speech is the means for their commission…. Though the issue is not before the Court, it can be assumed that the First Amendment permits a State to enact specific, narrowly tailored laws that prohibit a sex offender from engaging in conduct that often presages a sexual crime, like contacting a minor or using a website to gather information about a minor.

But this law obviously goes way beyond that, and the Court is troubled by this, calling it “unprecedented in the scope of First Amendment speech it burdens.”

Even with these assumptions about the scope of the law and the State’s interest, the statute here enacts a prohibition unprecedented in the scope of First Amendment speech it burdens. Social media allows users to gain access to information and communicate with one another about it on any subject that might come to mind…. By prohibiting sex offenders from using those websites, North Carolina with one broad stroke bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge. These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to “become a town crier with a voice that resonates farther than it could from any soapbox.”…

In sum, to foreclose access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights. It is unsettling to suggest that only a limited set of websites can be used even by persons who have completed their sentences. Even convicted criminals, and in some instances especially convicted criminals, might receive legitimate benefits from these means for access to the world of ideas, in particular if they seek to reform and to pursue lawful and rewarding lives.

The above part is the key part of this ruling, and I fully expect it to be cited repeatedly in future cases. It’s the Supreme Court declaring, quite clearly, that the ability to use the internet is vital to being a part of society today, and thus there’s a fundamental First Amendment right to be able to do so.

Three Justices — Alito, Roberts and Thomas — concur with the overall opinion, but do take some issue with the expansive nature of Kennedy’s opinion, suggesting it goes too far. In the concurrence, written by Alito, they note:

I cannot join the opinion of the Court, however, because of its undisciplined dicta. The Court is unable to resist musings that seem to equate the entirety of the internet with public streets and parks…. And this language is bound to be interpreted by some to mean that the States are largely powerless to restrict even the most dangerous sexual predators from visiting any internet sites, including, for example, teenage dating sites and sites designed to permit minors to discuss personal problems with their peers. I am troubled by the implications of the Court’s unnecessary rhetoric.

I don’t see how they can read the majority opinion to say that. Kennedy’s opinion makes it quite clear that states can restrict such things with laws narrowly targeted at conduct that “often presages a sexual crime.” Either way, I get the feeling that, despite these concerns, this case will be cited in useful ways to protect free speech in the future…

Filed Under: first amendment, free speech, internet, lester packingham, north carolina, packingham, public square, scotus, supreme court