hb 20 – Techdirt

Court: No, You Can’t Sue Facebook Claiming It Takes Down More Pro-Palestinian Speech Than Pro-Israel Speech

from the that's-not-how-any-of-this-works dept

The Supreme Court is about to review Texas’ HB 20 law, which (among other unworkable things) says that websites cannot moderate based on “viewpoint.” Of course, websites don’t moderate based on viewpoint, but rather based on whether or not they think you’ve violated their rules/terms of service. Should the law be allowed to go into effect, it’s not at all clear how websites would go about complying with it, but you can bet there would be a shit ton of very, very stupid lawsuits.

Prof. Eric Goldman alerts us to just the sort of lawsuit that would likely be common, even though this one was filed unrelated to Texas’ law. Amro Elansari* filed a pro se lawsuit against Meta, claiming that the company moderates more pro-Palestinian content and less pro-Israel content (this lawsuit was filed prior to October 7th, and prior to various news reports suggesting this underlying claim may be accurate).

However, as the court rightly points out to Elansari… so what? The district court dismissed the case, and Elansari appealed to the 3rd Circuit, which easily upheld the lower court’s decision. The court notes that none of Elansari’s own content was moderated, and that he’s instead bringing a general anti-discrimination claim. But it still doesn’t make any sense. As the court (rightly) notes, there’s no “right to information.”

Instead, Elansari relies on Title II, which bars certain forms of discrimination but does not create a right to information. Moreover, this statute cannot be understood as granting him a right to relief because he does not allege that he was personally denied the “full and equal enjoyment” of Meta’s service or that he could not access the same content as any other Meta user, let alone that he could not do so on the basis of his race or religion. See 42 U.S.C. § 2000a(a). Additionally, Title II does not entitle Elansari to a right to relief because Meta is not a “place of public accommodation.” See Ford v. Schering-Plough Corp., 145 F.3d 601, 612–14 (3d Cir. 1998) (holding that Title II is limited to physical structures and accommodations).

Thankfully, the court also notes that Section 230 would separately kill the lawsuit, as you can’t sue an interactive computer service for its moderation decisions.

Furthermore, Elansari seeks to hold Meta liable for its decisions regarding which content to publish, but § 230 of the Communications Decency Act “‘precludes courts from entertaining claims that would place a computer service provider in a publisher’s role,’ and therefore bars ‘lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions – such as deciding whether to publish, withdraw, postpone, or alter content.’” Green v. America Online (AOL), 318 F.3d 465, 471 (3d Cir. 2003) (quoting Zeran v. America Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)). In sum, Elansari presents no support, nor are we familiar with any, to ground his liberally construed argument that he is legally entitled to relief because Meta does not publish his preferred racial or religious content.

This is a pretty simple and straightforward case, and it’s no surprise that the court dismissed the case easily (and that the appeals court upheld that ruling). But as Goldman notes, it is just the type of case we’d likely see flooding the courts if the Supreme Court allows HB20 and other laws like it.

Everyone is mad about some aspects of how websites moderate. I dislike how most websites moderate. But none of that should give me the right to sue over those moderation choices. If I don’t like it, I can go elsewhere, or even seek to create my own site that does things differently. Or, of course, I can seek to persuade a website to change. That should be the extent of the recourse anyone has if they disagree with moderation decisions.

I do wonder if people who support laws like HB20 — and falsely think that it will magically end the “anti-conservative” bias in moderation that they think exists (but studies repeatedly show does not) — realize how much the laws will be used by the very people they dislike the most to force their content into certain spaces.

* FWIW, Elansari appears to be a serial litigant; we’ve written about his failed lawsuits against Runescape for muting his account, and against Tinder for sending notifications saying “people like you” that convinced him to sign up for a paid account.

Filed Under: amro elansari, content moderation, discrimination, hb 20, viewpoint discrimination
Companies: meta

Fucking Hell: David Mamet Files The Most Pointless, Silly Amicus Brief In Supreme Court Content Moderation Case

from the amicus-briefs-are-for-closers dept

Okay, look, this post is basically a repost of a post from two years ago. But, because David Mamet has decided to refile the exact same amicus brief he filed for the 5th Circuit again at the Supreme Court, I figured we can repost the same exact post ripping it apart (with a few tiny changes to reference the Supreme Court instead of the 5th Circuit). And yes, the new filing appears exactly the same, including the copyright notice from 2022 (though I can find no evidence that he ever bothered to register the copyright), but signed by a different lawyer who apparently didn’t write any of the actual brief.

About the only positive thing you can say about famed play/movie writer David Mamet deciding to file an amicus brief at the Supreme Court in support of Texas’s ability to mandate how social media companies moderate content is that it has fewer swear words than your typical Mamet production.

We expected some silly amicus briefs in support of Texas, and there have been many (some of which we’ll cover elsewhere), but by far the most bizarre is David Mamet’s decision to, um, weigh in, I guess?

Mamet, if you don’t know, took a Trumpian turn, and like pretty much anyone supporting this law, seems to think that their support of one dude now, um, trumps any actual principles. Or rather, they demonstrate impressively demented levels of cognitive dissonance by twisting themselves into knots pretending that commandeering private property, compelling speech, and removing the 1st Amendment rights of association from private companies is somehow… all about freedom?

The filing starts out with Mamet’s statement of interest in which he — in a brief supporting the government compelling private parties to host speech — claims he’s really concerned about government interference in our freedoms.

Proposed amicus David Mamet aspires to enjoy freedom of speech without government-enabled censorship. Mr. Mamet worries about how Americans can navigate their world when firms that control information conduits, and are privileged and subsidized by the government, serve curated “information” to users and the public which no longer maps onto the world that Americans personally observe.

That’s silly enough, but the actual amicus brief, well, holy shit. It’s… um… a story? It includes no citations. It makes no arguments. It’s just some sort of fictional story that feels like something a freshman in high school might write after getting high the first time and thinking they were profound. It starts out thusly:

The pilot wants to orient himself. He knows approximately where he is, for he knows the direction in which he’s been flying, the speed of the plane, and the time of flight. And he has a chart. Given a 100 mph airspeed, flying west for one hour, he should be at this point on the chart. He should, thus, see, to his right a camelbacked double hill, and, off to his left, a small lima bean shaped lake.

He now looks out, but he can’t find the objects the chart informed him he’d see. He concludes that he is lost.

How can he determine his location? He has a map, but he’s just misused it. How?

The Map is not the territory. The territory is the territory

It goes on like that. Mercifully, not for that long.

But I can assure you that this is likely to be the only amicus brief ever to include the line:

I report as an outdoorsman, that Panic is real. It is the loss of the mind and will to Pan, God of the Woods.

Anyway, after two pages of this silly drivel, he concludes:

A pilot in this situation might conclude he’d simply picked up the wrong map.

But what if the government and its privileged conduits prohibited him from choosing another?

copyright © 2022 by D. Mamet

Deep man. Pass the bong.

Anyway, where to start? Oh, hell, let’s start with the copyright notice. For over a decade, we’ve written about a few different cases where questions were raised about whether or not you could even copyright a legal brief. And the courts seem to say that you can in some circumstances, though almost no one bothers to register their briefs because, really, why would you? In a few situations where it was a pure money-grab by lawyers looking to shake down database companies like Westlaw and LexisNexis, the courts have not looked too kindly on these arguments. That said, we did cover one case where a copyright claim over a legal brief survived, but it was a pretty unique situation: two lawyers were working together while defending two different defendants in the same case, and one basically copied a huge section of a draft brief the other was going to file in that very case. Under those circumstances, the court found the copyright claim to be more reasonable.

Either way, it should be clear that reposting and commenting on Mamet’s brief here, as silly as it is, constitutes clear fair use.

As for the brief itself, even in today’s Supreme Court, it seems quite likely that this particular brief will be mostly ignored, other than perhaps to wonder at what inspired Mamet to submit such nonsense.

As for the — and I hesitate to call it this, but whatever — “substance” of Mamet’s argument, even given the most forgiving read of it, Mamet seems to be claiming that the obviously unconstitutional restriction on private property rights and the 1st Amendment rights of social media companies should be allowed, because… otherwise the government “and its privileged conduits prohibited” you from choosing another platform.

Except, that’s not at all what any of this is about. First off, social media platforms are not the government’s privileged conduits under any conceivable definition. Second, at no point is anyone prohibited from choosing another platform. And these days there are so many platforms for anyone to choose from, including special “maps” made specifically for Trumpists like Mamet. No one is trying to stop those from existing. Indeed, the law that Mamet is supporting would make it much harder for such sites to exist, leaving only the largest platforms standing, and only if they followed the government’s rules.

As someone who has apparently attended a few too many of Mamet’s plays and movies, I’ll say that in his old age, he seems to have completely lost the plot.

Filed Under: content moderation, david mamet, free speech, hb 20, maps, supreme court, texas
Companies: ccia, netchoice

Texas’ Ridiculous Content Moderation Bill Put On Hold Until The Supreme Court Can Consider It

from the a-brief-reprieve dept

Normally, this wouldn’t be surprising, and normally, this wouldn’t even require a blog post, but because nothing in the 5th Circuit makes sense these days, it is a little surprising and it is worth a post to note that despite the insanity of Judge Andy Oldham’s ruling putting Texas’ content moderation law back on the books, he has now agreed to put that ruling on hold while the parties ask the Supreme Court to hear the case.

Again, such a thing is pretty standard in lots of cases, but this is the case where, back in May, Oldham decided to say that the law should go into effect immediately without any explanation at all. That necessitated a rush to the Supreme Court’s shadow docket, where the justices put the law on hold, in order to allow the regular, normal procedure to take place. As you’re well aware, months later, Oldham finally came out with his batshit crazy decision, which required him to ignore a century’s worth of precedent, as well as decades’ worth of conservative 1st Amendment orthodoxy, in order to argue that the 1st Amendment’s association rights no longer apply to social media.

NetChoice and CCIA, the plaintiff trade groups in the case, asked Oldham to stay his ruling (that is, stop it from taking effect) in order to ask the Supreme Court to weigh in. A few weeks ago, Florida already asked the Supreme Court to hear its appeal of the 11th Circuit’s rejection of a similar law. The two laws are not identical, and differ in some potentially important ways, but the two appeals courts’ rulings are in clear conflict, and it is extremely likely that the Supreme Court will take these cases (and likely merge them into one), in what may be the most important Supreme Court case regarding the internet ever.

Perhaps surprisingly, Texas chose not to oppose the request for the stay. Again, in normal times, that wouldn’t be a surprise or even noteworthy. But, again, these are not normal times, and Texas politicians keep insisting they really, really need this law. Of course, they’re smart enough to know that it was going to the Supreme Court eventually anyway, so it probably did no one any good to play petty politics over this.

Of course, there’s the other theory as well: that this is a case of the dog (Texas) actually catching the car (a blatantly unconstitutional content moderation law), and having no idea what to actually do with it. I do kinda wonder if at least some of the folks in the Texas government were beginning to realize just how messed up things would be if the law actually went into effect, because it’s literally impossible to comply with. So, getting to wait until the Supreme Court reviews the case gives those folks an “out.” They don’t end up creating a huge mess for the internet just days before the midterm elections, and if (fingers crossed) the Supreme Court gets stuff right next year, they can just blame the Supreme Court to their gullible base, rather than have to deal with the fallout of their spite-driven nonsense.

Filed Under: 5th circuit, content moderation, hb 20, supreme court, texas
Companies: ccia, netchoice

No One Has Any Clue How Texas’ Social Media Law Can Actually Work (Because It Can’t Work)

from the this-is-all-so-so-dumb dept

Lots of people are still trying to mentally process the bizarrely confused 5th Circuit ruling that has reinstated Texas’ social media content moderation law. I wrote an initial analysis of the ruling here, and then a further analysis of just some of the most egregious problems with it over at The Daily Beast. This week I’ve been at the TrustCon conference, where multiple people who actually have to implement the law have been repeatedly telling me that they have no idea how anyone even thinks it’s possible to follow the law. Because it is, quite clearly, impossible.

The Atlantic’s Charlie Warzel released an article asking if this ruling “is the beginning of the end of the internet?” which may feel hyperbolic, but at the very least, if the ruling stands, it’s certainly the end of the internet as we know it. What comes after that is going to be something quite different. Warzel interviewed me for the piece, among some others, but the key part of the article comes from Stanford’s Daphne Keller, noting that it seems unlikely that even Texas legislators who wrote and passed the law have any idea what the law will do:

Keller, of Stanford’s Cyber Policy Center, has tried to game out future scenarios, such as social networks having a default non-moderated version that might quickly become unusable, and a separate opt-in version with all the normal checks and balances (terms-of-service agreements and spam filters) that sites have now. But how would a company go about building and running two simultaneous versions of the same platform at once? Would the Chaos Version run only in Texas? Or would companies try to exclude Texas residents from their platforms?

“You have potential situations where companies would have to say, ‘Okay, we’re kicking off this neo-Nazi, but he’s allowed to stay on in Texas,’” Masnick said. “But what if the neo-Nazi doesn’t live in Texas?” The same goes for more famous banned users, such as Trump. Do you ban Trump’s tweets in every state except Texas? It seems almost impossible for companies to comply with this law in a way that makes sense. The more likely reality, Masnick suggests, is that companies will be unable to comply and will end up ignoring it, and the Texas attorney general will keep filing suit against them, causing more simmering resentment among conservatives against Big Tech.

What is the endgame of a law that is both onerous to enforce and seemingly impossible to comply with? Keller offered two theories: “I think passing this law was so much fun for these legislators, and I think they might have expected it would get struck down, so the theater was the point.” But she also believes that there is likely some lack of understanding among those responsible for the law about just how extreme the First Amendment is in practice. “Most people don’t realize how much horrible speech is legal,” she said, arguing that historically, the constitutional right has confounded logic on both the political left and right. “These legislators think that they’re opening the door to some stuff that might offend liberals. But I don’t know if they realize they are also opening the door to barely legal child porn or pro-anorexia content and beheading videos. I don’t think they’ve understood how bad the bad is.”

This is almost certainly true. Remember, the bill’s own author once got so angry at me on Twitter that he seemed to imply that he knows that Section 230 pre-empts his entire bill.

So… as it stands, we have a bill where the social media companies have no clue how to comply, and the lawmakers who wrote it have no idea either (and don’t seem to much care). The whole thing is just pure nonsense — legislating out of pure spite.

Texas lawmakers don’t actually understand any of this. What they wanted was to make “big tech” feel bad. But they didn’t actually do that either. They just made everyone confused, because no one in their right mind would pass a law that effectively requires all horrible content to remain online in perpetuity.

But that’s what Texas lawmakers did.

So, now they’re the dog that caught the car, and while they almost certainly don’t realize it yet (assuming the Supreme Court doesn’t step in and fix things), they’re going to find out that they don’t actually like their jaws clamped to a car zipping down the highway…

Filed Under: 5th circuit, content moderation, hb 20, social media, texas

Conservatives Loved Expanding The 1st Amendment To Corporations… Until Last Year. Wonder Why?

from the what-could-it-possibly-be dept

Right after the 5th Circuit’s ruling on Texas’ HB 20 law on content moderation came out, I wrote up a long post going through the many, many oddities (and just flat out mistakes) of the ruling.

Since then, one thing that was bothering me about this ruling was that it wasn’t just wrong on the law, wrong on the relevant precedents, and wrong on the 1st Amendment… but it literally went against the last few decades of how conservative Federalist Society judges have been expanding the 1st Amendment to cover more and more activity by organizations (which, contrary to popular opinion, I actually think has been mostly correct).

The Daily Beast asked me to write up an analysis of the 5th Circuit ruling, and one thing I focused on was just how blatantly basically the entire Republican ecosystem completely reversed on this issue over the last year and a half since Donald Trump got banned from Twitter. I mean, at a very direct level, Republicans insisted (falsely) that net neutrality was an attack on the “free speech rights” of internet providers, and that the very limited net neutrality rules that the FCC put in place were “the government takeover of the internet.” Yet they suddenly have no problem applying much more aggressive, 1st Amendment-violating rules to edge providers that are nothing like internet service providers.

And while I kept hearing people say that the Dobbs ruling showed that the Supreme Court will now ignore precedent to get to the results it wants, there’s something different about the 5th Circuit’s ruling in the NetChoice case:

The cynical will point to things like the Supreme Court’s decision in Dobbs (which overturned Roe v. Wade) and note that we’ve entered an era of Calvinball jurisprudence—in which precedents are no longer an impediment to whatever endgame Federalist Society judges want. (The beloved comic strip Calvin and Hobbes introduced us to the concept of “Calvinball”—a sport in which the participants make up the rules as they go, never using the same rules twice.)

But in some ways this decision is even more ridiculous. There are pockets of the conservative world that have spent 50 years honing arguments to overturn Roe. The opposite is true when it comes to upending the First Amendment.

Indeed, the same forces that worked to overturn Roe spent nearly the same amount of time working to strengthen and expand judicial recognition of the First Amendment rights of companies—from allowing a baker to choose not to decorate a cake, to allowing companies to cite the First Amendment as a reason not to provide contraception as part of a health plan, and deciding that the First Amendment did not allow Congress to bar certain types of expenditures in support of political candidates.

No matter how you feel about Masterpiece Cakeshop, Hobby Lobby or Citizens United, all three were cases driven by conservative arguments that relied heavily on the fundamental position that the First Amendment barred restrictions on corporate expression, including the right to not be forced to endorse, enable, or support certain forms of expression.

I pointed out how Ken White had once noted that there just wasn’t a deep bench of conservative judges looking to take away 1st Amendment rights. And that actually held for a while:

As First Amendment lawyer Ken White noted back in the comparatively innocent days of November 2016, regarding Donald Trump’s call to open up our libel laws, “You can go shopping for judicial candidates whose writings or decisions suggest they will overturn Roe v. Wade, but it would be extremely difficult to find ones who would reliably overturn [key First Amendment precedents.]”

But, as if to just put a spotlight on their lack of actual principles, a huge part of the Republican establishment flipped on this point on a dime, solely to punish tech companies that they feel have become “too woke.” It’s almost as if they only support the 1st Amendment for those who ideologically agree with them.

I mean, Justice Clarence Thomas, who almost certainly will vote to uphold the 5th Circuit, will be doing a complete 180 on his concurrence in Masterpiece Cakeshop. In that one, he argued the Supreme Court should have gone even further to make it clear that forcing a baker to decorate a cake for a gay couple would violate the baker’s free speech, and dismissed the key cases the 5th Circuit relied on in the NetChoice case (FAIR and Pruneyard) as being wholly inapplicable, while highlighting the importance of Miami Herald v. Tornillo (the case that the 5th Circuit says is wholly different) on the 1st Amendment protecting the right for private operators to “exercise control over the messages” they send.

With Dobbs, everyone knew where it was going, because conservatives spent 50 years working up to it. But the 5th Circuit ruling lays bare how there are no principles among an unfortunately large segment of today’s Republicans in both statehouses and courts. It’s not about principles. It is entirely focused on punishing people they don’t like.

There’s a lot more in the Daily Beast piece, but I wanted to highlight that one element that hadn’t received as much attention.

Filed Under: 1st amendment, 5th circuit, andy oldham, clarence thomas, compelled speech, content moderation, hb 20, social media, texas

Did The 5th Circuit Just Make It So That Wikipedia Can No Longer Be Edited In Texas?

from the bang-up-job,-andy dept

I wrote up an initial analysis of the 5th Circuit’s batshit crazy ruling re-instating Texas’s social media content moderation law last week. I have another analysis of it coming out shortly in another publication (I’ll then write about it here). A few days ago, Prof. Eric Goldman did his own analysis as well, which is well worth reading. It breaks out a long list of just flat-out errors made by Judge Andy Oldham. It’s kind of embarrassing.

But there is one point in the piece that seemed worth calling out and highlighting. There is something of an open question as to what platforms technically fall under Texas’ law. The law defines “social media platform” as follows:

“Social media platform” means an Internet website or application that is open to the public, allows a user to create an account, and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images. The term does not include:

(A) an Internet service provider as defined by Section 324.055;
(B) electronic mail; or
(C) an online service, application, or website:
(i) that consists primarily of news, sports, entertainment, or other information or content that is not user generated but is preselected by the provider; and
(ii) for which any chat, comments, or interactive functionality is incidental to, directly related to, or dependent on the provision of the content described by Subparagraph (i).

The operative “anti-censorship” provision only applies to such social media platforms that have “more than 50 million active users in the United States in a calendar month.” Leaving aside that no one really knows how many active users they truly have, the definition above sweeps in a lot more companies than people realize.

In its filings in the case, Texas had claimed that the only companies covered by the law were Facebook, Twitter, and YouTube. Judge Andy Oldham, in his ridiculous ruling, stated that “the plaintiff trade associations represent all the Platforms covered by HB 20.”

But, from the definition above, that’s clearly false. First off, it’s not even clear if Twitter actually qualifies. As we’ve learned (oh so painfully), Twitter no longer even reports its “monthly active users,” but instead chooses to release its “monetizable daily active users” which is not even close to the same thing. When it last did post info on its monthly active users, apparently it was only 38 million — meaning it might not even be subject to the anti-censorship provisions of the law!

But also, there are other platforms which are not members of either trade association, and yet still qualify under the definition above. Law professor Daphne Keller put together a list of public information on internet company sizes for Senate testimony earlier this year, and it’s a useful guide.

One name that stands out: Wikipedia. According to Keller’s estimate, it has more than 97 million monthly active users on the site. It meets the definition under the law. It’s a website that is open to the public, allows a user to create an account and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images.

It doesn’t meet any of the exceptions. It’s not an ISP. It does not provide email. It does not consist “primarily” of news, sports, entertainment, or “other information or content that is not user generated.” Wikipedia is all user generated. And the interactive nature of the site is not incidental to the service. It’s the whole point.

So… Wikipedia qualifies.
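To make the coverage analysis concrete, here is a minimal sketch of the statutory test as a predicate. This is purely an illustration: the field names and the `Site` type are invented for this example (they don’t come from any real API), and it deliberately collapses the statute’s (C)(i)–(ii) exception into a single “primarily user generated” flag.

```python
# Hypothetical sketch of HB 20's "social media platform" coverage test.
# All names here are illustrative; the 50M threshold and the definitional
# elements come from the statute as quoted above.
from dataclasses import dataclass

@dataclass
class Site:
    open_to_public: bool
    has_user_accounts: bool
    primarily_user_generated: bool  # rough stand-in for the (C) exception
    is_isp: bool                    # exception (A)
    is_email: bool                  # exception (B)
    monthly_active_us_users: int

def covered_by_hb20(site: Site) -> bool:
    """Rough reading of the definition plus the 50M-user trigger."""
    meets_definition = (
        site.open_to_public
        and site.has_user_accounts
        and site.primarily_user_generated
        and not site.is_isp
        and not site.is_email
    )
    return meets_definition and site.monthly_active_us_users > 50_000_000

# Wikipedia, per Keller's estimate of more than 97M monthly active users:
wikipedia = Site(True, True, True, False, False, 97_000_000)
print(covered_by_hb20(wikipedia))  # → True

# Twitter at its last-reported 38M monthly active users:
twitter = Site(True, True, True, False, False, 38_000_000)
print(covered_by_hb20(twitter))  # → False
```

Even this toy version shows the core problem: every input to the predicate (especially the user count and the “primarily user generated” judgment) is contested or unknowable in practice.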

Now… how does Wikipedia comply?

Under the law, Wikipedia cannot “censor” based on “the viewpoint of the user.” But, Wikipedia is constantly edited by users. Even if you were to claim that a user chose to edit an entry because of the “viewpoint” of the content, how would Wikipedia even prevent that?

Wikipedia must also create (I’m already laughing) an email address where users can send complaints and a whole “complaint system.”

I don’t see how that can happen.

Anyway, it’s possible this means that Wikipedia can no longer stop people from adding more and more content (true or not) to Judge Andy Oldham’s profile, because having users take it down would potentially violate the law (but don’t do that: vandalizing Wikipedia is always bad, even if you’re trying to make a point).

The entire law is based on the idea that all moderation takes place by the company itself, and not by users.

It’s also possible that Reddit is swept up under the law (it’s unclear if they have enough US users, but it’s close), and again, I don’t see how it can comply. Moderation there is multi-layered, but there is user voting, which certainly might be based on viewpoints. There are admin level moderation decisions (so, under this law, Reddit might not have been able to ban a bunch of abusive subreddits). But, each subreddit has its own rules and its own moderators. Will individual subreddit moderation run afoul of this law? Can subreddits even still operate?

¯\_(ツ)_/¯

No one knows!

Discord might also be close to the trigger line and again, I don’t see how it could comply, since each Discord server has its own administrators and moderators.

On Twitter, someone noted that the job board Indeed.com claims to have over 250 million unique visitors every month. That figure was as of 2020, and more recent numbers show it much higher, with the latest monthly numbers (from May of this year) showing over 650 million visits. Visits and users are not the same thing, but it’s not difficult to see how that turns into over 50 million active users in the US.

And… that creates more problems. As was noted to me on Twitter, if someone now posts a job opening on Indeed that violates EEOC rules by saying certain races shouldn’t apply, well, under the Texas law, Indeed would have to leave that ad up (though, under EEOC rules, they’d have to take it down).

This is just part of the reason we have a dormant commerce clause in the Constitution, which should have gotten this law tossed even earlier, but alas…

Anyway, if the law does actually go into effect, we’re going to discover lots of nonsense like this. But that’s because the Texas legislature, the Texas executive branch, and foolish judges like Andy Oldham don’t actually understand any of this. They’re just real angry that Donald Trump got banned from Twitter for being an ass.

Filed Under: 5th circuit, andy oldham, content moderation, hb 20, wikipedia
Companies: discord, indeed, reddit, wikipedia

Florida Officially Asks Supreme Court To Review Its Social Media Content Moderation Law

from the let's-gooooooooooooooooooo dept

Back in May, an 11th Circuit appeals court panel found that Florida’s ridiculous content moderation law was clearly unconstitutional, mostly upholding a district court ruling saying the same thing. As you’ll recall, Florida passed this law, mainly in response to Trump being banned from social media, that limits how websites can moderate content, largely focused on content posted by politicians. The 11th Circuit did push back on one part of the lower court decision, saying that the transparency requirements of the law were likely constitutional.

As you also know, Texas passed a similar law and the various fights over both states’ laws have been mostly intertwined. Last week, the 5th Circuit issued its bewildering ruling (we’ll have more on that soon) that basically ignored a century’s worth of 1st Amendment law, while misreading both other existing precedent and literally rewriting Section 230 and pretending it was somehow controlling over the 1st Amendment.

Anyway, over the summer, Florida had told the lower courts that it intended to ask the Supreme Court to hear the appeal over its law, and on Wednesday that finally happened. Florida has petitioned the Supreme Court to review the decision, highlighting the two key questions it sees from the ruling. While the Supreme Court does not need to take the case, it seems likely it will. It’s possible that an appeal on the 5th Circuit’s ruling will get consolidated into this case as well, or perhaps it will remain separate.

Florida presents these as the two questions the appeal seeks to answer:

1. Whether the First Amendment prohibits a State from requiring that social-media companies host third-party communications, and from regulating the time, place, and manner in which they do so.

2. Whether the First Amendment prohibits a State from requiring social-media companies to notify and provide an explanation to their users when they censor the user’s speech.

Both of these questions could have a huge impact on the future of the internet. The answer to both of these should be yes. Indeed, there’s some argument that it’s a little weird that Florida constructed the questions in a way where they want the answer to be “no” rather than “yes.” But, beyond that, this case is going to be a big, big deal.

It’s unclear if Florida deliberately waited for the 5th Circuit’s opinion, but the petition plays up the circuit split between the 5th and 11th Circuits.

The Fifth Circuit split with the decision below on the threshold question of whether the platforms are speaking at all when they censor a user’s speech.

The Eleventh Circuit below said “yes.” It reasoned that “[w]hen a platform selectively removes what it perceives to be incendiary political rhetoric, pornographic content, or public-health misinformation, it conveys a message and thereby engages in ‘speech’ within the meaning of the First Amendment.” App.19a–20a. And it reached that conclusion because it thought that “editorial judgments” are protected by the First Amendment. App.20a.

The Fifth Circuit said “no.” In rejecting the Eleventh Circuit’s reasoning, it explained that the Eleventh Circuit’s “‘editorial-judgment principle’ conflicts with” this Court’s cases. Paxton, 2022 WL 4285917, at *39. As the Fifth Circuit pointed out, this Court has held that some hosts can be denied the “right to decide whether to disseminate or accommodate a” speaker’s message

That certainly tees things up for the 5th Circuit ruling to be consolidated into this case.

Much of the argument by Florida is basically just repeating the 5th Circuit’s nonsense ruling, which is to be expected. I don’t need to go over why it’s all wrong — that’s pretty well established. I will have more soon on why multiple Supreme Court justices would need to completely reverse themselves on earlier decisions to agree with both Texas and Florida, but that’s not impossible these days.

Either way, the Supreme Court is likely to hear this and it’s just the future of the open internet and editorial freedom at stake.

Filed Under: 1st amendment, content moderation, editorial discretion, florida, hb 20, sb 7072, supreme court, texas
Companies: ccia, netchoice

5th Circuit Rewrites A Century Of 1st Amendment Law To Argue Internet Companies Have No Right To Moderate

from the batshit-crazy dept

As far as I can tell, in the area the 5th Circuit appeals court has jurisdiction, websites no longer have any 1st Amendment editorial rights. That’s the result of what appears to me to be the single dumbest court ruling I’ve seen in a long, long time, and I know we’ve seen some crazy rulings of late. However, thanks to judge Andy Oldham, internet companies no longer have 1st Amendment rights regarding their editorial decision making.

Let’s take a step back. As you’ll recall, last summer, in a fit of censorial rage, the Texas legislature passed HB 20, a dangerously unconstitutional bill that would bar social media websites from moderating as they see fit. As we noted, the bill opens up large websites to a lawsuit over basically every content moderation decision they make (and that’s just one of the problems). Pretty quickly, a district court judge tossed out the entire law as unconstitutional in a careful, thorough ruling that explained why every bit of the law violated websites’ own 1st Amendment rights to put in place their own editorial policies.

On appeal to the 5th Circuit, the court did something bizarre: without giving any reason or explanation at all, it reinstated the law and promised a ruling at some future date. This was procedurally problematic, leading the social media companies (represented by two of their trade groups, NetChoice and CCIA) to ask the Supreme Court to slow things down a bit, which is exactly what the Supreme Court did.

Parallel to all of this, Florida had passed a similar law, and again a district court had found it obviously unconstitutional. That, too, was appealed, yet in the 11th Circuit the court rightly agreed with the lower court that the law was (mostly) unconstitutional. That teed things up for Florida to ask the Supreme Court to review the issue.

However, remember, back in May when the 5th Circuit initially reinstated the law, it said it would come out with its full ruling later. Over the last few months I’ve occasionally pondered (sometimes on Twitter) whether the 5th Circuit would ever get around to actually releasing an opinion. And that’s what it just did. And, as 1st Amendment lawyer Ken White notes, it’s “the most angrily incoherent First Amendment decision I think I’ve ever read.”

It is difficult to state how completely disconnected from reality this ruling is, and how dangerously incoherent it is. It effectively says that companies no longer have a 1st Amendment right to their own editorial policies. Under this ruling, any state in the 5th Circuit could, in theory, mandate that news organizations must cover certain politicians or certain other content. It could, in theory, allow a state to mandate that any news organization must publish opinion pieces by politicians. It completely flies in the face of the 1st Amendment’s association rights and the right to editorial discretion.

There’s going to be plenty to say about this ruling, which will go down in the annals of history as a complete embarrassment to the judiciary, but let’s hit the lowest points. The crux of the ruling, written by Judge Andy Oldham, is as follows:

Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say. Because the district court held otherwise, we reverse its injunction and remand for further proceedings.

Considering just how long Republicans (and Oldham was a Republican political operative before being appointed to the bench) have spent insisting that corporations have 1st Amendment rights, this is a major turnaround, and (as noted) an incomprehensible one. Frankly, Oldham’s arguments sound much more like the arguments made by ignorant trolls in our comments than anyone with any knowledge or experience with 1st Amendment law.

I mean, it’s as if Judge Oldham has never heard of the 1st Amendment’s prohibition on compelled speech.

First, the primary concern of overbreadth doctrine is to avoid chilling speech. But Section 7 does not chill speech; instead, it chills censorship. So there can be no concern that declining to facially invalidate HB 20 will inhibit the marketplace of ideas or discourage commentary on matters of public concern. Perhaps as-applied challenges to speculative, now-hypothetical enforcement actions will delineate boundaries to the law. But in the meantime, HB 20’s prohibitions on censorship will cultivate rather than stifle the marketplace of ideas that justifies the overbreadth doctrine in the first place.

Judge Oldham insists that concerns about forcing websites to post speech from Nazis, terrorist propaganda, and Holocaust denial are purely hypothetical. Really.

The Platforms do not directly engage with any of these concerns. Instead, their primary contention—beginning on page 1 of their brief and repeated throughout and at oral argument—is that we should declare HB 20 facially invalid because it prohibits the Platforms from censoring “pro-Nazi speech, terrorist propaganda, [and] Holocaust denial[s].” Red Br. at 1.

Far from justifying pre-enforcement facial invalidation, the Platforms’ obsession with terrorists and Nazis proves the opposite. The Supreme Court has instructed that “[i]n determining whether a law is facially invalid,” we should avoid “speculat[ing] about ‘hypothetical’ or ‘imaginary’ cases.” Wash. State Grange, 552 U.S. at 449–50. Overbreadth doctrine has a “tendency . . . to summon forth an endless stream of fanciful hypotheticals,” and this case is no exception. United States v. Williams, 553 U.S. 285, 301 (2008). But it’s improper to exercise the Article III judicial power based on “hypothetical cases thus imagined.” Raines, 362 U.S. at 22; cf. Sineneng-Smith, 140 S. Ct. at 1585–86 (Thomas, J., concurring) (explaining the tension between overbreadth adjudication and the constitutional limits on judicial power).

These are not hypotheticals. This is literally what these websites have to deal with on a daily basis. And which, under Texas’ law, they no longer could do.

Oldham continually focuses (incorrectly and incoherently) on the idea that editorial discretion is censorship. There’s a reason that we’ve spent the last few years explaining how the two are wholly different and part of it was to avoid people like Oldham getting confused. Apparently it didn’t work.

We reject the Platforms’ efforts to reframe their censorship as speech. It is undisputed that the Platforms want to eliminate speech—not promote or protect it. And no amount of doctrinal gymnastics can turn the First Amendment’s protections for free speech into protections for free censoring.

That paragraph alone is scary. It basically argues that the state can now compel any speech it wants on private property, as it reinterprets the 1st Amendment to mean that the only thing it limits is the power of the state to remove speech, while leaving open the power of the state to foist speech upon private entities. That’s ridiculous.

Oldham then tries to square this by… pulling in wholly unrelated issues around the few rare, limited, fact-specific cases where the courts have allowed compelled speech.

Supreme Court precedent instructs that the freedom of speech includes “the right to refrain from speaking at all.” Wooley v. Maynard, 430 U.S. 705, 714 (1977); see also W. Va. State Bd. of Educ. v. Barnette, 319 U.S. 624, 642 (1943). So the State may not force a private speaker to speak someone else’s message. See Wooley, 430 U.S. at 714.

But the State can regulate conduct in a way that requires private entities to host, transmit, or otherwise facilitate speech. Were it otherwise, no government could impose nondiscrimination requirements on, say, telephone companies or shipping services. But see 47 U.S.C. § 202(a) (prohibiting telecommunications common carriers from “mak[ing] any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services”). Nor could a State create a right to distribute leaflets at local shopping malls. But see PruneYard Shopping Ctr. v. Robins, 447 U.S. 74, 88 (1980) (upholding a California law protecting the right to pamphleteer in privately owned shopping centers). So First Amendment doctrine permits regulating the conduct of an entity that hosts speech, but it generally forbids forcing the host itself to speak or interfering with the host’s own message.

From there, he argues that forcing websites to host speech they disagree with is not compelled speech.

The Platforms are nothing like the newspaper in Miami Herald. Unlike newspapers, the Platforms exercise virtually no editorial control or judgment. The Platforms use algorithms to screen out certain obscene and spam-related content. And then virtually everything else is just posted to the Platform with zero editorial control or judgment.

Except that’s the whole point. The websites do engage in editorial control. The difference from newspapers is that it’s ex post control. If there are complaints, they will review the content afterwards to see if it matches with their editorial policies (i.e., terms of use). So, basically, Oldham is simply wrong here. They do exercise editorial control. That they use it sparingly does not mean they give up the right. Yet Oldham thinks otherwise.

From there, Oldham literally argues there is no editorial discretion under the 1st Amendment. Really.

Premise one is faulty because the Supreme Court’s cases do not carve out “editorial discretion” as a special category of First-Amendment-protected expression. Instead, the Court considers editorial discretion as one relevant consideration when deciding whether a challenged regulation impermissibly compels or restricts protected speech.

To back this up, the court cites Turner v. FCC, which has recently become a misleading favorite among those who are attacking Section 230. But the Turner case really turned on some pretty specific facts about cable TV versus broadcast TV which are not at all in play here.

Oldham also states that content moderation isn’t editorial discretion, even though it literally is.

Even assuming “editorial discretion” is a freestanding category of First-Amendment-protected expression, the Platforms’ censorship doesn’t qualify. Curiously, the Platforms never define what they mean by “editorial discretion.” (Perhaps this casts further doubt on the wisdom of recognizing editorial discretion as a separate category of First-Amendment-protected expression.) Instead, they simply assert that they exercise protected editorial discretion because they censor some of the content posted to their Platforms and use sophisticated algorithms to arrange and present the rest of it. But whatever the outer bounds of any protected editorial discretion might be, the Platforms’ censorship falls outside it. That’s for two independent reasons.

And here it gets really stupid. The ruling argues that because of Section 230, internet websites can’t claim editorial discretion. This is a ridiculously confused misreading of 230.

First, an entity that exercises “editorial discretion” accepts reputational and legal responsibility for the content it edits. In the newspaper context, for instance, the Court has explained that the role of “editors and editorial employees” generally includes “determin[ing] the news value of items received” and taking responsibility for the accuracy of the items transmitted. Associated Press v. NLRB, 301 U.S. 103, 127 (1937). And editorial discretion generally comes with concomitant legal responsibility. For example, because of “a newspaper’s editorial judgments in connection with an advertisement,” it may be held liable “when with actual malice it publishes a falsely defamatory” statement in an ad. Pittsburgh Press Co. v. Pittsburgh Comm’n on Human Rels., 413 U.S. 376, 386 (1973). But the Platforms strenuously disclaim any reputational or legal responsibility for the content they host. See supra Part III.C.2.a (quoting the Platforms’ adamant protestations that they have no responsibility for the speech they host); infra Part III.D (discussing the Platforms’ representations pertaining to 47 U.S.C. § 230)

Then, he argues that there’s some sort of fundamental difference between exercising editorial discretion before or after the content is posted:

Second, editorial discretion involves “selection and presentation” of content before that content is hosted, published, or disseminated. See Ark. Educ. Television Comm’n v. Forbes, 523 U.S. 666, 674 (1998); see also Miami Herald, 418 U.S. at 258 (a newspaper exercises editorial discretion when selecting the “choice of material” to print). The Platforms do not choose or select material before transmitting it: They engage in viewpoint-based censorship with respect to a tiny fraction of the expression they have already disseminated. The Platforms offer no Supreme Court case even remotely suggesting that ex post censorship constitutes editorial discretion akin to ex ante selection. They instead baldly assert that “it is constitutionally irrelevant at what point in time platforms exercise editorial discretion.” Red Br. at 25. Not only is this assertion unsupported by any authority, but it also illogically equates the Platforms’ ex post censorship with the substantive, discretionary, ex ante review that typifies “editorial discretion” in every other context

So, if I read that correctly, websites can now continue to moderate only if they pre-vet all content they post. Which is also nonsense.

From there, Oldham goes back to Section 230, where he again gets the analysis exactly backwards. He argues that Section 230 alone makes HB 20’s provisions constitutional, because it says that you can’t treat user speech as the platform’s speech:

We have no doubts that Section 7 is constitutional. But even if some were to remain, 47 U.S.C. § 230 would extinguish them. Section 230 provides that the Platforms “shall [not] be treated as the publisher or speaker” of content developed by other users. Id. § 230(c)(1). Section 230 reflects Congress’s judgment that the Platforms do not operate like traditional publishers and are not “speak[ing]” when they host user-submitted content. Congress’s judgment reinforces our conclusion that the Platforms’ censorship is not speech under the First Amendment.

[….]

Section 230 undercuts both of the Platforms’ arguments for holding that their censorship of users is protected speech. Recall that they rely on two key arguments: first, they suggest the user-submitted content they host is their speech; and second, they argue they are publishers akin to a newspaper. Section 230, however, instructs courts not to treat the Platforms as “the publisher or speaker” of the user-submitted content they host. Id. § 230(c)(1). And those are the exact two categories the Platforms invoke to support their First Amendment argument. So if § 230(c)(1) is constitutional, how can a court recognize the Platforms as First-Amendment-protected speakers or publishers of the content they host?

Oldham misrepresents the arguments of websites that support Section 230, claiming that by using 230 to defend their moderation choices they have claimed in court they are “neutral tools” and “simple conduits of speech.” But that completely misrepresents what has been said and how this plays out.

It’s an upside down and backwards misrepresentation of how Section 230 actually works.

Oldham also rewrites part of Section 230 to make it work the way he wants it to. Again, this reads like some of our trolls, rather than how a jurist is supposed to act:

The Platforms’ only response is that in passing § 230, Congress sought to give them an unqualified right to control the content they host— including through viewpoint-based censorship. They base this argument on § 230(c)(2), which clarifies that the Platforms are immune from defamation liability even if they remove certain categories of “objectionable” content. But the Platforms’ argument finds no support in § 230(c)(2)’s text or context. First, § 230(c)(2) only considers the removal of limited categories of content, like obscene, excessively violent, and similarly objectionable expression. It says nothing about viewpoint-based or geography-based censorship. Second, read in context, § 230(c)(2) neither confers nor contemplates a freestanding right to censor. Instead, it clarifies that censoring limited categories of content does not remove the immunity conferred by § 230(c)(1). So rather than helping the Platforms’ case, § 230(c)(2) further undermines the Platforms’ claim that they are akin to newspapers for First Amendment purposes. That’s because it articulates Congress’s judgment that the Platforms are not like publishers even when they engage in censorship.

Except that Section 230 does not say “similarly objectionable.” It says “otherwise objectionable.” By switching “otherwise objectionable” to “similarly objectionable,” Oldham is insisting that courts like his own get to determine what counts as “similarly objectionable,” and that alone is a clear 1st Amendment problem. The courts cannot decide what content a website finds objectionable. That is, yet again, the state intruding on the editorial discretion of a website.

Also, completely ridiculously, Oldham leaves out that (c)(2) does not just include that list of objectionable categories, but it states: “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” In other words, the law explicitly states that whether or not something falls into that list is up to the provider or user and not the state. To leave that out of his description of (c)(2) is beyond misleading.

Also notable: Oldham completely ignores the fact that Section 230 pre-empts state laws like Texas’s, saying that “no liability may be imposed under any State or local law that is inconsistent with this section.” I guess Oldham is arguing that Texas’s law somehow is not inconsistent with 230, but it certainly is inconsistent with two and a half decades of 230 jurisprudence.

There’s then a long and, again, nonsensical discussion of common carriers, basically saying that the state can magically declare social media websites common carriers. I’m not even going to give that argument the satisfaction of covering it, it is so disconnected from reality. Social media literally meets none of the classifications of traditional common carriers. The fact that Oldham claims that “the Platforms are no different than Verizon or AT&T” makes me question how anyone could take anything in this ruling seriously.

I’m also going to skip over the arguments for why the “transparency” bits are constitutional according to the 5th Circuit, other than to note that California must be happy, because under this ruling its new social media transparency laws would also be deemed constitutional even if they now conflict with Texas’s (that’ll be fun).

There are a few notable omissions from the ruling. It never mentions ACLU v. Reno, which seems incredibly relevant given its discussion of how the internet and the 1st Amendment work together, and is glaring in its absence. Second, it completely breezes past Justice Kavanaugh’s ruling in the Halleck case, which clearly established that under the First Amendment a “private entity may thus exercise editorial discretion over the speech and speakers in the forum.” The only mention of the ruling is in a single footnote, claiming that ruling only applies to “public forums” and saying it’s distinct from the issue raised here. But, uh, the quote (and much of the ruling) literally says the opposite. It’s talking about private forums. This is ridiculous. Third, as noted, the ruling ignores the pre-emption aspects of Section 230. Fourth, while it discusses the 11th Circuit’s ruling regarding Florida’s law, it tries to distinguish the two (while also highlighting where the two Circuits disagree to set up the inevitable Supreme Court battle). Finally, it never addresses the fact that the Supreme Court put its original “turn the law back on” ruling on hold. Apparently Oldham doesn’t much care.

The other two judges on the panel also provided their own, much shorter opinions, with Judge Edith Jones concurring and just doubling down on Oldham’s nonsense. There is an opinion from Judge Leslie Southwick that is a partial concurrence and partial dissent. It concurs on the transparency stuff, but dissents regarding the 1st Amendment.

The majority frames the case as one dealing with conduct and unfair censorship. The majority’s rejection of First Amendment protections for conduct follows unremarkably. I conclude, though, that the majority is forcing the picture of what the Platforms do into a frame that is too small. The frame must be large enough to fit the wide-ranging, free-wheeling, unlimited variety of expression — ranging from the perfectly fair and reasonable to the impossibly biased and outrageous — that is the picture of the First Amendment as envisioned by those who designed the initial amendments to the Constitution. I do not celebrate the excesses, but the Constitution wisely allows for them.

The majority no doubt could create an image for the First Amendment better than what I just verbalized, but the description would have to be similar. We simply disagree about whether speech is involved in this case. Yes, almost none of what others place on the Platforms is subject to any action by the companies that own them. The First Amendment, though, is what protects the curating, moderating, or whatever else we call the Platforms’ interaction with what others are trying to say. We are in a new arena, a very extensive one, for speakers and for those who would moderate their speech. None of the precedents fit seamlessly. The majority appears assured of their approach; I am hesitant. The closest match I see is caselaw establishing the right of newspapers to control what they do and do not print, and that is the law that guides me until the Supreme Court gives us more.

Judge Southwick then dismantles, bit by bit, each of Oldham’s arguments regarding the 1st Amendment and basically highlights how his much younger colleague is clearly misreading a few outlier Supreme Court rulings.

It’s a good read, but this post is long enough already. I’ll just note this point from Southwick’s dissent:

In no manner am I denying the reasonableness of the governmental interest. When these Platforms, that for the moment have gained such dominance, impose their policy choices, the effects are far more powerful and widespread than most other speakers’ choices. The First Amendment, though, is not withdrawn from speech just because speakers are using their available platforms unfairly or when the speech is offensive. The asserted governmental interest supporting this statute is undeniably related to the suppression of free expression. The First Amendment bars the restraints.

This resonated with me quite a bit, and drove home the problem with Oldham’s argument. It is the equivalent of one of Ken White’s famed free speech tropes. Oldham pointed to the outlier cases where some compelled speech was found constitutional, and turned that automatically into “if some compelled speech is constitutional, then it’s okay for this compelled speech to be constitutional.”

But that’s not how any of this works.

Southwick also undermines Oldham’s common carrier arguments and his Section 230 arguments, noting:

Section 230 also does not affect the First Amendment right of the Platforms to exercise their own editorial discretion through content moderation. My colleague suggests that “Congress’s judgment” as expressed in 47 U.S.C. § 230 “reinforces our conclusion that the Platforms’ censorship is not speech under the First Amendment.” Maj. Op. at 39. That opinion refers to this language: “No provider or user of an interactive computer service” — interactive computer service being a defined term encompassing a wide variety of information services, systems, and access software providers — “shall be treated as the publisher or speaker of any information provided by another content provider.” 47 U.S.C. § 230(c)(1). Though I agree that Congressional fact-findings underlying enactments may be considered by courts, the question here is whether the Platforms’ barred activity is an exercise of their First Amendment rights. If it is, Section 230’s characterizations do not transform it into unprotected speech.

The Platforms also are criticized for what my colleague sees as an inconsistent argument: the Platforms analogize their conduct to the exercise of editorial discretion by traditional media outlets, though Section 230 by its terms exempts them from traditional publisher liability. This may be exactly how Section 230 is supposed to work, though. Contrary to the contention about inconsistency, Congress in adopting Section 230 never factually determined that “the Platforms are not ‘publishers.’” Maj. Op. at 41. As one of Section 230’s co-sponsors — former California Congressman Christopher Cox, one of the amici here — stated, Section 230 merely established that the platforms are not to be treated as the publishers of pieces of content when they take up the mantle of content moderation, which was precisely the problem that Section 230 set out to solve: “content moderation . . . is not only consistent with Section 230; its protection is the very raison d’etre of Section 230.” In short, we should not force a false dichotomy on the Platforms. There is no reason “that a platform must be classified for all purposes as either a publisher or a mere conduit.” In any case, as Congressman Cox put it, “because content moderation is a form of editorial speech, the First Amendment more fully protects it beyond the specific safeguards enumerated in § 230(c)(2).” I agree.

Anyway, that’s the quick analysis of this mess. There will be more to come, and I imagine this will be an issue for the Supreme Court to sort out. I wish I had confidence that they would not contradict themselves, but I’m not sure I do.

The future of how the internet works is very much at stake with this one.

Filed Under: 1st amendment, 5th circuit, andy oldham, content moderation, hb 20, leslie southwick, social media, texas

Court Ignores That Texas Social Media Censorship Law Was Blocked As Unconstitutional: Orders Meta To Reinstate Account

from the [gestures-to-lowest-common-denominator]-who-among-us dept

Remember how Texas passed a social media content moderation law which was then blocked as unconstitutional by a federal court? Apparently people in Texas remember the passing of the law, but not the fact that it was blocked. Incredibly, this includes a judge as well.

If it had been allowed to take effect, Texas’ new, batshit insane “social media censorship” law would have allowed all sorts of bizarre claims to be made in court and greatly increased the odds that even the most ridiculous complaints would result in at least a temporary win for aggrieved plaintiffs. But, because it seems that everyone in a Texas court ignored the fact that the law has been blocked, we get to see how it all would have played out otherwise.

Welcome to the litigation party, Trump acolyte and would-be gubernatorial hopeful, Chad Prather. “Chad Whom?,” I hear you legitimately asking. Well, according to this Wikipedia article, Prather is “an American conservative political commentator, comedian and internet personality.” Whew.

He’s also attempting to unseat the current Texas governor, Greg Abbott — who is pretty much the same guy Prather is, only without the “comedian and internet personality” bio. Abbott is also a Trump acolyte and another fine argument for returning Texas to Mexico. Abbott has his own problems with so-called social media “censorship.” He has gone after Google and approved unconstitutional laws attempting to undermine Section 230 immunity and fully supports similar efforts proposed by others as idiotic and short-sighted as he is.

Chad Prather apparently feels Abbott is operating too far to the left, considering the governor is angling for a Facebook data center while spending a great deal of his ill-spent time seeking to undermine the legal protections that give idiots like Governor Abbott a sizable presence on sizable social media platforms.

Prather is now suing Facebook under this new law (which, I remind you, has already been deemed unconstitutional and blocked from going into effect) for suspending his account, something he claims, in a series of conclusory statements, is obviously part of some sort of conspiracy between Meta and Governor Abbott to derail his attempt to unseat Abbott.

Thanks to Courthouse News Service — which always posts copies of legal documents it covers — we can read the inadvertently hilarious lawsuit [PDF] Prather has filed in a Texas county court. There are ways to be taken semi-seriously. And then there is what Prather has chosen to do: toss a series of conclusory statements into a county court in hopes of using Texas’ shitty (again, already blocked as unconstitutional) new social media law to dodge well-settled moderation issues that have generated plenty of precedent in federal courts. That the law was blocked before it went into effect apparently doesn’t matter to anyone involved in this lawsuit, which is just a little bit strange.

Let’s first take a look at the claims:

On February 21, 2022, just 8 days before the Election, Defendant suspended Prather from its Facebook social media platform for at least 7 days.

Facebook’s action against Prather severely inhibits his ability to communicate with potential voters and will cause immediate and irreparable harm by damaging his chances at winning the Election. There is no available remedy at law to Plaintiff for this interference with his ability to effectively campaign through social media.

Sure, there is. There are plenty of “available remedies,” starting with the inexplicably unpopular “more speech.” The plaintiff runs his own website. He also has apparently uninterrupted access to Parler, Twitter, and Instagram. Nevertheless, Prather insists this temporary inconvenience is not only actionable, but the direct result of collusion between Facebook and the state’s current governor:

It is likely no coincidence that Facebook chose to censor Prather so close to this hotly contested Election against Gov. Abbott. While publicly speaking out against censorship on social media, Gov. Abbott has been privately negotiating a deal with Facebook to bring the company’s new data center to Texas.

Is it likely, though? Prather has added communications between the governor and Facebook about the data center to his lawsuit (as exhibits), but fails to explain how this led Facebook to target his page for suspension. It certainly isn’t collusion. Prather doesn’t even register on the public’s radar, according to recent polls. Also, Prather seems to ignore that Gov. Abbott was a vocal supporter of this (unconstitutional and blocked by the courts) law that Prather is now trying to use… to claim that he was blocked to protect Gov. Abbott. Which, by itself should raise all sorts of questions.

Nonetheless, Prather insists he’s been beset on all sides by powerful enemies.

The implications of this letter and the timing of Facebook’s censorship of Chad Prather should shock the conscience of this Court. Prather has a massive following on Facebook and has been a vocal critic of Gov. Abbott on his social media. It appears Facebook has likely censored a highly popular grassroots candidate for governor running against Gov. Abbott for the purpose of shoring up Abbott’s chances of winning the primary in order to protect Facebook’s pending deal with Gov. Abbott.

In other words, a California-based social media platform is actively interfering in the Texas gubernatorial elections to tip the scales in favor of the sitting governor of Texas who has just signed a law targeting them, so that he can give them a sweetheart business deal using taxpayer money. Sure. Uh huh. Makes sense.

Even if we assume these statements to be true (and we certainly don’t), what’s actionable here? Normally nothing would be. And here, nothing should be because the courts already blocked this law from going into effect. But Prather is trying to use the same law passed by the governor he now claims is colluding against him to bring this lawsuit against Facebook. This stupid law allows Texas residents to bring this completely stupid cause of action. Or it would if a court hadn’t blocked it. But, again, everyone seems to be ignoring that kind of important point.

Declaratory Relief for Social Media Censorship

And these are the sort of lawsuits this law would encourage, something Governor Abbott may come to regret (if the law actually is allowed to go into effect). After all, the law allows pretty much anyone to sue a social media service over any form of moderation they experience.

CPRC § 143A.002 provides: “(a) a social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: (1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in this state or any part of this state.”

According to Prather’s filing, he was suspended over a direct message that was presumably reported as harassing by the recipient. That’s all it takes to trigger a lawsuit under Texas’ social media law. However, clearing this extremely low bar is not the same as credibly alleging collusion between the governor and Meta, which is what Prather attempts here.

And it appears the court may have already sided with Prather, at least temporarily, even though this law never went into effect. Somehow, he has already secured a temporary restraining order [PDF] that tells Facebook to reinstate his account. The judge cites the new social media law even though the federal court already enjoined it as unconstitutional. It is unclear how this is even possible, though Prather and his lawyer, Paul Davis — who made quite the name for himself as an insurrectionist attorney by trying to sue to undo the entire 2020 Presidential election — are celebrating. I mean, to their credit, it is quite a feat to get an unconstitutional prior restraint ruling issued under a law that has already been declared unconstitutional and enjoined from being put into effect. So, kudos?

The garbage law has allowed a ridiculous person to force a private company to bend to his wishes — even though the law was not allowed to go into effect because of its unconstitutional nature. And Abbott’s hypocritical support of a law that undermines his belief the free market should not be fucked with has resulted in one of his political challengers having his account reinstated to be used as a megaphone to tout a lawsuit claiming the governor is in bed with Facebook. There are no winners here, just a bunch of losers who can’t handle being told to shut up by the free services they exploit.

Filed Under: 1st amendment, chad prather, content moderation, greg abbott, hb 20, hb20, paul davis, prior restraint, social media, temporary restraining order, texas
Companies: facebook, meta