hb20 – Techdirt
Fifth Circuit: You Have To Do A Ton Of Busywork To Show Texas’s Social Media Law Violates The First Amendment
from the get-to-work dept
If the government passes a law that infringes on the public’s free speech rights, how should one challenge the law?
As recent events have shown, the answer is more complex than many realized.
A few years ago, both Texas and Florida passed “social media content moderation” laws, which would limit how social media platforms could engage in any kind of moderation while simultaneously demanding that they explain their editorial decision-making. The laws were then challenged as unconstitutional under the First Amendment.
While three out of the four lower courts (two district courts and one of the two appeals courts) that heard the challenges found it to be patently obvious that the laws were unconstitutional incursions on free speech, the Supreme Court took a different approach to the cases. The Supreme Court effectively punted on the issue, while giving some clues about how the First Amendment should apply.
Specifically, the Supreme Court sent the challenges of both laws back to the lower courts, saying that since both challenges — brought by the trade groups NetChoice and CCIA — were presented as “facial challenges,” it required a different analysis than any of the lower courts had engaged in.
A “facial challenge” is one where the plaintiffs are saying, “yo, this entire law is clearly unconstitutional.” An alternative approach would be an “as applied challenge,” in which case you effectively have to wait until one of the states tries to use the law against a social media platform. Then you can respond and say “see? this violates my rights and therefore is unconstitutional!”
The Supreme Court said that if something is a facial challenge, then the courts must first do a convoluted analysis of every possible way the law could be applied, to see whether some applications of the law might be constitutional.
That said, the Supreme Court’s majority opinion still took the Fifth Circuit to task, highlighting how totally blinkered and disconnected from the clear meaning and historical precedents its analysis of the First Amendment was. Over and over again, the Supreme Court dinged Texas’ law as pretty obviously unconstitutional. Here’s just one snippet of many:
They cannot prohibit private actors from expressing certain views. When Texas uses that language, it is to say what private actors cannot do: They cannot decide for themselves what views to convey. The innocent-sounding phrase does not redeem the prohibited goal. The reason Texas is regulating the content moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose.
Indeed, the Supreme Court noted that it can already see that the Fifth Circuit is on the wrong track, even as it was sending the case back over the procedural issues required for a facial challenge:
But there has been enough litigation already to know that the Fifth Circuit, if it stayed the course, would get wrong at least one significant input into the facial analysis. The parties treated Facebook’s News Feed and YouTube’s homepage as the heartland applications of the Texas law. At least on the current record, the editorial judgments influencing the content of those feeds are, contrary to the Fifth Circuit’s view, protected expressive activity. And Texas may not interfere with those judgments simply because it would prefer a different mix of messages. How that matters for the requisite facial analysis is for the Fifth Circuit to decide. But it should conduct that analysis in keeping with two First Amendment precepts. First, presenting a curated and “edited compilation of [third party] speech” is itself protected speech. Hurley, 515 U. S., at 570. And second, a State “cannot advance some points of view by burdening the expression of others.” PG&E, 475 U. S., at 20. To give government that power is to enable it to control the expression of ideas, promoting those it favors and suppressing those it does not. And that is what the First Amendment protects all of us from.
But, either way, the case has gone back to the Fifth Circuit, which is now sending it back to the lower court with instructions that the trade groups are going to have to argue every single point as to why the law should be considered unconstitutional.
As the Supreme Court recognized, it is impossible to apply that standard here because “the record is underdeveloped.” Id. at 2399. Who is covered by Texas House Bill 20 (“H.B. 20”)? For these actors, which activities are covered by H.B. 20? For these covered activities, how do the covered actors moderate content? And how much does requiring each covered actor to explain its content-moderation decisions burden its expression? Because these are fact-intensive questions that must be answered by the district court in the first instance after thorough discovery, we remand.
So, basically, get ready for a ridiculously long and involved process for challenging the law. The ruling also takes a swipe at the district court in the process:
A proper First Amendment facial challenge proceeds in two steps. The “first step” is to determine every hypothetical application of the challenged law. Id. at 2398 (majority opinion). The second step is “to decide which of the law[’s] applications violate the First Amendment, and to measure them against the rest.” Ibid. If the “law’s unconstitutional applications substantially outweigh its constitutional ones,” then and only then is the law facially unconstitutional. Id. at 2397. “[T]he record” in this case “is underdeveloped” on both fronts. See id. at 2399; see also id. at 2410–11 (Barrett, J., concurring) (noting the record failed to “thoroughly expose[] the relevant facts about particular social-media platforms and functions”); id. at 2411 (Jackson, J., concurring in part and concurring in the judgment) (noting plaintiffs failed to show “how the regulated activities actually function”); id. at 2412 (Thomas, J., concurring in the judgment) (noting plaintiffs “failed to provide many of the basic facts necessary to evaluate their challenges to H.B. 20”); id. at 2422 (Alito, J., concurring in the judgment) (noting the “incompleteness of this record”). That is a consequence of how this case was litigated in district court.
There is plenty of busywork for all involved:
There is serious need of factual development at the second step of the analysis as well. To determine if any given application of H.B. 20’s “content-moderation provisions” is unconstitutional, the district court must determine “whether there is an intrusion on protected editorial discretion.” Id. at 2398 (citation omitted). That requires a detailed understanding of how each covered actor moderates content on each covered platform. See id. at 2437 (Alito, J., concurring in the judgment) (“Without more information about how regulated platforms moderate content, it is not possible to determine whether these laws lack a plainly legitimate sweep.” (quotation omitted)). Focusing primarily on Facebook’s News Feed or YouTube’s homepage will not suffice, as “[c]urating a feed and transmitting direct messages,” for example, likely “involve different levels of editorial choice, so that the one creates an expressive product and the other does not.” Id. at 2398 (majority opinion).
Moreover, one of the principal factual deficiencies in the current record, according to the Supreme Court, concerns the algorithms used by plaintiffs’ members. See, e.g., id. at 2404 n.5; id. at 2410–11 (Barrett, J., concurring); id. at 2424, 2427, 2436–38 (Alito, J., concurring in the judgment). It matters, for example, if an algorithm “respond[s] solely to how users act online,” or if the algorithm incorporates “a wealth of user-agnostic judgments” about the kinds of speech it wants to promote. Id. at 2404 n.5 (majority opinion); see also id. at 2410 (Barrett, J., concurring). And this is only one example of how the “precise technical nature of the computer files at issue” in each covered platform’s algorithm might change the constitutional analysis. ROA.539 (quotation omitted). It also bears emphasizing that the same covered actor might use a different algorithm (or use the same algorithm differently) on different covered services. For example, it might be true that X is a covered actor and that both its “For You” feed and its “Following” feed are covered services. But it might also be true that X moderates content differently or that its algorithms otherwise operate differently across those two feeds. That is why the district court must carefully consider how each covered actor moderates content on each covered service.
Separately, there’s the question about the transparency and explanatory parts of the law. Incredibly, the ruling says that the lower court has to explore whether or not being required to explain your editorial decisions is a First Amendment-violating burden:
When performing the second step of the analysis, the district court must separately consider H.B. 20’s individualized-explanation provisions. As the Supreme Court has instructed, that requires “asking, again as to each thing covered, whether the required disclosures unduly burden expression.” Moody, 144 S. Ct. at 2398 (majority opinion). The first issue to address here is the same one addressed above: whether each covered actor on each covered platform is even engaging in expressive activity at all when it makes content-moderation decisions. See id. at 2399 n.3 (explaining that these provisions “violate the First Amendment” only “if they unduly burden expressive activity” (emphasis added)). Then for each covered platform engaging in expressive activity, the district court must assess how much the requirement to explain that platform’s content-moderation decisions burdens the actor’s expression.
The one interesting tidbit here is the role that ExTwitter plays in all of this. Already, the company has shown that while it is complying with the EU DSA’s requirements to report all moderation activity, it’s doing so grudgingly. Given the nature of the Fifth Circuit (and this panel of judges in particular), it would certainly be interesting to have Elon actually highlight how burdensome the law is on his platform.
Remember, the law at issue, HB 20, was passed under the (false) belief that “big social media companies” were unfairly moderating to silence conservatives. The entire point of the law was to force such companies to host conservative speech (including extremist, pro-Nazi speech). The “explanations” portion of the law was basically to force the companies to reveal any time they took actions against such speech so that people could complain.
But now that ExTwitter is controlled by a friend — though one who is frequently complaining about excessive government regulation — it would be quite interesting if he gets dragged into this lawsuit and participates by explaining just how problematic the law is, in a way that might make even Judge Andrew Oldham (who seems happy to rule whichever way makes Donald Trump happiest) realize that the law is bad.
Either way, for now, as the case goes back to the district court, NetChoice and CCIA will have an awful lot of work to do, for two groups that are already incredibly overburdened in trying to protect the open internet.
Filed Under: 1st amendment, 5th circuit, andrew oldham, facial challenge, hb20, moody v. netchoice, netchoice v. paxton, texas
Companies: ccia, netchoice, twitter, x
Subreddit Discriminates Against Anyone Who Doesn’t Call Texas Governor Greg Abbott ‘A Little Piss Baby’ To Highlight Absurdity Of Content Moderation Law
from the greg-abbott-is-a-little-piss-baby dept
Last year, I tried to create a “test suite” of websites that any new internet regulation ought to be “tested” against. The idea was that regulators were so obsessively focused on the biggest of the big guys (i.e., Google, Meta) that they never bothered to realize how it might impact other decently large websites that involved totally different setups and processes. For example, it’s often quite impossible to figure out how a regulation about Google and Facebook content moderation would work on sites like Wikipedia, Github, Discord, or Reddit.
Last week, we called out that Texas’s HB 20 social media content moderation law almost certainly applies to sites like Wikipedia and Reddit, yet I couldn’t see any fathomable way in which those sites could comply, given that so much of the moderation on each is driven by users rather than the company. It’s been funny watching supporters of the law try to insist that this is somehow easy for Wikipedia (probably the most transparent larger site on the internet) to comply with by being “more transparent and open access.”
If you somehow can’t see that tweet or screenshot, it’s a Trumpist defender of the law responding to someone asking how Wikipedia can comply with the law, saying:
Wikipedia would have to offer more transparent and open access to their platform, which would allow truth to flourish over propaganda there? Is that what you’re worried about, or what is it?
To which a reasonably perplexed Wikipedia founder Jimmy Wales rightly responds:
What on earth are you talking about? It’s like you are writing from a different dimension.
Anyway… it seems some folks on Reddit are realizing the absurdity of the law and trying to demonstrate it in the most internety way possible. Michael Vario alerts us that the r/PoliticalHumor subreddit is “messing with Texas” by requiring every comment to include the phrase “Greg Abbott is a little piss baby” or be deleted in a fit of content moderation discrimination in violation of the HB20 law against social media “censorship.”
Until further notice, all comments posted to this subreddit must contain the phrase “Greg Abbott is a little piss baby”
There is a reason we’re doing this, the state of Texas has passed H.B. 20, Full text here, which is a ridiculous attempt to control social media. Just this week, an appeals court reinstated the law after a different court had declared it unconstitutional. Vox has a pretty easy to understand writeup, but the crux of the matter is, the law attempts to force social media companies to host content they do not want to host. The law also requires moderators to not censor any specific point of view, and the language is so vague that you must allow discussion about human cannibalization if you have users saying cannibalization is wrong. Obviously, there are all sorts of real world problems with it, the obvious ones being forced to host white nationalist ideology or insurrectionist ideation. At the risk of editorializing, that might be a feature, not a bug for them.
Anyway, Reddit falls into a weird category with this law. The actual employees of the company Reddit do, maybe, one percent of the moderation on the site. The rest is handled by ~~disgusting jannies~~ volunteer moderators, who, Reddit has made quite clear over the years, aren’t agents of Reddit (mainly so they don’t lose millions of dollars every time a mod approves something vaguely related to Disney and violates their copyright). It’s unclear whether we count as users or moderators in relation to this law, and none of us live in Texas anyway. They can come after all 43 dollars in my bank account if they really want to, but Virginia has no obligation to extradite or anything.

We realized what a ripe situation this is, so we’re going to flagrantly break this law. Partially to raise awareness of the bullshit of it all, but mainly because we find it funny. Also, we like this Constitution thing. Seems like it has some good ideas.
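To make the rule concrete, here’s a minimal sketch of its logic in Python. It’s purely illustrative: the subreddit presumably enforces this through Reddit’s AutoModerator, whose actual configuration isn’t public, so the function and sample comments below are assumptions, not r/PoliticalHumor’s real tooling.

```python
# Illustrative sketch only: the subreddit presumably uses Reddit's
# AutoModerator; this just expresses the announced rule in plain Python.
REQUIRED_PHRASE = "greg abbott is a little piss baby"

def should_remove(comment_text: str) -> bool:
    """Remove any comment that lacks the required phrase (case-insensitive)."""
    return REQUIRED_PHRASE not in comment_text.lower()

comments = [
    "Greg Abbott is a little piss baby. Anyway, here's my actual joke...",
    "Here's my actual joke, no magic phrase included.",
]
for c in comments:
    print("removed" if should_remove(c) else "kept", "->", c)
```

Every comment containing the phrase survives and every comment without it gets removed, which is the point: under HB20’s broad language, that is trivially easy to frame as viewpoint discrimination.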
They also include a link to the page where people can file a complaint with the Texas Attorney General, Ken Paxton, asking him to investigate whether the deletion of any comments that don’t claim that his boss, Governor Greg Abbott, is “a little piss baby” is viewpoint discrimination in violation of the law.
Filed Under: 5th circuit, content moderation, greg abbott, greg abbott is a little piss baby, hb20, texas
Companies: reddit
Supreme Court Makes The Right Call: Puts Texas Social Media Law Back On Hold
from the but-the-lack-of-details-is-concerning dept
Exhale.
Just a little while ago, the Supreme Court put Texas’s ridiculous content moderation law back on hold. Specifically, it granted NetChoice and CCIA’s emergency application to put the law on hold, following the 5th Circuit’s decision to reinstate the law without any explanation (which came about in response to a district court’s lengthy explanation for why the law was unconstitutional).
The Supreme Court’s ruling here… is a little strange. It was a 5-4 decision, but probably not the lineup you might expect. The ruling to grant the stay (i.e., to block the law from being enforced) was supported by Chief Justice Roberts, along with Justices Barrett, Breyer, Kavanaugh, and Sotomayor. That leaves the four who wished to have the law still in place as Justices Alito, Thomas, Gorsuch, and… Kagan?!
Unfortunately there’s little in the way of details here, as there is no explanation for the majority decision to put the law on hold. Kagan only notes that she would deny the application. Many are speculating that her reasoning was based on her distaste for the so-called “Shadow Docket” of emergency applications where this all played out. Though, as shadow docket expert Steve Vladeck notes, while Kagan has been vocal about disapproving of the use of the shadow docket, that hasn’t prevented her from granting relief via it in the past.
And while there is no majority opinion to explain the thinking, Alito did write a dissent that, as perhaps could be expected, is just full of nonsense. Thomas signed onto it, along with Gorsuch. That Alito and Thomas would align on this isn’t that surprising, given what they’ve said in the past (though one would hope with slightly more briefing in front of them they might have realized their positions are fundamentally mistaken — but no such luck). Gorsuch is kind of surprising, as on similar issues he’s seemed more open to reason.
It’s good to see Kavanaugh stay consistent here, as his ruling in the Halleck case was an important precedent, and it would be bizarre to see him flip so quickly.
As for the dissent, authored by Alito, well, it’s a mess. We don’t need to do a full analysis on it, because it doesn’t really matter yet. But Alito seems extremely confused about a few important concepts and it will be important to carefully brief those concepts in more detail when this issue, inevitably, returns to the Supreme Court docket along more traditional lines. Also, it’s quite incredible for him and his two co-signers to suggest that you can simply take away 1st Amendment rights and only come back and determine if that was okay at a later date.
It is also… not entirely clear to me what happens next. In theory, the 5th Circuit is still expected to release its more complete opinion turning the law back on. But… now that doesn’t matter because the Supreme Court has already blocked that? Or, could that turn the law back on again? It’s all a bit unclear, but at least in the very, very short term, by an uncomfortably narrow margin, Texas’ dangerously bad content moderation law is not in effect.
Filed Under: 1st amendment, clarence thomas, content moderation, elena kagan, hb20, neil gorsuch, samuel alito, shadow docket, social media, supreme court, texas
Companies: ccia, netchoice
And Now The Copia Institute Tells The US Supreme Court There’s A Big Problem With Texas’s Social Media Law
from the first-amendment-fire-drill dept
Last week a bizarre one-line order from the Fifth Circuit lifted the injunction on Texas’s social media law, allowing it to go into effect, despite all the massive problems with it – including the extent to which it violates the First Amendment and Section 230.
So NetChoice and CCIA filed an emergency application with the U.S. Supreme Court to try to have it at least reinstate the injunction while the case worked its way through the appellate courts. And yesterday the Copia Institute filed an amicus brief supporting their application.
The brief is, in many ways, an encore performance of the brief we’d submitted to the Fifth Circuit, using ourselves and Techdirt as an example of how the terms of HB20 violate our constitutional and statutory rights, but this time around there are a few additional arguments that may be worth their own posts (in fact, one is a throwback to an old post). Also, one key argument that we added applies less to the problems with HB20 itself and more to the problems involved with the Fifth Circuit lifting the injunction, and especially the way that it did.

We pointed out that lifting the injunction, without any explanation why, looked an awful lot like the sort of prior restraint that has long been considered verboten under the First Amendment. State actors (including courts) are not supposed to chill the exercise of expression unless and until there’s been the adjudication needed to find that the First Amendment permits that sanction. Here the Fifth Circuit technically heard the case, but it issued a sanction stymying the exercise of speech (lifting the injunction) without ever actually having ruled that HB20’s chilling terms were actually ok under the First Amendment. Perhaps the court truly thinks HB20 is perfectly sound under the First Amendment; we don’t really know. And we can’t know, because it didn’t say anything. Which also means there’s nothing to appeal, because if the Fifth Circuit made an error in thinking HB20 is ok (which seems likely, because that law conflicts with so much established First Amendment precedent, as well as common sense), no one can say where that error was, or what of its judgment should be reversed.
Nevertheless, the law is out there, in effect now, doing harm to platforms’ expression. HB20 still needs to be thrown out on the merits, but for the moment we all just need the Supreme Court to stop the bleeding.
Filed Under: 1st amendment, 5th circuit, content moderation, free speech, hb20, prior restraint, supreme court, texas
Companies: copia institute, techdirt
Author Of Texas’ Social Media Law Admits That He Meant The Law To Exempt Any Moderation Decisions Protected By Section 230 (That’s Everything)
from the briscoe-briscoe-cain dept
Well, this is awkward. Yesterday I wrote about how there was a strong argument that Twitch’s removal of the Buffalo mass murderer’s livestream of his killing spree violated Texas’s ridiculous social media law. The main saving grace for Twitch would be that it was possible (though it’s unclear) its userbase was just under the 50 million US average monthly users required to trigger the law. However, even if the law didn’t reach Twitch, it definitely reaches Facebook and Twitter, two other platforms that have been trying (and not always succeeding) to remove the video.
That said, it was a bit surprising when the main author of the bill, Briscoe Cain, showed up in my Twitter mentions to insist that the bill does not prevent Twitch from removing the video. His answer was revealing, though not in the way he meant it to be.
If you can’t see the image, Cain says that “HB20 specifically authorizes social media platforms to censor that kind of content.” Then he posts a screenshot of two laws. First he posts the part of HB20 (Section 143A.006) that says:
This chapter does not prohibit a social media platform from censoring expression that:
(1) the social media platform is specifically authorized to censor by federal law;
And he highlights the “federal law” part. Then he, somewhat amazingly, posts a screenshot of the Good Samaritan section of Section 230, and specifically highlights the “excessively violent” part of 230(c)(2).
No provider or user of an interactive computer service shall be held liable on account of–

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.
So, there are many, many, many problems with this, but let’s get to the biggest one. Mainly: he is admitting that any moderation choices that are protected under Section 230 are exempt from his law, because he’s claiming that his law incorporates Section 230. Which is all moderation choices. Which means he is admitting that his law actually does nothing at all. Or, at best, that it’s a kind of “trigger law” that really only matters if Section 230 is repealed or massively reformed.
Considering that, in defending the law, the State of Texas explicitly claimed that HB20 is not preempted by Section 230, this is quite an admission. Here was the argument the state made, which the author of the bill has now effectively conceded is false:
Section 230 simply does not preempt H.B. 20. This is so for two reasons. Preemption is a specific concept: “Congress enacts a law that imposes restrictions or confers rights on private actors; a state law confers rights or imposes restrictions that conflict with the federal law; and therefore the federal law takes precedence and the state law is preempted.” The “restrictions” that H.B. 20 imposes on interactive computer services do not conflict with the “rights”—immunity from damages liability for third party content hosted— Section 230 confers on them.
So, HB20 is not preempted by 230… but since 230 protects these moderation choices, and HB20 exempts whatever federal law authorizes… it effectively is?
Anyway, Cain’s argument is even dumber. Note what HB20 says: that it does not prohibit moderation choices (which he falsely calls censorship) if the website is “specifically authorized to censor by federal law.” The implication of his claim, then, is that he thinks (incorrectly) that moderation only exists on social media platforms because 230 “authorizes” them to moderate.
That is very, very wrong. The 1st Amendment is what allows websites to moderate. They have their own 1st Amendment rights that allow for editorial discretion and a right not to associate with anyone or any idea. Section 230 simply provides a procedural setup that allows bogus mistargeted lawsuits to get kicked out of court quickly.
But just the fact that Briscoe Cain thinks that social media websites need 230 to “authorize” them to moderate raises questions about whether, as an actual legislator, he understands literally any of this.
Of course, when people started to confront him over this, he refused to give a direct answer, and started claiming that people had trouble reading his law. I don’t believe that’s true. The actual problem is that Cain apparently doesn’t even understand the law he has written, and how it intersects with both Section 230 and the 1st Amendment.
Yet another reminder: we should elect fewer stupid people.
Filed Under: 1st amendment, briscoe cain, content moderation, free speech, hb20, preemption, section 230, social media, texas
Did Twitch Violate Texas’ Social Media Law By Removing Mass Murderer’s Live Stream Of His Killing Spree?
from the you-asked-for-this-texas dept
As you’ve no doubt heard, on Saturday there was yet another horrific shooting, this one in Buffalo, killing 10 people and wounding more. From all current evidence, the shooter, a teenager, was a brainwashed white nationalist, spewing nonsense and hate in a long manifesto that repeated bigoted propaganda found in darker corners of the internet… and on Fox News’ evening shows. He also streamed the shooting rampage live on Twitch, and apparently communicated some of his plans via Discord and 4chan.
Twitch quickly took down the stream and Discord is apparently investigating. All of this is horrible, of course. But, it seems worth noting that it’s quite possible Twitch’s removal could violate Texas’ ridiculously fucked up social media law. Honestly, the only thing that might save the two companies (beyond the fact that it’s unlikely someone would go to court over this… we think) is that both Twitch and Discord might be just ever so slightly below the 50 million average monthly US users required to trigger the law. But that’s not entirely clear (another reason why this law is stupid: it’s not even clear who is covered by it).
A year ago, Discord reported having 150 million monthly active users, though that’s worldwide. The question is how many of them are in the US. Is it more than a third? Twitch apparently has a very similar 140 million monthly active users globally. At least one report says that approximately 21% of Twitch’s viewership is in the US. That same report says that Twitch’s US MAUs are at 44 million.
Of course the Texas law, HB20, defines user quite broadly, and also says once you have over 50 million in a single month you’re covered. So it’s quite possible both companies are covered.
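To put rough numbers on the threshold question, here’s a quick back-of-the-envelope sketch in Python. Everything in it comes from the third-party figures cited above; the US-share values are assumptions (the article’s own open questions), not known data.

```python
# Back-of-the-envelope only: all inputs are rough third-party figures from
# the reports cited above, not official platform disclosures.
THRESHOLD = 50_000_000  # HB20 triggers at more than 50M US monthly users

platforms = {
    # name: (global monthly active users, assumed US share of those users)
    "Discord": (150_000_000, 1 / 3),  # "is it more than a third?" is the open question
    "Twitch":  (140_000_000, 0.21),   # one report's US share of *viewership*
}

for name, (global_mau, us_share) in platforms.items():
    est_us = global_mau * us_share
    status = "over" if est_us > THRESHOLD else "not over"
    print(f"{name}: ~{est_us / 1e6:.1f}M estimated US users -> {status} the 50M line")

# Note the wrinkle: the same report pegs Twitch's US MAUs at 44M, well above
# the 21%-of-140M estimate (~29.4M); viewership share and user share differ.
```

Which is exactly the problem: depending on which of these equally plausible numbers you use, either company might or might not be covered, and nothing in the law says how to count.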
Focusing on Twitch: taking down the streamer’s account might violate the law. Remember that the law says that you cannot “censor” based on viewpoint. And anyone in the state of Texas can bring a lawsuit claiming they were deprived of content based on viewpoint. Some will argue back that a livestream of a killing spree isn’t about viewpoint, but remember, this idiot teenager made it clear he was doing this as part of his political views. At the very least, there’s a strong argument that any effort to take down his manifesto (if not the livestream) could be seen as violating the law.
And just to underline that this is what the Texas legislature wanted, you may recall that we wrote about a series of amendments that were proposed when this law was being debated. And one of the amendments said that the law would not block the removal of content that “directly or indirectly promotes or supports any international or domestic terrorist group or any international or domestic terrorist acts.” AND THE LEGISLATURE VOTED IT DOWN.
So, yes, the Texas legislature made it abundantly clear that this law should block the ability of websites to remove such content.
And, due to the way the law is structured, it’s not just those who were moderated who can sue, but anyone who feels their “ability to receive the expression of another person” was denied over the viewpoint of the speaker. So, it appears that a white nationalist in Texas could (right now) sue Twitch and demand that it reinstate the video, and Twitch would have to defend its reasons for removing the video, and convince a court it wasn’t over “viewpoints” (or that Twitch still has fewer than 50 million monthly average users, and that it has never passed that threshold).
Seems kinda messed up either way.
Of course, I should also note that NY’s governor is already suggesting (ridiculously) that Twitch should be held liable for not taking the video down fast enough.
Gov. Hochul said the fact that the live-stream was not taken down sooner demonstrates a responsibility those who provide the platforms have, morally and ethically, to ensure hate cannot exist there. She also said she hopes it will also demonstrate a legal responsibility for those providers.
“The fact that this act of barbarism, this execution of innocent human beings could be live-streamed on social media platforms and not taken down within a second says to me that there is a responsibility out there … to ensure that such hate cannot populate these sites.”
So, it’s possible that Twitch could face legal fights in New York for being too slow to take down the video and in Texas for taking down the video at all.
It would be kind of nice if politicians on both sides of the political aisle remembered how the 1st Amendment actually works, and focused the blame on those actually responsible, not the social media tools that are used to communicate.
Filed Under: buffalo, content moderation, hb20, mass murder, racism, shooting, social media, texas, white nationalist
Companies: discord, twitch
Supreme Court Asked For An Emergency Review Of Texas’ Dangerous Social Media Law
from the and-we're-off... dept
As you’ll recall, last Wednesday, the 5th Circuit surprised lots of people by immediately reinstating Texas’s ridiculous content moderation law that basically creates an open season to sue large social media sites for any moderation choices those sites make. The surprise wasn’t necessarily the judges’ decision, which had been telegraphed two days earlier via the judges’ (plural) extremely confused questions regarding the law (including saying that Twitter was not a website, which it is). The bigger surprise was that they reinstated the law just two days later, without any written opinion, or giving the plaintiffs (trade groups that represent many large internet companies) a chance to appeal. That’s just weird.
Late on Friday, the trade associations, NetChoice and CCIA, petitioned Justice Alito with an emergency application to stop the law from going into effect. Technically, it’s an “emergency application for immediate administrative relief and to vacate stay of preliminary injunction.” Just to break that apart: the law was passed, and the district court granted a preliminary injunction, blocking the law from going into effect (while noting the law was pretty clearly unconstitutional). The 5th Circuit’s reversal was putting a “stay” on the preliminary injunction, meaning that the law could go into effect. So, to block the law again, they need the Supreme Court to vacate the stay on the preliminary injunction blocking the law. Simple. Got it? Got it.
Also, the reason they petitioned Alito is that each Circuit court gets one of the Justices as that Circuit’s Justice, and Alito covers the 5th. So these kinds of emergency applications, which are part of the now infamous “shadow docket” of the court, have to go up to the Justice for that Circuit. If that Justice refuses, then the petitioners can try other Justices. In this case, on Saturday, Alito gave Texas until Wednesday to file a response.
The petition itself is worth reading. It’s 55 incredibly thorough pages. We’ll get to the content in a moment, but it’s worth noting that there is some serious legal firepower here, with a heavy focus on both knowing the law in Texas and knowing the conservative Justices. The eye-catching name is Paul Clement, former Solicitor General of the US under George W. Bush, who is extremely well known in legal circles and has been involved in tons of high-profile cases. And then it also includes two recent Texas Solicitors General, Kyle Hawkins and Scott Keller, both appointed by Attorney General Ken Paxton, whose office is defending this law. I mean, Hawkins only stepped down from that role last year. Notably, Hawkins also clerked for Alito at the Supreme Court (and for one of the judges on the 5th Circuit panel). Another lawyer on this filing is Katherine Yarger, who clerked for Neil Gorsuch when he was on the 10th Circuit and for Clarence Thomas at the Supreme Court. These are not coincidences.
As for the content of the request, it comes in with a strong opening:
Texas House Bill 20 (“HB20”) is an unprecedented assault on the editorial discretion of private websites (like Facebook.com, Instagram.com, Pinterest.com, Twitter.com, Vimeo.com, and YouTube.com) that would fundamentally transform their business models and services. HB20 prohibits covered social media platforms (many of which are members of Applicants NetChoice and CCIA) from engaging in any viewpoint-based editorial discretion. Thus, HB20 would compel platforms to disseminate all sorts of objectionable viewpoints—such as Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders. HB20 also imposes related burdensome operational and disclosure requirements designed to chill the millions of expressive editorial choices that platforms make each day.
First point they make is that the 5th Circuit’s stay without any opinion is problematic in itself, before even getting to the underlying law:
Yet, on Wednesday night, a divided Fifth Circuit panel issued a one-sentence order granting a stay motion filed by the Texas Attorney General five months earlier, allowing him to immediately enforce HB20. This unexplained order deprives Applicants of the “careful review and a meaningful decision” to which they are “entitle[d].” Nken v. Holder, 556 U.S. 418, 427 (2009). The Fifth Circuit has yet to offer any explanation why the District Court’s thorough opinion was wrong. This Court should allow the District Court’s careful reasoning to remain in effect while an orderly appellate process plays out.
They also point out that this rush to reinstate the law could interfere with the 11th Circuit, which heard Florida’s appeal regarding its similar law a few weeks before the 5th Circuit heard its appeal. The 11th Circuit is still waiting to rule (and the expectation is they may take a while). As the briefing here notes, immediately reinstating the Texas law upsets the status quo in a scenario where it’s likely that no matter what happens with both laws, the Supreme Court will have to hear a more fully briefed case about them at the relevant point in the future. But rather than letting any of that play out, the 5th Circuit was just like “yup, turn on the law.” Which is generally not how these things work.
Vacating the stay in this case will maintain the status quo while the Eleventh Circuit also considers a parallel appeal concerning a preliminary injunction against Florida’s similar law. NetChoice, LLC v. Moody, 546 F. Supp. 3d 1082, 1086 (N.D. Fla. 2021), appeal docketed, 11th Cir. No. 21-12355 (11th Cir. July 13, 2021). Until the Fifth Circuit issued this stay, the status quo had been maintained pending a decision from at least one federal court of appeals weighing in on the constitutionality of unprecedented state laws regulating the worldwide speech of only some government-disfavored social media platforms. And even then, that decision would not have gone into effect until the appellate court’s mandate had issued or the parties sought further review in this Court. By issuing a stay and allowing the Texas Attorney General to enforce HB20 while appeals are still pending, the Fifth Circuit short-circuited the normal review process, authorizing Texas to inflict a massive change to leading global websites and undoubtedly also interfering with the Eleventh Circuit’s consideration of Applicants’ challenge to the similar Florida law.
The petition also points out how damaging it is to just put the law into effect.
Furthermore, the covered platforms face immediate irreparable injury many times over. Unrebutted record evidence demonstrates that it will be impossible for these websites to comply with HB20’s key provisions without irreversibly transforming their worldwide online platforms to disseminate harmful, offensive, extremist, and disturbing content—all of which would tarnish their reputations for offering appropriate content and cause users and advertisers to leave. As one of Applicants’ declarants stated, HB20 “would force us to change all of our systems to try to come into compliance.” App.350a. And because there is no “off-switch” to platforms’ current operations, the cost of revamping the websites’ operations would undo years of work and billions of dollars spent on developing some platforms’ current systems. Id. Even if platforms could revamp their entire communities, they would lose substantial revenue from boycotts by advertisers who do not want their ads to appear next to vile, objectionable expression. In the past, YouTube and Facebook “lost millions of dollars in advertising revenue” from advertisers who did not want their advertisements next to “extremist content and hate speech.”
And then we get to the basics of the 1st Amendment issues inherent here, starting with a citation of the (very useful) Justice Kavanaugh-authored ruling three years ago in Halleck. We’ve pointed to that case regularly, as it says quite clearly that private platforms have their own rights to moderate as they see fit and the government should not interfere. It’s no surprise that this filing kicks off with a strong reminder of that ruling, followed by a long list of other famous cases regarding the constitutional problems with compelled speech and association, and closing it out with a cite to Justice Thomas’ concurrence in Denver Area v. FCC, which was basically a precursor case to Halleck.
More fundamentally, the Fifth Circuit’s order contradicts bedrock First Amendment principles established by this Court. When “a private entity provides a forum for speech,” it may “exercise editorial discretion over the speech and speakers in the forum.” Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1930 (2019). This Court thus has repeatedly recognized that private entities have the right under the First Amendment to determine whether and how to disseminate speech. E.g., Hurley v. Irish-Am. Gay, Lesbian & Bisexual Group of Bos., 515 U.S. 557, 581 (1995); PG&E v. PUC of Cal., 475 U.S. 1, 12 (1986) (plurality op.);1 Miami Herald Publ’g Co. v. Tornillo, 418 U.S. 241, 258 (1974); see also Sorrell v. IMS Health Inc., 564 U.S. 552, 570 (2011); Arkansas Educ. TV Comm’n v. Forbes, 523 U.S. 666, 674 (1998); Denver
The simple reality is that until maybe a year or two ago, questions about the government compelling websites to carry speech easily would have been a slam dunk as unconstitutional under the 1st Amendment, with the most conservative members of the Court being the most vocal. It’s only in the last two years or so that a concerted effort has been made to flip conservatives completely into arguing that you can force private property owners to host speech. And both Thomas and Alito have publicly suggested they’re on-board with this position. This brief works hard to remind them, and their colleagues, of their principles, from back when it was believed they had them.
For what it’s worth, this is also likely why, later in the filing, the petitioners want to remind the Justices of the Masterpiece Cakeshop ruling, again specifically citing Thomas’ concurrence in that case.
Fourth, private entities cannot be compelled to disseminate speech even if they could “dissociate” themselves from the compelled publication by “simply post[ing] a disclaimer,” as that would “justify any law compelling speech.” Masterpiece Cakeshop, Ltd. v. Colorado C.R. Comm’n, 138 S. Ct. 1719, 1745 (2018) (Thomas, J., concurring). A publisher’s ability to disclaim compelled speech was present in Tornillo, PG&E, Hurley, and Wooley v. Maynard, 430 U.S. 705, 717 (1977). And the Court consistently held that government could not compel speech. (In any event, HB20 prohibits platforms from disclaiming compelled speech, because they are not permitted to “discriminate” among speech on their platform
The petition does a pretty nice job of laying out how content moderation is a form of editorial discretion, and that lots of websites wish to cultivate their own kinds of communities, and the government can’t just come in and interfere with that:
In short, platforms “publish,” Reno, 521 U.S. at 853, and “disseminate” speech authored by others, Sorrell, 564 U.S. at 570. But just as a newspaper does not publish every opinion piece it receives, these platforms do not disseminate all speech users submit—or treat all user-submitted speech equally. Instead, each platform has its own rules about what speech is acceptable for its particular service and community. Platforms all have hate-speech policies, for example. App.21a, 389a-445a. Platforms also differ in important ways that accord with the websites’ designs and different editorial policies and emphases. YouTube, for example, supports a “community that fosters self-expression on an array of topics as diverse as its user base,” while prohibiting “harmful, offensive, and unlawful material” like “pornography, terrorist incitement, [and] false propaganda spread by hostile foreign governments.” App.146a, 149a. Twitter allows a wider range of expression such as adult content.3 Other social media platforms—including Texas-favored websites excluded from HB20’s coverage that tout less-moderated communities—still have similar policies. App.115a, 134a.
For all platforms, the expressive act of policy enforcement is critical to the distinctive experiences that platforms provide their users—and to ensuring that the services remain hospitable and useful services. Without these policies, platforms would offer fundamentally worse (and perhaps even useless) experiences to their users, potentially overrun with spam, vitriol, and graphic content. App.20a-21a. The record confirms that when platforms have failed to remove harmful content, their users and advertisers have sought to hold platforms accountable—including through boycotts. App.126a, 135a-38a, 168a-69a, 187a. And when platforms have chosen to remove, or reduce the distribution of, objectionable content, they have faced criticism from users as well as elected officials. App.73a.
From the moment users access a social media platform, everything they see is subject to editorial discretion by the platform in accordance with the platforms’ unique policies. Platforms dynamically create curated combinations of user-submitted expression, the platforms’ own expression, and advertisements. This editorial process involves prioritizing, arranging, and recommending content according to what users would like to see, how users would like to see it, and what content reflects (what the platform believes to be) accurate or interesting information. App.21a; see App.312a (YouTube: “I believe in 2018 that data was about 70 percent of views are driven by recommendations.”).
Those decisions begin with the very basic design and functions of the site. YouTube and Vimeo, for instance, disseminate both videos and users’ comments on those videos. Facebook and LinkedIn have a broader range of videos and text. Instagram focuses on images and video, though it too has options for comments. Twitter is largely limited to 280-character text “tweets,” with options to post videos and images. TikTok has short videos. And Pinterest has images on digital “pin boards.” Across all these websites, platforms make decisions about the user interface and appearance of the platform. Some provide filters or parental controls to offer users even more curated experiences. And all this content appears next to the platforms’ distinctive branding.
Given their size and dynamic nature, platforms must constantly make editorial choices on what speech to disseminate and how to present it. At a minimum, this involves the platforms’ determination of what should show up at the top of users’ “feeds” and search results—which are functions the platforms engage in for each user and countless times a day. App.163a. Platforms also recommend or prioritize content they consider relevant or most useful. App.150a. Consequently, much like a newspaper must decide what stories deserve the front page, how long stories should be, what stories should be next to other stories, and what advertisements should be next to what stories, social media platforms engage in the same kinds of editorial and curatorial judgments both for individual users and the platforms as a whole.
The petition also digs deep into the ridiculousness of the no-explanation stay, leading to the law immediately going into effect:
The cursory manner in which the Fifth Circuit panel majority allowed HB20 to take effect alone justifies the granting of this Application. See Nken, 556 U.S. at 427
Last year, both Texas and Florida embarked on an unprecedented effort to override the editorial discretion of social media platforms and to compel them to disseminate a plethora of speech the platforms deem objectionable and antithetical to the speech they want to present to users (and advertisers). App.6a-7a; NetChoice, 546 F. Supp. 3d at 1085. Both laws are an undisguised effort to level the speech playing field and control “Big Tech.” To that end, both laws override editorial discretion and compel speech—imposing their burdens only on selected speakers and carving out favored content. App.28a-29a; NetChoice, 546 F. Supp. 3d at 1093-94. In short, the laws defy established First Amendment doctrine by taking virtually every action forbidden to state actors by the First Amendment.
Both states recognized that their laws would transform the Internet and fundamentally change the way platforms exercise editorial discretion and disseminate speech, so they delayed their effective dates to allow regulated platforms to try to come into compliance. App.9a; NetChoice, 546 F. Supp. 3d at 1085. Applicants took advantage of that interval to seek preliminary injunctive relief that would prevent the laws from taking immediate transformative effect, while allowing the parties to debate the legal issues and giving jurists time to consider all the issues as part of an orderly review process. The results were two well-reasoned district court opinions carefully explaining the provisions of the respective laws and each preliminarily enjoining those laws as rather obvious affronts to the First Amendment.
Those two decisions paved the way for an orderly appellate process in the courts of appeals. Florida did not even seek a stay of that preliminary injunction, but pursued a modestly expedited appeal that is fully briefed and was argued late last month. See Docket, 11th Cir. No. 21-12355. While Texas sought a stay, a Fifth Circuit motions panel referred that stay to the merits panel, which considered the important issues pursuant to an orderly appellate process that included full briefing and an oral argument. App.4a. But on Wednesday, a divided panel threw both the Internet and the orderly appellate process into chaos by issuing a one-sentence order purporting to allow the Texas Attorney General to enforce HB20 immediately. App.2a
As this Court explained in Nken, appellate courts may not enter stays pending appeal “reflexively,” but only after the movant has satisfied its “heavy burden,” and only after the panel has conducted “careful review” and issued a “meaningful decision.” 556 U.S. at 427; id. at 439 (Kennedy, J., concurring). Yet this one-sentence order explains nothing—in stark contrast to the extensively reasoned district court opinions that explained the various provisions of the laws, suggested some possible limiting constructions, and identified the precise constitutional defects. The Fifth Circuit’s order creates immediate obligations, compels all sorts of speech, and essentially forces Applicants to try to conform their global operations to Texas’s vision of how they should operate—and they must do so essentially overnight. Equally important, the order undermines the orderly appellate process in this Court (and the Eleventh Circuit), which necessitates this emergency application.
It did not have to be this way. Even if a majority of the Fifth Circuit panel disagrees with the well-reasoned opinion of the district court, it could have explained its reasoning in an opinion subject to the normal rules for issuing appellate mandates, which would then have permitted Applicants to seek rehearing and petition for certiorari. That course would have allowed an appellate process that gave this Court the same opportunity for the calm and orderly consideration that every other court has enjoyed in considering these momentous legal issues that go to the heart of the First Amendment.
There are many more arguments made in the filing, but I did want to call out two quick points raised in it that push back on specious arguments made by many (including people in our comments) claiming that governments can force social media websites to host content. First, the two popular cases people like to bring up are PruneYard and Rumsfeld v. FAIR. Those don’t apply (and I’ll note in passing that Clement argued the FAIR case on behalf of the US government, so he should know).
Neither Rumsfeld v. FAIR, 547 U.S. 47 (2006), nor PruneYard Shopping Center v. Robins, 447 U.S. 74 (1980), justify HB20 or Defendant’s “hosting” theory. Neither case involved private editorial choices about what speech to disseminate. See FAIR, 547 U.S. at 64 (“A law school’s recruiting services lack the expressive quality of a parade, a newsletter, or the editorial page of a newspaper.”); PruneYard, 447 U.S. at 88 (no “intrusion into the function of editors”). In PruneYard, the shopping mall “owner did not even allege that he objected to the content of the [speech]; nor was the access right content based.” PG&E, 475 U.S. at 12 (discussing PruneYard). And FAIR distinguished the “conduct” of a law school’s employment recruitment assistance from a “number of instances” where the Court “limited the government’s ability to force one speaker to host or accommodate another speaker’s message”—citing Hurley, PG&E, and Tornillo. FAIR, 547 U.S. at 63
And then there’s the whole “common carrier” bit, which they note is completely nonsensical in this context.
Seventh, social media platforms are not common carriers, and the First Amendment analysis would not change if they were. “A common carrier does not make individualized decisions, in particular cases, whether and on what terms to deal.” FCC v. Midwest Video Corp., 440 U.S. 689, 701 (1979). Far from “hold[ing] themselves out as affording neutral, indiscriminate access to their platform without any editorial filtering,” unrebutted evidence establishes that platforms constantly engage in editorial filtering, providing unique experiences to each user and limiting both who may access their platforms and how they may use the platforms, as discussed above (at pp.5-9). USTA, 855 F.3d at 392 (Srinivasan & Tatel, JJ., concurring in the denial of reh’g en banc) (emphasis added). Consequently, “web platforms such as Facebook, Google, Twitter, and YouTube . . . are not considered common carriers.” Id.; see also Cablevision Sys. Corp. v. FCC, 597 F.3d 1306, 1321-22 (D.C. Cir. 2010) (Kavanaugh, J., dissenting) (“A video programming distributor . . . is constitutionally entitled to exercise ‘editorial discretion over which stations or programs to include in its repertoire.’ As a result, the Government cannot compel video programming distributors to operate like ‘dumb pipes’ or ‘common carriers’ that exercise no editorial control.”) (citations omitted)
This Court’s precedents likewise recognize that government cannot convert private entities that exercise editorial judgments into common carriers. See FCC v. League of Women Voters of Cal., 468 U.S. 364, 379 (1984) (compelled publication unlawful because it would “transform broadcasters into common carriers and would intrude unnecessarily upon the editorial discretion of broadcasters”). This Court recognized that even television broadcasters have protected editorial discretion, id., though broadcasters receive less First Amendment protection than Internet websites. See Reno, 521 U.S. at 870.
In all events, even common carriers retain the “right to be free from state regulation that burdens” speech. PG&E, 475 U.S. at 17-18 & n.14. So HB20’s label as “a common carrier scheme has no real First Amendment consequences,” because “impos[ing] a form of common carrier obligation” cannot justify a law that “burdens the constitutionally protected speech rights” of platforms “to expand the speaking opportunities” of others. Denver, 518 U.S. at 824-26 (Thomas, J., concurring in the judgment in part and dissenting in part). Similarly, government cannot declare private entities’ dissemination of speech as a “public accommodation.” Hurley, 515 U.S. at 573
Anyway, there’s a lot more in there, but it’s a strong filing. Hopefully Alito recognizes that…
Filed Under: 1st amendment, 5th circuit, content moderation, hb20, samuel alito, section 230, social media, supreme court, texas
Companies: ccia, netchoice
Just How Incredibly Fucked Up Is Texas’ Social Media Content Moderation Law?
from the let-us-count-the-ways dept
So, I already had a quick post on the bizarre decision by the 5th Circuit to reinstate Texas’ social media content moderation law just two days after a bizarrely stupid hearing on it. However, I don’t think most people actually understand just how truly fucked up and obviously unconstitutional the law is. Indeed, there are so many obvious problems with it, I’m not even sure I can do them adequate justice in a single post. I’ve seen some people say that it’s easy to comply with, but that’s wrong. There is no possible way to comply with this bill. You can read the full law here, but let’s go through the details.
The law declares social media platforms as “common carriers” and this was a big part of the hearing on Monday, even though it’s not at all clear what that actually means and whether or not a state can just magically declare a website a common carrier (as we’ve explained, that’s not how any of this works). But, it’s mainly weird because it doesn’t really seem to mean anything under Texas law. The law could have been written entirely without declaring them “common carriers” and I’m not sure how it would matter.
The law applies to “social media platforms” that have more than 50 million US monthly average users (based on whose count? Dunno. The law doesn’t say), and limits it to websites where the primary purpose is users posting content to the site, not ones where things like comments and such are a secondary feature. It also excludes email and chat apps (though it’s unclear why). Companies with over 50 million US users probably include the following as of today (via Daphne Keller’s recent Senate testimony): Facebook, YouTube, TikTok, Snapchat, Wikipedia, and Pinterest are definitely covered. Likely, but not definitely, covered would be Twitter, LinkedIn, WordPress, Reddit, Yelp, TripAdvisor, and possibly Discord. Wouldn’t it be somewhat amusing if, after all of this, Twitter’s MAUs fall below the threshold?! Also possibly covered, though data is lacking: Glassdoor, Vimeo, Nextdoor, and Twitch.
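To see how blunt that coverage test is, here’s a minimal sketch of the threshold logic in Python. Everything here is illustrative: the function name, its inputs, and the example MAU figures are my paraphrase of the criteria described above, not anything defined in the statute (which, again, never says whose user count controls).

```python
# A minimal sketch of HB20's coverage test as described above.
# The criteria are paraphrased; the MAU inputs are the contested
# part, since the law never says whose count controls.

def is_covered_platform(us_mau: int,
                        primary_purpose_is_user_content: bool,
                        is_email_or_chat: bool) -> bool:
    """Rough approximation of the law's 'social media platform' test."""
    if is_email_or_chat:
        return False  # email and chat apps are excluded
    if not primary_purpose_is_user_content:
        return False  # comment sections as a side feature don't count
    return us_mau > 50_000_000  # more than 50M US monthly average users

# A site hovering near the threshold flips in and out of coverage
# depending on whose MAU estimate you believe (figures hypothetical):
print(is_covered_platform(49_000_000, True, False))  # False
print(is_covered_platform(51_000_000, True, False))  # True
```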
And what would the law require of them? Well, mostly to get sued for every possible moderation decision. You only think I’m exaggerating. Litigator Ken White has a nice breakdown thread of how the law will encourage just an absolutely insane amount of wasteful litigation:
https://twitter.com/Popehat/status/1524535770425401344
As he notes, a key provision and the crux of the bill is this bizarre “anti-censorship” part:
CENSORSHIP PROHIBITED. (a) A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: (1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in this state or any part of this state. (b) This section applies regardless of whether the viewpoint is expressed on a social media platform or through any other medium.
So, let’s break this down. It says that a website cannot “censor” (by which it clearly means moderate) based on the user’s viewpoint or geographic location. And it applies even if that viewpoint doesn’t occur on the website.
What does that mean in practice? First, even if there is a good and justifiable reason for moderating the content — say it’s spam or harassment or inciting violence — that really doesn’t matter. The user can simply claim that it’s because of their viewpoints — even those expressed elsewhere — and force the company to fight it out in court. This is every spammer’s dream. Spammers would love to be able to force websites to accept their spam. And this law basically says that if you remove spam, the spammer can take you to court.
Indeed, nearly all of the moderation that websites like Twitter and Facebook do is, contrary to the opinion of ignorant ranters, not because of any “viewpoint” but because the moderated content breaks actual rules around harassment, abuse, spam, or the like.
While the law does say that a site must clearly post its acceptable use policy, supporters of this law who claim that a site can therefore still moderate as long as it follows its own policies are flat out lying. Because, again, all any aggrieved user has to do is claim the real reason was viewpoint discrimination, and the litigation is on.
And let me tell you something about aggrieved users: they always insist that any moderation, no matter how reasonable, is because of their viewpoint. Always. And this is especially true of malicious actors and trolls, who are in the game of trolling just to annoy in the first place. If they can take that up a notch and drag companies into court as well? I mean, the only thing stopping them will be the cost, but you already know that a cottage industry of lawyers willing to file these cases is going to pop up. I wouldn’t even be surprised if cases start getting filed today.
And, as Ken notes in his thread, the law seems deliberately designed to force as much frivolous litigation on these companies as possible. It says that even if one local court has rejected these lawsuits or blocked the Attorney General from enforcing the law, you can still sue in other districts. In other words, keep on forum shopping. It also bars nonmutual claim and issue preclusion, meaning that even if a court has already ruled that these claims are bogus, each new claim must be litigated anew. Again, this seems uniquely designed to force these companies into court over and over and over again.
I haven’t even gotten to the bit that says you can’t “censor” based on geographic location. That portion can basically be read as forcing social media companies to stay in Texas. Because if you block all of your Texas users, they can all sue you, claiming that you’re “censoring” them based on their geographic location.
So, yeah, here you have the “free market” GOP passing a law that effectively says that social media companies (1) have to operate in Texas and (2) have to be sued over every moderation decision they make, even if it’s in response to clear policy violations.
Making it even more fun, the law forbids any waivers, so social media companies can’t just put a new thing in their terms of service saying that you waive your rights to bring a claim under this law. They really, really, really just want to flood every major social media website with a ton of purely frivolous and vexatious litigation. The party that used to decry trial lawyers just made sure that Texas has full employment for trial lawyers.
And that’s not all that this law does. That’s just the part about “censorship.”
There is the whole transparency bit, requiring that a website “disclose accurate information regarding its content management, data management, and business practices.” That certainly raises some issues about trade secrets, general security, and more. But it’s also going to effectively require that websites publish all the details that spammers, trolls, and others need to be more effective.
The covered companies will also have to keep a tally of every form of moderation and post it in their transparency reports. So, every time a spam posting is removed, it will need to be tracked and recorded. Even any time content is “deprioritized.” What does that mean? All of these companies recommend stuff based on algorithms, meaning that some stuff is prioritized and some stuff is not. I don’t care to see when people I follow tweet about football, because I don’t watch football. But it appears that if the algorithm learns that about me and chooses to deprioritize football tweets just for me, the company will need to include that in its transparency report.
Now, multiply that by every user, and every possible interaction. I think you could argue that these sites “deprioritize” content billions of times a day just by the natural functioning of the algorithm. How the hell do you track all the content you don’t show someone?!
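Some rough arithmetic makes the scale problem vivid. Here’s a back-of-the-envelope sketch, with every number an assumption of mine rather than any platform’s real figure:

```python
# Back-of-the-envelope: how many "deprioritization" events a ranked
# feed generates per day. All numbers are illustrative assumptions,
# not real platform data.

daily_active_users = 200_000_000   # assumed DAU for a large platform
feed_loads_per_user = 10           # assumed sessions/refreshes per day
candidates_per_load = 500          # assumed items scored per refresh
items_shown = 50                   # assumed items actually displayed

# Every candidate scored but not shown is arguably "deprioritized"
# under the law's language.
deprioritized_per_load = candidates_per_load - items_shown
events_per_day = (daily_active_users * feed_loads_per_user
                  * deprioritized_per_load)

print(f"{events_per_day:,} deprioritization events/day")
# 900,000,000,000 -- nearly a trillion loggable events per day
```

Even if the real inputs are off by an order of magnitude in either direction, the conclusion holds: logging every ranking decision for a transparency report is not a reporting requirement, it’s a firehose.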
The law also requires detailed, impossible complaint procedures, including a full tracking system for anyone who files a complaint. That’s required as of last night. So best of luck to every single covered platform, none of which have this technology in place.
It also requires that if the website is alerted to illegal content, it has to determine whether or not the content is actually illegal within 48 hours. I’ll just note that, in most cases, even law enforcement isn’t that quick, and then there’s the whole judicial process that can take years to determine if something is illegal. Yet websites are given 48 hours?
Hilariously, the law says that you don’t have to give a user the opportunity to appeal if the platform “knows that the potentially policy-violating content relates to an ongoing law enforcement investigation.” Except, won’t this kind of tip people off? Your content gets taken down, but the site doesn’t give you the opportunity to appeal… Well, the only exemption there is if you’re subject to an ongoing law enforcement investigation, so I guess you now know there is one, because the law says that’s the only reason they can refuse to take your appeal. Great work there, Texas.
The appeal must be decided within 14 days, which sure sounds good if you have no fucking clue how long some of these investigations might take — especially once the system is flooded with the appeals required under this law.
And, that’s not all. Remember last week when I was joking about how Republicans wanted to make sure your inboxes were filled with spam? I had forgotten about the provision in this law that makes a lot of spam filtering a violation of the law. I only wish I was joking. For unclear reasons, the law also amends Texas’ existing anti-spam law. It added (and it’s already live in the law) a section saying the following:
Sec. 321.054. IMPEDING ELECTRONIC MAIL MESSAGES PROHIBITED. An electronic mail service provider may not intentionally impede the transmission of another person’s electronic mail message based on the content of the message unless:
(1) the provider is authorized to block the transmission under Section 321.114 or other applicable state or federal law; or
(2) the provider has a good faith, reasonable belief that the message contains malicious computer code, obscene material, material depicting sexual conduct, or material that violates other law.
So that literally says the only reasons you can “impede” email are that it contains malicious code, obscene material, sexual content, or violates other laws. Now, the reference to 321.114 alleviates some of this, since that section gives services (I kid you not) “qualified immunity” for blocking certain commercial email messages, but only under certain conditions, including enabling a dispute resolution process for spammers.
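Translated into code, the amended section’s decision tree looks something like the sketch below. This is my paraphrase of the statute’s categories, with hypothetical inputs standing in for what would really be legal judgments; it is not actual filter code. Notice what’s absent from the permitted list: “this is unsolicited bulk spam” is not, by itself, a lawful reason to block.

```python
# A sketch of Sec. 321.054's logic as described above. The grounds
# and the 321.114 flag are hypothetical stand-ins for legal
# determinations; this is a paraphrase, not real filter code.

PERMITTED_GROUNDS = {
    "malicious computer code",
    "obscene material",
    "material depicting sexual conduct",
    "violates other law",
}

def may_impede_message(claimed_grounds: set[str],
                       authorized_under_321_114: bool) -> bool:
    """True only if blocking fits one of the statute's exceptions."""
    if authorized_under_321_114:
        # The narrow commercial-email carve-out, which itself
        # requires a dispute resolution process for senders.
        return True
    return bool(claimed_grounds & PERMITTED_GROUNDS)

# Garden-variety spam, flagged only as unsolicited bulk email:
print(may_impede_message({"unsolicited bulk email"}, False))   # False
print(may_impede_message({"malicious computer code"}, False))  # True
```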
There are many more problems with this law, but I am perplexed at how anyone could possibly think it is either workable or constitutional. It’s neither. The only proper thing to do would be to shut down in Texas, but, again, the law treats even that as a violation. What an utter monstrosity.
And, yes, I know, very very clueless people will comment here about how we’re just mad that we can’t “censor” people any more (even though it’s got nothing to do with me or censoring). But can you at least try to address some of the points raised above and explain how any of these services can actually operate without getting sued out of existence, or allowing all garbage all the time to fill the site?
Filed Under: 1st amendment, appeals, common carrier, content moderation, editorial discretion, email, free speech, hb20, litigation, social media, texas, transparency, viewpoint discrimination
The 5th Circuit Reinstates Texas’ Obviously Unconstitutional Social Media Law Effective Immediately
from the what-a-clusterfuck dept
Florida and Texas both passed blatantly unconstitutional laws limiting the ability of social media websites to moderate. Lawsuits were filed challenging both laws. In both cases, the district courts correctly blocked the laws from going into effect, noting that it was obviously a 1st Amendment violation to tell websites how they could and could not moderate. Both states appealed. A few weeks back, there was a hearing in the 11th Circuit over the Florida law, where the judges clearly seemed to grasp the issues and had lots of really tough questions for Florida’s lawyers. However, that court has not issued an actual ruling yet.
On Monday of this week, the notoriously-bad-about-everything 5th Circuit heard Texas’s appeal on its law, and the hearing went sideways from the very beginning, with one of the judges even trying to argue that Twitter wasn’t a website. That was only the tip of the iceberg of the three judge panel’s misunderstandings, which confused a number of issues around free speech, common carriers, private property, and more. Based on the hearing, it seemed likely that the court was going to make a huge mess of things, but even then, it would be normal to take a few months to think about it and maybe (hopefully?) reread the briefings. Standard practice would also be to release a ruling with a nominal period in which to file some sort of appeal. Instead, late Wednesday, the court just reinstated the law with no explanation at all.
An opinion is likely to follow at some point, but the whole setup is bizarre and far from clear. The only bit of info provided is that the panel was not unanimous, suggesting that Judge Southwick, who seemed to have a better grasp of the matter than his two colleagues, probably went the other way.
So… what does this mean? Well, Texas is now a mess for any social media company. Operating in Texas and daring to do something as basic as stopping harassment and abuse on your platform now opens you up to significant litigation and potential fines. The law strips editorial discretion, the right to cultivate your own community, and much, much more that is fundamentally necessary to running a website with 3rd party content. I’ll have a second post later today exploring the many, many ways in which this law is effectively impossible to comply with.
I am positive that every decently sized social media company had to talk to its lawyers Wednesday evening and assess whether or not it makes sense to block access to everyone in Texas (even though some of the language in the bill suggests that it requires companies to operate in Texas). Others may decide to open the floodgates of hate, harassment, and abuse and say “well, this is what you required.” And it still won’t result in them not getting sued.
For what it’s worth, Trump’s own website, Truth Social, has moderation practices that clearly run afoul of this law, and he’s only protected from it to the extent that it still has fewer than 50 million monthly users.
It would be nice if the 11th Circuit came out with a ruling going the opposite way, and did so in a clear and reasoned fashion, setting up a circuit split that the Supreme Court could review. But that seems unlikely to happen quickly. I’ve been told that the judges on the 11th Circuit panel are famous for writing opinions excessively slowly. The tech companies could seek en banc review from the entire 5th Circuit, though much of the 5th Circuit is ridiculous and I’m not convinced it would help at all. There could be an attempt to appeal immediately to the Supreme Court’s shadow docket, but that’s also fundamentally an unknown arena right now.
So, in summary, Texas is fucked. Social media in Texas is now a risky proposition. And whether or not the companies continue to operate in Texas, the floodgates have been opened for ridiculous lawsuits. If you thought patent troll litigation in Texas created an entire industry unto itself, you haven’t seen anything yet.
Filed Under: 1st amendment, 5th circuit, common carrier, content moderation, discrimination, free speech, hb20, liability, social media, texas
Court Ignores That Texas Social Media Censorship Law Was Blocked As Unconstitutional: Orders Meta To Reinstate Account
from the [gestures-to-lowest-common-denominator]-who-among-us dept
Remember how Texas passed a social media content moderation law which was then blocked as unconstitutional by a federal court? Apparently people in Texas remember the passing of the law, but not the fact that it was blocked. Incredibly, this includes a judge as well.
If it had been allowed to become law, Texas’ new, batshit insane “social media censorship” law would have allowed all sorts of bizarre claims to be made in court and greatly increased the odds that even the most ridiculous complaints would result in at least a temporary win for aggrieved plaintiffs. But, because it seems that everyone in a Texas court ignored the fact that the law has been blocked, we get to see how it all would have played out otherwise.
Welcome to the litigation party, Trump acolyte and would-be gubernatorial hopeful, Chad Prather. “Chad Whom?,” I hear you legitimately asking. Well, according to this Wikipedia article, Prather is “an American conservative political commentator, comedian and internet personality.” Whew.
He’s also attempting to unseat the current Texas governor, Greg Abbott — who is pretty much the same guy Prather is, only without the “comedian and internet personality” bio. Abbott is also a Trump acolyte and another fine argument for returning Texas to Mexico. Abbott has his own problems with so-called social media “censorship.” He has gone after Google, approved unconstitutional laws attempting to undermine Section 230 immunity, and fully supports similar efforts proposed by others as idiotic and short-sighted as he is.
Chad Prather apparently feels Abbott is operating too far to the left, considering the governor is angling for a Facebook data center while spending a great deal of his ill-spent time seeking to undermine the legal protections that give idiots like Governor Abbott a sizable presence on sizable social media platforms.
Prather is now suing Facebook under this new law (which, I remind you, has already been deemed unconstitutional and blocked from going into effect) for suspending his account. He claims, in a series of conclusory statements, that the suspension is obviously some sort of conspiracy between Meta and Governor Abbott to derail his attempt to unseat Abbott.
Thanks to Courthouse News Service — which always posts copies of the legal documents it covers — we can read the inadvertently hilarious lawsuit [PDF] Prather has filed in a Texas county court. There are ways to be taken semi-seriously. And then there is what Prather has chosen to do: toss a series of conclusory statements into a county court in hopes of using Texas’ shitty (again, already blocked as unconstitutional) new social media law to dodge well-settled moderation issues that have generated plenty of precedent in federal courts. That the law was blocked before it went into effect apparently doesn’t matter to anyone involved in this lawsuit, which is just a little bit strange.
Let’s first take a look at the claims:
On February 21, 2022, just 8 days before the Election, Defendant suspended Prather from its Facebook social media platform for at least 7 days.
Facebook’s action against Prather severely inhibits his ability to communicate with potential voters and will cause immediate and irreparable harm by damaging his chances at winning the Election. There is no available remedy at law to Plaintiff for this interference with his ability to effectively campaign through social media.
Sure, there is. There are plenty of “available remedies,” starting with the inexplicably unpopular “more speech.” The plaintiff runs his own website. He also has apparently uninterrupted access to Parler, Twitter, and Instagram. Nevertheless, Prather insists this temporary inconvenience is not only actionable, but the direct result of collusion between Facebook and the state’s current governor:
It is likely no coincidence that Facebook chose to censor Prather so close to this hotly contested Election against Gov. Abbott. While publicly speaking out against censorship on social media, Gov. Abbott has been privately negotiating a deal with Facebook to bring the company’s new data center to Texas.
Is it likely, though? Prather has added communications between the governor and Facebook about the data center to his lawsuit (as exhibits), but fails to explain how any of this led Facebook to target his page for suspension. It certainly isn’t collusion. Prather doesn’t even register on the public’s radar, according to recent polls. Also, Prather seems to ignore that Gov. Abbott was a vocal supporter of the (unconstitutional and blocked by the courts) law that Prather is now trying to use… to claim that he was blocked to protect Gov. Abbott. Which, by itself, should raise all sorts of questions.
Nonetheless, Prather insists he’s been beset upon all sides by powerful enemies.
The implications of this letter and the timing of Facebook’s censorship of Chad Prather should shock the conscience of this Court. Prather has a massive following on Facebook and has been a vocal critic of Gov. Abbott on his social media. It appears Facebook has likely censored a highly popular grassroots candidate for governor running against Gov. Abbott for the purpose of shoring up Abbott’s chances of winning the primary in order to protect Facebook’s pending deal with Gov. Abbott.
In other words, a California-based social media platform is actively interfering in the Texas gubernatorial elections to tip the scales in favor of the sitting governor of Texas who has just signed a law targeting them, so that he can give them a sweetheart business deal using taxpayer money. Sure. Uh huh. Makes sense.
Even if we assume these statements to be true (and we certainly don’t), what’s actionable here? Normally, nothing would be. And here, nothing should be, because the courts already blocked this law from going into effect. But Prather is trying to use the very law passed by the governor he now claims is colluding against him to bring this lawsuit against Facebook. This stupid law allows Texas residents to bring this completely stupid cause of action. Or it would, if a court hadn’t blocked it. But, again, everyone seems to be ignoring that rather important point.
Declaratory Relief for Social Media Censorship
And these are the sort of lawsuits this law would encourage, something Governor Abbott may come to regret (if the law actually is allowed to go into effect). After all, the law allows pretty much anyone to sue a social media service over any form of moderation they experience.
CPRC § 143A.002 provides: “(a) a social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: (1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in this state or any part of this state.”
According to Prather’s filing, he was suspended over a direct message that was presumably reported as harassing by the recipient. That’s all it takes to trigger a lawsuit under Texas’ social media law. However, clearing this extremely low bar is not the same as credibly alleging collusion between the governor and Meta, which is what Prather is attempting here.
And it appears the court may have already sided with Prather, at least temporarily, even though this law never went into effect. Somehow, he has already secured a temporary restraining order [PDF] that tells Facebook to reinstate his account. The judge cites the new social media law even though a federal court already enjoined it as unconstitutional. It is unclear how this is even possible, though Prather and his lawyer, Paul Davis (who made quite the name for himself as an insurrectionist lawyer by trying to sue to undo the entire 2020 Presidential election), are celebrating. I mean, to their credit, it is quite a feat to get an unconstitutional prior restraint ruling issued under a law that has already been declared unconstitutional and enjoined from going into effect. So, kudos?
The garbage law has allowed a ridiculous person to force a private company to bend to his wishes — even though the law was not allowed to go into effect because of its unconstitutional nature. And Abbott’s hypocritical support of a law that undermines his professed belief that the free market should not be fucked with has resulted in one of his political challengers having his account reinstated to be used as a megaphone to tout a lawsuit claiming the governor is in bed with Facebook. There are no winners here, just a bunch of losers who can’t handle being told to shut up by the free services they exploit.
Filed Under: 1st amendment, chad prather, content moderation, greg abbott, hb 20, hb20, paul davis, prior restraint, social media, temporary restraining order, texas
Companies: facebook, meta