zauderer – Techdirt

After Inexplicably Allowing Unconstitutional Book Ban To Stay Alive For Six Months, The Fifth Circuit Finally Shuts It Down

from the nonsensically-delaying-justice dept

Texas is in a close race with Florida for the title of “Most Unconstitutional Laws Enacted.” Florida’s legislators will probably end up taking this title because they seem crazier/more productive than their counterparts in Texas.

But let’s not encourage those Texas underachievers! These are bad laws written by worse people. They’re almost universally incapable of surviving a constitutional challenge.

Unless they’re passed in the Fifth Circuit. Then all bets are off. The Fifth Circuit Court of Appeals has upheld obviously unconstitutional laws twice in recent months. And six months ago it chose to allow Texas to enforce its unconstitutional book ban simply by refusing to keep in place an injunction issued by the district court.

It’s not like it was a close question. The lower court’s ruling explained in detail how the state’s READER Act (Restricting Explicit and Adult-Designated Educational Resources) violated the Constitution so hard it could not possibly be allowed to remain in force. This decision was appealed and, last September, the Appeals Court inexplicably decided the law could be enforced until it finally got around to examining the case.

Nearly six months later, the Appeals Court has finally handed down its ruling. And it’s not even a close question here, either, which makes the delay all the more frustrating.

The law requires book vendors selling to school libraries to issue sexual-content ratings for all books they have sold or will sell. Books containing “sexually explicit” or “sexually relevant” content must be flagged as such, subjecting them to possible removal or restricted access.

Books flagged by the new rating system must be submitted to the Texas Education Agency (TEA), which enforces the restriction/removal process. Any books sold in the past that make the list must be “recalled” from the educational institutions that purchased them. The law also requires booksellers to list flagged books “in a conspicuous place on the agency’s Internet website.”

Clearly the law violates the First Amendment. Not only is it prior restraint (because it deters booksellers from offering certain books for sale), it’s also compelled speech — the forcible application of ratings to books in order to continue selling books to school libraries.

As the book vendor plaintiffs point out, sales are pretty much nonexistent as schools have paused all purchasing until the rating system is in place. They also point out it could cost several million dollars to vet all past and present books carried by these vendors — something that will likely put at least one vendor (Blue Willow) completely out of business.

The state argued that even if those allegations are true, the government can still violate the First Amendment because the “commercial speech” exception applies. While it’s true commercial speech can be regulated to ensure consumers receive factual and accurate information, that’s not what’s happening here. From the opinion [PDF]:

According to the State, Zauderer applies here because the library-material ratings are “purely factual and uncontroversial” like a nutrition label; they simply tell the buyer what they are receiving rather than pass judgment or express a view on the material’s appropriateness for children. We disagree. The ratings READER requires are neither factual nor uncontroversial. The statute requires vendors to undertake contextual analyses, weighing and balancing many factors to determine a rating for each book. Balancing a myriad of factors that depend on community standards is anything but the mere disclosure of factual information. And it has already proven controversial.

And while “thinking about the children” can sometimes be a cognizable government interest demanding a limited incursion on constitutional rights, this ain’t it, Texas.

We agree with the State that it has an interest in protecting children from harmful library materials. But “neither [the State] nor the public has any interest in enforcing a regulation that violates federal law.”

The long-paused injunction is back on. As the court notes, there’s very little chance the state of Texas will be harmed by being unable to enforce a statute that “likely violates the First Amendment.” It goes back down to the court that got this right the first time. And, with any luck, this temporary injunction should swiftly be made permanent.

Filed Under: 1st amendment, 5th circuit, book ban, free speech, texas, zauderer

ExTwitter Unfortunately Loses Round One In Challenging Problematic Content Moderation Law

from the well,-that's-unfortunate dept

Back in September we praised Elon Musk for deciding to challenge California’s new social media transparency law, AB 587. As we had discussed while the bill was being debated, while it’s framed as a transparency bill, it has all sorts of problems. It would (1) enable California government officials (including local officials) to effectively pressure social media companies over how they moderate, by enabling litigation for somehow failing to live up to their terms of service, (2) make it way more difficult for social media companies to deal with bad actors by limiting how often they can change their terms of service, and (3) hand bad and malicious actors a road map for being able to claim they’re respecting the rules, while clearly abusing them.

Yet, the largest social media companies (including Meta and Google) apparently are happy with the law, because they know it creates another moat for themselves. They can deal with the compliance requirements of the law, but they know that smaller competitors cannot. And, because of that, it wasn’t clear if anyone would actually challenge the law.

A few Twitter users sued last year, but with a very silly lawyer, and had the case thrown out because none of the plaintiffs had standing. But in the fall, ExTwitter filed suit to block the law from going into effect, using esteemed 1st Amendment lawyer Floyd Abrams (though Abrams has had a series of really bad takes on the 1st Amendment and tech over the past decade or so).

The complaint still seemed solid, and Elon deserved kudos for standing up for the 1st Amendment here, especially given the larger tech companies’ unwillingness to challenge the law.

Unfortunately, though, the initial part of the lawsuit — seeking a preliminary injunction barring the law from going into effect — has failed. Judge William Shubb has sided with California against ExTwitter, saying that Elon’s company has failed to show a likelihood of success in the case.

The ruling relies heavily on a near total misreading of the Zauderer case, regarding whether or not compelled commercial speech is allowed under the 1st Amendment. As we discussed with Professor Eric Goldman a while back, reading Zauderer, you see that the case was ruled on narrow grounds: transparency could be mandated only if the mandate concerned the text of advertisements, required disclosure of purely factual information, the disclosed information was uncontroversial, and the disclosure concerned the terms of an advertiser’s services. Even if all those conditions are met, the law might still be found unconstitutional if the disclosure requirements are not related to preventing consumer deception or if they are unduly burdensome.

As Professor Goldman has compellingly argued, laws requiring social media companies to reveal their moderation policies to government officials meet basically none of the Zauderer conditions. It’s not about advertising. It’s not purely factual information. The disclosures can be extremely controversial. The disclosures are not about any advertiser’s services. And, on top of that, it has nothing to do with preventing consumer deception, and the requirements can be unduly burdensome.

A New York court threw out a similar law, recognizing that Zauderer shouldn’t be stretched this far.

Unfortunately, Shubb goes the other way, and argues that Zauderer makes this kind of mandatory disclosure compatible with the 1st Amendment. He does so by rewriting the Zauderer test, leaving out some of the important conditions, and then mis-applying the test:

Considered as such, the terms of service requirement appears to satisfy the test set forth by the Supreme Court in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985), for determining whether governmentally compelled commercial disclosure is constitutionally permissible under the First Amendment. The information required to be contained in the terms of service appears to be (1) “purely factual and uncontroversial,” (2) “not unjustified or unduly burdensome,” and (3) “reasonably related to a substantial government interest.”

The court admits that the compelled speech here is different, but seems to think it’s okay, citing both the 5th and 11th Circuits in the NetChoice cases (both of which also applied the Zauderer test incorrectly — which is why we pointed out that this part of the otherwise strong 11th Circuit decision was going to be a problem):

The reports to the Attorney General compelled by AB 587 do not so easily fit the traditional definition of commercial speech, however. The compelled disclosures are not advertisements, and social media companies have no particular economic motivation to provide them. Nevertheless, the Fifth and Eleventh Circuits recently applied Zauderer in analyzing the constitutionality of strikingly similar statutory provisions requiring social media companies to disclose information going well beyond what is typically considered “terms of service.”

Even so, this application of the facts to the misconstrued Zauderer test… just seems wrong?

Following the lead of the Fifth and Eleventh Circuits, and applying Zauderer to AB 587’s reporting requirement as well, the court concludes that the Attorney General has met his burden of establishing that that the reporting requirement also satisfies Zauderer. The reports required by AB 587 are purely factual. The reporting requirement merely requires social media companies to identify their existing content moderation policies, if any, related to the specified categories. See Cal. Bus. & Prof. Code § 22677. The statistics required if a company does choose to utilize the listed categories are factual, as they constitute objective data concerning the company’s actions. The required disclosures are also uncontroversial. The mere fact that the reports may be “tied in some way to a controversial issue” does not make the reports themselves controversial.

But… that’s not even remotely accurate on multiple counts. It is not “purely factual information” that is required to be disclosed. The disclosure is about the highly subjective and constantly changing processes by which social media sites choose to moderate. Beyond covering far more than merely factual information, it’s also extraordinarily controversial.

And that’s not just because they’re often tied to controversial issues, but rather because users of social media are constantly “rules litigating” moderation decisions, and insisting that websites should or should not moderate in certain ways. The entire point of this law is to try to pressure websites to moderate in a certain way (which alone should show the Constitutional infirmities in the law). In this case, it’s California trying to force websites to remove “hate speech” by demanding they reveal their hate speech policies.

Now, assuming most of you don’t like hate speech, you might not see this as all that controversial. But if that’s allowed, what’s to stop other states from requiring the same thing regarding how companies deal with other issues, like LGBTQ content? Or criticism of the police?

But, the court here insists that this is all uncontroversial.

And worse, it ignores that the Zauderer test is limited only to issues of consumer deception.

The California bill has fuck all to do with consumer deception. It is entirely about pressuring websites in how they moderate.

Also, Shubb shrugs off the idea that this law might be unduly burdensome:

While the reporting requirement does appear to place a substantial compliance burden on social media companies, it does not appear that the requirement is unjustified or unduly burdensome within the context of First Amendment law.

The Court also (again, incorrectly in my opinion) rejects ExTwitter’s reasonable argument that Section 230 preempts this. Section 230 explicitly preempts any state law that seeks to limit a website’s independence in making moderation decisions, and thus this law should be preempted as such. Not so, says the court:

AB 587 is not preempted. Plaintiff argues that “[i]f X Corp. takes actions in good faith to moderate content that is ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,’ without making the disclosures required by AB 587, it will be subject to liability,” thereby contravening section 230. (Pl.’s Mem. (Docket No. 20) at 72.) This interpretation is unsupported by the plain language of the statute. AB 587 only contemplates liability for failing to make the required disclosures about a company’s terms of service and statistics about content moderation activities, or materially omitting or misrepresenting the required information. See Cal. Bus. & Prof. Code § 22678(2). It does not provide for any potential liability stemming from a company’s content moderation activities per se. The law therefore is not inconsistent with section 230(c) and does not interfere with companies’ ability to “self-regulate offensive third party content without fear of liability.” See Doe, 824 F.3d at 852. Accordingly, section 230 does not preempt AB 587.

Again, this strikes me as fundamentally wrong. The whole point of the law is to force websites to moderate in a certain way, and to limit how they can moderate in many scenarios, thus creating liability for moderation decisions based on whether or not those decisions match the policies disclosed to government officials under the law. That seems squarely within the preemption provisions of Section 230.

This is a disappointing ruling, though it is only stage one of the case. One hopes that Elon will appeal the decision and that the 9th Circuit will have a better take on the matter.

Indeed, I’d almost hope that this case is one that makes it to the Supreme Court, given the makeup of the Justices on the Supreme Court today and the (false, but whatever) belief that Elon has enabled “more free speech” on ExTwitter. It seems like this might be a case where the conservative Justices might finally understand why these kinds of transparency laws are problematic, by seeing how California is using them (as opposed to the Florida and Texas laws the Court is currently reviewing, where that wing of the Supreme Court is more likely to side with those states and their goals).

Filed Under: 1st amendment, ab 587, california, elon musk, mandated transparency, rob bonta, section 230, terms of service, transparency, william shubb, zauderer
Companies: twitter, x

Texas Ruling Shows You Can’t Regulate Online Pornography Like A Public Health Crisis

from the your-porn-addiction-isn't-real dept

A Texas federal district judge granted a preliminary injunction blocking enforcement of a controversial age verification law set to take effect September 1.

The court determined that House Bill (HB) 1181 was overly broad, even in the narrowest interpretations, and violated the First Amendment and Section 230 of the Communications Decency Act of 1996. No brainer, as Mike described earlier today.

But what made HB 1181 alarming to adult industry firms and digital rights activists is how the sponsors of the legislation, religious Texas state lawmakers, tried to write pseudoscientific claims of porn addiction into statute. At length, Senior U.S. District Judge David Alan Ezra explained that the requirement to label porn sites with public health warnings, in a fashion similar to how other federal and state laws require warnings in advertising for alcoholic beverages and tobacco products, matches neither the stated intent of protecting minors nor the science.

Some of you reading this will likely be enraged, but pornography consumption in the United States isn’t a public health crisis. Your porn addiction is not real. I make this statement because major medical groups and public health agencies the world over find little to no evidence that online sexual content is addictive.

This isn’t to say that individuals who struggle with pornography consumption aren’t experiencing a degree of distress. Too much of anything can be damaging for some, but these tendencies are better described as compulsive behavior and a failure to regulate that behavior. There is a fine line between addiction and compulsion, and compulsion can be addressed by the proper interventions. But to say that you are a porn addict, or that pornography is a public health crisis in the same sense as, say, obesity or drunk driving among minors, is a misinformed assessment that derives from social settings, your political views, the role of religion in your life, and how you perceive the role of sexuality in the culture. Studies overwhelmingly dispel the porn addiction hypothesis, attributing most self-reports of “addiction” to hyper-religious environments with patriarchal structures that demonize consensual sexual expression outside of procreational purposes. Anything outside of that is regarded as sinful and demonic to them.

Clearly this is my own annotation of the ruling, but Judge Ezra recognizes the lack of scientific and medical consensus on the claims drafted in the law.

House Bill 1181, in addition to requiring age assurance measures, requires adult entertainment sites such as Pornhub or xHamster to publish warnings ostensibly meant to warn minors of the supposed harms of pornography. A selection of these warnings features the endorsement of the Texas Health and Human Services Commission.

“Although these warnings carry the label ‘Texas Health and Human Services,’ it appears that the Texas Health and Human Services Commission has not made these findings or announcements,” writes Judge Ezra.

As already stated above, none of the major medical associations recognize any public health harms from pornography. This goes to additional points that Ezra highlights, including the fact that compelling a private enterprise to post government-scripted communication that is unfounded and disputed is far-reaching. Ezra indicates that “the relaxed standard for certain compelled disclosures applies if they contain ‘purely factual and uncontroversial information.'” In other words, the judge cites Zauderer v. Office of Disciplinary Counsel of the Supreme Court of Ohio and the ‘Zauderer standard’ that was discussed on a Techdirt podcast last year.

The Zauderer standard allows governments to compel certain commercial speech situationally without violating the advertiser’s First Amendment rights. Think of the compelled warnings tobacco product manufacturers must place on their packs of cigarettes: clear messaging that smoking can kill and is the leading cause of preventable death in the United States. “It is unreasonable to warn adults about the dangers of legal pornography in order to protect minors. But even assuming this was a cognizable interest, Zauderer would still not apply,” the judge stated. He added that the requirements set out for typeface and font size were burdensome, as was the requirement to post messaging for a mental health helpline.

“It does not assert a fact, and instead requires companies to post the number of a mental health hotline,” continues Ezra. “The implication, when viewers see the notice, is that consumption of pornography (or any sexual material) is so associated with mental illness that those viewing it should consider seeking professional crisis help. The statement itself is not factual, and it necessarily places a severe stigma on both the websites and its visitors.”

This speaks volumes. The ideological underpinnings of the law are clear and show very little basis in fact.

Per the judge’s sentiments, such an attempt at compelling commercial speech for the supposed benefit of the general public is total bullshit. And he recognizes that the health disclosure requirement frames information that is “factually disputed.” “Plaintiffs introduce substantial evidence showing that Texas’s health disclosures are either inaccurate or contested by existing medical research,” Ezra concludes.

Considering this information, it’s even clearer that you cannot regulate online pornography, or any other protected form of expression, under the guise of public health and safety. This is simply a vehicle for moralistic paternalists looking to restrict and even censor forms of speech that they disfavor.

That’s not how this works, Texas.

Michael McGrady is the contributing editor at AVN.com.

Filed Under: 1st amendment, age verification, compelled speech, disclosures, hb 1181, texas, transparency, zauderer
Companies: free speech coalition

Biden Administration’s Supreme Court Filing Over Social Media Laws Is Mostly Good, But Partly Bad

from the good,-good,-good,-good,-bad,-bad dept

This one will take a bit of background to explain where things stand. As you likely know, two years ago first Florida and then Texas enacted laws that would restrict how social media companies moderate content on their platforms. Both laws were quickly challenged by two trade associations for internet companies: NetChoice and CCIA. The lower courts in both states ruled against the laws, saying they were clearly unconstitutional.

On appeal, the 11th Circuit (covering Florida) upheld the lower court ruling, agreeing that most of the law was obviously unconstitutional, with one problematic exception: the court ruled that some of Florida’s “disclosure” rules were constitutional. These rules required social media companies to disclose their “moderation standards.”

On this, the court said that based on the 1985 Zauderer Supreme Court decision, states can compel speech from businesses if it’s for transparency and disclosure. However, as we discussed with Professor Eric Goldman (who wrote a whole paper on the subject), Zauderer set up a pretty clear test for when such disclosure could be mandated, and it was limited to purely factual information about advertising, and as long as the disclosure was “uncontroversial.” As we discussed with Goldman, content moderation rules meet none of those criteria. Furthermore, demanding “disclosure” of content moderation rules is effectively demanding that a media organization reveal its editorial policies.

As I’ve noted elsewhere, if a state government required a local newspaper to publish how it decided what stories would go on the front page, or what stories wouldn’t get published, the 1st Amendment problem with such a law would be fairly obvious, in part because the 1st Amendment says it’s none of the government’s business, but more importantly because of the obvious potential chilling effects of such a rule. Such required disclosure of editorial decision making would be used to intimidate media orgs over how they choose to make editorial policy.

And, indeed, it’s the same situation with these social media laws. The true intent is to force those companies to moderate the way the state governments would like, and the “disclosure” rules are just one part of that.

So, the 11th Circuit ruling found the specific demands on how to moderate to be clearly unconstitutional, but said that the mandated transparency of “moderation standards” is possibly Constitutional (it left it open until later in the process, at least).

Meanwhile, over in the 5th Circuit, which was reviewing Texas’ law (which, again, the district court rightly tossed out as unconstitutional), we had this bizarre situation where, literally days after the oral arguments, the 5th Circuit, with no opinion or explanation, said that it was reversing the lower court and the law should go into effect immediately. Immediately.

This resulted in an emergency plea to the Supreme Court’s shadow docket, pointing out that this was fucking crazy not just in terms of the problems with Texas’ law, but just procedurally. Perhaps surprisingly, the Supreme Court agreed and put the law back on hold. Then, a few months later, the 5th Circuit finally got around to writing up its batshit crazy ruling, overturning decades of 1st Amendment precedent and reinstating the law once again, though thankfully it agreed to hold off having the law go into effect until the Supreme Court could review it.

That set in motion a bunch of requests to the Supreme Court to hear appeals on the 5th Circuit ruling and the 11th Circuit ruling (in that case, Florida asked the Supreme Court to review the part the court said was unconstitutional, while NetChoice/CCIA asked the court to review the part that was said to be constitutional).

Given the very clear circuit split between the two courts, as well as the widespread interest (including from certain Supreme Court Justices) in this very issue, it was widely expected that of course the Court would grant cert, likely combining the two cases, and hearing the appeal. But… then the Supreme Court surprised just about everyone by taking the Gonzalez and Taamneh cases, which were also about moderation, and which very few people thought the Supreme Court would review. Hell, when those cases were finally heard, even the Supreme Court Justices appeared confused as to why they took them, leading them to effectively punt on the issue when the final opinion came out.

And “punting on the issue” is something that the Roberts Court is particularly good at, so it did that again with the Florida and Texas cases. Even as everyone expected the court to hear the appeal, it delayed everything by asking the Solicitor General to weigh in on whether or not it should hear the cases at all. As we noted at the time, there’s no reason at all for the Supreme Court to want to hear from the SG regarding whether it should hear the case, and the only reason to do this was basically to let the Supreme Court put this case off until the next term (which starts up this fall).

Finally, on Monday, the Solicitor General did what the Court had asked it to do back in January, and gave its thoughts on these cases. The brief is not surprising, and is good in some ways, but problematic in others.

On the “good” side of things, the SG says that, “yes, of course, the Supreme Court should hear the appeals” on the Constitutionality of the content moderation provisions, noting that it’s a clear circuit split:

As all parties agree, this Court should grant certiorari to resolve the lower courts’ disagreement about States’ authority to restrict a business’s ability to select, edit, and arrange the third-party content that appears on its social-media platform. Moody Pet. 8-18; Moody Br. in Resp. 31-34; Paxton Br. in Resp. 13-15. The decisions below create a square and acknowledged circuit split on that important First Amendment question. And even before that conflict emerged, this Court recognized that the question presented would likely warrant review by vacating the Fifth Circuit’s stay of the preliminary injunction in the Texas case. 142 S. Ct. 1715; see id. at 1716 (Alito, J., dissenting from grant of application to vacate stay) (“This application concerns issues of great importance that will plainly merit this Court’s review.”).

In the government’s view, the Court should grant review in both the Florida and Texas cases. Although the cases turn on the same fundamental question about the First Amendment status of the platforms’ content moderation activities, S.B. 7072 and H.B. 20 target different types of content moderation and impose different obligations. Those differences ultimately may not be material to the Court’s First Amendment analysis, but considering the two laws together would give the Court the fullest opportunity to address the relevant issues.

Also good: the Solicitor General says that the Supreme Court should uphold the 11th Circuit’s decision saying the content moderation restrictions are unconstitutional, and overturn the 5th Circuit’s decision saying otherwise:

On the merits of the content-moderation provisions, this Court should affirm the Eleventh Circuit and reverse the Fifth Circuit. When a social-media platform selects, edits, and arranges third-party speech for presentation to the public, it engages in activity protected by the First Amendment. That activity, and the platforms’ business practices more generally, are not immune from regulation. But here, the States have not articulated interests that justify the burdens imposed by the content-moderation restrictions under any potentially applicable form of First Amendment scrutiny.

It even (smartly) cites the recent 303 Creative case in making its argument. While many people were (understandably) annoyed with how that ruling played out, we had noted that the opinion was actually important for cases exactly like the NetChoice cases, and the Solicitor General realizes that as well:

Indeed, given the torrent of content created on the platforms, one of their central functions is to make choices about which content will be displayed to which users, in which form and which order. The act of culling and curating the content that users see is inherently expressive, even if the speech that is collected is almost wholly provided by users. A speaker “‘does not forfeit constitutional protection simply by combining multifarious voices’ in a single communication.” 303 Creative LLC v. Elenis, 143 S. Ct. 2298, 2313 (2023) (quoting Hurley, 515 U.S. at 569). And especially because the covered platforms’ only products are displays of expressive content, a government requirement that they display different content—for example, by including content they wish to exclude or organizing content in a different way—plainly implicates the First Amendment.

The brief also argues that the laws’ requirements for “individualized-explanations” for certain content moderation decisions is also unconstitutional:

The Fifth and Eleventh Circuits’ rulings on the individualized-explanation requirements likewise warrant review because the two courts reached conflicting results on an important First Amendment question. The Eleventh Circuit held that S.B. 7072’s requirement to provide a “‘thorough rationale’” for certain content-moderation decisions would “‘chill protected speech’” by discouraging the “exercise of editorial judgment.” Moody Pet. App. 64a-65a (brackets and citations omitted). The Fifth Circuit reached the opposite conclusion, holding that H.B. 20’s even more burdensome requirement to provide an explanation and an appeal does not chill speech. Paxton Pet. App. 96a.

This Court should grant certiorari to resolve that conflict, which is rooted in the courts’ conflicting views about whether the covered platforms’ content-moderation activities are protected by the First Amendment at all. And as with the content-moderation provisions, the Court should review both the Florida and Texas laws so that it may consider any potentially relevant differences between their requirements.

But… when it gets to the Zauderer issue regarding more general “disclosure” provisions, the Biden administration tells the Supreme Court not to hear that challenge. It doesn’t dig in on the merits, really, but effectively says “look, there’s too much other important stuff going on in this case, leave this issue for another time.”

First, the general-disclosure provisions have not been the focus of this litigation. The parties’ briefs below devoted only a few pages to those provisions, and the courts of appeals did the same. See Moody Pet. App. 62a-64a; Paxton Pet. App. 91a-95a, 97a-98a. Perhaps for that reason, neither court addressed the principal argument that NetChoice presses in this Court—that the deferential standard articulated in Zauderer v. Office of Disciplinary Counsel, 471 U.S. 626 (1985), should apply only in “the context of correcting misleading advertising.” Moody Cross-Pet. 30. This Court is “a court of review, not of first view,” Cutter v. Wilkinson, 544 U.S. 709, 718 n.7 (2005), and it should not take up issues that have received such limited attention in the lower courts.

Second, and relatedly, this Court’s review of the general-disclosure provisions would be impaired by the pre-enforcement posture of these cases and the underdeveloped state of the present record. Among other things, it would be difficult to assess the burden imposed by the general-disclosure provisions because there is no record of enforcement and because the meaning of some of those provisions remains uncertain. NetChoice observes, for example, that it does not know “whether [the covered] websites’ current publicly posted editorial policies comply with [H.B. 20’s] requirement to publish an ‘acceptable use policy’ that ‘reasonably inform[s] users.’ ” Paxton Reply Br. 11 (quoting Tex. Bus. & Com. Code Ann. § 120.052(a) and (b)(1)) (third set of brackets in original).

Third, granting certiorari on the general-disclosure provisions would further complicate what would already be a complex process of merits briefing and argument. Review of the content-moderation and individualized-explanation provisions would itself require consideration of more than a half-dozen distinct provisions contained in two different state laws. If the Court took up the general-disclosure provisions as well, the total number of provisions at issue would be more than a dozen. And because each of the general-disclosure provisions imposes a distinct requirement, the Court’s conclusions about the burdens and interests implicated by one provision would not necessarily carry over to the others; instead, a provision-by-provision analysis would likely be necessary.

And, yes, there is some truth to the claim that this would complicate matters further, but I have a real problem with the second paragraph, which points to the pre-enforcement posture as a reason not to hear the case. There’s a reason 1st Amendment challenges to laws like these often need to be brought pre-enforcement: the very nature of such laws is that they create serious chilling effects, in that impacted parties won’t even attempt the activity the law targets, for fear of becoming a target themselves.

Having to wait until the law is enforced means that many websites will likely water down their content moderation “standards” preemptively, knowing that those standards will have to be released publicly.

Furthermore, I think there’s another reason the Biden administration likely doesn’t want this type of regulatory requirement challenged: the administration has (unfortunately) endorsed or supported similar laws requiring disclosure of policies, often pushed by Democrats in an effort to “shame” social media companies into moderating certain types of content.

I still hope that the Supreme Court takes up all of these issues, but there’s at least a decent chance that it will agree with the SG and take up only the core content moderation issues. That’s better than the Court not taking the case at all, but it will still leave the problematic part of the 11th Circuit’s ruling in place.

Filed Under: 1st amendment, biden administration, content moderation, florida, solicitor general, supreme court, texas, transparency, zauderer
Companies: ccia, netchoice

NY’s ‘Hateful Conduct’ Social Media Law Blocked As Unconstitutional

from the some-good-news dept

Last summer, we wrote about New York’s law requiring websites to have “hateful conduct” policies, noting that it was “ridiculous” and “likely unconstitutional.” The law was passed in the wake of the horrific Buffalo supermarket shooting, after which the state’s Governor and Attorney General sought to blame the internet, rather than the government’s own failings that contributed to the death toll.

While we noted the law wasn’t quite as bad as some other state laws, it was very problematic, in that it was pretty clearly trying to force websites to pull down content even if that content was constitutionally protected speech. Some people argued back that since the law didn’t really require anything other than having a policy and some transparency, it would pass muster.

Thankfully, though, the first court to take a look has agreed with me, and granted an injunction barring the law from taking effect over constitutional concerns. The ruling is… really good, and really clear.

With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc (“the Hateful Conduct Law” or “the law”). Yet, the First Amendment protects from state regulation speech that may be deemed “hateful” and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal. In the face of our national commitment to the free expression of speech, even where that speech is offensive or repugnant, Plaintiffs’ motion for preliminary injunction, prohibiting enforcement of the law, is GRANTED.

The ruling then digs into the details, and notes that the requirement for a hateful conduct policy is compelling speech, which is a problem under the 1st Amendment:

Plaintiffs argue that the law regulates the content of their speech by compelling them to speak on an issue on which they would otherwise remain silent. (Pl.’s Mem., ECF No. 9 at 12; Tr., ECF No. 27 at 47:5–13.) Defendant argues that the law regulates conduct, as opposed to speech, because there is no requirement for how a social media network must respond to any complaints and because the law does not even require the network to specifically respond to a complaint of hateful content. (Def.’s Opp’n, ECF No. 21 at 9.) Instead, the law merely requires that the complaint mechanism allows the network to respond, if that is the social media network’s policy. (Tr., ECF No. 27 at 11:25–12:4.)

Defendant likens the Hateful Conduct Law to the regulation upheld in Restaurant Law Ctr. v. City of New York, which required fast-food employers to set up a mechanism for their employees to donate a portion of their paychecks to a non-profit of that employee’s choosing. 360 F. Supp. 3d 192 (S.D.N.Y. 2019). The court found that this did not constitute “speech”—nor did it constitute “compelled speech”—noting that the “ministerial act” of administering payroll deductions on behalf of their employees did not constitute speech for the employers. Id. at 214. As such, the court applied rational basis review and found that the regulation passed muster. Id. at 221.

However, those facts are not applicable here. The Hateful Conduct Law does not merely require that a social media network provide its users with a mechanism to complain about instances of “hateful conduct”. The law also requires that a social media network must make a “policy” available on its website which details how the network will respond to a complaint of hateful content. In other words, the law requires that social media networks devise and implement a written policy—i.e., speech.

Furthermore, the court notes that the law more or less demands a specific kind of “hateful conduct” policy.

Similarly, the Hateful Conduct Law requires a social media network to endorse the state’s message about “hateful conduct”. To be in compliance with the law’s requirements, a social media network must make a “concise policy readily available and accessible on their website and application” detailing how the network will “respond and address the reports of incidents of hateful conduct on their platform.” N.Y. Gen. Bus. Law § 394-ccc(3). Implicit in this language is that each social media network’s definition of “hateful conduct” must be at least as inclusive as the definition set forth in the law itself. In other words, the social media network’s policy must define “hateful conduct” as conduct which tends to “vilify, humiliate, or incite violence” “on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” N.Y. Gen. Bus. Law § 394-ccc(1)(a). A social media network that devises its own definition of “hateful conduct” would risk being in violation of the law and thus subject to its enforcement provision.

It’s good to see a court recognize that compelled speech is a 1st Amendment problem.

There are other problems as well that will create real chilling effects on speech:

The potential chilling effect to social media users is exacerbated by the indefiniteness of some of the Hateful Conduct Law’s key terms. It is not clear what the terms like “vilify” and “humiliate” mean for the purposes of the law. While it is true that there are readily accessible dictionary definitions of those words, the law does not define what type of “conduct” or “speech” could be encapsulated by them. For example, could a post using the hashtag “BlackLivesMatter” or “BlueLivesMatter” be considered “hateful conduct” under the law? Likewise, could social media posts expressing anti-American views be considered conduct that humiliates or vilifies a group based on national origin? It is not clear from the face of the text, and thus the law does not put social media users on notice of what kinds of speech or content is now the target of government regulation.

Last year, we had Prof. Eric Goldman on our podcast to discuss how many lawmakers (and some courts…) were insisting that the “Zauderer test” meant that it was okay to mandate transparency on social media policies. Both the 11th Circuit and the 5th Circuit‘s ruling in the Florida and Texas social media bills actually found the transparency requirements to be okay based on Zauderer. However, Goldman has argued (compellingly!) that both courts are simply misreading the Zauderer standard, which was limited to transparency around advertising, and only required transparency of “purely factual information” that was “uncontroversial” and for the purpose of preventing consumer deception.

All of that suggests that the Zauderer test should not and could not apply to laws mandating social media content moderation policy transparency.

Thankfully, it appears that this court in NY agrees, rejecting the attempts by the state to argue that because this is “commercial speech,” the law is fine. Not so, says the court:

The policy disclosure at issue here does not constitute commercial speech and conveys more than a “purely factual and uncontroversial” message. The law’s requirement that Plaintiffs publish their policies explaining how they intend to respond to hateful content on their websites does not simply “propose a commercial transaction”. Nor is the policy requirement “related solely to the economic interests of the speaker and its audience.” Rather, the policy requirement compels a social media network to speak about the range of protected speech it will allow its users to engage (or not engage) in. Plaintiffs operate websites that are directly engaged in the proliferation of speech—Volokh operates a legal blog, whereas Rumble and Locals operate platforms where users post video content and comment on other users’ videos.

Goldman wrote a detailed post on this ruling as well and notes the importance of how the court handles Zauderer:

The court’s categorical rejection of Zauderer highlights how Zauderer evangelists are using the precedent to normalize/justify censorship. This is why the Supreme Court needs to grant cert in the Florida and Texas cases. Ideally the Supreme Court will reiterate that Zauderer is a niche exception of limited applicability that does not include mandatory editorial transparency. Once Zauderer is off the table and legislatures are facing strict scrutiny for their mandated disclosures, I expect they will redirect their censorial impulses elsewhere.

Anyway, it’s good to see a clear rejection of this law. Hopefully we see more of that (and that this ruling stands on the inevitable appeal).

Filed Under: 1st amendment, compelled speech, eugene volokh, free speech, hateful conduct, new york, social media, transparency, zauderer
Companies: locals, rumble

Techdirt Podcast Episode 334: Can You Mandate Editorial Transparency?

from the not-so-fast dept

Amidst all the conversation around regulating social media, algorithmic amplification, and disinformation, one idea that tends to get a lot of broad support is mandating editorial transparency. After all, it sounds nice, since transparency is usually a good thing. But in fact, there are huge legal and conceptual problems with mandated transparency. Santa Clara Law’s Eric Goldman has written papers on the constitutionality of the idea and an important Supreme Court case related to this question, and this week he joins the podcast to discuss why mandated transparency isn’t as good (or as constitutional) as many people claim.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Filed Under: eric goldman, free speech, podcast, supreme court, transparency, zauderer