ExTwitter’s New Lawsuit Accurately Argues California Deepfake Law Violates First Amendment, Section 230
from the broken-clocks dept
Yes, sometimes Elon Musk actually does file good First Amendment cases that help protect free speech. I’m just as amazed as anyone, but it’s worth calling it out when he does the right thing.
It’s true of his newest lawsuit against California over yet another bogus, First Amendment-ignoring law, this one having to do with deepfakes.
We similarly cheered on Musk’s earlier lawsuit against California over its unconstitutional social media “transparency” law, and we were vindicated when the Ninth Circuit held that the law violated the First Amendment.
Like in that first lawsuit, ExTwitter has hired Floyd Abrams, one of the most well-known First Amendment lawyers out there, to challenge one of California’s new anti-deepfake laws. The complaint makes a compelling case that AB 2655 is unconstitutional on multiple fronts:
AB 2655 requires large online platforms like X, the platform owned by X Corp. (collectively, the “covered platforms”), to remove and alter (with a label) — and to create a reporting mechanism to facilitate the removal and alteration of — certain content about candidates for elective office, elections officials, and elected officials, of which the State of California disapproves and deems to be “materially deceptive.” It has the effect of impermissibly replacing the judgments of covered platforms about what content belongs on their platforms with the judgments of the State. And it imposes liability on the covered platforms to the extent that their judgments about content moderation are inconsistent with those imposed by the State. AB 2655 thus violates the First and Fourteenth Amendments of the United States Constitution; the free speech protections of Article I, Section 2, of the California Constitution; and the immunity provided to “interactive computer services” under Section 230 of the Communications Decency Act, 47 U.S.C. § 230(c).
Worse yet, AB 2655 creates an enforcement system that incentivizes covered platforms to err on the side of removing and/or labeling any content that presents even a close call as to whether it is “materially deceptive” and otherwise meets the statute’s requirements. This system will inevitably result in the censorship of wide swaths of valuable political speech and commentary and will limit the type of “uninhibited, robust, and wide-open” “debate on public issues” that core First Amendment protections are designed to ensure. New York Times v. Sullivan, 376 U.S. 254, 270 (1964). As the United States Supreme Court has recognized, our strong First Amendment protections for such speech are based on our nation’s “profound national commitment” to protecting such debate, even if it often “include[s] vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials.”
The complaint is strong and presents a clear explanation of the myriad problems with this law.
AB 2655 suffers from a compendium of serious First Amendment infirmities. Primary among them is that AB 2655 imposes a system of prior restraint on speech, which is the “most serious and the least tolerable infringement on First Amendment rights.” Nebraska Press Ass’n v. Stuart, 427 U.S. 539, 559 (1976). The statute mandates the creation of a system designed to allow for expedited “take downs” of speech that the State has targeted for removal from covered platforms in advance of publication. The government is involved in every step of that system: it dictates the rules for reporting, defining, and identifying the speech targeted for removal; it authorizes state officials (including Defendants here) to bring actions seeking removal; and, through the courts, it makes the ultimate determination of what speech is permissible. Rather than allow covered platforms to make their own decisions about moderation of the content at issue here, it authorizes the government to substitute its judgment for those of the platforms.
It is difficult to imagine a statute more in conflict with core First Amendment principles. As the United States Supreme Court has held, “it is a central tenet of the First Amendment that the government must remain neutral in the marketplace of ideas.” Hustler Magazine, Inc. v. Falwell, 485 U.S. 46, 56 (1988). Even worse, AB 2655’s system of prior restraint censors speech about “public issues and debate on the qualifications of candidates,” to which the “First Amendment affords the broadest protection” to ensure the “unfettered interchange of ideas for the bringing about of political and social changes desired by the people.” McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 346 (1995).
If challenging these deepfake laws sounds familiar, that’s because there was already one challenge to AB 2655, brought by a user whom California Governor Gavin Newsom directly called out as someone the law was designed to silence. In that case, two of the laws were challenged, and the court (very, very quickly) issued an injunction against the other one, AB 2839, which was set to go into effect immediately. The challenge to AB 2655 was put on the back burner, since that law wasn’t set to go into effect until January 1st of next year.
Now ExTwitter is jumping in to challenge it as well, and hopefully it succeeds. The complaint is well done and makes good points, and I’m happy that Elon is challenging the law in this way. One hopes the legal team representing him might also explain to him how the First Amendment actually works, so he stops misrepresenting it in other contexts.
It’s also good to see that the complaint makes a big deal of how Section 230 protects ExTwitter from such laws, especially given how Elon’s best buddy, Donald Trump, has made noises about stripping Section 230 protections from websites.
AB 2655 directly contravenes the immunity provided to the covered platforms by 47 U.S.C. § 230(c)(1), which prohibits treating interactive computer service providers as the “publisher or speaker of any information provided by another information content provider.”
AB 2655’s Enforcement Provisions violate Section 230(c)(1) because they provide causes of action for “injunctive or other equitable relief against” the covered platform to remove or (by adding a disclaimer) alter certain content posted on the platform by its users. See §§ 20515(b), 20516. AB 2655 thus treats covered platforms “as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1).
Section 230(c)(1) bars such liability where the alleged duty violated derives from an entity’s conduct as a “publisher,” including “reviewing, editing, and deciding whether to publish or withdraw from publication third-party content.” See, e.g., Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009) (finding that Yahoo! was entitled to immunity under Section 230(c)(1) from claims concerning failure to remove offending profile), as amended (Sept. 28, 2009); Calise v. Meta Platforms, Inc., 103 F.4th 732, 744 (9th Cir. 2024) (finding that Meta was immune under Section 230(c)(1) from claims that would require Meta to “actively vet and evaluate third-party ads” in order to remove them).
The complaint also praises the Supreme Court’s good ruling in Moody, which recognized that social media sites have a First Amendment right to present content however they want:
Even if AB 2655 were not a prior restraint, it still violates the First Amendment because it runs counter to the United States Supreme Court’s recent decision in Moody v. NetChoice, LLC, in which the Court held, in no uncertain terms, that when a social media platform “present[s] a curated and ‘edited compilation of [third party] speech,’” that presentation “is itself protected speech.” 144 S. Ct. 2383, 2409 (2024) (quoting Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Boston, 515 U.S. 557, 570 (1995)); see also id. at 2401 (“A private party’s collection of third-party content into a single speech product (the operators’ ‘repertoire’ of programming) is itself expressive, and intrusion into that activity must be specially justified under the First Amendment.”); id. at 2405 (quoting Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241, 258 (1974)) (“‘The choice of material,’ the ‘decisions made [as to] content,’ the ‘treatment of public issues’ — ‘whether fair or unfair’ — all these ‘constitute the exercise of editorial control and judgment.’ . . . For a paper, and for a platform too.”). Because AB 2655 impermissibly replaces the judgments of the covered platforms about what speech may be permitted on their platforms with those of the government, it cannot be reconciled with the Supreme Court’s decision in Moody.
AB 2655 disregards numerous significant First Amendment holdings by the Supreme Court in Moody — specifically, that (i) it is not a “valid, let alone substantial” interest for a state to seek “to correct the mix of speech” that “social-media platforms present,” id. at 2407; (ii) a “State ‘cannot advance some points of view by burdening the expression of others,’” id. at 2409 (quoting Pac. Gas & Elec. Co. v. Pub. Utilities Comm’n of California, 475 U.S. 1, 20 (1986)); (iii) the “government may not, in supposed pursuit of better expressive balance, alter a private speaker’s own editorial choices about the mix of speech it wants to convey,” id. at 2403; (iv) “it is no job for government to decide what counts as the right balance of private expression — to ‘un-bias’ what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others,” id. at 2394; and (v) “[h]owever imperfect the private marketplace of ideas,” a “worse proposal” is “the government itself deciding when speech [is] imbalanced, and then coercing speakers to provide more of some views or less of others,” id. at 2403.
Again, this seems important, given that the ruling in Moody was a rebuke to problematic GOP-backed laws that sought to force social media companies to host speech they didn’t want to host.
All in all, this is a strong complaint that is entirely consistent with core First Amendment principles. I’m glad that Elon was willing to have ExTwitter step up and bring it, even if he’s doing so for purely selfish reasons.
Filed Under: 1st amendment, ab 2655, california, content moderation, deepfakes, elon musk, free speech, gavin newsom, section 230
Companies: twitter, x