
More Of RFK Jr.’s ‘Don’t Moderate Me, Bro’ Cases Are Laughed Out Of Court

from the that's-not-how-any-of-this-works dept

In the last month, I wrote about two of Robert F. Kennedy Jr.’s batshit crazy lawsuits over him being very, very mad that social media companies keep moderating or limiting the spread of his dangerous anti-vax nonsense. In one, the Ninth Circuit had to explain (not for the first time) to RFK Jr. and his disgraced Yale Law professor lawyer, Jed Rubenfeld, that Meta fact-checking RFK Jr. does not violate the First Amendment, and that Section 230 does not turn every internet company into a state actor.

In the other case, one of the MAGA world’s favorite judges ignored both the facts and the scolding he just got from the Supreme Court to insist that the Biden administration has been trying to censor RFK Jr., a thing that has not actually happened.

But Professor Eric Goldman reminds me that there were two other cases involving RFK Jr. and his anger at being moderated that had developments that I hadn’t covered. And both of them were, thankfully, not in courtrooms of partisan judges who live in fantasylands.

First, we had a case in which RFK Jr. sued Meta again. I had mentioned this case when it was filed. The Ninth Circuit case mentioned above was also against Meta, but RFK Jr. decided to try yet again, this time claiming that Meta’s efforts to restrict a documentary about him violated his First Amendment rights.

If you don’t recall, Meta very temporarily blocked the ability to share the documentary, which it chalked up to a glitch. It fixed the problem very quickly. But RFK Jr. insisted it was a deliberate attempt to silence him, citing Meta’s AI chatbot as giving him the smoking gun (yes, they really did this, even though the chatbot is just a stochastic parrot spewing whatever it thinks will answer a question).

What I had missed was that district court Judge William Orrick, who is not known for suffering fools gladly, has rejected RFK Jr.’s demands for a preliminary injunction. Judge Orrick is, shall we say, less than impressed by RFK Jr. returning to the well for another attempt at this specious argument, citing the very Ninth Circuit case that RFK Jr. just lost in his other case against Meta.

The plaintiffs assert that they are likely to succeed on the merits of their First Amendment claim, which is that Meta violated their rights to free speech by censoring their posts and accounts on Meta’s platforms. But the First Amendment “‘prohibits only governmental abridgment of speech’ and ‘does not prohibit private abridgment of speech.’” Children’s Health Def. v. Meta Platforms, Inc., —F. 4th—, No. 21-16210, 2024 WL 3734422, at *4 (9th Cir. Aug. 9, 2024) (first quoting Manhattan Cmty. Access Corp. v. Halleck, 587 U.S. 802, 808 (2019); and then citing Prager Univ. v. Google LLC, 951 F.3d 991, 996 (9th Cir. 2020)). Because there is no apparent state action, this claim is unlikely to succeed.

RFK Jr. twists himself into a pretzel to try to claim that Meta is magically a state actor, but the court has to remind him that these arguments are quite stupid.

The Ninth Circuit recently has twice affirmed dismissal of claims filed by plaintiffs alleging that social media platforms violated the plaintiffs’ First Amendment rights by flagging, removing, or otherwise “censoring” the plaintiffs’ content shared on those platforms. See Children’s Health, 2024 WL 3734422 at *2–4; O’Handley, 62 F.4th at 1153–55. In both cases, the Ninth Circuit held that the plaintiffs’ claims failed at the first step of the state action framework because of “the simple fact” that the defendants “acted in accordance with [their] own content-moderation policy,” not with any government policy….

The only difference between those cases and this one is that here, the plaintiffs seem to allege that the “specific” harmful conduct is Meta’s censorship itself, rather than its policy of censoring. Based on the documents submitted and allegations made, that is a distinction without a difference.

RFK Jr. tried to argue that the ruling by Judge Doughty in Louisiana supports his position, but Judge Orrick wasn’t born yesterday, and he can actually read what the Supreme Court wrote in the Murthy decision rejecting these kinds of arguments.

The Murthy opinion makes my decision here straightforward. Murthy rejected Missouri’s factual findings and specifically explained that the Missouri evidence did not show that the federal government caused the content moderation decisions. Yet here, the plaintiffs rely on Missouri as their evidence that a state rule caused the defendants’ alleged censorship actions. Even if I accepted the vacated district court order as evidence here—which I do not—the Supreme Court has plainly explained why it does not support the plaintiffs’ argument.

Even though he notes that he doesn’t even need to go down this road, Judge Orrick also explains why the whole “state actor” argument is nonsense as well:

The plaintiffs’ theory is that Meta and the government colluded or acted jointly, or the government coerced Meta, to remove content related to Kennedy’s 2024 presidential campaign from Meta’s platforms. The problem with that theory is again the lack of evidence. The Missouri and Kennedy findings were rejected by the Supreme Court, as explained above. And they—and the interim report—suggest at most a relationship or communications between Meta and the government about removal of COVID-19 misinformation in 2020 and 2021. Even if the plaintiffs proved that Meta and the government acted jointly, or colluded, or that Meta was coerced by the government to remove and flag COVID-19 misinformation three years ago, that says nothing about Meta’s relationship and communications with the government in 2024. Nor does it suggest that Meta and the government worked together to remove pro-Kennedy content from Meta’s platforms.

Because of this, the plaintiffs fail to show likelihood of success on the merits—or serious questions going to the merits—for any of the three possible state action prongs. They do not provide evidence or allegations of a “specific[]” agreement between Meta and the government to specifically accomplish the goal of removing Kennedy content from Meta platforms. See Children’s Health, 2024 WL 3734422, at *5 (describing joint action test and collecting cases). Nor do they show that the government exercised coercive power or “significant encouragement” for Meta to remove Kennedy-related content in 2024. Id. at *9–10 (describing coercion test and finding that allegations about Congressmembers’ public criticism of COVID-19 misinformation on social media sites was insufficient to show government coerced platforms to remove it). And for similar reasons, the plaintiffs do not establish a “sufficiently close nexus” between the government and the removal of Kennedy-related content from Meta’s platforms. Id. at *5. Their First Amendment claim accordingly fails at step two of the state action inquiry. It is far from likely to succeed on the merits.

RFK Jr. also made a Voting Rights Act claim, that removing the documentary about him somehow interfered with people’s rights to vote for him. But the court notes that this argument is doomed by the fact that Meta noted that the blocking of links was an accident, which happens all the time:

The defendants point to compelling evidence that the video links were incorrectly automatically flagged as a phishing attack, a “not uncommon” response by its automated software to newly created links with high traffic flow. Oppo. 5–6 (citing Mehta Decl. Ex. A ¶ 7). The defendants’ evidence shows that once the defendants were alerted to the problem, through channels set up specifically for that purpose, the links were restored, and the video was made (and is currently still) available on its platform. Mehta Decl. Ex. A. ¶¶ 4–8, Exs. M–Q. Though the plaintiffs say the removal of the video was an effort to coerce them to not urge people to vote for Kennedy, the defendants’ competing evidence shows that it was a technological glitch and that the plaintiffs were aware of this glitch because they reported the problem in the first place. And if the plaintiffs were aware that a tech issue caused the removal of the videos, with that “context” it would probably not be reasonable for them to believe the video links were removed in an effort to coerce or intimidate them.

The court is also not impressed by the argument that other people (not parties to the case) had accounts removed or limited for sharing support for RFK Jr. As the judge makes clear, RFK Jr. doesn’t get to sue someone over a claim that they intimidated someone else (for which there isn’t any actual evidence anyway).

Third, the plaintiffs submit evidence that other peoples’ accounts were censored, removed, or threatened with removal when they posted any sort of support for Kennedy and his candidacy. See, e.g., Repl. 1:13–24; [Dkt No. 29-1] Exs. A, B. The defendants fail to respond to these allegations in their opposition, but the reason for this failure seems obvious. Section 11(b) provides a private right of action for Person A where Person B has intimidated, threatened, or coerced Person A “for urging or aiding any person to vote.” 52 U.S.C.A. § 10307(b). It does not on its face, or in any case law I found or the parties cite, provide a private right of action for Person C to sue Person B for intimidating, threatening, or coercing Person A “for urging or aiding any person to vote.” Id. Using that example, the three plaintiffs would be “Person C.” Their evidence very well might suggest that Meta is censoring other users’ pro-Kennedy content. But those users are not plaintiffs in this case and are not before me now.

Importantly, the plaintiffs had plenty of time and opportunity to add any of those affected users as new plaintiffs in this case, as they added Reed Kraus between filing the initial complaint and filing the AC and current motion. But they did not do so. Nor do they allege or argue that AV24 has some sort of organizational or third-party standing to assert the claims of those affected users. And while they seem to say that Kennedy himself is affected because that evidence shows Meta users are being coerced or threatened for urging people to vote for him, the effect on the candidate is not what § 11(b) protects. Accordingly, this evidence does not support the plaintiffs’ assertions. The plaintiffs, therefore, fail to counter the compelling evidence and reasons that the defendants identify in explanation for the alleged censorship.

More critically, the plaintiffs do not deny the defendants’ portrayal of and reasons for the defendants’ actions. The plaintiffs fail to incorporate those reasons into their assessment of how a “reasonable” recipient of Meta’s communications would interpret the communications in “context.” See Wohl III, 661 F. Supp. 3d at 113. Based on the evidence provided so far, a reasonable recipient of Meta’s communications would be unlikely to view them as even related to voting, let alone as coercing, threatening, or intimidating the recipient with respect to urging others to vote.

Towards the end of the ruling, the court finally gets to Section 230 and notes that the case is probably going nowhere even without everything earlier, because Section 230 makes Meta immune from liability for its moderation actions. However, the ruling didn’t hinge on that, because neither side really went deep on the 230 arguments.

As for the other RFK Jr. case, I had forgotten that he had also sued Google/YouTube over its moderation efforts. At the end of last month, the Ninth Circuit also upheld a lower court ruling on that case in an unpublished four-page opinion where the three-judge panel made quick work of the nonsense lawsuit:

Google asserts that it is a private entity with its own First Amendment rights and that it removed Kennedy’s videos on its own volition pursuant to its own misinformation policy and not at the behest of the federal government. Kennedy has not rebutted Google’s claim that it exercised its independent editorial choice in removing his videos. Nor has Kennedy identified any specific communications from a federal official to Google concerning the removed Kennedy videos, or identified any threatening or coercive communication, veiled or otherwise, from a federal official to Google concerning Kennedy. As Kennedy has not shown that Google acted as a state actor in removing his videos, his invocation of First Amendment rights is misplaced. The district court’s denial of a preliminary injunction is AFFIRMED.

If RFK Jr. intends to appeal the latest Meta ruling (and given the history of his frivolous litigation, the chances seem quite high that he will), the Ninth Circuit might want to just repurpose this paragraph and swap out the “Google” for “Meta” each time.

Now, if only the Fifth Circuit would learn a lesson or two from the Ninth Circuit (or the Supreme Court), we could finally dispense with the one case that ridiculously went in RFK Jr.’s favor.

Filed Under: 1st amendment, 9th circuit, content moderation, free speech, jed rubenfeld, rfk jr., state actor, voting rights act, william orrick
Companies: google, meta, youtube

Court To RFK Jr.: Fact-Checking Doesn’t Violate 1st Amendment Nor Does Section 230 Make Meta A State Actor

from the that's-not-how-any-of-this-works dept

You may recall that RFK Jr.’s nonsense-peddling anti-vax organization “Children’s Health Defense” (CHD) sued Meta back in 2020 for the apparent crime of fact-checking and limiting the reach of the anti-vax nonsense it posted. Three years ago, the case was tossed out of court (easily) with the court pointing out that Meta is (*gasp*) a private entity that has the right to do all of this under its own free speech rights. The court needed to explain that the First Amendment applies to the government, and Meta is not the government.

Yes, Meta looked to the CDC for guidance on vaccine info, but that did not turn it into a state actor. It was a pretty clear and easy ruling smacking down CHD (represented, in part, by disgraced Yale law professor Jed Rubenfeld). So, of course RFK Jr. and CHD appealed.

Last week, the Ninth Circuit smacked them down again. And we learn that it’s going to go… very… slowly… to hopefully help RFK Jr. and Rubenfeld understand these things this time:

To begin by stating the obvious, Meta, the owner of Facebook, is a private corporation, not a government agency.

Yes, the majority opinion admits that there are some rare cases where private corporations can be turned into state actors, but this ain’t one of them.

CHD’s state-action theory fails at this threshold step. We begin our analysis by identifying the “specific conduct of which the plaintiff complains.” Wright, 48 F.4th at 1122 (quoting American Mfrs. Mut. Ins. Co. v. Sullivan, 526 U.S. 40, 51 (1999)). CHD challenges Meta’s “policy of censoring” posts conveying what it describes as “accurate information . . . challenging current government orthodoxy on . . . vaccine safety and efficacy.” But “the source of the alleged . . . harm,” Ohno, 723 F.3d at 994, is Meta’s own “policy of censoring,” not any provision of federal law. The closest CHD comes to alleging a federal “rule of conduct” is the CDC’s identification of “vaccine misinformation” and “vaccine hesitancy” as top priorities in 2019. But as we explain in more detail below, those statements fall far short of suggesting any actionable federal “rule” that Meta was required to follow. And CHD does not allege that any specific actions Meta took on its platforms were traceable to those generalized federal concerns about vaccine misinformation.

And, even if it could pass that first step, it would also fail at the second step of the test:

CHD’s failure to satisfy the first part of the test is fatal to its state action claim. See Lindke v. Freed, 601 U.S. 187, 198, 201 (2024); but see O’Handley, 62 F.4th at 1157 (noting that our cases “have not been entirely consistent on this point”). Even so, CHD also fails under the second part. As we have explained, the Supreme Court has identified four tests for when a private party “may fairly be said to be a state actor”: (1) the public function test, (2) the joint action test, (3) the state compulsion test, and (4) the nexus test. Lugar, 457 U.S. at 937, 939.

CHD invokes two of those theories of state action as well as a hybrid of the two. First, it argues that Meta and the federal government agreed to a joint course of action that deprived CHD of its constitutional rights. Second, it argues that Meta deprived it of its constitutional rights because government actors pressured Meta into doing so. Third, it argues that the “convergence” of “joint action” and “pressure,” as well as the “immunity” Meta enjoys under 47 U.S.C. § 230, make its allegations that the government used Meta to censor disfavored speech all the more plausible. CHD cannot prevail on any of these theories.

The majority opinion makes clear that CHD never grapples with the basic idea that the reason Meta might have taken action on CHD’s anti-vax nonsense was that it didn’t want kids to die because of anti-vax nonsense. Instead, it assumes without evidence that it must be the government censoring them.

But the facts that CHD alleges do not make that inference plausible in light of the obvious alternative—that the government hoped Meta would cooperate because it has a similar view about the safety and efficacy of vaccines.

Furthermore, the Court cites the recent Murthy decision at the Supreme Court (on a tangentially related issue) and highlights how Meta frequently pushed back on or disagreed with points raised by the government.

In any event, even if we were to consider the documents, they do not make it any more plausible that Meta has taken any specific action on the government’s say-so. To the contrary, they indicate that Meta and the government have regularly disagreed about what policies to implement and how to enforce them. See Murthy, 144 S. Ct. at 1987 (highlighting evidence “that White House officials had flagged content that did not violate company policy”). Even if Meta has removed or restricted some of the content of which the government disapproves, the evidence suggests that Meta “had independent incentives to moderate content and . . . exercised [its] own judgment” in so doing.

As for the fact that Meta offers a portal for some to submit reports, that doesn’t change the fact that it’s still judging those reports against its own policies and not just obeying the government.

That the government submitted requests for removal of specific content through a “portal” Meta created to facilitate such communication does not give rise to a plausible inference of joint action. Exactly the same was true in O’Handley, where Twitter had created a “Partner Support Portal” through which the government flagged posts to which it objected. 62 F.4th at 1160. Meta was entitled to encourage such input from the government as long as “the company’s employees decided how to utilize this information based on their own reading of the flagged posts.” Id. It does not become an agent of the government just because it decides that the CDC sometimes has a point.

The majority opinion also addresses the question of whether or not Meta was coerced. It first notes that if that were the issue, then Meta itself probably wouldn’t be the right defendant; the government would be. But then it notes the near total lack of evidence of coercion.

CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy. Instead, it cites statements by Members of Congress criticizing social media companies for allowing “misinformation” to spread on their platforms and urging them to combat such content because the government would hold them “accountable” if they did not. Like the “generalized federal concern[s]” in Mathis II, those statements do not establish coercion because they do not support the inference that the government pressured Meta into taking any specific action with respect to speech about vaccines. Mathis II, 75 F.3d at 502. Indeed, some of the statements on which CHD relies relate to alleged misinformation more generally, such as a statement from then-candidate Biden objecting to a Facebook ad that falsely claimed that he blackmailed Ukrainian officials. All CHD has pleaded is that Meta was aware of a generalized federal concern with misinformation on social media platforms and that Meta took steps to address that concern. See id. If Meta implemented its policy at least in part to stave off lawmakers’ efforts to regulate, it was allowed to do so without turning itself into an arm of the federal government.

To be honest, I’m not so sure of the last line there. If it’s true that Meta implemented policies because it wanted to avoid regulation, that strikes me as a potential First Amendment violation, but again, one that should be targeted at the government, not Meta.

The opinion also notes that angry letters from Rep. Adam Schiff and Senator Amy Klobuchar did not appear to cross the coercive line, despite being aggressive.

But in contrast to cases where courts have found coercion, the letters did not require Meta to take any particular action and did not threaten penalties for noncompliance….

Again, I think the opinion goes a bit too far here in suggesting that legislators mostly don’t have coercive power by themselves, giving them more leeway to send these kinds of letters.

Unlike “an executive official with unilateral power that could be wielded in an unfair way if the recipient did not acquiesce,” a single legislator lacks “unilateral regulatory authority.” Id. A letter from a legislator would therefore “more naturally be viewed as relying on her persuasive authority rather than on the coercive power of the government.”

I think that’s wrong. I’ve made the case that it’s bad when legislators threaten to punish companies for speech, and it’s frustrating that both Democrats and Republicans seem to do it regularly. Here, the Ninth Circuit seems to bless that practice, which is a bit frustrating and could lead to more attempts by legislators to suppress speech.

The Court then dismisses CHD’s absolutely laughable Section 230 state action theory. This was Rubenfeld’s baby. In January 2021, Rubenfeld co-authored one of the dumbest WSJ op-eds we’ve ever seen with a then mostly unknown “biotech exec” named Vivek Ramaswamy, arguing that Section 230 made social media companies into state actors. A few months later, Rubenfeld joined RFK Jr.’s legal team to push this theory in court.

It failed at the district court level and it fails here again on appeal.

The immunity from liability conferred by section 230 is undoubtedly a significant benefit to companies like Meta that operate social media platforms. It might even be the case that such platforms could not operate at their present scale without section 230. But many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government. If that were enough for state action, every large government contractor would be a state actor. But that is not the law.

The opinion notes that this crazy theory is based on a near complete misunderstanding of case law and how Section 230 works. Indeed, the Court calls out this argument as “exceptionally odd.”

It would be exceptionally odd to say that the government, through section 230, has expressed any preference at all as to the removal of anti-vaccine speech, because the statute was enacted years before the government was concerned with speech related to vaccines, and the statute makes no reference to that kind of speech. Rather, as the text of section 230(c)(2)(A) makes clear—and as the title of the statute (i.e., the “Communications Decency Act”) confirms—a major concern of Congress was the ability of providers to restrict sexually explicit content, including forms of such content that enjoy constitutional protection. It is not difficult to find examples of Members of Congress expressing concern about sexually explicit but constitutionally protected content, and many providers, including Facebook, do in fact restrict it. See, e.g., 141 Cong. Rec. 22,045 (1995) (statement of Rep. Wyden) (“We are all against smut and pornography . . . .”); id. at 22,047 (statement of Rep. Goodlatte) (“Congress has a responsibility to help encourage the private sector to protect our children from being exposed to obscene and indecent material on the Internet.”); Shielding Children’s Retinas from Egregious Exposure on the Net (SCREEN) Act, S. 5259, 117th Cong. (2022); Adult Nudity and Sexual Activity, Meta, https://transparency.fb.com/policies/communitystandards/adult-nudity-sexual-activity [https://perma.cc/SJ63-LNEA] (“We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content.”). While platforms may or may not share Congress’s moral concerns, they have independent commercial reasons to suppress sexually explicit content. “Such alignment does not transform private conduct into state action.”

Indeed, it points to the ridiculous logical conclusion of this Rubenfeld/Ramaswamy argument:

If we were to accept CHD’s argument, it is difficult to see why would-be purveyors of pornography would not be able to assert a First Amendment challenge on the theory that, viewed in light of section 230, statements from lawmakers urging internet providers to restrict sexually explicit material have somehow made Meta a state actor when it excludes constitutionally protected pornography from Facebook. So far as we are aware, no court has ever accepted such a theory.

Furthermore, the Court makes clear that moderation decisions are up to the private companies, not the courts. And if people don’t like it, the answer is market forces and competition.

Our decision should not be taken as an endorsement of Meta’s policies about what content to restrict on Facebook. It is for the owners of social media platforms, not for us, to decide what, if any, limits should apply to speech on those platforms. That does not mean that such decisions are wholly unchecked, only that the necessary checks come from competition in the market—including, as we have seen, in the market for corporate control. If competition is thought to be inadequate, it may be a subject for antitrust litigation, or perhaps for appropriate legislation or regulation. But it is not up to the courts to supervise social media platforms through the blunt instrument of taking First Amendment doctrines developed for the government and applying them to private companies. Whether the result is “good or bad policy,” that limitation on the power of the courts is a “fundamental fact of our political order,” and it dictates our decision today.

Even more ridiculous than the claims around content being taken down, CHD also claimed that the fact checks on their posts violated [checks notes… checks notes again] the Lanham Act, which is the law that covers things like trademark infringement and some forms of misleading advertising. The Court here basically does a “what the fuck are you guys talking about” to Kennedy and Rubenfeld.

By that definition, Meta did not engage in “commercial speech”—and, thus, was not acting “in commercial advertising or promotion”—when it labeled some of CHD’s posts false or directed users to fact-checking websites. Meta’s commentary on CHD’s posts did not represent an effort to advertise or promote anything, and it did not propose any commercial transaction, even indirectly.

And just to make it even dumber, CHD also had a RICO claim. Because, of course they did. Yet again, we will point you to Ken White’s “It’s not RICO dammit” lawsplainer, but the Court here does its own version:

The causal chain that CHD proposes is, to put it mildly, indirect. CHD contends that Meta deceived Facebook users who visited CHD’s page by mislabeling its posts as false. The labels that Meta placed on CHD’s posts included links to fact-checkers’ websites. If a user followed a link, the factchecker’s website would display an explanation of the alleged falsity in CHD’s post. On the side of the page, the fact-checker had a donation button for the organization. Meanwhile, Meta had disabled the donation button on CHD’s Facebook page. If a user decided to donate to the fact-checking organization, CHD maintains, that money would come out of CHD’s pocket, because CHD and factcheckers allegedly compete for donations in the field of health information.

The alleged fraud—Meta’s mislabeling of CHD’s posts—is several steps removed from the conduct directly responsible for CHD’s asserted injury: users’ depriving CHD of their donation dollars. At a minimum, the sequence relies on users’ independent propensities to intend to donate to CHD, click the link to a fact-checker’s site, and be moved to reallocate funds to that organization. This causal chain is far too attenuated to establish the direct relationship that RICO requires. Proximate cause “is meant to prevent these types of intricate, uncertain inquiries from overrunning RICO litigation.” Anza, 547 U.S. at 460.

CHD’s theory also strains credulity. It is not plausible that someone contemplating donating to CHD would look at CHD’s Facebook page, see the warning label placed there, and decide instead to donate to . . . a fact-checking organization. See Twombly, 550 U.S. at 555. The district court noted that CHD did not allege that any visitors to its page had in fact donated to other organizations because of Meta’s fraudulent scheme. CHD is correct that an actual transfer of money or property is not an element of wire fraud, as “[t]he wire fraud statute punishes the scheme, not its success.” Pasquantino v. United States, 544 U.S. 349, 371 (2005) (alteration in original) (quoting United States v. Pierce, 224 F.3d 158, 166 (2d Cir. 2000)). But the fact that no donations were diverted provides at least some reason to think that no one would have expected or intended the diversion of donations.

I love how the judge includes the… incredulous pause ellipses in that last highlighted section.

Oh, and I have to go back to one point. CHD had originally offered an even dumber RICO theory which it dropped, but the Court still mentions it:

In the complaint, CHD described a scheme whereby Meta placed warning labels on CHD’s posts with the intent to “clear the field” of CHD’s alternative point of view, thus keeping vaccine manufacturers in business so that they would buy ads on Facebook and ensure that Zuckerberg obtained a return on his investments in vaccine technology.

That’s brain worm logic speaking, folks.

There is a partial dissent from (Trump-appointed, natch) Judge Daniel Collins, who says that maybe, if you squint, there is a legitimate First Amendment claim. In part, this is because Collins thinks CHD should be able to submit additional material that wasn’t heard by the district court, which is not how any of this tends to work. You have to present everything at the lower court. The appeals court isn’t supposed to consider any new material beyond, say, new court rulings that might impact this ruling.

Collins then also seems to buy into Rubenfeld’s nutty 230-makes-you-a-state-actor argument. He goes on for a while giving the history of Section 230 (including a footnote pointing out, correctly but pedantically, that it’s Section 230 of the Communications Act of 1934, not of the Communications Decency Act as most people call it — a point that only certain law professors talk about). The history is mostly accurate, highlighting the Stratton Oakmont decision and how it would be impossible to run an internet service if that had stood.

But then… it takes some leaps. Giant leaps, with massive factual errors that are embarrassing for a judge to be making:

The truly gigantic scale of Meta’s platforms, and the enormous power that Meta thereby exercises over the speech of others, are thus direct consequences of, and critically dependent upon, the distinctive immunity reflected in § 230. That is, because such massive third-party-speech platforms could not operate on such a scale in the absence of something like § 230, the very ability of Meta to exercise such unrestrained power to censor the speech of so many tens of millions of other people exists only by virtue of the legislative grace reflected in § 230’s broad immunity. Moreover, as the above discussion makes clear, it was Congress’s declared purpose, in conferring such immunity, to allow platform operators to exercise this sort of wide discretion about what speech to allow and what to remove. In this respect, the immunity granted by § 230 differs critically from other government-enabled benefits, such as the limited liability associated with the corporate form. The generic benefits of incorporation are available to all for nearly every kind of substantive endeavor, and the limitation of liability associated with incorporation thus constitutes a form of generally applicable non-speech regulation. In sharp contrast, both in its purpose and in its effect, § 230’s immunity is entirely a speech-related benefit—it is, by its very design, an immunity created precisely to give its beneficiaries the practical ability to censor the speech of large numbers of other persons.7 Against this backdrop, whenever Meta selectively censors the speech of third parties on its massive platforms, it is quite literally exercising a government-conferred special power over the speech of millions of others. The same simply cannot be said of newspapers making decisions about what stories to run or bookstores choosing what books to carry

I do not suggest that there is anything inappropriate in Meta’s having taken advantage of § 230’s immunity in building its mega-platforms. On the contrary, the fact that it and other companies have built such widely accessible platforms has created unprecedented practical opportunities for ordinary individuals to share their ideas with the world at large. That is, in a sense, exactly what § 230 aimed to accomplish, and in that particular respect the statute has been a success. But it is important to keep in mind that the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled.

Those highlighted sections are simply incorrect. Meta is constitutionally entitled to moderate thanks to the First Amendment. Section 230 simplifies the procedural aspects, in that companies need not fight an expensive and drawn-out First Amendment battle over every moderation decision; Section 230 shortcuts that by granting an immunity that ends cases much faster. But it ends them the same way they would end otherwise, thanks to the First Amendment.

So, basically the key point that Judge Collins rests his dissent on is fundamentally incorrect. And it’s odd that he ignores the recent Moody ruling and even last year’s Taamneh ruling that basically explains why this is wrong.

Collins also seems to fall for the false idea that Section 230 requires a site to be either a platform or a publisher:

Rather, because its ability to operate its massive platform rests dispositively on the immunity granted as a matter of legislative grace in § 230, Meta is a bit of a novel legal chimera: it has the immunity of a conduit with respect to third-party speech, based precisely on the overriding legal premise that it is not a publisher; its platforms’ massive scale and general availability to the public further make Meta resemble a conduit more than any sort of publisher; but Meta has, as a practical matter, a statutory freedom to suppress or delete any third-party speech while remaining liable only for its own affirmative speech

But that’s literally wrong as well. Despite how it’s covered by many politicians and the media, Section 230 does not say that a website is not a publisher. It says that it shall not be treated as a publisher for third-party content even though it is engaging in publishing activities.

Incredibly, Collins even cites (earlier in his dissent) the 9th Circuit's Barnes ruling, which lays this out. In Barnes, the 9th Circuit is quite clear that Section 230 protects Yahoo from being held liable for third-party content explicitly because it is doing everything a publisher would do. Section 230 just removes liability for third-party content so that a website is not treated as a publisher of it, even when it is acting as a publisher.

In that case, the court laid out all the reasons why Yahoo was acting as a publisher, calling what Yahoo engaged in "action that is quintessentially that of a publisher." Then it noted that Yahoo couldn't be held liable for those activities thanks to Section 230. (Yahoo did eventually lose that case, though under a different legal theory related to promissory estoppel, but that's another issue.)

Collins even cites this very language from the Barnes decision:

As we stated in Barnes, “removing content is something publishers do, and to impose liability on the basis of such conduct necessarily involves treating the liable party as a publisher of the content it failed to remove.” Id. “Subsection (c)(1), by itself, shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties.”

So it’s bizarre that pages later, Collins falsely claims that Section 230 means that Meta is claiming not to be a publisher. As the Barnes case makes clear, Section 230 says you don’t treat the publisher of third-party content as a publisher of first-party content. But they’re both publishers of a sort. And Collins seemed to acknowledge this 20 pages earlier… and then… forgot?

Thankfully, Collins’ dissent is only a dissent and not the majority.

Still, as we noted back in May, RFK Jr. and Rubenfeld teamed up a second time to sue Meta yet again, once again claiming that Meta moderating him is a First Amendment violation. That's a wholly different lawsuit, with the major difference being… that RFK Jr. is now a presidential candidate (lol), which somehow makes it a First Amendment violation for Meta to moderate his nonsense.

So the Ninth Circuit should have yet another chance to explain the First Amendment to them both yet again.

Filed Under: 1st amendment, 9th circuit, content moderation, daniel collins, jed rubenfeld, lanham act, rfk jr., rico, section 230, state actor
Companies: children's health defense, meta

A Trio Of Failed Lawsuits Trying To Sue Websites For Moderating Content

from the that's-not-how-any-of-this-works dept

Why do people still file these lawsuits? For years now, we've seen lawsuits filed against websites over their content moderation decisions, despite Section 230 barring them (and the platforms' 1st Amendment rights backing that up). These lawsuits always fail.

Perhaps the reason we're seeing a bunch more of these lately is that a ton of people completely misunderstood (helped along by the guy who I don't think could fairly describe anything if he really tried) what happened with Twitter and Alex Berenson. All of the 1st Amendment claims in Berenson's lawsuit were thrown out easily. The only reason the case moved forward (and then settled) was that an executive at Twitter had made statements to Berenson suggesting he wouldn't have his account blocked, which opened up the possibility (though it still would have been a long shot in court) of a Barnes-style "promissory estoppel" ruling.

But, because of how that case has been widely misrepresented to nonsense peddlers, they seemed to think it was open season on suing platforms. Anyway, all those cases are losing. Here are three examples that all happened recently, and all covered by Professor Eric Goldman. I’m playing a bit of catchup combining all three, but honestly, none of them represent anything ground-breaking or new. They’re just standard foolish lawsuits from people falsely thinking you can sue websites for moderating your content.

First up, we have well-known nonsense peddler and pretend presidential candidate RFK Jr. He's been suing platforms for a while, and it hasn't gone well at all. In this case, RFK argued that YouTube was a "state actor" in taking down some videos, but the court isn't buying it at all, noting the 9th Circuit has already said that such arguments are nonsense.

The Ninth Circuit held that Twitter exercised its own independent judgment in adopting its content moderation policies and enforcing them. Id. at 1158. Additionally, the court held that the “private and state actors were generally aligned in their missions to limit the spread of misleading election information” and that “[s]uch alignment does not transform private conduct into state action.” Id. at 1156–57.

Similarly, here, under either test, Plaintiff has not shown that the government so “insinuated itself into a position of interdependence” with Google or that it “exercised coercive power or has provided such significant encouragement” to Google that give rise to state action. Since Plaintiff’s counsel, at oral argument, conceded that the evidence provided in support of his application does not show that the government coerced Google, the Court limits its inquiry to whether there is evidence suggesting that the government insinuated itself into a position of interdependence or provided significant encouragement. Regardless of which test is used, the analysis is “necessarily fact-bound ….” Lugar v. Edmondson Oil Co., 457 U.S. 922, 939 (1982).

No state actor, no 1st Amendment. This case is going nowhere.

Next up was a lawsuit against exTwitter from a pro se plaintiff, Taiming Zhang, arguing that his suspension from Twitter violated his contract with Twitter. That is… not how any of this works, as the court explained.

Zhang’s case gets tossed on straightforward Section 230 grounds, as his attempt to get around 230 was to say “but the contract was breached!” and the court says… nope:

Plaintiff’s argument “CDA 230 carries no relevance” because Twitter breached their contract is unavailing. There is no exception under Section 230 for breach of contract claims. See 47 U.S.C. § 230(e). Courts routinely hold Section 230 immunizes platforms from contract claims, where, as here, they seek to impose liability for protected publishing activity. See, e.g., King v. Facebook, Inc., 845 F. App’x 691, 692 (9th Cir. 2021) (affirming dismissal of pro se plaintiff’s contract claim based on, among other things, Facebook’s suspension of her user account, because “‘any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230’”) (quoting Roommates, 521 F.3d at 1170-71); Murphy v. Twitter, Inc., 60 Cal. App. 5th 12, 28 (2021) (“many [courts] have concluded that [contract] claims were barred [by Section 230] because the plaintiff’s cause of action sought to treat the defendant as a publisher or speaker of user generated content”) (collecting cases).

Finally, we have Joseph Mercola, a somewhat infamous purveyor of absolute nonsense regarding vaccines, who had his account taken down by YouTube. He sued. It didn't go well. He also argued a contractual violation and, as Goldman notes, Mercola seemed to switch legal strategies midstream, going from originally suing over the content removals to arguing that he just wanted access to his content (as if he didn't already have copies?).

Either way, that’s not how any of this works:

As set forth in the Statement, YouTube had no obligation to host or serve content. The main issue is that the plaintiffs want access to the content. But no provision of the Agreement provides a right to access that content under the circumstances here: termination for cause under the agreement. In a different context, there is an avenue to export content: if YouTube terminates a user’s access for service changes, it gives the user sufficient time to export content, where reasonably possible. But that provision on its face does not apply here. The plaintiffs thus do not plead contract or quasi-contract claims related to denial of access to their content.

Similarly, as set forth in the Statement, YouTube had the discretion to take down content that harmed its users. The content here violated the Community Guidelines. Modifications to the Community Guidelines — such as the modification here to elaborate on YouTube’s existing prohibitions on medical misinformation to add COVID-19 and vaccines — could be effective immediately, without notice. YouTube had the discretion to terminate channels without warning after a single case of severe abuse. Under the contract, this determination was discretionary: the contract said that “[i]f we reasonably believe that any Content is in breach of this agreement or may cause harm, . . . we may remove or take down that Content in our discretion.”

Basically, all three of these cases boil down to the same thing: a website decided a crackpot violated its rules and took down their content, and the crackpot feels entitled to commandeer someone else's private property to host their speech.

That’s not how it works. It’s not how it’s ever worked. But, somehow, I doubt these lawsuits are going away any time soon.

Filed Under: 1st amendment, free speech, joseph mercola, moderation, rfk jr., robert f. kennedy jr., section 230, state actor, taiming zhang
Companies: google, twitter, youtube

Does The Government Have The Right To Keep And Arm Bears (With Cameras)?

from the never-mind-the-theoreticals dept

No matter what differences of opinion I might have with Volokh Conspiracy contributors, it must be said the site (now hosted at Reason after a brief run at the Washington Post) manages to surface truly interesting cases on a regular basis.

This is one of them. I’ll let Ilya Somin of the Volokh Conspiracy lead things off because he’s the one who first unearthed this Fourth Amendment lawsuit that cannot possibly have any directly applicable precedent:

A case recently filed in a federal district court in Connecticut alleges that a state government agency violated the Fourth Amendment by attaching a camera to a bear they knew frequented the plaintiff property owners’ land.

Go ahead and re-read that a couple of times. Once you’re done, feel free to move on to the mugshot of the alleged curtilage violator:

Here’s the complaint [PDF], which notes this particular bear was a frequent trespasser on the plaintiffs’ property.

During all times mentioned in this complaint, the defendant knew that bears, including a bear the defendant had tagged as Number 119, frequented the said property [belonging to the plaintiffs].

On an unknown date prior to May 20, 2023, but subsequent to January 1, 2023, the defendant affixed a collar to Bear Number 119 which contained a camera. The defendant thereupon released the camera-carrying bear in the vicinity of plaintiffs’ property.

At approximately 9:30 a.m. on May 20, 2023, Bear Number 119 approached to within 200 yards of the plaintiffs’ residence, which is located near the center of their property. It was wearing the aforesaid camera at the time and, upon information and belief, that camera was activated and taking and transmitting pictures or video of the interior of the plaintiffs’ property to the defendant.

The bear-mounted cam was allegedly supplied by (and mounted by a particularly brave employee of) the Connecticut Department of Energy and Environmental Protection (DEEP). According to the complaint, the plaintiffs have been accused of "illegally" feeding bears on their property. So, this cambear appears to be part of DEEP's efforts to prove the allegations against the couple (Mark and Carol Brault).

This surveillance attempt failed when the couple noticed the bear and its digital appendage. As far as the Braults know (at least at this point prior to discovery), no warrant was obtained before DEEP converted an apparent regular visitor to the Braults’ property into a confidential non-human source.

The Braults say this is a Fourth Amendment violation, with the bear acting as a government agent, albeit one incapable of being directly controlled. Orin Kerr, also writing for the Volokh Conspiracy, isn’t quite so sure this is an illegal search.

First, Kerr says the definition of the term "curtilage" doesn't generally cover areas 200 yards from the actual residence. That may be so, but DEEP had no idea how close to the home the bear would wander, much less any way of preventing it from encroaching on the curtilage. So, that seems to be a point in the Braults' favor.

This point seems just as questionable:

There’s reason to doubt the bears are covered by the Fourth Amendment. Does putting a camera around a bear’s neck make the bear a state actor, like a person? This isn’t necessarily a new question. There’s lots of lower-court caselaw on drug-detection dogs that are brought to a car and then jump into the car and sniff for drugs, alerting to drugs inside. Most (but not all) of that caselaw holds that, if the dog jumped into the car unprompted by a human officer, then it’s not action attributable to the government. If that caselaw applies here, then it seems dubious that the bears are covered by the Fourth Amendment at all.

The bear may not be a "government actor" in the sense that it was never paid nor directly controlled by the government. But the camera says something different. The camera is the government's, and it's the state actor. That it was carried by a bear, rather than a human, seems like the more pertinent question to be addressed.

If all bears in the area wore cameras for non-surveillance reasons (for unknown wildlife preservation reasons or whatever), and some of them happened to wander onto this property and caught someone doing something illegal, I can see this being a legitimate "not a state actor" argument. But, if the allegations are true, DEEP placed this camera on this specific bear because it knew the bear, more likely than not, would wander onto the Braults' land and perhaps capture footage of something incriminating. That sure makes it seem like a state actor, even if a decidedly non-traditional one.

On the other hand, you can't make a bear testify. So, it's the government's word against the Braults', without the benefit of cross-examining the bear to see if it had any vested government interest in approaching their home.

But it seems pretty clear this was state action. Whether or not it was actually a Fourth Amendment violation is up to the court to decide. And I have to believe this court will be thrilled to dive into the intricacies of wildlife-mounted surveillance efforts, because how often do judges get to deal with something that’s actually novel in the best sense of the word? And we should all look forward to the eventual opinion, which will hopefully contain plenty of bear puns and perhaps some Yogi Bear-centric hypotheticals. Stay tuned!

Filed Under: 4th amendment, bears, cameras, connecticut, ct deep, state actor

Antisemitic Conspiracy Theorist Who Sued YouTube For 1st Amendment Violations Now Owes YouTube Nearly $40,000

from the oh-THOSE-conservative-views dept

The people who claim to be confused about social media services and the First Amendment are never truly confused. It’s always the people you expect to claim they’re “confused.”

Most social media users understand they’re playing on someone else’s playground. They know that if they act like inveterate assholes, the social media company will repeatedly tap the “no assholes allowed” sign before deactivating their accounts.

It’s always the most inveterate of assholes that pretend to be confused about how social media services work and where the intersection of their right to free speech collides violently with their misconceptions about a supposed “right to be heard.” The latter right does not exist. And a social media company can’t violate your constitutional rights no matter how hard it tries.

So, when people start suing social media platforms for violating their rights, it’s ALWAYS these kinds of people, as discussed in this Section 230 post by Techdirt reader/contributor Joe Mullin a couple of years ago:

Marshall Daniels hosts a YouTube channel in which he has stated that Judaism is “a complete lie” which was “made up for political gain.” Daniels, who broadcasts as “Young Pharaoh,” has also called Black Lives Matter “an undercover LGBTQ Marxism psyop that is funded by George Soros.”

In April 2020, Daniels live-streamed a video claiming that vaccines contain “rat brains,” that HIV is a “biologically engineered, terroristic weapon,” and that Anthony Fauci “has been murdering motherfuckers and causing medical illnesses since the 1980s.”

In May 2020, Daniels live-streamed a video called “George Floyd, Riots & Anonymous Exposed as Deep State Psyop for NOW.” In that video, he claimed that nationwide protests over George Floyd’s murder were “the result of an operation to cause civil unrest, unleash chaos, and turn the public against [President Trump].” According to YouTube, he also stated the COVID-19 pandemic and Floyd’s murder “were covert operations orchestrated by the Freemasons,” and accused Hillary Clinton and her aide John Podesta of torturing children. Near the video’s end, Daniels stated: “If I catch you talking shit about Trump, I might whoop your ass fast.”

Yeah. That’s the stuff. Just mainlining 8chan and hoping at some point to have your personal shit together enough to be considered a respectable sociopath. These are people who somehow consistently fail to understand terms of service policies and their implicit promise to behave themselves when hanging out in other people’s spaces.

Daniels sued YouTube in 2021, claiming (despite all evidence to the contrary) that the company was a "state actor" because some politicians (namely Nancy Pelosi and Adam Schiff) said something out loud about content moderation. He was represented by a couple of opportunistic lawyers who like suing YouTube for contradictory reasons when not representing the sort of people who shouldn't be allowed within 100 feet of an open internet portal. Here's Eric Goldman discussing Young Pharaoh's legal reps:

Personnel note: the plaintiff’s lawyers are Maria Cristina Armenta and Credence Elizabeth Sol, and this isn’t their first appearance on the blog. They will always hold special status in Internet Law for their ultimately-unsuccessful censorial efforts to force YouTube to remove the Innocence of Muslims video. Now, they are working–again unsuccessfully–to impose censorial must-carry obligations on Internet services.

In the most likely turn of events, Pharaoh/Daniels has lost his lawsuit against YouTube. Not only has he lost, but if he has any money to spare, he's out that as well. Here's Eric Goldman again:

Google requested attorneys’ fees for its [Section] 1983 victory. 1983 allows for fee-shifting in “exceptional” cases, including frivolous cases like this one. The court says it was “frivolous from the outset….Mr. Daniels purported to assert a First Amendment claim against private entities based on legal theories that were either expressly foreclosed by existing precedent or entirely meritless on their own terms.” The court awards YouTube a fee-shift of $38,576. Boom.

Yep, that's Section 1983, the statute that allows lawsuits against government employees for rights violations. YouTube/Alphabet is not a government employee, so bringing this action under that statute means YouTube can now take Mr. Daniels' money, rather than the other way around.

As Goldman notes, it’s one thing for a court to call a lawsuit brought by a pro se complainant frivolous. They’re not expected to be intimately familiar with applicable laws. It’s quite another (and much more insulting) for a court to use the words “frivolous from the outset” to describe something authored by practicing attorneys.

It’s a loss. And no one should have ever expected this to end any differently. But this is what happens when a performative plaintiff pairs with performative lawyers: courts get annoyed and the people who thought they’d band together to take down… um… the… social media… uh… Deep State… I guess… are now on the hook for nearly forty grand.

Anyway, if you want to read something stupid, here’s the original complaint [PDF]. And if you want to see all of its stupidity pointed out pointedly, here’s the federal court opinion [PDF].

Private companies are not the government, no matter how often government employees might bitch about private companies. That YouTube doesn’t want your bigoted, dumbass content is wholly on you, Young Pharaoh. There’s no legal action here. Just an opportunity for you to understand where you’ve gone wrong. But it’s apparently going to be a very expensive learning experience.

Filed Under: attorney's fees, content moderation, credence elizabeth sol, maria cristina armenta, marshall daniels, section 1983, section 230, state actor, young pharoah
Companies: google, youtube

It Can Always Get Dumber: Trump Sues Facebook, Twitter & YouTube, Claiming His Own Government Violated The Constitution

from the wanna-try-that-again? dept

Yes, it can always get dumber. The news broke last night that Donald Trump was planning to sue the CEOs of Facebook and Twitter for his “deplatforming.” This morning we found out that they were going to be class action lawsuits on behalf of Trump and other users who were removed, and now that they’re announced we find out that he’s actually suing Facebook & Mark Zuckerberg, Twitter & Jack Dorsey, and YouTube & Sundar Pichai. I expected the lawsuits to be performative nonsense, but these are… well… these are more performative and more nonsensical than even I expected.

These lawsuits are so dumb, and so bad, that there seems to be a decent likelihood Trump himself will be on the hook for the companies’ legal bills before this is all over.

The underlying claims in all three lawsuits are the same. Count one is that these companies removing Trump and others from their platforms violates the 1st Amendment. I mean, I know we’ve heard crackpots push this theory (without any success), but this is the former President of the United States arguing that private companies violated HIS 1st Amendment rights by conspiring with the government HE LED AT THE TIME to deplatform him. I cannot stress how absolutely laughably stupid this is. The 1st Amendment, as anyone who has taken a civics class should know, restricts the government from suppressing speech. It does not prevent private companies from doing so.

The arguments here are so convoluted. To avoid the fact that he ran the government at the time, he tries to blame the Biden transition team in the Facebook and Twitter lawsuits (in the YouTube one he tries to blame the Biden White House).

Pursuant to Section 230, Defendants are encouraged and immunized by Congress to censor constitutionally protected speech on the Internet, including by and among its approximately three (3) billion Users that are citizens of the United States.

Using its authority under Section 230 together and in concert with other social media companies, the Defendants regulate the content of speech over a vast swath of the Internet.

Defendants are vulnerable to and react to coercive pressure from the federal government to regulate specific speech.

In censoring the specific speech at issue in this lawsuit and deplatforming Plaintiff, Defendants were acting in concert with federal officials, including officials at the CDC and the Biden transition team.

As such, Defendants’ censorship activities amount to state action.

Defendants’ censoring the Plaintiff’s Facebook account, as well as those Putative Class Members, violates the First Amendment to the United States Constitution because it eliminates the Plaintiffs and Class Member’s participation in a public forum and the right to communicate to others their content and point of view.

Defendants’ censoring of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes viewpoint and content-based restrictions on the Plaintiffs’ and Putative Class Members’ access to information, views, and content otherwise available to the general public.

Defendants’ censoring of the Plaintiff and Putative Class Members violates the First Amendment because it imposes a prior restraint on free speech and has a chilling effect on social media Users and non-Users alike.

Defendants’ blocking of the Individual and Class Plaintiffs from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on the Plaintiff and Putative Class Members’ ability to petition the government for redress of grievances.

Defendants’ censorship of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on their ability to speak and the public’s right to hear and respond.

Defendants’ blocking the Plaintiff and Putative Class Members from their Facebook accounts violates their First Amendment rights to free speech.

Defendants’ censoring of Plaintiff by banning Plaintiff from his Facebook account while exercising his free speech as President of the United States was an egregious violation of the First Amendment.

So, let’s just get this out of the way. I have expressed significant concerns about lawmakers and other government officials that have tried to pressure social media companies to remove content. I think they should not be doing so, and if they do so with implied threats to retaliate for the editorial choices of these companies that is potentially a violation of the 1st Amendment. But that’s because it’s done by a government official.

It does not mean the private companies magically become state actors. It does not mean that the private companies can’t kick you off for whatever reason they want. Even if there were some sort of 1st Amendment violation here, it would be on behalf of the government officials trying to intimidate the platforms into acting — and none of the examples in any of the lawsuits seem likely to reach even that level (and, again the lawsuits are against the wrong parties anyway).

The second claim, believe it or not, is perhaps even dumber than the first. It asks for declaratory judgment that Section 230 itself is unconstitutional.

In censoring (flagging, shadow banning, etc.) Plaintiff and the Class, Defendants relied upon and acted pursuant to Section 230 of the Communications Decency Act.

Defendants would not have deplatformed Plaintiff or similarly situated Putative Class Members but for the immunity purportedly offered by Section 230.

Let’s just cut in here to point out that this point is just absolutely, 100% wrong and completely destroys this entire claim. Section 230 does provide immunity from lawsuits, but that does not mean without it no one would ever do any moderation at all. Most companies would still do content moderation — as that is still protected under the 1st Amendment itself. To claim that without 230 Trump would still be on these platforms is laughable. If anything the opposite is the case. Without 230 liability protections, if others sued the websites for Trump’s threats, attacks, potentially defamatory statements and so on, it would have likely meant that these companies would have pulled the trigger faster on removing Trump. Because anything he (and others) said would represent a potential legal liability for the platforms.

Back to the LOLsuit.

Section 230(c)(2) purports to immunize social media companies from liability for action taken by them to block, restrict, or refuse to carry “objectionable” speech even if that speech is “constitutionally protected.” 47 U.S.C. § 230(c)(2).

In addition, Section 230(c)(1) also has been interpreted as furnishing an additional immunity to social media companies for action taken by them to block, restrict, or refuse to carry constitutionally protected speech.

Section 230(c)(1) and 230(c)(2) were deliberately enacted by Congress to induce, encourage, and promote social media companies to accomplish an objective — the censorship of supposedly “objectionable” but constitutionally protected speech on the Internet — that Congress could not constitutionally accomplish itself.

“Congress cannot lawfully induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” Norwood v. Harrison, 413 U.S. 455, 465 (1973).

Section 230(c)(2) is therefore unconstitutional on its face, and Section 230(c)(1) is likewise unconstitutional insofar as it has been interpreted to immunize social media companies for action they take to censor constitutionally protected speech.

This is an argument that has been advanced in a few circles, and it’s absolute garbage. Indeed, the state of Florida tried this basic argument in its attempt to defend its social media moderation law and that failed miserably just last week.

And those are the only two claims in the various lawsuits: that these private companies making an editorial decision to ban Donald Trump (in response to worries about him encouraging violence) violates the 1st Amendment (it does not), and that Section 230 is unconstitutional because it somehow involves Congress encouraging companies to remove constitutionally protected speech. This is also wrong, because all of the cases related to this argument involve laws that actually pressure companies to act in this way. Section 230 involves no such pressure (indeed, many of the complaints from some in government are that 230 is a “free pass” for companies to do nothing at all if they so choose).

There is a ton of other garbage — mostly performative throat-clearing — in the lawsuits, but none of that really matters beyond the two laughably dumb claims. I did want to call out a few really, really stupid points though. In the Twitter lawsuit, Trump’s lawyers misleadingly cite the Knight 1st Amendment Institute’s suit against Trump for blocking users on Twitter:

In Biden v. Knight, 141 S. Ct. 1220 (2021), the Supreme Court discussed the Second Circuit’s decision in Knight First Amendment Inst. at Columbia Univ. v. Trump, No. 18-1691, holding that Plaintiff’s threads on Twitter from his personal account were, in fact, official presidential statements made in a “public forum.”

Likewise, President Trump would discuss government activity on Twitter in his official capacity as President of the United States with any User who chose to follow him, except for seven (7) Plaintiffs in the Knight case, supra., and with the public at large.

So, uh, “the Supreme Court” did not discuss it. Only Justice Clarence Thomas did, and it was a weird, meandering, unbriefed set of musings that were unrelated to the case at hand. It’s a stretch to argue that “the Supreme Court” did that. Second, part of President Trump’s argument in the Knight case was that his Twitter account was not being used in his “official capacity,” but was rather his personal account that just sometimes tweeted official information. Literally. This was President Trump appealing to the Supreme Court in that case:

The government’s response is that the President is not acting in his official capacity when he blocks users….

To then turn around in another case and claim that it was official action is just galaxy brain nonsense.

Another crazy point: in all three lawsuits, Donald Trump argues that government officials threatening the removal of Section 230 in response to social media companies’ content moderation policies itself proves that the decisions by those companies make them state actors. Here’s the version from the YouTube complaint (just insert the other two companies where it says YouTube to see what it is in the others):

Below are just some examples of Democrat legislators threatening new regulations, antitrust breakup, and removal of Section 230 immunity for Defendants and other social media platforms if YouTube did not censor views and content with which these Members of Congress disagreed, including the views and content of Plaintiff and the Putative Class Members

But, uh, Donald Trump spent much of his last year in office doing exactly the same thing. He literally demanded the removal of Section 230. He signed an executive order to try to strip Section 230 immunity from companies, then demanded that Congress repeal all of Section 230 before he would fund the military. On the antitrust breakup front, Trump demanded that Bill Barr file antitrust claims against Google prior to the election as part of his campaign against “big tech.”

It’s just absolutely hilarious that he’s now claiming that members of Congress doing the very same thing he did — but to a lesser degree, and with less power — magically turns these platforms into state actors.

There was a lot of speculation as to what lawyers Trump would have found to file such a lawsuit, and (surprisingly) it’s not any of the usual suspects. There is the one local lawyer in Florida (required to file such a suit there), two lawyers with AOL email addresses, and then a whole bunch of lawyers from Ivey, Barnum, & O’Mara, a (I kid you not) “personal injury and real estate” law firm in Connecticut. If these lawyers have any capacity for shame, they should be embarrassed to file something this bad. But considering that the bio for the lead lawyer on the case hypes up his many, many media appearances, and even has a gallery of photos of him appearing on TV shows, you get the feeling that perhaps these lawyers know it’s all performative and will get them more media coverage. That coverage should be mocking them for filing an obviously vexatious and frivolous lawsuit.

The lawsuit is filed in Florida, which has an anti-SLAPP law (not a great one, but not a horrible one either). It does seem possible that these companies might file anti-SLAPP claims in response to this lawsuit, meaning that Trump could potentially be on the hook for the legal fees of all three. Of course, if the whole thing is a performative attempt at playing the victim, it’s not clear that that would matter.

Filed Under: 1st amendment, class action, content moderation, donald trump, jack dorsey, mark zuckerberg, section 230, state actor, sundar pichai
Companies: facebook, twitter, youtube

Disgraced Yale Law Professor Now Defending Anti-Vaxxers In Court With His Nonsense Section 230 Ideas

from the that's-not-how-any-of-this-works dept

Back in January, we wrote about a bizarrely bad Wall Street Journal op-ed co-written by disgraced and suspended Yale Law professor Jed Rubenfeld, arguing that Section 230 somehow magically makes social media companies state actors, controlled by the 1st Amendment. This is, to put it mildly, wrong. His argument is convoluted and not at all convincing. He takes the correct idea that government officials threatening private companies with government retaliation if they do not remove speech creates 1st Amendment issues, and then tries to extend it by saying that because 230 gives companies more freedom to remove content, that magically makes them state actors.

As we noted at the time, that’s not how any of this works. Companies’ ability to moderate content is itself protected by the 1st Amendment. Section 230 gives them procedural benefits in court to get dumb cases kicked out earlier, but it most certainly does not magically make them an arm of the government. This wacky idea that social media is magically a state actor was rightly shut down by Supreme Court Justice Brett Kavanaugh (who, ironically, is part of another scandal involving Rubenfeld) in the Halleck case, in which the Court stated clearly that you don’t just magically make companies state actors. There are rules, man. From the ruling written by Kavanaugh:

By contrast, when a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum. This Court so ruled in its 1976 decision in Hudgens v. NLRB. There, the Court held that a shopping center owner is not a state actor subject to First Amendment requirements such as the public forum doctrine….

The Hudgens decision reflects a commonsense principle: Providing some kind of forum for speech is not an activity that only governmental entities have traditionally performed. Therefore, a private entity who provides a forum for speech is not transformed by that fact alone into a state actor. After all, private property owners and private lessees often open their property for speech. Grocery stores put up community bulletin boards. Comedy clubs host open mic nights. As Judge Jacobs persuasively explained, it “is not at all a near-exclusive function of the state to provide the forums for public expression, politics, information, or entertainment.”

However, it appears that Rubenfeld is not only making these arguments in laughably wrong WSJ pieces, but is now trying to make them in court as well: he’s representing some anti-vaxxers who are trying to insist that Facebook’s decision to put warning labels on the bogus information they were posting somehow violated their 1st Amendment rights.

We had written about this case last summer, noting that it was so stupid and so wrong that I had difficulty writing it up. And that was before Rubenfeld joined CHD’s legal team. At issue was that Robert F. Kennedy Jr.’s blatant anti-vax misinformation propaganda shop, “Children’s Health Defense,” sued Facebook, claiming that it had “teamed up” with the US government to censor its speech. The reasoning was that Rep. Adam Schiff had (stupidly) threatened to remove Facebook’s 230 protections if the company didn’t do a better job dealing with misinformation.

As we noted at the time, there is perhaps a weak case they might have against Schiff, but not against Facebook.

Yet, the case goes on. Facebook has rightly moved to have the case dismissed, and that motion is worth a read if only because the exasperation of Facebook’s lawyers at Wilmer Hale can be heard quite clearly. There’s a lot in there, but the summary covers it pretty thoroughly:

CHD claims that Facebook’s fact-checking program violated its First Amendment rights, restrained it from competing in the marketplace of vaccine “messages,” … and constituted a RICO enterprise. Those claims turn the First Amendment on its head. The First Amendment is a shield from government action — not a sword to be used in private litigation. It is therefore unsurprising that the SAC contains numerous independent and incurable defects.

First, the SAC does not state a Bivens claim because it does not allege federal action. Facebook and Mr. Zuckerberg are private actors. Facebook exercised its own editorial discretion to reduce the visibility of posts identified by independent fact-checkers as containing false or partially false information. None of the challenged conduct is attributable to the federal government.

Second, far from violating the First Amendment, Facebook’s decisions to label and limit the visibility of CHD’s content are themselves protected by the First Amendment. This Court may not hold Facebook or Mr. Zuckerberg liable for exercising editorial discretion with respect to matters of public concern. And even if the First Amendment did not fully bar CHD’s claims, it requires that CHD, at minimum, plausibly allege that Facebook acted with actual malice. The SAC fails to do so, even though Defendants’ motions to dismiss unquestionably put CHD on notice of this defect.

Third, Section 230 of the Communications Decency Act (“CDA”) shields Facebook from liability for publishing third-party fact checks or restricting access to CHD’s content. None of the SAC’s allegations concerning the relationship between Facebook and third-party fact-checkers strip Facebook or Mr. Zuckerberg of that protection.

Fourth, the Lanham Act claim fails because CHD has not identified a commercial injury that gives it standing under the Act. The Lanham Act protects those engaged in commerce against unfair competition. Because CHD’s alleged injuries are to its interests as a consumer of Facebook’s free service, not as a competitor, they are not cognizable under the Act. And CHD’s allegations do not establish that the purportedly false statements are “promotional statements” covered by the Act.

Fifth, CHD has not stated a civil RICO claim because it has failed, even on its third bite at the pleading apple, to identify any predicate acts of wire fraud. And CHD has alleged neither a sufficiently “direct” injury to confer statutory standing nor a cognizable civil RICO “pattern.”

Sixth, the SAC additionally does not state a claim against Mr. Zuckerberg because it does not allege that he was personally involved in any of the allegedly unlawful conduct. Nor has CHD pleaded the necessary prerequisites for any theory of agency liability.

Seventh, though the SAC contains many paragraphs describing CHD’s views on 5G, CHD nowhere connects those views to an actionable theory of liability.

Apparently, Rubenfeld has joined forces with RFK Jr. and showed up in court to defend this idiocy to what would appear to be an appropriately skeptical judge, alongside lawyer Roger Teich (who originally filed the complaint with RFK Jr.).

In a virtual hearing on Facebook’s motion to dismiss the lawsuit Wednesday, Judge Illston asked if the government can ever take steps to counter misinformation without running afoul of the First Amendment.

“Let’s say there was something on the internet that says, ‘If you take a Covid vaccine, you’re going to grow a third head.’ That’s clearly not true. Is it OK to not let that be published?” Illston asked.

CHD attorney Roger Teich replied, “I don’t think it’s OK if the government is calling the shot.”

Illston pressed: “You think it’s inappropriate for the government to say generally, ‘We’d really like it if all these private social media outlets didn’t publish lies about the Covid vaccine’? That’s not alright to say that?”

Teich answered that it was the CDC’s “underhandedness” in using Facebook to restrict speech that violates the Constitution.

That, of course, is not how any of this works. And someone with Rubenfeld’s pedigree should know that. But, instead, he’s out there defending this utter and complete nonsense:

“State action must be found whenever government officials are coercing, inducing or encouraging private parties to do what they themselves cannot constitutionally do,” CHD attorney Jed Rubenfeld said.

Sure, if there’s actual coercion, then a discussion can be had. But CHD has no evidence of any of that. And the argument ignores Facebook’s own 1st Amendment rights. When the judge pointed all this out to Rubenfeld, he tried to cook up a wacky theory that because members of Congress or the CDC said something, and then Facebook took action, Facebook magically became a state actor.

CHD argued that U.S. Magistrate Judge Virginia DeMarchi in San Jose got it wrong when she dismissed Daniels v. Alphabet Inc. on March 31. The plaintiff in that suit argued Schiff and House Speaker Nancy Pelosi had coerced YouTube, owned by Google’s parent Alphabet, into removing objectionable content. DeMarchi dismissed the suit with leave to amend, finding the plaintiff did “not plead any facts suggesting that Speaker Pelosi or Rep. Schiff were personally involved in or directed the removal” of videos.

CHD attorney Jed Rubenfeld said DeMarchi “was not informed of the precedent” when she issued that ruling.

“What matters is if they gave the private party the standard of decision,” Rubenfeld said. “The CDC gives Facebook the standard of decision.”

“And does it matter if what the CDC said is true?” Illston asked.

Rubenfeld replied by insisting the information his client has posted about vaccines is true, but even if the speech was false, “it would still be constitutionally protected.”

Um. Again, even if this were true (and it’s making a lot out of an incredibly weak chain of events), wouldn’t CHD’s actual cause of action be against the government officials and not Facebook, which retains its own 1st Amendment rights to label nonsense nonsense, or to take down content?

Everything about this case is dumb, and the fact that the disgraced and suspended Rubenfeld is using it to further his nutty legal theories is just the icing on the nonsense cake. Hopefully the judge does the expected thing and dismisses the case with a thorough benchslap for wasting the court’s time.

Filed Under: 1st amendment, anti-vax, content moderation, jed rubenfeld, rfk jr., robert f. kennedy jr., section 230, state action doctrine, state actor
Companies: children's health defense, facebook

Ridiculous: Yale Law Prof Argues That Because Some In Congress Want More Moderation, That Makes Twitter A State Actor

from the did-he-teach-hawley? dept

I’m beginning to see where Josh Hawley got his totally nutty ideas about the 1st Amendment. The Wall Street Journal has an utterly insane piece by Yale Law professor Jed Rubenfeld — currently suspended due to sexual harassment claims, and who was infamously quoted telling prospective law clerks for then-Judge Brett Kavanaugh that Kavanaugh “hires women with a certain look” — and a… um… biotech executive named Vivek Ramaswamy who is mad about “woke” companies, insisting (wrongly) that the big internet companies are actually part of the US government and therefore have to abide by the 1st Amendment in their content moderation practices.

Honestly, the level of thinking here is on par with your typical Breitbart commenter, not a well known (if slightly disgraced) Yale Law professor.

Conventional wisdom holds that technology companies are free to regulate content because they are private, and the First Amendment protects only against government censorship. That view is wrong: Google, Facebook and Twitter should be treated as state actors under existing legal doctrines. Using a combination of statutory inducements and regulatory threats, Congress has co-opted Silicon Valley to do through the back door what government cannot directly accomplish under the Constitution.

It’s not just “conventional wisdom.” It’s lots and lots of legal precedent and a general understanding of 1st Amendment doctrine going back ages. State action doctrine is not some brand new concept. I mean, there have been some very thoughtful academic pieces on the idea that state action doctrine should be changed to try to make it apply to social media companies. But those are academic papers suggesting how they think the law should change. They’re not saying it fits under current doctrine.

Because it doesn’t.

It is “axiomatic,” the Supreme Court held in Norwood v. Harrison (1973), that the government “may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” That’s what Congress did by enacting Section 230 of the 1996 Communications Decency Act, which not only permits tech companies to censor constitutionally protected speech but immunizes them from liability if they do so.

So… the first sentence is correct. It’s also why we’ve repeatedly raised concerns about lawmakers demanding specific content moderation options. But the second sentence is just laughably wrong. Nothing in Section 230 of the Communications Decency Act induces, encourages, or promotes private persons to accomplish what the government is constitutionally forbidden to accomplish. The 1st Amendment protects a company’s right not to associate with those it does not wish to associate with. It also protects against being compelled to host speech it disagrees with. Those are both constitutionally protected things. Section 230 does not change that.

The piece does highlight some members of Congress stupidly (and, I believe, unconstitutionally) pressuring Facebook and Google to restrict “harmful content.” And I agree that’s wrong. But it’s a massive leap towards insanity to spin that as saying that those vague threats from elected officials magically turn the websites themselves into arms of the state, subject to the 1st Amendment restrictions placed on government. I mean, if it did, you’ve just handed Congress a magic tool to effectively nationalize any company: just unconstitutionally order them to do something, and voila, they’re now state actors.

That’s insane. That would only encourage Congress to make unconstitutional demands of companies to have those companies declared state actors. It’s bonkers. I feel sorry for Yale Law students who deserve better.

Such threats have worked. In September 2019, the day before another congressional grilling was to begin, Facebook announced important new restrictions on “hate speech.” It’s no accident that big tech took its most aggressive steps against Mr. Trump just as Democrats were poised to take control of the White House and Senate. Prominent Democrats promptly voiced approval of big tech’s actions, which Connecticut Sen. Richard Blumenthal expressly attributed to “a shift in the political winds.”

So… the argument here is that you want more hate speech online, and you’re mad that Facebook is restricting it? Holy shit. What is wrong with you?

And, um, note what is left out in this claim about exactly when these companies “took its most aggressive steps against Mr. Trump.” It’s not just about the fact that Democrats were poised to take control of the Executive and Legislative branches, but because Trump had just inspired a fucking riot at the Capitol building in an effort to overturn a free and fair election and reports were coming out that he was happy about what happened, worrying many that he would encourage yet more attacks in the days leading up to the Biden inauguration.

Seems like kind of an important thing to include, no? There’s no indication that the Trump bans were about politics at all. There is every indication they were about preventing an armed insurrection and possible civil war. But Rubenfeld and Ramaswamy literally ignore all of that and insist that it’s some sort of ideological or political issue… and stretch that to argue that these companies are arms of the state. A state that is still controlled by Donald Trump.

I mean, this is embarrassing.

For more than half a century courts have held that governmental threats can turn private conduct into state action. In Bantam Books v. Sullivan (1963), the Supreme Court found a First Amendment violation when a private bookseller stopped selling works state officials deemed “objectionable” after they sent him a veiled threat of prosecution. In Carlin Communications v. Mountain States Telephone & Telegraph Co. (1987), the Ninth U.S. Circuit Court of Appeals found state action when an official induced a telephone company to stop carrying offensive content, again by threat of prosecution.

As the Second Circuit held in Hammerhead Enterprises v. Brezenoff (1983), the test is whether “comments of a government official can reasonably be interpreted as intimating that some form of punishment or adverse regulatory action will follow the failure to accede to the official’s request.” Mr. Richmond’s comments, along with many others, easily meet that test. Notably, the Ninth Circuit held it didn’t matter whether the threats were the “real motivating force” behind the private party’s conduct; state action exists even if he “would have acted as he did independently.”

Again, this gets the facts all mixed up. All of these cases are important ones for why the government cannot force companies into moderating the way it sees best. It’s why nearly all proposals to modify Section 230 are unconstitutional. But those cases all involved officials making specific demands that a company then followed through on — and in that situation the actions really are seen as government actions. Here, there was no government official demanding that Twitter or Facebook block Trump. Trump is the President.

I agree that there might be an argument that elected officials who make specific moderation demands could be violating the 1st Amendment rights of speakers (and of the companies themselves!), but to argue that vague statements by elected officials urging companies to be better about “taking responsibility” turn all moderation decisions into state action is galaxy-brain nonsense.

The piece does at least note that repealing Section 230 is a bad idea, but then goes off the rails immediately:

Republicans including Mr. Trump have called for Section 230’s repeal. That misses the point: The damage has already been done. Facebook and Twitter probably wouldn’t have become behemoths without Section 230, but repealing the statute now may simply further empower those companies, which are better able than smaller competitors to withstand liability. The right answer is for courts to recognize what lawmakers did: suck the air out of the Constitution by dispatching big tech to do what they can’t. Now it’s up to judges to fill the vacuum, with sound legal precedents in hand.

Uh, sure. Let’s have judges produce precedent — a la Backpage v. Dart — that says that elected officials cannot threaten or try to force companies to do something unconstitutional. But, that should be on the public officials, not on companies, and certainly not when these actions were not taken at the behest of elected officials, but in order to try to stop an armed insurrection (which, again, the authors never bother to mention other than an oblique reference towards the end of the piece about “the breach of the Capitol”).

The article also says that these companies now might try to block Joe Biden because they don’t like his support of antitrust action against them. And…um… does anyone believe that? That’s insane (beyond the fact that there are no antitrust suits against Twitter, which isn’t even that big in the first place). But even if they did do that, it would immediately backfire. I mean, it would not just be ridiculous, laughable, and a total PR disaster, but it would play right into the hands of those suing the companies for antitrust.

I’m not sure how difficult it is for Rubenfeld and Ramaswamy to get this through their skulls, but the bans last week were not because of a policy disagreement with the President. No one’s blocking anyone for their “conservative viewpoints.” It was because he had just inspired a violent mob to attack the Capitol while Congress was in session, trying to officially count the Electoral College votes, and five people died. That violates every possible terms of service agreement ever written.

There’s more at stake than free speech. Suppression of dissent breeds terror. The answer to last week’s horror should be to open more channels of dialogue, not to close them off. If disaffected Americans no longer have an outlet to be heard, the siege of Capitol Hill will look like a friendly parley compared with what’s to come.

There are tons of outlets for “disaffected Americans.” They have many outlets to be heard. What they don’t have is a right to demand that any company host their speech when they are spreading blatant disinformation and violent rhetoric, including calling on people to literally murder public officials.

Ordinary Americans understand the First Amendment better than the elites do. Users who say Facebook, Twitter and Google are violating their constitutional rights are right. Aggrieved plaintiffs should sue these companies now to protect the voice of every American — and our constitutional democracy.

If they do, they will lose, and they will lose badly. It will be an embarrassing waste of money. One hopes that anyone thinking of filing such a lawsuit discusses it with a lawyer trained by actual legal experts, and not taught by Jed Rubenfeld.

Filed Under: 1st amendment, congress, content moderation, jed rubenfeld, state action doctrine, state actor, vivek ramaswamy
Companies: facebook, google, twitter

from the best-of-luck dept

Well, here’s an odd one: the Presidential campaign for Tulsi Gabbard is now suing Google claiming, among other things, that the company has “violated her First Amendment rights” by temporarily shutting down her advertising account and also funneling some of her campaign emails to spam in Gmail. This lawsuit is a complete non-starter, and makes use of the same debunked legal theories that others have used against social media companies. First, it argues that closing her Google advertising account was obviously because people at Google didn’t want her message getting out after the first Democratic Presidential debates.

On June 28, 2019 — at the height of Gabbard’s popularity among Internet searchers in the immediate hours after the debate ended, and in the thick of the critical post-debate period (when television viewers, radio listeners, newspaper readers, and millions of other Americans are discussing and searching for presidential candidates), Google suspended Tulsi’s Google Ads account without warning.

For hours, as millions of Americans searched Google for information about Tulsi, and as Tulsi was trying, through Google, to speak to them, her Google Ads account was arbitrarily and forcibly taken offline. Throughout this period, the Campaign worked frantically to gather more information about the suspension; to get through to someone at Google who could get the Account back online; and to understand and remedy the restraint that had been placed on Tulsi’s speech — at precisely the moment when everyone wanted to hear from her.

In response, the Campaign got opacity and an inconsistent series of answers from Google. First, Google claimed that the Account was suspended because it somehow violated Google’s terms of service. (It didn’t.) Later, Google changed its story. Then it changed its story again. Eventually, after several hours of bizarre and conflicting explanations while the suspension dragged on, Google suddenly reversed course completely and reinstated the Account. To this day, Google has not provided a straight answer — let alone a credible one — as to why Tulsi’s political speech was silenced right precisely when millions of people wanted to hear from her.

But in context, the explanation for Google’s suspension of the Account at exactly the wrong time is no great mystery: Google (or someone at Google) didn’t want Americans to hear Tulsi Gabbard’s speech, so it silenced her. This has happened time and time again across Google platforms. Google controls one of the largest and most important forums for political speech in the entire world, and it regularly silences voices it doesn’t like, and amplifies voices it does.

Of course, if you’re at all familiar with how this works — as we’ve explained for years now — you’ll know that there’s a much more credible reason than someone at Google trying to sabotage Gabbard’s campaign: making these kinds of decisions at scale is effectively impossible, and mistakes are made or situations turn up that, at first glance, certainly appear to violate terms of service. This is especially true in political advertising, a part of the social media ecosystem that is under even more scrutiny than other parts, as many people believe it was abused during the 2016 election, and there are various efforts underway to make platforms even more careful about what kind of political advertising they allow. Given that backdrop, it’s not at all surprising that Gabbard’s campaign might get caught in the crossfire.

Hell, we’ve experienced something kind of similar — Google has (on multiple occasions) removed advertising from our site and threatened to close down our account entirely based on its broken ad review system. It happens, and we complain about it — but never in a million years would I think that Google was purposefully targeting us by doing that. That’s because we recognize that these kinds of moderation decisions are difficult, and at scale even a small percentage of mistakes will end up hitting a lot of people. But that’s Google’s right. It’s Google’s platform, after all.
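To put some hypothetical numbers on the scale problem (the figures below are made up for illustration; Google does not publish its real review volumes or error rates), a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope illustration (all numbers are hypothetical, not
# Google's actual figures): even a highly accurate automated review
# system, applied at scale, wrongly flags a lot of legitimate accounts.

daily_reviews = 5_000_000  # assumed ad/account review decisions per day
error_rate = 0.001         # assumed 0.1% false-positive rate

daily_mistakes = daily_reviews * error_rate
yearly_mistakes = daily_mistakes * 365

print(f"Wrongly flagged per day:  {daily_mistakes:,.0f}")   # 5,000
print(f"Wrongly flagged per year: {yearly_mistakes:,.0f}")  # 1,825,000
```

Even a system that is right 99.9% of the time produces thousands of bad calls a day at that volume — any one of which, from the affected account's perspective, looks exactly like targeted censorship.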

Also, what’s particularly odd about this is that the focus of the lawsuit is on Gabbard’s campaign losing her advertising account. Anyone doing a Google search for Gabbard was still getting tons of organic search results for Gabbard. In effect, this is Gabbard saying that it’s somehow against the law to not accept her money to put her own messages at the top of Google, above the organic ones. Who knew that there was a legal right to skip to the top of all Google results if you’ve got money to burn? No one. Because there is no such right.

And that’s not all. The conspiracy theories go deeper:

And Google’s election manipulation doesn’t stop with its search platform. For example, Google’s email platform Gmail sends communications from Tulsi into people’s Spam folders at a disproportionately high rate. In fact, Gmail appears to classify communications from Tulsi Gabbard as Spam at a rate higher than other similar communications—for example, those from other Democratic presidential candidates. There is no technical explanation for this disparity.

Uh, yeah, there is a “technical explanation for this disparity.” (1) Spam filters, like any other filters, don’t always work well and often catch “legitimate” mail, (2) lots of people may have marked Gabbard’s emails as spam, training the system to treat them as such, or (3) Gabbard’s emailing practices may have been more spam-like than those of other candidates. It’s also possible that she’s simply wrong that her emails went to spam more often than other candidates’. Whatever the case, there are lots of possible explanations that are significantly more plausible than some nefarious plot in Larry Page’s office to take Gabbard out of the running.
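As a toy illustration of point (2), here’s a sketch of how user feedback alone can tip a sender into the spam folder. This is not Gmail’s actual (proprietary) algorithm; the function name and threshold are invented for the example.

```python
# Toy model (invented for illustration; not Gmail's real system):
# many spam filters weigh recipients' "mark as spam" reports heavily,
# so a sender whose mail draws enough reports gets filtered for
# everyone, with no human ever singling that sender out.

def filtered_as_spam(spam_reports: int, delivered: int,
                     threshold: float = 0.01) -> bool:
    """True once the sender's report rate crosses the filter threshold."""
    if delivered == 0:
        return False
    return spam_reports / delivered >= threshold

# Mail to an engaged opt-in list draws few reports...
print(filtered_as_spam(spam_reports=50, delivered=100_000))     # False
# ...while blasting a large purchased list draws many more.
print(filtered_as_spam(spam_reports=2_000, delivered=100_000))  # True
```

Real systems combine many more signals (sender reputation, authentication, message content), but the point stands: high-volume political email can cross such thresholds organically, with no one at Google ever deciding to bury a particular candidate.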

Either way, like many of the other troll lawsuits over basic moderation decisions, this one appears to be a lot more performative than serious in any legal sense. First off, it’s highly questionable why this is a federal lawsuit as opposed to a state one, since most of the claims are state claims. The federal claims are laughable and should be tossed out pretty quickly. Also, the complaint has all sorts of bizarre, laughable conspiracy theory elements to it, including the idea that Google employees backing Obama and Clinton over the last few presidential cycles is clear evidence of bias in how the search engine operates (it is not). There’s a claim that because searching for “SESTA” on Google turns up an EFF site as the top result… that’s somehow proof of Google tilting the scales (not mentioned: EFF has a complaint before the FTC about Google, and receives very, very little money from any corporate donor, including Google).

She also claims that “the government” is somehow responsible for “ceding the internet to Google” because the FTC declined to file a complaint against Google in 2012 over unrelated issues, despite some FTC staff believing there was a legitimate case (not enough of them did to support actually filing a case, but Gabbard seems to chalk that up to a conspiracy to give the internet to Google, rather than a lack of evidence and the realization in the FTC that it would likely lose such a case).

Bizarrely, Gabbard’s complaint completely rewrites the history of net neutrality in a blatantly false way to support her nonsense legal arguments:

Other disturbing data points about the power wielded by Google and other major tech companies like Facebook have emerged in recent years. In the early 2010s, the FCC rightly considered whether net neutrality regulations, which sought to provide equal access to the Internet by governing Internet Service Providers, should also be extended to apply to Internet content platforms like Google.

However, during the Trump presidency, the FCC has not only declined to extend net neutrality protections to apply to Internet content platforms like Google, it has revoked those regulations that were already existing. See In the Matter of Restoring Internet Freedom, 33 F.C.C. Rcd. 311 (2018); United States Telecom Ass’n v. FCC, 825 F.3d 674, 729 (D.C. Cir. 2016). Companies like Google have more leeway and ability than ever to bend the Internet to their will.

That… is a bizarre and, at best, misleading reading of net neutrality history (at worst, it’s manipulative). No one ever seriously considered “extending” net neutrality rules to Google because (1) the only people who suggested it were AT&T mouthpieces trolling the whole net neutrality process, (2) it’s not part of the FCC’s mandate to handle regulation of edge service providers, and (3) there is no such thing as “net neutrality” for search engines because their whole business is about providing recommendations, which by definition cannot be “neutral”. A “neutral” search engine is one that gives you totally random results. A working search engine is one that gives you “biased” results. Biased in support of relevance to whatever you’re looking for.

As for the actual claims in the lawsuit, they’re all repeats of failed claims elsewhere. They won’t go far. First up, there’s a laughable 1st Amendment claim. As everyone knows, Google is not bound by the First Amendment as it is not a government actor. Yet, Gabbard (incredibly weakly) argues that it is:

Google creates, operates, and controls its platform and services, including but not limited to Google Search, Google Ads, and Gmail as a public forum or its functional equivalent by intentionally and openly dedicating its platform for public use and public benefit, inviting the public to utilize Google as a forum for free speech. Google serves as a state actor by performing an exclusively and traditionally public function by regulating free speech within a public forum and helping to run elections. Accordingly, speech cannot be arbitrarily, unreasonably, or discriminatorily excluded, regulated, or restricted on the basis of viewpoint or the identity of the speaker on Google’s platform.

Google’s actions, and the actions of its agents, deprive the Campaign of its constitutional rights. Google has restricted the Campaign’s speech and expressive conduct by adopting and applying subjective, vague, and overbroad criteria (the “Subjective Criteria”) that give Google unfettered and unbridled discretion to censor speech for arbitrary, capricious, or nonexistent reasons. The Subjective Criteria fail to convey a sufficiently definite warning to the Campaign (or the public) as to what is prohibited or restricted and, as a result, they allow Google to censor speech at its whim and based on subjective animus towards the speaker and/or her particular political or religious viewpoint.

So, this complaint is basically using the “magic words” legal theory. For someone to be a state actor, they need to be operating a service that is “exclusively and traditionally” run by the government. But beyond asserting that Google does this, the complaint makes literally no effort whatsoever to back up that claim. Because it can’t. Because it’s laughable. I mean, just a few weeks ago, the Supreme Court made it quite clear that the bar to be considered a state actor bound by the First Amendment is much, much higher. From the Supreme Court’s ruling in Manhattan Community Access:

It is not enough that the federal, state, or local government exercised the function in the past, or still does. And it is not enough that the function serves the public good or the public interest in some way. Rather, to qualify as a traditional, exclusive public function within the meaning of our state-action precedents, the government must have traditionally and exclusively performed the function.

The Court has stressed that “very few” functions fall into that category…. Under the Court’s cases, those functions include, for example, running elections and operating a company town…. The Court has ruled that a variety of functions do not fall into that category, including, for example: running sports associations and leagues, administering insurance payments, operating nursing homes, providing special education, representing indigent criminal defendants, resolving private disputes, and supplying electricity.

Gabbard arguing that Google is “running elections” is laughable.

The state claims aren’t going to win any fans either. Gabbard — like every damn troll who sues social media sites — tries to use California’s Unruh act, claiming this is a civil rights violation. So far, each of those has failed, including one that just failed last week when some Russian trolls lost their lawsuit against Facebook. The ruling in that case seems like the thing that Gabbard’s lawyers should have read before filing this nonsense nuisance lawsuit:

Courts have rejected the notion that private corporations providing services via the internet are public fora for purposes of the First Amendment. For instance, in Prager Univ. v. Google LLC, this Court rejected the notion that “private social media corporations . . . are state actors that must regulate the content of their websites according to the strictures of the First Amendment” under public forum analysis. 2018 WL 1471939, at *8 (N.D. Cal. Mar. 26, 2018) (emphasis in original). In addition, the Ebeid court rejected the argument that Facebook is a public forum. 2019 WL 2059662, at *6. Moreover, in Buza v. Yahoo!, Inc., the court held that the plaintiff’s assertion that “Yahoo!’s services should be seen as a ‘public forum’ in which the guarantees of the First Amendment apply is not tenable under federal law. As a private actor, Yahoo! has every right to control the content of material on its servers, and appearing on websites that it hosts.” 2011 WL 5041174, at *1 (N.D. Cal. Oct. 24, 2011). Furthermore, in Langdon v. Google, Inc., the court held that “Plaintiff’s analogy of [Google and other] Defendants’ private networks to shopping centers and [plaintiff’s] position that since they are open to the public they become public forums is not supported by case law.” 474 F. Supp. 2d 622, 632 (D. Del. 2007).

At bottom, the United States Supreme Court has held that property does not “lose its private character merely because the public is generally invited to use it for designated purposes.” Lloyd Corp. v. Tanner, 407 U.S. 551, 569 (1972). Thus, simply because Facebook has many users that create or share content, it does not mean that Facebook, a private social media company by Plaintiffs’ own admission in the complaint, becomes a public forum.

Much of the lawsuit is based on two massive assumptions, neither of which is accurate:

  1. That Google is a state actor
  2. That Google acted arbitrarily and capriciously in deliberately targeting Gabbard

The entire lawsuit falls apart if even one of those is not accurate, and neither of them is.

Even stranger: the complaint doesn’t even seem to recognize that Section 230 of the Communications Decency Act exists. It makes no mention of it, nor attempts to get around it. It just pretends it’s not there. Which is kind of strange.

This case is going to get laughed out of court. It’s even possible that Google could make an anti-SLAPP argument here and stick the Gabbard campaign with its legal fees.

There’s one other element in all of this that should be mentioned: even though this lawsuit seems to disprove the argument that Google is somehow targeting “conservatives” (Gabbard is a Democrat with (mostly) typical Democratic party positions), the same folks on social media who constantly whine about Google censoring conservatives are… cheering it on (and, no, I’m not linking), even as it partially disproves a key part of their argument. It does seem notable that part of the lawsuit actually quotes Breitbart and highlights Breitbart’s claim that Google “routinely censors conservative viewpoints” (and Breitbart ran multiple articles cheering on this lawsuit).

More recently, Google employees engaged in an internal lobbying campaign to block Breitbart from Google’s advertising program. As part of this internal lobbying campaign, one Google employee pressed that “[t]here is obviously a moral argument to be made [to blocking Breitbart] as well as a business case.” While it’s not entirely clear what “business case” the Google employee was referring to, it’s important to note that Breitbart has been among Google’s staunchest critics, alleging that the company routinely censors conservative viewpoints.

I’m not sure what anyone thinks this proves. If the argument — as Breitbart pushes — is that Google censors conservatives (a claim made repeatedly without proof), this whole lawsuit partially debunks that. If the argument is that Google censors views it doesn’t like, well, again, there’s no actual evidence of that. Either way, the implication seems to be that there are some sort of “must carry” rules for platforms, which is just utter nonsense. Like this lawsuit.

Filed Under: advertising, content moderation, political advertising, public forum, section 230, state actor, tulsi gabbard
Companies: google