chilling effects – Techdirt

Judge Slams Ken Paxton’s Attack On Media Matters’ Free Speech Rights

from the absolute-free-speech-suppressor dept

The First Amendment has won again, this time against another pretend “free speech absolutist” (Texas Attorney General Ken Paxton) in his attempt to punish someone for their free speech. Perhaps Ken Paxton will have to learn about the First Amendment in the remedial legal ethics classes he’s required to take as part of closing out the criminal charges he faced for years.

You may recall that after fake free speech absolutist Elon Musk got all pissy at Media Matters’ use of its own free speech rights to point out that it was able to find ads from giant companies on ExTwitter appearing next to the accounts of literal neo-Nazis, a couple of pandering state Attorneys General decided they’d use the power of their states to punish Media Matters.

The whole thing is incredibly stupid, but just to set the stage, Musk started whining about how unfair it was that Media Matters found and wrote about the ads. Trump advisor Stephen Miller tweeted that he thought state AGs should investigate Media Matters for their article, and both Paxton and Missouri’s Andrew Bailey jumped up to do so.

Paxton sent a civil investigative demand (CID) as a sort of fishing expedition, demanding Media Matters hand over a ton of internal documents. Media Matters responded by going to court, initially in Maryland, but it quickly moved to DC (after the judge in Maryland suggested that was the proper venue), asking the court to protect it from this obviously ridiculous, retaliatory attack. The attack was clearly designed to create chilling effects to stop any sort of investigative reporting on what was happening to ExTwitter.

On Friday, Judge Amit Mehta did a complete and total takedown of Paxton’s bullshit censorial attack on Media Matters’ speech. The whole thing is worth a read. Paxton argued that the DC court has no jurisdiction over his Texas-based investigation. This is a bit ironic, given that Paxton is at the same time claiming jurisdiction over Media Matters despite it being in DC, not Texas.

Turns out, Paxton screwed himself here (such a good lawyer, huh?) by hiring a process server to deliver the CID in DC, thereby making the jurisdiction question a lot easier:

First, the court finds that Defendant invoked the benefits and protections of the District’s laws when he “caused” service of the CID in the District of Columbia “through a professional process service.” Def.’s Opp’n, Decl. of Ass’t Att’y Gen. Levi Fuller, Ex. 1, ECF No. 26-1, ¶ 3 [hereinafter Fuller Decl.]. Courts have found that the hiring of a process server creates an agency relationship between the attorney and process server, and that relationship establishes the attorney’s presence in the jurisdiction to satisfy the “minimum contacts” requirement. See Schleit v. Warren, 693 F. Supp. 416, 419–20 (E.D. Va. 1988) (so holding under Virginia law); Balsly v. W. Michigan Debt Collections, Inc., No. 11-cv-642-DJN, 2012 WL 628490, at *5–7 (E.D. Va. Feb. 27, 2012) (same); Hamilton, Miller, Hudson & Fayne Travel Corp. v. Hori, 520 F. Supp. 67, 70 (E.D. Mich. 1981) (so holding under Michigan law). Courts also have held that a person who arranges for personal delivery of process in a State “purposely avail[s] themselves of the privilege of serving process in [the State].” Hori, 520 F. Supp. at 70. As one court has put it: “it [is] reasonable to conclude that a lawyer who knowingly serves abusive process in a jurisdiction . . . is ‘purposely avail[ing] himself of the privilege of conducting activities within the forum State.’” Schleit, 693 F. Supp. at 422–23 (quoting Luke v. Dalow Indus., Inc., 566 F. Supp. 1470, 1472 (E.D. Va. 1983)). Defendant’s hiring of a process server in the District of Columbia to effect service on Media Matters therefore created the requisite jurisdictional contacts with the District. See Smith v. Jenkins, 452 A.2d 333, 335 (D.C. 1982) (“Generally an agency relationship results when one person authorizes another to act on his behalf subject to his control, and the other consents to do so.”) (citations omitted).

Maybe they can teach that in Paxton’s remedial classes as well.

The judge also notes the irony of Paxton claiming to be able to enforce Texas law in DC but then not to be subject to a DC court himself:

Defendant promised to “vigorously enforce” the Texas DTPA against Media Matters for “fraudulent acts” with no apparent connection to Texas. Branch Decl., Ex. B at 13. His issuance of the CID had the effect of chilling Plaintiffs’ expressive activities nationwide, which deprived D.C. residents access to Plaintiffs’ reporting. The national implications of Defendant’s actions were compounded by his calling upon other Attorneys General to investigate Media Matters. See id., Ex. C, at 17. Thus, like the New Jersey Attorney General in Grewal, Defendant “projected himself across state lines and asserted a pseudo-national executive authority” that makes exercising jurisdiction over him reasonable and does not offend principles of federalism.

Having established that Paxton’s conduct gives the DC court jurisdiction over him, the court takes on Paxton’s claim that his CID presents no injury to Media Matters (try not to laugh). Judge Mehta points out how ridiculous this claim is by basically saying, “dude, do you even know how the First Amendment works?”

Where, as here, a plaintiff brings a claim of First Amendment retaliation, “the injury-in-fact element is commonly satisfied by a sufficient showing of self-censorship, which occurs when a claimant is chilled from exercising his right to free expression.” Edgar v. Haines, 2 F.4th 298, 310 (4th Cir. 2021), cert. denied, 142 S. Ct. 2737 (2022) (quoting Cooksey v. Futrell, 721 F.3d 226, 235 (4th Cir. 2013)) (internal quotation marks omitted); see also Twitter, 56 F.4th at 1174 (citing Edgar, 2 F.4th at 310); Cooksey, 721 F.3d at 236 (finding justiciable injury where a state official informed plaintiff that she had “statutory authority” to seek an injunction against him if he did not edit his diet-advice website and plaintiff alleged “speech-chilling uncertainty about the legality of private conversations and correspondence”). The chill must be “objectively reasonable.” Edgar, 2 F.4th at 310 (quoting Benham v. City of Charlotte, 635 F.3d 129, 135 (4th Cir. 2011)).

Through sworn affidavits, Plaintiffs have demonstrated the profound chilling impact that the CID has had on its news operations and journalistic mission. Media Matters’ Editor-in-Chief, Benjamin Dimiero, declares that the CID has “dramatically changed [his] team’s editorial processes[.]” Pls.’ Mot., Decl. of Benjamin Dimiero in Supp. of Pls.’ Mot., Ex. 4, ECF No. 4-4, ¶ 16 [hereinafter Dimiero Decl.]. Dimiero describes a “new culture of fear” amongst Media Matters staff about research and reporting. Id. For example, he avers that the editorial team and leadership now engage in “greater internal scrutiny and risk calculation” when approaching stories that they otherwise would have published after their normal vetting process, such as stories about media coverage of the Defendant’s anti-abortion actions in Texas. Id. Dimiero further states that other stories, such as one concerning content moderation decisions made by X, “may go unreported on entirely.” Id. “There is,” he says, “a general sense among our team and organization that we must tread very lightly[] and be careful not to cross lines that would jeopardize our work or our employees’ safety . . . because of concern that certain reporting could make us a target for further retaliation.”

According to Dimiero, since Defendant announced the investigation, “Media Matters’s editorial leaders have pared back reporting and publishing, particularly on any topics that could be perceived as relating to the Paxton investigation.” Id. ¶ 17. Absent the CID, Media Matters would have coordinated follow up research and reporting on Hananoki’s November 16 Article, as well as the one that appeared the next day. Id. ¶ 18. Media Matters, for instance, “received several tips from people who have seen advertisements for prominent brands placed alongside extremist content,” but has limited the scope of its reporting on the subject “for fear of additional retaliation.” Id. Furthermore, Media Matters otherwise would have published at least two additional articles on the topics of Hananoki’s reporting, but his team withheld them due to concerns of further legal action. Id. ¶ 19. Writers have expressed concerns that their investigations could serve as the basis for retaliatory legal action and that their work product might be subject to investigative demands. Id. ¶ 22; see also Padera Decl. ¶¶ 23–24 (same). Media Matters’ leadership and editorial team have since assumed a more significant role in publishing decisions, which “has significantly slowed down [their] editorial and publication process.” Dimiero Decl. ¶ 21. Media Matters has been taking these steps out of fear of retaliation, not out of legitimate concerns about fairness or accuracy

I can relate, having been sued for my accurate reporting myself. The mental toll that such a lawsuit has on your reporting is very real, even when (arguably especially when) you know that your reporting was 100% solid. It’s incredibly chilling that you can still end up in court, facing ruinous liability, even when you do everything right.

From there, Judge Mehta moves on to Media Matters’ likelihood of success on the merits. He notes he only needs to address the First Amendment issue, which is pretty obvious and very easy.

Defendant’s investigation of Media Matters is “retaliatory action sufficient to deter a person of ordinary firmness in plaintiff’s position from speaking again[.]” Aref, 833 F.3d at 258. Defendant makes no contrary argument, Def.’s Opp’n at 23, so the court treats as conceded the sufficiency of Plaintiffs’ proof as to this element, see Day v. D.C. Dep’t of Consumer & Regul. Affs., 191 F. Supp. 2d 154, 159 (D.D.C. 2002) (“If a party fails to counter an argument that the opposing party makes in a motion, the court may treat that argument as conceded.”); see also Wannall v. Honeywell, Inc., 775 F.3d 425, 428 (D.C. Cir. 2014) (“[I]f a party files an opposition to a motion and therein addresses only some of the movant’s arguments, the court may treat the unaddressed arguments as conceded.”)

Still, the court explains why Plaintiffs prevail regardless. “[T]he threat of invoking legal sanctions” is sufficient to deter protected speech. Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 67 (1963). So, too, is the “threat of administrative and judicial intrusion into newsgathering and editorial process” that arises from official process and its possible enforcement. United States v. LaRouche Campaign, 841 F.2d 1176, 1182 (1st Cir. 1988) (internal quotation marks omitted). The Texas Code authorizes the Attorney General to seek restraint of future conduct and the imposition of civil penalties of up to $10,000 per violation in a Texas state court if he has “reason to believe” Plaintiffs violated the DTPA. Tex. Bus. & Com. Code § 17.47(a), (c). He also can seek to have Plaintiffs held in contempt in Texas state court for not complying with the CID. Id. § 17.62(c). These potential punitive consequences, as well as possible judicial intervention to enforce the CID, make Plaintiffs’ claim of chilled expression objectively reasonable.

There is more. “The compelled production of a reporter’s resource materials can constitute a significant intrusion . . . [that] may substantially undercut the public policy in favor of the free flow of information to the public[.]” United States v. Cuthbertson, 630 F.2d 139, 147 (3d Cir. 1980). The CID seeks such records. It demands “internal and external communications . . . regarding Elon Musk’s purchase of X,” X’s CEO “Linda Yaccarino,” and Hananoki’s November 16 Article, as well as external communications with “employees and representatives of X” and the various companies that were the subject of the November 16 Article for a three-week period. Branch Decl., Ex. A, at 11. The compelled disclosure of such “research materials poses a serious threat to the vitality of the newsgathering process.” Shoen v. Shoen, 48 F.3d 412, 416 (9th Cir. 1995). And, of course, Plaintiffs’ actual self-censorship in response to the announced investigation and the CID “provides some evidence of the tendency of [Defendant’s] conduct to chill First Amendment activity.” Hartley v. Wilfert, 918 F. Supp. 2d 45, 54 (D.D.C. 2013) (quoting Constantine v. Rectors & Visitors of George Mason Univ., 411 F.3d 474, 500 (4th Cir. 2005)). The court need not repeat that uncontested evidence here.

Also, Paxton apparently didn’t even try to offer any non-censorial reasons for opening the investigation:

To establish causal link, “[i]t is not enough to show that an official acted with a retaliatory motive and that the plaintiff was injured—the motive must cause the injury. Specifically, it must be a ‘but-for’ cause, meaning that the adverse action against the plaintiff would not have been taken absent the retaliatory motive.” Nieves v. Bartlett, 139 S. Ct. 1715, 1722 (2019). Defendant’s initial press release establishes that Defendant opened an investigation of Media Matters in response to its protected media activities. Branch Decl., Ex. B, at 13. Also, Defendant’s description of Media Matters as a “radical anti-free speech” and “radical left-wing organization” and his encouraging of other Attorneys General to look into Media Matters’ reporting is evidence of retaliatory intent….

Defendant has not responded to Plaintiffs’ causation evidence. See Def.’s Opp’n at 22–23. Notably, he has not submitted a sworn declaration that explains his reasons for opening the investigation. By remaining silent, he has conceded the requisite causal link

It seems quite possible that Ken Paxton is a terrible lawyer.

Paxton also claimed that Media Matters’ voice wasn’t chilled because the org had continued to speak out in defense of its reporting. But, as the court notes, that’s not how any of this works. At all.

Defendant also contends that it is “factually untrue” that Media Matters has had its expression chilled, citing television appearances by Media Matters’ President, in which he has defended the organization’s reporting and “doubled down” on the accuracy of the X images contained in the November 16 Article. Def.’s Opp’n at 24; Fuller Decl., Exs. E & F, at 24–39. But this argument asks too much of Plaintiffs. They “need not show that the government action led them to stop speaking ‘altogether,’” only that it would be “likely to deter a person of ordinary firmness from the exercise of First Amendment rights.” Edgar, 2 F.4th at 310 (quoting Benham 635 F.3d at 135). Therefore, the fact that Media Matters’ President has publicly defended its work does not mean that Plaintiffs have not suffered irreparable harm.

End result: preliminary injunction barring Paxton from enforcing his CID.

Of course, now we’ll have to see what happens in Missouri, where AG Andrew Bailey (who also pretends to be a free speech warrior while trying to suppress the speech of others) not only sent a CID, but immediately sued Media Matters in Missouri. He claims that Media Matters’ decision to go to court to block Paxton’s CID meant that they would refuse to bow down to his demands as well. That, of course, puts that case in a local Missouri court. But one hopes that this ruling will help clarify the First Amendment issues for that court as well.

Still, chalk one up for actual free speech and the First Amendment: Ken Paxton has had his attempt to retaliate against Media Matters for its speech smacked down, as was richly deserved.

Filed Under: 1st amendment, andrew bailey, chilling effects, cid, dc, elon musk, free speech, ken paxton, retaliation, texas
Companies: media matters

Judge Slams Elon Musk For Filing Vexatious SLAPP Suit Against Critic, Calling Out How It Was Designed To Suppress Speech

from the slappity-slapp-slapp dept

Self-described “free speech absolutist” Elon Musk has just had a judge slam him for trying to punish and suppress the speech of critics. Judge Charles Breyer did not hold back in his ruling dismissing Musk’s utterly vexatious SLAPP suit against the Center for Countering Digital Hate (CCDH).

Sometimes it is unclear what is driving a litigation, and only by reading between the lines of a complaint can one attempt to surmise a plaintiff’s true purpose. Other times, a complaint is so unabashedly and vociferously about one thing that there can be no mistaking that purpose. This case represents the latter circumstance. This case is about punishing the Defendants for their speech.

As a reminder, this case was brought after Musk got upset with CCDH for releasing a report claiming that hate speech was up on the site. I’ve noted repeatedly that I’m not a fan of CCDH and am skeptical of any research they release, as I’ve found them to have very shoddy, results-driven methodologies. So, it’s entirely possible that the report they put out is bullshit (though, there have been other reports that seem to confirm its findings).

But the way to respond to such things is, as any actual “free speech absolutist” would tell you, with more speech. Refute the findings. Explain why they were wrong. Dispute the methodology.

But Elon Musk didn’t do that.

He sued CCDH for their speech and put up a silly façade to pretend it wasn’t about their speech. Instead, he claimed it was some sort of breach of contract claim and a Computer Fraud and Abuse Act (CFAA) claim (which long-term Techdirt readers will recognize as a law intended to be used against “hacking” but which has been regularly used in abusive ways against “things we don’t like”).

Despite my general distrust of CCDH’s research and methodologies, this case seemed like an obvious attack on speech. I was happy to see the Public Participation Project (where I am a board member) file an amicus brief (put together by the Harvard Cyberlaw clinic) calling out that this was a clear SLAPP suit.

CCDH itself filed an anti-SLAPP motion, also calling out how the entire lawsuit was clearly an attempt to punish the organization for its speech.

And, thankfully, the judge agreed. The judge noted that it’s clear from the complaint and the details of the case that the lawsuit is about CCDH’s speech, at which point the burden shifts to the plaintiff to prove a “reasonable probability” of prevailing on the merits to allow the case to move forward.

Let’s just say that ExTwitter’s arguments don’t go over well. ExTwitter’s lawyers argued that it wasn’t really complaining about CCDH’s speech, but rather the way in which it collected the data it used to make its argument regarding hate speech. The judge doesn’t buy it:

But even accepting that the conduct that forms the specific wrongdoing in the state law claims is CCDH’s illegal access of X Corp. data, that conduct (scraping the X platform and accessing the Brandwatch data using ECF’s login credentials) is newsgathering—and claims based on newsgathering arise from protected activity….

It is also just not true that the complaint is only about data collection. See Reply at 3 (arguing that X Corp.’s contention that its “claims arise from ‘illegal access of data,’ as opposed to speech,” is the “artifice” at “the foundation of [this] whole case.”) (quoting Opp’n at 10). It is impossible to read the complaint and not conclude that X Corp. is far more concerned about CCDH’s speech than it is its data collection methods. In its first breath, the complaint alleges that CCDH cherry-picks data in order to produce reports and articles as part of a “scare campaign” in which it falsely claims statistical support for the position that the X platform “is overwhelmed with harmful content” in order “to drive advertisers from the X platform.” See FAC ¶ 1. Of course, there can be no false claim without communication. Indeed, the complaint is littered with allegations emphasizing CCDH’s communicative use of the acquired data. See, e.g., id. ¶¶ 17–20 (reports/articles are based on “flawed ‘research’ methodologies,” which “present an extremely distorted picture of what is actually being discussed and debated” on the X platform, in order to “silence” speech with which CCDH disagrees); id. ¶ 43 (CCDH “used limited, selective, and incomplete data from that source . . . that CCDH then presented out of context in a false and misleading manner in purported ‘research’ reports and articles.”), id. ¶ 56 (“CCDH’s reports and articles . . . have attracted attention in the press, with media outlets repeating CCDH’s incorrect assertions that hate speech is increasing on X.”).

The judge also correctly calls out that not suing for defamation doesn’t mean you can pretend you’re not suing over speech, when that’s clearly the intent here:

Whatever X Corp. could or could not allege, it plainly chose not to bring a defamation claim. As the Court commented at the motion hearing, that choice was significant. Tr. of 2/29/24 Hearing at 62:6–10. It is apparent to the Court that X Corp. wishes to have it both ways—to be spared the burdens of pleading a defamation claim, while bemoaning the harm to its reputation, and seeking punishing damages based on reputational harm.

For the purposes of the anti-SLAPP motion, what X Corp. calls its claims is not actually important. The California Supreme Court has held “that the anti-SLAPP statute should be broadly construed.” Martinez, 113 Cal. App. 4th at 187 (citing Equilon Enters. v. Consumer Cause, Inc., 29 Cal. 4th 53, 60 n.3 (2002)). Critically, “a plaintiff cannot avoid operation of the anti-SLAPP statute by attempting, through artifices of pleading, to characterize an action as a ‘garden variety breach of contract [or] fraud claim’ when in fact the liability claim is based on protected speech or conduct.” Id. at 188 (quoting Navellier, 29 Cal. 4th at 90–92); see also Baral v. Schnitt, 1 Cal. 5th 376, 393 (2016) (“courts may rule on plaintiffs’ specific claims of protected activity, rather than reward artful pleading”); Navellier, 29 Cal. 4th at 92 (“conduct alleged to constitute breach of contract may also come within constitutionally protected speech or petitioning. The anti-SLAPP statute’s definitional focus is not the form of the plaintiff’s cause of action[.]”).

This is important, as we’ve seen other efforts to try to avoid anti-SLAPP claims by pretending that a case is not about speech. It’s good to see a judge call bullshit on this. Indeed, in a footnote, the judge even calls out Elon’s lawyers for trying to mislead him about all this:

At the motion hearing, X Corp. asserted that it was “not trying to avoid defamation” and claimed to have “pleaded falsity” in paragraph 50 of the complaint. Tr. of 2/29/24 Hearing at 59:20–23. In fact, paragraph 50 did not allege falsity, or actual malice, though it used the word “incorrect.” See FAC ¶ 50 (“incorrect implications . . . that hate speech viewed on X is on the rise” and “incorrect assertions that X Corp. ‘doesn’t care about hate speech’”). When the Court asked X Corp. why it had not brought a defamation claim, it responded rather weakly that “to us, this is a contract and intentional tort case,” and “we simply did not bring it.” Tr. of 2/29/24 Hearing at 60:1–12; see also id. at 60:21–22 (“That’s not necessarily to say we would want to amend to bring a defamation claim.”).

The judge outright mocks ExTwitter’s claim that it was really only concerned with the data scraping techniques and that it would have brought the same case even if CCDH hadn’t published its report. He notes that each of the torts under which the case was brought requires ExTwitter to show harm, and the only “harms” it discusses are “harms” from the speech:

X Corp.’s many allegations about CCDH’s speech do more than add color to a complaint about data collection—they are not “incidental to a cause of action based essentially on nonprotected activity.” See Martinez, 113 Cal. App. 4th at 187. Instead, the allegations about CCDH’s misleading publications provide the only support for X Corp.’s contention that it has been harmed. See FAC ¶ 78 (breach of contract claim: alleging that CCDH “mischaracterized the data . . . in efforts to claim X is overwhelmed with harmful conduct, and support CCDH’s call to companies to stop advertising on X. . . . As a direct and proximate result of CCDH’s breaches of the ToS in scraping X, X has suffered monetary and other damages in the amount of at least tens of millions of dollars”); ¶¶ 92– 93 (intentional interference claim: alleging that Defendants “intended for CCDH to mischaracterize the data regarding X in the various reports and articles . . . to support Defendants’ demands for companies to stop advertising on X” and that “[a]s a direct and proximate result of Defendants intentionally interfering with the Brandwatch Agreements . . . X Corp. has suffered monetary and other damages of at least tens of millions of dollars”); ¶¶ 98–99 (inducing breach of contract claim: alleging that “X Corp. was harmed and suffered damages as a result of Defendants’ conduct when companies paused or refrained from advertising on X, in direct response to CCDH’s reports and articles” and that “[a]s a direct and proximate result of Defendants inducing Brandwatch to breach the Brandwatch Agreements . . . X Corp. has suffered monetary and other damages in the amount of at least tens of millions of dollars.”).

The “at least tens of millions of dollars” that X Corp. seeks as damages in each of those claims is entirely based on the allegation that companies paused paid advertising on the X platform in response to CCDH’s “allegations against X Corp. and X regarding hate speech and other types of content on X.” See id. ¶ 70. As CCDH says, “X Corp. alleges no damages that it could possibly trace to the CCDH Defendants if they had never spoken at all.” Reply at 3. Indeed, X Corp. even conceded at the motion hearing that it had not alleged damages that would have been incurred if CCDH “had scraped and discarded the information,” or scraped “and never issued a report, or scraped and never told anybody about it.” See Tr. of 2/29/24 Hearing at 7:22–8:3. The element of damages in each state law claim therefore arises entirely from CCDH’s speech.

In other words, stop your damned lying. This case was always about trying to punish CCDH for its speech.

ExTwitter could still get past an anti-SLAPP motion by showing it had a likelihood of succeeding on the other claims, but no such luck. First up are the “breach of contract” claims that are inherently silly for a variety of reasons (not all of which I’ll go over here).

But the biggest one, again, is that if this were a legitimate breach of contract claim, Musk wouldn’t be seeking “reputational damages.” But he is. And thus:

Another reason that the damages X Corp. seeks—“at least tens of millions of dollars” of lost revenue that X Corp. suffered when CCDH’s reports criticizing X Corp. caused advertisers to pause spending, see FAC ¶ 70—are problematic is that X Corp. has alleged a breach of contract but seeks reputation damages. Of course, the main problem with X Corp.’s theory is that the damages alleged for the breach of contract claim all spring from CCDH’s speech in the Toxic Twitter report, and not its scraping of the X platform. See Reply at 7 (“Because X Corp. seeks (impermissibly) to hold CCDH U.S. liable for speech without asserting a defamation claim, it is forced to allege damages that are (impermissibly) attenuated from its claimed breach.”). One way we know that this is true is that if CCDH had scraped the X platform and never spoken, there would be no damages. Cf. ACLU Br. at 12. (“Had CCDH U.S. praised rather than criticized X Corp., there would be no damages to claim and therefore no lawsuit.”). Again, X Corp. conceded this point at the motion hearing.

CCDH’s reputation damages argument is another way of saying that the damages X Corp. suffered when advertisers paused their spending in response to CCDH’s reporting was not a foreseeable result of a claimed breach.

In other words, the actual issue here wasn’t any “breach of contract”. Just the speech. Which is protected.

And that becomes important for a separate analysis of whether or not the First Amendment applies here, with the judge going through the relevant precedents and noting that there’s just nothing in the case that suggests any of the relevant damages are due to any claimed contractual breach, rather than from CCDH’s protected speech:

Here, X Corp. is not seeking some complicated mix of damages—some caused by the reactions of third parties and some caused directly by the alleged breach. The Court can say, as a matter of law, whether the single type of damages that X Corp. seeks constitutes “impermissible defamation-like publication damages that were caused by the actions and reactions of third parties to” speech or “permissible damages that were caused by [CCDH’s] breaches of contract.”….

The breach that X Corp. alleges here is CCDH’s scraping of the X platform. FAC ¶ 77. X Corp. does not allege any damages stemming directly from CCDH’s scraping of the X platform. X Corp. seeks only damages based on the reactions of advertisers (third parties) to CCDH’s speech in the Toxic Twitter report, which CCDH created after the scraping. See FAC ¶¶ 70, 78; see also ACLU Br. at 12 (“The damages X Corp. seeks . . . are tied to reputational harm only, with no basis in any direct physical, operational or other harm that CCDH U.S.’s alleged scraping activities inflicted on X Corp.”). That is just what the Fourth Circuit disallowed in Food Lion, 194 F.3d at 522. The speech was not the breach, as it was in Cohen. And X Corp.’s damages would not have existed even if the speech had never occurred, as in Newman, 51 F.4th at 1134. Here, there would be no damages without the subsequent speech. Accordingly, the Court can hold as a matter of law that the damages alleged are impermissible defamation-like publication damages caused by the actions of third parties to CCDH’s report.

And, finally, the court says it would be “futile” to allow ExTwitter to amend the complaint and try again. Everything here is about punishing CCDH for its speech. While the judge notes that courts should often give plaintiffs leave to amend in cases where it makes sense, here he rightly fears that ExTwitter would use an amendment only to further punish CCDH for its speech.

ExTwitter argued that it could amend the complaint to show that the data scraping harmed the “safety and security” of the site, and that it created harm for the company in having to protect the site. Again, the judge points out how ridiculous both arguments are.

On the “safety and security” side, the judge effectively says the legal equivalent of “I wasn’t born yesterday.” Nothing in this scraping puts users at risk. It was just a sophisticated search of Twitter’s site:

While security and safety are noble concepts, they have nothing to do with this case. The Toxic Twitter report stated that CCDH had used the SNScrape tool, “which utilizes Twitter’s search function,” to “gather tweets from” “ten reinstated accounts,” resulting in a “dataset of 9,615 tweets posted by the accounts.” Toxic Twitter at 17. There is no allegation in the complaint, and X Corp. did not assert that it could add an allegation, that CCDH scraped anything other than public tweets that ten X platform users deliberately broadcast to the world. No private user information was involved—no social security numbers, no account balances, no account numbers, no passwords, not even “gender, relationship status, ad interests etc.” See Meta Platforms, Inc. v. BrandTotal Ltd., 605 F. Supp. 3d 1218, 1273 (2022).
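
To put the technique in perspective: snscrape (the “SNScrape tool” the report cites) is an open-source Python library that simply automates queries against Twitter’s public search. A minimal sketch of that kind of collection, assuming snscrape’s Python interface as it existed at the time and using hypothetical account names (this is not CCDH’s actual code), might look like this:

```python
# A rough sketch (not CCDH's actual code) of gathering public tweets via the
# open-source snscrape library, which wraps Twitter's public search function.
# The account handles and per-account cap below are hypothetical placeholders.
import snscrape.modules.twitter as sntwitter

accounts = ["example_reinstated_account_1", "example_reinstated_account_2"]
collected = []

for account in accounts:
    # TwitterSearchScraper just runs a "from:<user>" query -- the same search
    # anyone could type into the platform's own public search box.
    scraper = sntwitter.TwitterSearchScraper(f"from:{account}")
    for i, tweet in enumerate(scraper.get_items()):
        if i >= 1000:  # keep the run small-scale
            break
        collected.append({"user": account, "date": tweet.date, "text": tweet.content})

print(f"Collected {len(collected)} public tweets from {len(accounts)} accounts")
```

In other words, it is a scripted version of the same public search anyone could run by hand, which is why the court saw no plausible threat to user “safety and security.” (Changes to the platform’s access rules since 2023 have largely broken this sort of unauthenticated collection, so the sketch is illustrative only.)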

And, importantly, the judge notes that even if some users of ExTwitter want to delete their tweets, it doesn’t implicate their “safety and security” that someone might have seen (or even saved) their tweets earlier:

When asked why the collecting of public tweets implicates users’ security interests, X Corp. insisted that “this all goes to control over your data,” and that users expect that they will be able to take down their tweets later, or to change them—abilities they are robbed of when the “data” is scraped. See Tr. of 2/29/24 Hearing at 40:17–24. But even assuming that it is “very important” to a user that he be able to amend or remove his pro-neo-Nazi tweets at some point after he has tweeted them, see id. at 40:24–25, a user can have no expectation that a tweet that he has publicly disseminated will not be seen by the public before that user has a chance to amend or remove it. While scraping is one way to collect a user’s tweets, members of the public could follow that user and view his tweets in their feeds, or use the X platform’s search tool (as SNScrape did) and view his tweets that way.

X Corp.’s assertion that the scraping alleged here violates a user’s “safety and security” in his publicly disseminated tweets is therefore a non-starter.

As for a harm in ExTwitter having to then “protect” its users, the judge also laughs that one off. He notes that scraping is a common part of how the web works, even if some websites don’t like it:

The problem with this argument is that it is at odds with what X Corp. has alleged.

Although social media platforms do not like it, scraping, for various ends, is commonplace. See, e.g., ACLU Br. at 7 (“Researchers and Journalists Use Scraping to Enable Speech in the Public Interest and Hold Power to Account.”); see also id. at 8–9 (collecting sources); Andrew Sellars, “Twenty Years of Web Scraping and the Computer Fraud and Abuse Act,” 24 B.U. J. Sci. & Tech. L 372, 375 (2018) (“Web scraping has proliferated beneath the shadow of the [CFAA].”); hiQ 2022 Circuit opinion, 31 F.4th at 1202 (“HiQ points out that data scraping is a common method of gathering information, used by search engines, academic researchers, and many others. According to hiQ, letting established entities that already have accumulated large user data sets decide who can scrape that data from otherwise public websites gives those entities outsized control over how such data may be put to use . . . the public interest favors hiQ’s position.”); see also id. at 1186 (“LinkedIn blocks approximately 95 million automated attempts to scrape data every day”).

Furthermore, while the judge notes that, of course, there can be some cases where scraping could create harms for a site, this case is not an example of that. CCDH was just doing a basic search of publicly available info:

This is not such a case. Here, CCDH is alleged to have used Twitter’s own search tool to collect 9,615 public tweets from ten Twitter users, see FAC ¶ 77; Toxic Twitter at 17, and then to have announced that it did so in a public report, see id. Assuming for present purposes that this conduct amounts to the “scraping” barred by the ToS, the extent of CCDH’s scraping was not a mystery. As CCDH asked at the motion hearing, “What CCDH did and the specific tweets that it gathered, what tool it used, how it used that tool and what the results were are documented explicitly in its public report. So what is it that they’re investigating?” Tr. of 2/29/24 Hearing at 22:3–7. Nor was this the kind of large-scale, commercial scraping—as in hiQ, as alleged in Bright Data—that could conceivably harm the X platform or overburden its servers. It is not plausible that this small-scale, non-commercial scraping would prompt X Corp. to divert “dozens, if not over a hundred personnel hours across disciplines,” see Tr. of 2/29/24 Hearing at 8:7–11, of resources toward the repair of X Corp.’s systems. Nor would such expenditures have been foreseeable to CCDH in 2019. In 2019, if CCDH had thought about the no-scraping provision in the ToS at all, it would have expected X Corp. to incur damages only in response to breaches of that provision that could actually harm the X platform. It would not have expected X Corp. to incur damages in connection with a technical breach of that provision that involved the use of Twitter’s search tool to look at ten users and 9,615 public tweets.

Furthermore, the judge notes that if ExTwitter had to spend time and money to “protect” the site here, it wouldn’t be because of any costs from the scraping, but rather from CCDH’s (again, protected) speech.

And thus the anti-SLAPP motion wins, the case is dismissed, and Elon can’t file an amended complaint. Indeed, the judge calls out the silliness of ExTwitter claiming that CCDH was seeking to suppress speech, by noting that it’s quite obvious the opposite is going on here:

The Court notes, too, that X Corp.’s motivation in bringing this case is evident. X Corp. has brought this case in order to punish CCDH for CCDH publications that criticized X Corp.—and perhaps in order to dissuade others who might wish to engage in such criticism. Although X Corp. accuses CCDH of trying “to censor viewpoints that CCDH disagrees with,” FAC ¶ 20, it is X Corp. that demands “at least tens of millions of dollars” in damages—presumably enough to torpedo the operations of a small nonprofit—because of the views expressed in the nonprofit’s publications…. If CCDH’s publications were defamatory, that would be one thing, but X Corp. has carefully avoided saying that they are.

Given these circumstances, the Court is concerned that X Corp.’s desire to amend its breach of contract claim has a dilatory motive—forcing CCDH to spend more time and money defending itself before it can hope to get out from under this potentially ruinous litigation. See PPP Br. at 2 (“Without early dismissal, the free speech interests that the California legislature sought to protect will vanish in piles of discovery motions.”). As CCDH argued at the motion hearing, the anti-SLAPP “statute recognizes that very often the litigation itself is the punishment.” Tr. of 2/29/24 Hearing at 33:12–34:5. It would be wrong to allow X Corp. to amend again when the damages it now alleges, and the damages it would like to allege, are so problematic, and when X Corp.’s motivation is so clear.

Accordingly, the Court STRIKES the breach of contract claim and will not allow X Corp. to add the proposed new allegations as to that claim.

The judge also drops a hammer on the silly CFAA claims. First, the court notes that, as per the Supreme Court’s Van Buren ruling, if you’re arguing losses from hacking, the loss has to come from “technological harms” associated with the unauthorized access. ExTwitter claims that the “loss” was from the investigation it had to do to stop such violations, but the judge isn’t buying it. (For what it’s worth, Riana Pfefferkorn notes that the requirement that harms be “technological harms” came from dicta in the Supreme Court, not a holding, but it appears that courts are treating it as a holding…)

When the lawsuit was filed, we called out a big part of the problem: it was a separate entity, Brandwatch, that gave CCDH access to its tools to do research on Twitter. So if ExTwitter had any complaint here, it might (weakly!) be against Brandwatch for giving CCDH access. But it can’t possibly argue that CCDH violated the CFAA.

X Corp.’s losses in connection with “attempting to conduct internal investigations in efforts to ascertain the nature and scope of CCDH’s unauthorized access to the data,” see FAC ¶ 87, are not technological in nature. The data that CCDH accessed does not belong to X Corp., see Kaplan Decl. Ex. A at 13 (providing that users own their content and grant X Corp. “a worldwide, non-exclusive, royalty-free license”), and there is no allegation that it was corrupted, changed, or deleted. Moreover, the servers that CCDH accessed are not even X Corp.’s servers. X Corp. asserted at the motion hearing that its servers “stream data to Brandwatch servers in response to queries from a logged in user” and so “you cannot say fairly it’s not our systems.” Tr. of 2/29/24 Hearing at 28:16–20. But that is not what the complaint alleges. The complaint alleges that “X Corp. provided non-public data to Brandwatch” and “[t]hat data was then stored on a protected computer.” FAC ¶ 83; see also id. ¶ 86 (“the Licensed Materials were stored on servers located in the United States that Brandwatch used for its applications. CCDH and ECF thus knew that, in illegally using ECF’s login credentials and querying the Licensed Materials, CCDH was targeting and gaining unauthorized access to servers used by Brandwatch in the United States.”) (emphasis added); id. ¶ 29 (Twitter would stream its Licensed Materials from its servers, “including in California,” to “servers used by Brandwatch [] located in the United States, which Brandwatch’s applications accessed to enable [its] users with login credentials to analyze the data.”). It is therefore hard to see how an investigation by X Corp. into what data CCDH copied from Brandwatch’s servers could amount to “costs caused by harm to computer data, programs, systems, or information services.” See Van Buren, 141 S. Ct. at 1659–60.

The court also laughs off the idea that the cost of attorneys could be seen as “technological harm” under the CFAA. And thus, CFAA claims are dropped.

As we noted in our original post, the other claims were the throw-in claims designed to piggyback on the contract and CFAA claims and to sound scary and drive up the legal fees. The judge sees these for what they are and dismisses them easily.

CCDH’s arguments for dismissing the tort claims are that: (1) the complaint shows that CCDH did not cause a breach; (2) the complaint has failed to plausibly allege a breach; (3) the complaint has failed to plausibly allege CCDH’s knowledge; and (4) the complaint fails to adequately allege damages. MTD&S at 23–26. The Court concludes that CCDH’s arguments about causation and about damages are persuasive, and does not reach its other arguments.

The court also called out that it’s weird (and notable in exposing Musk’s nonsense) that the complaint seeks to hold CCDH liable for Brandwatch allowing CCDH to use its services:

X Corp.’s response is that “the access is the breach.” Opp’n at 30. In other words, Brandwatch agreed to keep the Licensed Materials secure, and by allowing CCDH to access the Licensed Materials, Brandwatch necessarily—and simultaneously—breached its agreement to keep the Licensed Materials secure. The Court rejects that tortured reasoning. Any failure by Brandwatch to secure the Licensed Materials was a precondition to CCDH’s access. In addition, to the extent that X Corp. maintains that CCDH need not have done anything to impact Brandwatch’s behavior, then it is seeking to hold CCDH liable for breaching a contract to which it was not a party. That does not work either.

The complaint also included some utter nonsense that the judge isn’t buying. Some Senator had said that CCDH was a “foreign dark money group,” and thus ExTwitter said there might be further claims against unnamed “John Does,” but the judge points out that this conspiracy theory isn’t backed up by anything legitimate:

CCDH’s last argument is that X Corp. fails to state a claim against the Doe defendants. MTD&S at 26. X Corp. alleges that “one [unnamed] United States senator referred to CCDH as ‘[a] foreign dark money group,’” and that “[o]ther articles have claimed that CCDH is, in part, funded and supported by foreign organizations and entities whose directors, trustees, and other decision-makers are affiliated with legacy media organizations.” FAC ¶ 62. It further alleges that “CCDH is acting . . . at the behest of and in concert with funders, supporters, and other entities.” Id. ¶ 63. These allegations are vague and conclusory, and do not state a plausible claim against the Doe defendants. See Iqbal, 556 U.S. at 678 (claim is plausible “when the plaintiff pleads factual content that allows the court to draw the reasonable inference that the defendant is liable for the misconduct alleged.”).

So, case dismissed. Of course, Elon might appeal to the 9th Circuit and run up the fees he’ll eventually need to pay CCDH’s lawyers, just to further attempt to bully and suppress the speech of critics (and to chill the speech of other critics).

And, yeah, that might happen. All from the “free speech absolutist.”

On the whole, though, this is a good, clear ruling that really highlights (1) the bullshit vexatious, censorial nature of the SLAPP suit from Elon and (2) the value and importance of a strong anti-SLAPP law like California’s.

Filed Under: anti-slapp, breach of contract, california, cfaa, charles breyer, chilling effects, elon musk, free speech, slapp
Companies: brandwatch, ccdh, twitter, x

Move Over, Software Developers – In The Name Of Cybersecurity, The Government Wants To Drive

from the unconstitutional-camel-noses dept

Earlier this year the White House put out a document articulating a National Cybersecurity Strategy. It sets out five “pillars,” or high-level focus areas where the government should concentrate its efforts to strengthen the nation’s resilience and defense against cyberattacks: (1) Defend Critical Infrastructure, (2) Disrupt and Dismantle Threat Actors, (3) Shape Market Forces to Drive Security and Resilience, (4) Invest in a Resilient Future, and (5) Forge International Partnerships to Pursue Shared Goals. Each pillar also includes several sub-priorities and objectives.

It is a seminal document, and one that has spawned and will continue to spawn much discussion. For the most part what it calls for is too high level to be particularly controversial. It may even be too high level to be all that useful, although there can be value in distilling any sort of policy priorities into words. After all, even if what the government calls for may seem obvious (like “defending critical infrastructure,” which of course we’d all expect it to do), going to the trouble of actually articulating it as a policy priority provides a roadmap for more constructive efforts to follow and may also help to marshal resources. It can also help ensure that any more tangible policy efforts the government is inclined to directly engage in are not at cross-purposes with what it wants to accomplish overall.

Which is important, because the rest of this post discusses how the strategy document itself reveals that there may already be some incoherence among the government’s policy priorities. In particular, it lists among its sub-priorities an objective with troubling implications: imposing liability on software developers. This priority is described in a few paragraphs in the section entitled “Strategic Objective 3.3: Shift Liability for Insecure Software Products and Services,” but the essence is mostly captured in this one:

The Administration will work with Congress and the private sector to develop legislation establishing liability for software products and services. Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios. To begin to shape standards of care for secure software development, the Administration will drive the development of an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services. This safe harbor will draw from current best practices for secure software development, such as the NIST Secure Software Development Framework. It also must evolve over time, incorporating new tools for secure software development, software transparency, and vulnerability discovery.

Despite some equivocating language, at its essence what the White House proposes is no small thing: legislation instructing people on how to code their software and requiring adherence to those instructions. Such a proposal raises a number of concerns, about both the method the government would use to prescribe how software must be coded and the dubious constitutionality of its being able to make such demands at all. While the strategy document itself does not yet prescribe a specific way to code software, it contemplates that the government someday could. And it does so apparently without recognizing how significantly it would reshape software development for the government to have the ability to make such demands – and not necessarily for the better.

In terms of method, while the government isn’t necessarily suggesting that a regulator enforce requirements for software code, what it does propose is far from a light touch: enforcement of coding requirements via liability – or, in other words, the ability of people to sue if software turns out to be vulnerable. Regulation via liability is still profoundly heavy-handed, perhaps even more so than regulatory oversight would be. Instead of a single regulator working from discrete criteria, there will be myriad plaintiffs and courts interpreting the language however they understand it. Furthermore, litigation is notoriously expensive, even for a single case, let alone across suits from those myriad plaintiffs. We have seen all too many innovative companies obliterated by litigation, and seen how the mere threat of litigation can chill the investment needed to bring good new ideas into reality. This proposal seems to reflect a naïve expectation that litigation will only follow where truly deserved, but we know from history that such restraint is rarely the rule.

True, the government does contemplate some tuning to dull the edge of the regulatory knife, particularly through the use of safe harbors, such that there would be defenses to protect software developers from being drained dry by unmeritorious litigation threats. But while a safe harbor may be a nice idea, safe harbors are hardly a panacea: we’ve seen that if you have to litigate whether one applies, it provides little protection even when it ultimately does. In addition, even if it were possible to craft an adequately durable safe harbor, given the current appetite among policymakers to tear down the immunities and safe harbors we already have, like Section 230 or the already porous DMCA, the assumption that policymakers will actually produce a sustainable liability regime with sufficiently strong defenses, and one not prone to innovation-killing abuse, is yet another unfortunately naïve expectation.

The way liability would attach under this proposal is also a big deal: through the creation of a duty of care for the software developer. (The cited paragraph refers to it as “standards of care,” but that phrasing implies a duty to adhere to them, and liability when they are deviated from.) But concocting such a duty is problematic both practically and constitutionally, because at its core, what the government is threatening here is alarming: mandating how software is written. Not suggesting how software should ideally be written, nor enabling, encouraging, or facilitating it to be written well, but instead using the force of law to demand how software be written.

It is so alarming because software is authored, and it raises a significant First Amendment problem for the government to dictate how anything should be expressed, regardless of how correct or well-intentioned the government may be. Like a book or newspaper, software is expressed through language and expressive choices; there is not just one correct way to write a program that does something, but rather an infinite number of big and little structural and language decisions made along the way. But this proposal basically ignores the creative aspect of software development (indeed, software is even treated as eligible for copyright protection as an original work of authorship). Instead it treats software more like a defectively made toaster than a book or newspaper, replacing the independent expressive judgment of the software developer with the government’s. Courts have also recognized the expressive quality of software, so it would be quite a sea change if the Constitution somehow did not apply to this particular form of expression. And such a change would have huge implications, because cybersecurity is not the only reason that the government keeps proposing to regulate software design. The White House proposal would seem to bless all those attempts, no matter how ill-advised or facially censorial, by not even contemplating the constitutional hurdles that any legal regime regulating software design would need to clear.

It would still need to clear them even if the government truly knew best, which is a big if, even here, and not just because the government may lack adequate or current expertise. The proposal does contemplate a multi-stakeholder process to develop best practices, and there is nothing wrong in general with the government taking on some sort of facilitating role to help illuminate what those practices are and to make sure software developers are aware of them – it may even be a good idea. The issue is not that there is no such thing as best practices for software development – obviously there are. But they are not necessarily one-size-fits-all or static; a best practice may depend on context and constantly needs to evolve to address new vectors of attack. A distant regulator, one inherently in a reactive posture, may not understand the particular needs of a particular software program’s userbase, nor the evolving challenges facing the developer. Which is a big reason why requiring adherence to any particular practice through the force of law is problematic: it can effectively require software developers to write their code the government’s way rather than in what is ultimately the best way for them and their users. Or at least put them in the position of having to defend choices that, up until now, the Constitution had let them make freely. That would amount to a huge, unprecedented burden that threatens to chill software development altogether.

Such chilling is not an outcome the government should want to invite, and indeed, according to the strategy document itself, does not want. The irony of the software liability proposal is that it is inherently out of step with the overall thrust of the rest of the document, and even with the third pillar in which it appears, which proposes to foster better cybersecurity through the operation of more efficient markets. But imposing design liability would have the exact opposite effect on those markets. Even if well-resourced private entities (e.g., large companies) might be able to find a way to persevere and navigate the regulatory requirements, small ones (including those potentially excluded from the stakeholder process establishing the requirements) may not be able to meet them, and individual people coding software are even less likely to. The strategy document refers to liability only for developers with market power, but every software developer has market power, including those individuals who voluntarily contribute to open source software projects, which provide software users with more choices. Those continued contributions will be deterred if those who make them can be held liable for them. Ultimately software liability will result in fewer people writing code and consequently less software for the public to use. So far from making the software market work more efficiently through competitive pressure, imposing liability for software development will only remove options for consumers, and with them the competitive pressure the White House acknowledges is needed to prompt those who still produce software to do better. Meanwhile, those developers who remain will still be inhibited from innovating if that innovation could put them out of compliance with whatever the law has so far managed to imagine.

Which raises another concern with the software liability proposal and how it undermines the rest of the otherwise reasonable strategy document. The fifth pillar the White House proposes is to “Forge International Partnerships to Pursue Shared Goals”:

The United States seeks a world where responsible state behavior in cyberspace is expected and rewarded and where irresponsible behavior is isolating and costly. To achieve this goal, we will continue to engage with countries working in opposition to our larger agenda on common problems while we build a broad coalition of nations working to maintain an open, free, global, interoperable, reliable, and secure Internet.

On its face, there is nothing wrong with this goal either, and it, too, may be a necessary one to effectively deal with what are generally global cybersecurity threats. But the EU is already moving ahead to empower bureaucratic agencies to decide how software should be written, without a First Amendment or equivalent understanding of the expressive interests such regulation might impact. Nor does there seem to be any meaningful understanding of how any such regulation will affect the entire software ecosystem, including open source, where authorship emerges from a community rather than from a private entity theoretically capable of accountability and compliance.

In fact, while the United States hasn’t yet actually specified design practices a software developer must comply with, the EU is already barreling down the path of prescriptive regulation over software, proposing a law that would task an agency with dictating the criteria software must meet. (See this post by Bert Hubert for a helpful summary of its draft terms.) Like the White House, the EU confuses its stated goal of helping the software market work more efficiently with an attempt to control what can be in that market. For all the reasons such an attempt by the US stands to be counterproductive, so would the EU’s efforts, especially coming from a jurisdiction that lacks a First Amendment or equivalent check on the expressive interests such regulation would impact. It may well turn out to be European bureaucrats who attempt to dictate the rules of the road for how software can be coded, but that means it will be America’s job to try to prevent that damage, not double down on it.

It is of course true that not everything software developers currently do is a good idea or even defensible. Some practices are dreadful and damaging. It isn’t wrong to be concerned about the collateral effects of ill-considered or sloppy coding practices, or for the government to want to do something about them. But how regulators respond to these poor practices matters just as much as, if not more than, the fact that they respond, if they are going to make our digital environment better and more secure rather than worse and less so. There are a lot of good ideas in the strategy document for how to achieve this end, but imposing software design liability is not one of them.

Filed Under: 1st amendment, chilling effects, coding, computer security, cybersecurity, duty of care, innovation, liability, national cybersecurity strategy, software, standards of care, white house

NYPD’s New Labor Day Tradition Involves Drone Surveillance Of People’s Private Parties And Property

from the fight-for-your-right-to-party dept

Never let it be said the NYPD doesn’t know how to have a good time. The question remains as to whether it’s possible for the NYPD to allow others to have a good time.

The NYPD has always been in the business of acquiring the latest in law enforcement tech. The arrival of easily affordable drones attracted the NYPD’s acquisition team, which began obtaining these eyes-in-the-sky more than a decade ago when they were still considered to be mostly a military plaything.

The acquisition of drones also attracted the eye of a local artist, who was arrested by the NYPD for satirizing its drone fleet with publicly posted “ads” that suggested the PD was getting into the drone strike business. The NYPD had drones. It did not, however, have a sense of humor. It engaged in a “weeks-long manhunt” for the artist behind the satirical posters that so offended the NYPD it decided it must be a criminal offense.

This criticism was shut down with the heaviest hand the NYPD could apply to the situation. The criticism (at least in this form) stopped. The NYPD’s acquisition and deployment of drones did not.

The NYPD is subject to some limitations on drone use by city law. The POST (Public Oversight of Surveillance Technology) Act requires the NYPD to inform the public about any new use of surveillance tech 90 days ahead of deployment. It did not do this… not in this case.

What it has said about its drones is that their use will be limited to the following situations, as noted by Sean Hollister for The Verge:

[W]hile the NYPD did publish a document about how it uses drones back in 2021, it suggested back then that drones would only be used for:

search and rescue operations, documentation of collisions and crimes scenes, evidence searches at large inaccessible scenes, hazardous material incidents, monitoring vehicular traffic and pedestrian congestion at large scale events, visual assistance at hostage/barricaded suspect situations, rooftop security, observations at shooting or large scale events, public safety, emergency, and other situations with the approval of the Chief of Department

Missing from this list? The thing the NYPD has decided its drones are going to do over the Labor Day weekend:

The New York City police department plans to pilot the unmanned aircrafts in response to complaints about large gatherings, including private events, over Labor Day weekend, officials announced Thursday.

“If a caller states there’s a large crowd, a large party in a backyard, we’re going to be utilizing our assets to go up and go check on the party,” Kaz Daughtry, the assistant NYPD Commissioner, said at a press conference.

While it could be argued (poorly, but argued nonetheless) that drone deployments might reduce wasted law enforcement resources by determining whether noise complaints are worth an in-person follow-up, the decision to convert a holiday weekend into a trial run for unfettered surveillance isn’t the sort of thing anyone (outside of the NYPD) is ever going to embrace.

You see, it’s not just about the parties. It’s about what can be seen and where it can be seen from. An officer at street level outside of a fenced yard can only see so much. A drone flying over enclosed yards can give officers a form of “plain view” they simply cannot achieve on their own. And since law enforcement believes anything its eyes (or its proxy eyes) can see is fair game when it comes to warrantless searches, flying drones over yards just because someone said a party is too loud is an abusive use of surveillance tech, which should be limited to the far more serious suspected criminal acts enumerated in the NYPD’s 2021 drone use document.

Considering the city’s size and population, one would expect the NYPD’s drones to be flying nonstop all weekend long. If the only justification for deployment is complaints about parties, the NYPD will have all the reason it needs to engage in extended, expansive surveillance of entire neighborhoods under the pretense of keeping the peace during a period of celebration.

Should the latent “threat” of people enjoying themselves a bit too much justify this kind of surveillance? The answer is obviously “no.” And a drone flight over a reported party can easily provide glimpses into neighboring property that has been the subject of zero complaints.

Then there’s the chilling effect. Is a Labor Day party protected by the First Amendment? Quite possibly. Not only is it an exercise of the right to freely associate, the perhaps-rowdy statements made by attendees are protected speech that should not be deterred by surveillance efforts seemingly completely divorced from anything resembling probable cause or reasonable suspicion. Sure, the city has an interest in ensuring neighboring residents aren’t subjected to excessive noise or intoxicated spillover, but those objectives can still be achieved without sending a camera into the curtilage absent an articulable reason for doing so.

Filed Under: 1st amendment, 4th amendment, association, chilling effects, drones, nypd, surveillance

Court Says Texas’ Adult Content Age Verification Law Clearly Violates The 1st Amendment

from the 1st-amendment-wins-again dept

One down, many more to go.

We’ve been talking a lot about the rush by states all over the world to push for age verification laws, despite basically every expert noting that age verification technology is inherently a problem for privacy and security, and that the laws mandating it are terrible. So far, it seems that only the Australian government has decided to buck the trend and push back on implementing such laws. But much of the rest of the world is moving forward with them, while a bunch of censorial prudes cheer these laws on despite the many concerns about them.

The Free Speech Coalition, the trade group representing the adult content industry, has sued to block the age verification laws in the US that specifically target their websites. We reported on how their case in Utah was dismissed on procedural grounds: because that law is a bounty-type law with a private right of action, there was no one in the government who could be sued. However, the similar law in Texas did not include that setup (even though Texas really popularized that method with its anti-abortion law). The Free Speech Coalition sued to block the Texas law from going into effect.

Judge David Alan Ezra (who is technically a federal judge in Hawaii, but is hearing Texas cases because the Texas courts are overwhelmed) has issued a pretty sweeping smackdown of these kinds of laws, noting that they violate the 1st Amendment and that they’re barred by Section 230.

Given the rushed nature of the proceedings (the case was filed a few weeks ago, and the judge needed to decide before the law was scheduled to go into effect on Friday), it’s impressive that the ruling is 81 pages of detailed analysis. We’ll have a separate post soon regarding the judge’s discussion on the “health warnings” part of the opinion, but I wanted to cover the rest of the legal analysis, mostly regarding the 1st Amendment and Section 230.

However, it is worth mentioning Texas’ ridiculous argument that there was no standing for the Free Speech Coalition in this case. They tried to argue that there was no standing because FSC didn’t name a particular association member impacted by the law, but we’ve been over this in other cases in which trade associations (see: NetChoice and CCIA) are able to bring challenges on behalf of their member companies. The more bizarre standing challenge was that some of the websites that are members of the Free Speech Coalition are not American companies.

But, the judge notes (1) many of the members are US companies and (2) even the non-US companies are seeking to distribute content in the US, where the 1st Amendment still protects them:

Defendant repeatedly emphasizes that the foreign website Plaintiffs “have no valid constitutional claims” because they reside outside the United States. (Def.’s Resp., Dkt. # 27, at 6–7). First, it is worth noting that this argument, even if successful, would not bar the remaining Plaintiffs within the United States from bringing their claims. Several website companies, including Midus Holdings, Inc., Neptune Media, LLC, and Paper Street Media, LLC, along with Jane Doe and Free Speech Coalition (with U.S. member Paper Street Media, LLC), are United States residents. Defendant, of course, does not contest that these websites and Doe are entitled to assert rights under the U.S. Constitution. Regardless of the foreign websites, the domestic Plaintiffs have standing.

As to the foreign websites, Defendant cites Agency for Intl. Dev. v. All. for Open Socy. Intl., Inc., 140 S. Ct. 2082 (2020) (“AOSI”), which reaffirmed the principle that “foreign citizens outside U.S. territory do not possess rights under the U.S. Constitution.” Id. at 2086. AOSI’s denial of standing is distinguishable from the instant case. That case involved foreign nongovernmental organizations (“NGOs”) that received aid—outside the United States—to distribute outside the United States. These NGOs operated abroad and challenged USAID’s ability to condition aid based on whether an NGO had a policy against prostitution and sex trafficking. The foreign NGOs had no domestic operations and did not plan to convey their relevant speech into the United States. Under these circumstances, the Supreme Court held that the foreign NGOs could not claim First Amendment protection. Id.

AOSI differs from the instant litigation in two critical ways. First, Plaintiffs do not seek to challenge rule or policymaking with extraterritorial effect, as the foreign plaintiffs did in AOSI. By contrast, the foreign Plaintiffs here seek to exercise their First Amendment rights only as applied to their conduct inside the United States and as a preemptive defense to civil prosecution. Indeed, courts have typically awarded First Amendment protections to foreign companies with operations in the United States with little thought. See, e.g., Manzari v. Associated Newspapers Ltd., 830 F.3d 881 (9th Cir. 2016) (in a case against British newspaper, noting that defamation claims “are significantly cabined by the First Amendment”); Mireskandari v. Daily Mail and Gen. Tr. PLC, CV1202943MMMSSX, 2013 WL 12114762 (C.D. Cal. Oct. 8, 2013) (explicitly noting that the First Amendment applied to foreign news organization); Times Newspapers Ltd. v. McDonnell Douglas Corp., 387 F. Supp. 189, 192 (C.D. Cal. 1974) (same); Goldfarb v. Channel One Russia, 18 CIV. 8128 (JPC), 2023 WL 2586142 (S.D.N.Y. Mar. 21, 2023) (applying First Amendment limits on defamation to Russian television broadcast in United States); Nygård, Inc. v. UusiKerttula, 159 Cal. App. 4th 1027, 1042 (2008) (granting First Amendment protections to Finnish magazine); United States v. James, 663 F. Supp. 2d 1018, 1020 (W.D. Wash. 2009) (granting foreign media access to court documents under the First Amendment). It would make little sense to allow Plaintiffs to exercise First Amendment rights as a defense in litigation but deny them the ability to raise a pre-enforcement challenge to imminent civil liability on the same grounds.

Moving on. The judge does a fantastic job detailing how Texas’ age verification law is barred by the 1st Amendment. First, the decision notes that the law is subject to strict scrutiny, the highest level of scrutiny in 1st Amendment cases. As the court rightly notes, in the landmark Reno v. ACLU case (the case that found everything except Section 230 of the Communications Decency Act unconstitutional), the Supreme Court said governments can’t just scream “for the children” and use that as a shield against 1st Amendment strict scrutiny:

However, beginning in the 1990s, use of the “for minors” language came under more skepticism as applied to internet regulations. In Reno v. ACLU, the Supreme Court held parts of the CDA unconstitutional under strict scrutiny. 521 U.S. 844, 850 (1997). The Court noted that the CDA was a content-based regulation that extended far beyond obscene materials and into First Amendment protected speech, especially because the statute contained no exemption for socially important materials for minors. Id. at 865. The Court noted that accessing sexual content online requires “affirmative steps” and “some sophistication,” noting that the internet was a unique medium of communication, different from both television broadcast and physical sales.

It also points to ACLU vs. Ashcroft, which found the Child Online Protection Act unconstitutional on similar grounds, and notes that Texas’ law is pretty similar to COPA.

Just like COPA, H.B. 1181 regulates beyond obscene materials. As a result, the regulation is based on whether content contains sexual material. Because the law restricts access to speech based on the material’s content, it is subject to strict scrutiny

Texas also tried to argue that there should be no 1st Amendment protections for adult content because it’s “obscene.” But the judge noted that’s not at all how the system works:

In a similar vein, Defendant argues that Plaintiffs’ content is “obscene” and therefore undeserving of First Amendment coverage. (Id. at 6). Again, this is precedent that the Supreme Court may opt to revisit, but we are bound by the current Miller framework. Miller v. California, 413 U.S. 15, 24 (1973). Moreover, even if we were to abandon Miller, the law would still cover First Amendment-protected speech. H.B. 1181 does not regulate obscene content, it regulates all content that is prurient, offensive, and without value to minors. Because most sexual content is offensive to young minors, the law covers virtually all salacious material. This includes sexual, but non-pornographic, content posted or created by Plaintiffs. See (Craveiro-Romão Decl., Dkt. # 28-6, at 2; Seifert Decl., Dkt. # 28-7, at 2; Andreou Decl., Dkt. # 28-8, at 2). And it includes Plaintiffs’ content that is sexually explicit and arousing, but that a jury would not consider “patently offensive” to adults, using community standards and in the context of online webpages. (Id.); see also United States v. Williams, 553 U.S. 285, 288 (2008); Ashcroft v. Free Speech Coal., 535 U.S. 234, 252 (2002). Unlike Ginsberg, the regulation applies regardless of whether the content is being knowingly distributed to minors. 390 U.S. at 639. Even if the Court accepted that many of Plaintiffs’ videos are obscene to adults—a question of fact typically reserved for juries—the law would still regulate the substantial portion of Plaintiffs’ content that is not “patently offensive” to adults. Because H.B. 1181 targets protected speech, Plaintiffs can challenge its discrimination against sexual material.

And under strict scrutiny, the law… fails. Badly. The key question under strict scrutiny is whether the law is narrowly tailored to address a compelling state interest, without going beyond that. While the court says that protecting children is a compelling state interest, the law is not even remotely narrowly tailored to that interest:

Although the state defends H.B. 1181 as protecting minors, it is not tailored to this purpose. Rather, the law is severely underinclusive. When a statute is dramatically underinclusive, that is a red flag that it pursues forbidden viewpoint discrimination under false auspices, or at a minimum simply does not serve its purported purpose….

H.B. 1181 will regulate adult video companies that post sexual material to their website. But it will do little else to prevent children from accessing pornography. Search engines, for example, do not need to implement age verification, even when they are aware that someone is using their services to view pornography. H.B. 1181 § 129B.005(b). Defendant argues that the Act still protects children because they will be directed to links that require age verification. (Def.’s Resp., Dkt. # 27, at 12). This argument ignores visual search, much of which is sexually explicit or pornographic, and can be extracted from Plaintiffs’ websites regardless of age verification. (Sonnier Decl., Dkt. # 31-1, at 1–2). Defendant’s own expert suggests that exposure to online pornography often begins with “misspelled searches[.]”…

So, the law doesn’t stop most access to adult content. The judge highlights that, by the state’s own argument, it doesn’t apply to foreign websites, which host a ton of adult content. And it also doesn’t apply to social media, since most of their content is not adult content.

In addition, social media companies are de facto exempted, because they likely do not distribute at least one-third sexual material. This means that certain social media sites, such as Reddit, can maintain entire communities and forums (i.e., subreddits), dedicated to posting online pornography with no regulation under H.B. 1181. (Sonnier Decl., Dkt. # 31-1, at 5). The same is true for blogs posted to Tumblr, including subdomains that only display sexually explicit content. (Id.) Likewise, Instagram and Facebook pages can show material which is sexually explicit for minors without compelled age verification. (Cole Decl., Dkt. # 5-1, at 37–40). The problem, in short, is that the law targets websites as a whole, rather than at the level of the individual page or subdomain. The result is that the law will likely have a greatly diminished effect because it fails to reduce the online pornography that is most readily available to minors.

In short, if the argument is that we need to stop kids from seeing pornography, the law should target pornography, rather than a few sites which focus on pornography.

Also, the law is hella vague, in part because it does not consider that 17-year-olds are kinda different from 5-year-olds.

The statute’s tailoring is also problematic because of several key ambiguities in H.B. 1181’s language. Although the Court declines to rest its holding on a vagueness challenge, those vagueness issues still speak to the statute’s broad tailoring. First, the law is problematic because it refers to “minors” as a broad category, but material that is patently offensive to young minors is not necessarily offensive to 17-year-olds. As previously stated, H.B. 1181 lifts its language from the Supreme Court’s holdings in Ginsberg and Miller, which remains the test for obscenity. H.B. 1181 § 129B.001; Miller, 413 U.S. at 24; Ginsberg, 390 U.S. at 633. As the Third Circuit held, “The type of material that might be considered harmful to a younger minor is vastly different—and encompasses a much greater universe of speech—than material that is harmful to a minor just shy of seventeen years old. . . .” ACLU v. Ashcroft, 322 F.3d at 268. H.B. 1181 provides no guidance as to what age group should be considered for “patently offensive” material. Nor does the statute define when material may have educational, cultural, or scientific value “for minors,” which will likewise vary greatly between 5-year-olds and 17-year-olds.

And even the “age verification” requirements are vague because it’s not clear what counts.

Third, H.B. 1181 similarly fails to define proper age verification with sufficient meaning. The law requires sites to use “any commercially reasonable method that relies on public or private transactional data” but fails to define what “commercially reasonable” means. Id. § 129B.03(b)(2)(B). “Digital verification” is defined as “information stored on a digital network that may be accessed by a commercial entity and that serves as proof of the identity of an individual.” Id. § 129B.003(a). As Plaintiffs argue, this definition is circular. In effect, the law defines “identity verification” as information that can verify an identity. Likewise, the law requires “14-point font,” but text size on webpages is typically measured by pixels, not points. See Erik D. Kennedy, The Responsive Website Font Size Guidelines, Learn UI Design Blog (Aug. 7, 2021) (describing font sizes by pixels) (Dkt. # 5-1 at 52–58). Overall, because the Court finds the law unconstitutional on other grounds, it does not reach a determination on the vagueness question. But the failure to define key terms in a comprehensible way in the digital age speaks to the lack of care to ensure that this law is narrowly tailored. See Reno, 521 U.S. at 870 (“Regardless of whether the CDA is so vague that it violates the Fifth Amendment, the many ambiguities concerning the scope of its coverage render it problematic for purposes of the First Amendment.”).
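To see why the court flags the mismatch of units: CSS, which is how web pages actually size text, defines an inch as 96 pixels and a point as 1/72 of an inch, so a “14-point font” only maps onto the web’s native pixel units through arithmetic the statute never supplies. A minimal sketch of that conversion (purely illustrative – the constants and helper name are mine, not anything from the statute or the ruling):

```typescript
// CSS reference units: 1in = 96px and 1in = 72pt, so 1pt = 96/72 px.
const PX_PER_INCH = 96;
const PT_PER_INCH = 72;

// Convert a point size to its CSS pixel equivalent.
function pointsToPixels(points: number): number {
  return points * (PX_PER_INCH / PT_PER_INCH);
}

console.log(pointsToPixels(14)); // ≈ 18.67px for the statute's "14-point font"
```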

So the law is underinclusive and vague. But it’s also overinclusive by covering way more than is acceptable under the 1st Amendment.

Even if the Court were to adopt narrow constructions of the statute, it would overburden the protected speech of both sexual websites and their visitors. Indeed, Courts have routinely struck down restrictions on sexual content as improperly tailored when they impermissibly restrict adult’s access to sexual materials in the name of protecting minors.

The judge notes (incredibly!) that parts of HB 1181 are so close to COPA (the law the Supreme Court found unconstitutional in the ACLU v. Ashcroft case) that he seems almost surprised Texas even bothered.

The statutes are identical, save for Texas’s inclusion of specific sexual offenses. Unsurprisingly, then, H.B. 1181 runs into the same narrow tailoring and overbreadth issues as COPA….

[….]

Despite this decades-long precedent, Texas includes the exact same drafting language previously held unconstitutional.

Nice job, Texas legislature.

The court also recognizes the chilling effects of age verification laws, highlighting how, despite the Supreme Court’s ruling in Lawrence v. Texas that its anti-sodomy law was unconstitutional, Texas has still kept that law on the books.

Privacy is an especially important concern under H.B. 1181, because the government is not required to delete data regarding access, and one of the two permissible mechanisms of age-verification is through government ID. People will be particularly concerned about accessing controversial speech when the state government can log and track that access. By verifying information through government identification, the law will allow the government to peer into the most intimate and personal aspects of people’s lives. It runs the risk that the state can monitor when an adult views sexually explicit materials and what kind of websites they visit. In effect, the law risks forcing individuals to divulge specific details of their sexuality to the state government to gain access to certain speech. Such restrictions have a substantial chilling effect. See Denver Area Educ. Telecomm. Consortium, Inc., 518 U.S. at 754 (“[T]he written notice requirement will further restrict viewing by subscribers who fear for their reputations should the operator, advertently or inadvertently, disclose the list of those who wish to watch the patently offensive channel.”).

The deterrence is particularly acute because access to sexual material can reveal intimate desires and preferences. No more than two decades ago, Texas sought to criminalize two men seeking to have sex in the privacy of a bedroom. Lawrence v. Texas, 539 U.S. 558 (2003). To this date, Texas has not repealed its law criminalizing sodomy. Given Texas’s ongoing criminalization of homosexual intercourse, it is apparent that people who wish to view homosexual material will be profoundly chilled from doing so if they must first affirmatively identify themselves to the state.

Texas argued that the age verification data will be deleted, but that doesn’t cut it – an important point for the many other states passing similar laws:

Defendant contests this, arguing that the chilling effect will be limited by age verification’s ease and deletion of information. This argument, however, assumes that consumers will (1) know that their data is required to be deleted and (2) trust that companies will actually delete it. Both premises are dubious, and so the speech will be chilled whether or not the deletion occurs. In short, it is the deterrence that creates the injury, not the actual retention. Moreover, while the commercial entities (e.g., Plaintiffs) are required to delete the data, that is not true for the data in transmission. In short, any intermediary between the commercial websites and the third-party verifiers will not be required to delete the identifying data.

The judge also notes that leaks and data breaches are a real risk, even if the law requires deletion of data! And that the mere risk of such a leak is a speech deterrent.

Even beyond the capacity for state monitoring, the First Amendment injury is exacerbated by the risk of inadvertent disclosures, leaks, or hacks. Indeed, the State of Louisiana passed a highly similar bill to H.B. 1181 shortly before a vendor for its Office of Motor Vehicles was breached by a cyberattack. In a related challenge to a similar law, Louisiana argues that age-verification users were not identified, but this misses the point. See Free Speech Coalition v. Leblanc, No. 2:23-cv-2123 (E.D. La. filed June 20, 2023) (Defs.’ Resp., Dkt. # 18, at 10). The First Amendment injury does not just occur if the Texas or Louisiana DMV (or a third-party site) is breached. Rather, the injury occurs because individuals know the information is at risk. Private information, including online sexual activity, can be particularly valuable because users may be more willing to pay to keep that information private, compared to other identifying information. (Compl. Dkt. # 1, at 17); Kim Zetter, Hackers Finally Post Stolen Ashley Madison Data, Wired, Aug. 18, 2015, https://www.wired.com/2015/08/happened-hackers-posted-stolen-ashleymadison-data (discussing Ashley Madison data breach and hackers’ threat to “release all customer records, including profiles with all the customers’ secret sexual fantasies and matching credit card transactions, real names and addresses.”). It is the threat of a leak that causes the First Amendment injury, regardless of whether a leak ends up occurring.

Hilariously, Texas’ own “expert” (who works on age verification tech and is on the committee that runs the trade association of age verification companies) basically undermined Texas’ argument:

Defendant’s own expert shows how H.B. 1181 is unreasonably intrusive in its use of age verification. Tony Allen, a digital technology expert who submitted a declaration on behalf of Defendant, suggests several ways that age-verification can be less restrictive and costly than other measures. (Allen Decl., Dkt. # 26-6). For example, he notes that age verification can be easy because websites can track if someone is already verified, so that they do not have to constantly prove verification when someone visits the page. But H.B. 1181 contains no such exception, and on its face, appears to require age verification for each visit.
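The contrast the expert draws – verifying once and remembering it, versus forcing verification on every visit – is essentially a question of session state. Here is a minimal, purely hypothetical sketch of the “remember it” approach (the names and the 30-day window are illustrative assumptions, not anything drawn from the statute, the ruling, or any actual verification vendor):

```typescript
// Illustrative only: a site remembering a completed age verification in session state
// so a returning visitor does not have to prove it again on every page view.
interface Session {
  ageVerifiedAt?: Date; // set once the visitor completes verification
}

const VERIFICATION_TTL_MS = 30 * 24 * 60 * 60 * 1000; // hypothetical 30-day validity window

function needsAgeVerification(session: Session, now: Date = new Date()): boolean {
  if (!session.ageVerifiedAt) return true; // never verified: must verify now
  // verified recently enough: skip re-verification for this visit
  return now.getTime() - session.ageVerifiedAt.getTime() > VERIFICATION_TTL_MS;
}
```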

Given all that, the age verification requirement alone violates the 1st Amendment.

With that, there isn’t even a need to do a Section 230 analysis, but the court does so anyway. It doesn’t go particularly deep, other than to note that Section 230’s coverage is considered broad (even in the 5th Circuit):

Defendant seeks to differentiate MySpace because the case dealt with a negligence claim, which she characterizes as an “individualized harm.” (Def.’s Resp., Dkt. # 27, at 19). MySpace makes no such distinction. The case dealt with a claim for individualized harm but did not limit its holding to those sorts of harms. Nor does it make sense that Congress’s goal of “[paving] the way for a robust new forum for public speech” would be served by treating individual tort claims differently than state regulatory violations. Bennett v. Google, LLC, 882 F.3d 1163, 1166 (D.C. Cir. 2018) (cleaned up). The text of the CDA is clear: “No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3). “[A]ny” state law necessarily includes those brought by state governments, so Defendant’s distinction between individual vs. regulatory claims is without merit.

The Fifth Circuit “and other circuits have consistently given [Section 230(c)] a wide scope.” Google, Inc. v. Hood, 822 F.3d 212, 220-21 (5th Cir. 2016) (quoting MySpace, 528 F.3d at 418). “The expansive scope of CDA immunity has been found to encompass state tort claims, alleged violations of state statutory law, requests for injunctive relief, and purported violations of federal statutes not specifically excepted by § 230(e).” Hinton v. Amazon.com.dedc, LLC, 72 F. Supp. 3d 685, 689 (S.D. Miss. 2014) (citing cases).

And while the court says 230 preemption might not apply to adult content websites that create and host their own content, it absolutely does apply to those that host 3rd party user-uploaded content.

Those Plaintiffs that develop and post their own content are not entitled to an injunction on Section 230 grounds. Still, other Plaintiffs, such as WebGroup, which operates XVideos, only hosts third-party content, and therefore is entitled to Section 230 protection.

Given all that, it’s not difficult for the court to issue the injunction, noting that a violation of 1st Amendment rights is irreparable harm.

In short, Plaintiffs have shown that their First Amendment rights will likely be violated if the statute takes effect, and that they will suffer irreparable harm absent an injunction. Defendant suggests this injury is speculative and not imminent, (Def.’s Resp., Dkt. # 27, at 21–23), but this is doubtful. H.B. 1181 takes effect on September 1—mere days from today. That is imminent. Nor is the harm speculative. The Attorney General has not disavowed enforcement. To the contrary, her brief suggests a genuine belief that the law should be vigorously enforced because of the severe harms purportedly associated with what is legal pornography. (Id. at 1–5). It is not credible for the Attorney General to state that “[p]orn is absolutely terrible for our kids” but simultaneously claim that they will not enforce a law ostensibly aimed at preventing that very harm. Because the threat of enforcement is real and imminent, Plaintiffs’ harm is non-speculative. It is axiomatic that a plaintiff need not wait for actual prosecution to seek a pre-enforcement challenge. See Babbitt v. United Farm Workers Nat. Union, 442 U.S. 289, 298 (1979). In short, Plaintiffs have more than met their burden of irreparable harm.

All in all, this is a very good, very clear, very strong ruling, highlighting how age verification mandates for adult content violate the 1st Amendment. It’s likely Texas will appeal, and the 5th Circuit has a history of ignoring 1st Amendment precedent, but for now this is a win for free speech and against mandatory age verification.

Filed Under: 1st amendment, adult content, age verification, chilling effects, hb 1181, pornography, preemption, privacy, section 230, standing, state laws, texas
Companies: free speech coalition

Jim Jordan Further Weaponizes His Subcommittee On The Weaponization Of The Gov’t To Chill Speech

from the the-hypocrite's-hypocrite dept

Rep. Jim Jordan is at it again. You’ll recall that Jordan ignored subpoenas from the January 6th Committee and was referred to the House Ethics Committee for his failure to respond to those subpoenas. Of course, since being handed the keys to the brand new (created just for him) “Subcommittee on the Weaponization of the Government,” Jordan has been furiously flinging spurious subpoenas left and right, and then threatening contempt proceedings for anyone who ignores them.

As we’ve highlighted in the past, nearly everything that Jordan accuses others of doing, and which he insists his committee is there to stop… he is actually doing himself (while those he accuses of “weaponizing the government” are not, in fact, doing that at all).

It is the absolute worst of the worst: not just unadulterated hypocrisy, but hypocrisy carried out in a manner that unconstitutionally silences speech.

The Washington Post has the latest details on how Jordan and his committee are threatening academics for the crime of researching disinformation.

Last week, Jordan (Ohio) threatened legal action against Stanford University, home to the Stanford Internet Observatory, for not complying fully with his records requests. The university turned over its scholars’ communications with government officials and big social media platforms but is holding back records of some disinformation complaints. Stanford told The Washington Post that it omitted internal records, some filed by students. The university is negotiating for limited interviews.

The push caps years of pressure from conservative activists who have harangued such academics online and in person and filed open-records requests to obtain the correspondence of those working at public universities. The researchers who have been targeted study the online spread of disinformation, including falsehoods that have been accelerated by former president and candidate Donald Trump and other Republican politicians. Jordan has argued that content removals urged by some in the government have suppressed legitimate theories on vaccine risks and the covid-19 origins as well as news stories wrongly suspected of being part of foreign disinformation campaigns.

Basically all of this is premised on the blatantly false claim that there is some “censorship industrial complex” in which researchers, government institutions, and social media companies are working together in a grand cork board conspiracy to silence conservatives. Literally none of that is true, as Twitter itself admitted in court just recently.

Part of the issue is that folks who are deep into the conspiracy theory world simply can’t comprehend that anyone would study disinformation for academic reasons, and they insist that it must be part of a secret plan to “censor” people. The truth, of course, is that while the academics in this field are trying to understand how misleading information flows, and also how it impacts beliefs and action (if at all!), it’s pretty rare to find “disinformation” researchers who think that “censorship” is an effective way of stopping the flows of information.

But, the conspiracy theory must be fed, and the repeated lies about the “censorship industrial complex” need a villain… and Jordan has focused his attention on these academics. And, in the process, he is literally weaponizing the power he has as a government official to chill speech and actually push people away from studying disinformation flows.

The pressure has forced some researchers to change their approach or step back, even as disinformation is rising ahead of the 2024 election. As artificial intelligence makes deception easier and platforms relax their rules on political hoaxes, industry veterans say they fear that young scholars will avoid studying disinformation.

Even if you worry that “disinformation” is often misclassified, you should still want the space to be studied, because that’s how we learn whether or not “disinformation” is actually a problem. I’ve long had my doubts about how effective disinformation actually is in changing minds or behavior, but I still want it studied. And Jordan’s weaponization of his Congressional subcommittee is making that much harder.

And, of course, that’s a large part of the goal. It’s become quite clear that the Jordan wing of the GOP (which has now become the core of the GOP, rather than the fringe it once was) has decided that the only way they can win elections is through blatant lies, propaganda, and nonsense, and therefore they need to suppress anyone who calls out their bullshit.

So, things like this are particularly laughable:

“Whether directly or indirectly, a government-approved or-facilitated censorship regime is a grave threat to the First Amendment and American civil liberties,” Jordan wrote.

The only one leading to censorship here is you. The “grave threat to the First Amendment and American civil liberties” is your stupid attempt to bring back a no frills McCarthy-style congressional committee designed to intimidate people into silence.

The hypocrisy is so loud and so stupid:

Jordan spokesman Russell Dye argued that the multitude of requests will build on evidence that shows an organized effort to tamp down conservative speech online. “The committee is working hard to get to the bottom of this censorship to protect First Amendment rights for all Americans,” he said.

The censorship and First Amendment violations are coming from you, dude.

Filed Under: 1st amendment, chilling effects, free speech, jim jordan, subpoenas, weaponization subcommittee

Jim Jordan Weaponizes The Subcommittee On The Weaponization Of The Gov’t To Intimidate Researchers & Chill Speech

from the where-are-the-jordan-files? dept

As soon as it was announced, we warned that the new “Select Subcommittee on the Weaponization of the Federal Government” (which Kevin McCarthy agreed to support to convince some Republicans to back his speakership bid) was going to be not just a clown show, but one that would, itself, be weaponized to suppress speech (the very thing it claimed it would be “investigating”).

To date, the subcommittee, led by Jim Jordan, has lived down to its expectations, hosting nonsense hearings in which Republicans on the subcommittee accidentally destroy their own talking points and reveal themselves to be laughably clueless.

Anyway, it’s now gone up a notch beyond just performative beclowning to active maliciousness.

This week, Jordan sent information requests to Stanford University, the University of Washington, Clemson University, and the German Marshall Fund, demanding they reveal a bunch of internal information – a demand that serves no purpose other than to intimidate and suppress speech. You know, the very thing that Jim Jordan pretends his committee is “investigating.”

House Republicans have sent letters to at least three universities and a think tank requesting a broad range of documents related to what it says are the institutions’ contributions to the Biden administration’s “censorship regime.”

As we were just discussing, the subcommittee seems taken in by Matt Taibbi’s analysis of what he’s seen in the Twitter files, despite nearly every one of his “reports” on them containing glaring, ridiculous factual errors that a high school newspaper reporter would likely catch. I mean, here he claims that the “Disinformation Governance Board” (an operation we mocked for the abject failure of the administration in how it rolled out an idea it never adequately explained) was somehow “replaced” by Stanford University’s Election Integrity Partnership.

Except the Disinformation Governance Board was announced, and then disbanded, in April and May of 2022. The Election Integrity Partnership was very, very publicly announced in July of 2020. Now, I might not be as decorated a journalist as Matt Taibbi, but I can count on my fingers to realize that 2022 comes after 2020.

Look, I know that time has no meaning since the pandemic began. And that journalists sometimes make mistakes (we all do!), but time is, you know, not that complicated. Unless you’re so bought into the story you want to tell you just misunderstand basically every last detail.

The problem, though, goes beyond just getting simple facts wrong (and the list of simple facts that Taibbi gets wrong is incredibly long). It’s that he gets the less simple, more nuanced facts, even more wrong. Taibbi still can’t seem to wrap his head around the idea that this is how free speech and the marketplace of ideas actually works. Private companies get to decide the rules for how anyone gets to use their platform. Other people get to express their opinions on how those rules are written and enforced.

As we keep noting, the big revelations so far (if you read the actual documents in the Twitter Files, and not Taibbi’s bizarrely disconnected-from-what-he’s-commenting-on commentary), is that Twitter’s Trust and Safety team was… surprisingly (almost boringly) competent. I expected way more awful things to come out in the Twitter Files. I expected dirt. Awful dirt. Embarrassing dirt. Because every company of any significant size has that. They do stupid things for stupid fucking reasons, and bend over backwards to please certain constituents.

But… outside of a few tiny dumb decisions, Twitter’s team has seemed… remarkably competent. They put in place rules. If people bent the rules, they debated how to handle it. They sometimes made mistakes, but seemed to have careful, logical debates over how to handle those things. They did hear from outside parties, including academic researchers, NGOs, and government folks, but they seemed quite likely to mock/ignore those who were full of shit (in a manner that pretty much any internal group would do). It’s shockingly normal.

I’ve spent years talking to insiders working on trust and safety teams at big, medium, and small companies. And, nothing that’s come out is even remotely surprising, except maybe how utterly non-controversial Twitter’s handling of these things was. There’s literally less to comment on than I expected. Nearly every other company would have a lot more dirt.

Still, Jordan and friends seem driven by the same motivation as Taibbi, and they’re willing to do exactly the things that they claim they’re trying to stop: using the power of the government to send threatening intimidation letters that are clearly designed to chill academic inquiry into the flow of information across the internet.

By demanding that these academic institutions turn over all sorts of documents and private communications, Jordan must know that he’s effectively chilling the speech of not just them, but any academic institution or civil society organization that wants to study how false information (sometimes deliberately pushed by political allies of Jim Jordan) flows across the internet.

It’s almost (almost!) as if Jordan wants to use the power of his position as the head of this subcommittee… to create a stifling, speech-suppressing, chilling effect on academic researchers engaged in a well-established field of study.

Can’t wait to read Matt Taibbi’s report on this sort of chilling abuse by the federal government. It’ll be a real banger, I’m sure. I just hope he uses some of the new Substack revenue he’s made from an increase in subscribers to hire a fact checker who knows how linear time works.

Filed Under: academic research, chilling effects, congress, intimidation, jim jordan, matt taibbi, nonsense peddlers, research, twitter files, weaponization subcommittee
Companies: clemson university, german marshall fund, stanford, twitter, university of washington

Germany’s Government Continues To Lock People Up For Being Extremely Online

from the watch-your-fingers,-folks dept

Germany’s uncomfortable relationship with free speech continues. The country has always been sensitive about certain subjects (rhymes with Bitler and, um, Yahtzee), resulting in laws that suppress speech referring to these subjects, apparently in hopes of preventing a Fourth Reich from taking hold.

But the censorship of speech extends far beyond the lingering aftereffects of Germany’s supremely troubled past. The government has passed laws outlawing speech with the vaguest of contours, like “hate speech” and “fake news.” And it has swung a pretty powerful hammer to ensure cooperation, stripping away intermediary immunity to hold platforms directly accountable for user-generated content. You know, like a nation run by authoritarians, except ones that enact penalties for references to a certain former authoritarian.

Germany may wish to escape its abusive past. But its speech-related laws encourage abuse by powerful people. Allow this timeline to run without interruption long enough, and you’re staring down the barrel of history.

Maybe it won’t be the second coming of national socialism. But it might just be the conversion of Germany into something resembling the USSR farm team East Germany was until the fall of the Berlin Wall. As this New York Times report details, people are being arrested for being careless online — something that suggests far too many local politicians desire a Stasi of their own.

When the police pounded the door before dawn at a home in northwest Germany, a bleary-eyed young man in his boxer shorts answered. The officers asked for his father, who was at work.

They told him that his 51-year-old father was accused of violating laws against online hate speech, insults and misinformation. He had shared an image on Facebook with an inflammatory statement about immigration falsely attributed to a German politician. “Just because someone rapes, robs or is a serious criminal is not a reason for deportation,” the fake remark said.

The police then scoured the home for about 30 minutes, seizing a laptop and tablet as evidence, prosecutors said.

If the police already had copies of the posting, it doesn’t make much sense for them to search a home and seize devices. But that’s what they do. And, according to the New York Times article, this happens nearly one hundred times a day all over the nation, day after day after day.

Normally, when someone suggests efforts like these produce a chilling effect, governments issue statements affirming support for free speech and obliquely suggest the original commenter is misinformed or misinterpreting these actions. Not so with Germany. The chilling effect is the entire point — something the government freely admits.

German authorities have brought charges for insults, threats and harassment. The police have raided homes, confiscated electronics and brought people in for questioning. Judges have enforced fines worth thousands of dollars each and, in some cases, sent offenders to jail. The threat of prosecution, they believe, will not eradicate hate online, but push some of the worst behavior back into the shadows.

This means the earlier efforts — those forcing social media platforms to immediately and proactively remove anything the German government might find offensive — haven’t worked as well as politicians hoped. It was an impossible demand, one made by people who don’t believe anything is impossible if it’s backed by a government mandate and (most importantly) entirely the responsibility of other people.

Since the government can’t make social media companies perform the impossible, prosecutors have decided to go after internet users, who are far easier to threaten, intimidate, and jail into silence. Chilling is what we do, say prosecutors, citing the supposed success similar tactics have had in the fight against online piracy(!):

Daniel Holznagel, a former Justice Ministry official who helped draft the internet enforcement laws passed in 2017, compared the crackdown to going after copyright violators. He said people stopped illegally downloading music and movies as much after authorities began issuing fines and legal warnings.

“You can’t prosecute everyone, but it will have a big effect if you show that prosecution is possible,” said Mr. Holznagel, who is now a judge.

Since it’s almost impossible to tell what will trigger police action and prosecution, German citizens are likely engaging in self-censorship regularly. Sarcasm, irony, parody, shitposting… all of this is under scrutiny, since it’s apparent the government isn’t capable of performing anything but a straightforward reading of user-generated content. It would be almost comical if it weren’t for the police raids, prosecutions, device seizures, and jail time.

No national figures exist on the total number of people charged with online speech-related crimes. But in a review of German state records, The New York Times found more than 8,500 cases. Overall, more than 1,000 people have been charged or punished since 2018, a figure many experts said is probably much higher.

This effort siphons resources from law enforcement agencies asked to police more serious criminal acts — the kind that result in actual victims who can show actual harm, rather than the theoretical harm posed by posts that fall outside of the boundaries set by the German government’s escalating desire to regulate speech.

But that doesn’t mean local police aren’t welcoming the new duties. Some seem to particularly relish literally policing the internet.

Authorities in Lower Saxony raid homes up to multiple times per month, sometimes with a local television crew in tow.

Internet use is under constant surveillance. This always-on monitoring provides law enforcement with targets. Arrestees who refuse to submit to device searches aren’t slowing down investigators. Electronics are sent to crime labs and subjected to forensic searches by Cellebrite devices. Millions of dollars fund these efforts… and for what?

Swen Weiland, a software developer turned internet hate speech investigator, is in charge of unmasking people behind anonymous accounts. He hunts for clues about where a person lives and works, and connections to friends and family. After an unknown Twitter user compared Covid restrictions to the Holocaust, he used an online registry of licensed architects to help identify the culprit as a middle-aged woman.

“I try to find out what they do in their normal life,” Mr. Weiland said. “If I find where they live or their relatives then I can get the real person. The internet does not forget.”

That’s what the government is going after. Germany is in the business of punishing stupidity. But only certain forms of stupidity. Far more threatening posts are ignored by law enforcement. An activist interviewed by the NYT said she was doxxed and threatened by online commenters. When she took this info to law enforcement, officers responded by giving her a brochure about online hate and telling her that nothing that was said broke any laws. Doxxing is ok. Threats are ok. Making an extremely terrible analogy? A criminal offense.

So is this:

Last year, Andy Grote, a city senator responsible for public safety and the police in Hamburg, broke the local social distancing rules — which he was in charge of enforcing — by hosting a small election party in a downtown bar.

After Mr. Grote later made remarks admonishing others for hosting parties during the pandemic, a Twitter user wrote: “Du bist so 1 Pimmel” (“You are such a penis”).

Three months later, six police officers raided the house of the man who had posted the insult, looking for his electronic devices.

While it’s somewhat understandable that speech restrictions have been put in place in hopes of preventing history from repeating, the government’s desire to turn ignorance into criminal activity is hugely problematic. Laws like this are never taken off the books. They either linger forever or are subjected to endless expansions, allowing the government to start serving its own interests rather than those it imagines the general public values. There’s no impending slippery slope here. Germany is already headed down it.

Filed Under: chilling effects, content moderation, fake news, free speech, germany, hate speech, insults, piracy, surveillance

The Constitutional Challenge To FOSTA Hits A Roadblock As District Court Again Ignores Its Chilling Effects

from the should-would-could-and-did dept

The constitutional challenge to FOSTA suffered a significant setback late last month when the district court granted the government’s motion for summary judgment, effectively dismissing the challenge. If not appealed (though it sounds like it may be), it would be the end of the road for it.

What is most dismaying about the decision – other than its ultimate holding – is the court’s failure to recognize the chilling effect on expression FOSTA has already had, a chilling effect the DC Circuit had previously acknowledged when it found the standing needed for the plaintiffs’ challenge to continue after the district court had earlier tried to dismiss it for lack of standing. In this latest decision, the district court again turned a blind eye to the expressive harm FOSTA causes, rooting its ruling not in the language of the appellate holding suggesting there might actually be a problem here but instead in the dicta of Judge Katsas’s concurrence, even though none of that more equivocal language was a binding observation by the appeals court.

For instance, at one point in the decision the district court wrote:

Plaintiffs also contend that the prior decision of our Court of Appeals as to plaintiffs’ standing in this case precludes my holding that § 2421A is susceptible to the narrowing construction I endorse above. I disagree. Indeed, I find that plaintiffs’ argument not only overreads the majority’s opinion, but also ignores Judge Katsas’s concurrence. More specifically, while the majority did point out that FOSTA’s language, including the “promote or facilitate” elements discussed above, could be read to sweep broadly “when considered in isolation,” Woodhull I, 948 F.3d at 372, the panel did so in the context of its analysis of plaintiffs’ standing to bring a pre-enforcement challenge. That standing analysis merely requires considering whether plaintiffs have established that they engage in activities “arguably” within the scope of the challenged statute, see SBA, 573 U.S. at 164, not that the statute does in fact prohibit the alleged activities. As such, the majority was not determining the precise scope of what FOSTA proscribes, but rather whether plaintiffs’ broad reading of FOSTA was “arguably” a valid one. In short, the majority did not decide how FOSTA should be construed, only how it could be construed. To that end, the narrowing construction of the law discussed above was neither endorsed, nor rejected, by the majority’s opinion. Indeed, in his concurrence, Judge Katsas expressly stated that the majority did not purport to construe the statute for anything other than the standing analysis, noting instead that the plaintiffs’ preferred reading was only “identif[ied] … as at least one possible reading of FOSTA.” Woodhull II, 948 F.3d at 375 (Katsas, J., concurring in part and concurring in the judgment). Judge Katsas wrote separately specifically to indicate that he viewed the plaintiffs’ reading ultimately as untenable, even if he did also agree that the plaintiffs had standing under his narrower reading. I therefore find that plaintiffs are incorrect in arguing that I am precluded from reading FOSTA so narrowly: our Court of Appeals did not take any position on that reading of FOSTA, and indeed Judge Katsas expressly adopted it. [p. 17 (emphasis in the original)]

The problem is, at no point did the district court actually consider how FOSTA’s language would be construed, let alone how it had already been construed.

In conjunction with their motion for summary judgment, plaintiffs did submit a statement of facts accompanied with a number of supporting affidavits. However, these facts and affidavits are material only to establishing plaintiffs’ ongoing standing—which defendants do not challenge—and the entitlement of plaintiffs to injunctive relief should they prevail on the merits. Because, as explained below, I find that plaintiffs’ facial constitutional claims are without merit, there is no need to address the facts underpinning plaintiffs’ request for injunctive relief. [fn 5]

Nor, for that matter, had the DC Circuit done so previously. It had not been called upon to fully inquire into FOSTA’s expressive effects, so it could not officially say one way or another whether any or all of the plaintiffs had suffered them. As it was, once it found standing for just two plaintiffs, it ended its inquiry, because standing for just two was enough to revive the challenge. And even Judge Katsas’s skepticism about the expressive harm, articulated in his concurrence, was nothing more than idle musing, not a definitive finding of any sort.

Nevertheless, the appeals court, and even Judge Katsas, had observed that FOSTA’s vague language could very easily cause some impermissible expressive harms. Yet the district court chose to largely ignore that observation, along with the factual record documenting the ways these plaintiffs had already been chilled. Instead it treated the Katsas concurrence as an official finding that FOSTA’s language could cause no expressive harm at all.

Judge Katsas wrote separately specifically to indicate that he viewed the plaintiffs’ reading ultimately as untenable, even if he did also agree that the plaintiffs had standing under his narrower reading. I therefore find that plaintiffs are incorrect in arguing that I am precluded from reading FOSTA so narrowly: our Court of Appeals did not take any position on that reading of FOSTA, and indeed Judge Katsas expressly adopted it.

As such, a proper construal of FOSTA leads to the conclusion that it is narrowly tailored toward prohibiting activity that effectively aids or abets specific instances of prostitution. I therefore have no trouble finding that its legitimate sweep, encompassing only conduct or unprotected speech integral to criminal activity, predominates any sweep into protected speech—indeed, under the narrow construal above, I do not read FOSTA to possibly prohibit any such protected speech, much less a sufficient amount so as to render the Act overbroad. [p. 18]

A grant of summary judgment presumes that there are no genuine issues of material fact, leaving the court with only the legal question of how the law should treat the agreed-upon facts. But here the district court’s own reasoning points to a question of fact: does FOSTA’s language chill lawful expressive activity, or does it not? That there is a plausible reading under which it might not does not seem dispositive, especially given that chilling readings have already occurred (particularly with respect to the massage therapist, who lost his ability to advertise on Craigslist, where he had successfully advertised for years, once FOSTA passed and Craigslist decided the legal risk of allowing such ads was too great). It is thus preposterous to find, as this court did, that FOSTA could not have a chilling effect when there is already plenty of evidence of one.

If this decision were to stand, Congress would only be further emboldened to make more laws like this one that chill speech, even though the First Amendment unequivocally tells it not to. Because, per this court, all that matters is whether Congress intended to harm speech, not whether it actually did.

Though FOSTA may well implicate speech in achieving its separate purpose, such an indirect effect does not provide a basis for strict scrutiny: “even if [a law] has an incidental effect on some speakers or messages but not others,” it is to be treated as content neutral. [p. 23]

But FOSTA’s chilling effect has been far from incidental, and hopefully on appeal this harm will be recognized and remedied.

Filed Under: 1st amendment, chilling effects, dc circuit, fosta, free speech, section 230, standing
Companies: woodhull foundation

Not This Again: Facebook Threatens To Sue Guy Who Registered 'DontUseInstagram.com'

from the don't-be-a-trademark-bully dept

Ah, this one takes me back to the early days of Techdirt, when the biggest nonsense we were writing about was giant corporate bullies threatening (or in some cases suing!) over so-called “Sucks Sites” (that’s an article from almost 20 years ago!). The issue was that people who were upset with a particular company would register a CompanySucks.com domain to (usually) put up a protest site. The company (and its lawyers) would then threaten to sue the individual for trademark infringement. There were some mixed rulings over those sites, but in general most courts have decided that sucks sites are not trademark infringement, and are protected under a variety of theories — including a lack of any likelihood of confusion and nominative fair use.

You’d have hoped that, by now, big company lawyers would recognize all of this. Apparently not Facebook’s. Now, to be fair, as we recently discussed, companies like Facebook often carefully police lookalike domains in order to cut off sketchy phishing and scam sites. But it’s one thing to go after such scammers… and it’s another to go after someone who is obviously engaging in criticism.

Enter: DontUseInstagram.com, created by Paul Kruczynski.

The site is now designed to do pretty much what it says on the tin: give you reasons why you shouldn’t use Instagram. Whether or not you agree with that messaging, it’s clearly not infringing on Facebook/Instagram’s trademarks. Someone should probably tell Instagram’s lawyers, because they sent a threat letter. In fact, they sent it before he’d even launched anything at the site, basically trying to intimidate him into giving up the domain before he’d done anything with it.

To Whom It May Concern,

We are writing concerning your registration and use of the domain name dontuseinstagram.com, which contains the Instagram trademark.

You are undoubtedly familiar with Instagram and its worldwide renown in providing photo sharing and editing services, online networking and related products and services through a number of channels, including through its mobile application software and its website available at Instagram.com. Instagram owns exclusive rights to the INSTAGRAM trademark, including rights secured through common law use and registration in the United States (Reg. Nos. 4,170,675 and 4,146,057) and internationally. Instagram is a global leader in photography software for mobile devices, with over 800 million monthly active accounts. Due to Instagram’s exponential growth and immense popularity, the Instagram brand, is frequently, if not daily, referenced in the media and pop culture. Its fame entitles it to broad legal protection.

We have recently discovered that you registered the domain name, which incorporates the famous INSTAGRAM mark. Instagram has an obligation to its users and the public to police against the registration and/or use of domain names that may cause consumer confusion as to affiliation with or sponsorship by Instagram, dilute the distinctiveness of its INSTAGRAM mark, or otherwise tarnish the mark. Accordingly, in addition to civil actions, Instagram and its parent Facebook have filed numerous proceedings pursuant to ICANN’S Uniform Domain-Name Dispute-Resolution Policy (http://www.icann.org/en/help/dndr/udrp) to secure the transfer of infringing domain names. Moreover, the Anticybersquatting Consumer Protection Act provides for serious penalties (up to $100,000 per domain name) against persons who, without authorization, use, sell, or offer for sale a domain name that infringes another’s trademark.

While Instagram respects your right of expression and your desire to conduct business on the Internet, Instagram must take action to stop the misuse of its intellectual property. As you can imagine, various third parties around the world have attempted to wrongfully capitalize on Instagram’s reputation by registering domain names that include or are derived from the INSTAGRAM brand. Such names are confusingly similar to, dilutive of, and can tarnish the INSTAGRAM mark.

We understand that you may have registered dontuseinstagram.com without full knowledge of the law in this area. However, Instagram is concerned about your use of the Instagram trademark in your domain name. Accordingly, we must insist that you immediately cease using and disable either delete or transfer to Instagram any site available at that address. You should not sell, offer to sell, or transfer the domain name to any third party.

Please confirm in writing that you will agree to resolve this matter as requested. If we do not receive confirmation that you will comply with our request, we will have no other choice but to pursue all available remedies against you.

Sincerely,

Instagram IP & DNS Enforcement Group

Instagram, Inc.

Kruczynski was able to line up the Cyberlaw Clinic at Harvard Law’s Berkman Klein Center to help him respond to those Instagram lawyers, explaining to them in fairly great detail, in a letter from Kendra Albert, just how totally full of shit their threat letter is:

The legal claims that your letters make are frivolous. Even worse, your overreach imperils Mr. Kruczynski’s First Amendment rights. Mr. Kruczynski’s domain name is not likely to cause consumer confusion, which Instagram would be required to prove in order to succeed on a trademark infringement claim. See Boston Duck Tours, LP v. Super Duck Tours, LLC, 531 F.3d 1, 12 (1st Cir. 2008) (citing Borinquen Biscuit, 443 F.3d 116 (1st Cir. 2006)).

To establish likelihood of confusion, a trademark owner “must show more than the theoretical possibility of confusion.” Int’l Ass’n of Machinists & Aero. Workers, AFL-CIO v. Winship Green Nursing Ctr., 103 F.3d 196, 198 (1st Cir. 1996). For courts to find a likelihood of confusion, it has to be shown that there is “a likelihood of confounding an appreciable number of reasonably prudent purchasers exercising ordinary care.” Id. Given that Mr. Kruczynski’s domain has not even been launched, Instagram cannot show more than a theoretical possibility of confusion. Moreover, as Mr. Kruczynski’s website dontuseinstagram.com currently resembles nothing like the Instagram website, it is inconceivable that any reasonably prudent purchaser exercising ordinary care would confuse the two websites.

Even in the scenario that Mr. Kruczynski’s domain dontuseinstagram.com becomes live and operates in a way Mr. Kruczynski originally intended it to, Instagram will not be able to establish that there is likelihood of confusion in Mr. Kruczynski’s registration and use of dontuseinstagram.com under the First Circuit’s eight-factor test. See Oriental Fin. Grp., Inc. v. Cooperativa De Ahorro Crédito Oriental, 698 F.3d 9, 17 (1st Cir. 2012) (citing Beacon Mut. Ins. Co. v. OneBeacon Ins. Grp., 376 F.3d 8, 15 (1st Cir. 2004)). Mr. Kruczynski did not intend to claim any associations with the Instagram mark and did not intend to compete with Instagram. In fact, Mr. Kruczynski’s domain would serve as a platform to criticize Instagram’s user privacy violations, not as a social media platform for users to share photos and accumulate followers. The goods or services provided by dontuseinstagram.com would be significantly different from those provided by Instagram, and the channels of trade and advertising would be very different as well. It is unimaginable that there would be evidence of actual confusion where Instagram users actually confuse Instagram with a website criticizing Instagram, starting from the domain name itself. Even assuming that Instagram has a strong mark that most people recognize, it is overreaching for Instagram to forbid others from registering or using any name that mentions Instagram without due regard of relevant laws.

The existence of a parked page on Mr. Kruczynski’s domain does not create trademark infringement where there previously was not any. See, e.g., Acad. of Motion Picture Arts & Scis. v. GoDaddy.com, Inc., No. CV 10-03738 AB (CWx), 2015 U.S. Dist. LEXIS 120871 (C.D. Cal. Sep. 10, 2015) (holding that the plaintiff failed to meet its burden of proving that a domain name registrar who operates parked page programs acted with a bad faith intent to profit from the plaintiff’s marks). In fact, the existence of the parked page is largely irrelevant to the discussion of trademark infringement here, and you are overstepping by demanding Mr. Kruczynski remove the parked page on his own registered domain.

Your claim of Mr. Kruczynski’s alleged trademark infringement is ungrounded in law. The non-infringing nature of the use would have been obvious had an attorney even glanced at the name of the site.

To the extent that these emails were sent using an automated process that merely checks to see if a domain contains the word Instagram, and then automatically requests the transfer of a domain to you if it does, such behavior plays on the threat of litigation to suppress potentially lawful speech. I am aware that there may be many domains registered with the Instagram mark in them, some of which may be used for phishing or other nefarious purposes. But that does not justify a “spray and pray” strategy where you automatically send notices of infringement without any human review. Such notices may serve to unlawfully intimidate critics, requiring them to find legal counsel.

That reply was sent back in November, requesting that Instagram retract the threat letter and provide “a clear statement that you do not intend to file suit over his ownership of dontuseinstagram.com.” Somewhat optimistically, it also said that such a letter should “be accompanied by a discussion of what processes you will implement in order to ensure any messages you may send to domain owners will not attempt to intimidate lawful users of the Instagram wordmark.”

Neither happened. Instead, Instagram just went silent. It’s quite likely that the human being who received the response letter realized how bad an idea it was to send that original threat letter, even if automated, and has simply moved on to threatening someone else. But Instagram deserves to be called out for practices that can genuinely intimidate people who, unlike Paul, don’t have access to a knowledgeable lawyer to respond to the threat.

That’s unfortunate, because it means that there are no consequences for sending out such bogus, censorial threat letters. Well, other than having a site like Techdirt call out your stupid threats.

Filed Under: brand protection, chilling effects, criticism, domain registration, don't use instagram, free speech, sucks sites, trademark, trademark bullying
Companies: facebook, instagram