cfaa – Techdirt

UN Delegates Cheer As They Vote To Approve Increased Surveillance Via Russia-Backed Cybercrime Treaty

from the why-are-we-even-doing-this? dept

For years now, the UN has been trying to strike a deal on a “Cybercrime Treaty.” As with nearly every attempt by the UN to craft treaties around internet regulation, it’s been a total mess. The concept, enabling countries to have agreed-upon standards to fight cybercrime, may seem laudable. But when it’s driven by countries that have extremely different definitions of “crime,” it becomes problematic. Especially if part of the treaty involves enabling one country to demand that another reveal private information about someone it accuses of engaging in a very, very broadly defined “cybercrime.”

The UN structure means that the final decision-makers are nation-states, and other stakeholders have way less say in the process.

And, on Thursday, those nation-states unanimously approved it, ignoring the concerns of many stakeholders.

Some history: two years ago, we warned about how the proposed treaty appeared to be perfect for widespread censorship, as it counted “hate speech” among the forms of cybercrime it sought to regulate. Last year, we checked in again and found that, while updated, the proposed treaty was still a total mess and would lead to both the stifling of free expression and increased surveillance.

No wonder certain governments (Russia, China) loved it.

While the final treaty made some changes from earlier versions that definitely made it better, the end product is still incredibly dangerous in many ways. Human Rights Watch put out a detailed warning regarding the problems of the treaty, noting that Russia is the main backer of the treaty — which should already cause you to distrust it.

The treaty has three main problems: its broad scope, its lack of human-rights safeguards, and the risks it poses to children’s rights.

Instead of limiting the treaty to address crimes committed against computer systems, networks, and data—think hacking or ransomware—the treaty’s title defines cybercrime to include any crime committed by using Information and Communications Technology systems. The negotiators are also poised to agree to the immediate drafting of a protocol to the treaty to address “additional criminal offenses as appropriate.” As a result, when governments pass domestic laws that criminalize any activity that uses the Internet in any way to plan, commit, or carry out a crime, they can point to this treaty’s title and potentially its protocol to justify the enforcement of repressive laws.

In addition to the treaty’s broad definition of cybercrime, it essentially requires governments to surveil people and turn over their data to foreign law enforcement upon request if the requesting government claims they’ve committed any “serious crime” under national law, defined as a crime with a sentence of four years or more. This would include behavior that is protected under international human rights law but that some countries abusively criminalize, like same-sex conduct, criticizing one’s government, investigative reporting, participating in a protest, or being a whistleblower.

In the last year, a Saudi court sentenced a man to death and a second man to 20 years in prison, both for their peaceful expression online, in an escalation of the country’s ever-worsening crackdown on freedom of expression and other basic rights.

This treaty would compel other governments to assist in and become complicit in the prosecution of such “crimes.”

EFF also warned of how the treaty would be used for greater governmental surveillance:

If you’re an activist in Country A tweeting about human rights atrocities in Country B, and criticizing government officials or the king is considered a serious crime in both countries under vague cybercrime laws, the UN Cybercrime Treaty could allow Country A to spy on you for Country B. This means Country A could access your email or track your location without prior judicial authorization and keep this information secret, even when it no longer impacts the investigation.

Criticizing the government is a far cry from launching a phishing attack or causing a data breach. But since it involves using a computer and is a serious crime as defined by national law, it falls within the scope of the treaty’s cross-border spying powers, as currently written.

This isn’t hyperbole. In countries like Russia and China, serious “cybercrime” has become a catchall term for any activity the government disapproves of if it involves a computer. This broad and vague definition of serious crimes allows these governments to target political dissidents and suppress free speech under the guise of cybercrime enforcement.

Posting a rainbow flag on social media could be considered a serious cybercrime in countries outlawing LGBTQ+ rights. Journalists publishing articles based on leaked data about human rights atrocities and digital activists organizing protests through social media could be accused of committing cybercrimes under the draft convention.

The text’s broad scope could allow governments to misuse the convention’s cross-border spying powers to gather “evidence” on political dissidents and suppress free speech and privacy under the pretext of enforcing cybercrime laws.

That seems bad!

EFF also warned how the Cybercrime Treaty could be used against journalists and security researchers. It creates a sort of international (but even more poorly worded) version of the CFAA, a law we’ve criticized many times in the past for how it is abused by law enforcement to go after anyone doing anything they dislike “on a computer.”

Instead, the draft text includes weak wording that criminalizes accessing a computer “without right.” This could allow authorities to prosecute security researchers and investigative journalists who, for example, independently find and publish information about holes in computer networks.

These vulnerabilities could be exploited to spread malware, cause data breaches, and get access to sensitive information of millions of people. This would undermine the very purpose of the draft treaty: to protect individuals and our institutions from cybercrime.

What’s more, the draft treaty’s overbroad scope, extensive secret surveillance provisions, and weak safeguards risk making the convention a tool for state abuse. Journalists reporting on government corruption, protests, public dissent, and other issues states don’t like can and do become targets for surveillance, location tracking, and private data collection.

And so, of course, the UN passed it on Thursday in a unanimous vote. Because governments love it for all the concerns discussed above, and human rights groups and other stakeholders don’t get a vote. Which seems like a problem.

The passage of the treaty is significant and establishes for the first time a global-level cybercrime and data access-enabling legal framework.

The treaty was adopted late Thursday by the body’s Ad Hoc Committee on Cybercrime and will next go to the General Assembly for a vote in the fall. It is expected to sail through the General Assembly since the same states will be voting on it there.

The agreement follows three years of negotiations capped by the final two-week session that has been underway.

And then they gave themselves a standing ovation. Because it’s not them who will get screwed over by this treaty. It’s everyone else.

cybercrime treaty adopted. diplomats give a standing ovation. adopted over objections of most human rights orgs. little good will come out of this. all risk. russians get their dream treaty. democracies will regret their spinelessness when countries demand new crimes of 'extremism' &tc.

David Kaye (@davidkaye.bsky.social) 2024-08-08T21:07:36.751Z

For the treaty to go into force, 40 nations have to ratify it. Hopefully the US refuses to, and also pushes for other non-authoritarian countries to reject this treaty as well. It’s a really dangerous agreement, and these kinds of international agreements can cause serious problems once countries agree to them and they enter into force. Terrible treaties, once ratified, are nearly impossible to fix.

Filed Under: cfaa, computer crimes, cybercrime, cybercrime treaty, data access, russia, surveillance, un

Neil Gorsuch Highlights Aaron Swartz As An Example Of Overreach In Criminal Law

from the wasn't-expecting-that dept

Well, here’s something unexpected. Apparently Supreme Court Justice Neil Gorsuch has a new book coming out this week called “Over Ruled: The Human Toll of Too Much Law.” And, one of the examples in the book is about the ridiculous criminal case against Aaron Swartz and his eventual tragic decision to take his own life while facing the possibility of decades in prison for the “crime” of downloading too many research papers while on a college campus that had an unlimited subscription to those research papers.

At the time, we wrote about the travesty of the case and the tragedy of how it all ended.

But it’s still somewhat surprising to find out that the case has been wedged in Gorsuch’s mind as an example of prosecutorial overreach and over-criminalization.

David French has an interview with Gorsuch about the book in the NY Times, and the Swartz case is the first example Gorsuch brings up:

French: This was an interesting element of the book to me and something that people who are not familiar with your jurisprudence might not know — it’s that you’ve long been a champion of the rights of criminal defendants. It struck me that some of the stories here in the book, of the way in which the complexity of criminal law has impacted people, are among the most potent in making the point. Is there a particular story about the abuse of criminal law that stands out to you as you’re reflecting back on the work?

Gorsuch: I would say Aaron Swartz’s story in the book might be one example. Here’s a young man, a young internet entrepreneur, who has a passion for public access to materials that he thinks should be in the public domain. And he downloads a bunch of old articles from JSTOR.

His lawyer says it included articles from the 1942 edition of the Journal of Botany. Now, he probably shouldn’t have done that, OK?

But JSTOR and he negotiated a solution, and they were happy. And state officials first brought criminal charges but then dropped them. Federal prosecutors nonetheless charged him with several felonies. And when he refused to plea bargain — they offered him four to six months in prison, and he didn’t think that was right — he wanted to go to trial.

What did they do?

They added a whole bunch of additional charges, which exposed him to decades in federal prison. And faced with that, he lost his money, all of his money, paying for lawyers’ fees, as everybody does when they encounter our legal system. And ultimately, he killed himself shortly before trial. And that’s part of what our system has become, that when we now have, I believe, if I remember correctly from the book, more people now serving life sentences in our prison system than we had serving any prison sentence in 1970. And today — one more little item I point out — one out of 47 Americans is subject to some form of correctional supervision (as of 2020).

I disagree with Gorsuch on many, many things. On the two big internet cases from this last term, Gorsuch joined the Lalaland takes of Justices Alito and Thomas (in both the Moody and the Murthy cases, Gorsuch was a third vote alongside Alito and Thomas for nonsense). So, it seems a bit shocking for Gorsuch to be somewhat on the side of Swartz, who would have eviscerated Gorsuch’s position in both of those cases.

Of course, Gorsuch is also wrong that Swartz “probably shouldn’t have done that.” MIT had a site license that enabled anyone on campus to download as many articles from JSTOR as they wanted. It didn’t say “unless you download too many.”

But, at least he recognizes how ridiculous the criminal prosecution that Swartz faced a dozen years ago was. For well over a decade, we’ve been highlighting how dangerous the CFAA is as a law. It is so easily abused by prosecutors that it’s been dubbed “the law that sticks.” It sticks because when there is no real criminal case to be made under other laws, prosecutors will often cook up a CFAA violation, as they did with Aaron. And it remains ridiculous that, to this day, nothing has been done to prevent another Aaron Swartz-type scenario from happening again.

Perhaps, with Gorsuch bringing it up again in his book and in this interview, it can renew some of the interest that showed up in the months following Aaron’s untimely death to make real changes to the laws that caused it. Having a Justice like Gorsuch calling out the terrible and ridiculous situation the CFAA caused seems like a good reason for Congress to revisit that law, rather than cooking up new nonsense like KOSA.

Filed Under: aaron swartz, cfaa, criminal overreach, david french, neil gorsuch, too much law

from the another-one-down-the-drain dept

Is there any law that Elon Musk actually understands?

The latest is that he’s lost yet another lawsuit, this time (in part) for not understanding copyright law.

There have been a variety of lawsuits regarding data scraping over the past decade, and we’ve long argued that such scraping should be allowed under the law (though sites are free to take technical measures to try to block them). Some of these issues are at stake in the recent Section 230 lawsuit that Ethan Zuckerman filed against Meta. That one is more about middleware/API access.

But pure “scraping” has come up in a number of cases, most notably the LinkedIn/HiQ case, in which the 9th Circuit said that scraping public information is not a violation of the CFAA, as it is not “unauthorized access.” But in the follow-up to that case, the court still blocked HiQ from scraping LinkedIn, in part because of LinkedIn’s user agreement.

This has created a near-total mess, in which it is not at all clear whether scraping public data on the internet is actually allowed.

This has only become more important in the last few years with the rise of generative AI and the need to get access to as much data as possible to train on.
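As a side note for readers less familiar with the mechanics: the “logged-off scraping” at issue in these cases is just sending ordinary, unauthenticated HTTP requests to a public page and parsing the response. A minimal Python sketch (the URL and the `class="post"` markup below are hypothetical placeholders, not any real site’s structure):

```python
# Minimal sketch of "logged-off" scraping: an ordinary, unauthenticated
# HTTP GET against a public page, followed by HTML parsing. The URL and
# the <p class="post"> markup are hypothetical, for illustration only.
from html.parser import HTMLParser
import urllib.request


class PostExtractor(HTMLParser):
    """Collects the text found inside <p class="post"> elements."""

    def __init__(self):
        super().__init__()
        self.in_post = False
        self.posts = []

    def handle_starttag(self, tag, attrs):
        if tag == "p" and ("class", "post") in attrs:
            self.in_post = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_post = False

    def handle_data(self, data):
        if self.in_post:
            self.posts.append(data.strip())


def scrape_public_page(url: str) -> list[str]:
    # No cookies, no credentials, no login: the request is the same one
    # any logged-out browser would send when visiting the public page.
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = PostExtractor()
    parser.feed(html)
    return parser.posts
```

The legal fights are over whether a site’s terms of service can forbid exactly this kind of request, not over any technical circumvention: no password, cookie, or access control is bypassed.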

Internet companies have been pushing to argue that their terms of service can block all kinds of scraping, perhaps relying on the eventual injunction blocking HiQ. Both Meta and ExTwitter sued a scraping company, Bright Data, arguing that its scraping violated their terms of service.

In January, Meta’s case against Bright Data was dismissed at the summary judgment stage. The judge in that case, Edward Chen, found that Meta’s terms of service clearly do not prohibit logged-off scraping of public data.

Now, ExTwitter’s lawsuit against the same company has reached a similar conclusion.

This time, it’s Judge William Alsup, who has dismissed the case for failure to state a claim. Alsup’s decision is a bit more thorough. It highlights that there are two separate issues here: did it violate ExTwitter’s terms of service to access its systems for scraping, and then, separately, to scrape and sell the data.

On the access side, the judge is not convinced by any of the arguments. It’s not trespass to chattels, because that requires some sort of injury.

Critically, the instant complaint alleges no such impairment or deprivation. X Corp. parrots elements, reciting that Bright Data’s “acts have caused injury to X Corp. and . . . will cause damage in the form of impaired condition, quality, and value of its servers, technology infrastructure, services, and reputation” (Amd. Compl. ¶ 102). Its lone deviation from that parroting — a conclusory statement that Bright Data’s “acts have diminished the server capacity that X Corp. can devote to its legitimate users” — fails to move the needle (Amd. Compl. ¶ 98). To say nothing of the fact that, as alleged, Bright Data and its customers are legitimate X users (subject to the Terms), the scraping tools and services they use are reliant on X Corp.’s servers functioning exactly as intended.

It’s not fraud under California law, because there’s no misrepresentation:

Starting with the argument that Bright Data’s technology and tools misrepresented requests, remember X Corp. does not allege that Bright Data or its customers have used their own registered accounts, or any other registered accounts, to scrape data from X, i.e., to access X by sending requests to X Corp.’s servers (for extracting and copying data). Meanwhile, X Corp. acknowledges that one does not need a registered account to access X and send such requests (see Amd. Compl. ¶ 22). X Corp. also acknowledges that X users with registered accounts can access X and send such requests without logging in to their registered accounts

And it’s not tortious interference with a contract, because, again, there’s no damage:

Among the elements of a tortious-interference claim is resulting damage. Pac. Gas & Elec., 791 P.2d at 590. The only damage that X Corp. plausibly pleaded in the instant complaint is that resulting from scraping and selling of data and, by extension, inducing scraping. X Corp. has not alleged any damage resulting from automated access to systems and, by extension, inducing automated access. As explained above, X Corp. has pleaded no impairment or deprivation of X Corp. servers resulting from sending requests to those servers. And, thin allusions to server capacity that could be devoted to “legitimate users” and reputational harm — not redressable under trespass to chattels as a matter of law — are simply too conclusory to be redressable at all. X Corp. will be allowed to seek leave to amend to allege damage (if any) resulting from automated access, as set out at the end of this order. But the instant complaint has failed to state a claim for tortious interference based on such access.

As for the scraping and selling of data, well, there’s no breach there either. And here we get into the copyright portion of the discussion. The question is who has the rights over this particular data. ExTwitter is claiming, somehow, that it has the right to stop scrapers because it has some rights over the data. But, the content is from users. Not ExTwitter. And that’s an issue.

Judge Alsup notes that ExTwitter’s terms give it a license to the content users post, but that’s a copyright license. Not a license to then do other stuff, such as suing others for copying it.

Note the rights X Corp. acquires from X users under the non-exclusive license closely track the exclusive rights of copyright owners under the Copyright Act. The license gives X Corp. rights to reproduce and copy, to adapt and modify, and to distribute and display (Terms 3–4). Section 106 of the Act gives “the owner of copyright . . . the exclusive rights to do and to authorize any of the following”: “to reproduce . . . in copies,” “to prepare derivative works,” “to distribute copies . . . to the public by sale,” and “to display . . . publicly.” 17 U.S.C. § 106. But X Corp. disclaims ownership of X users’ content and does not acquire a right to exclude others from reproducing, adapting, distributing, and displaying it under the non-exclusive license

Alsup notes that ExTwitter could, in theory, acquire the copyright on all content published on the platform instead of licensing it. However, he claims that it probably doesn’t do this because it could impact the company’s Section 230 immunities:

One might ask why X Corp. does not just acquire ownership of X users’ content or grant itself an exclusive license under the Terms. That would jeopardize X Corp.’s safe harbors from civil liability for publishing third-party content. Under Section 230(c)(1) of the Communications Decency Act, social media companies are generally immune from claims based on the publication of information “provided by another information content provider.” 47 U.S.C. § 230(c)(1). Meanwhile, under Section 512(a) of the Digital Millennium Copyright Act (“DMCA”), social media companies can avoid liability for copyright infringement when they “act only as ‘conduits’ for the transmission of information.” Columbia Pictures Indus., Inc. v. Fung, 710 F.3d 1020, 1041 (9th Cir. 2013); 17 U.S.C. § 512(a). X Corp. wants it both ways: to keep its safe harbors yet exercise a copyright owner’s right to exclude, wresting fees from those who wish to extract and copy X users’ content.

I have to admit, I’m not sure that a copyright assignment would change the Section 230 analysis… but perhaps? Anyway, it’s a weird hypothetical to raise in this scenario.

The larger point is just that ExTwitter has no right to stop others from copying this data. That’s not part of the rights the company has over the content on the site put there by third-party users.

The upshot is that, invoking state contract and tort law, X Corp. would entrench its own private copyright system that rivals, even conflicts with, the actual copyright system enacted by Congress. X Corp. would yank into its private domain and hold for sale information open to all, exercising a copyright owner’s right to exclude where it has no such right. We are not concerned here with an arm’s length contract between two sophisticated parties in which one or the other adjusts their rights and privileges under federal copyright law. We are instead concerned with a massive regime of adhesive terms imposed by X Corp. that stands to fundamentally alter the rights and privileges of the world at large (or at least hundreds of millions of alleged X users). For the reasons that follow, this order holds that X Corp.’s state-law claims against Bright Data based on scraping and selling of data are preempted by the Copyright Act.

And thus, the claims here also fail.

Arguably, this complaint was less silly than some others (and, yes, Meta made a similar — and similarly failed — complaint). The mess of the HiQ decisions means that the issue of data scraping is still kind of a big unknown under the law. Eventually, the Supreme Court may need to weigh in on scraping, and that’s going to be yet another scary Supreme Court case…

Filed Under: cfaa, contract, data scraping, license, terms of service, william alsup
Companies: bright data, meta, twitter, x

Judge Slams Elon Musk For Filing Vexatious SLAPP Suit Against Critic, Calling Out How It Was Designed To Suppress Speech

from the slappity-slapp-slapp dept

Self-described “free speech absolutist” Elon Musk has just had a judge slam him for trying to punish and suppress the speech of critics. Judge Charles Breyer did not hold back in his ruling dismissing Musk’s utterly vexatious SLAPP suit against the Center for Countering Digital Hate (CCDH).

Sometimes it is unclear what is driving a litigation, and only by reading between the lines of a complaint can one attempt to surmise a plaintiff’s true purpose. Other times, a complaint is so unabashedly and vociferously about one thing that there can be no mistaking that purpose. This case represents the latter circumstance. This case is about punishing the Defendants for their speech.

As a reminder, this case was brought after Musk got upset with CCDH for releasing a report claiming that hate speech was up on the site. I’ve noted repeatedly that I’m not a fan of CCDH and am skeptical of any research they release, as I’ve found them to have very shoddy, results-driven methodologies. So, it’s entirely possible that the report they put out is bullshit (though, there have been other reports that seem to confirm its findings).

But the way to respond to such things is, as any actual “free speech absolutist” would tell you, with more speech. Refute the findings. Explain why they were wrong. Dispute the methodology.

But Elon Musk didn’t do that.

He sued CCDH for their speech and put up a silly façade to pretend it wasn’t about their speech. Instead, he claimed it was some sort of breach of contract claim and a Computer Fraud and Abuse Act (CFAA) claim (which long-term Techdirt readers will recognize as a law intended to be used against “hacking” but which has been regularly used in abusive ways against “things we don’t like”).

Despite my general distrust of CCDH’s research and methodologies, this case seemed like an obvious attack on speech. I was happy to see the Public Participation Project (where I am a board member) file an amicus brief (put together by the Harvard Cyberlaw clinic) calling out that this was a clear SLAPP suit.

CCDH itself filed an anti-SLAPP motion, also calling out how the entire lawsuit was clearly an attempt to punish the organization for its speech.

And, thankfully, the judge agreed. The judge noted that it’s clear from the complaint and the details of the case that the lawsuit is about CCDH’s speech, at which point the burden shifts to the plaintiff to prove a “reasonable probability” of prevailing on the merits to allow the case to move forward.

Let’s just say that ExTwitter’s arguments don’t go over well. ExTwitter’s lawyers argued that it wasn’t really complaining about CCDH’s speech, but rather the way in which it collected the data it used to make its argument regarding hate speech. The judge doesn’t buy it:

But even accepting that the conduct that forms the specific wrongdoing in the state law claims is CCDH’s illegal access of X Corp. data, that conduct (scraping the X platform and accessing the Brandwatch data using ECF’s login credentials) is newsgathering—and claims based on newsgathering arise from protected activity….

It is also just not true that the complaint is only about data collection. See Reply at 3 (arguing that X Corp.’s contention that its “claims arise from ‘illegal access of data,’ as opposed to speech,” is the “artifice” at “the foundation of [this] whole case.”) (quoting Opp’n at 10). It is impossible to read the complaint and not conclude that X Corp. is far more concerned about CCDH’s speech than it is its data collection methods. In its first breath, the complaint alleges that CCDH cherry-picks data in order to produce reports and articles as part of a “scare campaign” in which it falsely claims statistical support for the position that the X platform “is overwhelmed with harmful content” in order “to drive advertisers from the X platform.” See FAC ¶ 1. Of course, there can be no false claim without communication. Indeed, the complaint is littered with allegations emphasizing CCDH’s communicative use of the acquired data. See, e.g., id. ¶¶ 17–20 (reports/articles are based on “flawed ‘research’ methodologies,” which “present an extremely distorted picture of what is actually being discussed and debated” on the X platform, in order to “silence” speech with which CCDH disagrees); id. ¶ 43 (CCDH “used limited, selective, and incomplete data from that source . . . that CCDH then presented out of context in a false and misleading manner in purported ‘research’ reports and articles.”), id. ¶ 56 (“CCDH’s reports and articles . . . have attracted attention in the press, with media outlets repeating CCDH’s incorrect assertions that hate speech is increasing on X.”).

The judge also correctly calls out that not suing for defamation doesn’t mean you can pretend you’re not suing over speech, when that’s clearly the intent here:

Whatever X Corp. could or could not allege, it plainly chose not to bring a defamation claim. As the Court commented at the motion hearing, that choice was significant. Tr. of 2/29/24 Hearing at 62:6–10. It is apparent to the Court that X Corp. wishes to have it both ways—to be spared the burdens of pleading a defamation claim, while bemoaning the harm to its reputation, and seeking punishing damages based on reputational harm.

For the purposes of the anti-SLAPP motion, what X Corp. calls its claims is not actually important. The California Supreme Court has held “that the anti-SLAPP statute should be broadly construed.” Martinez, 113 Cal. App. 4th at 187 (citing Equilon Enters. v. Consumer Cause, Inc., 29 Cal. 4th 53, 60 n.3 (2002)). Critically, “a plaintiff cannot avoid operation of the anti-SLAPP statute by attempting, through artifices of pleading, to characterize an action as a ‘garden variety breach of contract [or] fraud claim’ when in fact the liability claim is based on protected speech or conduct.” Id. at 188 (quoting Navellier, 29 Cal. 4th at 90–92); see also Baral v. Schnitt, 1 Cal. 5th 376, 393 (2016) (“courts may rule on plaintiffs’ specific claims of protected activity, rather than reward artful pleading”); Navellier, 29 Cal. 4th at 92 (“conduct alleged to constitute breach of contract may also come within constitutionally protected speech or petitioning. The anti-SLAPP statute’s definitional focus is not the form of the plaintiff’s cause of action[.]”).

This is important, as we’ve seen other efforts to try to avoid anti-SLAPP claims by pretending that a case is not about speech. It’s good to see a judge call bullshit on this. Indeed, in a footnote, the judge even calls out Elon’s lawyers for trying to mislead him about all this:

At the motion hearing, X Corp. asserted that it was “not trying to avoid defamation” and claimed to have “pleaded falsity” in paragraph 50 of the complaint. Tr. of 2/29/24 Hearing at 59:20–23. In fact, paragraph 50 did not allege falsity, or actual malice, though it used the word “incorrect.” See FAC ¶ 50 (“incorrect implications . . . that hate speech viewed on X is on the rise” and “incorrect assertions that X Corp. ‘doesn’t care about hate speech’”). When the Court asked X Corp. why it had not brought a defamation claim, it responded rather weakly that “to us, this is a contract and intentional tort case,” and “we simply did not bring it.” Tr. of 2/29/24 Hearing at 60:1–12; see also id. at 60:21–22 (“That’s not necessarily to say we would want to amend to bring a defamation claim.”).

The judge outright mocks ExTwitter’s claim that it was really only concerned with the data scraping techniques and would have brought the same case even if CCDH hadn’t published its report. He notes that each of the torts under which the case was brought requires ExTwitter to show harm, and the only “harms” it discusses are “harms” from the speech:

X Corp.’s many allegations about CCDH’s speech do more than add color to a complaint about data collection—they are not “incidental to a cause of action based essentially on nonprotected activity.” See Martinez, 113 Cal. App. 4th at 187. Instead, the allegations about CCDH’s misleading publications provide the only support for X Corp.’s contention that it has been harmed. See FAC ¶ 78 (breach of contract claim: alleging that CCDH “mischaracterized the data . . . in efforts to claim X is overwhelmed with harmful conduct, and support CCDH’s call to companies to stop advertising on X. . . . As a direct and proximate result of CCDH’s breaches of the ToS in scraping X, X has suffered monetary and other damages in the amount of at least tens of millions of dollars”); ¶¶ 92–93 (intentional interference claim: alleging that Defendants “intended for CCDH to mischaracterize the data regarding X in the various reports and articles . . . to support Defendants’ demands for companies to stop advertising on X” and that “[a]s a direct and proximate result of Defendants intentionally interfering with the Brandwatch Agreements . . . X Corp. has suffered monetary and other damages of at least tens of millions of dollars”); ¶¶ 98–99 (inducing breach of contract claim: alleging that “X Corp. was harmed and suffered damages as a result of Defendants’ conduct when companies paused or refrained from advertising on X, in direct response to CCDH’s reports and articles” and that “[a]s a direct and proximate result of Defendants inducing Brandwatch to breach the Brandwatch Agreements . . . X Corp. has suffered monetary and other damages in the amount of at least tens of millions of dollars.”).

The “at least tens of millions of dollars” that X Corp. seeks as damages in each of those claims is entirely based on the allegation that companies paused paid advertising on the X platform in response to CCDH’s “allegations against X Corp. and X regarding hate speech and other types of content on X.” See id. ¶ 70. As CCDH says, “X Corp. alleges no damages that it could possibly trace to the CCDH Defendants if they had never spoken at all.” Reply at 3. Indeed, X Corp. even conceded at the motion hearing that it had not alleged damages that would have been incurred if CCDH “had scraped and discarded the information,” or scraped “and never issued a report, or scraped and never told anybody about it.” See Tr. of 2/29/24 Hearing at 7:22–8:3. The element of damages in each state law claim therefore arises entirely from CCDH’s speech.

In other words, stop your damned lying. This case was always about trying to punish CCDH for its speech.

ExTwitter could still get past an anti-SLAPP motion by showing it had a likelihood of succeeding on the other claims, but no such luck. First up are the “breach of contract” claims that are inherently silly for a variety of reasons (not all of which I’ll go over here).

But the biggest one, again, is that if this were a legitimate breach of contract claim, Musk wouldn’t be seeking “reputational damages.” But he is. And thus:

Another reason that the damages X Corp. seeks—“at least tens of millions of dollars” of lost revenue that X Corp. suffered when CCDH’s reports criticizing X Corp. caused advertisers to pause spending, see FAC ¶ 70—are problematic is that X Corp. has alleged a breach of contract but seeks reputation damages. Of course, the main problem with X Corp.’s theory is that the damages alleged for the breach of contract claim all spring from CCDH’s speech in the Toxic Twitter report, and not its scraping of the X platform. See Reply at 7 (“Because X Corp. seeks (impermissibly) to hold CCDH U.S. liable for speech without asserting a defamation claim, it is forced to allege damages that are (impermissibly) attenuated from its claimed breach.”). One way we know that this is true is that if CCDH had scraped the X platform and never spoken, there would be no damages. Cf. ACLU Br. at 12. (“Had CCDH U.S. praised rather than criticized X Corp., there would be no damages to claim and therefore no lawsuit.”). Again, X Corp. conceded this point at the motion hearing.

CCDH’s reputation damages argument is another way of saying that the damages X Corp. suffered when advertisers paused their spending in response to CCDH’s reporting was not a foreseeable result of a claimed breach.

In other words, the actual issue here wasn’t any “breach of contract”. Just the speech. Which is protected.

And that becomes important for a separate analysis of whether or not the First Amendment applies here, with the judge going through the relevant precedents and noting that there’s just nothing in the case that suggests any of the relevant damages are due to any claimed contractual breach, rather than from CCDH’s protected speech:

Here, X Corp. is not seeking some complicated mix of damages—some caused by the reactions of third parties and some caused directly by the alleged breach. The Court can say, as a matter of law, whether the single type of damages that X Corp. seeks constitutes “impermissible defamation-like publication damages that were caused by the actions and reactions of third parties to” speech or “permissible damages that were caused by [CCDH’s] breaches of contract.”….

The breach that X Corp. alleges here is CCDH’s scraping of the X platform. FAC ¶ 77. X Corp. does not allege any damages stemming directly from CCDH’s scraping of the X platform.19 X Corp. seeks only damages based on the reactions of advertisers (third parties) to CCDH’s speech in the Toxic Twitter report, which CCDH created after the scraping. See FAC ¶¶ 70, 78; see also ACLU Br. at 12 (“The damages X Corp. seeks . . . are tied to reputational harm only, with no basis in any direct physical, operational or other harm that CCDH U.S.’s alleged scraping activities inflicted on X Corp.”). That is just what the Fourth Circuit disallowed in Food Lion, 194 F.3d at 522. The speech was not the breach, as it was in Cohen. And X Corp.’s damages would not have existed even if the speech had never occurred, as in Newman, 51 F.4th at 1134. Here, there would be no damages without the subsequent speech. Accordingly, the Court can hold as a matter of law that the damages alleged are impermissible defamation-like publication damages caused by the actions of third parties to CCDH’s report.

And, finally, the court says it would be “futile” to allow ExTwitter to amend the complaint and try again. Everything here is about punishing CCDH for its speech. While the judge notes that courts should often give plaintiffs leave to amend where it makes sense, here he rightly fears that ExTwitter would use any amendment simply to further punish CCDH for its speech.

ExTwitter argued that it could amend the complaint to show that the data scraping harmed the “safety and security” of the site, and that it created harm for the company in having to protect the site. Again, the judge points out how ridiculous both arguments are.

On the “safety and security” side, the judge effectively says the legal equivalent of “I wasn’t born yesterday.” Nothing in this scraping puts users at risk. It was just a sophisticated search of Twitter’s site:

While security and safety are noble concepts, they have nothing to do with this case. The Toxic Twitter report stated that CCDH had used the SNScrape tool, “which utilizes Twitter’s search function,” to “gather tweets from” “ten reinstated accounts,” resulting in a “dataset of 9,615 tweets posted by the accounts.” Toxic Twitter at 17. There is no allegation in the complaint, and X Corp. did not assert that it could add an allegation, that CCDH scraped anything other than public tweets that ten X platform users deliberately broadcast to the world.21 No private user information was involved—no social security numbers, no account balances, no account numbers, no passwords, not even “gender, relationship status, ad interests etc.” See Meta Platforms, Inc. v. BrandTotal Ltd., 605 F. Supp. 3d 1218, 1273 (2022).
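To make concrete how limited the described collection was, here's a minimal sketch of that kind of search-based gathering: filtering a stream of public posts down to those from a fixed set of accounts. All data and names below are invented for illustration; the actual report used the snscrape tool against Twitter's public search function.

```python
# Hypothetical sketch: collecting only public posts from named accounts,
# the shape of what the Toxic Twitter report describes. No private data
# is touched; anything non-public is simply never collected.

def collect_public_posts(stream, accounts):
    """Keep only public posts authored by the named accounts."""
    return [p for p in stream if p["author"] in accounts and p["public"]]

# Invented sample data standing in for search results.
posts = [
    {"author": "account_a", "public": True,  "text": "hello"},
    {"author": "account_b", "public": True,  "text": "world"},
    {"author": "account_c", "public": False, "text": "private"},
]

dataset = collect_public_posts(posts, {"account_a", "account_c"})
print(len(dataset))  # only account_a's public post survives the filter
```

The point of the sketch is the judge's: the filter can only ever return what was already broadcast to the world.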

And, importantly, the judge notes that even if some users of ExTwitter want to delete their tweets, it doesn’t implicate their “safety and security” that someone might have seen (or even saved) their tweets earlier:

When asked why the collecting of public tweets implicates users’ security interests, X Corp. insisted that “this all goes to control over your data,” and that users expect that they will be able to take down their tweets later, or to change them—abilities they are robbed of when the “data” is scraped. See Tr. of 2/29/24 Hearing at 40:17–24. But even assuming that it is “very important” to a user that he be able to amend or remove his pro-neo-Nazi tweets at some point after he has tweeted them, see id. at 40:24–25, a user can have no expectation that a tweet that he has publicly disseminated will not be seen by the public before that user has a chance to amend or remove it. While scraping is one way to collect a user’s tweets, members of the public could follow that user and view his tweets in their feeds, or use the X platform’s search tool (as SNScrape did) and view his tweets that way.

X Corp.’s assertion that the scraping alleged here violates a user’s “safety and security” in his publicly disseminated tweets is therefore a non-starter.

As for a harm in ExTwitter having to then “protect” its users, the judge also laughs that one off. He notes that scraping is a common part of how the web works, even if some websites don’t like it:

The problem with this argument is that it is at odds with what X Corp. has alleged.

Although social media platforms do not like it, scraping, for various ends, is commonplace. See, e.g., ACLU Br. at 7 (“Researchers and Journalists Use Scraping to Enable Speech in the Public Interest and Hold Power to Account.”); see also id. at 8–9 (collecting sources); Andrew Sellars, “Twenty Years of Web Scraping and the Computer Fraud and Abuse Act,” 24 B.U. J. Sci. & Tech. L 372, 375 (2018) (“Web scraping has proliferated beneath the shadow of the [CFAA].”); hiQ 2022 Circuit opinion, 31 F.4th at 1202 (“HiQ points out that data scraping is a common method of gathering information, used by search engines, academic researchers, and many others. According to hiQ, letting established entities that already have accumulated large user data sets decide who can scrape that data from otherwise public websites gives those entities outsized control over how such data may be put to use . . . the public interest favors hiQ’s position.”); see also id. at 1186 (“LinkedIn blocks approximately 95 million automated attempts to scrape data every day”).

Furthermore, while the judge notes that, of course, there can be some cases where scraping could create harms for a site, this case is not an example of that. It was just doing a basic search of publicly available info:

This is not such a case. Here, CCDH is alleged to have used Twitter’s own search tool to collect 9,615 public tweets from ten Twitter users, see FAC ¶ 77; Toxic Twitter at 17, and then to have announced that it did so in a public report, see id. Assuming for present purposes that this conduct amounts to the “scraping” barred by the ToS,23 the extent of CCDH’s scraping was not a mystery. As CCDH asked at the motion hearing, “What CCDH did and the specific tweets that it gathered, what tool it used, how it used that tool and what the results were are documented explicitly in its public report. So what is it that they’re investigating?” Tr. of 2/29/24 Hearing at 22: 3–7. Nor was this the kind of large-scale, commercial scraping—as in hiQ, as alleged in Bright Data—that could conceivably harm the X platform or overburden its servers. It is not plausible that this small-scale, non-commercial scraping would prompt X Corp. to divert “dozens, if not over a hundred personnel hours across disciplines,” see Tr. of 2/29/24 Hearing at 8:7–11, of resources toward the repair of X Corp.’s systems. Nor would such expenditures have been foreseeable to CCDH in 2019. In 2019, if CCDH had thought about the no-scraping provision in the ToS at all, it would have expected X Corp. to incur damages only in response to breaches of that provision that could actually harm the X platform. It would not have expected X Corp. to incur damages in connection with a technical breach of that provision that involved the use of Twitter’s search tool to look at ten users and 9,615 public tweets.

Furthermore, the judge notes that if ExTwitter had to spend time and money to “protect” the site here, it wouldn’t be because of any costs from the scraping, but rather from CCDH’s (again, protected) speech.

And thus the anti-SLAPP motion wins, the case is dismissed, and Elon can’t file an amended complaint. Indeed, the judge calls out the silliness of ExTwitter claiming that CCDH was seeking to suppress speech, by noting that it’s quite obvious the opposite is going on here:

The Court notes, too, that X Corp.’s motivation in bringing this case is evident. X Corp. has brought this case in order to punish CCDH for CCDH publications that criticized X Corp.—and perhaps in order to dissuade others who might wish to engage in such criticism. Although X Corp. accuses CCDH of trying “to censor viewpoints that CCDH disagrees with,” FAC ¶ 20, it is X Corp. that demands “at least tens of millions of dollars” in damages—presumably enough to torpedo the operations of a small nonprofit—because of the views expressed in the nonprofit’s publications…. If CCDH’s publications were defamatory, that would be one thing, but X Corp. has carefully avoided saying that they are.

Given these circumstances, the Court is concerned that X Corp.’s desire to amend its breach of contract claim has a dilatory motive—forcing CCDH to spend more time and money defending itself before it can hope to get out from under this potentially ruinous litigation. See PPP Br. at 2 (“Without early dismissal, the free speech interests that the California legislature sought to protect will vanish in piles of discovery motions.”). As CCDH argued at the motion hearing, the anti-SLAPP “statute recognizes that very often the litigation itself is the punishment.” Tr. of 2/29/24 Hearing at 33:12–34:5. It would be wrong to allow X Corp. to amend again when the damages it now alleges, and the damages it would like to allege, are so problematic, and when X Corp.’s motivation is so clear.

Accordingly, the Court STRIKES the breach of contract claim and will not allow X Corp. to add the proposed new allegations as to that claim.

The judge also drops a hammer on the silly CFAA claims. First, the court notes that, as per the Supreme Court’s Van Buren ruling, if you’re arguing losses from hacking, the loss has to come from “technical harms” associated with the unauthorized access. ExTwitter claims that the “loss” was from the investigation it had to do to stop such violations, but the judge isn’t buying it. (For what it’s worth, Riana Pfefferkorn notes that the issue of harms having to be “technological harms” came from dicta in the Supreme Court, not a holding, but it appears that courts are treating it as a holding…)

When the lawsuit was filed, we called out a big part of the problem: it was a separate entity, BrandWatch, that gave CCDH access to its tools to do research on Twitter. So if ExTwitter had any complaint here, it might (weakly!) be against BrandWatch for giving CCDH access. But it can't plausibly argue that CCDH violated the CFAA.

X Corp.’s losses in connection with “attempting to conduct internal investigations in efforts to ascertain the nature and scope of CCDH’s unauthorized access to the data,” see FAC ¶ 87, are not technological in nature. The data that CCDH accessed does not belong to X Corp., see Kaplan Decl. Ex. A at 13 (providing that users own their content and grant X Corp. “a worldwide, non-exclusive, royalty-free license”), and there is no allegation that it was corrupted, changed, or deleted. Moreover, the servers that CCDH accessed are not even X Corp.’s servers. X Corp. asserted at the motion hearing that its servers “stream data to Brandwatch servers in response to queries from a logged in user” and so “you cannot say fairly it’s not our systems.” Tr. of 2/29/24 Hearing at 28:16–20. But that is not what the complaint alleges. The complaint alleges that “X Corp. provided non-public data to Brandwatch” and “[t]hat data was then stored on a protected computer.” FAC ¶ 83; see also id. ¶ 86 (“the Licensed Materials were stored on servers located in the United States that Brandwatch used for its applications. CCDH and ECF thus knew that, in illegally using ECF’s login credentials and querying the Licensed Materials, CCDH was targeting and gaining unauthorized access to servers used by Brandwatch in the United States.”) (emphasis added); id. ¶ 29 (Twitter would stream its Licensed Materials from its servers, “including in California,” to “servers used by Brandwatch [] located in the United States, which Brandwatch’s applications accessed to enable [its] users with login credentials to analyze the data.”).28 It is therefore hard to see how an investigation by X Corp. into what data CCDH copied from Brandwatch’s servers could amount to “costs caused by harm to computer data, programs, systems, or information services.” See Van Buren, 141 S. Ct. at 1659–60.

The court also laughs off the idea that the cost of attorneys could be seen as “technological harm” under the CFAA. And thus, CFAA claims are dropped.

As we noted in our original post, the other claims were the throw-in claims designed to piggyback on the contract and CFAA claims and to sound scary and drive up the legal fees. The judge sees these for what they are and dismisses them easily.

CCDH’s arguments for dismissing the tort claims are that: (1) the complaint shows that CCDH did not cause a breach; (2) the complaint has failed to plausibly allege a breach; (3) the complaint has failed to plausibly allege CCDH’s knowledge; and (4) the complaint fails to adequately allege damages. MTD&S at 23–26. The Court concludes that CCDH’s arguments about causation and about damages are persuasive, and does not reach its other arguments.

The court also called out that it’s weird (and notable in exposing Musk’s nonsense) that the complaint seeks to hold CCDH liable for Brandwatch allowing CCDH to use its services:

X Corp.’s response is that “the access is the breach.” Opp’n at 30. In other words, Brandwatch agreed to keep the Licensed Materials secure, and by allowing CCDH to access the Licensed Materials, Brandwatch necessarily—and simultaneously—breached its agreement to keep the Licensed Materials secure. The Court rejects that tortured reasoning. Any failure by Brandwatch to secure the Licensed Materials was a precondition to CCDH’s access. In addition, to the extent that X Corp. maintains that CCDH need not have done anything to impact Brandwatch’s behavior, then it is seeking to hold CCDH liable for breaching a contract to which it was not a party. That does not work either.

The complaint also included some utter nonsense that the judge isn’t buying. Some Senator had said that CCDH was a “foreign dark money group,” and thus ExTwitter said there might be further claims against unnamed “John Does,” but the judge points out that this conspiracy theory isn’t backed up by anything legitimate:

CCDH’s last argument is that X Corp. fails to state a claim against the Doe defendants. MTD&S at 26. X Corp. alleges that “one [unnamed] United States senator referred to CCDH as ‘[a] foreign dark money group,’” and that “[o]ther articles have claimed that CCDH is, in part, funded and supported by foreign organizations and entities whose directors, trustees, and other decision-makers are affiliated with legacy media organizations.” FAC ¶ 62. It further alleges that “CCDH is acting . . . at the behest of and in concert with funders, supporters, and other entities.” Id. ¶ 63. These allegations are vague and conclusory, and do not state a plausible claim against the Doe defendants. See Iqbal, 556 U.S. at 678 (claim is plausible “when the plaintiff pleads factual content that allows the court to draw the reasonable inference that the defendant is liable for the misconduct alleged.”).

So, case dismissed. Of course, Elon might appeal to the 9th Circuit, running up the costs he'll eventually need to pay CCDH's lawyers in a further attempt to bully and suppress the speech of critics (and to chill the speech of other critics).

And, yeah, that might happen. All from the “free speech absolutist.”

On the whole, though, this is a good, clear ruling that really highlights (1) the bullshit vexatious, censorial nature of the SLAPP suit from Elon and (2) the value and importance of a strong anti-SLAPP law like California’s.

Filed Under: anti-slapp, breach of contract, california, cfaa, charles breyer, chilling effects, elon musk, free speech, slapp
Companies: brandwatch, ccdh, twitter, x

Court Tosses Lawsuit Against NSO Group Brought By Murdered Journalist Jamal Khashoggi’s Widow

from the may-eventually-find-justice-but-not-by-going-this-route dept

Jamal Khashoggi, a dissident journalist often critical of the Saudi government, was murdered by Saudi government agents while inside the Saudi consulate in Istanbul, Turkey. He wasn’t just murdered. His body was dismembered. All of this was captured by hidden recording devices placed by the Turkish government in the Saudi consulate.

The Saudi government is — or at least, was — a customer of Israeli exploit developer NSO Group. For years, NSO Group sold its powerful phone malware to some of the world’s most notorious human rights abusers. On that list is the Saudi government — a point it drove home by luring legal US resident (and Washington Post journalist) Jamal Khashoggi to its consulate for the sole purpose of erasing him from human existence.

NSO knew who it was selling to. It just didn’t care. It didn’t care until several months of increasingly negative coverage forced multiple countries to issue sanctions, open investigations, and — in the case of its homeland — finally restrict which countries it could sell to.

This has also led to NSO being sued multiple times by people targeted by its spyware as well as by US tech companies whose infrastructure and communication services were used to infect targeted devices.

NSO Group will be walking away from at least one lawsuit, though. Hanan Elatr, Khashoggi’s widow, sued NSO, along with Q Cyber Technologies, another malware manufacturer. Elatr alleged multiple injuries stemming from the infection of her two phones by NSO’s Pegasus malware, a zero-click exploit. These infections were confirmed by Toronto’s Citizen Lab, which has uncovered dozens of similar infections of journalists, dissidents, government critics, and human rights advocates over the past few years.

Among the violations listed in Elatr’s suit are violations of Virginia’s Computer Crimes Act and the CFAA. On top of that, there are counts of negligence and intentional infliction of emotional distress. Elatr alleges her phones were infected to make it easier for the Saudi government to locate Jamal Khashoggi, an effort that ultimately resulted in his murder by Saudi security personnel.

Unfortunately, as horrific as this crime was, there’s no reason to believe NSO Group was complicit, at least not in the legal sense. NSO Group’s business model is unethical. But it’s not illegal. And it can’t be held directly responsible for the acts of its customers, especially when there’s a lot of missing connective tissue between selling malware to the Saudi government and the Saudi government deciding to murder someone on foreign soil.

Sure, NSO is obviously aware it’s been selling oppression enhancement software to some of the worst people on earth. And while it’s capable of comprehending the damage it’s helping create, it is not (at least I hope it isn’t) conspiring with foreign governments to commit acts of brutality.

Whether or not NSO can even be considered legally culpable in any way for acts of violence by its customers isn’t up for debate in this case. The real problem with this lawsuit is there’s nothing connecting NSO Group, the Saudi government, and the murder of a journalist on (technically) Saudi soil located in Istanbul, Turkey. On top of that, most of the detected infections of Hanan Elatr’s phones appeared to have been initiated by the United Arab Emirates government, another one of NSO’s detestable customers.

NSO moved to dismiss the lawsuit, first by claiming it had “derivative sovereign immunity” as a foreign company. The court notes this argument just doesn’t work, having already been rejected by the Ninth Circuit Court of Appeals when NSO attempted to escape the lawsuit brought by WhatsApp. The Foreign Sovereign Immunities Act (FSIA) does not cover private companies. It’s as simple as that.

More to the point is the lack of jurisdiction. It doesn’t really matter what either party argues: there’s nothing tying any of them to Virginia other than Elatr’s current residence there. Since the alleged phone infections occurred in 2018, when Elatr was living in Dubai and working as a flight attendant for Emirates Airlines, none of the alleged injuries were sustained in Virginia.

Despite alleging that the surveillance was ongoing in this district, and that NSO Group “and its clients” targeted plaintiff, the Complaint fails to include any non-conclusory allegations regarding how long and where plaintiff had been living in the district, and how NSO Group specifically participated in the surveillance of her phones while she was in Virginia, as opposed to the conduct that may have happened while she was overseas or travelling for work as flight attendant for Emirates Airlines. Nor does the Complaint allege facts that counter defendants’ argument that non-party Saudi and Emirati governments were using the Pegasus technology to surveil plaintiff.

On top of that, there’s no evidence NSO Group had anything to do with the targeting of Elatr. It provided the malware, but the targeting was done by foreign governments that aren’t named as defendants. (Sovereign immunity would likely apply if they were.)

While the court doesn’t particularly care for NSO Group’s repeated assertions that it’s indistinguishable from the Israeli government in terms of immunity, it also says NSO can’t be sued in Virginia. It may not be able to be sued at all by Khashoggi’s widow, no matter where the legal action is filed, because it did not itself infect her phones, nor did it directly engage in surveillance of her devices.

Khashoggi’s widow will have to seek justice elsewhere. But given the shamelessness of the Saudi government and its unwillingness to take responsibility for a murder it ordered, it’s unlikely going after the murderers directly is going to result in anything more than years of frustration.

Filed Under: cfaa, hanan elatr, jamal khashoggi, liability, saudi arabia, spyware, surveillance
Companies: nso group, q cyber technologies

Air Canada Would Rather Sue A Website That Helps People Book More Flights Than Hire Competent Web Engineers

from the time-to-cross-air-canada-off-the-flight-list dept

I am so frequently confused by companies that sue other companies for making their own sites and services more useful. It happens quite often. And quite often, the lawsuits are questionable CFAA claims against websites that scrape data to provide a better consumer experience, but one that still ultimately benefits the originating site.

Over the last few years various airlines have really been leading the way on this, with Southwest being particularly aggressive in suing companies that help people find Southwest flights to purchase. Unfortunately, many of these lawsuits are succeeding, to the point that a court has literally said that a travel company can’t tell others how much Southwest flights cost.

But the latest lawsuit of this nature doesn’t involve Southwest, and it is quite possibly the dumbest one yet. Air Canada has sued Seats.aero, a site that helps users figure out the best flights for their frequent flyer miles. Seats.aero is a small operation run by a company with the best name ever: Localhost, meaning the lawsuit is technically “Air Canada v. Localhost,” which sounds almost as dumb as the lawsuit itself.

The Air Canada Group brings this action because Mr. Ian Carroll—through Defendant Localhost LLC—created a for-profit website and computer application (or “app”)— both called Seats.aero—that use substantial amounts of data unlawfully scraped from the Air Canada Group’s website and computer systems. In direct violation of the Air Canada Group’s web terms and conditions, Carroll uses automated digital robots (or “bots”) to continuously search for and harvest data from the Air Canada Group’s website and database. His intrusions are frequent and rapacious, causing multiple levels of harm, e.g., placing an immense strain on the Air Canada Group’s computer infrastructure, impairing the integrity and availability of the Air Canada Group’s data, soiling the customer experience with the Air Canada Group, interfering with the Air Canada Group’s business relations with its partners and customers, and diverting the Air Canada Group’s resources to repair the damage. Making matters worse, Carroll uses the Air Canada Group’s federally registered trademarks and logo to mislead people into believing that his site, app, and activities are connected with and/or approved by the real Air Canada Group and lending an air of legitimacy to his site and app. The Air Canada Group has tried to stop Carroll’s activities via a number of technological blocking measures. But each time, he employs subterfuge to fraudulently access and take the data—all the while boasting about his exploits and circumvention online.

Almost nothing in this makes any sense. Having third parties scrape sites for data about prices is… how the internet works. Whining about it is stupid beyond belief. And here, it’s doubly stupid, because anyone who finds a flight via seats.aero is then sent to Air Canada’s own website to book that flight. Air Canada is making money because Carroll’s company is helping people find Air Canada flights they can take.

Why are they mad?

Air Canada’s lawyers also seem technically incompetent. I mean, what the fuck is this?

Through screen scraping, Carroll extracts all of the data displayed on the website, including the text and images.

Carroll also employs the more intrusive API scraping to further feed Defendant’s website.

If “API scraping” is “more intrusive” than screen scraping, you’re doing your APIs wrong. Is Air Canada saying that its tech team is so incompetent that its API puts more load on the site than scraping does? Because, if so, Air Canada should fire its tech team. The whole point of an API is to let people access your data without resorting to the more cumbersome process of scraping.
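To see why, here's a hedged sketch of the difference. A screen scraper has to fetch and parse an entire HTML page to recover one value; an API returns just the structured fields. The HTML and JSON payloads below are invented for illustration — they are not Air Canada's actual pages or API.

```python
# Contrast: extracting a price by screen scraping (parse the whole page)
# versus reading it from a structured API response. Payloads are hypothetical.
import json
from html.parser import HTMLParser

html_page = """<html><body><div class="header">Flights</div>
<span id="price">$199</span><div class="footer">...</div></body></html>"""

api_payload = '{"flight": "AC101", "price": 199}'

class PriceScraper(HTMLParser):
    """Walk every tag in the page just to find one <span id="price">."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.price = None
    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("id", "price") in attrs:
            self.in_price = True
    def handle_data(self, data):
        if self.in_price:
            self.price = data
            self.in_price = False

scraper = PriceScraper()
scraper.feed(html_page)        # must process the entire document
print(scraper.price)           # scraped value, as display text

print(json.loads(api_payload)["price"])  # API value, already structured
```

The server-side cost follows the same logic: rendering a full page for a scraper is strictly more work than serializing a small JSON response, which is why "the API is more intrusive" is such a strange claim.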

And, yes, this lawsuit really calls into question Air Canada’s tech team and their ability to run a modern website. If your website can’t handle having its flights and prices scraped a few times every day, then you shouldn’t have a website. Get some modern technology, Air Canada:

Defendant’s avaricious data scraping generates frequent and myriad requests to the Air Canada Group’s database—far in excess of what the Air Canada Group’s infrastructure was designed to handle. Its scraping collects a large volume of data, including flight data within a wide date range and across extensive flight origins and destinations—multiple times per day.

Maybe… invest in better infrastructure like basically every other website that can handle some basic scraping? Or, set up your API so it doesn’t fall over when used for normal API things? Because this is embarrassing:

At times, Defendant’s voluminous requests have placed such immense burdens on the Air Canada Group’s infrastructure that it has caused “brownouts.” During a brownout, a website is unresponsive for a period of time because the capacity of requests exceeds the capacity the website was designed to accommodate. During brownouts caused by Defendant’s data scraping, legitimate customers are unable to use or the Air Canada + Aeroplan mobile app, including to search for available rewards, redeem Aeroplan points for the rewards, search for and view reward travel availability, book reward flights, contact Aeroplan customer support, and/or obtain service through the Aeroplan contact center due to the high volume of calls during brownouts.
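For what it's worth, the standard engineering answer to heavy automated traffic is not a lawsuit but throttling: reject excess requests with an HTTP 429 rather than let the whole site brown out. A minimal token-bucket sketch (capacity and refill rate are invented numbers):

```python
# Sketch of a token-bucket rate limiter, the common defense against
# request floods. Each client gets a bucket; requests beyond its refill
# rate are refused (429) instead of degrading service for everyone.
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True    # serve the request
        return False       # respond 429 Too Many Requests

bucket = TokenBucket(capacity=5, refill_per_sec=1)
results = [bucket.allow() for _ in range(10)]  # burst of 10 requests
print(results.count(True))  # roughly the first 5 pass; the rest are throttled
```

If a handful of daily price queries can brown out your infrastructure, the fix is a few dozen lines like these, not a federal complaint.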

Air Canada’s lawyers also seem wholly unfamiliar with the concept of nominative fair use for trademarks. If you’re displaying someone’s trademarks for the sake of accurately talking about them, there’s no likelihood of confusion and no concern about the source of the information. Air Canada claiming that this is trademark infringement is ridiculous:

I guarantee that no one using Seats.aero thinks that they’re on Air Canada’s website.

The whole thing is so stupid that it makes me never want to fly Air Canada again. I don’t trust an airline that can’t set up its website/API to handle someone making its flights more attractive to buyers.

But, of course, in these crazy times with the way the CFAA has been interpreted, there’s a decent chance Air Canada could win.

For its part, Carroll says that he and his lawyers have reached out to Air Canada “repeatedly” to try to work with them on how they “retrieve availability information,” and that “Air Canada has ignored these offers.” He also notes that tons of other websites are scraping the very same information, and he has no idea why he’s been singled out. He further notes that he’s always been open to adjusting the frequency of searches and working with the airlines to make sure that his activities don’t burden the website.

But, really, the whole thing is stupid. The only thing that Carroll’s website does is help people buy more flights. It points people to the Air Canada site to buy tickets. It makes people want to fly more on Air Canada.

Why would Air Canada want to stop that, other than that it can’t admit its website operations should all be replaced by a more competent team?

Filed Under: api, cfaa, flights, frequent fliers, scraping, screen scraping, trademark
Companies: air canada, localhost, seats.aero

Journalists Ask DOJ To Stop Treating URL Alterations As A Federal Crime

from the insecurity-complex dept

The DOJ — following a period of questionable leadership under Donald Trump — said it has little interest in prosecuting journalists. It has also made it clear it will not abuse the CFAA to punish people who did nothing more than access sites in ways not intended by the sites’ creators.

Why? Because there are a multitude of First Amendment issues the DOJ would rather not tangle with. Journalists should almost always be considered off limits because they are instrumental in reporting on issues of public interest. BS CFAA prosecutions should be shitcanned for the same reason: they’re more likely to violate rights than capture criminals.

No sooner had the DOJ pledged to be better about the CFAA and its intersection with the First Amendment than it reversed course, raiding a journalist’s home over footage of a Fox News interview with rapper Kanye West. It was hardly the sort of thing one would hope their government would be interested in: a coddling conversation with a talented musician who also harbored some rather upsetting anti-Semitic views.

The stuff cut from the Fox interview was obtained and aired by Tim Burke. The unaired footage was illuminating, to say the least. During that interview, Kanye West delivered a bizarre conspiracy theory that included Planned Parenthood, the KKK, and a supposed effort to control the Jewish population in the United States. It also showed that Kanye West — one of Trump’s “black friends” — had been vaccinated, even as Trump continued to espouse things like bleach and horse dewormers.

That embarrassment of a lame duck and his preferred news outlet apparently led to the raid of Tim Burke’s house — a raid that resulted in nearly all of his electronic devices being seized. Burke is no traditional journalist, having worked for a variety of web outlets, including the version of Deadspin that routinely engaged in sociopolitical conversation until told to “focus on sports” by its new private equity owners. (The best contributors to Deadspin have defected to, um, Defector and definitely deserve your support.)

What Burke apparently accessed (perhaps due to password sharing) was the unvetted feed of the interview — one that was supposed to remain out of sight until Fox could edit it to its liking. But it wasn’t hacking. It may have been “unauthorized” access, but only in the sense that the temporary host of this unedited footage would never knowingly share it with a muckraking journalist.

That being said, it wasn’t as though the feed wasn’t publicly accessible. The temp login Burke used gave him access to URLs any web user could access, if only they knew where to look. The login led to unsecured footage and recordings, including the ones Burke accessed and published.
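"URL alteration" sounds far more technical than it is: it amounts to editing the text of a web address and sending the same ordinary GET request any browser sends. A minimal sketch (the host and paths below are placeholders, not the actual URLs in this case) of what that "alteration" consists of:

```python
from urllib.parse import urlsplit, urlunsplit


def alter_path(url: str, new_path: str) -> str:
    """Swap the path component of a URL -- the entirety of 'URL alteration'."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, new_path, parts.query, parts.fragment))


# Placeholder URL for illustration only.
original = "https://media.example.com/streams/demo-feed-001"
guessed = alter_path(original, "/streams/demo-feed-002")
# guessed == "https://media.example.com/streams/demo-feed-002"
```

If the server responds to the altered URL without demanding credentials, the content was, by definition, publicly reachable — which is the crux of the objection to treating this as a federal crime.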

The FBI raided Burke’s home, seizing his phones and computers. The DOJ seems intent on prosecuting Burke for “stealing” personal information, which definitely isn’t what happened here.

Lucas Ropek has published a lengthy examination of this case for Gizmodo — one that shows just how far off its own rails the DOJ has gone. That examination quotes Kim Zetter’s discussion of the case, one that shows the DOJ is trying to criminalize the everyday activities of millions of web users in hopes of knocking this particular journalist down a peg or two with a criminal conviction.

It’s not clear what action Burke took constitutes a crime in the minds of prosecutors — whether they think he broke the law by using the publicly accessible demo credentials, or by viewing and recording the unencrypted live feeds, or both.

If the government alleges that Burke violated the CFAA by using the credentials then, Rasch says, this would criminalize the sharing of any password. Family members who share Netflix passwords would be violating the CFAA, he says, and this is not what the statute intended or says.

The government may, however, say that Burke violated the portion of the CFAA that pertains to “unauthorized access” — that is, even though the feeds were unencrypted and were publicly accessible without needing to use a password…

What the government is criminalizing in this prosecution are things as innocuous as password sharing and URL alteration. That those on the receiving end of either of these activities may not like them doesn’t make them criminal acts. And that’s according to the DOJ’s own statements of intent — ones that said it would not target journalists during certain investigations nor criminalize normal internet behavior just because the CFAA can be read as criminalizing those acts.

Once again, journalists perhaps more respected than Tim Burke are rallying support for his cause. Sure, Burke may be a convenient target, given his apparent willingness to embrace murky methods of obtaining information, but if the DOJ can find him guilty of password sharing and URL alteration, journalists, activists, and everyday internet users will, once again, find themselves on the wrong side of the DOJ’s definition of the law.

Nearly fifty rights groups and journalism advocates have signed off on a letter [PDF] to Attorney General Merrick Garland demanding the DOJ drop its extremely misguided prosecution of Tim Burke. The letter raises several concerns, as well as demanding answers from the AG about his implicit support of this incursion on long-held First Amendment rights.

It would be extremely problematic — and unconstitutional — to criminalize access to publicly available information simply because powerful people would prefer it be kept private. It is antithetical to the Fourth Estate’s constitutionally-protected function to place a burden on journalists to intuit what publicly-available, newsworthy information public figures want kept secret, and to abide by their wishes.

To the extent that the DOJ’s investigation is based on Burke’s use of “demo” credentials to access to the platform on which he found the publicly accessible URL, it is also not clear how such access could be “without authorization.” Burke, to the best of our knowledge based on the aforementioned reporting, received the demo credentials from a source, who found them publicly posted on the internet with no restrictions on anyone’s use. If there is more to the story, then the government should explain those facts to avoid chilling similar newsgathering.

The letter also asks the DOJ to explain whether its own policy — the one that said it would not target journalists with warrants or subpoenas for actions related to “obtaining records” or otherwise “acting within the scope of newsgathering” — was followed in this case. It also asks the DOJ to explain who it considers to be a “journalist” worthy of the protections put in place by this policy. If Burke somehow fell outside of its definition, this collection of rights groups and journalists would like the DOJ to explain how it arrived at the conclusion that Burke was not a journalist.

We are especially concerned that the government might not have considered Burke to be subject to the News Media Policy. The government’s response brief takes the position that Burke should not be considered a “member of the news media” who is “acting within the scope of newsgathering” under the News Media Policy, despite the fact that the court has rightly acknowledged Burke’s status as a member of the media. In support of its position, the response brief notes Burke had not recently published under his own byline, does not work for an established media outlet, and sometimes used job titles other than “journalist.”

Of course, one does not need to work full-time as a journalist in order to engage in protected journalism. The PPA protects anyone “with a purpose to disseminate” information to the public, regardless of whether their own byline is attached. And it’s quite common for journalists — including freelancers, producers, researchers, editors, news services and consultants — to provide research and documents for stories they do not themselves write, or even provide written copy without receiving a byline. That does not deprive them of constitutional protection. Courts have rightly warned against limiting the First Amendment’s press clause to established media outlets — a warning that is especially important as technological advances give rise to new forms of journalism while traditional news outlets close their doors at alarming rates.

Thus, if the DOJ determined Burke is not a member of the news media, clarity is needed regarding why, so that other non-traditional journalists will know whether their newsgathering is protected.

It’s an important question to ask. The internet has democratized both information gathering and information dissemination. Journalism is no longer restricted to sweaty men with press credentials tucked in their fedora hatbands who spent most of their time gauging the distance between their interview subjects and the nearest phone booth.

While today’s journalism may still contain any number of sweaty men, the lack of press credentials/fedoras/phone booths does not mean only those who cling to the old ways — steady employment, frequent bylines, landline access, etc. — are worthy of being considered “journalists.” Literally anyone can be a journalist. All it takes is the willingness to find subject matter of public interest and report on it.

The DOJ’s actions in this case suggest it still believes — despite recent statements to the contrary — that it will only consider people who don’t piss off more powerful people to be journalists. In this case, Fox News was angered and decided it needed to get law enforcement involved. But that’s where discretion comes into play. The DOJ could have walked away from this. And it should have. What it’s doing here flies in the face of its own self-imposed restraints — an effort that shows just how truly worthless self-imposed restraints are. Unless you’re willing to follow them, they may as well not exist at all.

Filed Under: cfaa, doj, journalism, raids, tim burke

Kansas Cops Raid Small Town Newspaper In Extremely Questionable ‘Criminal Investigation’

from the sorry-about-the-boot-prints-on-your-rights dept

The free press is supposed to be free. That’s what the First Amendment means. Journalists have a long-acknowledged, supported-by-decades-of-precedent right to publish information that may make the government uncomfortable.

When cops start raiding press outlets, everyone takes notice. This isn’t how this works — not in the United States with its long list of guaranteed rights.

But that’s what happened at a small newspaper in Kansas, for reasons local law enforcement is currently unwilling to explain.

In an unprecedented raid Friday, local law enforcement seized computers, cellphones and reporting materials from the Marion County Record office, the newspaper’s reporters, and the publisher’s home.

Eric Meyer, owner and publisher of the newspaper, said police were motivated by a confidential source who leaked sensitive documents to the newspaper, and the message was clear: “Mind your own business or we’re going to step on you.”

The city’s entire five-officer police force and two sheriff’s deputies took “everything we have,” Meyer said, and it wasn’t clear how the newspaper staff would take the weekly publication to press Tuesday night.

While there’s still some speculation about the reason for this raid, this law enforcement action has at least accelerated the demise of the paper’s owner.

Stressed beyond her limits and overwhelmed by hours of shock and grief after illegal police raids on her home and the Marion County Record newspaper office Friday, 98-year-old newspaper co-owner Joan Meyer, otherwise in good health for her age, collapsed Saturday afternoon and died at her home.

She had not been able to eat after police showed up at the door of her home Friday with a search warrant in hand. Neither was she able to sleep Friday night.

She tearfully watched during the raid as police not only carted away her computer and a router used by an Alexa smart speaker but also dug through her son Eric’s personal bank and investments statements to photograph them. Electronic cords were left in a jumbled pile on her floor.

Sure, correlation is not causation, but one can reasonably expect that a law enforcement raid on an elderly person’s home — especially one who had just found out her paper had been raided by the same officers — would not result in an extended life expectancy.

Even if you ignore the death as being nothing more than the result of being 98 years old, you have to recognize the insane overreach that saw a newspaper’s offices raided, followed by a raid of the newspaper owner’s home.

In addition to these raids, officers also raided the home of vice mayor Ruth Herbel.

All anyone knows is what’s stated in the warrant application, as well as a recent bit of friction involving the paper, some leaked DUI records, and a local business owner.

According to Meyer, a retired University of Illinois journalism professor, the raid came after a confidential source leaked sensitive documents to the newspaper about local restaurateur Kari Newell. The source, Meyer said, provided evidence that Newell has been convicted of DUI and was driving without a license—a fact that could spell trouble for her liquor license and catering business.

Meyer, however, said he ultimately did not decide to publish the story about Newell after questioning the motivations of the source. Instead, he said, he just alerted police of the information.

“We thought we were being set up,” Meyer said about the confidential information.

That’s according to the paper’s co-owner, Eric Meyer. These raids were set in motion by information the newspaper didn’t even publish and despite the fact the Marion County Record informed law enforcement about the leaked info.

That’s one theory: that Kari Newell had enough pull to put the police in motion to silence a potential publisher of leaked info that, to this point, had not made the leaked information public.

There’s also another theory, which suggests something even more horrible than a local business owner weaponizing local law enforcement to keep their own misdeeds under wraps.

An interview with Eric Meyer by Marisa Kabas suggests this might have nothing to do with a local restaurateur’s alleged drunk driving. What may actually be happening here is local law enforcement attempting to silence reporting about… well, local law enforcement.

What has remained unreported until now is that, prior to the raids, the newspaper had been actively investigating Gideon Cody, Chief of Police for the city of Marion. They’d received multiple tips alleging he’d retired from his previous job to avoid demotion and punishment over alleged sexual misconduct charges.

And that’s a new wrinkle that makes everything worse. Raiding a newspaper, a newspaper owner’s home, and the home of the vice mayor over unpublished news about a local businessperson’s DUI problems is one thing. Performing these raids to prevent a small paper from publishing what it had discovered about the chief of police is quite another. The first is a horrible infringement of First Amendment rights. The latter is a hideous abuse of law enforcement powers.

According to the warrant, the cops are investigating a couple of crimes. One seems extremely unrelated to either theory: “Identify Theft.” That crime is described as expected: the use of another person’s identity to commit fraud. Nothing in either theory suggests anything like that was committed by the paper, its owners, or the vice mayor. There has been some talk that if you squint and cheat, you could conceivably argue that a possible method of checking Newell’s driver’s license could possibly, technically, violate the state’s identity theft law, but that is an extreme stretch, and still would not justify the full raid and seizures.

The other law cited in the warrant — K.S.A. 21-5839 — is the real problem here. The state law is pretty much the CFAA: a catch-all for “computer” crimes that allows law enforcement (if so motivated) to treat almost anything that might resemble a journalistic effort to gather facts as a crime against computers.

There’s a whole lot of vague language about “authorization,” which means opportunistic cops can use this law to justify raids simply because they did not “authorize” any release of information pertaining to either (a) DUI arrests or citations, or (b) the chief of police’s past history as an alleged sex fiend.

What’s on the record (such as it is) suggests these raids are the acts of officers seeking to protect one of their own: police chief Gideon Cody. The end result of the raids was the seizing of the means of (press) production. Reporters’ computers and phones were seized, along with the small paper’s server — seizures that appear to be designed to silence this press outlet. While ongoing silence would obviously protect the police department, as well as a business owner who may not want the wrong kind of press attention, Occam’s Razor suggests cops will always be far more interested in protecting themselves than taxpayers, no matter how (comparatively) rich they might be.

The Marion, Kansas Police Department has responded to the national outrage generated by its actions. And its official statement uses a whole lot of words to say absolutely nothing.

The Marion Kansas Police Department has has several inquiries regarding an ongoing investigation.

As much as I would like to give everyone details on a criminal investigation I cannot. I believe when the rest of the story is available to the public, the judicial system that is being questioned will be vindicated.

I appreciate all the assistance from all the State and Local investigators along with the entire judicial process thus far.

Speaking in generalities, the federal Privacy Protection Act, 42 U.S.C. §§ 2000aa-2000aa-12, does protect journalists from most searches of newsrooms by federal and state law enforcement officials. It is true that in most cases, it requires police to use subpoenas, rather than search warrants, to search the premises of journalists unless they themselves are suspects in the offense that is the subject of the search.

The Act requires criminal investigators to get a subpoena instead of a search warrant when seeking “work product materials” and “documentary materials” from the press, except in circumstances, including: (1) when there is reason to believe the journalist is taking part in the underlying wrongdoing.

The Marion Kansas Police Department believes it is the fundamental duty of the police is to ensure the safety, security, and well-being of all members of the public. This commitment must remain steadfast and unbiased, unaffected by political or media influences, in order to uphold the principles of justice, equal protection, and the rule of law for everyone in the community. The victim asks that we do all the law allows to ensure justice is served. The Marion Kansas Police Department will nothing less.

First off, the judicial system isn’t what’s being “questioned.” It’s the acts of this particular cop shop, which will always be far less impartial than the judges overseeing their cases. While we would like to know why these search warrants were granted, we’re far more interested in why law enforcement sought them in the first place.

The rest of this non-explanation is just CYA boilerplate. We all know how cops are supposed to behave. A cop frontmouth telling us that what we’re witnessing is nothing more than cops behaving the way we expect them to — while refusing to provide any specifics — means nothing at all until the facts come out. The problem is the Marion Police Department thinks the lack of facts means it should be given the benefit of the doubt, rather than recognizing this is a situation it will need to fully justify if it expects to salvage what’s left of its eroding reputation.

Either way, what local law enforcement should have immediately recognized, long before the raids were carried out, is that this would draw national attention to these unconstitutional raids, as well as give the Marion County Record a bunch of fans capable of offsetting the damage done by these blundering officers.

This is from Meyer, the paper’s surviving co-owner:

It is kind of heartwarming: One of the things that I just noticed was we got this incredible swelling of people buying subscriptions to the paper off of our website. We got a lot of them, including some—I’m not gonna say who they’re from—but one of them is an extremely famous movie producer and screenwriter who came in and subscribed to the paper all of a sudden. I mean, it’s like, why are people from Poughkeepsie, New York and Los Angeles, California and Seattle, Washington and, you know, all these different places subscribing to the paper?

But a lot of people seem to want to help, and we’ve had people calling, asking “where can I send money to help you?” And, well, we don’t need money right now. We just are gonna have a long weekend of work to do. But we’ll catch up.

No matter the reason for the raids, the cops fucked up. But it will take a lawsuit to hold them accountable for their actions. No one outside of the participating departments believes these actions were justified. No one believes these raids weren’t carried out for the sole purpose of protecting people in power, whether it was a local business owner or the local police chief. Everything about this is wrong. Hopefully, a court will set this straight, as well as require the PD to explain the motivation for its actions in detail, putting to rest the speculation these oversteps have generated.

Filed Under: 1st amendment, 4th amendment, cfaa, computer crimes, eric meyer, free press, free speech, gideon cody, hacking, identity theft, joan meyer, journalism, kansas, kari newell, marion pd, police raid, ruth herbel
Companies: marion county record

‘Free Speech Absolutist’ Elon Musk Files Obvious SLAPP Suit Against Non-Profit Critic

from the musk-is-more-of-a-slappist-than-a-free-speech-absolutist dept

There’s so much to dig into on this one. First off, just to state my own bias upfront, I’m not a fan of the Center for Countering Digital Hate (CCDH). Literally just a few days ago I wrote about one of its highly questionable studies and how it’s being used (badly) to justify a terrible bill in California. Beyond that, I think that the organization has a history of publishing overhyped reports that the media (and some politicians) love, but which do not accurately reflect reality.

So, when CCDH produced a report recently claiming that there was a surge in hateful content on ExTwitter, I didn’t cover it, because I don’t trust the group’s methodology to be sound, even if it is likely true that ExTwitter has enabled more hateful content. It also wasn’t that surprising or newsworthy when Linda Yaccarino, CEO of ExTwitter, hit back at the report, claiming it was wrong and that the company was successfully suppressing hateful speech using its visibility filtering tools (side note: this is somewhat ironic, given how people still insist that Elon took over Twitter to get rid of “shadowbanning,” when he’s not just doubled down on visibility filtering, but strongly advocates for it).

But then things blew up in the last few days. It came out on Monday that ExTwitter had sent a pompous, over-the-top, nonsensical threat letter to CCDH from Elon Musk’s personal attack dog, Alex Spiro. Even as much as I disagree with CCDH and their methodology, the letter from Spiro is laughable in its vexatious nonsense:

I write on behalf of my client X Corp., which operates the Twitter platform. It has come to our attention that you and your organization, the Center for Countering Digital Hate, (“CCDH”), have made a series of troubling and baseless claims that appear calculated to harm Twitter generally, and its digital advertising business specifically. CCDH regularly posts articles making inflammatory, outrageous, and false or misleading assertions about Twitter and its operations, which CCDH holds out to the general public as supported by “research.” CCDH fixes this label on its outlandish conclusions about Twitter despite failing to conduct (or even attempt) anything resembling the rigorous design process, analytical procedures, or peer review that a reasonable person would expect to accompany research product published by any reputable organization.

Spiro calls out CCDH’s questionable methodology, which (again) I agree is poor. But poor research methodology does not violate the law, and sending a threatening letter over it seems like a clear SLAPP situation. Spiro’s letter implies a defamation claim:

CCDH’s claims in this article are false, misleading, or both, and they are not supported by anything that could credibly be called research. The article provides no methodology for its selection or testing of tweets, no baseline for Twitter’s enforcement time frame, and no explanation as to why the 100 chosen tweets represent an appropriate sample of the nearly 500 million tweets sent per day from which to generalize about the platform’s content moderation practices. And despite purporting to conclude that Twitter favors Twitter Blue subscribers by allowing them to “break its rules with impunity,” the article provides no evidence of differing treatment in content moderation actions against Twitter Blue subscribers and non-subscribers, and indeed reflects no effort to conduct any testing to support this claim, which appears under its headline. The article cites no sources other than different, similarly threadbare posts on CCDH’s own website, and fails to identify the qualifications of any of the researchers who worked on the article.4 In other words, the article is little more than a series of inflammatory, misleading, and unsupported claims based on a cursory review of random tweets.

Even more bizarrely, it suggests there’s a “false designation of origin” claim under the Lanham Act. Which makes zero sense and just seems like flinging shit at the wall.

CCDH’s lawyers hit back on Monday, explaining why this was all nonsense:

We write in response to the ridiculous letter you sent our clients on behalf of X Corp., which operates the Twitter (or the new “X”) platform, dated July 20, 2023. (A copy of your July 20 letter is attached.) In that letter, you claim that CCDH has supposedly made “inflammatory, outrageous, and false or misleading assertions about Twitter” and suggest it has engaged in some sort of conspiracy “to drive advertisers off Twitter by smearing the company and its owner.” These allegations not only have no basis in fact (your letter states none), but they represent a disturbing effort to intimidate those who have the courage to advocate against incitement, hate speech and harmful content online, to conduct research and analysis regarding the drivers of such disinformation, and to publicly release the findings of that research, even when the findings may be critical of certain platforms

As you know, CCDH recently published an article concerning the proliferation of hate speech on Twitter and the company’s failure to address it. That article involved CCDH’s review of 100 hateful tweets that contained racist, homophobic, neo-Nazi, antisemitic, or conspiracy content—i.e., content that plainly violates Twitter’s own policies in this regard. One tweet, for example, stated that “black culture has done more damage [than] the [Ku Klux] [K]lan ever did.” Another referenced the white supremacist ideology known as “replacement theory,” claiming that “[t]he Jewish Mafia wants to replace us all with brown people.” And yet another explicitly encouraged violence against the LGBTQ+ community, suggesting that LGBTQ+ rights activists need “IRON IN THEIR DIET. Preferably from a #AFiringSquad.” CCDH staff reported all 100 tweets using Twitter’s own designated reporting tool. Four days later, 99 of the 100 tweets identified by CCDH remained available on Twitter.

Tellingly, after CCDH published this article, Twitter did not spend its time and resources addressing the hate and disinformation that CCDH had identified, despite Twitter’s purported commitment to addressing hate speech on its platform. Instead, your clients decided to “shoot the messenger” by attempting to intimidate CCDH and Mr. Ahmed. In your July 20 letter, for example, you write that “CCDH’s claims in [its report] are false, misleading, or both”—although you point to no actual inaccuracy—“and they are not supported by anything that could credibly be called research”—although the article describes the basis for its conclusions and the methodology it used. While it is true that CCDH did not undertake a review of the “500 million tweets” that you claim are posted on Twitter each day, CCDH never claimed to have done so. In fact, under Mr. Musk’s leadership, Twitter has taken steps to curtail research on the platform. To criticize CCDH for being too limited in its research while simultaneously taking steps to close the platform off to independent research and analysis is the very height of hypocrisy.

The response letter also took on the ridiculous suggestion of a Lanham Act claim:

But your July 20 letter doesn’t stop there. You go on to state that there is “no doubt that CCDH intends to harm Twitter’s business” and warn that you are “investigating” whether CCDH has violated Section 43(a) of the Lanham Act. That threat is bogus and you know it. None of the examples cited in your letter constitutes the kind of advertisement or commercial speech that would trigger the Lanham Act. To the contrary, the statements you complain about constitute political, journalistic, and research work on matters of significant public concern, which obviously are not constrained by the Lanham Act in any way. Moreover, as a nonprofit working to stop online hate, CCDH is obviously not in competition with Twitter, which makes your allegations of a Lanham Act injury even more fanciful. Your assertion that the goal in CCDH’s research and reporting is to benefit Twitter’s competitors also ignores the fact that CCDH has published critical, highly-publicized reports about other platforms, including Instagram, Facebook, and TikTok. Simply put, there is no bona fide legal grievance here. Your effort to wield that threat anyway, on a law firm’s letterhead, is a transparent attempt to silence honest criticism. Obviously, such conduct could hardly be more inconsistent with the commitment to free speech purportedly held by Twitter’s current leadership.

I mean, all of this is nonsense. Spiro’s threat letter was clearly a ridiculous (and poorly argued) intimidation tactic. And it’s doubly hilarious that it claims CCDH’s methodology doesn’t count because the sample size is too small, when Musk’s entire faked reason for trying to get out of the Twitter deal was too much spam, based on a similarly misleading sample size.

But, more to the point, Elon has pretended all along to be a supporter of free speech. Many of us have pointed out what a ridiculously false statement that is, and Musk has a long history of suppressing and attacking critics.

Anyway… around the same time that CCDH was sending this letter, ExTwitter and Musk were (stupidly) filing an actual lawsuit against CCDH. The case is clearly a SLAPP suit, but (oddly) ExTwitter is not represented by Spiro or his firm Quinn Emanuel. Nor does it make any of the claims suggested in the letter (defamation or a Lanham Act claim).

Instead, the lawsuit is even dumber. Filed by the law firm of White & Case (which is big enough to know better than to file vexatious SLAPP suits), it claims breach of contract (?!?) and a CFAA violation for hacking the site, along with the usual throw-in claims of “intentional interference” and “inducing breach of contract.”

The claims are ridiculous, but they are a strong reminder that SLAPP suits come in many forms and need not be about defamation. Of course, having this actual lawsuit preceded by Spiro’s weak-ass attempt at intimidation, which strongly implies defamation, only helps to prove that the nonsense claims here are pure SLAPPs and a direct attack on free speech by someone who cosplays online as a “free speech absolutist.”

Let’s quickly walk through why these claims are frivolous:

CCDH intentionally and unlawfully accessed data it sought regarding the X platform in two ways. CCDH US, as a registered user of X, scraped data from X’s platform in violation of the express terms of its agreement with X Corp. CCDH also convinced an unknown third party — in violation of that third party’s contractual obligations — to improperly share login credentials to a secured database that CCDH then accessed, and retrieved information from, on multiple occasions without authorization. CCDH, in turn, selectively quoted data it obtained via those methods. It did so out of context in public reports and articles it prepared to make it appear as if X is overwhelmed by harmful content, and then used that contrived narrative to call for companies to stop advertising on X.

The specifics here are that ExTwitter is claiming someone gave CCDH access to ExTwitter’s account with Brandwatch. Brandwatch has a tool for advertisers to monitor their brands on social media. Twitter has an ongoing relationship with Brandwatch (likely using the Twitter API) to help customers of Brandwatch (generally advertisers) see what’s happening on social media.
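To make concrete what “accessing a dashboard with someone else’s login credentials” means in practice, here’s a minimal sketch of how a token-authenticated social-listening query typically works. All endpoint and field names below are invented for illustration and are not Brandwatch’s actual API; the point is simply that the server checks the credential, not the person holding it, so a shared token grants the full data scope of the customer account that issued it.

```python
# Hypothetical sketch of a token-authenticated social-listening query.
# Endpoint and field names are invented; this is NOT Brandwatch's real API.

def build_query(token: str, project_id: int, search: str) -> dict:
    """Assemble (but do not send) an authenticated API request.

    The server only validates the token -- it cannot tell whether the
    holder is the customer who generated it or a third party the
    customer shared it with.
    """
    return {
        "url": f"https://api.example-listening-tool.com/projects/{project_id}/mentions",
        "headers": {"Authorization": f"Bearer {token}"},
        "params": {"query": search, "pageSize": 100},
    }

# Anyone in possession of the token builds an identical,
# equally "authorized-looking" request:
request = build_query("customer-issued-token", 42, "sample search")
```

This is also why the complaint’s theory sits so awkwardly: the credential belongs to a Brandwatch customer under a Brandwatch contract, so any grievance about its sharing runs between those two parties, not to ExTwitter.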

ExTwitter claims that someone with a Brandwatch account gave CCDH access to their dashboard:

Twitter is informed and believes, and on that basis alleges, that none of the Defendants (except for the third party who is included as Doe Defendant and improperly shared its login credentials with CCDH) are or ever have been customers of Brandwatch, and have never been provided with login credentials that would enable them to permissibly access the data with authorization. None of the Defendants (again, except for the third party who improperly shared its login credentials with CCDH) are or ever have been parties to the Brandwatch Agreements. And neither X nor Brandwatch has ever consented, in any form or in any way, to any of Defendants (except the third party who provided CCDH with its login credentials and who is named as a Doe Defendant) the data that X Corp. provided to Brandwatch under the Brandwatch Agreements.

In order to prepare and publish the so-called “research” reports and articles about X, CCDH has — since at least March 2021 — necessarily obtained access to and accessed the Licensed Materials improperly and without authorization. Indeed, CCDH has admitted as much, citing Brandwatch—a platform it never had any right to access—as a source of its data in its “research” reports, despite that data being accessible only to authorized users via login credentials, which the CCDH was not. These actions were unknown to Brandwatch and to X until recently.

Even if true, this is no basis for ExTwitter to sue CCDH. It might (in theory, but probably not in reality) have a claim against Brandwatch or the Brandwatch user. Or, more likely, Brandwatch might have a claim against its users for breaching its contract. But there’s no transitive property that gives ExTwitter a legitimate claim against CCDH.

This is all just fluff and nonsense.

There’s also this:

Twitter is informed and believes, and on that basis alleges, that CCDH’s conduct as described herein is intended to do more than further CCDH’s own censorship efforts.

Again, I disagree with CCDH’s methodology and its goals. I think it’s a terrible organization that gets way too much attention for its shoddy research and biased takes. But what is described above is literally the quintessential definition of free speech. CCDH cannot meaningfully “censor” anything. The only thing it can do is use its own free speech rights to try to convince others to disassociate from someone.

That’s free speech. That’s the marketplace of ideas.

I can disagree with CCDH’s position and its research and arguments, and still recognize that it has every right to advocate for whatever it wants to advocate for. That’s not censorship, Elon, that’s free speech.

So, the breach of contract claims are a total joke. It’s not the contract between ExTwitter and CCDH that was broken. And the CFAA claims are even more disgusting. We’ve obviously written about the horror that is the CFAA many times before. The Computer Fraud and Abuse Act, passed because Ronald Reagan was confused and thought the movie War Games was true, has been widely abused for years by companies (and law enforcement) using bogus claims of “unauthorized access” to attack people for doing things on their computers that those companies simply don’t like. The broad nature of the law has led to it being called “the law that sticks” because it’s often used when all other laws would fail.

Thankfully, over the past few years, the courts have pushed back on the most egregious uses of the CFAA, but it’s still a bad law. And here, the CFAA claims are particularly laughable:

Defendants, except for the third party who provided CCDH with its login credentials, have violated the CFAA by knowingly, and with intent to defraud X Corp., accessing a protected computer, without authorization, and by means of such conduct furthered the fraud and obtained one or more things of value

Bullshit. Again, this claim only makes (very slightly, but not really) sense if it were Brandwatch making it, not ExTwitter. The complaint makes it clear that the computer systems in question were Brandwatch’s, not ExTwitter’s:

Defendants (except for the third party who is included as a Doe Defendant and improperly shared its login credentials with CCDH) were never validly given login credentials to access the data provided under the Brandwatch Agreements. Those Defendants nonetheless, knowing the data was secured pursuant to the Brandwatch Agreements and that those Defendants did not have authorization to access it, convinced an unknown third party, who is likely a Brandwatch customer, to share its login credentials with the remaining Defendants. Those Defendants then accessed that data without authorization, as admitted in CCDH’s reports and articles discussed above, in furtherance of obtaining data regarding X that those Defendants could mischaracterize as part of its campaign to call on companies to stop advertising on X.

And the “loss” part, which is a necessary part of a CFAA claim, is particularly ridiculous even by CFAA standards, in which “losses” are often quite absurd.

X has suffered loss as a result of these violations, including, without limitation, amounts expended attempting to conduct internal investigations in efforts to ascertain the nature and scope of CCDH’s unauthorized access to the data, significant employee resources and time to participate and assist in those investigations, and attorneys’ fees in aid of those investigations and in enforcing the relevant agreements. These losses amount to well over $5,000 aggregated over a one-year period

Yes. That’s right. Elon is claiming that the “loss” under the CFAA is that ExTwitter employees had to spend time investigating how CCDH got the information it used to make fun of Twitter.
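For context, a civil CFAA claim of this type generally requires a “loss” of at least $5,000 aggregated over any one-year period (18 U.S.C. § 1030(c)(4)(A)(i)(I)), and the complaint’s theory is that internal investigation time and legal fees count toward that floor. As a rough illustration of the arithmetic (all dollar figures below are invented, not taken from the complaint):

```python
# Hypothetical illustration of the CFAA's $5,000 civil "loss" floor
# (18 U.S.C. § 1030(c)(4)(A)(i)(I)). All dollar figures are invented.

CFAA_LOSS_FLOOR = 5_000  # minimum aggregate loss in a one-year period

claimed_losses = {
    "internal investigation (staff hours)": 3_500,
    "attorneys' fees for the investigation": 2_000,
}

total = sum(claimed_losses.values())
meets_floor = total >= CFAA_LOSS_FLOOR
print(total, meets_floor)  # 5500 True
```

The statutory definition of “loss” does include reasonable costs of responding to an offense and conducting a damage assessment, which is how investigation costs get counted at all; the absurdity here is that the “offense” being investigated is critics reading data, not any damage to a computer.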

This is all hogwash. No matter what you think of CCDH, it has every right to analyze Twitter and post its own interpretation of how well the company is handling hateful content, just as I (or Musk) have the right to respond and point out the problems with their opinions or analysis.

That is free speech.

What is not free speech is using the power of the state to file vexatious, bogus lawsuits to try to intimidate them for their speech. The fact that the case as filed rests on entirely different (but equally ridiculous) legal theories than the letter that preceded it really only serves to underline that the intent here was to find the best way to intimidate critics. The lawyer filing this case, Jonathan Hawk, is an experienced lawyer working for a giant law firm. He must know that this case is a vexatious, nonsense SLAPP suit, but he still agreed to file it. It’s disgusting.

While California has an anti-SLAPP law, as some have noted, it may not apply to this case. The CFAA claim is a federal claim, and California’s anti-SLAPP law (while it can be used in federal court) can’t be used against federal claims (this is why we NEED A FEDERAL ANTI-SLAPP law). And, while the breach of contract claim might be arguable under California’s anti-SLAPP, ExTwitter can and will argue it’s not really about speech… In other words, this is still going to be a pain for CCDH. (Edited to provide a clearer explanation of the anti-SLAPP issue…).

I asked 1st Amendment lawyer Ken White to dig into the California anti-SLAPP analysis, and he explained why it (unfortunately) likely won’t apply here:

When evaluating an anti-SLAPP motion, California courts look to the legal nature of the claim, not the plaintiff’s intent in bringing it. A case that the plaintiff filed because of protected activity, or in retaliation against protected activity, doesn’t come under the statute unless the legal claims are based on protected activity. This is sometimes called the “gravamen” test. So, for instance, if a landlord sues to evict you and cites your non-payment of rent, even if you claim that the real motive is your organizing tenants to protest the landlord, the anti-SLAPP statute doesn’t apply because the gravamen of the claim – the thrust of the claim – isn’t your protected speech. Here, the defendant can’t use the anti-SLAPP statute to attack the CFAA claim because it’s a federal cause of action. In addition, I think it’s going to be tricky showing that the gravamen of the other claims is the speech as opposed to breach of the contract regarding access to the data. It’s not a sure loss, but it’s a problem.

And, again, this is why we really need a strong federal anti-SLAPP law to deal with situations like this.

But, let’s be 100% clear about this: Elon Musk is not a free speech absolutist or a free speech supporter. He’s a thin-skinned free speech suppressor willing to file vexatious SLAPP suits to intimidate those who criticize him.

Filed Under: alex spiro, california, cfaa, criticism, defamation, elon musk, free speech, intimidation, jonathan hawk, lanham act, research, slapp suit, trademark
Companies: ccdh, twitter, x

Bungie Wins Default Judgment Against Danish Cheat Purveyor In Ruling That Encourages More CFAA Abuse

from the well,-'win'-might-be-overstating-things dept

A lawsuit [PDF] against a cheat creator has swung almost completely in Bungie’s direction, mainly thanks to the Danish defendant being unwilling to travel across the pond to defend himself in court. The claims are numerous, ranging from copyright infringement to trademark infringement to CFAA violations to the ever-popular (but rarely successful) RICO.

While Bungie seems willing to fight against bogus DMCA claims that affect its fans, it’s also apparently willing to wield the same law to punish people who sell cheats to other users. This doesn’t always work, but it works well enough that Bungie continues to aggressively protect its products from hacks offered by developers to people unwilling (or unable) to compete on a level playing field.

This case centers on the “Wallhax” cheats for Destiny 2 apparently developed by Canadian company Elite Boss Tech and Denmark resident Daniel Larsen, along with a handful of others Bungie managed to unmask after obtaining a settlement from two defendants last October.

The default judgment awarded here comes with a $16 million price tag. But that number really means nothing. If Bungie was unable to persuade Daniel Larsen to engage in its extraterritorial lawsuit, it’s unlikely it will be able to convince him to hand over millions of dollars in response to this judgment.

Citing the settlements Bungie has already been able to obtain from cooperating defendants, the court pushes forward with the default judgment. The court agrees with pretty much every claim raised by Bungie, including the civil version of RICO most plaintiffs fail to apply as carefully as Bungie does here.

Specifically, Bungie has alleged and provided evidence that Larsen and the enterprise engaged in criminal copyright infringement and money laundering in violation of 18 U.S.C. §§ 1956 and 1957. As to criminal copyright infringement, Bungie must demonstrate that Larsen willfully infringed on a valid copyright for purposes of commercial advantage or private financial gain. See 17 U.S.C. § 506(a). Here, the allegations and evidence suffice to show that Larsen willfully accessed and utilized Bungie’s Destiny 2 software in order to develop the Wallhax cheat, which directly infringed on Bungie’s two valid copyrights for Larsen’s personal gain. The allegations and evidence are also sufficient to satisfy the predicate act of money laundering. A defendant engages in money laundering under 18 U.S.C. § 1956(a)(1)(A)(i) when they (1) conduct (or attempt to conduct) (2) a financial transaction, (3) knowing that the property involved in the financial transaction represents the proceeds of some unlawful activity, (4) with the intent to promote the carrying on of specified unlawful activity, and (5) the property was in fact be derived from a specified unlawful activity. 18 U.S.C. § 1956(a)(1). Bungie has shown that Larsen and the Wallhax enterprise obtained financial proceeds from the sale of the Wallhax cheat, which was the product of criminal copyright infringement.

I haven’t read all the predicate acts for money laundering, but the name of the crime suggests actual “laundering” of proceeds to disguise their illegal origin. But it appears I have that all wrong: simply depositing proceeds from crimes into any financial institution is apparently “laundering,” even if no steps are taken to obfuscate the illegal origins of the funds. Weird.

It’s a default judgment so there are no counterarguments to consider. But the judge decides Bungie’s CFAA arguments have enough merit to be carried along with the rest of the allegedly illegal flotsam:

Bungie has provided sufficient allegations and evidence that Larsen violated the CFAA when he intentionally accessed the Destiny 2 servers to obtain the Destiny 2 software to create the Wallhax cheat without authorization. By doing so, Larsen violated that terms of the LSLA and manipulated key elements of the Destiny 2 software through the Wallhax cheat…

That’s not very descriptive and nothing in this decision explains what exactly constituted this CFAA violation. For that, we have to go back to Bungie’s amended complaint, which provides some helpful context:

Defendants, acting in concert with users who deploy their cheat software, obtain data from within the Destiny 2 client software’s memory space that the users are not authorized to access – specifically the positional information used in Defendants’ “ESP” display.

In addition, Defendants are fully aware that users who deploy their cheat software do so in violation of the LSLA, and that access to the Destiny 2 client software memory space by such users is entirely unauthorized.

In accessing the Destiny 2 client software’s memory space without authorization, Defendants’ software obtains information from the Destiny 2 system to enable the presentation of the “ESP” display on the users’ computers.

In addition, by accessing the Destiny 2 client software’s memory space without authorization, Defendants’ software takes control of the aiming function of the Destiny 2 client software, enabling the player to fire with perfect accuracy every time.

As a result of this conduct, Defendants’ software endows cheating users with significant advantages not available to players who play the game honestly.

This is all very unfortunate but companies like Bungie have plenty of options to deal with cheaters and cheat purveyors. Violating a user agreement should be grounds for banning or account suspension. But they really shouldn’t be considered violations of the CFAA, even in civil cases. While it’s always fun to pile on charges (we learned it from you, prosecutors! [intense sobbing]), it does very little for the internet at large to treat end user license agreement violations as malicious acts. While the acts alleged here appear to be deliberate circumvention, treating any unexpected (or exploratory) use of software as an illegal act (as the court does here) makes it that much tougher to discover and report security flaws or engage in research that utilizes methods service providers may not expect.

It’s not that Bungie is wrong to deter cheaters and those selling cheats to cheaters. It’s that it had a lot of options to deploy that didn’t bring the oft-abused CFAA into the mix. Bungie was always going to win, especially when the defendant no-showed the entire case. Portraying exploitation of areas left unprotected by Bungie as a criminal act in and of itself does no one any favors.

Filed Under: cfaa, cheats, daniel larsen, default judgment, destiny 2, video games, wallhax
Companies: bungie, elite boss tech