
Lawsuit: Cops Stood By While Elderly Woman Was Stabbed 68 Times; Cops: Hey, We Yelled At The House

from the suddenly-incapable-of-a-forcible-entry dept

This isn’t a good look for the Las Vegas Metro PD, even if it’s completely supported by court precedent. No matter how often law enforcement agencies sling around the phrase “protect and serve,” they have almost no legal obligation to do either of those things.

Sometimes a “failure to intervene” allegation might undermine a cop’s attempt to secure qualified immunity for violations committed by other cops, but when it comes to crimes being committed against regular people, cops simply aren’t legally obliged to stop crimes in progress even when they’re already at the scene.

But this lawsuit is hoping a court might find otherwise. Whether or not it does, it will definitely expose some cops for what they are: lazy opportunists who aren’t really in the life-saving business. Even if the officers are found to be on the right side of judicial precedent, they’re not going to come out of this looking like people who should be employed as police officers.

Las Vegas police officers stood outside listening as a 74-year-old woman was stabbed 68 times and killed by one of her sons in her home, a lawsuit filed Saturday alleged.

How the Metropolitan Police Department responded to the event prompted the woman’s other sons to file a lawsuit alleging, among other things, negligence and wrongful death.

[…]

Bonilla was arrested and booked on one charge of murder of an elderly or protected person. After pleading guilty in 2023, he was sentenced to life with the possibility of parole.

Pablo Bonilla — someone well known to Las Vegas law enforcement — murdered his mother in a most horrific manner. Then he walked out the door and surrendered to the law enforcement officers who had basically just hung out outside the house, waiting for the murder-in-progress to resolve itself.

They can’t even say they tried everything they could to prevent this murder from happening. The officers’ report makes it clear they did nothing more than shout in the direction of the house from the safety of their cop car. And in all my years of reporting on police misconduct, I have never seen this particular description of officers’ (in)action:

Bonilla’s arrest report said that officers “challenged the apartment” using a vehicle bullhorn because the apartment’s patio and front door were guarded by metal gates. About 30 minutes after they arrived, they heard Zuniga screaming for help, according to the lawsuit. Afterward, Bonilla appeared at the doorway of the apartment, covered in blood, police said. He was taken into custody.

What even the fuck is that. The apartment wasn’t murdering Paula Prada Zuniga. Her son was. And since when have metal gates on doors and patios ever stopped cops from performing forced entries? Because if that’s all it takes to stop cops from entering residences, every criminal in America can ensure undisturbed criminal activity in perpetuity with a very small investment in security non-tech.

Trust me, these cops would have blown past the supposedly impassable metal gates in a heartbeat if they thought there was cash to seize or drugs to bust or a warrantless search to be had. But when it came to hearing an elderly woman screaming for help as she was brutally stabbed, the officers were suddenly faced with insurmountable obstacles that reduced them to yelling at a house from a safe distance away.

This is the worst kind of policing: officers who don’t feel it’s worth their effort, much less their time, to prevent or respond to a crime in (audible) progress. When confronted with their own laziness and (presumably) cowardice, the cops claimed they had zero chance of entering the house because the same metal gates they’d bypassed for other reasons were now the on-the-ground equivalent of… I don’t know… dealing with a foreign country with no extradition agreement in place.

This is already an absurd abdication of professionalism. But, thanks to the officers’ own report, it’s now in a realm of police failure that goes beyond what any talented satirist could actually create without destroying readers’ suspension of disbelief. “Challenged the apartment,” my ass. These officers need to be fired before they cause any more damage, either by hanging out near in-progress murders or by dragging down the entirety of the LVMPD to their level.

Filed Under: defund the police, las vegas pd, lawsuit, negligence, paula zuniga

Court Dismisses Mark Zuckerberg Personally From Massive ‘Social Media Addicts Children’ Lawsuit

from the ambulance-chasers-chasing-ambulances dept

Over the last few years, there have been a ton of lawsuits, pretty much all of them dubious, arguing that social media is inherently harmful to children (something the research does not show) and that therefore there is some sort of magic product liability claim that will hold social media companies responsible. A bunch of those lawsuits have been rolled up into a massive single multidistrict litigation case in California, under the catchy name: “In re: social media adolescent addiction/personal injury products liability litigation.”

The docket is massive, currently holding well over 750 documents, and I’m sure many more are to come. At least some of the cases tried to pin personal liability on Mark Zuckerberg himself, as if he were gleefully setting out to harm children with Facebook and Instagram.

The court has now dismissed those claims (though with leave to amend). Eric Goldman brought my attention to this latest ruling in the case on his blog (honestly there are so many documents on the docket I had completely missed this one).

As you might expect in a case this massive, with a bunch of personal injury attorneys jumping in with the hope of scoring some massive multi-billion dollar settlement, they’re willing to throw every stupid theory they can come up with against the wall to see if any one gets by an inattentive judge. Goldman’s summary is a bit more diplomatic: “the plaintiff lawyers are in total war mode, throwing seemingly inexhaustible resources at the case to explore every possible issue and angle of liability, no matter how arcane or tangential.”

Anyway, some of the plaintiffs argued that Zuck should be personally liable based on a wacky theory that he negligently concealed and misrepresented how safe Meta’s various platforms were. The judge takes the various claims against Zuck and uses this example from one of the complaints to summarize them:

In Zuckerberg’s testimony before Congress and in other public statements alleged in paragraphs 364 through 391 of the Master Complaint, Defendants Meta and Zuckerberg disclosed some facts but intentionally failed to disclose other facts, making their disclosures deceptive. In addition, Meta and Zuckerberg intentionally failed to disclose certain facts that were known only to them, which Plaintiff and their parents could not have discovered. Had the omitted information been disclosed, the injuries that Plaintiff suffered would have been avoidable and avoided. Plaintiff reasonably would have been on alert to avoid an ultimately dangerous activity. Plaintiff asserts that she has always valued her health, and makes conscious choices to avoid other common dangerous activities teenagers and pre-teens often fall victim to, such as drinking and vaping. Because Plaintiff was unaware of the dangers of Instagram, she could not take those same healthy steps to avoid a dangerous situation. Plaintiff repeats and realleges against Zuckerberg each and every allegation against Meta contained in Count 8 (paragraphs 976 through 987) and Count 9 (paragraphs 988 through 999) in the Master Complaint.

In short, the argument is that Zuck knew Meta was inherently dangerous for kids (which is nonsense, not supported by the data). If only he had said so publicly, they claim, the plaintiff kids in this case would have been good little kids and stopped using Instagram, because Zuck told them it was dangerous.

If this seems absolutely preposterous, that’s because it is.

The plaintiffs also argue that Zuckerberg is personally liable for this due to reports of how much input he has into the design of the various products:

Plaintiffs build out their theory of liability as to Zuckerberg in their opposition to defendant’s motion to dismiss. … They focus primarily on two aspects of Zuckerberg’s role in Meta. First, plaintiffs allege that, from Meta’s inception to the present, Zuckerberg has maintained tight control over design decisions, including those relating to developing user engagement that are at issue in this litigation. Second, emphasizing Zuckerberg’s role as a public figure and given his alleged knowledge of Meta’s platforms’ dangers, plaintiffs allege that his statements about Meta’s platforms’ safety—including some of those excerpted above—form a pattern of concealment that is actionable under theories of fraudulent and negligent misrepresentation and concealment.

The court, impressively, has to look at this question under various different state laws, given that the case rolls up suits from different states, with different state laws applying to different aspects (multidistrict litigation can be nuts). And thus, it notes that in many of the states where this claim was brought, there’s a problem: a bunch of them don’t even recognize the tort of “negligent misrepresentation by omission.” So it’s easy to dismiss such claims against him in those states.

But even in the states that do have such a tort, it doesn’t go well. The court notes that various states have different standards for a “duty to disclose” but basically finds all of them wanting.

Plaintiffs propose three bases for this Court to find Zuckerberg owed a duty to disclose the information he purportedly withheld: (i) Zuckerberg’s “exclusive and superior knowledge” of how Meta’s products harm minors; (ii) Zuckerberg’s “public, partial representations concerning the safety of Meta’s products”; and (iii) Zuckerberg’s fame and public notoriety. (Dkt. No. 538 at 7– 11.) None of these approaches is supported by any state’s law. In short, plaintiffs cannot rely on Zuckerberg’s comparative knowledge alone to establish the kind of “confidential” or otherwise “special” relationship with each plaintiff that these states’ laws require. The Court sets forth the analysis supporting this conclusion as to each of plaintiffs’ three theories below.

The “exclusive and superior knowledge” theory is laughable, as the court points out. It only applies to duties between transacting parties, like if you’re selling someone a car and fail to disclose that the engine fell out or whatever. That’s clearly not the case here:

No plaintiff here pleads they were transacting or were otherwise engaged with Zuckerberg personally. Thus, plaintiffs fail to establish a duty to disclose based on “superior knowledge.”

Again, no luck for the supposed “public partial representations.” As the court notes, in the states that recognize such a concept, it involves transactions (again) between parties with a “special relationship,” where such a disclosure would make sense. That does not exist:

Again, plaintiffs have not pled any relationship—let alone a “special” one—between themselves and Zuckerberg. This theory fails

And, finally, the “fame and public notoriety” theory doesn’t cut it either. Indeed, the court notes that if the plaintiffs’ theory made sense, we’d see an absolute flood of lawsuits any time a public figure failed to “disclose” random information.

Plaintiffs use this broad language to extrapolate a claim here. They argue, on the one hand, that Zuckerberg “was the trusted voice on all things Meta” and “remained an approachable resource to the public,” and, on the other hand, that he accepted Meta’s duty to its customers “[b]y cultivating his roles in public life as both the embodiment of Meta and Silicon Valley’s approximation of a philosopher king.” (Dkt. No. 538 at 9–10.) Specious allusions to Plato aside, plaintiffs have not provided case law to support this interpretation of the Berger standard, nor have they meaningfully grappled with the expansion of state tort law that would result were the Court to recognize the duty they identify. To that end, plaintiffs’ theory would invert the states’ “confidential” or “special” relationship requirements by creating a duty to disclose for any individual recognizable to the public. The Court will not countenance such a novel approach here.

And thus, the claims are dismissed. The court gives the plaintiffs leave to amend based on a theory they apparently tossed in at the last minute about corporate officer liability. The court notes only that it wasn’t fully briefed on that issue, and thus allows the plaintiffs to file an amended complaint addressing it. Normally, if you throw in a claim super late like that, a court will say “too late, too bad,” but here it admits that because the case is so complicated, with so many moving parts, it will let this one slide.

Given the aggressive nonsense of the lawyers here, it seems likely that they’ll push forward with their theory and file an amended complaint, but it seems unlikely to survive.

Unfortunately, though, this is the dumb world we live in today. Product liability claims are being used against internet companies (and their executives) because any time anything bad happens, people want to find someone to blame. And, of course, there are sketchy bottom-feeder lawyers willing to bring such cases to court, in hopes of cashing in.

Filed Under: addiction, disclosures, mark zuckerberg, negligence, product liability, social media, social media addiction
Companies: meta

As Predicted, Judge Dismisses Nearly All Of Sarah Silverman, Michael Chabon, And Other Authors’ Lawsuits Against OpenAI

Can’t say we didn’t warn everyone. Last summer we pointed out that Sarah Silverman and a bunch of other authors suing AI companies for copyright infringement seemed to only demonstrate that they didn’t understand how copyright works.

And now, Judge Araceli Martinez-Olguin has dismissed most of the claims in three related cases brought by authors against OpenAI, noting that their theories are just not how copyright law works. The judge does leave them open to amend the claims, but it’s difficult to see how any of the cases will survive. OpenAI sought to dismiss all claims except for the direct infringement claim. In its motion to dismiss, OpenAI notes that it will seek to resolve the direct infringement question as a matter of law later in the case (i.e., it will seek summary judgment, likely arguing fair use).

For the rest, though, OpenAI sought to dismiss the claims outright, and mostly got exactly what it wanted. First up: the pernicious “vicarious copyright infringement” claims that are frequently brought but rarely hold up. They certainly don’t hold up here:

Plaintiffs suggest that they do not need to allege a “substantial similarity” because they have evidence of “direct copying.” ECF 48 (“Response”) at 15. They argue that because Defendants directly copied the copyrighted books to train the language models, Plaintiffs need not show substantial similarity. Id. at 15 (citing Range Rd. Music, Inc. v. E. Coast Foods, Inc., 668 F.3d 1148, 1154 (9th Cir. 2012) (explaining that “substantial similarity” helps determine whether copying occurred “when an allegedly infringing work appropriates elements of an original without reproducing it in toto.”). Plaintiffs misunderstand Range Rd. There, the court did not need to find substantial similarity because the infringement was the public performance of copyrighted songs at a bar. Range Rd., 668 F.3d at 1151-52, 1154. Since the plaintiffs provided unrebutted evidence that the performed songs were the protected songs, they did not need to show that they were substantially similar. Id. at 1154. Distinctly, Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books. Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials. See Skidmore, 952 F.3d at 1064; Corbello, 974 F.3d at 973-74.

Plaintiffs’ allegation that “every output of the OpenAI Language Models is an infringing derivative work” is insufficient. Tremblay Compl. ¶ 59; Silverman Compl. ¶ 60. Plaintiffs fail to explain what the outputs entail or allege that any particular output is substantially similar – or similar at all – to their books. Accordingly, the Court dismisses the vicarious copyright infringement claim with leave to amend.

Next up were the always weak DMCA 1202 claims about the “removal or alteration of copyright management information.” That also does not fly:

Even if Plaintiffs provided facts showing Defendants’ knowing removal of CMI from the books during the training process, Plaintiffs have not shown how omitting CMI in the copies used in the training set gave Defendants reasonable grounds to know that ChatGPT’s output would induce, enable, facilitate, or conceal infringement. See Stevens, 899 F.3d at 673 (finding that allegations that “someone might be able to use [the copyrighted work] undetected . . . simply identifies a general possibility that exists whenever CMI is removed,” and fails to show the necessary mental state). Plaintiffs argue that OpenAI’s failure to state which internet books it uses to train ChatGPT shows that it knowingly enabled infringement, because ChatGPT users will not know if any output is infringing. Response at 21-22. However, Plaintiffs do not point to any caselaw to suggest that failure to reveal such information has any bearing on whether the alleged removal of CMI in an internal database will knowingly enable infringement. Plaintiffs have failed to state a claim under Section 1202(b)(1).

Same thing with 1202(b)(3), regarding the alleged distribution of copies. That’s a problem, since they don’t show any such distribution:

Under the plain language of the statute, liability requires distributing the original “works” or “copies of [the] works.” 17 U.S.C. § 1202(b)(3). Plaintiffs have not alleged that Defendants distributed their books or copies of their books. Instead, they have alleged that “every output from the OpenAI Language Models is an infringing derivative work” without providing any indication as to what such outputs entail – i.e., whether they are the copyrighted books or copies of the books. That is insufficient to support this cause of action under the DMCA.

Plaintiffs compare their claim to that in Doe 1, however, the plaintiffs in Doe 1 alleged that the defendants “distributed copies of [plaintiff’s licensed] code knowing that CMI had been removed or altered.” Doe 1, 2023 WL 3449131, at *11. The Doe 1 plaintiffs alleged that defendants knew that the programs “reproduced training data,” such as the licensed code, as output. Id. Plaintiffs here have not alleged that ChatGPT reproduces Plaintiffs’ copyrighted works without CMI.

Then there are the unfair competition claims. Here, one part of the claim remains standing, but the rest is dismissed. As the court notes, for an unfair competition claim to stick, they need to show an act that is “unlawful, unfair, or fraudulent.” Here, two of the three prongs fail. First up: “unlawful.”

Even if Plaintiffs can bring claims under the DMCA, they must show economic injury caused by the unfair business practice. See Davis v. RiverSource Life Ins. Co., 240 F. Supp. 3d 1011, 1017 (N.D. Cal. 2017) (quoting Kwikset Corp. v. Superior Ct., 51 Cal. 4th 310, 322 (2011)). Defendants argue that Plaintiffs have not alleged that they have “lost money or property.” Motion at 29-30; see Kwikset Corp., 51 Cal. 4th at 322-23. Plaintiffs counter that they have lost intellectual property in connection with the DMCA claims because of the “risk of future damage to intellectual property that results the moment a defendant removes CMI from digital copies of Plaintiffs’ work – copies that can be reproduced and distributed online at near zero marginal cost.” Response at 28. However, nowhere in Plaintiffs’ complaint do they allege that Defendants reproduced and distributed copies of their books. Accordingly, any injury is speculative, and the unlawful prong of the UCL claim fails for this additional reason.

What about fraudulent? Nope. No good.

Plaintiffs also argue that they pleaded UCL violations based on “fraudulent” conduct. Response at 26-27. They point to a paragraph in the complaint that states that “consumers are likely to be deceived” by Defendants’ unlawful practices and that Defendants “deceptively designed ChatGPT to output without any CMI.” Tremblay Compl. ¶ 72. The allegation’s references to CMI demonstrates that Plaintiffs’ claims rest on a violation of the DMCA, and thus fail as the Court has dismissed the underlying DMCA claim. Supra Sections B, C(1). To the extent that Plaintiffs ground their claim in fraudulent business practices, Plaintiffs fail to indicate where they have pleaded allegations of fraud. Thus, they fail to satisfy the heightened pleading requirements of Rule 9(b) which apply to UCL fraud claims. See Armstrong-Harris, 2022 WL 3348246, at *2. Therefore, the UCL claim based on fraudulent conduct also fails.

The only prong that remains is “unfair,” which, as the court notes, California defines broadly, and thus it survives, for now. Given everything else in the opinion, though, it feels like this one prong is also ripe for dismissal at the summary judgment stage.

Then there’s “negligence.” Plaintiffs’ lawyers love to claim negligence, but it rarely stands up. You can’t just take “this thing is bad” and claim negligence. Here, the plaintiffs went to even more ridiculous levels, arguing that OpenAI had a made-up “duty of care” to protect the copyrights of the authors, and that failing to meet it was negligent. As the court notes, that’s not how this works:

The Complaints allege that Defendants negligently maintained and controlled information in their possession. Tremblay Compl. ¶¶ 74-75; Silverman Compl. ¶¶ 75-76. Plaintiffs argue without legal support that Defendants owed a duty to safeguard Plaintiffs’ works. Response at 30. Plaintiffs do not identify what duty exists to “maintain[] and control[]” the public information contained in Plaintiffs’ copyrighted books. The negligence claim fails on this basis.

Plaintiffs’ argument that there is a “special relationship” between the parties also fails. See Response at 30. Nowhere in the Complaints do Plaintiffs allege that there is any fiduciary or custodial relationship between the parties. Plaintiffs do not explain how merely possessing their books creates a special relationship, citing only to an inapposite case where defendants were custodians of plaintiffs’ “personal and confidential information.” Witriol v. LexisNexis Grp., No. C05-02392 MJJ, 2006 WL 4725713, at *8 (N.D. Cal. Feb. 10, 2006).

As Plaintiffs have not alleged that Defendants owed them a legal duty, the Court dismisses this claim with leave to amend.

Finally, there’s the “unjust enrichment” claim which also fails, because there’s no evidence that any benefit to OpenAI came from “mistake, fraud, coercion or request.”

Defendants argue that this claim must be dismissed because Plaintiffs fail to allege what “benefit” they quasi-contractually “conferred” on OpenAI or that Plaintiffs conferred this benefit through “mistake, fraud, or coercion.” Motion at 32 (citing Bittel Tech., Inc. v. Bittel USA, Inc., No. C10-00719 HRL, 2010 WL 3221864, at *5 (N.D. Cal. Aug. 13, 2010) (“Ordinarily, a plaintiff must show that the benefit was conferred on the defendant through mistake, fraud or coercion.”) (citation omitted)). Plaintiffs fail to allege that OpenAI “has been unjustly conferred a benefit ‘through mistake, fraud, coercion, or request.’” See Astiana, 783 F.3d at 762 (citation omitted); LeGrand v. Abbott Lab’ys, 655 F. Supp. 3d 871, 898 (N.D. Cal. 2023) (same); see, e.g., Russell v. Walmart, Inc., No. 22-CV-02813-JST, 2023 WL 4341460, at *2 (N.D. Cal. July 5, 2023) (“it is not enough that Russell have provided Walmart with a beneficial service; Russell must also allege that Walmart unjustly secured that benefit through qualifying conduct. Absent qualifying mistake, fraud, coercion, or request by Walmart, there is no injustice.”). As Plaintiffs have not alleged that OpenAI unjustly obtained benefits from Plaintiffs’ copyrighted works through fraud, mistake, coercion, or request, this claim fails.

The court does allow the plaintiffs to amend, and it is almost guaranteed that an amended complaint will be forthcoming. But given the underlying reasons for dismissing all of those claims, I find it hard to believe that they’ll amend it in a way that will succeed.

Of course, there are still the two other claims that survive, but both seem likely to be in trouble by the time this case gets to summary judgment.

I know that many people wanted this case to be a winner, in part because they dislike generative AI in general, or OpenAI specifically. Or, in some cases, because they’re fans of the authors involved. But this case is about the specifics of copyright, and you have to allege specific facts to make it a copyright case, and (as we noted) these cases were ridiculously weak from the jump.

And the judge saw that.

Filed Under: ai, copyright management information, direct infringement, dmca, fair use, generative ai, michael chabon, negligence, paul tremblay, sarah silverman, unfair competition, vicarious infringement
Companies: openai

As Predicted: Judge Laughs GOP’s Laughable ‘Google Spam Bias’ Lawsuit Right Out Of Court

from the our-spam-filters-are-safe-for-now dept

Election season is approaching, so I fully expect this nonsense to come right back again, but maybe with a court shutting it down, culture war nonsense peddlers can move on to some other nonsense?

The background: in the runup to the 2022 U.S. elections, a prominent Republican “digital marketing” (read: political spam) shop noticed that its marketing campaigns weren’t doing very well. Rather than realizing that (1) the candidates and messages they were pitching were not what people wanted and (2) they were way too aggressive in spamming nonsense to people, the political consultants who ran the place decided to… blame Google.

This was because they badly misread a study noting that, if users don’t train their spam filters at all, Gmail puts more GOP campaign emails into spam than Democratic ones. The same study found that as soon as users started training the spam filter, the effect went away. It also found the opposite was true at other popular email providers, Yahoo and Microsoft’s Outlook (their spam filters caught more Dem emails than GOP ones). Google, for its part, said the problem was not bias, but that Republicans are shit at understanding modern non-spam email habits. In one meeting, in which Senator Marco Rubio apparently screamed at Google, company representatives had to point out that his own campaign was doing a bunch of stupid email things.
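
To make the training point concrete, here is a toy Bayesian spam filter in Python. This is a minimal sketch of the general technique only: Gmail’s real filter is proprietary and vastly more sophisticated, and every class name and example message below is invented for illustration.

```python
# Toy naive Bayes spam filter. Hypothetical sketch only -- not Gmail's
# actual system. User clicks ("spam" / "not spam") are the training data.
from collections import defaultdict
import math

class ToySpamFilter:
    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.label_totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # Each "mark as spam" / "not spam" click updates these counts.
        self.label_totals[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def spam_score(self, text):
        # Positive score = more spam-like. Add-one smoothing throughout.
        total = sum(self.label_totals.values())
        scores = {}
        for label in ("spam", "ham"):
            logp = math.log((self.label_totals[label] + 1) / (total + 2))
            n_words = sum(self.word_counts[label].values())
            vocab = len(self.word_counts["spam"]) + len(self.word_counts["ham"]) + 1
            for word in text.lower().split():
                count = self.word_counts[label].get(word, 0)
                logp += math.log((count + 1) / (n_words + vocab))
            scores[label] = logp
        return scores["spam"] - scores["ham"]

f = ToySpamFilter()
f.train("act now final notice donate before midnight", "spam")
f.train("donate and join our campaign event this weekend", "ham")
print(f.spam_score("final notice donate before midnight"))  # > 0: filed as spam
print(f.spam_score("join our campaign this weekend"))       # < 0: lands in inbox
```

The relevant behavior is that per-user clicks quickly dominate whatever defaults the filter started with, which matches the study’s finding that the partisan skew disappeared once users began training their filters.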

But, the modern GOP has mastered the art of playing the victim, and immediately flopped to the ground screaming “foul!” Most normal people recognized that this was just a bunch of Republican politicians trying to force their spam into inboxes. Even some prominent Republican-supporting pundits called out the political spammers on their own team, highlighting that the reality is they just spam too fucking much.

All this whining still convinced Google to offer up a special “political spam” whitelist option for Gmail, which the public massively opposed (no one wants more spam!). Of course, once Google offered it up, the Republicans… refused to use it.

But it wasn’t just the usual flop and whining. The GOP went legal. They filed an FEC complaint, arguing that this was an illegal in-kind contribution from Google. That went nowhere. Then, the Republican National Committee filed a lawsuit against Google, claiming that Google’s spam filtering (which, again, the public overwhelmingly loves) violated… civil rights law in California. And also some common carrier nonsense, as well as negligence. Because it’s negligent to filter spam? Really?

It was a dumb lawsuit. We called it out as a dumb lawsuit at the time.

And now a federal judge has agreed, and tossed the lawsuit out of court.

First up, Section 230 bars all of this:

At the outset, Plaintiff’s suit is barred because Google is entitled to immunity from suit under section 230 of the Communications Decency Act, 47 U.S.C. § 230. Section 230 affords interactive computer service providers immunity from liability for decisions related to blocking and screening of offensive material, or for providing others with the technical means to do so. 47 U.S.C. § 230(c)(2). “To assert an affirmative defense under section 230(c)(2)(A), a moving party must qualify as an ‘interactive computer service,’ that voluntarily blocked or filtered material it considers ‘to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,’ and did so in ‘good faith.’” Holomaxx Techs. v. Microsoft Corp., 783 F. Supp. 2d 1097, 1104 (N.D. Cal. 2011) (quoting 47 U.S.C. § 230(c)(2)(A)). Section 230 must be construed to protect defendants “not merely from ultimate liability, but from having to fight costly and protracted legal battles.” Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1175 (9th Cir. 2008) (en banc); see also Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093, 1097 (9th Cir. 2019). In “close cases” section 230 claims “must be resolved in favor of immunity.” Roommates.com, 521 F.3d at 1174.

Google, and specifically Google’s Gmail, is an interactive computer service. An interactive computer service is “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server . . . .” 47 U.S.C. § 230(f)(2); see Holomaxx, 783 F. Supp. 2d at 1104 (finding that Microsoft’s email service was an interactive computer service). Plaintiff does not dispute this classification. (Opp’n. at 28.)

Turning to the second requirement for a section 230 defense, Google’s filtering of spam constitutes filtering “material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” 47 U.S.C. § 230(c)(2)(A). In Enigma Software Group USA, LLC v. Malwarebytes, Inc, the Ninth Circuit took up the issue of what kind of material would fall within the catchall of “otherwise objectionable.” 946 F.3d 1040, 1044 (9th Cir. 2019). The court rejected an interpretation of section 230 in its prior decision in Zango, Inc. v. Kaspersky Lab, Inc., 568 F.3d 1169, 1174 (9th Cir. 2009) that gave unfettered discretion to a provider to determine what is “objectionable.” Enigma Software Group, 946 F.3d at 1050. Specifically, the Ninth Circuit concluded that blocking and filtering decisions that are driven by anticompetitive animus do not concern “objectional material,” particularly in light of Congress’s codified intent that section 230 “preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services . . . .” Id. at 1050–51 (quoting 47 U.S.C. § 230(b)(2)–(3)).

At the same time, the Ninth Circuit rejected a narrow view of what constituted “objectional” material, noting the “breadth” of that term. Id. at 1051. The court called into question cases interpreting “objectionable” in light of the other terms in section 230 on the principle of ejusdem generis (Latin for “of the same kind or class”), noting that the specific terms in section 230 “vary greatly.” Id. And while it did not expressly adopt their reasoning, the Ninth Circuit appeared to approve decisions holding that “unsolicited marketing emails” are “objectionable” for purposes of section 230. Id. at 1052 (citing Holomaxx, 783 F. Supp. 2d at 1104; e360Insight, LLC v. Comcast Corp., 546 F. Supp. 2d 605, 608–610 (N.D. Ill. 2008)).

This Court likewise holds that a provider such as Google can filter spam, including marketing emails, as “objectionable” material under section 230.

The GOP tried to get around 230 by arguing “but people asked for our spam!” That doesn’t fly:

The fact that the RNC sent emails to individuals who requested them at some point in time does not undermine this conclusion. In its Complaint, the RNC alleges that it maintains a “list of people who have requested to receive emails from the RNC” and that its campaign emails “are only sent to people on this list.” (Compl. ¶ 22.) The RNC further alleges that it removes individuals from this list who no longer wish to subscribe to the RNC’s emails, and that the emails it sends are “solicited.” (Id.) As a result, the RNC concludes that the emails “are plainly not spam because they are only sent to Gmail users who requested them” and that therefore they are not “offensive.” (Opp’n. at 34.) However, just because the RNC complies with the CAN-SPAM Act does not preclude that Google may reasonably consider multiple marketing emails to be “objectionable.” First, “compliance with CAN–SPAM, Congress decreed, does not evict the right of the provider to make its own good faith judgment to block mailings.” e360Insight, 546 F. Supp. 2d at 609 (citing 15 U.S.C. § 7707(c)). Second, just because a user interacts with a company at one point in time does not mean that the user “solicits” each and every email sent by the entity. Most individuals who use email are likely familiar with having engaged with an entity one time (such as by purchasing a particular product) only to have that entity send numerous other emails, many or all of which are no longer relevant or wanted. While a user may be generally able to opt out of those emails, an email provider such as Google may reasonably segregate those sorts of mass mailings (even though they were originally requested by the user in the legal sense, see 15 U.S.C. § 7704) in order to ensure that “wanted electronic mail messages” will not be “lost, overlooked, or discarded amidst the larger volume of unwanted messages,” 15 U.S.C. § 7701(a)(4).

Then there’s this little dig from the judge about the GOP’s propensity to spam:

It is clear from the Complaint that the RNC sends out a significant number of emails to individuals on its list. (See Compl. ¶ 21 (noting the RNC emails supporters about events, such as the 349 that occurred within the Eastern District from February to October 2022); ¶ 39 (noting “multiple emails sent over the weekend”); ¶ 42 (noting that RNC’s press releases were “just 0.3% of the email volume as the RNC’s main marketing domain”).) While it may be that some, perhaps many, users specifically wanted each and every one of those emails, Google could reasonably consider these mass mailings to be objectionable, just as it can for other email senders.

The RNC also tried to get around 230 by arguing that Google’s spam filtering was not done in “good faith.” Of course, good faith is only required for 230(c)(2), and it seems like Google could have been fine on (c)(1) grounds as well. But even here, the judge says there’s no evidence of bad faith, other than the one study the GOP misread.

In this case, the RNC’s allegation that Google acted in “bad faith” does not rise above the speculative level. At bottom, the RNC’s allegation is that Google diverted emails to spam at the end of the month which had been, coincidentally, a historically successful fundraising time for the RNC, and that the reasons Google gave for the low “inboxing” rate were — in the RNC’s view — not true. Plaintiff argues that the only reasonable inference for why its emails were labelled as spam is Google’s alleged political animus toward the RNC. (Compl. ¶ 3.) This is pure speculation, lacking facts from which the Court could infer animus or an absence of good faith. The only affirmative allegation that includes any facts from which the Court could draw a conclusion of the absence of good faith is Paragraph 54 of the Complaint, which cites a North Carolina State University study that is alleged to have “found that Google’s Gmail labels significantly more campaign emails from Republican political candidates as spam than campaign emails from Democratic political candidates. Specifically, the study found that Gmail labeled only 8.2% of Democratic emails as spam, as compared with 67.6% of Republican campaign emails.”

While this study does provide some evidence that Google could be acting without good faith, the Court finds that this study is insufficient, standing alone, to meet the pleading requirements as described in Twombly and Iqbal. First, the study itself does not attribute any motive to Google, with the study authors noting “we have no reason to believe there were deliberate attempts from these email services to create these biases to influence the voters. . . .” (ECF No. 30-10 at 9.) Second, the study indicates that all three email programs considered — Google, Outlook, and Yahoo — had a political bias, although Google’s left-leaning bias was greater than Outlook or Yahoo’s right-leaning biases. (Id.) Third, the study indicates that Google’s spam filter “responded significantly more rapidly to user interactions compared to Outlook and Yahoo” (id.), suggesting that a more plausible reason for the left-leaning bias was user input, not bad faith efforts on the part of Google.

Also, the fact that Google kept sending employees to help train the GOP on how to have better email practices seemed to undermine the “bad faith” claims:

In the Complaint, the RNC recounts that adopting Google’s suggestions had a “significantly positive impact on [email] performance,” though they did not resolve the end-of-month issue. (Id. ¶ 48.) While the RNC may disagree with Google regarding what caused the drop in inboxing, the fact that Google engaged with the RNC for nearly a year and made suggestions that improved email performance is inconsistent with a lack of good faith….

Finally, the A/B test cited in Paragraph 33 of the Complaint undermines the RNC’s claim of bad faith discrimination on the basis of political affiliation. If Google were discriminating against RNC emails due to their political affiliation, then neither set of emails should have gotten through Google’s spam filter. The fact that one version did indicates it was not the substantive content or sender of the email, but rather some other factor, such as the different links contained with the email or some other technical feature of the email, that was triggering application of the spam filter. At oral argument, counsel for the Plaintiff conceded that the A/B test does not support a finding that emails are being filtered because the RNC is sending them or because the emails contain political content.

The end result, then, is that the judge recognizes that a bunch of Republican whiners misread a study, overreacted, and sued.

In short, the only fact alleged by the RNC to support its conclusory allegation that “Google’s interception and diversion of the RNC’s emails, and the harm it is causing to the RNC, is intentional, deliberate, and in bad faith,” (Compl. ¶ 56), is the North Carolina State University study that expressly states there is no reason to believe Google was acting in bad faith, and the remainder of the allegations in the Complaint are inconsistent with such a conclusion. In light of the multiple reasonable explanations for why the RNC’s emails were filtered as set forth in the Complaint, the Court does not find the RNC’s allegation that Google was knowingly and purposefully harming the RNC because of political animus to be a “reasonable inference.” Accordingly, the Court concludes that the RNC has not sufficiently pled that Google acted without good faith, and the protection of section 230 applies.

The court then rightly notes that this is precisely how Section 230 is supposed to work: the law encourages interactive computer services to voluntarily monitor content on their platforms, and that’s exactly what’s happening here with spam filters.

It’s also nice to see the judge call out (as I kept doing in my posts) how that same study found that Microsoft and Yahoo favored Republican emails and disfavored Democrats, and no one was bitching about that:

This concern is exemplified by the fact that the study on which the RNC relies to show bad faith states that each of the three email systems had some sort of right- or left- leaning bias. (ECF No. 30-10 at 9 (“all [spam filtering algorithms] exhibited political biases in the months leading up to the 2020 US elections”).) While Google’s bias was greater than that of Yahoo or Outlook, the RNC offers no limiting principle as to how much “bias” is permissible, if any. Moreover, the study authors note that reducing the filters’ political biases “is not an easy problem to solve. Attempts to reduce the biases of [spam filtering algorithms] may inadvertently affect their efficacy.” (Id.) This is precisely the impact Congress desired to avoid in enacting the Communications Decency Act, and reinforces the conclusion that section 230 bars this suit.

The court also brushes aside a ridiculous interpretation of 230 that the GOP tried here, in which they claimed that 230 only gives a website immunity from financial liability, but that a court can still issue injunctive relief (basically telling the company to stop filtering political spam). That’s not how any of this works, the judge notes:

As an initial matter, the word “liable” has a broader definition than Plaintiff suggests and can include being held to account even through injunctive relief. For example, Black’s Law Dictionary (11th ed. 2019) defines liable as “responsible or answerable in law; legally obligated,” which would include a legal obligation to comply with an injunctive order just as it would a monetary judgement. Moreover, courts have rejected such a theory as it applies to liability under section 230(c)(1), and have questioned whether the theory would be viable as to section 230(c)(2)

The case could just end there. The claims are barred by 230, end of story. But, instead, the court decides to run through the actual claims anyway and explain why they still fail, even without Section 230.

Even if Google were not entitled to section 230 immunity, each of Plaintiff’s claims would still be subject to dismissal because they are either not a claim upon which relief can be granted, or because Plaintiff has failed to establish it is entitled to relief.

Again, we’ve pointed this out repeatedly: even in the absence of Section 230, most claims that lose on 230 grounds would still lose; it would just take longer and cost more.

The common carrier claim? Nope. Not at all.

Further supporting this conclusion, the Court notes that the RNC has not cited any authority to establish that an email provider such as Google is a common carrier, and the Court is unaware of any. Perhaps most significantly, a contrary conclusion would dramatically alter the manner in which email providers conduct their business. As noted by the Plaintiff, many major email providers, including Google, have an interest in limiting spam being delivered to users. (See Compl. ¶ 27 (“As a service to its users, and to increase its own profits, Google intercepts certain messages intended for its users that comprise unsolicited and unwanted bulk-emailed messages and place them in a separate folder, called a spam folder.”);…

And the court points out that treating email as a common carrier would mean a ton of spam:

Email providers such as Google, Yahoo, MSN and others would likely be prohibited from filtering spam or other messages and would instead be required to simply dump all emails into a user’s inbox, first come, first served. While it is true that California courts have not hesitated to interpret statutes in light of new technologies, this Court declines to accept the RNC’s invitation to interpret California’s common carrier law in such a way as to require email providers to deliver spam to the millions of Americans who use their services.

Even sillier, the court points out that even if it did find Google to be a common carrier, the RNC isn’t a customer of Google. The RNC sends its emails via other service providers, so there’s no customer relationship here.

Finally, even if Google were a common carrier, the RNC did not avail itself of Google’s services, and Google owes no duty to it. See Grotheer v. Escape Adventures, Inc., 14 Cal. App. 5th 1283, 1294 (2017) (“[A] common carrier necessarily entails great responsibility, requiring common carriers to exercise a high duty of care towards their customers.”

Negligence? Lol.

Here, the RNC has not paid Google any sum to transmit messages, and therefore would not be entitled to damages for ordinary negligence.

Civil rights law under California’s Unruh Act (a favorite of spurned Trumpists claiming that moderating them violates their rights, which never works)? Nope. Not even close. Contrary to their claims, being a cult-like believer in Donald Trump does not make you a protected, discriminated class.

The Court declines the invitation to usurp a legislative function by adding a new protected class to the Unruh Act. This is in keeping with the California courts’ modern approach of giving deference to the Legislature. It is also consistent with the few cases to address this issue, all of which have reached the same conclusion that “the Unruh Civil Rights Act does not protect against discrimination based upon political affiliation or the exercise of constitutional rights.”

There were a few more throw-in claims, and they fail all the same.

Of course, with election season coming up again, it may spur the RNC on to appeal this ruling, if only so it gets to keep whining through 2024 about how oppressed they are by big bad tech companies…

Filed Under: discrimination, email, gmail, gop, negligence, political spam, rnc, section 230, spam filters, unruh act
Companies: google

Court Rejects Attempt To Blame Amazon For The Purchase Of Product Used For Suicide

from the that's-not-how-negligence-works dept

There have been a bunch of attempts over the last few years to get around Section 230 and sue various websites under a “negligence” theory, arguing that the online service was somehow negligent in failing to protect a user, and therefore Section 230 shouldn’t apply. Some cases have been successful, though in a limited way. Many of the cases have failed spectacularly, as the underlying arguments often just don’t make much sense.

The latest high-profile loss of this argument came in a big case that received tons of attention, in part because of the tragic circumstances involved. The complaint argued that Amazon sold “suicide kits,” because it offered the ability to purchase a compound that is often used for suicide, and also surfaced related items as “frequently bought together,” based on its recommendation algorithm. The families of some teenagers who tragically died via this method sued Amazon, saying it was negligent in selling the product and making those recommendations. The complaint noted that Amazon had received warnings about the chemical compound in question for years, but kept selling it (at least until December of 2022, when it cut off sales).
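
For background on what a feature like that typically is under the hood: Amazon’s real recommender is proprietary, but the core of a “frequently bought together” widget is usually plain co-purchase counting, along the lines of this hypothetical Python sketch (item names invented). The point relevant to the complaint’s framing is that the pairing is purely statistical; nothing in it models why two items keep showing up in the same cart.

```python
# Hypothetical sketch of "frequently bought together" via co-purchase
# counting. Not Amazon's actual system; item names are placeholders.
from collections import defaultdict
from itertools import combinations

def build_pair_counts(orders):
    """Count how often each unordered pair of items appears in the same order."""
    pair_counts = defaultdict(int)
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def frequently_bought_together(item, pair_counts, top_n=3):
    """Items most often purchased alongside `item`, ranked by raw co-occurrence."""
    related = defaultdict(int)
    for (a, b), n in pair_counts.items():
        if a == item:
            related[b] += n
        elif b == item:
            related[a] += n
    return sorted(related.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

orders = [
    ["item_a", "item_b", "item_c"],
    ["item_a", "item_b"],
    ["item_a", "item_c"],
]
print(frequently_bought_together("item_a", build_pair_counts(orders)))
# [('item_b', 2), ('item_c', 2)] -- pure statistics, no model of intent
```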

It’s, of course, completely reasonable to be sympathetic to the families here. The situation is clearly horrible and tragic in all sorts of ways. But there are important reasons why we don’t blame third parties when someone dies by suicide. For one, it can incentivize even more such actions, as it can be seen as a way of extracting “revenge.”

Either way, thankfully, a court has rejected this latest case, and done so very thoroughly. Importantly, while there is a Section 230 discussion, the opinion also explains why, even absent 230, this case is a loser. You can’t just randomly claim that a company selling products is liable when someone uses one of those products for suicide.

Indeed, the opinion starts out by exploring the product liability claims separately from the 230 analysis, and says you can’t make this leap to hold Amazon liable. First, the court notes that under the relevant law, there can be strict liability for the manufacturer, but no one is claiming Amazon manufactures the compound, so that doesn’t work. The standards for liability as a seller are much higher (for good reason!). And part of that is that you can only be held liable if the product itself is defective.

Plaintiffs’ WPLA negligent product liability claim fails for a number of reasons. First, the court concludes that the Sodium Nitrite was not defective, and that Amazon thus did not owe a duty to warn. Under Washington law, “no warning need be given where the danger is obvious or known to the operator.” Dreis, 739 P.2d at 1182 (noting that this is true under negligence and strict liability theories); Anderson v. Weslo, Inc., 906 P.2d 336, 340-42 (Wash. Ct. App. 1995) (noting that the risk of falling and getting hurt while jumping on a trampoline is obvious and a manufacturer/seller need not warn of such obvious dangers); Mele v. Turner, 720 P.2d 787, 789-90 (Wash. 1986) (finding neighbors were not required to warn teenager regarding lawnmower’s dangers—e.g., putting hands under running lawnmower—where the allegedly dangerous condition was obvious and known to plaintiff). 9 In line with this principle, Washington courts consistently hold that a warning label need not warn of “every possible injury.” Anderson, 906 P.2d 341-42; Baughn v. Honda Motor Co., 727 P.2d 655, 661-64 (Wash. 1986) (finding sufficient Honda’s warning that bikes were intended for “off-the-road use only” and that riders should wear helmets; no warning required as to risk of getting hit by car, the precise danger eventually encountered); Novak v. Piggly Wiggly Puget Sound Co., 591 P.2d 791, 795-96 (Wash. Ct. App. 1979) (finding general warnings about ricochet sufficient to inform child that a BB gun, if fired at a person, could injure an eye).

Here, the Sodium Nitrite’s warnings were sufficient because the label identified the product’s general dangers and uses, and the dangers of ingesting Sodium Nitrite were both known and obvious. The allegations in the amended complaint establish that Kristine and Ethan deliberately sought out Sodium Nitrite for its fatal properties, intentionally mixed large doses of it with water, and swallowed it to commit suicide. (See, e.g., Am. Compl. ¶¶ 161-72, 178-79, 183, 185-86, 190-202, 20-23, 116, 139-43.) Kristine and Ethan’s fates were undisputedly tragic, but the court can only conclude that they necessarily knew the dangers of bodily injury and death associated with ingesting Sodium Nitrite.

And thus:

Amazon therefore had no duty to provide additional warnings regarding the dangers of ingesting Sodium Nitrite. See, e.g., Dreis, 739 P.2d at 1182 (“The warning’s contents, combined with the obviousness of the press’ dangerous characteristics, indicate that any reasonable operator would have recognized the consequences of placing one’s hands in the point-of-operation area.”).

Again, think of what would happen if the result were otherwise. It is an unfortunate reality of the world we live in that some people will end up dying by suicide. It is always tragic. But blaming companies for selling the tools or products used by people in those situations will not help anyone.

The court goes even further. It notes that even if Amazon should have been expected to add even more warnings about the product, that would not have stopped the tragic events from occurring (indeed, it would have only confirmed the reasons why the product was purchased):

Second, Plaintiffs’ WPLA negligent product liability claim also fails because, even if Amazon owed a duty to provide additional warnings as to the dangers of ingesting sodium nitrite, its failure to do so was not the proximate cause of Kristine and Ethan’s deaths. “Proximate cause is an essential element” of both negligence and strict liability theories.12 Baughn, 727 P.2d at 664. “If an event would have occurred regardless of a defendant’s conduct, that conduct is not the proximate cause of the plaintiff’s injury.” Davis v. Globe Mach. Mfg. Co., 684 P.2d 692, 696 (Wash. 1984). Under Washington law, if the product’s user knows there is a risk, but chooses to act without regard to it, the warning “serves no purpose in preventing the harm.” Lunt, 814 P.2d at 1194 (concluding that defendants alleged failure to warn plaintiff of specific dangers associated with skiing and bindings was not proximate cause of injuries because plaintiff would have kept skiing regardless); Baughn, 727 P.2d at 664-65 (concluding that allegedly inadequate warnings were not proximate cause of harm where victim knew the risk and ignored the warnings; the harm would have occurred even with more vivid warnings of risk of death or serious injury). A product user’s “deliberate disregard” for a product’s warnings is a “superseding cause that breaks the chain of proximate causation.” Beard v. Mighty Lift, Inc., 224 F. Supp. 3d 1131, 1138 (W.D. Wash. 2016) (stating that “a seller may reasonably assume that the user of its product will read and heed the warnings . . . on the product” (citing Baughn, 727 P.2d at 661)).

Here, the court concludes that additional warnings would not have prevented Kristine and Ethan’s deaths. The allegations in the amended complaint establish that Kristine and Ethan sought the Sodium Nitrite out for the purpose of committing suicide and intentionally subjected themselves to the Sodium Nitrite’s obvious and known dangers and those described in the warnings on the label…. Accordingly, Plaintiffs have failed to plausibly allege that Amazon’s failure to provide additional warnings about the dangers of ingesting Sodium Nitrite proximately caused Kristine and Ethan’s deaths.

In other words, there could be no product liability. The necessary warnings were there, and more warnings would not have changed the outcome.

The plaintiffs also argued that Amazon could be held liable for “suppressing” reviews complaining about Amazon selling the product. And on this point, Section 230 does protect Amazon:

Here, the “information” at issue in Plaintiffs’ WPLA intentional concealment claim is the “negative product reviews that warned consumers of [Sodium Nitrite’s] use for death by suicide.” (Am. Compl. ¶ 241(j).) This “information” was, as Plaintiffs admit, provided by the users of Amazon.com. (See id. ¶¶ 122, 144-45.) Indeed, the amended complaint does not allege that Amazon provided, created, or developed any portion of the negative product reviews. (See generally id.) Accordingly, only the users of Amazon.com, not Amazon, acted as information content providers with respect to Plaintiffs’ WPLA intentional concealment claim. See, e.g., Fed. Agency of News LLC v. Facebook, Inc., 432 F. Supp. 3d 1107, 1117-19 (N.D. Cal. 2020) (concluding that Facebook was not an information content provider where plaintiffs sought to hold Facebook liable for removing a plaintiff’s Facebook account, posts, and content); Joseph, 46 F. Supp. 3d at 1106-07 (concluding that Amazon was not acting as an information content provider where plaintiff’s claims arose from the allegedly defamatory statements in reviews posted by third parties).

There are some other attempts to get around 230 as well, and they too get rejected (not even via 230, just on the merits directly).

The allegations in Count II (common law negligence) fail to state a plausible claim for relief under RCW 7.72.040(1)(a). As discussed above, a plaintiff must establish that the injury-causing product is defective in order to recover against a negligent product seller under the WPLA. (See supra § III.C.1.) The court has already rejected Plaintiffs’ argument that the Sodium Nitrite was defective on the basis of inadequate warnings. (See id.) Accordingly, the allegations in Count II fail to state plausible negligent product liability claims under the WPLA because, as a threshold point, the Sodium Nitrite is not defective. Because Plaintiffs fail to meet this threshold requirement, the court need not address their remaining arguments or the other elements of this claim.

Once again, this all kinda highlights that people who think getting rid of 230 will magically make companies liable for anything bad that happens on their platforms remain wrong. That won’t happen. Those claims still fail; they just do so in a more expensive way. It might be a boon for trial lawyers looking to pad their billable hours, but it won’t actually do anything productive toward stopping bad things from happening. Indeed, it might make things worse, because efforts to mitigate harms will be used against companies as evidence of “knowledge,” and thus companies will be better off just looking the other way.

Filed Under: negligence, product liability, section 230, sodium nitrite, suicide, torts
Companies: amazon

Section 230 Immunizes TikTok Against Suit Brought By Parent Whose Child Died Participating In A ‘Blackout Challenge’

from the underneath-all-the-claims,-it's-still-user-generated-content dept

Earlier this year, the mother of a child who died of asphyxiation while participating in the so-called "Blackout Challenge" sued TikTok, alleging the company was directly responsible for her 10-year-old daughter's death. The lawsuit claimed this wasn't about third-party content, even though the content the child allegedly emulated was posted on TikTok. Instead, the lawsuit tried to avoid the obvious Section 230 implications by framing its allegations as being about intentionally flawed product design.

Plaintiff does not seek to hold the TikTok Defendants liable as the speaker or publisher of third-party content and instead intends to hold the TikTok Defendants responsible for their own independent conduct as the designers, programmers, manufacturers, sellers, and/or distributors of their dangerously defective social media products and for their own independent acts of negligence as further described herein. Thus, Plaintiff's claims fall outside of any potential protections afforded by Section 230(c) of the Communications Decency Act.

TikTok has long been controversial for content its users post. Much of this controversy is manufactured. Someone hears something about a new and potentially dangerous "challenge," and pretty soon news broadcasts all over the nation are quoting each other's breathless reporting, turning something few people actually engaged in into a "viral" moral panic. According to the lawsuit, this particular "challenge" showed up in the 10-year-old's "For You" section — an algorithmically sorted list of recommendations shaped in part by the user's own interests.

The plaintiff seeking closure via the court system is out of luck, though. It doesn't matter how the allegations are framed. It matters what the allegations actually are. The lawyers representing the child's mother wanted to dodge the Section 230 question because they knew the lawsuit was unwinnable if they met it head-on.

The legal dancing is over (at least until the appeal). Section 230 immunity can’t be avoided just by trying to turn the algorithmic sorting of user-generated content into some sort of product design flaw. The federal court handling the lawsuit has tossed the suit, citing the very law the plaintiff wanted to keep out of the discussion. (via Law and Crime)

From the decision [PDF]:

Section 230 provides immunity when: (1) the defendant is an interactive computer service provider; (2) the plaintiff seeks to treat the defendant as a publisher or speaker of information; and (3) that information is provided by another content provider. 47 U.S.C. § 230(c)(1). Here, the Parties agree that Defendants are interactive computer service providers, and that the Blackout Challenge videos came from “another information content provider” (third-party users). They dispute only whether Anderson, by her design defect and failure to warn claims, impermissibly seeks to treat Defendants as the “publishers” of those videos. It is evident from the face of Anderson’s Complaint that she does.

In addition to that, Anderson wanted TikTok to be treated as a certain kind of publisher, the kind that creates content and publishes it. But there are zero facts to back that claim. Hence the shift of focus to defective design and consumer safety torts under the rationale that it’s TikTok’s recommendation algorithm that’s deliberately and dangerously broken. It doesn’t work. TikTok is indeed a publisher, but a publisher of user-created content, which is definitely covered by Section 230. [Emphasis in the original.]

Anderson bases her allegations entirely on Defendants’ presentation of “dangerous and deadly videos” created by third parties and uploaded by TikTok users. She thus alleges that TikTok and its algorithm “recommend inappropriate, dangerous, and deadly videos to users”; are designed “to addict users and manipulate them into participating in dangerous and deadly challenges”; are “not equipped, programmed with, or developed with the necessary safeguards required to prevent circulation of dangerous and deadly videos”; and “[f]ail[] to warn users of the risks associated with dangerous and deadly videos and challenges.” (Compl. ¶¶ 107, 127 (emphasis added).) Anderson thus premises her claims on the “defective” manner in which Defendants published a third party’s dangerous content.

Although Anderson recasts her content claims by attacking Defendants’ “deliberate action” taken through their algorithm, those “actions,” however “deliberate,” are the actions of a publisher. Courts have repeatedly held that such algorithms are “not content in and of themselves.”

That does it for the lawsuit. The court concludes by reiterating that the lawsuit is about user-generated content, even if it hopes to be perceived as about something else by attacking TikTok’s recommendation algorithms. You can argue that TikTok should perform better moderation, especially when recommending content to minors, but you can’t argue the tragic death is unrelated to content posted by TikTok users. If immunity is the perceived problem, the court suggests parents stop hiring legal representation and start talking to their elected representation.

Nylah Anderson’s death was caused by her attempt to take up the “Blackout Challenge.” Defendants did not create the Challenge; rather, they made it readily available on their site. Defendants’ algorithm was a way to bring the Challenge to the attention of those likely to be most interested in it. In thus promoting the work of others, Defendants published that work—exactly the activity Section 230 shields from liability. The wisdom of conferring such immunity is something properly taken up with Congress, not the courts.

That's the correct judicial take. Unfortunately, there are far too many elected representatives seeking to destroy Section 230 immunity and First Amendment protections for platforms, and most of them care more about keeping themselves and their buddies extremely online than about the tragic deaths of impressionable social media users.

Filed Under: blackout challenge, liability, negligence, product design, section 230
Companies: tiktok

Why The Ninth Circuit's Decision In Lemmon V. Snap Is Wrong On Section 230 And Bad For Online Speech

from the another-hard-case dept

Foes of Section 230 are always happy to see a case where a court denies a platform its protection. What’s alarming about Lemmon v. Snap is how comfortable so many of the statute’s frequent defenders seem to be with the Ninth Circuit overruling the district court to deny Snapchat this defense. They mistakenly believe that this case raises a form of liability Section 230 was never intended to reach. On the contrary: the entire theory of the case is predicated on the idea that Snapchat let people talk about something they were doing. This expressive conduct is at the heart of what Section 230 was intended to protect, and denying the statute’s protection here invites exactly the sort of harm to expression that the law was passed to prevent.

The trouble with this case, like so many other cases with horrible facts, is that it can be hard for courts to see that bigger picture. As we wrote in an amicus brief in the Armslist case, which was another case involving Section 230 with nightmarish facts obscuring the important speech issues in play:

“Tragic events like the one at the heart of this case can often challenge the proper adjudication of litigation brought against Internet platforms. Justice would seem to call for a remedy, and if it appears that some twenty-year old federal statute is all that stands between a worthy plaintiff and a remedy, it can be tempting for courts to ignore it in order to find a way to grant that relief.”

Here some teenagers were killed in a horrific high-speed car crash, and of course the tragedy of the situation creates an enormous temptation to find someone to blame. But while we can be sympathetic to the court's instinct, we can't condone the facile reasoning it employed to look past the speech issues in play, because acknowledging them would have interfered with the conclusion the court was determined to reach. Especially because at one point it even recognized that this was a case about user speech, before continuing on with an analysis that ignored its import:

Shortly before the crash, Landen opened Snapchat, a smartphone application, to document how fast the boys were going. [p.5] (emphasis added)

This sentence, noting that the boys were trying to document how fast they were going, captures the crux of the case: the users were using the service to express themselves, albeit in a way that was harmful. But that's what Section 230 is built for: to insulate service providers from liability when people use their services to express themselves in harmful ways, because, let's face it, people do it all the time. The court here wants us to believe that this case is somehow different from the sort of matter where Section 230 would apply, and that this "negligent design" claim involves a sort of harm Section 230 was never intended to reach. Unfortunately, that view is supported by neither the statutory text nor the majority of precedent, and for good reason: as explained below, it would eviscerate Section 230's critical protection for everyone.

As it did in the Homeaway case, the court repeatedly tried to split an invisible hair to pretend it wasn't imposing liability arising out of the users' own speech. [See, e.g., p. 10, misapplying Barnes v. Yahoo]. Of course, a claim that a service for facilitating expression was negligently designed is inherently premised on the idea that there was a problem with the resulting expression. And the fact that the case was not about a specific form of legal liability manifest in the users' speech did not put it outside of Section 230. Section 230 is a purposefully broadly stated law ("No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."), and here the court wants the platform to take responsibility for how its users used its services to express themselves. [p. 15, misapplying the Roommates.com case].

Section 230 also covers everything that could be wrong with expression unless the thing wrong with it happens to fall into one of the few exceptions the statute enumerates: it involves an intellectual property right, violates federal criminal law, or otherwise implicates FOSTA. None of those exceptions apply here. In fact, the same section of the law that sets forth these few exceptions also contains a pre-emption provision explicitly barring any state law from becoming the basis of any new exceptions. Yet that is exactly what the Ninth Circuit has now caused to happen by giving the go-ahead to a state law-based tort claim of "negligent design."

It hurts online speech if courts can carve out new exceptions. If judges can look post hoc at a situation where expressive activity has led to harm and decide the degree of harm warrants stripping service providers of their Section 230 protection, then there is basically no point in having Section 230 on the books. If platforms have to litigate over whether the statute protects them, it hardly matters whether it ultimately does, because they will already have lost much of the value the protection was supposed to afford them: the ability to facilitate others' expression in the first place. The inevitable consequence of this functional loss of statutory protection is that there will be fewer service providers available to facilitate as much user expression, if any at all.

But even if there were some limiting principle that could be derived from this case to constrain courts from inventing any other new exceptions, just having this particular "negligent design" one will still harm plenty of speech. To begin with, one troubling aspect of the decision is that it is not particularly coherent, and one area of confusion relates to what it actually thinks the negligent design is. [see, e.g., p. 15]. The court spends time complaining about how Snapchat somehow deliberately encourages users to drive at unsafe speeds, even though the court itself acknowledged that while Snapchat apparently rewards users with "trophies, streaks, and social recognitions" to encourage them to keep using the service [p. 5], it "does not tell its users how to earn these various achievements" [p. 5]. It is a leap to say that Snap is somehow wrongfully encouraging users to do anything when it is not actually saying anything of the kind. [See p. 6 ("Many of Snapchat's users suspect, if not actually 'believe,' that Snapchat will reward them for 'recording a 100-MPH or faster [s]nap' using the Speed Filter.")]. In fact, as the decision itself cites, Snapchat actually cautioned against reckless posting behavior. [See p. 6 with the screenshot including the text, "Don't snap and drive."] If the case were actually about Snap explicitly encouraging dangerous behavior ("Drive 100 mph and win a prize!"), then there might legitimately be a claim predicated on the platform's own harmful speech, to which Section 230 wouldn't apply. But the record does not support that sort of theory, the theory of liability was predicated on a user's apparently harmful speech, and in any case the alleged encouragement wasn't really what the plaintiffs were charging was negligently designed anyway.

Instead, what was at issue was the “speed filter,” a tool that helped users document how fast they were traveling. Unlike the district court, the Ninth Circuit could not seem to fathom that a tool that helped document speed could be used for anything other than unsafe purposes. But of course it can. Whether traveling at speed is dangerous depends entirely on context. A user in a plane could easily document traveling at significant speed perfectly safely, while a user on a bike documenting travel at a much slower speed could still be in tremendous peril. One reason we have Section 230 is because it is impossible for the service provider to effectively police all the uses of its platform, and even if it could, it would be unlikely to know whether the speeding was safe or not. But in denying Snapchat Section 230 protection with the presumption that such speech is always unsafe, the court has effectively decided that no one can ever document that they are traveling quickly, even in a safe way, because it is now too legally risky for the platform to give users the tools to do it.
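To make that context-blindness concrete, here's a minimal sketch of how a speed overlay like this presumably works (illustrative only, and not Snap's actual implementation; the function names and sample coordinates are mine). All the tool has are GPS fixes and a clock; nothing in the computation can tell a jetliner from a car:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two GPS fixes.
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_mph(fix_a, fix_b):
    # Each fix is (lat, lon, unix_seconds). The overlay "knows" only two
    # points and a timestamp -- nothing about planes, bikes, or cars.
    (lat1, lon1, t1), (lat2, lon2, t2) = fix_a, fix_b
    meters = haversine_m(lat1, lon1, lat2, lon2)
    return (meters / (t2 - t1)) * 2.23694  # m/s to mph

# The same number could describe a jet on final approach or a car being
# driven recklessly; the safe/unsafe context is invisible to the code.
print(round(speed_mph((40.0, -75.0, 0), (40.0, -74.9, 60))))  # ~318
```

Whatever the filter's real internals look like, the point stands: the computation produces a number, not a judgment about whether producing that number was safe.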

Furthermore, if a platform could lose its Section 230 protection because the design of its services enabled speech that was harmful, it would eviscerate Section 230, because there are few services, if any, whose design would not. For example, Twitter's design lets people post harmful expression. Perhaps one might argue it even encourages them to by making it so easy to post such garbage. Of course, Twitter also makes it easy to post things that are not harmful, but the Ninth Circuit's decision here does not seem to care that a design eliciting user expression might be used for both good and bad ends. Per this decision, which asserts a state law-created "duty to design a reasonably safe product" [see p. 13, misapplying the Doe 14 v. Internet Brands case], even a product that meets the definition of an "interactive computer service" set forth in Section 230 (along with its pre-emption provision) loses Section 230's protection if its design could be used to induce bad expression. But that would effectively mean that everyone could always plead around Section 230, because nearly every Section 230 case arises from someone having used the service in a harmful way the service enabled. It is unfortunate that the Ninth Circuit has now opened the door to such litigation, as the consequences stand to be chilling to all kinds of online speech and services Section 230 was designed to protect.

Filed Under: 9th circuit, intermediary liability, lemmon, negligence, product liability, section 230, speech, speed filter
Companies: snap

Texas Appeals Court Brushes Off Section 230 In Allowing Lawsuit Over Sex Trafficking Against Facebook To Continue

from the it-continues-because-it-continues dept

Earlier this year, we mentioned, in passing, personal injury lawyer Annie McAdams' weird crusade against internet companies and Section 230. The lawyer — who bragged to the NY Times about how she found out her favorite restaurant's secret margarita mix by suing the restaurant and using the discovery process to get the recipe — has been suing a bunch of internet companies, arguing that Section 230 can be ignored if you claim the sites were "negligent" in how they were designed. In a case filed in Texas against Facebook (and others), arguing that three teenagers were recruited by sex traffickers via Facebook and that Facebook is to blame for that, the lower court judge ruled last year that he wouldn't dismiss on Section 230 grounds. I wish I could explain his reasoning, but the ruling is basically "well, one side says 230 bars this suit, and the other says it doesn't, and I've concluded it doesn't bar the lawsuit." That's literally about the entire analysis:

In reviewing the statute and the cases cited by the parties, the Court concludes that the Plaintiffs have plead causes of action that would not be barred by the immunity granted under the Act.

Why? I could not tell you. Judge Steven Kirkland provides no real basis.

Either way, Facebook appealed, and the appeals court has upheld the lower court ruling with even less analysis. The only mention of Section 230 is to say that that was Facebook’s reason for asking for dismissal. The court takes three paragraphs to describe the history of the case, and this is the entire analysis:

Facebook has not established that it is entitled to mandamus relief. Accordingly, we deny Facebook's petitions for writ of mandamus.

Why? Who the hell knows. Texas courts are weird, man.

At least one judge on the panel, Justice Tracy Christopher, issued a dissent from the majority opinion. The dissent is also pretty short and sweet, and basically says “um, seems like 230 applies here, so, yeah.”

I respectfully dissent from these denials of mandamus and I urge the Texas Supreme Court to review these cases. Federal law grants Facebook immunity from suits such as these. See 47 U.S.C. § 230. Because Facebook has immunity, these suits have no basis in law, and dismissal under Texas Rule of Civil Procedure 91a is proper.

The Real Parties in Interest urge our court to adopt a construction of Section 230 that has been adopted by only a few courts. The vast majority of the courts reviewing this law have adopted the arguments made by Facebook. The artful pleading by the Real Parties in Interest should not prevail over the statute.

Also, just to be clear, since some may ask, and since this is a case about sex trafficking: FOSTA does not apply here because (1) the actions at issue happened prior to FOSTA becoming law, and (2) (as only the dissent notes), FOSTA does not apply to civil actions in state court. Still, what a weird set of rulings, that seem to go against nearly all Section 230 case law… and with basically no analysis as to why at all.

Filed Under: annie mcadams, cda 230, intermediary liability, negligence, product liability, section 230, sex trafficking, texas, vexatious lawsuits
Companies: facebook

from the if-you-don't-fix-the-front,-you'll-be-paying-on-the-back-end dept

A federal judge is going to let a bunch of people keep suing Yahoo over its three-year run of continual compromise. Yahoo had hoped to get the class action suit tossed, stating that it had engaged in “unending” efforts to thwart attacks, but apparently it just wasn’t good enough to prevent every single one of its three billion email accounts from falling into the hands of hackers.

In a decision on Friday night, U.S. District Judge Lucy Koh in San Jose, California rejected a bid by Verizon Communications Inc, which bought Yahoo’s Internet business last June, to dismiss many claims, including for negligence and breach of contract.

Koh dismissed some other claims. She had previously denied Yahoo’s bid to dismiss some unfair competition claims.

Yahoo was accused of being too slow to disclose three data breaches that occurred from 2013 to 2016, increasing users' risk of identity theft and requiring them to spend money on credit freezes, monitoring and other protection services.

Three billion is a lot of potential class-mates, even though many Yahoo users had moved on to more viable/useful services long before the breaches began. That being said, password reuse is common. So is the tendency to use the same user name across several platforms. And, needless to say, personally identifiable info stays the same, no matter what platform Yahoo's former users have strayed to.

The complaint — amended again after news broke that Yahoo's entire user base had been compromised — notes that Yahoo's "unending" efforts were routinely terrible, if not practically nonexistent. The suit points out multiple Yahoo hosts were compromised in 2008 and 2009. The next year, Google notified Yahoo that Yahoo's systems were being used to attack Google. And in 2012, Yahoo suffered two breaches, including one stemming from a SQL injection attack that revealed the company was still storing passwords in plain text.
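For anyone wondering why those two failures in particular are so damning, here's a minimal sketch contrasting them with what baseline practice already looked like at the time (purely illustrative: the table, column names, and parameters are hypothetical, and this is obviously not Yahoo's code):

```python
import hashlib
import hmac
import os
import sqlite3

# Hypothetical schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY, salt BLOB, pw_hash BLOB)")

def store_user_badly(name: str, password: str) -> None:
    # Two classic failures at once: the password is stored as plain text,
    # and the statement is built by string formatting, so crafted input
    # (a stray quote followed by extra SQL) can rewrite the query itself.
    conn.executescript(
        "INSERT INTO users VALUES ('%s', NULL, '%s')" % (name, password)
    )

def store_user_safely(name: str, password: str) -> None:
    # Parameterized query: the driver treats inputs strictly as data.
    # The password itself is never stored -- only a salted, slow hash.
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (name, salt, pw_hash))

def check_password(name: str, password: str) -> bool:
    row = conn.execute(
        "SELECT salt, pw_hash FROM users WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        return False
    salt, stored = row
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking matches via timing.
    return hmac.compare_digest(candidate, stored)

store_user_safely("alice", "correct horse battery staple")
print(check_password("alice", "correct horse battery staple"))  # True
print(check_password("alice", "nope"))                          # False
```

Neither parameterized queries nor salted password hashing was exotic in 2012; both were long-settled basics, which is what makes the plain-text discovery such strong fodder for a negligence claim.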

A couple of claims have been dismissed but the most damaging — negligence — remains. The plaintiffs so far have presented plenty of evidence that Yahoo handled users’ PII extremely carelessly. From the decision [PDF]:

First, the contract entered into between the parties related to email services for Plaintiffs. Plaintiffs were required to turn over their PII to Defendants and did so with the understanding that Defendants would adequately protect Plaintiffs’ PII and inform Plaintiffs of breaches. Second, it was plainly foreseeable that Plaintiffs would suffer injury if Defendants did not adequately protect the PII. Third, the FAC asserts that hackers were able to gain access to the PII and that Defendants did not promptly notify Plaintiffs, thereby causing injury to Plaintiffs. Fourth, the injury was allegedly suffered exactly because Defendants provided inadequate security and knew that their system was insufficient. Fifth, Defendants “knew their data security was inadequate” and that “they [did not] have the tools to detect and document intrusions or exfiltration of PII.” “Defendants are morally culpable, given their repeated security breaches, wholly inadequate safeguards, and refusal to notify Plaintiffs . . . of breaches or security vulnerabilities.” Id. Sixth, and finally, Defendants’ concealment of their knowledge and failure to adequately protect Plaintiffs’ PII implicates the consumer data protection concerns expressed in California statutes, such as the CRA and CLRA.

Yahoo also has to keep fighting “deceit by concealment” allegations stemming from its delayed reporting of known security breaches.

Defendants also criticize Plaintiffs for continuing to use Yahoo Mail and taking no remedial actions after learning of Defendants’ allegedly inadequate security. However, Defendants fail to acknowledge that Defendants’ delayed disclosures are likely to have harmed Plaintiffs in the interim. Plaintiffs did not even know that they should take any remedial actions during the periods of Defendants’ delayed disclosures. Moreover, contrary to Defendants’ suggestion, the actions that Plaintiffs took after the fact do not conclusively determine what actions they would have taken if they had been alerted before the fact. The FAC provides at least one good reason why Plaintiffs may not have ceased their use of Yahoo Mail after the fact—namely, Plaintiffs have already established their “digital identities around Yahoo Mail.” Plaintiffs can consistently plead that they took minimal or no action after learning of the security defects but that they “would have taken measures to protect themselves” if they had been informed beforehand.

In total, Yahoo is still on the hook for 9 of 15 allegations related to the massive security breach. And it has no one to blame but itself if new owner Verizon ends up shelling out for damages. Yahoo's terrible security had been a problem for a half-decade before the 2013 breach. Three years later, it became clear everything Yahoo had collected on three billion email accounts was in the hands of other people. This long line of breaches shows Yahoo was very interested in increasing its user base, but much less motivated to protect users' info.

Filed Under: breach, cybersecurity, email, hack, liability, negligence, security, standing
Companies: verizon, yahoo

Supreme Court Completely Punts On First Amendment Question About 'Threatening' Song Lyrics On Facebook

from the we'll-see-you-in-court-again-next-year? dept

Last year, we wrote about a potentially important First Amendment case involving Anthony Elonis, who posted some fairly nasty things online, including posts about his ex-wife that many certainly read as threatening. Elonis insisted that it was just a persona, and that what he posted online were merely rap lyrics with no actual threat behind them — but he still ended up in jail for a few years because of it. And, thus, the Supreme Court heard a case that was supposed to be about the First Amendment: whether threats need to be "true threats" to lead to charges, what exactly "true threats" means, and through whose eyes the statements should be seen. But, as has happened all too often with this Supreme Court, it punted on the key issue and chose not to discuss the First Amendment issues at all after realizing it could overturn the case on other grounds.

The Supreme Court thinks it's doing a good thing when it fails to actually address the big question, saying that it's waiting for an appropriate time to do so, but all it really does is keep a bunch of legal uncertainties going, allowing lawyers to rack up huge amounts of billable hours on questions the Court could have settled the first time around. It's no different in this case. Here, the Court basically rejects the use of the "reasonable person" test that was used in the original jury instructions (i.e., would a "reasonable person" find Elonis's statements to be "true threats," rather than whether Elonis himself intended them as such). That, the Court notes, sets up a negligence standard, which is rarely found in criminal law and certainly not in the relevant statute for this case:

Elonis's conviction, however, was premised solely on how his posts would be understood by a reasonable person. Such a "reasonable person" standard is a familiar feature of civil liability in tort law, but is inconsistent with "the conventional requirement for criminal conduct—awareness of some wrongdoing."… Having liability turn on whether a "reasonable person" regards the communication as a threat—regardless of what the defendant thinks—"reduces culpability on the all-important element of the crime to negligence," … and we "have long been reluctant to infer that a negligence standard was intended in criminal statutes," … Under these principles, "what [Elonis] thinks" does matter.

And thus, the Supremes overturn the appeals court’s ruling. It also notes that all the tap dancing the DOJ did in trying to insist that this is not a negligence standard fails:

The Government is at pains to characterize its position as something other than a negligence standard, emphasizing that its approach would require proof that a defendant "comprehended [the] contents and context" of the communication…. The Government gives two examples of individuals who, in its view, would lack this necessary mental state—a "foreigner, ignorant of the English language," who would not know the meaning of the words at issue, or an individual mailing a sealed envelope without knowing its contents…. But the fact that the Government would require a defendant to actually know the words of and circumstances surrounding a communication does not amount to a rejection of negligence. Criminal negligence standards often incorporate "the circumstances known" to a defendant…. Courts then ask, however, whether a reasonable person equipped with that knowledge, not the actual defendant, would have recognized the harmfulness of his conduct. That is precisely the Government's position here: Elonis can be convicted, the Government contends, if he himself knew the contents and context of his posts, and a reasonable person would have recognized that the posts would be read as genuine threats. That is a negligence standard.

Okay. Fair enough. With that, the Court sides with Elonis:

In light of the foregoing, Elonis's conviction cannot stand. The jury was instructed that the Government need prove only that a reasonable person would regard Elonis's communications as threats, and that was error. Federal criminal liability generally does not turn solely on the results of an act without considering the defendant's mental state. That understanding "took deep and early root in American soil" and Congress left it intact here: Under Section 875(c), "wrongdoing must be conscious to be criminal."

But then the Court fails to take the next necessary step in handling what standard is appropriate.

There is no dispute that the mental state requirement in Section 875(c) is satisfied if the defendant transmits a communication for the purpose of issuing a threat, or with knowledge that the communication will be viewed as a threat…. In response to a question at oral argument, Elonis stated that a finding of recklessness would not be sufficient…. Neither Elonis nor the Government has briefed or argued that point, and we accordingly decline to address it… (this Court is "poorly situated" to address an argument the Court of Appeals did not consider, the parties did not brief, and counsel addressed in "only the most cursory fashion at oral argument"). Given our disposition, it is not necessary to consider any First Amendment issues.

And then the decision takes an additional page sanctimoniously explaining why it's not necessary to answer this question, even though it's likely to come up again. We've seen this before (including in some of the links above). This Court hates to actually address such questions, all too frequently saying simply "this rule is the wrong rule" and leaving a massive hurricane of a mess behind as everyone tries to guess at what the right rule might be. I'm sure, in some sort of "Supreme Court Justice On High" logic, this makes sense, but it leads to a dangerous world of uncertainty that seems helpful only to the lawyers. Justice Alito made this point in his own addition to the ruling (partial concurrence/partial dissent):

Today, the Court announces: It is emphatically the prerogative of this Court to say only what the law is not.

The Court's disposition of this case is certain to cause confusion and serious problems. Attorneys and judges need to know which mental state is required for conviction…. This case squarely presents that issue, but the Court provides only a partial answer. The Court holds that the jury instructions in this case were defective because they required only negligence in conveying a threat. But the Court refuses to explain what type of intent was necessary. Did the jury need to find that Elonis had the purpose of conveying a true threat? Was it enough if he knew that his words conveyed such a threat? Would recklessness suffice? The Court declines to say. Attorneys and judges are left to guess.

This will have regrettable consequences. While this Court has the luxury of choosing its docket, lower courts and juries are not so fortunate. They must actually decide cases, and this means applying a standard. If purpose or knowledge is needed and a district court instructs the jury that recklessness suffices, a defendant may be wrongly convicted. On the other hand, if recklessness is enough, and the jury is told that conviction requires proof of more, a guilty defendant may go free. We granted review in this case to resolve a disagreement among the Circuits. But the Court has compounded—not clarified—the confusion.

So, stay tuned, as we’ll likely be back with the Supreme Court having to review this very issue once again within the next few years. And, in the meantime, there will be a huge mess in a variety of courts because no one knows what the proper rules are.

Filed Under: anthony elonis, first amendment, free speech, negligence, rap lyrics, reasonable person, supreme court, true threats