
Ninth Circuit: No Immunity For Officer Who Shot Peaceful Protester In The Groin

from the thugs-gonna-thug dept

When it comes to the Supreme Court-created (and diluted) qualified immunity doctrine, the Ninth Circuit leads the league in rejections. The Fifth Circuit is its polar opposite, more willing to forgive cops than uphold civil rights. This case would have been dead on arrival in the Fifth. But since it landed in the Ninth, the victim of excessive force, protester Derrick Sanderlin, will see his lawsuit move forward.

Like millions of other people, California resident Derrick Sanderlin participated in an anti-police violence protest prompted by the murder of Minnesota resident George Floyd by Minneapolis police officer Derek Chauvin. And, like many peaceful protesters demonstrating against police violence, his acts of protest were greeted with police violence.

While participating in what even San Jose police officers referred to as a “peaceful” protest (although it did turn more damaging and violent later), Sanderlin had the misfortune of running into San Jose PD officer Michael Panighetti. This interaction was captured on the officer’s body camera, which provides the narrative recounted by the Ninth Circuit Appeals Court in its decision [PDF].

The first part of the narrative, though, was provided by the officer during his deposition, and it details alleged actions that were not captured by his camera:

Panighetti testified in his deposition that as he approached the intersection of Santa Clara Street and 5th Street, he had been monitoring an individual wearing a San Francisco 49ers jersey who had been throwing objects at police officers and hiding behind corners. When they reached the intersection, Panighetti observed that individual in the 49ers jersey, along with another person, hiding behind the corner of a building. Panighetti claimed that he was able to continue to visually monitor the two subjects because the building was glass all around the first floor. Panighetti then explained that he saw those two subjects holding gallon paint cans, and he believed they were poised to throw the paint cans at police officers. At one point, the subjects pushed a dumpster into the intersection and attempted to hide behind it.

The plaintiff in this lawsuit was neither of the people the officer claimed to have observed engaging in some anti-police violence of their own. It didn’t matter. Sanderlin became the focus of violent reprisal simply because he temporarily stood in front of the officer. That encounter was captured on camera.

At that point, a man later identified as Sanderlin moved into the sidewalk while carrying a sign over his head. Panighetti claimed that Sanderlin purposefully placed himself in front of officers to block the two subjects holding paint cans and hiding behind the dumpster. In video footage captured by Panighetti’s body-worn camera, Sanderlin is seen standing on the sidewalk holding a sign, and a dumpster is behind him. The video does not clearly show the two subjects allegedly holding paint cans that Panighetti describes, though there is clearly a chaotic scene unfolding around this encounter. In the video, Panighetti can be heard yelling to Sanderlin, “I’m going to hit you, dude. You better move!” Sanderlin fails to immediately comply, continuing to stand in the sidewalk holding his sign over his head. After only a few seconds, Panighetti fires a 40mm foam baton at Sanderlin, striking him in the groin area. Sanderlin recoils from the impact and appears to take a few steps, shifting his weight between his feet in pain. He then limps out of the middle of the sidewalk, at which point he is no longer visible in the video footage.

Just because it was a “less lethal” munition doesn’t mean the round fired by the officer didn’t do serious damage to Sanderlin. The full-power blow from the 40mm baton round led to severe injuries that required emergency surgery.

The recording shows Panighetti ordering Sanderlin to “move,” coupled with an informal warning that the officer was planning “to hit you, dude” if Sanderlin refused to comply. As the narrative notes, Sanderlin wasn’t really given much chance to consider his options (i.e., whether the officer actually intended to strike him, and whether he was obliged to comply with this vague order), much less comply, before being shot by the officer.

Sanderlin testified he never heard an order to move. He also testified he had done nothing but peacefully protest and that his standing in front of the officer was nothing more than First Amendment activity, rather than the supposed covering up of criminal activity by other protesters behind him. And, according to Sanderlin’s assertions, no officer offered to render aid as he lay in the street after being shot by Panighetti. Instead, it was his wife who discovered him some time later and took him to the hospital to be treated.

The lower court said qualified immunity did not cover the officer’s actions, which it found to be a clear violation of rights. And, as can be determined from the narrative above, certain facts are still in conflict. With everything still in dispute, the lawsuit needed to be placed in front of a jury.

The Ninth Circuit affirmed the lower court’s decision. This needs to go to trial. Qualified immunity simply does not cover this apparent violation of Sanderlin’s right to be free from excessive force deployments.

Then there’s where Officer Panighetti chose to shoot Sanderlin — a move that directly contradicted his SJPD training and, at minimum, was a negligent use of force:

According to SJPD training materials, “Less Lethal Impact munitions” like the 40mm foam baton Panighetti fired “are used to: Disorient [and] Incapacitate . . . Injury should be expected.” The training materials further reveal that projectiles that are fired “to ‘Center Mass’ provide for the highest probability of causing immediate incapacitation, but also have the potential to cause serious injury or death.” Panighetti himself explained that he was trained to use the 40mm launcher “to incapacitate a suspect” posing a safety risk. The record also shows that the groin, where Sanderlin was shot, is considered an area of particularly high risk of injury, and the training materials specifically indicate that “[t]he groin area should not be intentionally targeted.”

Pretty damning, especially when considered in the context of the officer’s assertions about his own training and expertise, offered in hopes of explaining why it was so necessary to shoot one person in the crotch just so he could move past the peaceful protester to get to the less peaceful protesters allegedly standing behind him.

The Ninth says the question of whether this deployment of force was “reasonable” is something a jury should decide. From what the court has seen, it appears it wasn’t. And that’s the legal standard, one often ignored by federal courts at multiple levels: if there are questions that can’t be immediately answered by case law, it’s up to a jury of the defendant’s peers to answer them. The Ninth Circuit is simply doing what other courts often won’t: allowing juries to determine the winners and losers of civil rights cases, rather than just letting cops who don’t feel like playing walk away from the damage they’ve caused to others. Panighetti will now need to convince a jury his actions were justified. Given the circumstances, it seems unlikely that will happen.

Filed Under: 4th amendment, 9th circuit, derrick sanderlin, excessive force, george floyd, michael panighetti, qualified immunity, san jose

More Of RFK Jr.’s ‘Don’t Moderate Me, Bro’ Cases Are Laughed Out Of Court

from the that's-not-how-any-of-this-works dept

In the last month, I wrote about two of Robert F. Kennedy Jr.’s batshit crazy lawsuits over him being very, very mad that social media companies keep moderating or limiting the spread of his dangerous anti-vax nonsense. In one, the Ninth Circuit had to explain (not for the first time) to RFK Jr. and his disgraced Yale Law professor lawyer, Jed Rubenfeld, that Meta fact-checking RFK Jr. does not violate the First Amendment, and that Section 230 does not turn every internet company into a state actor.

In the other case, one of the MAGA world’s favorite judges ignored both the facts and the scolding he just got from the Supreme Court to insist that the Biden administration has been trying to censor RFK Jr., a thing that has not actually happened.

But Professor Eric Goldman reminds me that there were two other cases involving RFK Jr.’s anger at being moderated that had developments I hadn’t covered. And both of them were, thankfully, not in the courtrooms of partisan judges who live in fantasylands.

First, we had a case in which RFK Jr. sued Meta again. I had mentioned this case when it was filed. The Ninth Circuit case mentioned above was also against Meta, but RFK Jr. decided to try yet again, this time claiming that Meta’s efforts to restrict a documentary about him violated his First Amendment rights.

If you don’t recall, Meta very temporarily blocked the ability to share the documentary, which they chalked up to a glitch. They fixed it very quickly. But RFK Jr. insisted it was a deliberate attempt to silence him, citing Meta’s AI chatbot as giving them the smoking gun (yes, they really did this, even though the chatbot is just a stochastic parrot spewing whatever it thinks will answer a question).

What I had missed was that district court Judge William Orrick, who is not known for suffering fools gladly, rejected RFK Jr.’s demands for a preliminary injunction. Judge Orrick is, shall we say, less than impressed by RFK Jr. returning to the well for another attempt at this specious argument, citing the very Ninth Circuit case that RFK Jr. just lost in his other case against Meta.

The plaintiffs assert that they are likely to succeed on the merits of their First Amendment claim, which is that Meta violated their rights to free speech by censoring their posts and accounts on Meta’s platforms. But the First Amendment “‘prohibits only governmental abridgment of speech’ and ‘does not prohibit private abridgment of speech.’” Children’s Health Def. v. Meta Platforms, Inc., —F. 4th—, No. 21-16210, 2024 WL 3734422, at *4 (9th Cir. Aug. 9, 2024) (first quoting Manhattan Cmty. Access Corp. v. Halleck, 587 U.S. 802, 808 (2019); and then citing Prager Univ. v. Google LLC, 951 F.3d 991, 996 (9th Cir. 2020)). Because there is no apparent state action, this claim is unlikely to succeed.

RFK Jr. twists himself into a pretzel shape to try to claim that Meta is magically a state actor, but the court has to remind him that these arguments are quite stupid.

The Ninth Circuit recently has twice affirmed dismissal of claims filed by plaintiffs alleging that social media platforms violated the plaintiffs’ First Amendment rights by flagging, removing, or otherwise “censoring” the plaintiffs’ content shared on those platforms. See Children’s Health, 2024 WL 3734422 at *2–4; O’Handley, 62 F.4th at 1153–55. In both cases, the Ninth Circuit held that the plaintiffs’ claims failed at the first step of the state action framework because of “the simple fact” that the defendants “acted in accordance with [their] own content-moderation policy,” not with any government policy….

The only difference between those cases and this one is that here, the plaintiffs seem to allege that the “specific” harmful conduct is Meta’s censorship itself, rather than its policy of censoring. Based on the documents submitted and allegations made, that is a distinction without a difference.

RFK Jr. tried to argue that the ruling by Judge Doughty in Louisiana supports his position, but Judge Orrick wasn’t born yesterday and can actually read what the Supreme Court wrote in the Murthy decision rejecting these kinds of arguments.

The Murthy opinion makes my decision here straightforward. Murthy rejected Missouri’s factual findings and specifically explained that the Missouri evidence did not show that the federal government caused the content moderation decisions. Yet here, the plaintiffs rely on Missouri as their evidence that a state rule caused the defendants’ alleged censorship actions. Even if I accepted the vacated district court order as evidence here—which I do not—the Supreme Court has plainly explained why it does not support the plaintiffs’ argument.

Even though he notes that he doesn’t even need to go down this road, Judge Orrick also explains why the whole “state actor” argument is nonsense:

The plaintiffs’ theory is that Meta and the government colluded or acted jointly, or the government coerced Meta, to remove content related to Kennedy’s 2024 presidential campaign from Meta’s platforms. The problem with that theory is again the lack of evidence. The Missouri and Kennedy findings were rejected by the Supreme Court, as explained above. And they—and the interim report—suggest at most a relationship or communications between Meta and the government about removal of COVID-19 misinformation in 2020 and 2021. Even if the plaintiffs proved that Meta and the government acted jointly, or colluded, or that Meta was coerced by the government to remove and flag COVID-19 misinformation three years ago, that says nothing about Meta’s relationship and communications with the government in 2024. Nor does it suggest that Meta and the government worked together to remove pro-Kennedy content from Meta’s platforms.

Because of this, the plaintiffs fail to show likelihood of success on the merits—or serious questions going to the merits—for any of the three possible state action prongs. They do not provide evidence or allegations of a “specific[]” agreement between Meta and the government to specifically accomplish the goal of removing Kennedy content from Meta platforms. See Children’s Health, 2024 WL 3734422, at *5 (describing joint action test and collecting cases). Nor do they show that the government exercised coercive power or “significant encouragement” for Meta to remove Kennedy-related content in 2024. Id. at *9–10 (describing coercion test and finding that allegations about Congressmembers’ public criticism of COVID-19 misinformation on social media sites was insufficient to show government coerced platforms to remove it). And for similar reasons, the plaintiffs do not establish a “sufficiently close nexus” between the government and the removal of Kennedy-related content from Meta’s platforms. Id. at *5. Their First Amendment claim accordingly fails at step two of the state action inquiry. It is far from likely to succeed on the merits.

RFK Jr. also made a Voting Rights Act claim: that removing the documentary about him somehow interfered with people’s right to vote for him. But the court notes that this argument is doomed by the fact that Meta showed the blocking of links was an accident, of the kind that happens all the time:

The defendants point to compelling evidence that the video links were incorrectly automatically flagged as a phishing attack, a “not uncommon” response by its automated software to newly created links with high traffic flow. Oppo. 5–6 (citing Mehta Decl. Ex. A ¶ 7). The defendants’ evidence shows that once the defendants were alerted to the problem, through channels set up specifically for that purpose, the links were restored, and the video was made (and is currently still) available on its platform. Mehta Decl. Ex. A. ¶¶ 4–8, Exs. M–Q. Though the plaintiffs say the removal of the video was an effort to coerce them to not urge people to vote for Kennedy, the defendants’ competing evidence shows that it was a technological glitch and that the plaintiffs were aware of this glitch because they reported the problem in the first place. And if the plaintiffs were aware that a tech issue caused the removal of the videos, with that “context” it would probably not be reasonable for them to believe the video links were removed in an effort to coerce or intimidate them.
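To make the “technological glitch” explanation a little more concrete: automated anti-phishing systems routinely flag brand-new URLs that suddenly attract heavy traffic, because that pattern also describes a phishing campaign. Here’s a minimal sketch of how such a heuristic might work; the thresholds and names are entirely hypothetical, since Meta’s actual system is not public:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical thresholds; a real system would tune these constantly.
NEW_LINK_WINDOW = timedelta(hours=48)  # how recently the link was created
SHARE_SPIKE = 10_000                   # share volume treated as a surge

@dataclass
class LinkStats:
    url: str
    first_seen: datetime
    share_count: int
    cleared_by_review: bool = False  # set once a report is reviewed

def looks_like_phishing(link: LinkStats, now: datetime) -> bool:
    """Flag brand-new links that are spreading unusually fast.

    A legitimate, heavily promoted link (say, a documentary released
    with a publicity push) trips the same signals as a phishing blast,
    which is how this kind of false positive happens.
    """
    if link.cleared_by_review:
        return False
    is_new = (now - link.first_seen) < NEW_LINK_WINDOW
    is_surging = link.share_count > SHARE_SPIKE
    return is_new and is_surging
```

That false-positive-then-restore loop (automated flag, user report, human review, links restored) is exactly the sequence the court describes, which is part of why the “glitch” explanation is so plausible.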

The court is also not impressed by the argument that other people (not parties to the case) had accounts removed or limited for sharing support for RFK Jr. As the judge makes clear, RFK Jr. doesn’t get to sue someone over a claim that they intimidated someone else (for which there isn’t any actual evidence anyway).

Third, the plaintiffs submit evidence that other peoples’ accounts were censored, removed, or threatened with removal when they posted any sort of support for Kennedy and his candidacy. See, e.g., Repl. 1:13–24; [Dkt No. 29-1] Exs. A, B. The defendants fail to respond to these allegations in their opposition, but the reason for this failure seems obvious. Section 11(b) provides a private right of action for Person A where Person B has intimidated, threatened, or coerced Person A “for urging or aiding any person to vote.” 52 U.S.C.A. § 10307(b). It does not on its face, or in any case law I found or the parties cite, provide a private right of action for Person C to sue Person B for intimidating, threatening, or coercing Person A “for urging or aiding any person to vote.” Id. Using that example, the three plaintiffs would be “Person C.” Their evidence very well might suggest that Meta is censoring other users’ pro-Kennedy content. But those users are not plaintiffs in this case and are not before me now.

Importantly, the plaintiffs had plenty of time and opportunity to add any of those affected users as new plaintiffs in this case, as they added Reed Kraus between filing the initial complaint and filing the AC and current motion. But they did not do so. Nor do they allege or argue that AV24 has some sort of organizational or third-party standing to assert the claims of those affected users. And while they seem to say that Kennedy himself is affected because that evidence shows Meta users are being coerced or threatened for urging people to vote for him, the effect on the candidate is not what § 11(b) protects. Accordingly, this evidence does not support the plaintiffs’ assertions. The plaintiffs, therefore, fail to counter the compelling evidence and reasons that the defendants identify in explanation for the alleged censorship.

More critically, the plaintiffs do not deny the defendants’ portrayal of and reasons for the defendants’ actions. The plaintiffs fail to incorporate those reasons into their assessment of how a “reasonable” recipient of Meta’s communications would interpret the communications in “context.” See Wohl III, 661 F. Supp. 3d at 113. Based on the evidence provided so far, a reasonable recipient of Meta’s communications would be unlikely to view them as even related to voting, let alone as coercing, threatening, or intimidating the recipient with respect to urging others to vote.

Towards the end of the ruling, the court finally gets to Section 230 and notes that, even setting aside everything above, the case is probably going nowhere, because Section 230 makes Meta immune from liability for its moderation actions. The ruling didn’t hinge on that, however, because neither side really went deep on the 230 arguments.

As for the other RFK Jr. case, I had forgotten that he had also sued Google/YouTube over its moderation efforts. At the end of last month, the Ninth Circuit also upheld a lower court ruling in that case, in an unpublished four-page opinion in which the three-judge panel made quick work of the nonsense lawsuit:

Google asserts that it is a private entity with its own First Amendment rights and that it removed Kennedy’s videos on its own volition pursuant to its own misinformation policy and not at the behest of the federal government. Kennedy has not rebutted Google’s claim that it exercised its independent editorial choice in removing his videos. Nor has Kennedy identified any specific communications from a federal official to Google concerning the removed Kennedy videos, or identified any threatening or coercive communication, veiled or otherwise, from a federal official to Google concerning Kennedy. As Kennedy has not shown that Google acted as a state actor in removing his videos, his invocation of First Amendment rights is misplaced. The district court’s denial of a preliminary injunction is AFFIRMED.

If RFK Jr. intends to appeal the latest Meta ruling (and given the history of his frivolous litigation, the chances seem quite high that he will), the Ninth Circuit might want to just repurpose this paragraph and swap out “Google” for “Meta” each time.

Now, if only the Fifth Circuit would learn a lesson or two from the Ninth Circuit (or the Supreme Court), we could finally dispense with the one case that ridiculously went in RFK Jr.’s favor.

Filed Under: 1st amendment, 9th circuit, content moderation, free speech, jed rubenfeld, rfk jr., state actor, voting rights act, william orrick
Companies: google, meta, youtube

In YOLO Ruling, Ninth Circuit Cracks Open Pandora’s Box For Section 230

from the you-only-destroy-the-internet-once dept

The Ninth Circuit appeals court seems to have figured out the best way to “reform” Section 230: by pretending it doesn’t apply to some stuff that the judges there just randomly decide it doesn’t apply to anymore. At least that’s my reading of the recent ruling against YOLO Technologies.

Now, let’s start by making something clear: YOLO Technologies appears to be a horrible company, making a horrible service, run by horrible people. We’ll get more into the details of that below. I completely understand the instinctual desire that YOLO should lose. That said, there are elements of this ruling that could lead to dangerous results for other services that aren’t horrible. And that’s what always worries me.

First, a quick history lesson: over fifteen years ago, we wrote about the Ninth Circuit’s ruling in Barnes v. Yahoo. At the time, and in the years since, that ruling seemed potentially problematic. The case revolved around another horrible situation, in which an ex-boyfriend posted fake profiles of Barnes. Barnes contacted Yahoo and reached a Director of Communications who promised to “take care of” the fake profiles.

However, the profiles remained up. Barnes sued, and Yahoo used 230 to try to get out of it. Much of the Barnes decision is very good. It’s an early decision that makes clear Section 230 protects websites’ publishing activity around third-party content. It clearly debunks the completely backwards notion that you are “either a platform or a publisher” and that only “platforms” get 230 protections. In Barnes, the court is quite clear that what Yahoo is doing is publishing activity, but since Yahoo is an interactive computer service and the underlying content comes from a third party, it cannot be held liable as the publisher for that publishing activity under Section 230.

And yet, the court still sided with Barnes, noting that the direct promise from the Yahoo employee to take care of the content went outside of traditional publishing activity and created an enforceable promise, and therefore a duty to live up to it.

In the fifteen years since that ruling, there have been various attempts to use Barnes to get around Section 230, but most have failed, as they didn’t have that clear promise like Barnes had. However, in the last couple of months, it seems the Ninth Circuit has decided that the “promise” part of Barnes can be used more broadly, and that could create a mess.

In YOLO, the company made an add-on to Snapchat that let users post questions and polls on the app. Other users could respond anonymously (they also had the option to reveal who they were). The app was very popular, but it shouldn’t be a huge surprise that some users used it to harass and abuse others.
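One technical detail worth keeping in mind for everything that follows: “anonymous” in apps like this almost always means anonymous to other users, not to the service, which still stores who sent each message. That’s what makes an unmasking promise feasible in the first place. Here’s a minimal sketch of what such a data model might look like (entirely hypothetical; YOLO’s actual schema isn’t public):

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AnonymousReply:
    reply_id: str
    text: str
    sender_id: str          # stored server-side, never shown to recipients
    revealed: bool = False  # sender may opt in to revealing themselves

@dataclass
class MessageStore:
    replies: Dict[str, AnonymousReply] = field(default_factory=dict)

    def display_sender(self, reply_id: str) -> str:
        """What the recipient sees: a name only if the sender opted in."""
        reply = self.replies[reply_id]
        return reply.sender_id if reply.revealed else "Anonymous"

    def unmask(self, reply_id: str) -> str:
        """The capability YOLO promised harassment victims: the service
        knows who sent every message and can reveal it on request."""
        return self.replies[reply_id].sender_id
```

In other words, honoring the promise was not a technical impossibility; the question in the case is about the legal consequences of making that promise and then ignoring it.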

However, YOLO claimed publicly, and in how it represented the service to users who signed up, that one way it would deal with harassment and abuse would be to reveal the identities of those users. As the Ninth Circuit explains:

As a hedge against these potential problems, YOLO added two “statements” to its application: a notification to new users promising that they would be “banned for any inappropriate usage,” and another promising to unmask the identity of any user who “sen[t] harassing messages” to others.

But YOLO either never actually intended to live up to this or simply became overwhelmed, because it appears never to have done it.

Now, this is always a bit tricky, because what some users consider abuse and harassment, a service (or other users!) might not consider to be abuse and harassment. But, in this case, it seems pretty clear that whatever trust & safety practices YOLO had were not living up to the notification it gave to users:

All four were inundated with harassing, obscene, and bullying messages including “physical threats, obscene sexual messages and propositions, and other humiliating comments.” Users messaged A.C. suggesting that she kill herself, just as her brother had done. A.O. was sent a sexual message, and her friend was told she was a “whore” and “boy-obsessed.” A.K. received death threats, was falsely accused of drug use, mocked for donating her hair to a cancer charity, and exhorted to “go kill [her]self,” which she seriously considered. She suffered for years thereafter. Carson Bride was subjected to constant humiliating messages, many sexually explicit and highly disturbing.

These users, and their families, sought to unmask the abusers. Considering that YOLO told users that’s how abuse and harassment would be dealt with, it wasn’t crazy for them to think that might work. But it did not. At all.

A.K. attempted to utilize YOLO’s promised unmasking feature but received no response. Carson searched the internet diligently for ways to unmask the individuals sending him harassing messages, with no success. Carson’s parents continued his efforts after his death, first using YOLO’s “Contact Us” form on its Customer Support page approximately two weeks after his death. There was no answer. Approximately three months later, his mother Kristin Bride sent another message, this time to YOLO’s law enforcement email, detailing what happened to Carson and the messages he received in the days before his death. The email message bounced back as undeliverable because the email address was invalid. She sent the same to the customer service email and received an automated response promising an answer that never came. Approximately three months later, Kristin reached out to a professional friend who contacted YOLO’s CEO on LinkedIn, a professional networking site, with no success. She also reached out again to YOLO’s law enforcement email, with the same result as before.

So, uh, yeah. Not great! Pretty terrible. And so there’s every reason to want YOLO to be in trouble here. The court determines that YOLO’s statements about unmasking harassers meant that it had made a promise, a la Barnes, and therefore had effectively violated an obligation which was separate from its publishing activities that were protected by Section 230.

Turning first to Plaintiffs’ misrepresentation claims, we find that Barnes controls. YOLO’s representation to its users that it would unmask and ban abusive users is sufficiently analogous to Yahoo’s promise to remove an offensive profile. Plaintiffs seek to hold YOLO accountable for a promise or representation, and not for failure to take certain moderation actions. Specifically, Plaintiffs allege that YOLO represented to anyone who downloaded its app that it would not tolerate “objectionable content or abusive users” and would reveal the identities of anyone violating these terms. They further allege that all Plaintiffs relied on this statement when they elected to use YOLO’s app, but that YOLO never took any action, even when directly requested to by A.K. In fact, considering YOLO’s staff size compared to its user body, it is doubtful that YOLO ever intended to act on its own representation.

And, again, given all the details, this feels understandable. But I still worry about where the boundaries are here. We’ve seen plenty of other cases. For example, six years ago, when the white supremacist Jared Taylor sued Twitter for banning him, he argued that it could not ban users because Twitter had said that it “believe[s] in free expression and believe[s] every voice has the power to impact the world.”

So it seems like there needs to be some clear line. In Barnes, there was a direct communication between the person and the company, in which an executive at the company personally made a promise to Barnes. That’s not the case in the YOLO ruling.

And when we combine the YOLO ruling with the Ninth Circuit’s ruling in the Calise case back in June, things get a little more worrisome. I didn’t get a chance to cover that ruling when it came out, but Eric Goldman did a deep dive on it and why it’s scary. That case also uses Barnes’ idea of a “promise” by the company to mean a “duty” to act that is outside of Section 230.

In that case, it was regarding scammy ads from Chinese advertisers. The court held that Meta had a “duty,” based on its public comments, to somehow police advertisements, and that this duty fell outside of its Section 230 protections. That ruling also contained a separate concurrence (oddly, written by the same judge who wrote the opinion, who apparently couldn’t get the others to agree to it) that just out and out trashed Section 230 and basically made it clear he hated it.

And thus, as Eric Goldman eloquently puts it, you have the Ninth Circuit “swiss-cheesing” Section 230 by punching all kinds of holes in it, enabling more questionable lawsuits to be brought, arguing that this or that statement by a company or a company employee represented some form of a promise under Barnes, and therefore a “duty” outside of Section 230.

In summary, Barnes is on all fours with Plaintiffs’ misrepresentation claims here. YOLO repeatedly informed users that it would unmask and ban users who violated the terms of service. Yet it never did so, and may have never intended to. Plaintiffs seek to enforce that promise—made multiple times to them and upon which they relied—to unmask their tormentors. While yes, online content is involved in these facts, and content moderation is one possible solution for YOLO to fulfill its promise, the underlying duty being invoked by the Plaintiffs, according to Calise, is the promise itself. See Barnes, 570 F.3d at 1106–09. Therefore, the misrepresentation claims survive.

And maybe that feels right in this case, where YOLO’s behavior is so egregious. But it’s unclear where this theory ends, and that leaves it wide open for abuse. For example, how would this case have turned out if the messages sent to the kids weren’t actually “abusive” or “harassing”? I’m not saying that happened here, as it seems pretty clear that they were. But imagine a hypothetical where many people did not feel the behavior was actually abusive, but the user on the receiving end argued that it was. Perhaps they even claimed abuse as a way of being abusive right back.

Under this ruling, would YOLO still need to reveal who the anonymous user was to avoid liability?

That seems… problematic?

However, the real lesson here is that anyone who runs a website now needs to be way more careful about what they say regarding how they moderate or do anything. Because anything they say could be used in court as an argument for why Section 230 doesn’t apply. Indeed, I could see this conflicting with other laws that require websites to be more transparent about their moderation practices, since the very transparency those laws demand could now strip away 230 protections.

And I really worry about how this plays out in situations where a platform changes trust & safety policies mid-stream. I have no idea how that works out. What if, when you signed up, the platform had a policy saying it would remove certain kinds of content, but later decided to change that policy because it was ineffective? Would someone who signed up under the old policy regime now claim that the new policy regime violates the original promise that got them to sign up?

On top of that, I fear that this will lead companies to be way less transparent about their moderation policies and practices. Because now, being transparent about moderation policies means that anyone who thinks you didn’t enforce them properly might be able to sue and get around Section 230 by arguing you didn’t fulfill the duty you promised.

All that said, there is some other good language in this decision. The plaintiffs also tried a “product liability” claim, which has become a hipster legal strategy for many plaintiffs’ lawyers to try to get around Section 230. It has worked in some cases, but it fails here.

At root, all Plaintiffs’ product liability theories attempt to hold YOLO responsible for users’ speech or YOLO’s decision to publish it. For example, the negligent design claim faults YOLO for creating an app with an “unreasonable risk of harm.” What is that harm but the harassing and bullying posts of others? Similarly, the failure to warn claim faults YOLO for not mitigating, in some way, the harmful effects of the harassing and bullying content. This is essentially faulting YOLO for not moderating content in some way, whether through deletion, change, or suppression.

The court also makes clear, contrary to the claims we keep hearing, that an app having anonymous messaging as a feature isn’t an obvious liability. People have made this claim in many cases, but the court clearly rejects the idea:

Here, Plaintiffs allege that anonymity itself creates an unreasonable risk of harm. But we refuse to endorse a theory that would classify anonymity as a per se inherently unreasonable risk to sustain a theory of product liability. First, unlike in Lemmon, where the dangerous activity the alleged defective design incentivized was the dangerous behavior of speeding, here, the activity encouraged is the sharing of messages between users. See id. Second, anonymity is not only a cornerstone of much internet speech, but it is also easily achieved. After all, verification of a user’s information through government-issued ID is rare on the internet. Thus we cannot say that this feature was uniquely or unreasonably dangerous.

So, this decision is not the worst in the world, and it does seem targeted at a truly awful company. But poking a hole like this in Section 230 so frequently leads to others piling through that hole and widening it.

And one legitimate fear of a ruling like this is that it will actually harm efforts to get transparency in moderation practices, because the more companies say, the more liability they may face.

Filed Under: 9th circuit, anonymity, duty, duty of care, promises, promissory estoppel, section 230, terms of service
Companies: yolo

CA Governor Newsom And AG Bonta Pretend Court Agreed With Them On Kids Code

from the you-don't-have-to-do-this dept

Dear California Governor Newsom and Attorney General Bonta: you really don’t have to be the opposite end of the extremists in Florida and Texas. You don’t have to lie to your constituents and pretend losses are wins. Really. Trust me.

You may recall that the Attorneys General of Texas and Florida have taken to lying to the public when they lose big court cases. There was the time when Texas AG Ken Paxton claimed “a win” when the Supreme Court did exactly the opposite of what he had asked it to do.

Or the time when Florida AG Ashley Moody declared victory after the Supreme Court made it quite clear that Florida’s social media law was unconstitutional, but sent it back to the lower court to review on procedural grounds.

And now it looks like Newsom and Bonta are doing the same sort of thing, claiming victory out of an obvious loss, just on the basis of some procedural clean-up (ironically, the identical procedural clean-up that Moody declared victory over).

As you’ll recall, we just wrote about the Ninth Circuit rejecting California’s Age Appropriate Design Code (AADC) as an obvious First Amendment violation (just as we had warned both Bonta and Newsom, only to be ignored). However, because of the results in the Supreme Court decision in Moody, the Ninth Circuit sent some parts of the law back to the lower court.

The details here are kind of important. In the Moody decision, the Supreme Court said for there to be a “facial challenge” against an entire law (i.e., a lawsuit saying “this whole law is unconstitutional, throw it out”), the lower courts have to consider every part of the law and whether or not every aspect and every possible application is unconstitutional. In the Texas and Florida cases, the Supreme Court noted that the lower courts really only reviewed parts of those laws and how they might impact a few companies, rather than really evaluating whether or not some of the laws were salvageable.

However, the ruling also made quite clear that any law that seeks to tell social media companies how to moderate is almost certainly a violation of the First Amendment.

In the challenge to the AADC, most of the case (partly at the request of the district court judge!) focused on the “Data Protection Impact Assessment” (DPIA) requirements of the law. This was the main part of the law, and the part that would require websites to justify every single feature they offer and explain how they will “mitigate” any potential risks to kids. The terrible way that this was drafted would almost certainly require websites to come up with plans to remove content the California government disapproved of, as both courts recognized.

But the AADC had a broader scope than the DPIA section.

So, the Ninth Circuit panel sent part of the law back to the lower court following the requirements in the Moody ruling. They said the lower court had to do the full facial challenge, exploring the entirety of the law and how it might be applied, rather than throwing out the whole law immediately.

However (and this is the important part), the Ninth Circuit said that on the DPIA point specifically, which is the crux of the law, there was enough briefing and analysis to show that it was obviously a violation of the First Amendment. It upheld the injunction barring that part of the law from going into effect.

That doesn’t mean the rest of the law is good or constitutional. It just means that now the lower court will need to examine the rest of the law and how it might be applied before potentially issuing another injunction.

In no way and in no world is this a “win” for California.

But you wouldn’t know that to hear Newsom or Bonta respond to the news. They put out a statement that suggests they either don’t know what they’re talking about or they’re hoping the public is too stupid to realize this. It’s very likely the latter, but it’s a terrible look for both Newsom and Bonta. It suggests they’re so deep in their own bullshit that they can’t be honest with the American public. They supported an unconstitutional bill that has now been found to be unconstitutional by both the district and the appeals court.

First up, Newsom:

“California enacted this nation-leading law to shield kids from predatory practices. Instead of adopting these commonsense protections, NetChoice chose to sue — yet today, the Court largely sided with us. It’s time for NetChoice to drop this reckless lawsuit and support safeguards that protect our kids’ safety and privacy.”

Except, dude, they did not “largely side” with you. They largely sided with NetChoice and said there’s not enough briefing on the rest. I mean, read the fucking ruling, Governor:

We agree with NetChoice that it is likely to succeed in showing that the CAADCA’s requirement that covered businesses opine on and mitigate the risk that children may be exposed to harmful or potentially harmful materials online, Cal. Civ. Code §§ 1798.99.31(a)(1)–(2), facially violates the First Amendment. We therefore affirm the district court’s decision to enjoin the enforcement of that requirement, id., and the other provisions that are not grammatically severable from it…

And, no, the law does not shield kids from predatory practices. That’s the whole point that both courts have explained to you: the law pressures websites to remove content, not change conduct.

So, why would NetChoice drop this lawsuit that it is winning? Especially when letting this law go into effect will not protect kids’ safety and privacy, and would actually likely harm both, by encouraging privacy-destroying age verification?

As for Bonta:

“We’re pleased that the Ninth Circuit reversed the majority of the district court’s injunction, which blocked California’s Age-Appropriate Design Code Act from going into effect. The California Department of Justice remains committed to protecting our kids’ privacy and safety from companies that seek to exploit their online experiences for profit.”

Yeah, again, it did not “reverse the majority.” The court upheld the injunction against the key part of the law, the only part that was really debated in the lower court. It sent the rest back for further briefing, and those provisions could still be thrown out once the judges see what nonsense you’ve been pushing.

It wasn’t entirely surprising when Paxton and Moody pulled this kind of shit. After all, the GOP has made it clear that they’re the party of “alternative facts.” But the Democrats don’t need to do the same at the other end of the spectrum. We’ve already seen that Newsom’s instincts are to copy the worst of the GOP, but in favor of policies he likes. This is unfortunate. We don’t need insufferable hacks running both major political parties.

Look, is it so crazy to just ask for our politicians to not fucking lie to their constituents? If they can’t be honest about basic shit like this, what else are they lying about? You lost a case because you supported a bad law. Suck it up and admit it.

Filed Under: 9th circuit, aadc, ab 2273, california, dpia, facial challenge, gavin newsom, lies, rob bonta
Companies: netchoice

Court Sees Through California’s ‘Protect The Children’ Ruse, Strikes Down Kids Code

from the gee,-who-could-have-predicted dept

Friday morning gave us a nice victory for free speech in the 9th Circuit, where the appeals court panel affirmed most of the district court’s ruling finding California’s “Age Appropriate Design Code” unconstitutional because it regulates speech.

There’s a fair bit of background here that’s worth going over, so bear with me. California’s Age Appropriate Design Code advanced through the California legislature somewhat quietly, with little opposition. Many of the bigger companies, like Meta and Google, were said to support it, mainly because they knew that, with their buildings full of lawyers, they could easily comply, whereas smaller competitors would be screwed.

Indeed, for a period of time it felt like only Professor Eric Goldman and I were screaming about the problems with the law. The law was drafted in part by a British Baroness and Hollywood movie director who fell hard for the moral panic that says the internet and mobile phones are obviously evil for kids. Despite the lack of actual evidence supporting this, she has been pushing for laws in the UK and America to suppress speech she finds harmful to kids.

In the US, some of us pointed out how this violates the First Amendment. I also pointed out that the law is literally impossible to comply with for smaller sites like Techdirt.

The Baroness and the California legislators (who seem oddly deferential to her) tried to get around the obvious First Amendment issues by insisting that the bill was about conduct and design and not about speech. But as we pointed out, that was obviously a smokescreen. The only way to truly comply with the law was to suppress speech that politicians might later deem harmful to children.

California Governor Gavin Newsom eagerly signed the bill into law, wanting to get some headlines about how he was “protecting the children.” When NetChoice challenged the law, Newsom sent them a very threatening letter, demanding they drop the lawsuit. Thankfully, they did not, and the court saw through the ruse and found the entire bill unconstitutional for the exact reasons we had warned the California government about.

The judge recognized that the bill required the removal of speech, despite California’s claim that it was about conduct and privacy. California (of course) appealed, and now we have the 9th Circuit, which has mostly (though not entirely) agreed with the district court.

The real wildcard in all of this was the Supreme Court’s decision last month in what is now called the Moody case, which also involved NetChoice challenging Florida’s and Texas’ social media laws. The Supreme Court said that the cases should be litigated differently as a “facial challenge” rather than an “as-applied challenge” to the law. And it seems that decision is shaking up a bunch of these cases.

But here, the 9th Circuit interpreted it to mean that it could send part of the case back down to the lower court to do a more thorough analysis on some parts of the AADC that weren’t as clearly discussed or considered. In a “facial challenge,” the courts are supposed to consider all aspects of the law, and whether or not they all violate the Constitution, or if some of them are salvageable.

On the key point, though, the 9th Circuit panel rightly found that the AADC violates the First Amendment. Because no matter how much California claims that it’s about conduct, design, or privacy, everyone knows it’s really about regulating speech.

Specifically, they call out the DPIA requirement. This is a major portion of the law, which requires certain online businesses to create and file a “Data Protection Impact Assessment” with the California Attorney General. Part of that DPIA is that you have to explain how you plan to “mitigate the risk” that “potentially harmful content” will reach children (defined as anyone from age 0 to 18).

And we’d have to do that for every “feature” on the website. Do I think that a high school student might read Techdirt’s comments and come across something the AG finds harmful? If so, I’d first need to explain our plans to “mitigate” that risk. That sure sounds like a push for censorship.

And the Court agrees this is a problem. First, it’s a problem because of the compelled speech part of it:

We agree with NetChoice that the DPIA report requirement, codified at §§ 1798.99.31(a)(1)–(2) of the California Civil Code, triggers review under the First Amendment. First, the DPIA report requirement clearly compels speech by requiring covered businesses to opine on potential harm to children. It is well-established that the First Amendment protects “the right to refrain from speaking at all.”

California argued that because the DPIA reports are not public, it’s not compelled speech, but the Court (rightly) says that’s… not a thing:

The State makes much of the fact that the DPIA reports are not public documents and retain their confidential and privileged status even after being disclosed to the State, but the State provides no authority to explain why that fact would render the First Amendment wholly inapplicable to the requirement that businesses create them in the first place. On the contrary, the Supreme Court has recognized the First Amendment may apply even when the compelled speech need only be disclosed to the government. See Ams. for Prosperity Found. v. Bonta, 594 U.S. 595, 616 (2021). Accordingly, the district court did not err in concluding that the DPIA report requirement triggers First Amendment scrutiny because it compels protected speech.

More importantly, though, the Court recognizes that the entire underlying purpose of the DPIA system is to encourage websites to remove First Amendment-protected content:

Second, the DPIA report requirement invites First Amendment scrutiny because it deputizes covered businesses into serving as censors for the State. The Supreme Court has previously applied First Amendment scrutiny to laws that deputize private actors into determining whether material is suitable for kids. See Interstate Cir., Inc. v. City of Dallas, 390 U.S. 676, 678, 684 (1968) (recognizing that a film exhibitor’s First Amendment rights were implicated by a law requiring it to inform the government whether films were “suitable” for children). Moreover, the Supreme Court recently affirmed “that laws curtailing [] editorial choices [by online platforms] must meet the First Amendment’s requirements.” Moody, 144 S. Ct. at 2393.

The state’s argument that this analysis is unrelated to the underlying content is easily dismissed:

At oral argument, the State suggested companies could analyze the risk that children would be exposed to harmful or potentially harmful material without opining on what material is potentially harmful to children. However, a business cannot assess the likelihood that a child will be exposed to harmful or potentially harmful materials on its platform without first determining what constitutes harmful or potentially harmful material. To take the State’s own example, data profiling may cause a student who conducts research for a school project about eating disorders to see additional content about eating disorders. Unless the business assesses whether that additional content is “harmful or potentially harmful” to children (and thus opines on what sort of eating disorder content is harmful), it cannot determine whether that additional content poses a “risk of material detriment to children” under the CAADCA. Nor can a business take steps to “mitigate” the risk that children will view harmful or potentially harmful content if it has not identified what content should be blocked.

Accordingly, the district court was correct to conclude that the CAADCA’s DPIA report requirement regulates the speech of covered businesses and thus triggers review under the First Amendment.

I’ll note that this is an issue that is coming up in lots of other laws as well. For example, KOSA has defenders who insist that it is only focused on design, and not content. But at the same time, it talks about preventing harms around eating disorders, which is fundamentally a content issue, not a design issue.

The Court says that the DPIA requirement triggers strict scrutiny. The district court ruling had looked at it under intermediate scrutiny (a lower bar), found that it didn’t pass that bar, and said even if strict scrutiny is appropriate, it wouldn’t pass since it couldn’t even meet the lower bar. The Appeals court basically says we can jump straight to strict scrutiny:

Accordingly, the court assumed for the purposes of the preliminary injunction “that only the lesser standard of intermediate scrutiny for commercial speech applies” because the outcome of the analysis would be the same under both intermediate commercial speech scrutiny and strict scrutiny. Id. at 947–48. While we understand the district court’s caution against prejudicing the merits of the case at the preliminary injunction stage, there is no question that strict scrutiny, as opposed to mere commercial speech scrutiny, governs our review of the DPIA report requirement.

And, of course, the DPIA requirement fails strict scrutiny in part because it’s obviously not the least speech restrictive means of accomplishing its goals:

The State could have easily employed less restrictive means to accomplish its protective goals, such as by (1) incentivizing companies to offer voluntary content filters or application blockers, (2) educating children and parents on the importance of using such tools, and (3) relying on existing criminal laws that prohibit related unlawful conduct.

In this section, the court also responds to the overhyped fears that finding the DPIAs unconstitutional here would mean that they are similarly unconstitutional in other laws, such as California’s privacy law. But the court says “um, guys, one of these is about speech, and one is not.”

Tellingly, iLit compares the CAADCA’s DPIA report requirement with a supposedly “similar DPIA requirement” found in the CCPA, and proceeds to argue that the district court’s striking down of the DPIA report requirement in the CAADCA necessarily threatens the same requirement in the CCPA. But a plain reading of the relevant provisions of both laws reveals that they are not the same; indeed, they are vastly different in kind.

Under the CCPA, businesses that buy, receive, sell, or share the personal information of 10,000,000 or more consumers in a calendar year are required to disclose various metrics, including but not limited to the number of requests to delete, to correct, and to know consumers’ personal information, as well as the number of requests from consumers to opt out of the sale and sharing of their information. 11 Cal. Code Regs. tit. 11, § 7102(a); see Cal Civ. Code § 1798.185(a)(15)(B) (requiring businesses to conduct regular risk assessments regarding how they process “sensitive personal information”). That obligation to collect, retain, and disclose purely factual information about the number of privacy-related requests is a far cry from the CAADCA’s vague and onerous requirement that covered businesses opine on whether their services risk “material detriment to children” with a particular focus on whether they may result in children witnessing harmful or potentially harmful content online. A DPIA report requirement that compels businesses to measure and disclose to the government certain types of risks potentially created by their services might not create a problem. The problem here is that the risk that businesses must measure and disclose to the government is the risk that children will be exposed to disfavored speech online.

Then, the 9th Circuit basically punts on the other parts of the AADC. The court effectively says that since the briefing was so focused on the DPIA part of the law, and since (thanks to the Moody ruling) a facial challenge requires a full exploration of all aspects of the law, the rest should be sent back to the lower court:

As in Moody, the record needs further development to allow the district court to determine “the full range of activities the law[] cover[s].” Moody, 144 S. Ct. at 2397. But even for the remaining provision that is likely to trigger First Amendment scrutiny in every application because the plain language of the provision compels speech by covered businesses, see Cal. Civ. Code §§ 1798.99.31(a)(7), we cannot say, on this record, that a substantial majority of its applications are likely to fail First Amendment scrutiny.

For example, the Court notes that there’s a part of the law dealing with “dark patterns,” but there’s not enough information to know whether that could impact speech (spoiler alert: it absolutely can and will).

Still, the main news here is this: the law is still not going into effect. The Court recognizes that the DPIA part of the law is pretty clearly an unconstitutional violation of the First Amendment (just as some of us warned Newsom and the California legislature).

Maybe California should pay attention next time (he says sarcastically as a bunch of new bad bills are about to make their way to Newsom’s desk).

Filed Under: 9th circuit, aadc, ab 2273, age appropriate design code, california, dpia, gavin newsom, protect the children, rob bonta
Companies: netchoice

Court To RFK Jr.: Fact-Checking Doesn’t Violate 1st Amendment Nor Does Section 230 Make Meta A State Actor

from the that's-not-how-any-of-this-works dept

You may recall that RFK Jr.’s nonsense-peddling anti-vax organization “Children’s Health Defense” (CHD) sued Meta back in 2020 for the apparent crime of fact-checking and limiting the reach of the anti-vax nonsense it posted. Three years ago, the case was tossed out of court (easily), with the court pointing out that Meta is (*gasp*) a private entity with its own free speech rights that allow it to do all of this. The court needed to explain that the First Amendment applies to the government, and Meta is not the government.

Yes, Meta looked to the CDC for guidance on vaccine info, but that did not turn it into a state actor. It was a pretty clear and easy ruling smacking down CHD (represented, in part, by disgraced Yale law professor Jed Rubenfeld). So, of course RFK Jr. and CHD appealed.

Last week, the Ninth Circuit smacked them down again. And we learn that it’s going to go… very… slowly… to hopefully help RFK Jr. and Rubenfeld understand these things this time:

To begin by stating the obvious, Meta, the owner of Facebook, is a private corporation, not a government agency.

Yes, the majority opinion admits that there are some rare cases where private corporations can be turned into state actors, but this ain’t one of them.

CHD’s state-action theory fails at this threshold step. We begin our analysis by identifying the “specific conduct of which the plaintiff complains.” Wright, 48 F.4th at 1122 (quoting American Mfrs. Mut. Ins. Co. v. Sullivan, 526 U.S. 40, 51 (1999)). CHD challenges Meta’s “policy of censoring” posts conveying what it describes as “accurate information . . . challenging current government orthodoxy on . . . vaccine safety and efficacy.” But “the source of the alleged . . . harm,” Ohno, 723 F.3d at 994, is Meta’s own “policy of censoring,” not any provision of federal law. The closest CHD comes to alleging a federal “rule of conduct” is the CDC’s identification of “vaccine misinformation” and “vaccine hesitancy” as top priorities in 2019. But as we explain in more detail below, those statements fall far short of suggesting any actionable federal “rule” that Meta was required to follow. And CHD does not allege that any specific actions Meta took on its platforms were traceable to those generalized federal concerns about vaccine misinformation.

And, even if it could pass that first step, it would also fail at the second step of the test:

CHD’s failure to satisfy the first part of the test is fatal to its state action claim. See Lindke v. Freed, 601 U.S. 187, 198, 201 (2024); but see O’Handley, 62 F.4th at 1157 (noting that our cases “have not been entirely consistent on this point”). Even so, CHD also fails under the second part. As we have explained, the Supreme Court has identified four tests for when a private party “may fairly be said to be a state actor”: (1) the public function test, (2) the joint action test, (3) the state compulsion test, and (4) the nexus test. Lugar, 457 U.S. at 937, 939.

CHD invokes two of those theories of state action as well as a hybrid of the two. First, it argues that Meta and the federal government agreed to a joint course of action that deprived CHD of its constitutional rights. Second, it argues that Meta deprived it of its constitutional rights because government actors pressured Meta into doing so. Third, it argues that the “convergence” of “joint action” and “pressure,” as well as the “immunity” Meta enjoys under 47 U.S.C. § 230, make its allegations that the government used Meta to censor disfavored speech all the more plausible. CHD cannot prevail on any of these theories.

The majority opinion makes clear that CHD never grapples with the basic idea that the reason Meta might have taken action on CHD’s anti-vax nonsense was that it didn’t want kids to die because of anti-vax nonsense. Instead, it assumes without evidence that it must be the government censoring them.

But the facts that CHD alleges do not make that inference plausible in light of the obvious alternative—that the government hoped Meta would cooperate because it has a similar view about the safety and efficacy of vaccines.

Furthermore, the Court cites the recent Murthy decision at the Supreme Court (on a tangentially related issue) and highlights how Meta frequently pushed back on or disagreed with points raised by the government.

In any event, even if we were to consider the documents, they do not make it any more plausible that Meta has taken any specific action on the government’s say-so. To the contrary, they indicate that Meta and the government have regularly disagreed about what policies to implement and how to enforce them. See Murthy, 144 S. Ct. at 1987 (highlighting evidence “that White House officials had flagged content that did not violate company policy”). Even if Meta has removed or restricted some of the content of which the government disapproves, the evidence suggests that Meta “had independent incentives to moderate content and . . . exercised [its] own judgment” in so doing.

As for Meta offering a portal for some to submit reports, that doesn’t change the fact that it’s still judging those reports against its own policies rather than just obeying the government.

That the government submitted requests for removal of specific content through a “portal” Meta created to facilitate such communication does not give rise to a plausible inference of joint action. Exactly the same was true in O’Handley, where Twitter had created a “Partner Support Portal” through which the government flagged posts to which it objected. 62 F.4th at 1160. Meta was entitled to encourage such input from the government as long as “the company’s employees decided how to utilize this information based on their own reading of the flagged posts.” Id. It does not become an agent of the government just because it decides that the CDC sometimes has a point.

The majority opinion also addresses the question of whether or not Meta was coerced. It first notes that if coercion were the issue, then Meta itself probably wouldn’t be the right defendant; the government would be. But then it notes the near-total lack of evidence of coercion.

CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy. Instead, it cites statements by Members of Congress criticizing social media companies for allowing “misinformation” to spread on their platforms and urging them to combat such content because the government would hold them “accountable” if they did not. Like the “generalized federal concern[s]” in Mathis II, those statements do not establish coercion because they do not support the inference that the government pressured Meta into taking any specific action with respect to speech about vaccines. Mathis II, 75 F.3d at 502. Indeed, some of the statements on which CHD relies relate to alleged misinformation more generally, such as a statement from then-candidate Biden objecting to a Facebook ad that falsely claimed that he blackmailed Ukrainian officials. All CHD has pleaded is that Meta was aware of a generalized federal concern with misinformation on social media platforms and that Meta took steps to address that concern. See id. If Meta implemented its policy at least in part to stave off lawmakers’ efforts to regulate, it was allowed to do so without turning itself into an arm of the federal government.

To be honest, I’m not so sure of the last line there. If it’s true that Meta implemented policies because it wanted to avoid regulation, that strikes me as a potential First Amendment violation, but again, one that should be targeted at the government, not Meta.

The opinion also notes that angry letters from Rep. Adam Schiff and Senator Amy Klobuchar did not appear to cross the coercive line, despite being aggressive.

But in contrast to cases where courts have found coercion, the letters did not require Meta to take any particular action and did not threaten penalties for noncompliance….

Again, I think the opinion goes a bit too far here in suggesting that legislators mostly don’t have coercive power by themselves, giving them more leeway to send these kinds of letters.

Unlike “an executive official with unilateral power that could be wielded in an unfair way if the recipient did not acquiesce,” a single legislator lacks “unilateral regulatory authority.” Id. A letter from a legislator would therefore “more naturally be viewed as relying on her persuasive authority rather than on the coercive power of the government.”

I think that’s wrong. I’ve made the case that it’s bad when legislators threaten to punish companies for speech, and it’s frustrating that both Democrats and Republicans seem to do it regularly. Here, the 9th Circuit seems to bless that which is a bit frustrating and could lead to more attempts by legislators to suppress speech.

The Court then dismisses CHD’s absolutely laughable Section 230 state action theory. This was Rubenfeld’s baby. In January 2021, Rubenfeld co-authored one of the dumbest WSJ op-eds we’ve ever seen with a then mostly unknown “biotech exec” named Vivek Ramaswamy, arguing that Section 230 made social media companies into state actors. A few months later, Rubenfeld joined RFK Jr.’s legal team to push this theory in court.

It failed at the district court level and it fails here again on appeal.

The immunity from liability conferred by section 230 is undoubtedly a significant benefit to companies like Meta that operate social media platforms. It might even be the case that such platforms could not operate at their present scale without section 230. But many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government. If that were enough for state action, every large government contractor would be a state actor. But that is not the law.

The opinion notes that this crazy theory is based on a near-complete misunderstanding of case law and how Section 230 works. Indeed, the Court calls out this argument as “exceptionally odd.”

It would be exceptionally odd to say that the government, through section 230, has expressed any preference at all as to the removal of anti-vaccine speech, because the statute was enacted years before the government was concerned with speech related to vaccines, and the statute makes no reference to that kind of speech. Rather, as the text of section 230(c)(2)(A) makes clear—and as the title of the statute (i.e., the “Communications Decency Act”) confirms—a major concern of Congress was the ability of providers to restrict sexually explicit content, including forms of such content that enjoy constitutional protection. It is not difficult to find examples of Members of Congress expressing concern about sexually explicit but constitutionally protected content, and many providers, including Facebook, do in fact restrict it. See, e.g., 141 Cong. Rec. 22,045 (1995) (statement of Rep. Wyden) (“We are all against smut and pornography . . . .”); id. at 22,047 (statement of Rep. Goodlatte) (“Congress has a responsibility to help encourage the private sector to protect our children from being exposed to obscene and indecent material on the Internet.”); Shielding Children’s Retinas from Egregious Exposure on the Net (SCREEN) Act, S. 5259, 117th Cong. (2022); Adult Nudity and Sexual Activity, Meta, https://transparency.fb.com/policies/communitystandards/adult-nudity-sexual-activity [https://perma.cc/SJ63-LNEA] (“We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content.”). While platforms may or may not share Congress’s moral concerns, they have independent commercial reasons to suppress sexually explicit content. “Such alignment does not transform private conduct into state action.”

Indeed, it points to the ridiculous logical conclusion of this Rubenfeld/Ramaswamy argument:

If we were to accept CHD’s argument, it is difficult to see why would-be purveyors of pornography would not be able to assert a First Amendment challenge on the theory that, viewed in light of section 230, statements from lawmakers urging internet providers to restrict sexually explicit material have somehow made Meta a state actor when it excludes constitutionally protected pornography from Facebook. So far as we are aware, no court has ever accepted such a theory.

Furthermore, the Court makes clear that moderation decisions are up to the private companies, not the courts. And if people don’t like it, the answer is market forces and competition.

Our decision should not be taken as an endorsement of Meta’s policies about what content to restrict on Facebook. It is for the owners of social media platforms, not for us, to decide what, if any, limits should apply to speech on those platforms. That does not mean that such decisions are wholly unchecked, only that the necessary checks come from competition in the market—including, as we have seen, in the market for corporate control. If competition is thought to be inadequate, it may be a subject for antitrust litigation, or perhaps for appropriate legislation or regulation. But it is not up to the courts to supervise social media platforms through the blunt instrument of taking First Amendment doctrines developed for the government and applying them to private companies. Whether the result is “good or bad policy,” that limitation on the power of the courts is a “fundamental fact of our political order,” and it dictates our decision today

Even more ridiculous than the claims around content being taken down, CHD also claimed that the fact checks on its posts violated [checks notes… checks notes again] the Lanham Act, which is the law that covers things like trademark infringement and some forms of misleading advertising. The Court here basically does a “what the fuck are you guys talking about” to Kennedy and Rubenfeld.

By that definition, Meta did not engage in “commercial speech”—and, thus, was not acting “in commercial advertising or promotion”—when it labeled some of CHD’s posts false or directed users to fact-checking websites. Meta’s commentary on CHD’s posts did not represent an effort to advertise or promote anything, and it did not propose any commercial transaction, even indirectly.

And just to make it even dumber, CHD also had a RICO claim. Because, of course they did. Yet again, we will point you to Ken White’s “It’s not RICO dammit” lawsplainer, but the Court here does its own version:

The causal chain that CHD proposes is, to put it mildly, indirect. CHD contends that Meta deceived Facebook users who visited CHD’s page by mislabeling its posts as false. The labels that Meta placed on CHD’s posts included links to fact-checkers’ websites. If a user followed a link, the factchecker’s website would display an explanation of the alleged falsity in CHD’s post. On the side of the page, the fact-checker had a donation button for the organization. Meanwhile, Meta had disabled the donation button on CHD’s Facebook page. If a user decided to donate to the fact-checking organization, CHD maintains, that money would come out of CHD’s pocket, because CHD and factcheckers allegedly compete for donations in the field of health information.

The alleged fraud—Meta’s mislabeling of CHD’s posts—is several steps removed from the conduct directly responsible for CHD’s asserted injury: users’ depriving CHD of their donation dollars. At a minimum, the sequence relies on users’ independent propensities to intend to donate to CHD, click the link to a fact-checker’s site, and be moved to reallocate funds to that organization. This causal chain is far too attenuated to establish the direct relationship that RICO requires. Proximate cause “is meant to prevent these types of intricate, uncertain inquiries from overrunning RICO litigation.” Anza, 547 U.S. at 460.

CHD’s theory also strains credulity. It is not plausible that someone contemplating donating to CHD would look at CHD’s Facebook page, see the warning label placed there, and decide instead to donate to . . . a fact-checking organization. See Twombly, 550 U.S. at 555. The district court noted that CHD did not allege that any visitors to its page had in fact donated to other organizations because of Meta’s fraudulent scheme. CHD is correct that an actual transfer of money or property is not an element of wire fraud, as “[t]he wire fraud statute punishes the scheme, not its success.” Pasquantino v. United States, 544 U.S. 349, 371 (2005) (alteration in original) (quoting United States v. Pierce, 224 F.3d 158, 166 (2d Cir. 2000)). But the fact that no donations were diverted provides at least some reason to think that no one would have expected or intended the diversion of donations.

I love how the judge includes the… incredulous pause ellipses in that last highlighted section.

Oh, and I have to go back to one point. CHD had originally offered an even dumber RICO theory, which it dropped, but the Court still mentions it:

In the complaint, CHD described a scheme whereby Meta placed warning labels on CHD’s posts with the intent to “clear the field” of CHD’s alternative point of view, thus keeping vaccine manufacturers in business so that they would buy ads on Facebook and ensure that Zuckerberg obtained a return on his investments in vaccine technology.

That’s brain worm logic speaking, folks.

There is a partial dissent from (Trump-appointed, natch) Judge Daniel Collins, who says that maybe, if you squint, there is a legitimate First Amendment claim. In part, this is because Collins thinks CHD should be able to submit additional material that wasn’t heard by the district court, which is not how any of this tends to work. You have to present everything at the lower court. The appeals court isn’t supposed to consider any new material beyond, say, new court rulings that might impact this ruling.

Collins then also seems to buy into Rubenfeld’s nutty 230-makes-you-a-state-actor argument. He goes on for a while giving the history of Section 230 (including a footnote pointing out, correctly but pedantically, that it’s Section 230 of the Communications Act of 1934, not of the Communications Decency Act as most people call it — a point that only certain law professors talk about). The history is mostly accurate, highlighting the Stratton Oakmont decision and how it would be impossible to run an internet service if that ruling had stood.

But then… it takes some leaps. Giant leaps, with massive factual errors of the kind that are embarrassing for a judge to make:

The truly gigantic scale of Meta’s platforms, and the enormous power that Meta thereby exercises over the speech of others, are thus direct consequences of, and critically dependent upon, the distinctive immunity reflected in § 230. That is, because such massive third-party-speech platforms could not operate on such a scale in the absence of something like § 230, the very ability of Meta to exercise such unrestrained power to censor the speech of so many tens of millions of other people exists only by virtue of the legislative grace reflected in § 230’s broad immunity. Moreover, as the above discussion makes clear, it was Congress’s declared purpose, in conferring such immunity, to allow platform operators to exercise this sort of wide discretion about what speech to allow and what to remove. In this respect, the immunity granted by § 230 differs critically from other government-enabled benefits, such as the limited liability associated with the corporate form. The generic benefits of incorporation are available to all for nearly every kind of substantive endeavor, and the limitation of liability associated with incorporation thus constitutes a form of generally applicable non-speech regulation. In sharp contrast, both in its purpose and in its effect, § 230’s immunity is entirely a speech-related benefit—it is, by its very design, an immunity created precisely to give its beneficiaries the practical ability to censor the speech of large numbers of other persons. Against this backdrop, whenever Meta selectively censors the speech of third parties on its massive platforms, it is quite literally exercising a government-conferred special power over the speech of millions of others. The same simply cannot be said of newspapers making decisions about what stories to run or bookstores choosing what books to carry

I do not suggest that there is anything inappropriate in Meta’s having taken advantage of § 230’s immunity in building its mega-platforms. On the contrary, the fact that it and other companies have built such widely accessible platforms has created unprecedented practical opportunities for ordinary individuals to share their ideas with the world at large. That is, in a sense, exactly what § 230 aimed to accomplish, and in that particular respect the statute has been a success. But it is important to keep in mind that the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled.

Those highlighted sections are simply incorrect. Meta is constitutionally entitled to moderate, thanks to the First Amendment. What Section 230 adds is procedural: companies need not fight an expensive, drawn-out First Amendment battle, because Section 230’s immunity ends cases much faster. But those cases end the same way they would have ended otherwise, thanks to the First Amendment.

So, basically, the key point Judge Collins rests his dissent on is fundamentally incorrect. And it’s odd that he ignores the recent Moody ruling and even last year’s Taamneh ruling, both of which basically explain why this is wrong.

Collins also seems to fall for the false idea that Section 230 requires a site to be either a platform or a publisher.

Rather, because its ability to operate its massive platform rests dispositively on the immunity granted as a matter of legislative grace in § 230, Meta is a bit of a novel legal chimera: it has the immunity of a conduit with respect to third-party speech, based precisely on the overriding legal premise that it is not a publisher; its platforms’ massive scale and general availability to the public further make Meta resemble a conduit more than any sort of publisher; but Meta has, as a practical matter, a statutory freedom to suppress or delete any third-party speech while remaining liable only for its own affirmative speech

But that’s literally wrong as well. Despite how it’s covered by many politicians and the media, Section 230 does not say that a website is not a publisher. It says that it shall not be treated as a publisher for third-party content even though it is engaging in publishing activities.

Incredibly, Collins even cites (earlier in his dissent) the 9th Circuit’s Barnes ruling, which lays this out. In Barnes, the 9th Circuit is quite clear that Section 230 protects Yahoo from being held liable for third-party content explicitly because it is doing everything a publisher would do. Section 230 just removes liability for third-party content so that a website is not treated as a publisher, even when it is acting as a publisher.

In that case, the Court laid out all the reasons why Yahoo was acting as a publisher. It called what Yahoo engaged in “action that is quintessentially that of a publisher.” Then, it noted Yahoo couldn’t be held liable for those activities thanks to Section 230. (Eventually Yahoo did lose that case, but under a different legal theory related to promissory estoppel, which is another issue.)

Collins even cites this very language from the Barnes decision:

As we stated in Barnes, “removing content is something publishers do, and to impose liability on the basis of such conduct necessarily involves treating the liable party as a publisher of the content it failed to remove.” Id. “Subsection (c)(1), by itself, shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties.”

So it’s bizarre that pages later, Collins falsely claims that Section 230 means that Meta is claiming not to be a publisher. As the Barnes case makes clear, Section 230 says you don’t treat the publisher of third-party content as a publisher of first-party content. But they’re both publishers of a sort. And Collins seemed to acknowledge this 20 pages earlier… and then… forgot?

Thankfully, Collin’s dissent is only a dissent and not the majority.

Still, as we noted back in May, RFK Jr. and Rubenfeld teamed up a second time to sue Meta yet again, once again claiming that Meta moderating him is a First Amendment violation. That’s a wholly different lawsuit, with the major difference being… that because RFK Jr. is a presidential candidate (lol), it’s somehow now a First Amendment violation for Meta to moderate his nonsense.

So the Ninth Circuit should have yet another chance to explain the First Amendment to them both yet again.

Filed Under: 1st amendment, 9th circuit, content moderation, daniel collins, jed rubenfeld, lanham act, rfk jr., rico, section 230, state actor
Companies: children's health defense, meta

9th Circuit: No Immunity For Officers Who Answered Distress Call By Killing Distressed Person

from the first,-do-all-harm dept

Here’s yet more anecdotal evidence demonstrating why we’re be better off routing mental health calls to mental health professionals, rather than to people who tend to respond to things they can’t immediately control with violence. The good news is more cities are experimenting with multiple options for 911 response. The better news is that those experiments have been successful.

The bad news is everything else. Most cities aren’t willing to do this. And because they’re unwilling to explore their options, more people suffering mental health crises are going to end up dead. That’s what happened to Roy Scott, a Las Vegas resident who was “helped” to death by Las Vegas police officers Kyle Smith and Theodore Huntsman.

Here’s another story that’s all too familiar here in the United States, as recounted at the opening of the Ninth Circuit Appeals Court decision [PDF]:

Early in the morning on March 3, 2019, Roy Scott called the police for help. But he did not get it. Las Vegas Metropolitan Police Department Officers Kyle Smith and Theodore Huntsman came to the scene. Scott was unarmed and in mental distress. Though he complied with the officers’ orders and was not suspected of a crime, Smith and Huntsman initiated physical contact, forced Scott to the ground, and used bodyweight force to restrain him. Shortly after, Scott lost consciousness and he was later pronounced dead.

The one-two punch of “called for help”/”but he did not get it” makes it clear the officers’ response to the situation was objectively terrible, at least in the Appeals Court’s eyes. The phrase “initiated physical contact” gives a hint of what’s to follow in the narrative: an unwarranted deployment of force against an unarmed person who was already experiencing distress long before these officers decided to end his life.

The district court nailed it on the first pass, denying qualified immunity to both officers. The officers appealed, but were greeted with more of the same at the next judicial level.

The first two paragraphs recounting the violent incident in greater detail contain some pretty chilling facts. First, the evidence shows both officers clearly understood they were dealing with someone in mental distress, rather than some sort of dangerous criminal.

Scott was distressed and hallucinating when Officers Smith and Huntsman arrived at his apartment. After Smith and Huntsman knocked and identified themselves, Scott yelled to the officers to “break the door down” claiming that there were people inside his house. The officers did not break the door in because they did not hear anyone inside the apartment. Instead, they continued to knock and order Scott to come to the door. About two minutes after first knocking on the door, Smith told Huntsman, “this is a 421A for sure,” using the department code to indicate he believed Scott was mentally ill. Huntsman then called through the door: “Sir, have you been diagnosed with any mental diseases?” After Scott did not come to the door, Smith asked dispatch to call Scott back to ask him to come to the door, noting again that Scott appeared to be mentally ill. Smith then said to Huntsman: “I ain’t going in there. That’s too sketchy.” Huntsman agreed, “That dude’s wacky.” Peering into Scott’s window, Huntsman asked Smith if he could see the “crazed look in [Scott’s] eye.” They could not see anyone else in Scott’s apartment.

While it’s obviously possible for someone to both be in mental distress and pose a safety threat to others, the first fact that matters is that both officers affirmed (in their own body cam recordings) that they believed they were dealing with a mental health issues, rather than actual criminal activity.

The next paragraph contains a pretty damning fact — one that is a leading indicator that police violence, misconduct, or rights violations will be the most likely outcome of any encounter.

When Scott did not open the door, Smith called their sergeant, turning off his body worn camera. On Huntsman’s camera, Smith can be heard telling their sergeant that Scott sounds mentally ill. After ending the call, Smith told Huntsman that their sergeant said that “at the end of the day we can’t do anything if we don’t hear any reason to have an exigent circumstance.” Smith also explained that their Sergeant suggested they try again to get Scott to come to the door.

Never a good sign. Fortunately for Scott’s survivors, the other officer continued recording and captured the rest of Roy Scott’s killing. Scott finally answered the door carrying a metal pipe — one that he immediately dropped when the officers asked him to. They asked if he had any other weapons. Scott handed them a knife he had in his pocket — handle-first — and said “I am sorry.” The officers pushed him up against a wall, shining a flashlight in his face. Scott asked to be put in the cop car, telling officers he had schizophrenia and that the light was bothering him. This request was ignored. The officers told Scott, “We are out here to help you.”

They didn’t.

At first, the officers held Scott’s arms at his sides while he was lying on his back. In this position, Scott screamed, struggled, and pled with the officers to leave him alone for over two minutes. The officers then eventually rolled Scott onto his stomach, repeatedly ordering Scott to “stop.” With Scott on his stomach and with his hands restrained behind his back, Huntsman put his bodyweight on Scott’s back and neck for about one to two minutes. At the same time Smith put his weight on Scott’s legs, restraining his lower body. Scott’s pleas turned increasingly incoherent and breathless as Huntsman applied his bodyweight. After handcuffing him, the officers attempted to roll Scott on his side, as he continued to incoherently cry out that he wanted to be left alone. When they rolled Scott over, his face was bloody from contact with the ground. Scott stopped yelling and thrashing around after a few minutes. Scott did not respond when Smith and Huntsman tried to wake or revive him. Shortly after, when the paramedics arrived, Scott was still unresponsive. Scott was pronounced dead after paramedics removed him from the scene. Plaintiffs’ expert found that Scott had died from restraint asphyxia.

From there, the fact-finding is simple, especially since the incident was recorded. While the officers presented their one-sided argument for qualified immunity, the appeals court shuts this attempt down. First of all, at this stage of the litigation, the facts are construed in favor of the non-moving party. Second, the body cam footage takes care of most of the questions of fact, and what’s left to be decided should be decided in front of a jury.

The officers’ attempt to portray Scott as a threat falls flattest, in terms of appellate arguments. The officers claimed Scott was a threat because he was carrying two weapons — a metal pipe and a knife. The court reminds the officers that one had been dropped and the other voluntarily handed to officers well before the officers decided to take Scott to the ground and restrain him to death.

The law was clearly established when the officers ended Scott’s life. And the precedent is almost directly on point.

The similarities between this case and Drummond are striking. Scott was not suspected of a crime. Instead, he was taken into custody because of his mental health. Though they were presented with an individual experiencing a mental health crisis and presenting no obvious danger to others, Smith and Huntsman crushed Scott’s back and neck to subdue him while handcuffing him. Scott also cried out with increasing distress and incoherence as the officers’ force escalated. Reasonable officers would have known that their force was not reasonable and that it created a serious risk of asphyxiating Scott.

When the law is clearly established and any facts that might help the officers push their version of the events are still in dispute (not including those caught on camera, which are indisputable), qualified immunity is not an option. This will return to the lower court to be argued in front of a jury, assuming the city of Las Vegas doesn’t decide to settle first. No matter how this ends up being resolved, the city and the PD would be wise to look into alternative response options for mental health calls. It’s pretty clear police officers can’t — or won’t — handle these calls responsibly.

Filed Under: 4th amendment, 9th circuit, excessive force, kyle smith, las vegas pd, police killing, police violence, qualified immunity, roy scott, theodore huntsman

Immunity Denied To Cop Whose Shooting Narrative Was Undercut By Other Officers On The Scene

from the get-on-the-same-page,-officers dept

The Ninth Circuit Appeals Court is one of the better circuits when it comes to holding the government accountable for the violence it inflicts on citizens. It’s pretty much the polar opposite of the Fifth Circuit, which can’t seem to forgive cops quickly enough.

This lawsuit, springing from the shooting and killing of Francis Calonge by San Jose police officer Edward Carboni, will be headed back down to be placed in front of a jury, if the city doesn’t decide to settle first. It might, because this one doesn’t look good.

That’s not to say the first response by San Jose PD officers was out of line. Dispatch had received two calls reporting a man walking down the street with a gun in his waistband. One caller said the man was headed towards a nearby school.

Several officers — including ones supposedly trained to respond to school shootings — arrived on the scene. A couple of minutes later, Francis Calonge was dead, shot in the back by Officer Carboni.

True, Calonge was carrying a gun. That it was only a BB gun makes little difference because officers were never close enough to determine whether it was just a BB gun or something far more deadly. The bigger problem was the immediate response by the several officers who arrived to handle the call. With no one taking the lead, everyone just shouted whatever they could think of in the direction of Calonge. From the Ninth Circuit decision [PDF]:

Officer Carboni began shouting commands to Calonge, including “let me see your hands” and “drop it.” A different officer shouted for Calonge to “drop the gun.” A third officer shouted, “do not reach for it.” That may have been Officer Yciano, who later testified that he instructed Calonge, “don’t reach for the gun.” A police report states that Officer Yciano also told Calonge to “get on the ground.”

This is just escalation and stupidity. The yelling creates the “danger” needed for a deadly response, captured here because one officer (out of the several) actually took steps to record the interaction by activating his body camera (that would be Officer Carboni). There is no possible way to comply with all of these commands, which means the officer whose command is “ignored” might feel justified in using deadly force.

Fortunately, we don’t have to depend on the dispassionate and shaky footage captured by Carboni’s body cam. The other officers testified as well, with Officer Yciano admitting he told Calonge to (1) “don’t reach for the gun” and (2) “drop the gun.” Calonge would have needed to be two people to comply. The gun was tucked into his waistband. To drop the gun was to reach for it. Perhaps (understandably) confounded by this volley of conflicting orders, Calonge chose to turn around and walk in the other direction, which (dangerously… for him) took him in the direction of the school he previously had been walking away from.

Carboni, the camera operator and late-arriving interloper, engaged in the ultimate act of escalation.

Officers Carboni, McKenzie, and Yciano followed him on foot at a distance of ten to thirty yards, walking along the road’s median, while Officer Pedreira followed in a police car. According to the officers, Calonge looked over his shoulder a few times and smiled. He continued walking, but he did not speed up.

Officer Carboni started to say something to the other police officers. He began, “I’m gonna—hey . . .” before trailing off. He then shouted for Calonge to “drop it.” A few seconds later, he said to the other officers, “Hey, watch out, I’m gonna shoot him. Watch out, watch out. Get out of the way.” That statement took three seconds. Officer Carboni spent three more seconds steadying his rifle against a tree. He then shot Calonge once in the back. The bullet struck Calonge’s heart, killing him. At no point had Officer Carboni warned Calonge that he was going to shoot. Just over one minute had elapsed between when Officer Carboni exited his police car and when he fired his gun.

Officer Carboni claimed he saw Calonge’s arm “bow out,” which he claimed indicated Calonge was reaching for the gun. At this point in his testimony, he appears to have forgotten the last order he shouted at Calonge was for him to “drop it.” To do so, Calonge would have needed to reach for it first. Carboni also claimed Calonge was headed towards a group of students located 10 yards away and that he feared Calonge would “take them hostage.”

That might add up to a good, clean kill. But it doesn’t here. And the court notes this with a very dry statement that seems to indicate it knows officers generally try to agree on a narrative before testifying in court.

This case is unusual in that other officers on the scene contradict key facts asserted by the officer who used deadly force.

That is unusual. Cops generally help cops during litigation. Here, Carboni was either hung out to dry (possibly because the other officers, who apparently never drew their guns, much less shot at Calonge, felt he went too far too fast) or hung himself out to dry by producing a recording of the incident, which other officers might have felt uncomfortable trying to contradict while on the stand.

And Carboni’s statements in his defense were almost universally contradicted by statements from other officers or his own body camera footage.

As to whether Calonge moved his arm, although Officer McKenzie later stated that he saw Calonge’s arm move away from his body, Officer Pedreira stated that he did not see Calonge do anything that suggested he was pulling his gun out of his waistband during the minute before he was shot. And Officer Yciano stated that he saw Calonge only “turn[] at an angle . . . as if he was trying to hide” the gun from the officers.

As to whether there were students nearby, Officer Pedreira stated that he did not see anyone on the corner of the intersection toward which Calonge was headed. The footage from the body-worn cameras, including Officer Carboni’s camera, does not show any bystanders near Calonge or further down the sidewalk toward the intersection.

It’s an easy call for the Ninth, despite Calonge walking around with a BB gun other city residents had mistaken for an actual handgun. The conflicting testimony (which also conflicts with the recording) resolves in favor of the plaintiff at this point. As for claims Calonge had ignored orders to get down on the ground, the court says no officer testifies as to giving that order and none of the footage collected shows an officer giving this order.

Cops claim to be experts on the law (except when it’s in their best interest to play stupid), but it doesn’t work here. Officer Carboni wants the court to believe case law supports him, either by justifying his shooting or by awarding him qualified immunity. The Ninth Circuit says he’s not as smart as he thinks. At this stage, disputed facts go in front of a jury. And even Carboni’s fellow officers dispute his version of things, to say nothing of Carboni’s own body cam footage.

Back down it goes to the lower court. If the city sees anything worth salvaging here, it has better eyes than I do. This looks like the failure of one officer to control himself. And because he couldn’t, he stands a pretty good chance of losing this if it goes in front of a jury. The city knows what it can shell out to make this go away. But there’s no telling what a jury might award if given the chance. And for at least one family that’s been the victim of police violence, there might be some justice in the near future.

Filed Under: 9th circuit, excessive force, police shooting, police violence, qualified immunity

Ninth Circuit Dumps Lawsuit Against YouTube Brought By Anti-Vaxxer Whose Account Was Terminated

from the YouTube-isn't-obligated-to-hold-your-microphone,-dumbass dept

These lawsuits don’t work. They just don’t. And yet, they’re filed seemingly all the time.

When YouTube decides as a private company it would rather you take your stupid shit elsewhere, it’s allowed to do so. Its terms and conditions contain a phrase found pretty much everywhere: “or for any other reason.” That means that even if you — say, anti-vax loudmouth [squints skeptically at court filings] Dr. Joseph Mercola — can’t find any explicit language in the terms and conditions that applies to your content, YouTube can still kick your account to the curb.

Then there’s Section 230 of the CDA, which immunizes service providers against lawsuits like these that try to claim there’s something legally wrong with the way services moderate content. This one never reaches that point in the legal discussion, but if it had, Section 230 would have ended this lawsuit at this point as well.

But, as this short opinion [PDF] from the Ninth Circuit Appeals Court points out, YouTube had actually enumerated a rationale for this moderation decision. Just because Mercola didn’t agree with it doesn’t mean he has a legitimate cause of action. (h/t Eric Goldman)

The district court had it right when it made the first call, as recounted by the Ninth Circuit en route to its affirmation:

Mercola alleges that the Agreement’s Modification Clause required that YouTube provide it “reasonable advance notice” before YouTube terminated its account for allegedly violating YouTube’s Community Guidelines. The district court held that the Modification Clause did not override other provisions in the Agreement that allow YouTube to immediately take down content considered harmful to its users and that the Agreement did not give Mercola any right of access to the contents of a terminated account. It also found that the Agreement’s Limitations on Liability provision foreclosed relief.

Mercola thought the “reasonable advance notice” would net him a win in court since he apparently hadn’t received this “advance notice.” But, as the Ninth Circuit points out, other clauses in the same contract allowed YouTube to do what it did without doing anything remotely legally actionable.

However, the Agreement’s “Removal of Content” section states that if YouTube “reasonably believe[s]” that any content “may cause harm to YouTube, our users, or third parties,” it “may remove or take down that Content in our discretion,” and “will notify you with the reason for our action” unless doing so would breach the law, compromise an investigation, or cause harm. Also, the Agreement’s “Termination and Suspensions by YouTube for Cause” section states that YouTube may “suspend or terminate” an account if “you materially or repeatedly breach this Agreement” or if “we believe there has been conduct that creates (or could create) liability or harm to any user, other third party, YouTube or our Affiliates.”

In YouTube’s moderation opinion, spreading anti-vax horseshit might “cause harm” to other users, in which case it was free to remove the content (and the account spreading it) without notice. If the court was to buy Mercola’s argument about the terms and conditions, YouTube would be prevented from protecting other users until it had “notified” the user creating the perceived harm first, which means the platform would be doing more to protect harmful users than protect other users from harm. That would be even more problematic, which is why the language used in the terms and conditions is either/or, rather than something more restrictive.

The Ninth Circuit points this out specifically, if somewhat problematically:

[T]o construe the Modification Clause to prohibit the immediate termination of an account that causes harm to others would be contrary to protecting the public. In September 2021, when YouTube terminated Mercola’s account, it was reasonable (even if incorrect) to consider “anti vaccine” postings to be harmful to the public.

That’s kind of weird. Hopefully, that’s not a Ninth Circuit judge going off script to express a personal opinion about vaccinations. I mean, this isn’t the Supreme Court. This is one of the more-respected lower courts.

As Eric Goldman points out in his post on the lawsuit, this unfortunate wording may be just that: unfortunate.

This is a non-precedential memo opinion, so I’m going to assume that the reference to “incorrect” was sloppily phrased and instead intended as a hypothetical (i.e., even if YouTube was incorrect)–and not as a declaration that it’s official Ninth Circuit policy that anti-vax postings do not harm the public (of course they do).

And that ends this idiotic lawsuit. The dismissal is affirmed and the Ninth Circuit refuses to grant Mercola a chance to amend the lawsuit to pursue a heretofore unexamined legal theory. That means Mercola is completely out of luck: the lower court has already ruled his lawsuit is “barred as a matter of law,” so no amount of rewriting can save it.

This should never have been filed. Mercola is still free to spread misinformation about vaccines. He’ll just have to do it elsewhere. YouTube isn’t contractually obligated to provide him a platform for his stupidity.

Filed Under: 9th circuit, content moderation, free speech, joseph mercola, lawsuit, section 230
Companies: alphabet, youtube

ACLU Asks 9th Circuit Not To Treat Abandoned Phones Like Any Other Abandoned Property

from the not-exactly-a-backpack dept

This is an interesting case with some very serious implications.

For the most part, anything discarded by a suspect fleeing from law enforcement officers can be searched or seized without a warrant. For years, this wasn’t necessarily a problem. The stuff discarded ranged from bags containing “substances” to wallets to the occasional backpack. The intrusion was limited and, in most cases, the evidentiary value (drug baggies, recently fired guns, etc.) was self-evident.

But in this case — one in which the ACLU has filed an amicus brief — the expectation of privacy is a bit more important. Previously, the Supreme Court (in its Riley decision) rejected plenty of government arguments when it ruled that phones seized during arrests required a warrant to be searched. One of the many arguments rejected was this: that searching a phone was like searching a suspect’s pockets or the trunk of their car or the luggage they carried onto a plane.

The court rejected these arguments, equating the search of a now-prevalent cell phone with the search of a house. In fact, searching a phone could be more intrusive than searching someone’s house, because someone’s house rarely contains thousands of photos, many thousands of conversations, and access to every other part of someone’s life they’ve chosen to connect to the internet.

In this case, the government is arguing a search of an “abandoned” phone should not require a warrant. It has chosen to treat cell phones — which contain people’s entire private and public lives — as something containing little more than your average wallet or backpack.

Imagine this: You lost your phone, or had it stolen. Would you be comfortable with a police officer who picked it up rummaging through the phone’s contents without any authorization or oversight, thinking you had abandoned it? We’ll hazard a guess: hell no, and for good reason.

Our cell phones and similar digital devices open a window into our entire lives, from messages we send in confidence to friends and family, to intimate photographs, to financial records, to comprehensive information about our movements, habits, and beliefs. Some of this information is intensely private in its own right; in combination, it can disclose virtually everything about a modern cell phone user.

If it seems like common sense that law enforcement shouldn’t have unfettered access to this information whenever it finds a phone left unattended, you’ll be troubled by an argument that government lawyers are advancing in a pending case before the Ninth Circuit Court of Appeals, United States v. Hunt. In Hunt, the government claims it does not need a warrant to search a phone that it deems to have been abandoned by its owner because, in ditching the phone, the owner loses any reasonable expectation of privacy in all its contents.

The government will (somewhat logically) argue that anyone “abandoning” property has lost any expectation of privacy. After all, any passerby could pick up the phone and attempt to recover its contents.

But there’s a big difference between what a passerby can obtain and what cops (with forensic tools) can acquire. And there’s an even bigger difference between what passersby can do with this information and what the government can do with it. Someone with access to the contents of the found phone can only exploit that information to engage in criminal acts. A cop, however, can just roam around looking at everything until they find something they can charge the phone’s former owner with. That’s a big difference. Identify fraud sucks but it’s nothing compared to being hit with criminal charges.

Unfortunately, the district court considered all “abandoned” property to be equal. So, as the ACLU proposes at the opening of its post, a ruling in favor of the government would remove any restraints currently curtailing government exploitation of found devices. While one would hope any phone found by cops would be used only to locate its owner, a decision that treats phones as little more than the equivalent of garbage bags set out by the curb (which are similarly considered abandoned) would invite a whole lot of opportunistic fishing expeditions by law enforcement officers with free time and access to forensic search devices.

I won’t speculate on the amount of free time officers have, but it’s common knowledge most law enforcement agencies either own forensic search tools or have access to these tools via nearby agencies.

So, while this initially appears to be a discussion about suspects abandoning “evidence” while being pursued by cops, the implications are far bigger than your average neighborhood drug dealer tossing baggies into a bush while climbing over a fence.

That’s why the ACLU is involved. And that’s why the Ninth Circuit should consider its brief [PDF] carefully. But it’s not clear how this can be squared with established law. It would take another level of precedent with a very narrow finding. The problem with that is the nation’s top court, that may review any decision the Ninth Circuit makes, doesn’t appear to be all that interested in establishing new precedent unless it aligns with the ax-grinding proclivities of a handful of justices.

Which leads us to this unbearable realization: there are a bunch of cases out there too important to be [cough] entrusted to this particular version of the Supreme Court. All we can do at the moment is cross our fingers and hope….

Correction: An earlier version of this article said this case was at the Supreme Court, when it is currently at the Ninth Circuit. We have edited the article accordingly and regret the error.

Filed Under: 4th amendment, 9th circuit, cell phone searches, privacy, riley, supreme court, warrantless searches
Companies: aclu