blame – Techdirt

from the beyond-clickbait-to-just-utter-nonsense dept

When tragedy strikes, it’s human nature to search for answers. But when you’re Fox News, it seems that the answer is always the same: blame video games, social media, or anything else that fits your preconceived notions, facts be damned.

In the aftermath of the assassination attempt on Donald Trump, I idly wondered how long it would take before someone tried to blame the shooting on social media, smartphones, or video games.

It took just slightly longer than I expected. On Thursday, the ever-reliable Fox News blasted this headline:

Trump shooter used gaming site that features presidential assassination game

And, for good measure, they mention that he “also had a Discord account.”

Incredibly, it took two whole reporters to come up with this story.

And let’s be clear, almost the entire story is false, and the parts that aren’t false are stupid. This is some of the worst reporting I’ve seen in a while.

The “gaming site” in question was Steam, which, as anyone should know, hosts a ton of games. Listing the “Presidential assassination game” in the headline is basically an admission of just how dishonest Fox News is, because five paragraphs into the article, they admit:

“there is no evidence Crooks ever played it.”

So what’s it doing in the fucking headline, Fox?

Also, it’s even worse than that, because other reporters who actually understand what “reporting” means did the research and found that the Steam account some people claimed belonged to Crooks was fake. CBS reporters had that story:

A new analysis shows an online account that was believed to belong to the shooter in the assassination attempt on former President Donald Trump — and where he had purportedly called the date of the attack his “premiere” — was fake, a federal law enforcement official told CBS News on Thursday.

A law enforcement official and an additional source familiar with a briefing given to U.S. lawmakers on Wednesday previously told CBS News that the gunman, Thomas Matthew Crooks, had an account on an online gaming platform on which he posted: “July 13 will be my premiere.” But the federal law enforcement official says further investigation determined it was a fake account.

I saw some other reporting suggesting that the account on Steam was one where someone changed their username to pretend to be Crooks.

Watching the Fox News crowd get so desperate for anything to blame the shooting on is pretty pathetic. But I have no doubt that someone will bring this up in the future as if it were factual, and that we’ll see the story morph to claim not just that he was on Steam, but that he played this supposed “presidential assassination” game. Because facts no longer matter.

Filed Under: blame, blame video games, donald trump, steam, thomas crooks, video games
Companies: fox, fox news

Families Of Uvalde Shooting Victims File Silly Lawsuit Against Meta & Activision, Which Had Nothing To Do With The Shootings

from the exploiting-the-families-here-is-disgusting dept

You have to feel tremendous sympathy for the families of the victims in the school shooting in Uvalde, Texas. As has been well documented, there was a series of cascading failures by law enforcement that made that situation way worse and way more devastating than it should have been.

So who should be blamed? Apparently, Meta and Activision!

Yes, the families also went after the city of Uvalde and recently came to a settlement. That seems like a perfectly righteous lawsuit. However, this new one just seems like utter nonsense and is embarrassing.

The families are suing Meta and Activision for the shooting. It’s making a mockery of the very tragic and traumatic experience they went through for no reason other than to puff up the ego of a lawyer.

It’s reminiscent of moral panics around video games, Dungeons & Dragons, and comic books.

For what it’s worth, they’re also suing Daniel Defense, the company that made the assault rifle used by the shooter in Uvalde. That’s not my area of expertise, so I won’t dig deep into that part of the lawsuit, but I can pretty much guarantee it has no chance either.

This lawsuit is performative nonsense for the lawyer representing the families. I’m not going to question the families for going along with this, but the lawyer is doing this to make a name for himself, and we won’t oblige by naming him here. He’s taking the families for a ride. This is a ridiculous lawsuit, and the lawyer should be ashamed of giving the families false hope by bringing such a nuisance suit for his own ego and fame.

The lawsuit is 115 pages, and I’m not going through the whole thing. It has 19 different claims, though nearly all of them are variations on the “negligence” concept. Despite this being on behalf of families in Uvalde, Texas, it was filed in the Superior Court of Los Angeles. This is almost certainly because this silly “negligence” theory has actually been given life in this very court by some very silly judges.

I still think those other cases will eventually fail, but because judges in the Superior Court in LA seem willing to entertain the idea that any random connection you can find to a harm must go through a full, lengthy litigation process, we’re going to see a lot more bullshit lawsuits like this, as lawyers keep bringing them, hoping for a settlement fee to make them go away, or maybe an even dumber judge who actually finds this ridiculous legal theory to be legitimate.

Section 230 was designed to get these frivolous cases tossed out, but the success of the “negligence” theory means that we’re getting a glimpse of how stupid the world looks without Section 230. Just claim “negligence” because someone who did something bad uses Instagram or plays Call of Duty, and you get to drag out the case. But, really, this is obviously frivolous nonsense:

To put a finer point on it: Defendants are chewing up alienated teenage boys and spitting out mass shooters. Before the Uvalde school shooter, there was the Parkland school shooter, and before him, the Sandy Hook school shooter. These were the three most deadly K-12 school shootings in American history. In each one, the shooter was between the ages of 18 and 21 years old; in each one, the shooter was a devoted player of Call of Duty; and in each one, the shooter committed their attack in tactical gear, wielding an assault rifle.

Multiple studies have debunked this nonsense. Millions of people play Call of Duty. The fact that a few teenage boys later shoot up schools cannot, in any way, be pinned to Call of Duty.

And why Meta? Well, that’s even dumber:

Simultaneously, on Instagram, the Shooter was being courted through explicit, aggressive marketing. In addition to hundreds of images depicting and glorifying the thrill of combat, Daniel Defense used Instagram to extol the illegal, murderous use of its weapons.

In one image of soldiers on patrol, with no animal in sight, the caption reads: “Hunters Hunt.” Another advertisement shows a Daniel Defense rifle equipped with a holographic battle sight—the same brand used by the Shooter—and dubs the configuration “totally murdered out.” Yet another depicts the view through a rifle’s scope, looking down from a rooftop; the setting looks like an urban American street and the windshield of a parked car is in the crosshairs.

That’s it.

Literally. They’re suing Meta because the shooter saw some perfectly legal images on Instagram from a gun manufacturer. And somehow that should make Meta liable for the shooting? How the fuck is Meta supposed to prevent that? This is a nonsense connection cooked up by an ambulance-chasing plaintiff’s lawyer who should be embarrassed for dragging the families of the victims through such a charade.

This is nothing but another Steve Dallas lawsuit, as we’ve dubbed such ridiculous lawsuits. This is based on the classic Bloom County comic strip, where Steve Dallas explains that the “American Way” is not to sue those actually responsible, but some tangentially related company with deep pockets.

It’s been nearly 40 years since that strip was published, and we still see those kinds of lawsuits, now with increasing frequency thanks to very silly judges in very silly courts allowing obnoxious lawyers to try to make a name for themselves.

Filed Under: blame, call of duty, intermediary liability, liability, moral panic, steve dallas, uvalde, video games
Companies: activision, daniel defense, instagram, meta

Stop Expecting Tech Companies To Provide ‘Consequences’ For Criminal Behavior; That’s Not Their Job

from the stop-blaming-the-tools dept

Whose job is it to provide consequences when someone breaks the law?

It seems like this issue shouldn’t be that complicated. We expect law enforcement to deal with it when someone breaks the law. Not private individuals or organizations. Because that’s vigilantism.

Yet, on the internet, over and over again, we keep seeing people set the expectation that tech companies need to provide the consequences, even when those who actually violate the law already face legal consequences.

None of this is to say that tech companies shouldn’t be focused on trying to minimize the misuse of their products. They have trust & safety teams for a reason. They know that if they don’t, they will face all sorts of reasonable backlash: advertisers and users leaving, negative media coverage, and more. But demanding that they face legal consequences, while ignoring the legal consequences facing the actual users who violated the law… is weird.

For years, one of the cases that we kept hearing about as an example of why Section 230 was bad and needed to be done away with was Herrick v. Grindr. In that case, a person who was stalked and harassed sued Grindr for supposedly enabling such harassment and stalking.

What’s left out of the discussion is that the guy who stalked Herrick was arrested and ended up pleading guilty to criminal contempt, identity theft, falsely reporting an incident, and stalking. He was then sentenced to over a year in prison. Indeed, it appears he was arrested a few weeks before the lawsuit was filed against Grindr.

So, someone broke the law and faced the legal consequences. Yet some people are still much more focused on blaming the tech companies for not somehow “dealing” with these situations. Hell, much of the story around the Herrick case was about how there were no other remedies that he could find, even as the person who wronged him was, for good reason, in prison.

We’re now seeing a similar sort of thing with a new case you might have heard about recently. A few weeks ago, a high school athletic director, Dazhon Darien, was arrested in Baltimore after using some AI tools to mimic the voice of Pikesville High School Principal Eric Eiswert. Now Darien may need to use his AI tools to conjure up a lawyer.

A Maryland high school athletic director is facing criminal charges after police say he used artificial intelligence to duplicate the voice of Pikesville High School Principal Eric Eiswert, leading the community to believe Eiswert said racist and antisemitic things about teachers and students.

“We now have conclusive evidence that the recording was not authentic,” Baltimore County Police Chief Robert McCullough told reporters during a news conference Thursday. “It’s been determined the recording was generated through the use of artificial intelligence technology.”

Dazhon Darien, 31, was arrested Thursday on charges of stalking, theft, disruption of school operations and retaliation against a witness after a monthslong investigation from the Baltimore County Police Department.

This received plenty of attention as an example of the kind of thing people are worried about regarding “deepfakes” and whatnot: where someone is accused of doing something they didn’t do, based on proof faked with AI tools.

However, every time this comes up, the person seems to be caught. And, in this case, they’ve been arrested and could face some pretty serious consequences including prison time and a conviction on their record.

And yet, in that very same article, NPR quotes professor Hany Farid complaining about the lack of consequences.

After following this story, Farid is left with the question: “What is going to be the consequence of this?”

[….]

Farid said there remains, generally, a lackluster response from regulators reluctant to put checks and balances on tech companies that develop these tools or to establish laws that properly punish wrongdoers and protect people.

“I don’t understand at what point we’re going to wake up as a country and say, like, why are we allowing this? Where are our regulators?”

I guess “getting arrested and facing a prison sentence” aren’t consequences? I mean, sure, maybe it doesn’t have the same ring to it as “big tech bad!!” but, really, how could anyone say with a straight face that there are no consequences here? How could anyone in the media print that without noting the focus of the very story they’re reporting?

This behavior already breaks the law and is a criminal matter, and we let law enforcement handle those. If there were no consequences, and we were allowing this as a society, Darien would not have been arrested and would not be facing a trial next month.

I understand that there’s anger from some corners that this happened in the first place, but this is the nature of society. Some people break the law, and we treat them accordingly. Wishing to live in a world in which no one could ever break the law, or in which companies were somehow magically responsible for guaranteeing that no one would ever misuse their products, is not a path to a good outcome. It would lead to a horrific mess of mostly useless tools, ruined by the small group of people who might misuse them.

We have a system to deal with criminals. We can use it. We shouldn’t be deputizing tech companies, which are problematic enough already, to take on Minority Report-style “pre-crime” policing as well.

I understand that this is kinda Farid’s thing. Last year we highlighted him blaming Apple for CSAM online. Farid constantly wants to blame tech for the fact that some people will misuse the tech. And, I guess that gets him quoted in the media, but it’s pretty silly and disconnected from reality.

Yes, tech companies can put in place some safeguards, but people will always find some ways around them. If we’re talking about criminal behavior, the way to deal with it is through the criminal justice system. Not magically making tech companies go into excess surveillance mode to make sure no one is ever bad.

Filed Under: ai, blame, crime, dazhon darien, deep fakes, generative ai, hany farid, law enforcement

Sextortion Is A Real & Serious Criminal Issue; Blaming Section 230 For It Is Not

from the stay-focused-here dept

Let’s say I told you a harrowing story about a crime. Criminals from halfway around the world used fraudulent means and social engineering to scam a teenager, leading the teen to believe their life had been effectively destroyed. The teen then took an easily accessible gun from their parent and shot and killed themselves. Law enforcement investigated the crime, tracked down the people responsible, extradited them to the US and tried them. Eventually, they were sentenced to many years in prison.

Who would you blame for such a thing?

Apparently, for some people, the answer is Section 230. And it makes no sense at all.

That, at least, is the takeaway from an otherwise harrowing, distressing, and fascinating article in Bloomberg Businessweek about the very real and very serious problem of sextortion.

The article is well worth reading, as it not only details the real (and growing) problem of sextortion, but shows how a momentary youthful indiscretion — coaxed by a skillful social engineer — can destroy someone’s life.

The numbers on sextortion are eye-opening:

It was early 2022 when analysts at the National Center for Missing & Exploited Children (NCMEC) noticed a frightening pattern. The US nonprofit has fielded online-exploitation cybertips since 1998, but it had never seen anything like this.

Hundreds of tips began flooding in from across the country, bucking the trend of typical exploitation cases. Usually, older male predators spend months grooming young girls into sending nude photos for their own sexual gratification. But in these new reports, teen boys were being catfished by individuals pretending to be teen girls—and they were sending the nude photos first. The extortion was rapid-fire, sometimes occurring within hours. And it wasn’t sexually motivated; the predators wanted money. The tips were coming from dozens of states, yet the blackmailers were all saying the same thing:

“I’m going to ruin your life.”

“I’m going to make it go viral.”

“Answer me quickly. Time is ticking.”

“I have what I need to destroy your life.”

As the article details, there is something of a pattern in many of these sextortion cases. There are even “training” videos floating around that teach scammers how to effectively social engineer the result: get control over an Instagram or Snapchat account of a young girl and start friending/flirting with teen boys.

After getting flirty enough, the scammer sends a fake nude and asks for one in return. Then, the second the teen boy does the teen boy thing and sends a compromising photo, the scammer goes straight into extortion mode, promising to ruin the boy’s life:

Around midnight, Dani got flirtatious. She told Jordan she liked “playing sexy games.” Then she sent him a naked photo and asked for one in return, a “sexy pic” with his face in it. Jordan walked down the hallway to the bathroom, pulled down his pants and took a selfie in the mirror. He hit send.

In an instant, the flirty teenage girl disappeared.

“I have screenshot all your followers and tags and can send this nudes to everyone and also send your nudes to your family and friends until it goes viral,” Dani wrote. “All you have to do is cooperate with me and I won’t expose you.”

Minutes later: “I got all I need rn to make your life miserable dude.”

As the article notes, this is part of the “playbook” that is used to teach the scammers:

The Yahoo Boys videos provided guidance on how to sound like an American girl (“I’m from Massachusetts. I just saw you on my friend’s suggestion and decided to follow you. I love reading, chilling with my friends and tennis”). They offered suggestions for how to keep the conversation flowing, how to turn it flirtatious and how to coerce the victim into sending a nude photo (“Pic exchange but with conditions”). Those conditions often included instructions that boys hold their genitals while “making a cute face” or take a photo in a mirror, face included.

Once that first nude image is sent, the script says, the game begins. “NOW BLACKMAIL 😀!!” it tells the scammer, advising they start with “hey, I have ur nudes and everything needed to ruin your life” or “hey this is the end of your life I am sending nudes to the world now.” Some of the blackmail scripts Raffile found had been viewed more than half a million times. One, called “Blackmailing format,” was uploaded to YouTube in September 2022 and got thousands of views. It included the same script that was sent to Jordan DeMay—down to the typos.

The article mostly focuses on the tragic case of one teen, DeMay, who shot himself very soon after getting hit with this scam. The article notes, just in passing, that DeMay had access to his father’s gun. Yet, somehow, guns and easy access to them are never mentioned as anything to be concerned about, even as the only two suicides mentioned in the article both involve teen boys who seemed to have unsupervised access to guns with which to shoot themselves.

Apparently, this is all the fault of Section 230 instead.

Hell, the article even describes how this was a criminal case: (somewhat amazingly!) the FBI tracked down the actual scammers in Nigeria, had them extradited to Michigan, and got them to plead guilty to the crime (which carries a mandatory minimum of 15 years in prison). Yet apparently, this is still… an internet problem?

The reality is that this is a criminal problem, and it’s appropriate to treat it as such, where law enforcement has to deal with it (as they did in this case).

It seems like there are many things to blame here: the criminals themselves (who are going to prison for many years), the easy access to guns, even the failure to teach kids to be careful with who they’re talking to or what to do if they got into trouble online. But, no, the article seems to think this is all Section 230’s fault.

DeMay’s family appears to have been suckered by a lawyer into suing Meta (the messages to him came via Instagram):

In January, Jordan’s parents filed a wrongful death lawsuit in a California state court accusing Meta of enabling and facilitating the crime. That month, John DeMay flew to Washington to attend the congressional hearing with social media executives. He sat in the gallery holding a picture of Jordan smiling in his red football jersey.

The DeMay case has been combined with more than 100 others in a group lawsuit in Los Angeles that alleges social media companies have harmed children by designing addictive products. The cases involve content sent to vulnerable teens about eating disorders, suicide and dangerous challenges leading to accidental deaths, as well as sextortion.

“The way these products are designed is what gives rise to these opportunistic murderers,” says Matthew Bergman, founder of the Seattle-based Social Media Victims Law Center, who’s representing Jordan’s parents. “They are able to exploit adolescent psychology, and they leverage Meta’s technology to do so.”

Except all of that is nonsense. Yes, sextortion is problematic, but what the fuck in the “design” of Instagram aids it? It’s a communication tool, like any other. In the past, people used phones and the mail service for extortion, and no one sued AT&T or the postal service because of it. It’s utter nonsense.

But Bloomberg runs with it and implies that Section 230 is somehow getting in the way here:

The lawsuits face a significant hurdle: overcoming Section 230 of the Communications Decency Act. This liability shield has long protected social media platforms from being held accountable for content posted on their sites by third parties. If Bergman’s product liability argument fails, Instagram won’t be held responsible for what the Ogoshi brothers said to Jordan DeMay.

Regardless of the legal outcome, Jordan’s parents want Meta to face the court of public opinion. “This isn’t my story, it’s his,” John DeMay says. “But unfortunately, we are the chosen ones to tell it. And I am going to keep telling it. When Mark Zuckerberg lays on his pillow at night, I guarantee he knows Jordan DeMay’s name. And if he doesn’t yet, he’s gonna.”

So here’s a kind of important question: how would this story have played out any differently in the absence of Section 230? What different thing would Mark Zuckerberg do? I mean, it’s possible that Facebook/Instagram wouldn’t really exist at all without such protections, but assuming they do, what legal liability would be on the platforms for this kind of thing happening?

The answer is nothing. For there to be any liability under the First Amendment, there would have to be evidence that Meta employees knew of the specific sextortion attempt against DeMay and did nothing to stop it. But that’s ridiculous.

Instagram has 2 billion users. What are the people bringing the lawsuit expecting Meta to do? To hire people to read every direct message going back and forth among users, spotting the ones that are sextortion, and magically stepping in to stop them? That’s not just silly, it’s impossible and ridiculously intrusive. Do you want Meta employees reading all your DMs?

Even more to the point, Section 230 is what allows Meta to experiment with better solutions to this kind of thing. For example, Meta has recently announced new tools to help fight sextortion by using nudity detectors to try to prevent kids from sending naked photos of themselves.

Developing such a tool and providing such help would be riskier without Section 230, as it would be an “admission” that people use their tools to send nudes. But here, the company can experiment with providing better tools because of 230. The focus on blaming Section 230 is so incredibly misplaced that it’s embarrassing.

The criminals are actually responsible for the sextortion scam and the end results, and possibly whoever made it so damn easy for the kid to get his father’s gun in the middle of the night to shoot himself. The “problem” here is not Section 230, and removing Section 230 wouldn’t change a damn thing. This lawsuit is nonsense, and sure, maybe it makes the family feel better to sue Meta, but just because a crime happened on Instagram doesn’t magically make it Instagram’s fault.

And for good reason: as noted above, this was always a law enforcement situation. We shouldn’t ever want to turn private companies into law enforcement, because that would be an extremely dangerous result. Let Meta provide its communications tools. Let law enforcement investigate crimes and bring people to justice (as happened here). Maybe we should focus on better educating our kids to be aware of threats like sextortion and how to respond if they happen to make a mistake and get caught up in it.

There’s lots of blame to go around here, but none of it belongs on Section 230.

Filed Under: blame, criminals, fbi, guns, jordan demay, law enforcement, liability, section 230, sextortion, yahoo boys
Companies: meta

Ninth Circuit Dumps Three More ‘Sue Twitter Because Terrorism’ Lawsuits

from the when-exploitation-compounds-tragedy dept

While it’s understandable to desire someone be held responsible for brutal acts of terrorism, the responsibility for those actions lies with those who committed them. That’s hardly satisfying because it can be almost impossible to extract anything from the terrorists themselves, other than the limited recompense of seeing them arrested and jailed.

And that’s something that rarely happens. Many terrorist acts are suicidal, allowing perpetrators to exit the world as self-proclaimed martyrs, rather than the abhorrent murderers they actually are.

So, I understand the desire to sue social media companies whose platforms have been used by terrorist groups to recruit members and spread propaganda. The thing is, social media services aren’t complicit with these actions and, in most cases, are doing what they can to prevent this sort of content from being posted and shared.

What I can’t understand is the motivation of law firms like Excolo Law and (yes, this is its name) 1-800-LAW-FIRM to bring further misery to victims of terrorism by pretending there’s an actionable claim to be made in court against companies like Twitter and Facebook. This pretense — which has yet to hold up in court — allows these questionable legal firms to pretend they’re the Davids going up against these Goliaths, exploiting people who now have to relive these horrible experiences by becoming plaintiffs in lawsuits that cannot realistically expect to win.

The losses just keep mounting. The legal theories pushed by these firms have yet to secure a single win. And in just four pages, the Ninth Circuit Appeals Court has handed [PDF] these plaintiffs and their legal reps another three losses. (via Courthouse News Service)

This decision consolidates three appeals, all stemming from dismissals with prejudice by lower courts. All three plaintiffs (all represented by the same two law firms listed above) sued Google, Twitter, and Facebook under the theory that the mere appearance of terrorist content on their platforms amounts to material support for terrorism, or that, at the very least, the companies were negligent in their moderation efforts.

None of that works. It didn’t work at the lower level, and the appeals court sees no reason to expend any more words than necessary to affirm these dismissals. This single paragraph is half the decision (not including footnotes), and it makes it extremely clear these arguments will never work in this circuit, or indeed anywhere else in the federal judiciary, thanks to its brief citation of the Supreme Court’s controlling Taamneh decision (itself applying the D.C. Circuit’s Halberstam framework).

The court concludes de novo that amending the operative complaints would be futile. See Leadsinger, Inc. v. BMG Music Publ’g, 512 F.3d 522, 532 (9th Cir. 2008). Plaintiffs-Appellants fail to allege the third element for aiding and abetting liability under 18 U.S.C. § 2333(d), that Defendants-Appellees “gave such knowing and substantial assistance to ISIS that they culpably participated” in the terrorist acts, Taamneh, 598 U.S. at 497 (applying the legal framework set forth in Halberstam v. Welch, 705 F.2d 472 (D.C. Cir. 1983)). Each district court properly considered this dispositive third element. See id. at 503–07. Plaintiffs-Appellants proffer no arguments that any of the district courts either erred in dismissing claims or abused its discretion in denying leave to amend.

The law firms behind these lawsuits are exploiting tragedies and the people who survived them to extend the distance between the victims and closure. And it has happened over and over and over again. Mandy Palmucci is one of the plaintiffs affected by this decision. Here’s what the lower court said when it dismissed her case:

Following the Fields decisions, materially similar direct liability claims have been rejected by numerous judges in this District and elsewhere. See Clayborn v. Twitter, Inc., 17-CV-06894- LB, 2018 WL 6839754 (N.D. Cal. Dec. 31, 2018); Copeland v. Twitter, Inc., 352 F. Supp. 3d 965, 17-CV-5851-WHO (N.D. Cal. 2018); Taamneh v. Twitter, Inc., 343 F. Supp. 3d 904, 17-CV04107-EMC (N.D. Cal. 2018); Cain v. Twitter Inc., 17-CV-02506-JD, 2018 WL 4657275 (N.D. Cal. Sept. 24, 2018); Gonzalez v. Google, Inc., 335 F. Supp. 3d 1156, 16-CV-03282-DMR (N.D. Cal. 2018) (Gonzalez II); Gonzalez v. Google, Inc., 282 F. Supp. 3d 1150 (N.D. Cal. Oct. 23, 2017) (Gonzalez I); Pennie v. Twitter, Inc., 281 F. Supp. 3d 874, 17-CV-00230-JCS (N.D. Cal. Dec. 4, 2017); see also Crosby v. Twitter, Inc., 303 F. Supp. 3d 564 (E.D. Mich. March 30, 2018).

The lawyers handling these cases know these are losing causes. But they keep doing the same thing over and over again, when they should be telling these clients that these are cases that can’t be won. These firms present themselves as crusaders against Big Tech, but all they’re really doing is taking advantage of people who’ve already been subjected to the worst things this world has to offer. This track record would be inexcusable if it were the result of a hallucinating AI. There’s not even a word that capably describes what this is when there are actual living, breathing, law-license-holding humans behind it.

Filed Under: 9th circuit, blame, social media, terrorism
Companies: 1-800-law-firm, excolo law, twitter, x

Confused NY Court Says That Section 230 Doesn’t Block Ridiculous Lawsuit Blaming Social Media For Buffalo Shooter

from the the-blame-game dept

Can you imagine what kind of world we’d live in if you could blame random media companies for tangential relationships they had with anyone who ever did anything bad? What would happen if we could blame newspapers for inspiring crime? Or television shows for inspiring terrorism? The world would be a much duller place.

We’ve talked a lot about how the entire purpose of Section 230 of the Communications Decency Act is to put the liability on the right party. That is, it’s entirely about making sure the right party is being sued, and avoiding wasting everyone’s time by suing some random party, especially in pursuit of “Steve Dallas” type lawsuits, where you just sue some random company, tangentially connected to some sort of legal violation, because they have the deepest pockets.

Unfortunately, a judge in the NY Supreme Court (which, bizarrely, is NY’s lowest level of courts) has allowed just such a lawsuit to move forward. It was filed by the son of a victim of the racist dipshit who went into a Buffalo supermarket and shot and killed a bunch of people a couple years ago. That is, obviously, a very tragic situation. And I can certainly understand the search for someone to blame. But blaming “social media” because someone shot up a supermarket is ridiculous.

It’s exactly the kind of thing that Section 230 was designed to get tossed out of court quickly.

Of course, NY officials spent months passing the blame and pointing at social media companies. NY’s Governor and Attorney General wanted to deflect blame from the state’s massive failings in handling the situation. I mean, the shooter had made previous threats, and law enforcement in NY had been alerted to those threats and failed to stop him. He also used a high-capacity magazine that was illegal in NY, and law enforcement failed to catch that, too. And when people in the store called 911, the dispatcher didn’t believe them and hung up on them.

The government had lots of failings that aren’t being investigated, and lawsuits aren’t being filed over those. But, because the shooter also happened to be a racist piece of shit on social media, people want to believe we should be able to magically sue social media.

And, the court is allowing this based on a very incorrect understanding of Section 230. Specifically, the court has bought into the trendy, but ridiculous, “product liability” theory, that is now allowing frivolous and vexatious plaintiffs across the country to use this “one weird trick” to get around Section 230. Just claim “the product was defective” and, boom, the court will let the case go forward.

That’s what happened here:

The social media/internet defendants may still prove their platforms were mere message boards and/or do not contain sophisticated algorithms thereby providing them with the protections of the CDA and/or First Amendment. In addition, they may yet establish their platforms are not products or that the negligent design features plaintiff has alleged are not part of their platforms. However, at this stage of the litigation the Court must base its ruling on the allegations of the complaint and not “facts” asserted by the defendants in their briefs or during oral argument and those allegations allege viable causes of action under a products liability theory.

Now, some might say “no big deal, the court says they can raise this issue again later,” but nearly all of the benefit of Section 230 is in how it gets these frivolous cases tossed early. Otherwise, the expense of these cases adds up and creates a real mess (along with the pressure to settle).

Also, the judge here seems very confused. Section 230 does not protect “mere message boards.” It protects any interactive computer service from being held liable for third-party speech. And whether or not they “contain sophisticated algorithms” should have zero bearing on whether or not Section 230 applies.

The test for whether Section 230 applies is quite simple: (1) Is this an interactive computer service? (2) Would holding it liable in this scenario be holding it liable for the speech of someone else? (3) Does this fall outside the limited exceptions to Section 230 (i.e., intellectual property law, trafficking, or federal criminal law)? That’s it. Whether or not you’re a message board, or whether you use algorithms, has nothing to do with it.

Again, this kind of ruling only encourages more such vexatious litigating.

Outside of Section 230, the social media defendants sought to go to the heart of the matter and just made it clear that there’s no causal link between “dipshit being a dipshit on social media” and “dipshit going on a murderous rampage.”

And, again, the judge doesn’t seem to much care, saying that a jury can figure that out:

As a general proposition the issue of proximate cause between the defendants’ alleged negligence and a plaintiff’s injuries is a question of fact for a jury to determine. Oishei v. Gebura 2023 NY Slip Op 05868, 221 AD3d 1529 (4th Dept 2023). Part of the argument is that the criminal acts of the third party, break any causal connection, and therefore causation can be decided as a matter of law. There are limited situations in which the New York Court of Appeals has found intervening third party acts to break the causal link between parties. These instances are where “only one conclusion may be drawn from the established facts and where the question of legal cause may be decided as a matter of law.” Derdiarian v Felix Contr. Corp., 51 NY2d 308 at 315 (1980). These exceptions involve independent intervening acts that do not flow from the original alleged negligence.

Again, though, getting this kind of case to a jury would be crazy. It would be a massive waste of everyone’s resources. By any objective standard, anyone looking at this case would recognize that it is not just frivolous and vexatious, but that it creates really terrible incentives all around.

If these kinds of cases are allowed to continue, you will get more such frivolous lawsuits over anything bad that happens. Worse, you will get much less speech online overall, as websites will have every incentive to take down or block any speech that isn’t Sesame Street-level wholesome. Any expression of anger, any complaint, any expression of being unhappy about anything could otherwise be seen as proof that the social media site was “defective in its design” for not magically connecting that expression to future violence.

That would basically be the end of any sort of mental health forum. It would be the end of review sites. It would be the end of all sorts of useful websites, because the liability that could accrue from just one user on those forums saying something negative would be too much. If just one of the people in those forums then took action in the real world, people could blame the site and sue it for not magically stopping the real-world violence.

This would be a disaster for the world of online speech.

As Eric Goldman notes, it seems unlikely that this ruling will survive on appeal, but it’s still greatly problematic:

I am skeptical this opinion will survive an appeal. The court disregards multiple legal principles to reach an obviously results-driven decision from a judge based in the emotionally distraught community.

The court doesn’t cite other cases involving similar facts, including Gibson v. Craigslist and Godwin v. Facebook. One of the ways judges can reach the results they want is by selectively ignoring the precedent, but that approach doesn’t comply with the rule of law.

This opinion reinforces how the “negligent design” workaround to Section 230 will functionally eliminate Section 230 if courts allow plaintiffs to sue over third-party content by just relabeling their claims.

Separately, I will note my profound disappointment in seeing a variety of folks cheering on this obviously problematic ruling. Chief among them: Free Press. I don’t always agree with Free Press on policy prescriptions, but generally, their heart is in the right place on core internet freedom issues. Yet, they put out a press release cheering on this ruling.

In the press release, they claim (ridiculously) that letting this case move forward is a form of “accountability” for those killed in the horrific shooting in Buffalo. But that’s absurd, and anyone should recognize it. There are people to hold liable for what happened there: most obviously the shooter himself. But trying to hold random social media sites liable for not somehow stopping future real-world violence is beyond silly. As described above, it’s also extremely harmful to the causes around free speech that Free Press claims as part of its mission.

I’m surprised and disappointed to see Free Press take such a silly stance, one that undermines its credibility. But I’m even more disappointed in the court for ruling this way.

Filed Under: blame, buffalo shooting, liability, product liability, section 230, wayne jones
Companies: google, reddit, youtube

E-Bike Industry Blames Consumers For Fires In Effort To Undermine ‘Right To Repair’ Laws

from the fix-your-own-shit dept

Mon, Aug 28th 2023 05:29am - Karl Bode

Countless companies and industries enjoy making up scary stories when it comes to justifying their opposition to making it easier to repair your own tech. Apple claims that empowering consumers and bolstering independent repair shops will turn states into “hacker meccas.” The car industry insists that making it easier and cheaper to repair modern cars will be a boon to sexual predators.

A single theme is routinely peppered throughout these arguments: providing easier and cheaper repair options to consumers is simply too dangerous to be considered. It apparently doesn’t matter that an FTC study recently found those claims to be self-serving bullshit designed to protect harmful repair monopolies from reform and lost repair revenue.

That right to repair is simply too dangerous to embrace is also, apparently, the argument being made by the growing e-bike sector. People for Bikes, the national trade org representing bicycle manufacturers, has been reaching out to lawmakers urging them to exempt bicycles from all right to repair legislation. Successfully, as it turns out.

You might recall that New York recently passed a right to repair law that was immediately watered down by NY Governor Kathy Hochul. The bill already exempted key industries where repair monopolization is a problem, such as cars, home appliances, farm equipment, and medical gear. Unsatisfied, numerous industries got Hochul to water the bill down even further.

A report at Grist notes this included weakening the bill at the behest of the bike industry, which in a letter to lawmakers tried to place the onus for now-common e-bike fires on consumers:

In a letter sent to New York Governor Kathy Hochul in December, People for Bikes asked that e-bikes be excluded from the state’s forthcoming digital right-to-repair law, which granted consumers the right to fix a wide range of electronic devices. The letter cited “an unfortunate increase in fires, injuries and deaths attributable to personal e-mobility devices” including e-bikes. Many of these fires, People for Bikes claimed in the letter, “appear to be caused by consumers and others attempting to service these devices themselves,” including tinkering with the batteries at home.

This, of course, is an industry whose products are already often unreliable and dangerous on their own; there have been endless examples of deadly fires caused by shoddy products and unreliable batteries. Most of these fires have absolutely nothing to do with consumers making repair mistakes. When pressed for evidence, the organization admitted the claim was “anecdotal”:

Asked for data to back up the claim that e-bike fires were being caused by unauthorized repairs, Lovell said that it was “anecdotal, from folks that are on the ground in New York.”

How very truth-esque.

As e-bikes get more complicated, it’s obviously more important than ever to ensure that repairing those bikes is affordable. Activists note that to create a sustainable, environmentally responsible industry with satisfied customers, the bike industry needs to do a much better job designing its bikes to be repairable, standardizing parts, and making it easier for consumers to access manuals and tools:

“There’s huge interest” in fixing e-bikes, said Kyle Wiens, CEO of the online repair guide site iFixit. But outside of manufacturers and specialized shops, “no one knows how.”

New York’s original law could have gone a long way in fixing that, but lawmakers were intent on undermining their own legislation after hearing scary, often false stories by self-serving industries. Minnesota recently passed its own right to repair law, and while also watered down to exclude cars, medical equipment, and game consoles, it did at least manage to include e-bikes.

Filed Under: bicycles, bike shops, blame, consumers, e-bikes, freedom to tinker, repair monopolies, repair shops, right to repair

Witcher Producer: Show’s Shit Viewership Is Because Of Dumb Americans And Social Media

from the american-idiots dept

There’s nothing particularly novel when it comes to showrunners of media properties blaming all these damned kids and their internet for why their productions aren’t as successful as they wanted. Everything from Broadway productions to viewership of the damned Olympics has had young people and social media blamed for declining or terrible viewership/attendance numbers. In nearly every case where you dig into this, however, you find that this blame game is exactly that. Sometimes the product just sucks, or the proper marketing hasn’t been undertaken, or the product just sucks, or you’ve misjudged what the audience wants, or the product just sucks.

Which brings us to Netflix’s adaptation of The Witcher novels. The two-part season 3 finale aired recently to a roughly 30% viewership drop compared with the previous season. You can speculate as to the reasons for that drop. It could be that beloved actor Henry Cavill, who plays the titular character in the series, will not be back for season 4. Or perhaps the writing for the show deviating from the novels put the audience off. Or perhaps the show’s storyline was either too confusing or just not terribly interesting compared with previous seasons. Or perhaps show producer Tomek Baginski is right and the problem is that Americans, who make up a massive percentage of the audience for the show, are just too dumb for a more complicated plot, so the showrunners dumbed it down, and that somehow led to the show’s viewership decline?

“I had the same perceptual block when I presented Hardkor 44 [a never-made variation on the Warsaw Uprising] abroad years ago and tried to explain: there was an uprising against Germany, but the Russians were across the river, and on the German side there were also soldiers from Hungary or Ukraine,” Baginski told Wyborcza. “For Americans, it was completely incomprehensible, too complicated, because they grew up in a different historical context, where everything was arranged: America is always good, the rest are the bad guys. And there are no complications.”

Baginski continued, saying simplifications of plot points are just as painful for writers as they are for viewers, but that oversimplifications of an otherwise nuanced and complex topic are often “necessary” so that a show can reach a larger audience.

You should be able to see just how confused these statements are. Let me take you on a run-on sentence journey so you can see them all summarized together. Americans are too simple-minded to be able to handle a complicated plot, and they make up a huge part of the audience, so we dumbed the plot down for those idiots so we could appeal to a larger audience, except that audience is actually smaller, but that isn’t the fault of the show, because of the Americans.

I’m far from a jingoist, but that’s as befuddling a bit of logic as I’ve ever witnessed. Could be because I’m just an American idiot, to be sure.

But this also isn’t Baginski’s first foray into deflecting blame for his show’s viewership decline.

In an interview with the Polish YouTube channel Imponderabilia, Baginski blamed season two’s low viewership on younger viewers, who frequent social media sites like YouTube and TikTok and have short attention spans.

“When it comes to shows, the younger the public is, the logic of the plot is less significant…Those people grew up on TikTok and YouTube, they jump from video to video,” Baginski said, adding that young folks gravitate more toward “just emotions.”

When the interviewer chimed in and said they were part of the age range of viewers Baginski was talking about, the producer replied saying “Okay, so it’s time to be serious. Dear children, what you do to yourself makes you less resilient for longer content, for long and complicated chains of cause and effect.”

Which is exactly why there are no analogous examples of successful shows built on complicated, long, saga-style content. Like Game of Thrones. Or The Expanse. Or Stranger Things. Oh, wait, all of those shows exist and are beloved both in America and abroad.

This blame game is silly. And, sure, you’ll find plenty of times when I’ve called Americans dumb myself. And in some ways, we are. But the viewership numbers for The Witcher declining is almost certainly a self-inflicted wound.

Filed Under: blame, social media, tomek baginski, witcher

New York Wants To Destroy Free Speech Online To Cover Up For Their Own Failings Regarding The Buffalo Shooting

from the elect-better-people dept

Back in May, it seemed fairly obvious how all of this was going to go down. Following the horrific mass murder carried out at a supermarket in Buffalo, we saw NY’s top politicians all agree that the real blame… should fall on the internet and Section 230. It had quickly become clear that NY’s own government officials had screwed up royally multiple times in the leadup to the massacre. The suspect had made previous threats, which law enforcement mostly brushed off. And then, most egregiously, the 911 dispatcher who answered the call about the shooting hung up on the caller. And we won’t even get into a variety of other societal failings that resulted in all of this. No, the powers that be have decided to pin all the blame on the internet and Section 230.

To push this narrative, and to avoid taking any responsibility themselves, NY’s governor Kathy Hochul had NY Attorney General Letitia James kick off a highly questionable “investigation” into how much blame they could pin on social media. The results of that “investigation” are now in, and would you believe it? AG James is pretty sure that social media and Section 230 are to blame for the shooting! Considering the entire point of this silly exercise was to deflect blame and put it towards everyone’s favorite target, it’s little surprise that this is what the investigation concluded.

Hochul and James are taking victory laps over this. Here’s Hochul:

“For too long, hate and division have been spreading rampant on online platforms — and as we saw in my hometown of Buffalo, the consequences are devastating,” Governor Hochul said. “In the wake of the horrific white supremacist shooting this year, I issued a referral asking the Office of the Attorney General to study the role online platforms played in this massacre. This report offers a chilling account of factors that contributed to this incident and, importantly, a road map toward greater accountability.”

Hochul is not concerned about the failings of law enforcement officials, nor the failings of mental health efforts. Nor the failings of efforts to keep unwell people from accessing weapons for mass murder. Nope. It’s the internet that’s to blame.

James goes even further in her statement, flat out blaming freedom of speech for mass murder.

“The tragic shooting in Buffalo exposed the real dangers of unmoderated online platforms that have become breeding grounds for white supremacy,” said Attorney General James.

The full 49-page report is full of hyperbole, insisting that the use of forums by people doing bad things is somehow proof that the forums themselves caused the people to be bad. The report puts tremendous weight on the claims of the shooter himself, an obviously troubled individual, who insists that he was “radicalized” online. The report’s authors simply assume that this is accurate, and that it wasn’t just the shooter trying to push off responsibility for his own actions.

Incredibly, the report has an entire section that highlights how residents of Buffalo feel that social media should be held responsible. But, that belief that social media is to blame is… mostly driven by misleading information provided by the very same people creating this report in order to offload their own blame. Like, sure, if you keep telling people that social media is to blame, don’t be surprised when they parrot back your talking points. But that doesn’t mean those are meaningful or accurate.

There are many other oddities in the report. The shooter apparently set up a Discord server, with himself as the only member, where he wrote out a sort of “diary” of his plans and thinking. The report seems to blame Discord for this, even though this is no different than opening a local notepad and keeping notes there, or writing them down by hand on a literal notepad. I mean, what is this nonsense:

By restricting access to the Discord server only to himself until shortly before the attack, he ensured to near certainty that his ability to write would not be impeded by Discord’s content moderation.

Discord’s content moderation operates dually at the individual user and server level, and generally across the platform. The Buffalo shooter had no incentive to operate any server-level moderation tools to moderate his own writing. But the platform’s scalable moderation tools also did not stop him from continuing to plan his mass violence down to every last detail.

[….]

But without users or moderators apart from the shooter himself to view his writings, there could be no reports to the platform’s Trust and Safety Team. In practice, he mocked the Community Guidelines, writing in January 2022, “Looks like this server may be in violation of some Discord guidelines,” quoting the policy prohibiting the use of the platform for the organization, promotion, or support of violent extremism, and commenting with evident sarcasm, “uh oh.” He continued to write for more than three and a half more months in the Discord server, filling its virtual pages with specific strategies for carrying out his murderous actions.

He used it as a scratchpad. How do you blame Discord for that?!? If he’d done the same thing in a physical notebook, would AG James be blaming Moleskine for selling him a notebook? This just all seems wholly disconnected from reality.

The report also blames YouTube, because the shooter watched a video on how to comply with NY gun laws. As if that can lead to blame?

One of the videos actually demonstrates the use of an attachment to convert a rifle to use only a fixed magazine in order to comply with New York and other states’ assault weapons bans. The presenter just happens to mention that the product box itself notes that the device can be removed with a drill.

The more you read in the report, the more it becomes obvious just how flimsy James’/Hochul’s argument is that social media is to blame. Here’s the report admitting that he didn’t do anything obviously bad on Reddit:

Like the available Discord comments, the content of most of these Reddit posts is largely exchanging information about the pros and cons of certain brands and types of body armor and ammunition. They generally lack context from which it could have been apparent to a reader that the writer was planning a murderous rampage. One comment, posted about a year ago, is chilling in retrospect; he asks with respect to dark-colored tactical gear, “in low light situations such as before dusk after dawn and at nighttime it would provide good camouflage, also maybe it would be also good for blending in in a city?” It is difficult to say, however, that this comment should have been flagged at the time it was made

The report also notes how all these social media sites sprang into action after the shooting (something helped along by Section 230), yet acts as if this is a reason to reform 230. Indeed, while the report complains that investigators were still able to find a few images and video clips from the attack, the numbers were tiny and clearly suggest that barely any slipped through. But this report (again, prepared by a NY state government whose law enforcement checked on the shooter and did nothing about it) suggests that not being perfect in their moderation is a cause for alarm:

For the period May 20, 2022 to June 20, 2022, OAG investigators searched a number of mainstream social networks and related sites for the manifesto and video of the shooting. Despite the efforts these platforms made at moderating this content, we repeatedly found copies of the video and manifesto, and links to both, on some of the platforms even weeks after the shooting. The OAG’s findings most likely represent a mere fraction of the graphic content actually posted, or attempted to be posted, to these platforms. For example, during the course of nine weeks immediately following the attacks, Meta automatically detected and removed approximately 1 million pieces of content related to the Buffalo shooting across its Facebook and Instagram platforms. Similarly, Twitter took action on approximately 5,500 Tweets in the two weeks following the attacks that included still images or videos of the Buffalo shooting, links to still images and videos, or the shooter’s manifesto. Of those, Twitter took action on more than 4,600 Tweets within the first 48 hours of the attack

When we found graphic content as part of these efforts, we reported it through user reporting tools as a violation of the platform’s policy. Among large, mainstream platforms, we found the most content containing video of the shooting, or links to video of the shooting, on Reddit (17 instances), followed by Instagram (7 instances) and Twitter (2 instances) during our review period. We also found links to the manifesto on Reddit (19 instances), the video sharing site Rumble (14 instances), Facebook (5 instances), YouTube (3 instances), TikTok (1 instance), and Twitter (1 instance). Response time varied from a maximum of eight days for Reddit to take down violative content to a minimum of one day for Facebook and YouTube to do so.

We did not find any of this content on the other popular online platforms we examined for such content, which included Pinterest, Quora, Twitch, Discord, Snapchat, and Telegram, during our review period. That is not to say, however, that it does not exist on those platforms.

In other words, sites like Twitter and Facebook took down thousands to millions of reposts of this content, and only a single-digit number of reposts may have slipped through their content moderation systems… and NY’s top politicians think this is a cause for concern?

I mean, honestly, it is difficult to read this report and think that social media is a problem. What the report actually shows is that social media was, at best, tangential to all of this, and when the shooter and his supporters tried to share and repost content associated with the attack, the sites were pretty good (if not absolutely perfect) about getting most of it off the platform. So it’s absolutely bizarre to read all of that and then jump to the “recommendations” section, where they act as if the report showed that social media is the main cause of the shooting, and just isn’t taking responsibility.

It’s almost as if the “recommendations” section was written prior to the actual investigation.

The report summary from Hochul leaves out how flimsy the actual report is, and insists it proves four things the report absolutely does not prove:

  1. Fringe platforms fuel radicalization: this is entirely based on the claims of the shooter himself, who has every reason to blame others for his actions. The report provides no other support for this.
  2. Livestreaming has become a tool for mass shooters: again, the “evidence” here is that this guy did it… and so did the Christchurch shooter in 2019. Of course (tragically and unfortunately), there have been a bunch of mass shootings since then, and the vast, vast majority of them did not involve livestreaming. To argue that there’s any evidence that livestreaming is somehow connected to mass shootings is beyond flimsy.
  3. Mainstream platforms’ moderation policies are inconsistent and opaque. Again, the actual report suggests otherwise. It shows (as we highlighted above) that the mainstream platforms are pretty aggressive in taking down content associated with a mass shooting, and relatively quick to do so.
  4. Online platforms lack accountability. What does accountability even mean here? This prong is used to attack Section 230, ignoring that it’s Section 230 that enabled these companies to build up tools and processes in their trust & safety departments to react to tragedies like this one.

The actual recommendations bounce back and forth between “obviously unconstitutional restrictions on speech” and “confused and nonsensical” (some are both). Let’s go through each of them:

  1. Create Liability for the Creation and Distribution of Videos of Homicides: This is almost certainly problematic under the 1st Amendment. You may recall that law enforcement types have been calling for this sort of thing for ages, going back over a decade. Hell, we have a story from 2008 with NY officials calling for this very same thing. It’s all nonsense. Videos of homicides are… actual evidence. Criminalizing the creation and distribution of evidence of a crime seems like a weird thing for law enforcement to be advocating for. It’s almost as if they don’t want to take responsibility. Relatedly, this would also criminalize taking videos of police shooting people. Which, you know, probably is not such a good idea.
  2. Add Restrictions to Livestreaming: I remind you that the report mentions exactly two cases of livestreamed mass murders: this one in Buffalo and the one in 2019 in Christchurch, New Zealand. That is not exactly proof that livestreaming is deeply connected with mass murder. The suggestion is completely infeasible, demanding “tape delays” on livestreaming, so that… it is no longer livestreaming. They also demand ways to “identify first-person violence before it can be widely disseminated.” And I’d like a pony too.
  3. Reform Section 230: Again, the actual report shows how the various platforms did a ton to get rid of content glorifying the shooter. Yes, a few tiny things slipped through… just as the shooter slipped through New York police review when he was previously reported for threatening violence. But, Hochul and James are sure that 230 is a problem. They demand that “an online platform has the initial burden of establishing that its policies and practices were reasonably designed.” This is effectively a repeal of 230 (as I’ll explain below).
  4. Increase Transparency and Strengthen Moderation: As we’ve discussed at length, many of these transparency mandates are actually censorship demands in disguise. Also, reforming Section 230 as they want would not strengthen moderation, it would weaken it by making it that much more difficult to actually adapt to bad actors on the site. The same is likely true of most transparency mandates, which make it more difficult to adapt to changing threats, because the transparency requirements slow everyone down.

I want to call out, again, why the “reasonably designed” bit of the “reform 230” proposal is so problematic. This requires people to actually understand how Section 230 works. Section 230’s main benefit is procedural: getting frivolous, vexatious cases tossed out early. If you condition 230 protections on proving “reasonableness,” you take away that entire benefit. Because now, every time there’s a lawsuit, you first have to go through the expensive and time-consuming process of proving your policies are reasonable. And, thus, you lose all of the procedural benefits of 230 and are left fighting nuisance lawsuits constantly. The idea makes no sense at all.

Worse, it again greatly limits the ability of sites to adapt and improve their moderation efforts, because now every single change that they make needs to go through a careful legal review before it will get approved, and then every single change will open them up to a new legal challenge that these new policies are somehow “unreasonable.” The entire “reasonableness” scheme incentivizes companies to not fix moderation and to not adapt and strengthen moderation, because any change to your policies creates the risk of liability, and the need to fight long and expensive lawsuits.

So, to sum all this up: we have real evidence that NY state failed in major ways with regard to the Buffalo shooter. Instead of owning that, NY leadership decided to blame social media, initiating this “investigation.” The actual details of the investigation show that social media had very, very little to do with this shooting at all, and where it was used, it was used in very limited ways. It also shows that social media sites were actually extremely fast and on the ball in removing content regarding the shooting; while a very, very tiny bit of content may have slipped through, the filtering process was hugely successful.

And yet… the report still blames social media, insists a bunch of false things are true, and then makes a bunch of questionable (unconstitutional) recommendations, along with recommendations to effectively take away all of Section 230’s benefits… which would actually make it that much more difficult for websites to respond to future events and future malicious actors.

It’s all garbage. But, of course, it’s just politicians grandstanding and deflecting from their own failings. Social media and Section 230 are a convenient scapegoat, so that’s what we get.

Filed Under: 1st amendment, blame, buffalo, content moderation, kathy hochul, letitia james, livestreaming, mass murder, new york, section 230
Companies: discord, reddit, twitch, youtube

Coroner Lists ‘Negative Effects Of Online Content’ As One Of The Causes Of A UK Teen’s Death

from the yikes dept

So… this is a thing that happened. Adam Satariano reports for the New York Times:

The coroner overseeing the case, who in Britain is a judgelike figure with wide authority to investigate and officially determine a person’s cause of death, was far less circumspect. On Friday, he ruled that Instagram and other social media platforms had contributed to her death — perhaps the first time anywhere that internet companies have been legally blamed for a suicide.

“Molly Rose Russell died from an act of self-harm while suffering from depression and the negative effects of online content,” said the coroner, Andrew Walker. Rather than officially classify her death a suicide, he said the internet “affected her mental health in a negative way and contributed to her death in a more than minimal way.”

This was the declaration delivered in the UK inquest into the suicide of 14-year-old Molly Russell. Entered as evidence was a stream of disturbing content pulled from the deceased teen’s accounts and mobile device — videos, images, and posts related to suicide, including a post copied almost verbatim by Russell in her suicide note.

The content Russell apparently viewed in the weeks leading up to her suicide was horrific.

Molly’s social media use included material so upsetting that one courtroom worker stepped out of the room to avoid viewing a series of Instagram videos depicting suicide. A child psychologist who was called as an expert witness said the material was so “disturbing” and “distressing” that it caused him to lose sleep for weeks.

All of this led to Meta executives being cross-examined and asked to explain how a 14-year-old could so easily access this content. Elizabeth Langone, Meta’s head of health and well-being policies, had no explanation.

As has been noted here repeatedly, content moderation at scale is impossible to do well. What may appear to be easy access to disturbing content may be more a reflection of the user than of the platform’s inability to curtail harmful content. And what may appear to be a callous disregard for users may be nothing more than a person slipping through the cracks of content moderation, finding the content that intrigues them despite platforms’ efforts to keep such content from surfacing unbidden in people’s feeds.

This declaration by the UK coroner is, unfortunately, largely performative. It doesn’t really say anything about the death other than what the coroner wants to say about it. And this coroner was pushed into pinning the death (at least partially) on social media by the 14-year-old’s parent, a television director with the apparent power to sway the outcome of the inquest — a process largely assumed to be a factual, rather than speculative, recounting of a person’s death.

Mr. Russell, a television director, urged the coroner reviewing Molly’s case to go beyond what is often a formulaic process, and to explore the role of social media. Mr. Walker agreed after seeing a sample of Molly’s social media history.

That resulted in a yearslong effort to get access to Molly’s social media data. The family did not know her iPhone passcode, but the London police were able to bypass it to extract 30,000 pages of material. After a lengthy battle, Meta agreed to provide more than 16,000 pages from her Instagram, such a volume that it delayed the start of the inquest. Merry Varney, a lawyer with the Leigh Day law firm who worked on the case through a legal aid program, said it had taken more than 1,000 hours to review the content.

What they found was that Molly had lived something of a double life. While she was a regular teenager to family, friends and teachers, her existence online was much bleaker.

From what’s seen here (and detailed in the New York Times article), Molly’s parents didn’t take a good look at her social media use until after she died by suicide. This is not to blame the parents for not taking a closer look sooner, but to point out how ridiculous it is for a coroner to deliver this sort of declaration, especially at the prompting of a grieving parent looking to find someone to blame for his daughter’s suicide.

If this coroner wants to list contributing factors on the public record — especially when involved in litigation — they should at least be consistent. They could have listed “lack of parental oversight,” “peer pressure,” and “unaddressed psychological issues” as contributing factors. This report is showboating intended to portray social media services as harmful and direct attention away from the teen’s desire to access “harmful content.”

And, truly, the role of the coroner is to find the physical causes of death. We go to dangerous places quickly when we start saying that this or that thing clearly caused someone to die by suicide. We don’t know. We can’t know. Even someone trained in psychology (which coroners rarely are) can’t ever truly say what makes a person take their own life. There are likely many reasons, and they may all contribute in their own ways. But in the end, it’s the person who makes the decision, and only they know the real reasons.

As Mike has written in the past, when we officially put “blame” on parties over suicide, it actually creates very serious problems. It allows those who are considering suicide the power to destroy someone else’s life as well, by simply saying that they chose to end their life because of this or that person or company or whatever — whether or not there’s any truth to it.

I’m well aware social media services often value market growth and user activity over user health and safety, but performative inquests are not the way to alter platforms’ priorities. Instead, they provide a basis for bad faith litigation that seeks to hold platforms directly responsible for the actions of their users.

This sort of litigation is already far too popular in the United States. Expect its popularity in the UK to rise quickly, especially given the absence there of First Amendment protections or Section 230 immunity.

It’s understandable for parents to seek closure when their children die unexpectedly. But misusing a process that is supposed to be free of outside influence to create “official” declarations of contributory liability won’t make things better for social media users. All it will do is give them fewer options to connect with people who might be able to steer them away from self-harm.

Filed Under: blame, coroner report, intermediary liability, molly russell, suicide, uk
Companies: meta