
Third Circuit’s Section 230 TikTok Ruling Deliberately Ignores Precedent, Defies Logic

from the that's-not-how-any-of-this-works dept

Step aside Fifth Circuit Court of Appeals, there’s a new contender in town for who will give us the most batshit crazy opinions regarding the internet. This week, a panel on the Third Circuit ruled that a lower court was mistaken in dismissing a case against TikTok on Section 230 grounds.

But, in order to do so, the court had to intentionally reject a very long list of prior caselaw on Section 230, misread some Supreme Court precedent, and (trifecta!) misread Section 230 itself. This may be one of the worst Circuit Court opinions I’ve read in a long time. It’s definitely way up the list.

The implications are staggering if this ruling stands. We just talked about some cases in the Ninth Circuit that poke some annoying and worrisome holes in Section 230, but this ruling takes a wrecking ball to 230. It basically upends the entire law.

At issue are the recommendations TikTok offers on its “For You Page” (FYP), which is the algorithmically recommended feed that a user sees. According to the plaintiff, the FYP recommended a “Blackout Challenge” video to a ten-year-old child, who mimicked what was shown and died. This is, of course, horrifying. But who is to blame?

We have some caselaw on this kind of thing even outside of the internet context. In Winter v. G.P. Putnam’s Sons, the court found that the publisher of an encyclopedia of mushrooms was not liable to “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.

In many ways, Section 230 was designed to speed up this analysis in the internet era, by making it explicit that a website publisher has no liability for harms that come from content posted by others, even if the publisher engaged in traditional publishing functions. Indeed, the point of Section 230 was to encourage platforms to engage in traditional publishing functions.

There is a long list of cases saying that Section 230 should apply here. But the panel on the Third Circuit says it can ignore all of them. There’s a very long footnote (footnote 13) that literally stretches across three pages of the ruling, listing out all of the cases that contradict its holding:

We recognize that this holding may be in tension with Green v. America Online (AOL), where we held that § 230 immunized an ICS from any liability for the platform’s failure to prevent certain users from “transmit[ing] harmful online messages” to other users. 318 F.3d 465, 468 (3d Cir. 2003). We reached this conclusion on the grounds that § 230 “bar[red] ‘lawsuits seeking to hold a service provider liable for . . . deciding whether to publish, withdraw, postpone, or alter content.’” Id. at 471 (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)). Green, however, did not involve an ICS’s content recommendations via an algorithm and pre-dated NetChoice. Similarly, our holding may depart from the pre-NetChoice views of other circuits. See, e.g., Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1098 (9th Cir. 2019) (“[R]ecommendations and notifications . . . are not content in and of themselves.”); Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019) (“Merely arranging and displaying others’ content to users . . . through [] algorithms—even if the content is not actively sought by those users—is not enough to hold [a defendant platform] responsible as the developer or creator of that content.” (internal quotation marks and citation omitted)); Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 21 (1st Cir. 2016) (concluding that § 230 immunity applied because the structure and operation of the website, notwithstanding that it effectively aided sex traffickers, reflected editorial choices related to traditional publisher functions); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 407 (6th Cir. 2014) (adopting Zeran by noting that “traditional editorial functions” are immunized by § 230); Klayman v. Zuckerburg, 753 F.3d 1354, 1359 (D.C. Cir. 2014) (immunizing a platform’s “decision whether to print or retract a given piece of content”); Johnson v. Arden, 614 F.3d 785, 791-92 (8th Cir. 2010) (adopting Zeran); Doe v. MySpace, Inc., 528 F.3d 413, 420 (5th Cir. 2008) (rejecting an argument that § 230 immunity was defeated where the allegations went to the platform’s traditional editorial functions).

I may not be a judge (or even a lawyer), but even I might think that if you’re ruling on something and you have to spend a footnote that stretches across three pages listing all the rulings that disagree with you, at some point, you take a step back and ask:

Principal Skinner meme. First frowning and looking down with hand stroking chin saying: "Am I so out of touch that if every other circuit court ruling disagrees with me, I should reconsider?" Second panel has him looking up and saying "No, it's the other courts who are wrong."

As you might be able to tell from that awful footnote, the Court here seems to think that the ruling in Moody v. NetChoice has basically overturned those rulings and opened up a clean slate. This is… wrong. I mean, there’s no two ways about it. Nothing in Moody says this. But the panel here is somehow convinced that it does?

The reasoning here is absolutely stupid. It’s taking the obviously correct point that the First Amendment protects editorial decision-making, and saying that means that editorial decision-making is “first-party speech.” And then it’s making that argument even dumber. Remember, Section 230 protects an interactive computer service or user from being treated as the publisher (for liability purposes) of third party information. But, according to this very, very, very wrong analysis, algorithmic recommendations are magically “first-party speech” because they’re protected by the First Amendment:

Anderson asserts that TikTok’s algorithm “amalgamat[es] [] third-party videos,” which results in “an expressive product” that “communicates to users . . . that the curated stream of videos will be interesting to them[.]” ECF No. 50 at 5. The Supreme Court’s recent discussion about algorithms, albeit in the First Amendment context, supports this view. In Moody v. NetChoice, LLC, the Court considered whether state laws that “restrict the ability of social media platforms to control whether and how third-party posts are presented to other users” run afoul of the First Amendment. 144 S. Ct. 2383, 2393 (2024). The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment….

Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, id. at 2409, it follows that doing so amounts to first-party speech under § 230, too….

This is just flat out wrong. It is based on the false belief that any “expressive product” makes it “first-party speech.” That’s wrong on the law and it’s wrong on the precedent.

It’s a bastardization of an already wrong argument, put forth by MAGA fools, that Section 230 conflicts with the position NetChoice took in Moody. The argument, as hinted at by Justices Thomas and Gorsuch, is that because NetChoice argues (correctly) that its editorial decision-making is protected by the First Amendment, that is somehow in conflict with the idea that platforms have no legal liability for third-party speech.

But that’s only in conflict if you can’t read and/or don’t understand the First Amendment and Section 230 and how they interact. The First Amendment still protects any editorial actions taken by a platform. All Section 230 does is say that it can’t face liability for third party speech, even if it engaged in publishing that speech. The two things are in perfect harmony. Except to these judges in the Third Circuit.

Contrary to what the panel says here, the Supreme Court at no point says that editorial actions turn into first-party speech because they are protected by the First Amendment. That’s never been true, as even the mushroom encyclopedia example above shows.

Indeed, reading Section 230 in this manner wipes out Section 230. It makes it the opposite of what the law was intended to do. Remember, the law was written in response to the ruling in Stratton Oakmont v. Prodigy, where a local judge found Prodigy liable for content it didn’t moderate, because it did moderate some content. As then-Reps. Chris Cox and Ron Wyden recognized, that would encourage no moderation at all, which made no sense. So they passed 230 to overturn that decision and make it so that internet services could feel free to engage in all sorts of publishing activity without facing liability for the underlying content when that content was provided by a third party.

But here, the Third Circuit has flipped that on its head and said that the second you engage in First Amendment-protected publishing activity around content (such as recommending it), you lose Section 230 protections because the content becomes first-party content.

That’s… the same thing the court ruled in Stratton Oakmont, which 230 overturned. It’s beyond ridiculous for the Third Circuit to say that Section 230 basically enshrined Stratton Oakmont, and that courts are only realizing this 28 years after the law passed.

And yet, that seems to be the conclusion of the panel.

Incredibly, Judge Paul Matey (a FedSoc favorite Trump appointee) has a concurrence/dissent in which he would go even further in destroying Section 230. He falsely claims that 230 only applies to “hosting” content, not recommending it. This is literally wrong. He also falsely claims that Section 230 is a form of “common carriage regulation,” which it is not.

So he argues that the first Section 230 case, the Fourth Circuit’s important Zeran ruling, was decided incorrectly. The Zeran ruling established that Section 230 protected internet services from all kinds of liability for third-party content. Zeran has been adopted by most other circuits (as noted in that footnote of “all the cases we’re going to ignore” above). So in Judge Matey’s world, he would roll back Section 230 to only protect hosting of content and that’s it.

But that’s not what the authors of the law meant (they’ve told us, repeatedly, that the Zeran ruling was correct).

Either way, every part of this ruling is bad. It basically overturns Section 230 for an awful lot of publisher activity. I would imagine (hope?) that TikTok will request a rehearing en banc before the full Third Circuit, and that the court agrees to take it up. At the very least, that would provide a chance for amici to explain how utterly backwards and confused this ruling is.

If not, then you have to think the Supreme Court might take it up, given that (1) they still seem to be itching for direct Section 230 cases and (2) this ruling basically calls out in that one footnote that it’s going to disagree with most other Circuits.

Filed Under: 1st amendment, 1st party speech, 3rd circuit, 3rd party speech, algorithms, fyp, liability, recommendations, section 230
Companies: tiktok

Families Of Uvalde Shooting Victims File Silly Lawsuit Against Meta & Activision, Which Had Nothing To Do With The Shootings

from the exploiting-the-families-here-is-disgusting dept

You have to feel tremendous sympathy for the families of the victims in the school shooting in Uvalde, Texas. As has been well documented, there was a series of cascading failures by law enforcement that made that situation way worse and way more devastating than it should have been.

So who should be blamed? Apparently, Meta and Activision!

Yes, the families also went after the city of Uvalde and recently came to a settlement. That seems like a perfectly righteous lawsuit. However, this new one just seems like utter nonsense and is embarrassing.

The families are suing Meta and Activision for the shooting. It’s making a mockery of the very tragic and traumatic experience they went through for no reason other than to puff up the ego of a lawyer.

It’s reminiscent of moral panics around video games, Dungeons & Dragons, and comic books.

For what it’s worth, they’re also suing Daniel Defense, the company that made the assault rifle used by the shooter in Uvalde. That’s not my area of expertise, so I won’t dig deep into that part of the lawsuit, but I can pretty much guarantee it has no chance either.

This lawsuit is performative nonsense for the lawyer representing the families. I’m not going to question the families for going along with it, but the lawyer is doing this to raise his own profile, and we won’t oblige by naming him here. He’s taking the families for a ride. This is a ridiculous lawsuit, and the lawyer should be ashamed of giving the families false hope by bringing such a nuisance suit for his own ego and fame.

The lawsuit is 115 pages, and I’m not going through the whole thing. It has 19 different claims, though nearly all of them are variations on the “negligence” concept. Despite this being on behalf of families in Uvalde, Texas, it was filed in the Superior Court of Los Angeles. This is almost certainly because this silly “negligence” theory has actually been given life in this very court by some very silly judges.

I still think those other cases will eventually fail. But because judges in the LA Superior Court seem willing to entertain the idea that any random connection you can find to a harm must go through a full, lengthy litigation process, we’re going to see a lot more bullshit lawsuits like this, as lawyers keep bringing them, hoping either for a settlement fee to make the case go away or for an even dumber judge who actually finds this ridiculous legal theory legitimate.

Section 230 was designed to get these frivolous cases tossed out, but the success of the “negligence” theory means that we’re getting a glimpse of how stupid the world looks without Section 230. Just claim “negligence” because someone who did something bad uses Instagram or plays Call of Duty, and you get to drag out the case. But, really, this is obviously frivolous nonsense:

To put a finer point on it: Defendants are chewing up alienated teenage boys and spitting out mass shooters. Before the Uvalde school shooter, there was the Parkland school shooter, and before him, the Sandy Hook school shooter. These were the three most deadly K-12 school shootings in American history. In each one, the shooter was between the ages of 18 and 21 years old; in each one, the shooter was a devoted player of Call of Duty; and in each one, the shooter committed their attack in tactical gear, wielding an assault rifle.

Multiple studies have debunked this nonsense. Millions of people play Call of Duty. The fact that a few teenage boys later shoot up schools cannot, in any way, be pinned to Call of Duty.

And why Meta? Well, that’s even dumber:

Simultaneously, on Instagram, the Shooter was being courted through explicit, aggressive marketing. In addition to hundreds of images depicting and glorifying the thrill of combat, Daniel Defense used Instagram to extol the illegal, murderous use of its weapons.

In one image of soldiers on patrol, with no animal in sight, the caption reads: “Hunters Hunt.” Another advertisement shows a Daniel Defense rifle equipped with a holographic battle sight—the same brand used by the Shooter—and dubs the configuration “totally murdered out.” Yet another depicts the view through a rifle’s scope, looking down from a rooftop; the setting looks like an urban American street and the windshield of a parked car is in the crosshairs.

That’s it.

Literally. They’re suing Meta because the shooter saw some perfectly legal images on Instagram from a gun manufacturer. And somehow that should make Meta liable for the shooting? How the fuck is Meta supposed to prevent that? This is a nonsense connection cooked up by an ambulance chasing plaintiff’s lawyer who should be embarrassed for dragging the families of the victims through such a charade.

This is nothing but another Steve Dallas lawsuit, as we’ve dubbed such ridiculous suits. The name comes from the classic Bloom County comic strip, in which Steve Dallas explains that the “American Way” is not to sue those actually responsible, but some tangentially related company with deep pockets.


It’s been nearly 40 years since that strip was published, and we still see those kinds of lawsuits, now with increasing frequency thanks to very silly judges in very silly courts allowing obnoxious lawyers to try to make a name for themselves.

Filed Under: blame, call of duty, intermediary liability, liability, moral panic, steve dallas, uvalde, video games
Companies: activision, daniel defense, instagram, meta

Congress Wants A Magic Pony: Get Rid Of Section 230, Perfect Moderation, And Only Nice People Allowed Online

from the the-land-of-magical-thinking dept

The internet is the wild west! Kids are dying! AI is scary and bad! Algorithms! Addiction! If only there was more liability and we could sue more often, internet companies would easily fix everything. Once, an AI read my mind, and it’s scary. No one would ever bring a vexatious lawsuit ever. Wild west! The “like” button is addictive and we should be able to sue over it.

Okay, you’re basically now caught up with the key points raised in yesterday’s House Energy & Commerce hearing on sunsetting Section 230. If you want to watch the nearly three hours of testimony, you can do so here, though I wouldn’t recommend it.

It went like most hearings about the internet, where members of Congress spend all their time publicly displaying their ignorance and confusion about how basically everything works.

But the basic summary is that people are mad about “bad stuff” on the internet, and lots of people seem to falsely think that if there were more lawsuits, internet companies would magically make bad stuff disappear. That, of course, elides all sorts of important details, nuances, tradeoffs, and more.

First of all, bad stuff did not begin with the internet. Blaming internet companies for not magically making bad stuff disappear is an easy out for moralizing politicians.

The two witnesses pushing for sunsetting Section 230 talked about how some people were ending up in harmful scenarios over and over again. They talked about the fact that this meant that companies were negligent and clearly “not doing enough.” They falsely insisted that there were no other incentives for companies to invest in tools and people to improve safety on platforms, ignoring the simple reality that if your platform is synonymous with bad stuff happening, it’s bad for business.

User growth slows, advertisers go away. If you’re an app, Apple or Google may ban you. The media trashes you. There are tons of incentives out there for companies to deal with dangerous things on their platforms, which neither the “pro-sunset” witnesses nor the congressional reps seemed willing to acknowledge.

But the simple reality is that no matter how many resources and tools are put towards protecting people, some people are going to do bad things or be put in unsafe positions. That’s humanity. That’s society. Thinking that if we magically threaten to sue companies that it will fix things is not just silly, it’s wrong.

The witnesses in favor of sunsetting 230 also tried to play this game. They insisted that frivolous lawsuits would never be filed because that would be against legal ethics rules (Ha!), while also insisting that they need to get discovery from companies to be able to prove that their cases aren’t frivolous. This, of course, ignores the fact that merely the threat of litigation can lead companies to fold. If the threat includes the extraordinarily expensive and time consuming (and soul-destroying) process of discovery, it can be absolutely ruinous for companies.

Thankfully, this time, there was one witness who was there who could speak up about that: Kate Tummarello from Engine (disclosure: we’ve worked with Kate and Engine in the past to create our Startup Trail startup policy simulation game and Moderator Mayhem, detailing the challenges of content moderation, both of which demonstrate why the arguments from those pushing for sunsetting 230 are disconnected from reality).

Kate’s written testimony is incredibly thorough. Her spoken testimony (not found in her written testimony, but can be seen in the video at around 34:45) was incredibly moving. She spoke from the heart about a very personal situation she faced in losing a pregnancy at 22 weeks and relying on online forums and groups to survive the “emotional trauma” of such a situation. And, especially at a time when there is a very strong effort to criminalize aspects of women’s health care, the very existence of such communities online can be a real risk and liability.

The other witnesses and the reps asking questions just kept prattling on about “harms” that had to be stopped online, without really acknowledging that about half the panel would consider the groups Kate relied on through one of the most difficult moments in her life to be exactly the kind of “harm” that should carry liability, allowing people to sue whoever hosts or runs such groups.

It’s clear that the general narrative of the “techlash” has taken all of the oxygen out of the room, disallowing thoughtful or nuanced conversations on the matter.

But what became clear at this hearing, yet again, is that Democrats think (falsely) that removing Section 230 will lead to some magic wonderland where internet companies remove “bad” information, like election denials, disinformation, and eating disorder content, but leave up “good” information, like information about abortions, voting info, and news. While Republicans think (falsely) that removing Section 230 will let their supporters post racial slurs without consequence, but encourage social media companies to remove “pro-terrorist” content and sex trafficking.

Oh, and also, AI is bad and scary and will kill us all. Also, big tech is evil.

The reality is a lot more complicated. AI tools are actually incredibly important in enabling good trust & safety practices that help limit access to truly damaging content and raise up more useful and important content. Removing Section 230 won’t make companies any better at stopping bad people from being bad, or at stopping things like “cyberbullying.” This came up a lot in the discussion, even as at least one rep got the kid safety witness on the panel to finally admit that most cyberbullying doesn’t violate any law and is protected under the First Amendment.

Removing Section 230 would give people a kind of litigator’s veto. If you threaten a lawsuit over a feature, some content, or an algorithm recommendation you don’t like, smaller companies will feel pressured to remove it to avoid the risk of costly endless litigation.

It wouldn’t do much to harm “big tech,” though, since they have buildings full of lawyers and large trust & safety teams empowered by tools they spend hundreds of millions of dollars developing. They can handle the litigation. It’s everyone else who suffers. The smaller sites. The decentralized social media sites. The small forums. The communities that are so necessary to folks like Kate when she faced her own tragic situation.

But none of that seemed to matter much to Congress, which just wants to enable ambulance-chasing lawyers to sue Google and Meta. They heard a story about a kid who had an eating disorder, and they’re sure it’s because Instagram pushed the kid into it. It’s not realistic.

The real victims of this rush to sunset Section 230 will be all the people like Kate, along with the tons of kids looking for their community or using the internet to deal with the various challenges they face.

Congress wants a magic pony. And, in the process, they’re going to do a ton of harm. Magic ponies don’t exist. Congress should deal in the land of reality.

Filed Under: algorithms, congress, content moderation, house energy & commerce committee, Kate Tummarello, liability, section 230

Raging Ignorantly At The Internet Fixes Nothing

from the old-men-of-media dept

Jann Wenner, the creator of Rolling Stone magazine, was certainly an early supporter of free speech. But he seems to have reached grumpy old man status, which apparently allows him to whine about free speech online, mostly by not knowing shit about anything.

Writing for Air Mail, a publication by Graydon Carter (another Grumpy Old Man of Media™), Wenner has a facts-optional screed about Section 230, which has done more for free speech than Wenner ever did.

First off, Wenner gets the purpose and history of Section 230 backwards. Like exactly 100% backwards. This is the kind of thing any fact checker would catch, but who needs fact checkers here?

The original conceit behind Section 230 was that tech companies were not publishers of content but merely providers of “pipes”—innocent high-tech plumbers!—and so should be treated like Con Edison or the telephone company. In reality, they are pipes, publishers, monopoly capitalists, spy networks, and a whole lot more all bundled together.

This is literally the opposite of the “conceit behind Section 230.” The entire conceit was that they are publishers, but because they’re publishers that allow anyone to publish and (mostly) only do ex post moderation, it made no sense at all to hold them liable as traditional publishers, who review everything ex ante.

It has never meant that the service providers were intended to be mere “pipes” or neutral conduits. Indeed, it was always intended to be the exact opposite. The whole intent of Section 230, in the words of its authors, was to enable sites to moderate more easily, by knowing they wouldn’t be liable for what they missed.

This is well known to anyone (i.e., not Wenner or anyone who edited his work at Air Mail) who understands the history of Section 230: it was written in response to the ruling in Stratton Oakmont v. Prodigy. In that case, a judge found that, because Prodigy moderated (in an effort to keep its forums “family friendly”), it could be held liable for anything that remained up.

Reps. Chris Cox and Ron Wyden realized what a mess that would be for free speech. What internet service would be willing to freely host speech if they could get sued for anything they allowed to be posted, even if no one reviewed it? Thus, Section 230 was written to enable websites to (1) host content freely without having to carefully review each and every piece of content and (2) moderate freely, without facing liability for both the decisions and non-decisions.

Note that nothing in there has anything to do with being a “pipe.”

This is why Section 230 has been such a boon for actual free speech. It makes it possible for internet services to host third-party speech without having to review every bit of content for legal liability, while still retaining the necessary editorial control to remove what they dislike, without facing liability.

To be fair, Wenner is not the only one to make this false assumption. Starting about six years ago, it became common in mostly ignorant MAGA circles to assume that this was what Section 230 meant. I believe Senator Ted Cruz was the first to make this category error, but it has been repeated many times since then.

That’s no excuse for Wenner to repeat the mistake, though.

Unfortunately, the rest of the article is almost entirely based off of this false premise, and thus makes no sense at all.

Wenner uses this framing to suggest that Section 230 unfairly benefited the internet over magazines. He claims that magazines were bastions of free speech because they could be liable for defamation.

Under the banner of the First Amendment, I sent out my warriors—Hunter S. Thompson, Richard N. Goodwin, P. J. O’Rourke, William Greider, and Matt Taibbi, among others—to cover national affairs and a dozen presidential elections. They took wild liberties in their prose, but we diligently observed the rules about truth and malice—the price of free speech.

In my 50 years leading Rolling Stone, I had one major journalistic failure, when we neglected to rigorously vet a story about an alleged sexual assault at a University of Virginia fraternity house in a long investigative piece on the broader epidemic of rape on college campuses. It was singularly my worst moment as an editor.

Around this time, the rapidly declining profitability of Rolling Stone and the magazine industry as a whole made it clear that the Internet platforms had become a threat, against which we were helpless. They had limitless resources, technical skills beyond our understanding—and governmental exemption from obeying the libel laws.

But again, Wenner is fundamentally confused. The issue on the internet is not that defamation law no longer applies; it’s just a question of who it applies to. The rule for both the internet and paper publications is that the party actually responsible for the violative statement is the one held liable.

No one sued a library that carried Rolling Stone over the defamatory UVA rape story. They sued Rolling Stone itself. If someone defames someone on the internet, it’s the person actually responsible for the speech who gets sued, rather than the “library” that hosts the content. And a physical library is not a “pipe” either. It still retains the editorial freedom to determine which works to allow, or not allow, on its shelves.

Wenner then bizarrely and legally illiterately tries to discuss the NetChoice cases before the Supreme Court. He notes that with the laws in Texas and Florida limiting moderation, the folks in those states fear that platforms will “disproportionately squelch conservative speech,” and claims that this is true (even though studies actually have shown the reverse — who needs facts?).

Wenner suggests that Section 230 “must now be thrown out,” without realizing how massively this would attack free speech online. He implicitly seems to recognize that social media would die under this plan, but doesn’t understand what would actually happen:

So how can we keep the social-media pipes open while enforcing rules against libel and willful disregard of the truth? Let people publish their sickest fantasy or The Protocols of the Elders of Zion on their own blogs and Web pages. (They can’t hide behind Section 230.)

I mean, sure, it would be great if more people posted content to their own blogs and websites, but without 230, who will host those blogs and websites, Jann? The only reason we have web hosting companies and cloud services is that the hosts of that content know they’re protected thanks to Section 230. They’re “the library” in this scenario.

So if you do away with Section 230, hosting companies will also disappear, and it will be a lot harder to have your own blog or website.

Wenner seems to think he’s got the path around that idea, but again, he doesn’t seem to understand what he’s talking about. He suggests that maybe 230 should disappear only for algorithmically recommended content:

But when commercial public platforms decide to recirculate, amplify, and empower such stuff by algorithm or human selection, then they must obey the law, whatever the cost, like every other publisher in the U.S.

This idea is often raised by people who have never actually had to think through the details and tradeoffs of these solutions. First, it would make search engines ridiculously dangerous to run. Any algorithmic recommendation could face liability? Search engines are recommendation algorithms.

Similarly, the main purpose of these algorithms these days is to diminish the reach of the people posting “their sickest fantasy or The Protocols of the Elders of Zion.” Without algorithmic recommendations, people see more disinformation and other false information. Rather than improving access to truthful information, getting rid of the algorithm makes that more difficult.

Section 230’s theoretical purpose—to shelter the Internet in its infancy—was served long ago. Today, the Captains of the Internet hardly need shelter.

Again, that’s not its purpose, theoretical or otherwise. Its purpose was to make sure that companies would be willing to host speech at all. And we still need that today or else we end up with way less speech.

(Along with the gun industry, tech is one of the few businesses in America that has immunity from product-safety laws.)

This is just false. Tech remains very, very much liable under product-safety laws. The only area where internet (not “tech”) sites are immune is liability based on third-party speech. Wenner’s problem (among many other misunderstandings) is that he doesn’t understand the difference between first-party and third-party speech.

Just as magazine publishers for years have absorbed proportionately higher expenses for research, fact-checking, and libel review in their operating budgets, the tech companies can foot the bill to monitor their legal risk.

But, Jann, you just said that without 230 everyone would be forced to publish on their own blogs and web pages. They wouldn’t be posting on social media any more because without Section 230, social media companies aren’t going to be open to hosting such content anymore.

Also, Section 230 protects all websites from liability for third-party content, not just Google and Meta. Perhaps those two can afford the expenses described here, but every other website and service provider could not. Smaller communities would immediately shut down. If anything, it would give more power to the largest tech companies and make it impossible for smaller ones to crop up.

And maybe, just maybe, a fact checker would have caught this bit of nonsense as well:

The F.C.C. was established in part to allocate what were once thought to be finite broadcast wavelengths (a purpose long since made moot by technological innovation), and for decades the government has overseen the broadcast industry, issuing renewable licenses and requiring public-affairs content. It’s time for the F.T.C., which supposedly regulates Silicon Valley, to catch up with the 21st century.

It’s not even clear what he’s asking the FTC to do here. Yes, the FCC was established to handle the licensing of scarce public spectrum. But, as Wenner notes, that doesn’t apply to the internet, where there isn’t a limit on space. So what does he want the FTC to do here? Create licenses? That would violate the First Amendment he claims to hold so dear at the opening of the article.

This just seems to be blind rage at successful internet companies for being successful.

He makes that clear in the very next paragraph:

But the press has been decimated by the tech companies. They essentially stole the intellectual property we created with little in the way of compensation, re-purposing it as free giveaway content and selling it at massive discounts to our advertisers.

I often hear this from media folks, but it’s utter nonsense. What was “stolen”? The content was not taken or repurposed in any way. Media orgs themselves chose (for good reason!) to move online, and all the successful internet companies did was provide free distribution. They provided links, which drove traffic. If Wenner was so bad at running a media org that he couldn’t take that free distribution and circulation and make money off of it, that’s on him. Not the internet companies.

If you misdiagnose the problem, you come up with dumb solutions. This whole piece is just misguided, mistargeted rage from a media old-timer, past his prime, who is mad at the new thing without understanding why.

I get that there are many reasons to be mad about elements of the internet. But Wenner is raging ignorantly, mistaking the underlying factors, misstating the nature of the law, and therefore offering “solutions” that make matters worse, not better.

For all his glorified nostalgia for the “heydays of American Journalism” through “spirited, sane, and fact-based” journalism, his column here is none of those.

Filed Under: 1st amendment, defamation, free speech, jann wenner, liability, section 230

Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability?

from the zuckerman-v.-zuckerberg dept

There’s a fascinating new lawsuit against Meta that includes a surprisingly novel interpretation of Section 230. If the court buys it, this interpretation could make the open web a lot more open, while chipping away at the centralized control of the biggest tech companies. And, yes, that could mean that the law (Section 230) that is wrongly called “a gift to big tech” might be a tool that undermines the dominance of some of those companies. But the lawsuit could be tripped up for any number of reasons, including a potentially consequential typo in the law that has been ignored for years.

Buckle in, this is a bit of a wild ride.

You would think with how much attention has been paid to Section 230 over the last few years (there’s an entire excellent book about it!), and how short the law is, that there would be little happening with the existing law that would take me by surprise. But the new Zuckerman v. Meta case filed on behalf of Ethan Zuckerman by the Knight First Amendment Institute has got my attention.

It’s presenting a fairly novel argument about a part of Section 230 that almost never comes up in lawsuits, but could create an interesting opportunity to enable all kinds of adversarial interoperability and middleware to do interesting (and hopefully useful) things that the big platforms have been using legal threats to shut down.

If the argument works, it may reveal a surprising and fascinating trojan horse for a more open internet, hidden in Section 230 for the past 28 years without anyone noticing.

Of course, it could also have much wider ramifications that a bunch of folks need to start thinking through. This is the kind of thing that happens when someone discovers something new in a law that no one really noticed before.

But there’s also a very good chance this lawsuit flops for a variety of other reasons without ever really exploring the nature of this possible trojan horse. There are a wide variety of possible outcomes here.

But first, some background.

For years, we’ve talked about the importance of tools and systems that give end users more control over their own experiences online, rather than leaving it entirely up to the centralized website owners. This has come up in a variety of different contexts in different ways, from “Protocols, not Platforms” to “adversarial interoperability,” to “magic APIs” to “middleware.” These are not all exactly the same thing, but they’re all directionally strongly related, and conceivably could work well together in interesting ways.

But there are always questions about how to get there, and what might stand in the way. One of the biggest things standing in the way over the last decade or so has been interpretations of various laws that effectively allow social media companies to threaten and/or bring lawsuits against companies trying to provide these kinds of additional services. This can take the form of a DMCA 1201 claim for “circumventing” a technological block. Or, more commonly, it has taken the form of a civil claim under the Computer Fraud & Abuse Act (CFAA).

The most representative example of where this goes wrong is when Facebook sued Power Ventures years ago. Power was trying to build a unified dashboard across multiple social media properties. Users could provide Power with their own logins to social media sites. This would allow Power to log in to retrieve and post data, so that someone could interact with their Facebook community without having to personally go into Facebook.

This was a potentially powerful tool in limiting Facebook’s ability to become a walled-off garden with too much power. And Facebook realized that too. That’s why it sued Power, claiming that it violated the CFAA’s prohibition on “unauthorized access.”

The CFAA was designed (poorly and vaguely) as an “anti-hacking” law. And you can see where “unauthorized access” could happen as a result of hacking. But Facebook (and others) have claimed that “unauthorized access” can also be “because we don’t want you to do that with your own login.”

And the courts have agreed with Facebook’s interpretation, with a few limitations (that don’t make that big of a difference).

I still believe that this ability to use the law to block interoperability and middleware has been a major (perhaps the biggest) reason “big tech” is so big. They’re able to use these laws to block out the kinds of companies that would make the market more competitive and pull down some of the walls of their walled gardens.

That brings us to this lawsuit.

Ethan Zuckerman has spent years trying to make the internet a better, more open space (partially, I think, in penance for creating the world’s first pop-up internet ad). He’s been doing some amazing work on reimagining the digital public infrastructure, which I keep meaning to write about, but never quite find the time to get to.

According to the lawsuit, he wants to build a tool called “Unfollow Everything 2.0.” The tool is based on a similar tool, also called Unfollow Everything, that was built by Louis Barclay a few years ago and did what it says on the tin: let you automatically unfollow everything on Facebook. Facebook sent Barclay a legal threat letter and banned him for life from the site.

Zuckerman wants to recreate the tool with some added features enabling users to opt-in to provide some data to researchers about the impact of not following anyone on social media. But he’s concerned that he’d face legal threats from Meta, given what happened with Barclay.

Using Unfollow Everything 2.0, Professor Zuckerman plans to conduct an academic research study of how turning off the newsfeed affects users’ Facebook experience. The study is opt-in—users may use the tool without participating in the study. Those who choose to participate will donate limited and anonymized data about their Facebook usage. The purpose of the study is to generate insights into the impact of the newsfeed on user behavior and well-being: for example, how does accessing Facebook without the newsfeed change users’ experience? Do users experience Facebook as less “addictive”? Do they spend less time on the platform? Do they encounter a greater variety of other users on the platform? Answering these questions will help Professor Zuckerman, his team, and the public better understand user behavior online and the influence that platform design has on that behavior

The tool and study are nearly ready to launch. But Professor Zuckerman has not launched them because of the near certainty that Meta will pursue legal action against him for doing so.

So he’s suing for declaratory judgment that he’s not violating any laws. If he were just suing for declaratory judgment over the CFAA, that would (maybe?) be somewhat understandable or conventional. But, while that argument is in the lawsuit, the main claim in the case is something very, very different. It’s using a part of Section 230, section (c)(2)(B), that almost never gets mentioned, let alone tested.

Most Section 230 lawsuits involve (c)(1): the famed “26 words” that state “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Some Section 230 cases involve (c)(2)(A) which states that “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many people incorrectly think that Section 230 cases turn on this part of the law, when really, much of those cases are already cut off by (c)(1) because they try to treat a service as a speaker or publisher.

But then there’s (c)(2)(B), which says:

No provider or user of an interactive computer service shall be held liable on account of any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)

As noted, this basically never comes up in cases. But the argument being made here is that this creates some sort of proactive immunity from lawsuits for middleware creators who are building tools (“technical means”) to “restrict access.” In short: does Section 230 protect “Unfollow Everything” from basically any legal threats from Meta, because it’s building a tool to restrict access to content on Meta platforms?

Or, according to the lawsuit:

This provision would immunize Professor Zuckerman from civil liability for designing, releasing, and operating Unfollow Everything 2.0

First, in operating Unfollow Everything 2.0, Professor Zuckerman would qualify as a “provider . . . of an interactive computer service.” The CDA defines the term “interactive computer service” to include, among other things, an “access software provider that provides or enables computer access by multiple users to a computer server,” id. § 230(f)(2), and it defines the term “access software provider” to include providers of software and tools used to “filter, screen, allow, or disallow content.” Professor Zuckerman would qualify as an “access software provider” because Unfollow Everything 2.0 enables the filtering of Facebook content—namely, posts that would otherwise appear in the feed on a user’s homepage. And he would “provide[] or enable[] computer access by multiple users to a computer server” by allowing users who download Unfollow Everything 2.0 to automatically unfollow and re-follow friends, groups, and pages; by allowing users who opt into the research study to voluntarily donate certain data for research purposes; and by offering online updates to the tool.

Second, Unfollow Everything 2.0 would enable Facebook users who download it to restrict access to material they (and Zuckerman) find “objectionable.” Id. § 230(c)(2)(A). The purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed.

I’ve been talking to a pretty long list of lawyers about this and I’m somewhat amazed at how this seems to have taken everyone by surprise. Normally, when new lawsuits come out, I’ll gut check my take on it with a few lawyers and they’ll all agree with each other whether I’m heading in the right direction or the totally wrong direction. But here… the reactions were all over the map, and not in any discernible pattern. More than one person I spoke to started by suggesting that this was a totally crazy legal theory, only to later come back and say “well, maybe it actually makes some sense.”

It could be a trojan horse that no one noticed in Section 230 that effectively bars websites from taking legal action against middleware providers who are providing technical means for people to filter or screen content on their feed. Now, it’s important to note that it does not bar those companies from putting in place technical measures to block such tools, or just banning accounts or whatever. But that’s very different from threatening or filing civil suits.

If this theory works, it could do a lot to enable these kinds of middleware services and make it significantly harder for big social media companies like Meta to stop them. If you believe in adversarial interoperability, that could be a very big deal. Like, “shift the future of the internet we all use” kind of big.

Now, there are many hurdles before we get to that point. And there are some concerns that if this legal theory succeeds, it could also lead to other problematic results (though I’m less convinced by those).

Let’s start with the legal concerns.

First, as noted, this is a very novel and untested legal theory. Upon reading the case initially, my first reaction was that it felt like one of those slightly wacky academic law journal articles you see law professors write sometimes, with some far-out theory they have that no one’s ever really thought about. This one is in the form of a lawsuit, so at some point we’ll find out how the theory works.

But that alone might make a judge unwilling to go down this path.

Then there are some more practical concerns. Is there even standing here? ¯\_(ツ)_/¯ Zuckerman hasn’t released his tool. Meta hasn’t threatened him. He makes a credible claim that given Meta’s past actions, they’re likely to react unfavorably, but is that enough to get standing?

Then there’s the question of whether or not you can even make use of 230 in an affirmative way like this. 230 is used as a defense to get cases thrown out, not proactively for declaratory judgment.

Also, this is not my area of expertise by any stretch of the imagination, but I remember hearing in the past that outside of IP law, courts (and especially courts in the 9th Circuit) absolutely disfavor lawsuits for declaratory judgment (i.e., a lawsuit before there’s any controversy, where you ask the court “hey, can you just check and make sure I’m on the right side of the law here…”). So I could totally see the judge saying “sorry, this is not a proper use of our time” and tossing it. In fact, that might be the most likely result.

Then there’s this kinda funny but possibly consequential issue: there’s a typo in Section 230 that almost everyone has ignored for years. Because it’s never really mattered. Except it matters in this case. Jeff Kosseff, the author of the book on Section 230, always likes to highlight that in (c)(2)(B), it says that the immunity is for using “the technical means to restrict access to material described in paragraph (1).”

But they don’t mean “paragraph (1).” They mean “paragraph (A).” Paragraph (1) is the “26 words” and does not describe any material, so it would make no sense to say “material described in paragraph (1).” It almost certainly means “paragraph (A),” which is the “good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” section. That’s the one that describes material.

I know that, at times, Jeff has joked that when people ask him how 230 should be reformed, he suggests they fix the typo. But Congress has never listened.

And now it might matter?

The lawsuit basically pretends that the typo isn’t there, reading the law as if it said “paragraph (A)” where it actually says “paragraph (1).”

I don’t know how that gets handled. Perhaps it gets ignored like every time Jeff points out the typo? Perhaps it becomes consequential? Who knows!

There are a few other oddities here, but this article is getting long enough and has mostly covered the important points. However, I will conclude with one other point that one of the people I spoke to raised. As discussed above, Meta has spent most of the past dozen or so years going legally ballistic over anyone trying to scrape or data mine its properties in any way.

Yet, earlier this year, it somewhat surprisingly bailed out on a case where it had sued Bright Data for scraping/data mining. Lawyer Kieran McCarthy (who follows data scraping lawsuits like no one else) speculated that Meta’s surprising about-face may be because it suddenly realized that for all of its AI efforts, it’s been scraping everyone else. And maybe someone high up at Meta suddenly realized how it was going to look in court when it got sued for all the AI training scraping, if the plaintiffs point out that at the very same time it was suing others for scraping its properties.

For me, I suspect the decision not to appeal might be more about a shift in philosophy by Meta and perhaps some of the other big platforms than it is about their confidence in their ability to win this case. Today, perhaps more important to Meta than keeping others off their public data is having access to everyone else’s public data. Meta is concerned that their perceived hypocrisy on these issues might just work against them. Just last month, Meta had its success in prior scraping cases thrown back in their face in a trespass to chattels case. Perhaps they were worried here that success on appeal might do them more harm than good.

In short, I think Meta cares more about access to large volumes of data and AI than it does about outsiders scraping their public data now. My hunch is that they know that any success in anti-scraping cases can be thrown back at them in their own attempts to build AI training databases and LLMs. And they care more about the latter than the former.

I’ve separately spoken to a few experts who were worried about the consequences if Zuckerman succeeded here. They were worried that it might simultaneously immunize potential bad actors. Specifically, you could see a kind of Cambridge Analytica or Clearview AI situation, where companies trying to get access to data for malign purposes convince people to install their middleware app. This could lead to a massive expropriation of data, and possibly some very sketchy services as a result.

But I’m less worried about that, mainly because it’s the sketchy ways that data would eventually be used that would still (hopefully?) violate certain laws, not the access to the data itself. Still, there are some real questions about how this type of more proactive immunity might end up shielding bad actors, and those questions are at least worth thinking about.

Either way, this is going to be a case worth following.

Filed Under: adversarial interoperability, competition, copyright, dmca 1201, ethan zuckerman, liability, section 230, unfollow everyone
Companies: meta

Our Online Child Abuse Reporting System Is Overwhelmed, Because The Incentives Are Screwed Up & No One Seems To Be Able To Fix Them

from the mismatched-incentives-are-the-root-of-all-problems dept

The system meant to stop online child exploitation is failing — and misaligned incentives are to blame. Unfortunately, today’s political solutions, like KOSA and STOP CSAM, don’t even begin to grapple with any of this. Instead, they prefer to put in place solutions that could make the incentives even worse.

The Stanford Internet Observatory has spent the last few months doing a very deep dive on how the CyberTipline works (and where it struggles). It has released a big and important report detailing its findings. In writing up this post about it, I kept adding more and more, to the point that I finally decided it made sense to split it up into two separate posts to keep things manageable.

This first post covers the higher-level issue: what the system is, why it works the way it does, and how the incentive structure of the system, even if built with good intentions, is completely messed up and has contributed to the problem. A follow-up post will cover the more specific challenges facing NCMEC itself, law enforcement, and the internet platforms themselves (which often take the blame for CSAM, when that seems extremely misguided).

There is a lot of misinformation out there about the best way to fight and stop the creation and spread of child sexual abuse material (CSAM). It’s unfortunate because it’s a very real and very serious problem. Yet the discussion about it is often so disconnected from reality as to be not just unhelpful, but potentially harmful.

In the US, the system that was set up is the CyberTipline, which is run by NCMEC, the National Center for Missing & Exploited Children. It’s a private non-profit, but it has a close connection to the US government, which helped create it. At times, there has been some confusion about whether or not NCMEC is a government agent. The entire setup was designed to keep it non-governmental, to avoid any 4th Amendment issues with the information it collects, but courts haven’t always seen it that way, which makes things tricky (even as the 4th Amendment is important).

And while the system was designed for the US, it has become a de facto global system, since so many of the companies are US-based, and NCMEC will, when it can, send relevant details to foreign law enforcement as well (though, as the report details, that doesn’t always work well).

The main role CyberTipline has taken on is coordination. It takes in reports of CSAM (mostly, but not entirely, from internet platforms) and then, when relevant, hands off the necessary details to the (hopefully) correct law enforcement agency to handle things.

Companies that host user-generated content have certain legal requirements to report CSAM to the CyberTipline. As we discussed in a recent podcast, this role as a “mandatory reporter” is important in providing useful information to allow law enforcement to step in and actually stop abusive behavior. Because of the “government agent” issue, it would be unconstitutional to require social media platforms to proactively search for and identify CSAM (though many do use tools to do this). However, if they do find some, they must report it.
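To make that structure concrete, here is a rough sketch in code, under my own framing (not the report’s, and not statutory language): scanning is the platform’s choice, but reporting on discovery is not. The function and field names are invented purely for illustration.

```python
# A rough illustration of the obligation described above: proactive scanning is
# optional for the platform, but reporting is mandatory once apparent CSAM is found.
# All names here are invented; this is not legal language.

def handle_uploaded_content(content_id: str, platform_scans: bool, flagged_as_csam: bool) -> str:
    if not platform_scans:
        # Platforms cannot be required to proactively search (the government-agent issue)...
        return "no action (platform has no knowledge of the content)"
    if flagged_as_csam:
        # ...but once they know about apparent CSAM, they must report it.
        return f"file CyberTipline report for {content_id}"
    return "no report needed"

print(handle_uploaded_content("post-123", platform_scans=True, flagged_as_csam=True))
```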

Unfortunately, the mandatory reporting has also allowed the media and politicians to use the number of reports sent in by social media companies in a misleading manner, suggesting that the mere fact that these companies find and report to NCMEC means that they’re not doing enough to stop CSAM on their platforms.

This is problematic because it creates a dangerous incentive, suggesting that internet services should actually not report the CSAM they find, since politicians and the media will falsely portray a high number of reports as a sign that the platforms aren’t taking this seriously. The reality is that the failure to take things seriously comes from the small number of platforms (Hi Telegram!) that don’t report CSAM at all.

Some of us on the outside have thought that the real issue was on the receiving end: NCMEC and law enforcement haven’t been able to take those reports and do enough that is productive with them. It seemed convenient for the media and politicians to just blame social media companies for doing what they’re supposed to do (reporting CSAM), ignoring that what happens on the back end of the system might be the real problem. That’s why things like Senator Ron Wyden’s Invest in Child Safety Act seemed like a better approach than things like KOSA or the STOP CSAM Act.

That’s because the approach of KOSA/STOP CSAM and some other bills is basically to add liability to social media companies. (These companies already do a ton to prevent CSAM from appearing on their platforms and alert law enforcement via the CyberTipline when they do find stuff.) But that’s useless if those receiving the reports aren’t able to do much with them.

What becomes clear from this report is that while there are absolutely failures on the law enforcement side, some of that is effectively baked into the incentive structure of the system.

In short, the report shows that the CyberTipline is very helpful in engaging law enforcement to stop some child sexual abuse, but it’s not as helpful as it might otherwise be:

Estimates of how many CyberTipline reports lead to arrests in the U.S. range from 5% to 7.6%

This number may sound low, but I’ve been told it’s not as bad as it sounds. First of all, when a large number of the reports are for content that is overseas and not in the US, it’s more difficult for law enforcement here to do much about it (though, again, the report details some suggestions on how to improve this). Second, some of the content may be very old, where the victim was identified years (or even decades) ago, and where there’s less that law enforcement can do today. Third, there is a question of prioritization, with it being a higher priority to target those directly abusing children. But, still, as the report notes, almost everyone thinks that the arrest number could go higher if there were more resources in place:

Empirically, it is unknown what percent of reports, if fully investigated, would lead to the discovery of a person conducting hands-on abuse of a child. On the one hand, as an employee of a U.S. federal department said, “Not all tips need to lead to prosecution […] it’s like a 911 system.” On the other hand, there is a sense from our respondents—who hold a wide array of beliefs about law enforcement—that this number should be higher. There is a perception that more than 5% of reports, if fully investigated, would lead to the discovery of hands-on abuse.

The report definitely suggests that if NCMEC had more resources dedicated to the CyberTipline, it could be more effective:

NCMEC has faced challenges in rapidly implementing technological improvements that would aid law enforcement in triage. NCMEC faces resource constraints that impact salaries, leading to difficulties in retaining personnel who are often poached by industry trust and safety teams.

There appear to be opportunities to enrich CyberTipline reports with external data that could help law enforcement more accurately triage tips, but NCMEC lacks sufficient technical staff to implement these infrastructure improvements in a timely manner. Data privacy concerns also affect the speed of this work.

But, before we get into the specific areas where things can be improved in the follow-up post, I thought it was important to highlight how the incentives of this system contribute to the problem, where there isn’t necessarily an easy solution.

While companies (Meta, mainly, since it accounts for, by a very wide margin, the largest number of reports to the CyberTipline) keep getting blamed for failing to stop CSAM because of their large report numbers, most companies have very strong incentives to report anything they find. The cost of not reporting something they should have reported is massive (criminal penalties), whereas the cost of over-reporting is, to the companies, nothing. The result is an over-reporting problem.

Of course, there is a real cost here. CyberTipline employees get overwhelmed, and that can mean that reports that should get prioritized and passed on to law enforcement don’t. So you can argue that while the cost of over-reporting is “nothing” to the companies, the cost to victims and society at large can be quite large.

That’s an important mismatch.
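To make the asymmetry concrete, here is a toy sketch. The numbers are purely hypothetical placeholders, not real penalties or costs, but they show why a rational platform reports anything even remotely borderline.

```python
# A toy model of the incentive asymmetry described above. All numbers are
# hypothetical placeholders, not real penalty or cost figures.

def expected_cost_to_platform(files_report: bool, prob_content_is_reportable: float) -> float:
    PENALTY_FOR_MISSED_REPORT = 1_000_000  # hypothetical legal exposure for failing to report
    COST_OF_FILING = 0                     # the platform bears essentially no cost for over-reporting

    if files_report:
        return COST_OF_FILING
    # Staying silent risks the penalty whenever the content really was reportable.
    return prob_content_is_reportable * PENALTY_FOR_MISSED_REPORT

# Even for borderline content (say, a 1% chance it is reportable), reporting wins:
print(expected_cost_to_platform(True, 0.01))   # 0
print(expected_cost_to_platform(False, 0.01))  # 10000.0
```

Even at a tiny probability that something is actually reportable, silence is the riskier choice for the platform, while the cost of the extra report lands on NCMEC, law enforcement, and, ultimately, victims.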

But the broken incentives go further as well. When NCMEC hands off reports to law enforcement, they often go through a local ICAC (Internet Crimes Against Children) task force, which helps triage them and find the right state or local law enforcement agency to handle the report. Law enforcement agencies that are “affiliated” with ICACs receive special training on how to handle reports from the CyberTipline. But, apparently, at least some of them feel that it’s just too much work, or (in some cases) too burdensome to investigate. So some law enforcement agencies are choosing not to affiliate with their local ICACs to avoid the added work, and, even worse, some that were affiliated have “unaffiliated” themselves because they just don’t want to deal with it.

In some cases, there are even reports of law enforcement unaffiliating with an ICAC out of a fear of facing liability for not investigating an abused child quickly enough.

A former Task Force officer described the barriers to training more local Task Force affiliates. In some cases local law enforcement perceive that becoming a Task Force affiliate is expensive, but in fact the training is free. In other cases local law enforcement are hesitant to become a Task Force affiliate because they will be sent CyberTipline reports to investigate, and they may already feel like they have enough on their plate. Still other Task Force affiliates may choose to unaffiliate, perceiving that the CyberTipline reports they were previously investigating will still get investigated at the Task Force, which further burdens the Task Force. Unaffiliating may also reduce fear of liability for failing to promptly investigate a report that would have led to the discovery of a child actively being abused, but the alternative is that the report may never be investigated at all.

[….]

This liability fear stems from a case where six months lapsed between the regional Task Force receiving NCMEC’s report and the city’s police department arresting a suspect (the abused children’s foster parent). In the interim, neither of the law enforcement agencies notified child protective services about the abuse as required by state law. The resulting lawsuit against the two police departments and the state was settled for $10.5 million. Rather than face expensive liability for failing to prioritize CyberTipline reports ahead of all other open cases, even homicide or missing children, the agency might instead opt to unaffiliate from the ICAC Task Force.

This is… infuriating. Cops choosing not to affiliate (i.e., to skip the training that would help them) or removing themselves from an ICAC task force because they’re afraid they might get sued for failing to save abused kids quickly enough is ridiculous. It’s yet another example of cops running away, rather than doing the job they’re supposed to be doing, but which they claim they have no obligation to do.

That’s just one problem of many in the report, which we’ll get into in the second post. But, on the whole, it seems pretty clear that with the incentives this far out of whack, something like KOSA or STOP CSAM isn’t going to be of much help. What’s necessary is actually tackling the underlying issues: the funding, the technology, and (most of all) the incentive structures.

Filed Under: csam, cybertipline, icac, incentives, kosa, law enforcement, liability, stop csam
Companies: ncmec

Sextortion Is A Real & Serious Criminal Issue; Blaming Section 230 For It Is Not

from the stay-focused-here dept

Let’s say I told you a harrowing story about a crime. Criminals from halfway around the world used fraudulent means and social engineering to scam a teenager, convincing the teen that their life had effectively been destroyed (at least in the teen’s own mind). The teen then took an easily accessible gun from their parent and shot and killed themselves. Law enforcement investigated the crime, tracked down the people responsible, extradited them to the US and tried them. Eventually, they were sentenced to many years in prison.

Who would you blame for such a thing?

Apparently, for some people, the answer is Section 230. And it makes no sense at all.

That, at least, is the takeaway from an otherwise harrowing, distressing, and fascinating article in Bloomberg Businessweek about the very real and very serious problem of sextortion.

The article is well worth reading, as it not only details the real (and growing) problem of sextortion, but shows how a momentary youthful indiscretion — coaxed by a skillful social engineer — can destroy someone’s life.

The numbers on sextortion are eye-opening:

It was early 2022 when analysts at the National Center for Missing & Exploited Children (NCMEC) noticed a frightening pattern. The US nonprofit has fielded online-exploitation cybertips since 1998, but it had never seen anything like this.

Hundreds of tips began flooding in from across the country, bucking the trend of typical exploitation cases. Usually, older male predators spend months grooming young girls into sending nude photos for their own sexual gratification. But in these new reports, teen boys were being catfished by individuals pretending to be teen girls—and they were sending the nude photos first. The extortion was rapid-fire, sometimes occurring within hours. And it wasn’t sexually motivated; the predators wanted money. The tips were coming from dozens of states, yet the blackmailers were all saying the same thing:

“I’m going to ruin your life.”

“I’m going to make it go viral.”

“Answer me quickly. Time is ticking.”

“I have what I need to destroy your life.”

As the article details, there is something of a pattern in many of these sextortion cases. There are even “training” videos floating around that teach scammers how to effectively social engineer the result: get control over an Instagram or Snapchat account of a young girl and start friending/flirting with teen boys.

After getting flirty enough, the scammer sends a fake nude and asks for one in return. Then, the second the teen boy does the teen boy thing and sends a compromising photo, the scammer goes straight into extortion mode, promising to ruin the boy’s life:

Around midnight, Dani got flirtatious. She told Jordan she liked “playing sexy games.” Then she sent him a naked photo and asked for one in return, a “sexy pic” with his face in it. Jordan walked down the hallway to the bathroom, pulled down his pants and took a selfie in the mirror. He hit send.

In an instant, the flirty teenage girl disappeared.

“I have screenshot all your followers and tags and can send this nudes to everyone and also send your nudes to your family and friends until it goes viral,” Dani wrote. “All you have to do is cooperate with me and I won’t expose you.”

Minutes later: “I got all I need rn to make your life miserable dude.”

As the article notes, this is part of the “playbook” that is used to teach the scammers:

The Yahoo Boys videos provided guidance on how to sound like an American girl (“I’m from Massachusetts. I just saw you on my friend’s suggestion and decided to follow you. I love reading, chilling with my friends and tennis”). They offered suggestions for how to keep the conversation flowing, how to turn it flirtatious and how to coerce the victim into sending a nude photo (“Pic exchange but with conditions”). Those conditions often included instructions that boys hold their genitals while “making a cute face” or take a photo in a mirror, face included.

Once that first nude image is sent, the script says, the game begins. “NOW BLACKMAIL 😀!!” it tells the scammer, advising they start with “hey, I have ur nudes and everything needed to ruin your life” or “hey this is the end of your life I am sending nudes to the world now.” Some of the blackmail scripts Raffile found had been viewed more than half a million times. One, called “Blackmailing format,” was uploaded to YouTube in September 2022 and got thousands of views. It included the same script that was sent to Jordan DeMay—down to the typos.

The article mostly focuses on the tragic case of one teen, DeMay, who shot himself very soon after getting hit with this scam. The article notes, just in passing, that DeMay had access to his father’s gun. Yet, somehow, guns and easy access to them are never mentioned as anything to be concerned about, even as the only two suicides mentioned in the article both involve teen boys who seemed to have unsupervised access to guns with which to shoot themselves.

Apparently, this is all the fault of Section 230 instead.

Hell, the article describes how this was a criminal case, and (somewhat amazingly!) the FBI tracked down the actual scammers in Nigeria, had them extradited to Michigan, and even got them to plead guilty to the crime (with a mandatory minimum of 15 years in prison). Yet, apparently, this is still… an internet problem?

The reality is that this is a criminal problem, and it’s appropriate to treat it as such, where law enforcement has to deal with it (as they did in this case).

It seems like there are many things to blame here: the criminals themselves (who are going to prison for many years), the easy access to guns, even the failure to teach kids to be careful with who they’re talking to or what to do if they got into trouble online. But, no, the article seems to think this is all Section 230’s fault.

DeMay’s family appears to have been suckered by a lawyer into suing Meta (the messages to him came via Instagram):

In January, Jordan’s parents filed a wrongful death lawsuit in a California state court accusing Meta of enabling and facilitating the crime. That month, John DeMay flew to Washington to attend the congressional hearing with social media executives. He sat in the gallery holding a picture of Jordan smiling in his red football jersey.

The DeMay case has been combined with more than 100 others in a group lawsuit in Los Angeles that alleges social media companies have harmed children by designing addictive products. The cases involve content sent to vulnerable teens about eating disorders, suicide and dangerous challenges leading to accidental deaths, as well as sextortion.

“The way these products are designed is what gives rise to these opportunistic murderers,” says Matthew Bergman, founder of the Seattle-based Social Media Victims Law Center, who’s representing Jordan’s parents. “They are able to exploit adolescent psychology, and they leverage Meta’s technology to do so.”

Except all of that is nonsense. Yes, sextortion is problematic, but what the fuck in the “design” of Instagram aids it? It’s a communication tool, like any other. In the past, people used phones and the mail service for extortion, and no one sued AT&T or the postal service because of it. It’s utter nonsense.

But Bloomberg runs with it and implies that Section 230 is somehow getting in the way here:

The lawsuits face a significant hurdle: overcoming Section 230 of the Communications Decency Act. This liability shield has long protected social media platforms from being held accountable for content posted on their sites by third parties. If Bergman’s product liability argument fails, Instagram won’t be held responsible for what the Ogoshi brothers said to Jordan DeMay.

Regardless of the legal outcome, Jordan’s parents want Meta to face the court of public opinion. “This isn’t my story, it’s his,” John DeMay says. “But unfortunately, we are the chosen ones to tell it. And I am going to keep telling it. When Mark Zuckerberg lays on his pillow at night, I guarantee he knows Jordan DeMay’s name. And if he doesn’t yet, he’s gonna.”

So here’s a kind of important question: how would this story have played out any differently in the absence of Section 230? What different thing would Mark Zuckerberg do? I mean, it’s possible that Facebook/Instagram wouldn’t really exist at all without such protections, but assuming they do, what legal liability would be on the platforms for this kind of thing happening?

The answer is nothing. For there to be any liability under the First Amendment, there would have to be evidence that Meta employees knew of the specific sextortion attempt against DeMay and did nothing to stop it. But that’s ridiculous.

Instagram has 2 billion users. What are the people bringing the lawsuit expecting Meta to do? To hire people to read every direct message going back and forth among users, spotting the ones that are sextortion, and magically stepping in to stop them? That’s not just silly, it’s impossible and ridiculously intrusive. Do you want Meta employees reading all your DMs?

Even more to the point, Section 230 is what allows Meta to experiment with better solutions to this kind of thing. For example, Meta has recently announced new tools to help fight sextortion by using nudity detectors to try to prevent kids from sending naked photos of themselves.
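As a purely hypothetical sketch of how that kind of safety feature might be wired up (this is not Meta’s actual implementation; the classifier output, fields, and threshold below are invented for illustration):

```python
# A hypothetical sketch of a client-side check: scan an outgoing image on-device
# and warn before sending. NOT Meta's actual implementation; the classifier,
# fields, and threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class OutgoingImage:
    sender_is_minor: bool
    nudity_score: float  # assumed output of an on-device classifier, 0.0 to 1.0

def should_warn_before_sending(img: OutgoingImage, threshold: float = 0.8) -> bool:
    # Only intervene for likely-nude images from minor accounts; the check never
    # requires the message content to leave the device.
    return img.sender_is_minor and img.nudity_score >= threshold

print(should_warn_before_sending(OutgoingImage(sender_is_minor=True, nudity_score=0.93)))  # True
```

The details don’t matter; the point is that this is a voluntary design experiment by the platform, which is exactly what the next paragraph gets at.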

Developing such a tool and providing such help would be riskier without Section 230, as it would be an “admission” that people use their tools to send nudes. But here, the company can experiment with providing better tools because of 230. The focus on blaming Section 230 is so incredibly misplaced that it’s embarrassing.

The criminals are actually responsible for the sextortion scam and its end results, along with, possibly, whoever made it so damn easy for the kid to get his father’s gun in the middle of the night to shoot himself. The “problem” here is not Section 230, and removing Section 230 wouldn’t change a damn thing. This lawsuit is nonsense, and sure, maybe it makes the family feel better to sue Meta, but just because a crime happened on Instagram doesn’t magically make it Instagram’s fault.

And that’s for good reason. As noted above, this was always a law enforcement situation. We shouldn’t ever want to turn private companies into law enforcement, because that would be an extremely dangerous result. Let Meta provide its communications tools. Let law enforcement investigate crimes and bring people to justice (as happened here). And maybe we should focus on better educating our kids to be aware of threats like sextortion and how to respond if they make a mistake and get caught up in one.

There’s lots of blame to go around here, but none of it belongs on Section 230.

Filed Under: blame, criminals, fbi, guns, jordan demay, law enforcement, liability, section 230, sextortion, yahoo boys
Companies: meta

Confused NY Court Says That Section 230 Doesn’t Block Ridiculous Lawsuit Blaming Social Media For Buffalo Shooter

from the the-blame-game dept

Can you imagine what kind of world we’d live in if you could blame random media companies for tangential relationships they had with anyone who ever did anything bad? What would happen if we could blame newspapers for inspiring crime? Or television shows for inspiring terrorism? The world would be a much duller place.

We’ve talked a lot about how the entire purpose of Section 230 of the Communications Decency Act is to put the liability on the right party. That is, it’s entirely about making sure the right party is being sued and avoiding wasting everyone’s time on suits against the wrong one, especially “Steve Dallas” type lawsuits, where you just sue some random company, tangentially connected to some sort of legal violation, because it has the deepest pockets.


Unfortunately, a judge in the NY Supreme Court (which, bizarrely, is NY’s lowest level of courts) has allowed just such a lawsuit to move forward. It was filed by the son of a victim of the racist dipshit who went into a Buffalo supermarket and shot and killed a bunch of people a couple years ago. That is, obviously, a very tragic situation. And I can certainly understand the search for someone to blame. But blaming “social media” because someone shot up a supermarket is ridiculous.

It’s exactly the kind of thing that Section 230 was designed to get tossed out of court quickly.

Of course, NY officials spent months passing the blame and pointing at social media companies. NY’s Governor and Attorney General wanted to deflect blame from the state’s massive failings in handling the situation. I mean, the shooter had made previous threats, and law enforcement in NY had been alerted to those threats and failed to stop him. He used a high-capacity magazine that was illegal in NY, and law enforcement failed to catch that too. And when people in the store called 911, the dispatcher didn’t believe them and hung up on them.

The government had lots of failings that aren’t being investigated, and lawsuits aren’t being filed over those. But, because the shooter also happened to be a racist piece of shit on social media, people want to believe we should be able to magically sue social media.

And, the court is allowing this based on a very incorrect understanding of Section 230. Specifically, the court has bought into the trendy, but ridiculous, “product liability” theory that is now allowing frivolous and vexatious plaintiffs across the country to use this “one weird trick” to get around Section 230. Just claim “the product was defective” and, boom, the court will let the case go forward.

That’s what happened here:

The social media/internet defendants may still prove their platforms were mere message boards and/or do not contain sophisticated algorithms thereby providing them with the protections of the CDA and/or First Amendment. In addition, they may yet establish their platforms are not products or that the negligent design features plaintiff has alleged are not part of their platforms. However, at this stage of the litigation the Court must base its ruling on the allegations of the complaint and not “facts” asserted by the defendants in their briefs or during oral argument and those allegations allege viable causes of action under a products liability theory.

Now, some might say “no big deal, the court says they can raise this issue again later,” but nearly all of the benefit of Section 230 is in how it gets these frivolous cases tossed early. Otherwise, the expense of these cases adds up and creates a real mess (along with the pressure to settle).

Also, the judge here seems very confused. Section 230’s protections are not limited to “mere message boards.” It protects any interactive computer service from being held liable for third-party speech. And whether or not a service “contains sophisticated algorithms” should have zero bearing on whether or not Section 230 applies.

The Section 230 test to see if it applies is quite simple: (1) Is this an interactive computer service? (2) Would holding them liable in this scenario be holding them liable for the speech of someone else? (3) Does this not fall into any of the limited exceptions to Section 230 (i.e., intellectual property law, trafficking, or federal criminal law)? That’s it. Whether or not you’re a message board or if you use algorithms has nothing to do with it.
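If it helps to see that structure laid bare, here is a hedged, pseudocode-style sketch of those three questions. Real cases obviously turn on facts and law, not booleans; this just makes the shape of the test explicit.

```python
# A sketch of the three-part screen described above, written purely to make the
# structure explicit. Not legal advice, and not how any court writes its analysis.

def section_230_applies(
    is_interactive_computer_service: bool,
    liability_is_for_third_party_speech: bool,
    falls_under_exception: bool,  # e.g., IP law, trafficking, federal criminal law
) -> bool:
    return (
        is_interactive_computer_service
        and liability_is_for_third_party_speech
        and not falls_under_exception
    )

# Note what never enters the analysis: "message board" status or "sophisticated algorithms."
print(section_230_applies(True, True, False))  # True: immunity applies
```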

Again, this kind of ruling only encourages more such vexatious litigating.

Outside of Section 230, the social media defendants sought to go to the heart of the matter and just made it clear that there’s no causal link between “dipshit being a dipshit on social media” and “dipshit going on a murderous rampage.”

And, again, the judge doesn’t seem to much care, saying that a jury can figure that out:

As a general proposition the issue of proximate cause between the defendants’ alleged negligence and a plaintiff’s injuries is a question of fact for a jury to determine. Oishei v. Gebura 2023 NY Slip Op 05868, 221 AD3d 1529 (4th Dept 2023). Part of the argument is that the criminal acts of the third party, break any causal connection, and therefore causation can be decided as a matter of law. There are limited situations in which the New York Court of Appeals has found intervening third party acts to break the causal link between parties. These instances are where “only one conclusion may be drawn from the established facts and where the question of legal cause may be decided as a matter of law.” Derdiarian v Felix Contr. Corp., 51 NY2d 308 at 315 (1980). These exceptions involve independent intervening acts that do not flow from the original alleged negligence.

Again, though, getting this kind of case to a jury would be crazy. It would be a massive waste of everyone’s resources. By any objective standard, anyone looking at this case would recognize that it is not just frivolous and vexatious, but that it creates really terrible incentives all around.

If these kinds of cases are allowed to continue, you will get more such frivolous lawsuits for anything bad that happens. Worse, you will get much less overall speech online, as websites will have incentives to take down or block any speech that isn’t Sesame Street-level in tone. Any expression of anger, complaint, or unhappiness about anything could be held up as proof that the social media site was “defective in its design” for not magically connecting that expression to future violence.

That would basically be the end of any sort of forum for mental health. It would be the end of review sites. It would be the end of all sorts of useful websites, because the liability that could accrue from just one user on those forums saying something negative would be too much. If just one of the people in those forums then took action in the real world, people could blame the site and sue it for not magically stopping the real-world violence.

This would be a disaster for the world of online speech.

As Eric Goldman notes, it seems unlikely that this ruling will survive on appeal, but it’s still greatly problematic:

I am skeptical this opinion will survive an appeal. The court disregards multiple legal principles to reach an obviously results-driven decision from a judge based in the emotionally distraught community.

The court doesn’t cite other cases involving similar facts, including Gibson v. Craigslist and Godwin v. Facebook. One of the ways judges can reach the results they want is by selectively ignoring the precedent, but that approach doesn’t comply with the rule of law.

This opinion reinforces how the “negligent design” workaround to Section 230 will functionally eliminate Section 230 if courts allow plaintiffs to sue over third-party content by just relabeling their claims.

Separately, I will note my profound disappointment in seeing a variety of folks cheering on this obviously problematic ruling. Chief among them: Free Press. I don’t always agree with Free Press on policy prescriptions, but generally, their heart is in the right place on core internet freedom issues. Yet, they put out a press release cheering on this ruling.

In the press release, they claim that letting this case move forward is a form of “accountability” for those killed in the horrific shooting in Buffalo. But that’s ridiculous, and anyone should recognize it. There are people to hold liable for what happened there: most obviously the shooter himself. But trying to hold random social media sites liable for not somehow stopping future real-world violence is beyond silly. As described above, it’s also extremely harmful to the causes around free speech that Free Press claims as part of its mission.

I’m surprised and disappointed to see them take such a silly stance that undermines their credibility. But I’m even more disappointed in the court for ruling this way.

Filed Under: blame, buffalo shooting, liability, product liability, section 230, wayne jones
Companies: google, reddit, youtube

Once Again, Google Caves To Political Pressure And Supports Questionable STOP CSAM Law

from the playing-political-games dept

It’s not surprising, but still disappointing, to see companies like Google and Meta, which used to take strong stands against bad laws, now showing a repeated willingness to cave on such principles in the interests of appeasing policymakers. It’s been happening a lot in the last few years and it’s happened again as Google has come out (on ExTwitter of all places) to express support for a mixed batch of “child safety” bills.


If you can’t see that screenshot, they are tweets from the Google Public Policy team, stating:

Protecting kids online is a top priority—and demands both strong legislation and responsible corporate practices to make sure we get it right.

We support several important bipartisan bills focused on online child safety, including the Invest in Child Safety Act, the Project Safe Childhood Act, the Report Act, the Shield Act, and the STOP CSAM Act.

We’ve talked about a couple of these bills. The Invest in Child Safety Act seems like a good one, from Senator Ron Wyden, as it focuses the issue where it belongs: on law enforcement. That is, rather than blaming internet companies for not magically stopping criminals, it equips law enforcement to better do its job.

The Shield Act is about stopping the sharing of nonconsensual sexual images and seems mostly fine, though I’ve seen a few concerns raised on the margins about how some of the language might go too far in criminalizing activities that shouldn’t be criminal. According to Senator Cory Booker last week, he’s been working with Senator Klobuchar on fixing those problematic parts.

And the Project Safe Childhood Act also seems perfectly fine. In many ways it complements the Invest in Child Safety Act in that it’s directed at law enforcement and focused on getting law enforcement to be better about dealing with child sexual abuse material, coordinating with other parts of law enforcement, and submitting seized imagery to NCMEC’s cybertip line.

But, then there’s the STOP CSAM bill. As we’ve discussed, there are some good ideas in that bill, but they’re mixed with some problematic ones. And, some of the problematic ones are a backdoor attack on encryption. Senator Dick Durbin, the author of the bill, went on a rant about Section 230 last week in trying to get the bill through on unanimous consent, which isn’t great either, and suggests some issues with the bill.

In that rant, he talks about how cell phones are killing kids because of “some crazy person on the internet.” But, um, if that’s true, it’s a law enforcement issue and “the crazy person on the internet” should face consequences. But Durbin insists that websites should somehow magically stop the “crazy person on the internet” from saying stuff. That’s a silly and mistargeted demand.

In that rant, he also talked about the importance of “turning the lawyers loose” on the big tech companies to sue them for what their users posted.

You’d think that that would be a reason for a company like Google to resist STOP CSAM, knowing it’ll face vexatious litigation. But, for some reason, it is now supporting the bill.

Lots of people have been saying that Durbin has a new, better version of STOP CSAM, and I’ve seen a couple drafts that are being passed around. But the current version of the bill still has many problems. Maybe Google is endorsing a fixed version of the bill, but if so, it sure would be nice if the rest of us could see it.

In the meantime, Durbin put out a gloating press release about Google’s support.

“For too long, Big Tech used every trick in the book to halt legislation holding social media companies accountable, while still trying to win the PR game. I’m glad to see that some tech companies are beginning to make good on their word to work with Congress on meaningful solutions to keep children safe online. I encourage other tech companies to follow Google’s move by recognizing that the time for Big Tech to police itself is over and work with Congress to better protect kids.”

Can’t say I understand Google’s reasons for caving here. I’m sure there’s some political calculus in doing so. And maybe they have the inside scoop on a fixed version of Durbin’s bill. But to do so the day after he talks about “turning the lawyers loose” on websites for failing to magically stop people from saying stuff… seems really strange.

It seems increasingly clear that both Meta and Google, with their buildings full of lawyers, have decided that the strategic political move is to embrace some of these laws, even as they know they’ll get hit with dumb lawsuits over them. They feel they can handle the lawsuits and, as a bonus, they know that smaller upstart competitors will probably have a harder time.

Still, there was a time when Google stood on principle and fought bad bills. That time seems to have passed.

Filed Under: dick durbin, encryption, liability, section 230, stop csam
Companies: google

Senator Blumenthal Pretends To Fix KOSA; It’s A Lie

from the blumenthal's-lies-will-kill dept

As lots of folks are reporting, Senator Richard Blumenthal, this morning, released an updated version of the Kids Online Safety Act (KOSA). He and co-author Senator Marsha Blackburn are also crowing about how they’ve now increased the list of co-sponsors to 62 Senators, including Senators Chuck Schumer and Ted Cruz.

Blumenthal, as he always does, is claiming that all of the claimed problems with KOSA are myths and that there’s nothing to worry about with this bill.

He’s wrong.

He’s lying.

Senator Blumenthal has done this before. He did it with FOSTA and people died because of him. Yet, he won’t take responsibility for his bad legislation.

And this is bad legislation that will kill more people. Senator Blumenthal is using children as a prop to further his political career at the expense of actual children.

Blumenthal and his staff know this. There was talk all week that the revised bill was coming out today. Normally, senators share such bills around for analysis. They’ll often share a “redline” of the bill so people can analyze what’s changed. Blumenthal shared this one only with his closest allies, so they could do a full-court press this morning claiming the bill is perfect now, while people who actually understand this shit had to spend the morning creating a redline to see what was different from the previous bill and to analyze what problems remain.

The key change was to kill the part that allowed State Attorneys General to be the arbiters of enforcing what was “harmful,” which tons of people pointed out would allow Republican State AGs to claim that LGBTQ content was “harmful.” Indeed, that provision was a big part of the appeal to Republicans, some of whom publicly admitted it would be used to stifle LGBTQ content.

Now, that “duty of care” section no longer applies to state AGs (who can still enforce other parts of the bill, which are still a problem). Instead, the FTC is given the power over this section, but as we explained a few months back, it’s clear how that can be abused too. If Donald Trump wins in the fall and installs a new MAGA FTC boss, does anyone think this new power won’t be abused to claim that LGBTQ content is “harmful” and that companies have a “duty of care” to protect kids from it?

It also does not fully remove state AGs. They still have enforcement power over other aspects of the bill, including requiring that platforms put in place “safeguards for minors” as well as their mandated “disclosures” regarding children.

The new version of the bill also does pare back the duty of care section a bit, but not in a useful way. It is now much less clear what websites need to do to “exercise reasonable care,” which means that sites will aggressively block content to avoid even the risk of liability.

And, of course, nothing in this bill works unless websites embrace age verification, which has already been repeatedly deemed unconstitutional, as an infringement of the rights of kids, adults, and websites. There is some other nonsense about “filter bubbles” that appears to require a chronological feed (even as research has shown chronological feeds lead people to see more false information).

Anyway, the bill is still problematic. If Blumenthal were actually trying to solve the problems of the bill, he might have shared it with actual critics, rather than keeping it secret. But the goal is not to fix it. The goal is to get Blumenthal on TV to talk about how he’s saving kids, even as he’s putting them at risk.

And Blumenthal’s “Fact v. Fiction” attempt to pre-bunk the concerns is just full of absolute nonsense. It says that KOSA doesn’t give AGs or the FTC “the power to bring lawsuits over content or speech.” But that’s misleading. As we keep seeing, people are quick to blame platforms for “features” or “design choices” that are really about the content found via those features or design choices. It is easy to bring an enforcement action that pretends to be about design but is really about speech.

Also, the bill enables the FTC to designate what are “best practices” regarding kid safety, and what site is going to risk the liability of not following those “best practices.” And we’ve already seen the last Trump administration pressure agencies like the FCC and FTC to take on culture war issues. There’s no way it won’t happen again.

And this one really gets me. Blumenthal claims that no one should be concerned about the duty of care, while giving us all the reasons to be concerned:

The “duty of care” requires social media companies to prevent and mitigate certain harms that they know their platforms and products are causing to young users as a result of their own design choices, such as their recommendation algorithms and addictive product features. The specific covered harms include suicide, eating disorders, substance use disorders, and sexual exploitation.

For example, if an app knows that its constant reminders and nudges are causing its young users to obsessively use their platform, to the detriment of their mental health or to financially exploit them, the duty of care would allow the FTC to bring an enforcement action. This would force app developers to consider ahead of time where theses nudges are causing harms to kids, and potentially avoid using them.

“Theses [sic] nudges” indeed, Senator Finsta.

But, here’s the issue: how do you separate out things like “nudges” from the underlying content? Is it a “nudge” or is it a notification that your friend messaged you? As we’ve detailed specifically in the area of eating disorders, when sites tried to mitigate the harms by limiting access to that content, it made things even worse for people, because (1) it was a demand-side problem from the kids, not a supply-side problem from the sites, and (2) by trying to stifle that kind of content, it took kids away from helpful resources and pushed them to riskier content.

This whole thing is based on a myth that social media is the cause of eating disorders, suicides, and other things, when the evidence simply does not support that claim at all.

The “fact vs. fiction” section is just full of fiction. For example:

No, the Kids Online Safety Act does not make online platforms liable for the content they host or choose to remove.

That’s a fun one to say, but it only makes sense if you ignore reality. Again, in this very section (as detailed above), Blumenthal is quick to conflate potential harms from content (e.g., eating disorders, suicide) with harms from design choices. Given that Blumenthal himself confuses the two, it’s rich that he thinks those things are somehow cabined off from each other within the law.

Indeed, all the FTC or a state AG is going to need to do is claim that an increase in suicides or other problems is caused by “features” on the site. To avoid that risk and liability, sites are going to remove the content, since they know damn well it’s the content, not the features, that is really the target.

And, as the eating disorder case study found, because this is a demand-side issue, kids will just find other places to continue discussing this stuff, with less oversight, and much more risk involved. People will die because of this bill.

Another lie:

No, the Kids Online Safety Act does not impose age verification requirements or require platforms to collect more data about users (government IDs or otherwise).

In fact, the bill states explicitly that it does not require age gating, age verification, or the collection of additional data from users.

This is wrong. First of all, the bill does set up a study on age verification, which is a prelude to directly requiring age verification.

But, more importantly, the bill does require “safeguards for minors” and a “duty of care” for minors, and the only way you know if you have “minors” on your site is to age-verify them. And the only way to do that is to collect way more information on them, which puts their privacy at risk.

Blumenthal will claim that the bill only requires those things for users who are “known” to be minors, but then it’s going to lead sites either to put their heads in the sand so they know nothing about anyone (which isn’t great) or to face a series of stupid legal fights over what it means to “know” whether or not a minor is on the site.

There’s more, but KOSA is still a mess, and because everyone’s asking my opinion of it and Blumenthal only gave early copies to friends, this first-pass analysis is what you get. Tragically, Blumenthal has strategically convinced a few LGBTQ groups to drop their opposition to the bill. They’re not supporting it (as some have reported); rather, their letter says the groups “will not oppose” the new KOSA.

KOSA is still a bad bill. It will not protect children. It provides no more resources to actually protect children. It is an exercise in self-aggrandizement, feeding Blumenthal’s desire to be hailed as a “savior,” rather than an effort to actually solve real problems.

Filed Under: duty of care, enforcement, ftc, kosa, liability, marsha blackburn, nudges, protect the children, richard blumenthal