recommendations – Techdirt

NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment

from the that's-not-how-it-works dept

The NY Times has real difficulty not misrepresenting Section 230. Over and over and over and over and over again it has misrepresented how Section 230 works, even having to once run an astounding correction (to an article that had a half-page headline saying Section 230 was at fault).

A day later, it had to run another correction on a different article also misrepresenting Section 230.

You would think with all these mistakes and corrections that the editors at the NY Times might take things a bit more slowly when either a reporter or a columnist submits a piece purportedly about Section 230.

Apparently not.

Julia Angwin has done some amazing reporting on privacy issues in the past and has exposed plenty of legitimately bad behavior by big tech companies. But, unfortunately, she appears to have been sucked into nonsense about Section 230.

She recently wrote a terribly misleading opinion piece, bemoaning social media algorithms and blaming Section 230 for their existence. The piece is problematic and wrong on multiple levels. It’s disappointing that it ever saw the light of day without someone pointing out its many flaws.

A history lesson:

Before we get to the details of the article, let’s take a history lesson on recommendation algorithms, because it seems that many people have very short memories.

The early internet was both great and a mess. It was great because anyone could create anything and communicate with anyone. But it was a mess because that came with a ton of garbage and slop. There were attempts to organize that information and make it useful. Things like Yahoo became popular not because they had a search engine (that came later!) but because they were an attempt to “organize” the internet (Yahoo originally stood for “Yet Another Hierarchical Officious Oracle”, recognizing that there were lots of attempts to “organize” the internet at that time).

After that, searching and search algorithms became a central way of finding stuff online. In its simplest form, search is a recommendation algorithm based on the keywords you provide run against its index. In the early days, Google cracked the code to make that recommendation algorithm for content on the wider internet.

The whole point of a search recommendation is “the algorithm thinks these are the most relevant bits of content for you.”

The next generation of the internet was content in various silos. Some of those were user-generated silos of content, such as Facebook and YouTube. And some of them were professional content, like Netflix or iTunes. But, once again, it wasn’t long before users felt overwhelmed with the sheer amount of content at their fingertips. Again, they sought out recommendation algorithms to help them find the relevant or “good” content, and to avoid the less relevant “bad” content. Netflix’s algorithm isn’t very different from Google’s recommendation engine. It’s just that, rather than “here’s what’s most relevant for your search keywords,” it’s “here’s what’s most relevant based on your past viewing history.”
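To make concrete the point that search and Netflix-style recommendations are variations on the same idea, here is a minimal sketch in Python. Everything in it (the data, the function name, the scoring rule) is invented for illustration and bears no resemblance to how any real engine works: it just ranks items by overlap with a set of terms, and the only difference between "search" and "recommendation" is where those terms come from.

```python
# Minimal sketch: "search as recommendation." Rank items in a tiny index by how
# often the supplied terms appear in them. All data and names here are invented
# for illustration; real engines use vastly more signals than term overlap.
from collections import Counter

def recommend(index: dict[str, str], terms: list[str], top_n: int = 3) -> list[str]:
    """Return up to top_n item ids, ordered by how many of the terms they contain."""
    scores = {}
    for item_id, text in index.items():
        words = Counter(text.lower().split())
        scores[item_id] = sum(words[t.lower()] for t in terms)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [item_id for item_id in ranked[:top_n] if scores[item_id] > 0]

pages = {
    "mushroom-guide": "field guide to edible and poisonous mushrooms",
    "pie-recipes": "strawberry pie recipes and baking tips",
    "gardening": "growing strawberries in your garden",
}

# "Search" flavor: the terms come from the user's query.
print(recommend(pages, ["strawberry", "pie"]))      # ['pie-recipes']
# "Netflix" flavor: the terms come from a profile of past viewing/reading.
print(recommend(pages, ["mushrooms", "garden"]))    # ['mushroom-guide', 'gardening']
```

Swap the hand-rolled scoring for machine-learned relevance and the structure stays the same: an opinion about which items are most relevant, given what the system knows at the time.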

Indeed, Netflix somewhat famously perfected the content recommendation algorithm in those years, even offering up a $1 million prize to anyone who could build a better version. Years later, a team of researchers won the award, but Netflix never implemented it, saying that the marginal gains in quality were not worth the expense.

Either way, though, it was clearly established that the benefit and the curse of the larger internet is that in enabling anyone to create and access content, too much content is created for anyone to deal with. Thus, curation and recommendation is absolutely necessary. And handling both at scale requires some sort of algorithms. Yes, some personal curation is great, but it does not scale well, and the internet is all about scale.

People also seem to forget that recommendation algorithms aren’t just telling you what content they think you’ll want to see. They’re also helping to minimize the content you probably don’t want to see. Search engines choosing which links show up first are also choosing which links they won’t show you. My email is only readable because of the recommendation engines I run against it (more than just a spam filter, I also run algorithms that automatically put emails into different folders based on likely importance and priority).
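For a sense of what that kind of personal email triage can look like, here is a toy sketch in Python. The rules, folder names, and patterns are all hypothetical, not a description of any actual setup: the point is just that "recommendation" here is as much about routing things away from your attention as toward it.

```python
# Toy sketch of rule-based email triage: a small "recommendation engine" that
# decides which folder a message probably belongs in. Rules and folder names
# are invented for illustration; a real setup would be far more elaborate.
import re

RULES = [
    ("Spam",        re.compile(r"crypto giveaway|you have won|act now", re.I)),
    ("Priority",    re.compile(r"\b(invoice|deadline|legal|urgent)\b", re.I)),
    ("Newsletters", re.compile(r"\bunsubscribe\b", re.I)),
]

def triage(subject: str, body: str) -> str:
    """Return the folder a message is routed to; 'Inbox' if no rule matches."""
    text = f"{subject}\n{body}"
    for folder, pattern in RULES:
        if pattern.search(text):
            return folder
    return "Inbox"

print(triage("Re: filing deadline", "The brief is due Friday."))    # Priority
print(triage("Weekly roundup", "Click unsubscribe to opt out."))    # Newsletters
print(triage("Lunch?", "Are you free at noon?"))                    # Inbox
```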

Algorithms aren’t just a necessary part of making the internet usable today. They’re a key part of improving our experiences.

Yes, sometimes algorithms get things wrong. They could recommend something you don’t want. Or demote something you do. Or maybe they recommend some problematic information. But sometimes people get things wrong too. Part of internet literacy is recognizing that what an algorithm presents to you is just a suggestion and not wholly outsourcing your brain to the algorithm. If the problem is people outsourcing their brain to the algorithm, it won’t be solved by outlawing algorithms or adding liability to them.

It being just a suggestion or a recommendation is also important from a legal standpoint: because recommendation algorithms are simply opinions. They are opinions of what content that algorithm thinks is most relevant to you at the time based on what information it has at that time.

And opinions are protected free speech under the First Amendment.

If we held anyone liable for opinions or recommendations, we’d have a massive speech problem on our hands. If I go into a bookstore, and the guy behind the counter recommends a book to me that makes me sad, I have no legal recourse, because no law has been broken. If we say that tech company algorithms mean they should be liable for their recommendations, we’ll create a huge mess: spammers will be able to sue if email is filtered to spam. Terrible websites will be able to sue search engines for downranking their nonsense.

On top of that, First Amendment precedent has long been clear that the only way a distributor can be held liable for even a harmful recommendation is if the distributor had actual knowledge of the law-violating nature of the recommended content.

I know I’ve discussed this case before, but it always gets lost in the mix. In Winter v. GP Putnam’s Sons, the Ninth Circuit said a publisher was not liable for publishing a mushroom encyclopedia that literally “recommended” people eat poisonous mushrooms. The issue was that the publisher had no way to know that the mushroom was, in fact, inedible.

We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.

It’s not hard to transpose this to the internet. If Google recommends a link that causes someone to poison themselves, precedent says we can hold the author liable, but not the distributor/recommender unless they have actual knowledge of the illegal nature of the content. Absent that, there is nothing to actually sue over.

And, that’s good. Because you can’t demand that anyone recommending anything know with certainty whether or not the content they are recommending is good or bad. That puts way too much of a burden on the recommender, and makes the mere process of recommending anything a legal minefield.

Note that the issue of Section 230 does not come up even once in this history lesson. All that Section 230 does is say that websites and users (that’s important!) are immune from liability for their editorial choices regarding third-party content. That doesn’t change the underlying First Amendment protections for their editorial discretion, it just allows them to get cases tossed out earlier (at the very earliest motion to dismiss stage) rather than having to go through expensive discovery/summary judgment and possibly even all the way to trial.

Section 230 isn’t the issue here:

Now back to Angwin’s piece. She starts out by complaining about Mark Zuckerberg talking up Meta’s supposedly improved algorithms. Then she takes the trite and easy route of dunking on that by pointing out that Facebook is full of AI slop and clickbait. That’s true! But… that’s got nothing to do with legal liability. That simply has to do with… how Facebook works and how you use Facebook? My Facebook feed has no AI slop or clickbait, perhaps because I don’t click on that stuff (and I barely use Facebook). If there were no 230 and Facebook were somehow incentivized to do less algorithmic recommendation, feeds would still be full of nonsense. That’s why the algorithms were created in the first place. Indeed, studies have shown that when you remove algorithms, feeds are filled with more nonsense, because the algorithms don’t filter out the crap any more.

But Angwin is sure that Section 230 is to blame and thinks that if we change it, it will magically make the algorithms better.

Our legal system is starting to recognize this shift and hold tech giants responsible for the effects of their algorithms — a significant, and even possibly transformative, development that over the next few years could finally force social media platforms to be answerable for the societal consequences of their choices.

Let’s back up and start with the problem. Section 230, a snippet of law embedded in the 1996 Communications Decency Act, was initially intended to protect tech companies from defamation claims related to posts made by users. That protection made sense in the early days of social media, when we largely chose the content we saw, based on whom we “friended” on sites such as Facebook. Since we selected those relationships, it was relatively easy for the companies to argue they should not be blamed if your Uncle Bob insulted your strawberry pie on Instagram.

So, again, this is wrong. From the earliest days of the internet, we always relied on recommendation systems and moderation, as noted above. And “social media” didn’t even come into existence until years after Section 230 was created. So, it’s not just wrong to say that Section 230’s protections made sense for early social media, it’s backwards.

Also, it is somewhat misleading to call Section 230 “a snippet of law embedded in the 1996 Communications Decency Act.” Section 230 was an entirely different law, designed to be a replacement for the CDA. It was the Internet Freedom and Family Empowerment Act and was put forth by then-Reps. Cox and Wyden as an alternative to the CDA. Then, Congress, in its infinite stupidity, took both bills and merged them.

But it was also intended to help protect companies from being sued for recommendations. Indeed, two years ago, Cox and Wyden explained this to the Supreme Court in a case about recommendations:

At the same time, Congress drafted Section 230 in a technology-neutral manner that would enable the provision to apply to subsequently developed methods of presenting and moderating user-generated content. The targeted recommendations at issue in this case are an example of a more contemporary method of content presentation. Those recommendations, according to the parties, involve the display of certain videos based on the output of an algorithm designed and trained to analyze data about users and present content that may be of interest to them. Recommending systems that rely on such algorithms are the direct descendants of the early content curation efforts that Congress had in mind when enacting Section 230. And because Section 230 is agnostic as to the underlying technology used by the online platform, a platform is eligible for immunity under Section 230 for its targeted recommendations to the same extent as any other content presentation or moderation activities.

So the idea that 230 wasn’t meant for recommendation systems is wrong and ahistorical. It’s strange that Angwin would just claim otherwise, without backing up that statement.

Then, Angwin presents a very misleading history of court cases around 230, pointing out cases where Section 230 has been successful in getting bad cases dismissed at an early stage, but in a way that makes it sound like the cases would have succeeded absent 230:

Section 230 now has been used to shield tech from consequences for facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking. And in the meantime, the companies grew to be some of the most valuable in the world.

But again, these links misrepresent and misunderstand how Section 230 functions under the umbrella of the First Amendment. None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable. All Section 230 did was speed up the resolution of those cases, without stopping the plaintiffs from taking legal action against those actually responsible for the harms.

And, similarly, we could point to another list of cases where Section 230 “shielded tech firms from consequences” for things we want them shielded from consequences on, like spam filters, kicking Nazis off your platform, fact-checking vaccine misinformation and election denial disinformation, removing hateful content and much much more. Remove 230 and you lose that ability as well. And those two functions are tied together at the hip. You can’t get rid of the protections for the stuff Julia Angwin says is bad without also losing the protections for things we want to protect. At least not without violating the First Amendment.

This is the part that 230 haters refuse to understand. Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions on all sorts of content. Yet, somehow, they think that taking away Section 230 would magically lead to more removals of “bad” content. That’s the opposite of true. Remove 230 and things like removing hateful information, putting in place spam filters, and stopping medical and election misinfo become a bigger challenge, since those decisions will cost much more to defend (even if you’d win on First Amendment grounds years later).

Angwin’s issue (as is the issue with so many Section 230 haters) is that she wants to blame tech companies for harms created by users of those technologies. At its simplest level, Section 230 is just putting the liability on the party actually responsible. Angwin’s mad because she’d rather blame tech companies than the people actually selling drugs, sexually harassing people, selling illegal arms or engaging in human trafficking. And I get the instinct. Big tech companies suck. But pinning liability on them won’t fix that. It’ll just allow them to get out of having important editorial discretion (making everything worse) while simultaneously building up a bigger legal team, making sure competitors can never enter the space.

That’s the underlying issue.

Because if you blame the tech companies, you don’t get less of those underlying activities. You get companies who won’t even look to moderate such content, because that would be used in lawsuits against them as a sign of “knowledge.” Or if the companies do decide to more aggressively moderate, you would get any attempt to speak out about sexual harassment blocked (goodbye to the #MeToo movement… is that what Angwin really wants?)

Changing 230 would make things worse, not better:

From there, Angwin treats the absolutely batshit crazy 3rd Circuit opinion in Anderson v. TikTok, which explicitly ignored a long list of other cases based on misreading a non-binding throwaway line in a Supreme Court ruling and gave no other justification for its holding, as a good thing?

If the court holds platforms liable for their algorithmic amplifications, it could prompt them to limit the distribution of noxious content such as nonconsensual nude images and dangerous lies intended to incite violence. It could force companies, including TikTok, to ensure they are not algorithmically promoting harmful or discriminatory products. And, to be fair, it could also lead to some overreach in the other direction, with platforms having a greater incentive to censor speech.

Except, it won’t do that. Because of the First Amendment, it does the opposite. The First Amendment requires actual knowledge of the violative actions and content, so this will mean one of two things: companies either taking a much less proactive stance or, alternatively, becoming much quicker to remove any controversial content (so goodbye #MeToo, #BlackLivesMatter, or protests against the political class).

Even worse, Angwin seems to have spoken to no one with actual expertise on this if she thinks this is the end result:

My hope is that the erection of new legal guardrails would create incentives to build platforms that give control back to users. It could be a win-win: We get to decide what we see, and they get to limit their liability.

As someone who is actively working to help create systems that give control back to users, I will say flat out that Angwin gets this backwards. Without Section 230 it becomes way more difficult to do so. Because the users themselves would now face much greater liability, and unlike the big companies, the users won’t have buildings full of lawyers willing and able to fight such bogus legal threats.

If you face liability for giving users more control, users get less control.

And, I mean, it’s incredible to say we need legal guardrails and less 230 and then say this:

In the meantime, there are alternatives. I’ve already moved most of my social networking to Bluesky, a platform that allows me to manage my content moderation settings. I also subscribe to several other feeds — including one that provides news from verified news organizations and another that shows me what posts are popular with my friends.

Of course, controlling our own feeds is a bit more work than passive viewing. But it’s also educational. It requires us to be intentional about what we are looking for — just as we decide which channel to watch or which publication to subscribe to.

As a board member of Bluesky, I can say that those content moderation settings, and the ability for others to make feeds that Angwin can then choose from, are possible in large part due to Section 230. Without Section 230 protecting both Bluesky and its users, it would be much more difficult to defend lawsuits over those feeds.

Angwin literally has this backwards. Without Section 230, is Bluesky as open to offering up third-party feeds? Is it as open to allowing users to create their own feeds? Under the world that Angwin claims to want, where platforms have to crack down on “bad” content, it would be a lot more legally risky to allow user control and third-party feeds. Not because providing the feeds would lead to legal losses, but because without 230 they would invite more bogus lawsuits, and it would cost way more to get those lawsuits tossed out under the First Amendment.

Bluesky doesn’t have a building full of lawyers like Meta has. If Angwin got her way, Bluesky would need that if it wanted to continue offering the features Angwin claims she finds so encouraging.

This is certainly not the first time that the NY Times has directly misled the public about how Section 230 works. But Angwin knows many of the 230 experts in the field. It appears she spoke to none of them and wrote a piece that gets almost everything backwards. Angwin is a powerful and important voice for fixing many of the downstream problems of tech companies. I just wish that she would spend some time understanding the nuances of 230 and the First Amendment so that her recommendations could be more accurate.

I’m quite happy that Angwin likes Bluesky’s approach to giving power to end users. I only wish she wasn’t advocating for something that would make that way more difficult.

Filed Under: 1st amendment, algorithms, content moderation, free speech, history, julia angwin, recommendations, section 230

Third Circuit’s Section 230 TikTok Ruling Deliberately Ignores Precedent, Defies Logic

from the that's-not-how-any-of-this-works dept

Step aside Fifth Circuit Court of Appeals, there’s a new contender in town for who will give us the most batshit crazy opinions regarding the internet. This week, a panel on the Third Circuit ruled that a lower court was mistaken in dismissing a case against TikTok on Section 230 grounds.

But, in order to do so, the court had to intentionally reject a very long list of prior caselaw on Section 230, misread some Supreme Court precedent, and (trifecta!) misread Section 230 itself. This may be one of the worst Circuit Court opinions I’ve read in a long time. It’s definitely way up the list.

The implications are staggering if this ruling stands. We just talked about some cases in the Ninth Circuit that poke some annoying and worrisome holes in Section 230, but this ruling takes a wrecking ball to 230. It basically upends the entire law.

At issue are the recommendations TikTok offers on its “For You Page” (FYP), which is the algorithmically recommended feed that a user sees. According to the plaintiff, the FYP recommended a “Blackout Challenge” video to a ten-year-old child, who mimicked what was shown and died. This is, of course, horrifying. But who is to blame?

We have some caselaw on this kind of thing even outside of the internet context. In Winter v. GP Putnam’s Sons, it was found that the publisher of an encyclopedia of mushrooms was not liable for “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.

In many ways, Section 230 was designed to speed up this analysis in the internet era, by making it explicit that a website publisher has no liability for harms that come from content posted by others, even if the publisher engaged in traditional publishing functions. Indeed, the point of Section 230 was to encourage platforms to engage in traditional publishing functions.

There is a long list of cases that say that Section 230 should apply here. But the panel on the Third Circuit says it can ignore all of those. There’s a very long footnote (footnote 13) that literally stretches across three pages of the ruling listing out all of the cases that say this is wrong:

We recognize that this holding may be in tension with Green v. America Online (AOL), where we held that § 230 immunized an ICS from any liability for the platform’s failure to prevent certain users from “transmit[ing] harmful online messages” to other users. 318 F.3d 465, 468 (3d Cir. 2003). We reached this conclusion on the grounds that § 230 “bar[red] ‘lawsuits seeking to hold a service provider liable for . . . deciding whether to publish, withdraw, postpone, or alter content.’” Id. at 471 (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)). Green, however, did not involve an ICS’s content recommendations via an algorithm and pre-dated NetChoice. Similarly, our holding may depart from the pre-NetChoice views of other circuits. See, e.g., Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1098 (9th Cir. 2019) (“[R]ecommendations and notifications . . . are not content in and of themselves.”); Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019) (“Merely arranging and displaying others’ content to users . . . through [] algorithms—even if the content is not actively sought by those users—is not enough to hold [a defendant platform] responsible as the developer or creator of that content.” (internal quotation marks and citation omitted)); Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 21 (1st Cir. 2016) (concluding that § 230 immunity applied because the structure and operation of the website, notwithstanding that it effectively aided sex traffickers, reflected editorial choices related to traditional publisher functions); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 407 (6th Cir. 2014) (adopting Zeran by noting that “traditional editorial functions” are immunized by § 230); Klayman v. Zuckerburg, 753 F.3d 1354, 1359 (D.C. Cir. 2014) (immunizing a platform’s “decision whether to print or retract a given piece of content”); Johnson v. Arden, 614 F.3d 785, 791-92 (8th Cir. 2010) (adopting Zeran); Doe v. MySpace, Inc., 528 F.3d 413, 420 (5th Cir. 2008) (rejecting an argument that § 230 immunity was defeated where the allegations went to the platform’s traditional editorial functions).

I may not be a judge (or even a lawyer), but even I might think that if you’re ruling on something and you have to spend a footnote that stretches across three pages listing all the rulings that disagree with you, at some point, you take a step back and ask:

Principal Skinner meme. First panel: frowning, looking down, hand stroking chin: “Am I so out of touch that if every other circuit court ruling disagrees with me, I should reconsider?” Second panel: looking up: “No, it’s the other courts who are wrong.”

As you might be able to tell from that awful footnote, the Court here seems to think that the ruling in Moody v. NetChoice has basically overturned those rulings and opened up a clean slate. This is… wrong. I mean, there’s no two ways about it. Nothing in Moody says this. But the panel here is somehow convinced that it does?

The reasoning here is absolutely stupid. It’s taking the obviously correct point that the First Amendment protects editorial decision-making, and saying that means that editorial decision-making is “first-party speech.” And then it’s making that argument even dumber. Remember, Section 230 protects an interactive computer service or user from being treated as the publisher (for liability purposes) of third party information. But, according to this very, very, very wrong analysis, algorithmic recommendations are magically “first-party speech” because they’re protected by the First Amendment:

Anderson asserts that TikTok’s algorithm “amalgamat[es] [] third-party videos,” which results in “an expressive product” that “communicates to users . . . that the curated stream of videos will be interesting to them[.]” ECF No. 50 at 5. The Supreme Court’s recent discussion about algorithms, albeit in the First Amendment context, supports this view. In Moody v. NetChoice, LLC, the Court considered whether state laws that “restrict the ability of social media platforms to control whether and how third-party posts are presented to other users” run afoul of the First Amendment. 144 S. Ct. 2383, 2393 (2024). The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment….

Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, id. at 2409, it follows that doing so amounts to first-party speech under § 230, too….

This is just flat out wrong. It is based on the false belief that any “expressive product” makes it “first-party speech.” That’s wrong on the law and it’s wrong on the precedent.

It’s a bastardization of an already wrong argument put forth by MAGA fools that Section 230 conflicts with the argument in Moody. The argument, as hinted at by Justices Thomas and Gorsuch, is that because NetChoice argues (correctly) that its editorial decision-making is protected by the First Amendment, it’s somehow in conflict with the idea that they have no legal liability for third-party speech.

But that’s only in conflict if you can’t read and/or don’t understand the First Amendment and Section 230 and how they interact. The First Amendment still protects any editorial actions taken by a platform. All Section 230 does is say that it can’t face liability for third party speech, even if it engaged in publishing that speech. The two things are in perfect harmony. Except to these judges in the Third Circuit.

The Supreme Court at no point says that editorial actions turn into first-party speech because they are protected by the First Amendment, contrary to what the panel claims here. That’s never been true, as even the mushroom encyclopedia example above shows.

Indeed, reading Section 230 in this manner wipes out Section 230. It makes it the opposite of what the law was intended to do. Remember, the law was written in response to the ruling in Stratton Oakmont v. Prodigy, where a local judge found Prodigy liable for content it didn’t moderate, because it did moderate some content. As then Reps. Chris Cox and Ron Wyden recognized, that would encourage no moderation at all, which made no sense. So they passed 230 to overturn that decision and make it so that internet services could feel free to engage in all sorts of publishing activity without facing liability for the underlying content when that content was provided by a third party.

But here, the Third Circuit has flipped that on its head and said that the second you engage in First Amendment-protected publishing activity around content (such as recommending it), you lose Section 230 protections because the content becomes first-party content.

That’s… the same thing that the court ruled in Stratton Oakmont, and which 230 overturned. It’s beyond ridiculous for the Court to say that Section 230 basically enshrined Stratton Oakmont, and it’s only now realizing that 28 years after the law passed.

And yet, that seems to be the conclusion of the panel.

Incredibly, Judge Paul Matey (a FedSoc favorite Trump appointee) has a concurrence/dissent where he would go even further in destroying Section 230. He falsely claims that 230 only applies to “hosting” content, not recommending it. This is literally wrong. He also falsely claims that Section 230 is a form of a “common carriage regulation” which it is not.

So he argues that the first Section 230 case, the Fourth Circuit’s important Zeran ruling, was decided incorrectly. The Zeran ruling established that Section 230 protected internet services from all kinds of liability for third-party content. Zeran has been adopted by most other circuits (as noted in that footnote of “all the cases we’re going to ignore” above). So in Judge Matey’s world, he would roll back Section 230 to only protect hosting of content and that’s it.

But that’s not what the authors of the law meant (they’ve told us, repeatedly, that the Zeran ruling was correct).

Either way, every part of this ruling is bad. It basically overturns Section 230 for an awful lot of publisher activity. I would imagine (hope?) that TikTok will request an en banc rehearing across all judges on the circuit and that the entire Third Circuit agrees to do so. At the very least, that would provide a chance for amici to explain how utterly backwards and confused this ruling is.

If not, then you have to think the Supreme Court might take it up, given that (1) they still seem to be itching for direct Section 230 cases and (2) this ruling basically calls out in that one footnote that it’s going to disagree with most other Circuits.

Filed Under: 1st amendment, 1st party speech, 3rd circuit, 3rd party speech, algorithms, fyp, liability, recommendations, section 230
Companies: tiktok

NY’s ‘SAFE For Kids Act’: A Lesson in How Not to Regulate The Internet

from the make-sure-you-only-read-this-article-in-chronological-order dept

We’ve written a few times about New York’s preposterously bonkers “SAFE for Kids Act” (SAFE standing for “Stop Addictive Feeds Exploitation”). It’s an obviously unconstitutional bill that insists, without any real evidence, that basically all social media algorithmic feeds are somehow addictive and problematic.

Last week we posted a letter by a NY-based parent to his own legislators explaining why the bill would inherently do more harm than good.

But, no matter, the NY legislature passed a slightly modified version of the bill last week, which you can read here. The bill no longer has random, unsubstantiated bans on kids using social media in the middle of the night, but still bans algorithmic feeds for kids.

This means age verification will effectively become mandatory, despite claims to the contrary from bill supporters. If you will get in trouble for serving some type of content to those under 18, you need to have a system to determine how old they are. Thus, privacy-damaging age verification is effectively mandated.

But, even more important, the effective banning of algorithmic feeds is ridiculous. There is no evidence that the algorithmic nature of feeds has anything to do with any harm. It is all a fever dream by ignorant politicians. I know for a fact that when the sponsor of this bill, Andrew Gounardes, was asked by a constituent for evidence of the harms, the constituent was dismissed as simply repeating “big tech talking points.” Gounardes seems absolutely convinced that any criticism of the bill must be from “big tech” lobbyists and refuses to consider the very real problems of this bill.

And the same is true for NY Governor Kathy Hochul, who cheered on the passage of the bill:

Governor Kathy Hochul today celebrated the legislative passage of two nation-leading bills to protect kids online. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act will restrict a child’s access to addictive feeds on social media, and the New York Child Data Protection Act will keep children’s personal data safe.

“New York is leading the nation to protect our kids from addictive social media feeds and shield their personal data from predatory companies,” Governor Hochul said. “Together, we’ve taken a historic step forward in our efforts to address the youth mental health crisis and create a safer digital environment for young people. I am grateful to Attorney General James, Majority Leader Stewart-Cousins and Speaker Heastie, and bill sponsors Senator Gounardes and Assemblymember Rozic for their vital partnership in advancing this transformative legislation.”

Remember, Hochul has been trying to blame social media for repeated failings of her own administration, so it’s little surprise she would celebrate this law.

But, again, to date, the research simply does not support the idea that algorithmic feeds are harmful or addictive. Studies have been done on both Meta properties and Twitter that find the only real difference between algorithmic and chronological feeds is that when forced to use chronological feeds, users see a lot more disinformation and junk they don’t want.

What Gounardes, Hochul, and lots of very silly people refuse to understand is that algorithmic recommendations not only give users more of what they want to see, but they also help remove the stuff they don’t want to see. And that’s kind of important.

But, really, just for the sake of comparison, if NY politicians are allowed to determine what content you see when you open a social media app, it also means they think they can control what content you see when you open a news app. What’s to stop them from similarly (falsely) claiming that editorial recommendations in the NY Times or the WSJ are “addictive” and all media sites need to only post articles in chronological order?

These requirements clearly violate the First Amendment, and it’s not a “big tech talking point” to say so.

It’s getting ridiculously exhausting to have to point out the problems with all of these clueless state laws from very foolish politicians, and for them to falsely insist that any critiques must come from “big tech.”

It would be nice if there were serious lawmakers out there willing to have serious discussions on the policies they’re thinking about. Tragically, New York has a bunch of clowns instead, just like tons of other states these days. It’s not partisan in any way. New York and California, both fairly blue states, have been pushing dozens of these kinds of laws. But so have Florida, Texas, Utah, Ohio, Arkansas and more.

Unfortunately these days, constitutionally infirm anti-internet legislation has become a bipartisan pastime. And the most likely result is that more taxpayer funds are going to be wasted while these nonsense bills are inevitably rejected as unconstitutional.

It almost seems like sponsoring a bill that is eventually thrown out as unconstitutional should be grounds for impeachment.

Filed Under: algorithmic feeds, andrew gounardes, chronological feeds, free speech, kathy hochul, new york, protect the children, recommendations, safe for kids act

Elon Musk Says Only Those Who Pay Him Deserve Free Speech

from the price-check-on-freedom-on-aisle-7-please dept

Okay, okay, I think this is the last of my posts about Elon Musk’s unhinged appearance at the DealBook Summit with ill-prepared interviewer Andrew Ross Sorkin. We already covered his cursing out advertisers, while predicting that “earth will judge” them, as well as his statement that AI copyright lawsuits don’t matter because “Digital God” will be here before it’s over, but I also wanted to cover one more exchange, in which Musk effectively says that only those who give him money deserve “free speech” (his definition of free speech).

Again, Sorkin does a terrible job of setting up the question, so I’ll do what he should have done and explain the context. Sorkin is asking about a few times recently where ExTwitter has fiddled with the knobs to punish sites Elon appears not to like. In September, it was reported that one of those sites was the NY Times, and the process began in July, whereby something changed at ExTwitter so that NY Times’ tweets were suppressed in some manner (what some might call “shadow banned,” even though that term’s meaning has changed over time).

Since late July, engagement on X posts linking to the New York Times has dropped dramatically. The drop in shares and other engagement on tweets with Times links is abrupt, and is not reflected in links to similar news organizations including CNN, the Washington Post, and the BBC….

Now, remember, nonsense conspiracy theories about “shadow banning” were one of the reasons why Elon insisted he had to take over the company “to protect free speech.” But as soon as he was at the controls, he immediately started using the same tools to “shadow ban” some of those he disliked.

Anyway, Sorkin asks Musk about this, and Musk’s response is somewhat incredible to see. He more or less says that if you don’t give him money, you don’t deserve his version of “free speech” (which is the ability to post on ExTwitter).

The discussion starts out weird enough:

ARS: The New York Times newspaper it appeared over the summer, to be throttled.

Elon: What? What did?

ARS: The NY Times.

Elon: Well, we do require that that everyone has to buy a subscription and we don’t make exceptions for anyone and and I think if I want the New York Times I have to pay for a subscription and they don’t give me a free subscription, so I’m not going to give them a free subscription

First of all, what? What do you mean, “we do require that everyone has to buy a subscription”? That’s literally not true. Over 99% of users on ExTwitter use the platform for free. Only a tiny, tiny percentage pay for a subscription.

Sorkin tries to bring it back around to throttling, but Musk continues to talk nonsensically about subscriptions, which have fuck all to do with what Sorkin is asking him about.

ARS: But were you throttling the New York Times relative to other news organizations? Relative to everybody else? Was it specific to the to the Times?

Musk: They didn’t buy a subscription. By the way, it only costs like a thousand dollars a month, so if they just do that then they’re back in the saddle.

ARS: But you are saying it was throttled.

Musk: No, I’m saying…

ARS: I’m saying I mean was there a conversation that you had with somebody you said, ‘look, you know, I’m unhappy with the Times, they should either be buying the subscription or I don’t like their content or whatever.’ Whatever.

Musk: Any organization that refuses to buy a subscription is is not going to be recommended.

So, Sorkin and Musk are obviously talking at cross purposes here. Sorkin’s asking about deliberate throttling. Musk is trying to say that news orgs that don’t pay $1,000/month (which is not, as Musk implies, cheap, nor is it worth it, given how little traffic Twitter actually sends to news sites) aren’t recommended.

The correct question for Sorkin to ask here is what’s the difference between “not recommended” and “throttled,” if any, because the evidence suggests that the Times was deliberately punished beyond just not being “recommended.” And, yes, this exact kind of thing was part of what Musk said he had to buy Twitter to stop. So Sorkin jumps ahead (awkwardly) to try to sorta make that point:

ARS: But then what does that say about free speech? And what does it say about amplifying certain voices…

Musk: Well, it says free speech is not exactly free, it costs a little bit.

Which, um, is kinda a big claim. Especially given what he’s said about free speech in the past. Sorkin seems stumped for a moment and so Elon starts laughing and then comes up with some non sequitur from South Park.

Musk: You know, it’s like… uh… South Park like they say: you know freedom isn’t free, it costs of buck o’ five or whatever. So but it’s pretty cheap. Okay? It’s low cost low cost freedom.

So, again, he doesn’t actually answer the question or address the underlying issue, which is that for all the claims that he purchased Twitter to stop those kinds of knob fiddling that he (incorrectly) believes were being done for ideological reasons, he’s now much more actively fiddling with the knobs, including suppressing speech of those who won’t give him money.

The sense of entitlement, again, is astounding. It’s basically, “you don’t get free speech unless you pay me.”

Filed Under: andrew ross sorkin, elon musk, free speech, recommendations, shadowbanning, shadowbans, subscriptions, throttling
Companies: ny times, twitter, x

Important Things At Twitter Keep Breaking, And Making The Site More Dangerous

from the there's-no-autocompleting-safety dept

It turns out that if you fire basically all of the competent trust & safety people at your website, you end up with a site that is neither trustworthy, nor safe. We’ve spent months covering ways in which you cannot trust anything from Twitter or Elon Musk, and there have been some indications of real safety problems on the site, but it’s been getting worse lately, with two somewhat terrifying stories that show just how unsafe the site has become, and how risky it is to rely on Twitter for anything.

First, former head of trust & safety at Twitter, Yoel Roth, a few weeks after he quit, said that “if protected tweets stop working, run.” Basically, when core security features break down, it’s time to get the hell out of there.

Protected tweets do still seem to kinda be working, but a related feature, Twitter’s “circles,” which lets you tweet to just a smaller audience, broke. Back in February, some people noticed that it was “glitching,” in ways that were concerning, including a few reports that some things that were supposedly posted to a circle were viewable publicly, but there weren’t many details. However, in early April, such reports became widespread, with further reports of nude imagery that people thought was being shared privately among a smaller group being available publicly.

Twitter said nothing for a while, before finally admitting earlier this month that there was a “security incident” that may have exposed some of those supposed-to-be-private tweets, though it appears to have only sent that admission to some users via email, rather than publicly commenting on it.

The second incident is perhaps a lot more concerning. Last week, some users discovered that Twitter’s search autocomplete was recommending… um… absolutely horrific stuff, including potential child sexual abuse material and animal torture videos. As an NBC report by Ben Collins notes, Twitter used to have tools that stopped search from recommending such awful things, but it looks like someone at Twitter 2.0 just turned off that feature, enabling anyone to get recommended animal torture.

Yoel Roth, Twitter’s former head of trust and safety, told NBC News that he believes the company likely dismantled a series of safeguards meant to stop these kinds of autocomplete problems.

Roth explained that autocompleted search results on Twitter were internally known as “type-ahead search” and that the company had built a system to prevent illegal, illicit and dangerous content from appearing as autocompleting suggestions.

“There is an extensive, well-built and maintained list of things that filtered type-ahead search, and a lot of it was constructed with wildcards and regular expressions,” Roth said.

Roth said there was a several-step process to prevent gore and death videos from appearing in autocompleted search suggestions. The process was a combination of automatic and human moderation, which flagged animal cruelty and violent videos before they began to appear automatically in search results.

“Type-ahead search was really not easy to break. These are longstanding systems with multiple layers of redundancy,” said Roth. “If it just stops working, it almost defies probability.”

In other words, this isn’t something that just “breaks.” It’s something that someone had to go in and actively go through multiple steps to turn off.
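Roth’s description of wildcard and regular-expression filters suggests a fairly simple shape for that safeguard layer. Here is a purely hypothetical sketch in Python (the patterns, names, and candidate data are invented, not Twitter’s actual system) showing why such a filter keeps working unless someone deliberately removes it from the pipeline.

```python
# Hypothetical sketch of a regex/wildcard blocklist for autocomplete ("type-ahead")
# suggestions, loosely modeled on Roth's description above. The patterns, names,
# and candidate data are invented; this is not Twitter's actual system.
import re

BLOCKLIST = [
    re.compile(r"\bgore\b", re.I),
    re.compile(r"animal\s+(cruelty|tortur\w*)", re.I),  # wildcard-style suffix match
    re.compile(r"\bcsam\b", re.I),
]

def filter_suggestions(candidates: list[str]) -> list[str]:
    """Drop any autocomplete candidate matching a blocklisted pattern."""
    return [c for c in candidates if not any(p.search(c) for p in BLOCKLIST)]

suggestions = ["cats playing piano", "animal torture", "cat gore videos"]
print(filter_suggestions(suggestions))  # ['cats playing piano']
```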

After news of this started to get attention, Twitter responded by… turning off autocomplete entirely. Which, I guess, is better than leaving up the other version.

But, still, this is why you have a trust & safety team who works through this stuff to keep your site safe. It’s not just content moderation, as there’s a lot more to it than that. But Twitter 2.0 seems to have burned to the ground a ton of institutional knowledge and is just winging it. If that means recommending CSAM and animal torture videos, well, I guess that’s just the kind of site Twitter wants to be.

Filed Under: animal torture, autocomplete, csam, ella irwin, elon musk, recommendations, torture, trust and safety
Companies: twitter

Reminder: Section 230 Protects You When You Forward An Email

from the section-230-protects-you dept

Sometimes it feels like we need to keep pointing this out, but it’s (1) often forgotten and (2) really, really important. Section 230 doesn’t just protect “big tech.” It also doesn’t just protect “small tech.” It literally protects you and me. Remember, the key part of the law says that no provider or user of an interactive computer service shall be held liable for someone else’s speech:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

People often ignore or forget about that part, but it’s important. It’s come up in cases before, such as in Barrett v. Rosenthal. And, now we’ve got another such case, highlighted first by Prof. Eric Goldman.

Professor Janet Monge from the University of Pennsylvania, a curator of part of the Penn Museum, did not like what this HyperAllergic article said about her, and insisted that it was defamatory. She then sued a whole bunch of people, including the publisher, HyperAllergic, and the two authors of the article, Kinjal Dave and Jake Nussbaum. However, there were many others listed as well, including a fellow UPenn faculty member, Dr. Deborah Thomas, who did nothing more than share the article on an email listserv.

Back in February, the court easily dismissed the defamation claims against HyperAllergic and the two authors, mainly because the statements at issue are… true:

The allegations in Dr. Monge’s amended complaint demonstrate that this statement is, in all material respects, substantially true, and thus Hyperallergic, Ms. Dave, and Mr. Nussbaum cannot be held liable.

Other statements are non-defamatory because they’re “pure opinions that convey the subjective belief of the speaker and are based on disclosed facts.”

However, now in dealing with the claims against Dr. Thomas, the court was able to use Section 230 to dismiss them even more easily without having to even analyze the content again:

Dr. Monge, by asserting defamation claims against Dr. Thomas, seeks to treat Dr. Thomas as the publisher of the allegedly defamatory articles which Dr. Thomas shared via email. This is precisely the kind of factual scenario where CDA immunity applies. Therefore, Dr. Thomas’s conduct of sharing allegedly defamatory articles via email is immune from liability under the CDA.

Monge tried to get around this by arguing that Thomas “materially contributed” to the defamation by including commentary in the email forward, but the court notes that since she did not contribute any defamatory content, that’s not how this works. You have to imbue the content with its violative nature, and simply summarizing or expressing an opinion about the article in question is not that:

The CDA provides immunity to Dr. Thomas for sharing the allegedly defamatory articles via email and for allegedly suggesting that Dr. Monge mishandled the remains because Dr. Thomas did not materially contribute to the allegedly defamatory articles she forwarded.

As Prof. Goldman notes in his writeup of this case (which he describes as “an easy case” regarding 230), this highlights two key aspects of Section 230:

This is a good example of how Section 230 benefits online users, not just “Big Tech.” Dr. Thomas gets the same legal protection as Google and Facebook, even though she didn’t operate any system at all.

It’s also a reminder of how Section 230 currently protects the promotion of content, in addition to the hosting of it. That aspect remains pending with the US Supreme Court.

These are both important points. In the leadup to the Gonzalez case at the Supreme Court, lots of people kept trying to argue that merely recommending content somehow should not be covered by Section 230, but as this case shows, were that the rule, it would wipe out 230 in situations like this one, where its protections are so important.

Filed Under: deborah thomas, defamation, janet monge, opinion, recommendations, section 230, truth, users
Companies: hyperallergic

Seattle School District Files Laughably Stupid Lawsuit Against Basically Every Social Media Company For… ‘Being A Public Nuisance’

from the that's-not-how-any-of-this-works dept

I just wrote about Utah’s ridiculously silly plans to sue every social media company for being dangerous to children, in which I pointed out that the actual research doesn’t support the underlying argument at all. But I forgot that a few weeks ago, Seattle’s public school district actually filed just such a lawsuit, suing basically every large social media company for being a “public nuisance.” The 91-page complaint is bad. Seattle taxpayers should be furious that their taxes, which are supposed to be paying for educating their children, are, instead, going to lawyers to file a lawsuit so ridiculous that it’s entirely possible the lawyers get sanctioned.

The lawsuit was filed against a variety of entities and subsidiaries, but basically boils down to suing Meta (over Facebook, Instagram), Google (over YouTube), Snapchat, and TikTok. Most of the actual lawsuit reads like any one of the many, many moral panic articles you read about how “social media is bad for you,” with extremely cherry-picked facts that are not actually supported by the data. Indeed, one might argue that the complaint itself, filed by Seattle Public Schools lawyer Gregory Narver and the local Seattle law firm of Keller Rohrback, is chock full of the very sort of misinformation that they so quickly wish to blame the social media companies for spreading.

First: as we’ve detailed, the actual evidence that social media is harming children basically… does not exist. Over and over again studies show a near total lack of evidence. Indeed, as recent studies have shown, the vast majority of children get value from social media. There are plenty of moral panicky pieces from adults freaked out about what “the kids these days” are doing, but little evidence to support any of it. Indeed, the parents often seem to be driven into a moral panic fury by… misinformation they (the adults) encountered on social media.

The school’s lawsuit reads like one giant aggregation of basically all of these moral panic stories. First, it notes that the kids these days, they use social media a lot. Which, well, duh. But, honestly, when you look at the details it suggests they’re mostly using them for entertainment, meaning that it hearkens back to previous moral panics about every new form of entertainment from books, to TV, to movies, etc. And, even then, none of this even looks that bad? The complaint argues that this chart is “alarming,” but if you asked kids about how much TV they watched a couple decades ago, I’m guessing it would be similar to what is currently noted about YouTube and TikTok (and note that others like Facebook/Instagram don’t seem to get that much use at all according to this chart, but are still being sued).

There’s a whole section claiming to show that “research has confirmed the harmful effects” of social media on youth, but that’s false. It’s literally misinformation. It cherry-picks a few studies, nearly all of which are by a single researcher, and ignores the piles upon piles of research suggesting otherwise. Hell, even the graphic above that it uses to show the “alarming” addiction to social media is from Pew Research Center… the organization that just released a massive study about how social media has made life better for teens. Somehow, the Seattle Public Schools forgot to include that one. I wonder why?

Honestly, the best way to think about this lawsuit is that it is the Seattle Public School system publicly admitting that they’re terrible educators. While it’s clear that there are some kids who end up having problems exacerbated by social media, one of the best ways to deal with that is through good education. Teaching kids how to use social media properly, how to be a good digital citizen, how to have better media literacy for things they find on social media… these are all the kinds of things that a good school district builds into its curriculum.

This lawsuit is effectively the Seattle Public School system publicly stating “we’re terrible at our job, we have not prepared your kids for the real world, and therefore, we need to sue the media apps and services they use, because we failed in our job.” It’s not a good look. And, again, if I were a Seattle taxpayer — and especially if I were a Seattle taxpayer with kids in the Seattle public school district — I would be furious.

The complaint repeatedly points out that the various social media platforms have been marketed to kids, which, um, yes? That doesn’t make it against the law. While the lawsuit mentions COPPA, the law designed to protect kids, it’s not making a COPPA claim (which it can’t make anyway). Instead, it’s just a bunch of blind conjectures, leading to a laughably weak “public nuisance” claim.

Pursuant to RCW 7.48.010, an actionable nuisance is defined as, inter alia, “whatever is injurious to health or indecent or offensive to the senses, or an obstruction to the free use of property, so as to essentially interfere with the comfortable enjoyment of the life and property.”

Specifically, a “[n]uisance consists in unlawfully doing an act, or omitting to perform a duty, which act or omission either annoys, injures or endangers the comfort, repose, health or safety of others, offends decency . . . or in any way renders other persons insecure in life, or in the use of property.”

Under Washington law, conduct that substantially and/or unreasonably interferes with the Plaintiff’s use of its property is a nuisance even if it would otherwise be lawful.

Pursuant to RCW 7.48.130, “[a] public nuisance is one which affects equally the rights of an entire community or neighborhood, although the extent of the damage may be unequal.”

Defendants have created a mental health crisis in Seattle Public Schools, injuring the public health and safety in Plaintiff’s community and interfering with the operations, use, and enjoyment of the property of Seattle Public Schools

Employees and patrons, including students, of Seattle Public Schools have a right to be free from conduct that endangers their health and safety. Yet Defendants have engaged in conduct which endangers or injures the health and safety of the employees and students of Seattle Public Schools by designing, marketing, and operating their respective social media platforms for use by students in Seattle Public Schools and in a manner that substantially interferes with the functions and operations of Seattle Public Schools and impacts the public health, safety, and welfare of the Seattle Public Schools community

This reads just as any similar moral panic complaint would have read against older technologies. Imagine schools in the 1950s suing television or schools in the 1920s suing radios. Or schools in the 19th century suing book publishers for early pulp novels.

For what it’s worth, the school district also tries (and, frankly, fails) to take on Section 230 head on, claiming that it is “no shield.”

Plaintiff anticipates that Defendants will raise section 230 of the Communications Decency Act, 47 U.S.C. § 230(c)(1), as a shield for their conduct. But section 230 is no shield for Defendants’ own acts in designing, marketing, and operating social media platforms that are harmful to youth.

….

Section 230 does not shield Defendants’ conduct because, among other considerations: (1) Defendants are liable for their own affirmative conduct in recommending and promoting harmful content to youth; (2) Defendants are liable for their own actions designing and marketing their social media platforms in a way that causes harm; (3) Defendants are liable for the content they create that causes harm; and (4) Defendants are liable for distributing, delivering, and/or transmitting material that they know or have reason to know is harmful, unlawful, and/or tortious.

Except that, as we and many others explained in our briefs in the Supreme Court’s Gonzalez case, that’s all nonsense. Every one of those theories is still an attempt to hold the companies liable for the speech of their users. None of the actual complaints are about the companies’ own actions; rather, the district simply doesn’t like that the expression of those sites’ users is (the school district misleadingly claims) harmful to the kids in its schools.

First, Plaintiff is not alleging Defendants are liable for what third-parties have said on Defendants’ platforms but, rather, for Defendants’ own conduct. As described above, Defendants affirmatively recommend and promote harmful content to youth, such as proanorexia and eating disorder content. Recommendation and promotion of damaging material is not a traditional editorial function and seeking to hold Defendants liable for these actions is not seeking to hold them liable as a publisher or speaker of third party-content.

Yes, but recommending and promoting content is 1st Amendment protected speech. They can’t be sued for that. And it’s not the “recommendation” that they’re really claiming is harmful, but the speech being recommended, which (again) is protected by Section 230.

Second, Plaintiff’s claims arise from Defendants’ status as designers and marketers of dangerous social media platforms that have injured the health, comfort, and repose of its community. The nature of Defendants’ platforms centers around Defendants’ use of algorithms and other designs features that encourage users to spend the maximum amount of time on their platforms—not on particular third party content.

One could just as reasonably argue that the harm actually arises from the Seattle Public School system’s apparently total inability to properly prepare the children in their care for modern communications and entertainment systems. This entire lawsuit seems like the school district foisting the blame for their own failings on a convenient scapegoat.

There’s a lot more nonsense in the lawsuit, but hopefully the court quickly recognizes how ridiculous this is and tosses it out. Of course, if the Supreme Court screws up everything with a bad ruling in the Gonzalez case, well, then this lawsuit should give everyone pretty clear warning of what’s to come: a whole slew of utterly vexatious, frivolous lawsuits against internet websites for any perceived “harm.”

The only real takeaways from this lawsuit should be (1) Seattle parents should be furious, (2) the Seattle Public School system seems to be admitting it’s terrible at preparing children for the real world, and (3) Section 230 remains hugely important in protecting websites against these kinds of frivolous SLAPP suits.

Filed Under: 1st amendment, free speech, public nuisance, recommendations, seattle, section 230, vexatious lawsuits
Companies: bytedance, facebook, google, instagram, meta, seattle public schools, snapchat, tiktok, youtube

Supreme Court Takes Section 230 Cases… Just Not The Ones We Were Expecting

from the well,-this-is-not-great dept

So, plenty of Supreme Court watchers and Section 230 experts knew that this term was going to be a big one for Section 230… it’s just that we all expected the main issue to be the NetChoice cases regarding Florida and Texas’s social media laws (those cases will likely still get to SCOTUS later in the term). There were also a few other possible Section 230 cases that I thought SCOTUS might take on, but still, the Court surprised me by agreeing to hear two slightly weird Section 230 cases. The cases are Gonzalez v. Google and Twitter v. Taamneh.

There are a bunch of similar cases, many of which were filed by two law firms together, 1-800-LAW-FIRM (really) and Excolo Law. Those two firms have been trying to claim that anyone injured by a terrorist group should be able to sue internet companies because those terrorist groups happened to use those social media sites. Technically, they’re arguing “material support for terrorism,” but the whole concept seems obviously ridiculous. It’s the equivalent of the family of a victim of ISIS suing Toyota after finding out that some ISIS members drove Toyotas.

Anyway, we’ve been writing about a bunch of these cases, including both of the cases at issue here (which were joined at the hip by the 9th Circuit). Most of them get tossed out pretty quickly, as the court recognizes just how disconnected the social media companies are from the underlying harm. But one of the reasons they seem to have filed so many such cases all around the country was to try to set up some kind of circuit split to interest the Supreme Court.

The first case (Gonzalez) dealt with ISIS terrorist attacks in Paris in 2015. The 9th Circuit rejected the claim that Google provided material support to terrorists because ISIS posted some videos to YouTube. To try to get around the obvious 230 issues, Gonzalez argued that YouTube recommended some of those videos via the algorithm, and those recommendations should not be covered by 230. The second case, Taamneh, was… weird. It has a somewhat similar fact pattern, but dealt with the family of someone who was killed by an ISIS attack at a nightclub in Istanbul in 2017.

The 9th Circuit tossed out the Gonzalez case, saying that 230 made the company immune even for recommended content (which is the correct outcome) but allowed the Taamneh case to move forward, for reasons that had nothing to do with Section 230. In Taamneh, the district court initially dismissed the case entirely without even getting to the Section 230 issue by noting that Taamneh didn’t even file a plausible aiding-and-abetting claim. The 9th Circuit disagreed, said that there was enough in the complaint to plead aiding-and-abetting, and sent it back to the district court (which could then, in all likelihood, dismiss under Section 230). Oddly (and unfortunately) some of the judges in that ruling issued concurrences which meandered aimlessly, talking about how Section 230 had gone too far and needed to be trimmed back.

Gonzalez appealed the issue regarding 230 and algorithmic promotion of content, while Twitter appealed the aiding and abetting ruling (noting that every other court to try similar cases found no aiding and abetting).

Either way, the Supreme Court is taking up both cases and… it might get messy. Technically, the question the Supreme Court is asked to answer in the Gonzalez case is:

Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.

Basically: can we wipe out Section 230’s key liability protections for any content recommended? This would be problematic. The whole point of Section 230 is to put the liability on the proper party: the one actually speaking. Making sites liable for recommendations creates all of the same problems that making them liable for hosting would — specifically, requiring them to take on liability for content they couldn’t possibly thoroughly vet before recommending it. A ruling in favor of Gonzalez would create huge problems for anyone offering search on any website, because a “bad” content recommendation could lead to liability, not for the actual content provider, but for the search engine.

That can’t be the law, because that would make search next to impossible.

For what it’s worth, there were some other dangerously odd parts of the 9th Circuit’s Gonzalez rulings regarding Section 230 that are ripe for problematic future interpretation, but those parts appear not to have been included in the cert petition.

In Taamneh, the question is focused on aiding and abetting, but it ties into Section 230, because it asks whether you can hold a website liable for aiding and abetting if it tries to remove terrorist content but a plaintiff argues it could have been more aggressive in weeding out such content. There’s also a second question of whether you can hold a website liable for an “act of international terrorism” when the actual act of terrorism had nothing whatsoever to do with the website and was conducted entirely off of it.

(1) Whether a defendant that provides generic, widely available services to all its numerous users and “regularly” works to detect and prevent terrorists from using those services “knowingly” provided substantial assistance under 18 U.S.C. § 2333 merely because it allegedly could have taken more “meaningful” or “aggressive” action to prevent such use; and (2) whether a defendant whose generic, widely available services were not used in connection with the specific “act of international terrorism” that injured the plaintiff may be liable for aiding and abetting under Section 2333.

These cases should worry everyone, especially if you like things like searching online. My biggest fear, honestly, is that this Supreme Court (as it’s been known to do) tries to split the baby (which, let us remember, kills the baby) and says that Section 230 doesn’t apply to recommended content, but that the websites still win because the things on the website are so far disconnected from the actual terrorist acts.

That really feels like the kind of solution that the Roberts court might like, thinking that it’s super clever when really it’s just dangerously confused. It would open up a huge Pandora’s box of problems, leading to all sorts of lawsuits regarding any kind of recommended content, including search, recommendation algorithms, your social media feeds, and more.

A good ruling (if such a thing is possible) would be a clear statement that of course Section 230 protects algorithmically recommended content, because Section 230 is about properly putting liability on the creator of the content and not the intermediary. But we know that Justices Thomas and Alito are just itching to destroy 230, so we’re already down two Justices to start.

Of course, given that this court is also likely to take up the NetChoice cases later this term, it is entirely possible that next year the Supreme Court rules that (1) websites are liable for failing to remove certain content (in these two cases) and (2) websites can be forced to carry all content.

It’ll be a blast figuring out how to make all that work. Though, some of us will probably have to do that figuring out off the internet, since it’s not clear how the internet will actually work at that point.

Filed Under: aiding and abetting, algorithms, gonzalez, isis, recommendations, section 230, supreme court, taamneh, terrorism, terrorism act
Companies: google, twitter

Minnesota Pushing Bill That Says Websites Can No Longer Be Useful For Teenagers

from the i-mean-what-is-going-on-here dept

The various “for the children” moral panic bills about the internet are getting dumber. Over in Minnesota, the legislature has moved forward with a truly stupid bill, which the legislature’s own website says could make the state “a national leader in putting new guardrails on social media platforms.” The bill is pretty simple — it says that any social media platform with more than 1 million account holders (and operating in Minnesota) cannot use an algorithm to recommend content to users under the age of 18.

Prohibitions; social media algorithm. (a) A social media platform with more than 1,000,000 account holders operating in Minnesota is prohibited from using a social media algorithm to target user-created content at an account holder under the age of 18.

(b) The operator of a social media platform is liable to an individual account holder who received user-created content through a social media algorithm while the individual account holder was under the age of 18 if the operator of a social media platform knew or had reason to know that the individual account holder was under the age of 18. A social media operator subject to this paragraph is liable to the account holder for (1) any regular or special damages, (2) a statutory penalty of $1,000 for each violation of this section, and (3) any other penalties available under law.

So, um, why? I mean, I get that for computer-illiterate people the word “algorithm” is scary. And there’s some ridiculous belief among people who don’t know any better that recommendation algorithms are like mind control. But the point of an algorithm is… to recommend content. That is, to make a social media service (or any other kind of service) useful. Without it, you just get an undifferentiated mass of content, and that’s not very useful.

In most cases, algorithms are actually helpful. They point you to the information that actually matters to you and avoid the nonsense that doesn’t. Why, exactly, is that bad?
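To make that concrete, here’s a deliberately simplified sketch (purely illustrative, and not how any real platform actually works) of what a recommendation algorithm boils down to: rank the available content by some estimate of relevance to a particular user instead of dumping the whole undifferentiated pile on them. The item names and scoring rule are invented for the example.

```python
# Toy illustration only: rank items by overlap with topics this user has
# engaged with before. Real systems are vastly more complex, but the core
# idea -- "show the most relevant stuff first" -- is the same.
from collections import Counter

def recommend(items, user_history, top_n=5):
    """items: list of (item_id, set of topic tags); user_history: topics the user clicked on."""
    interest = Counter(user_history)
    def score(item):
        _, topics = item
        return sum(interest[t] for t in topics)  # how strongly this item matches past interests
    ranked = sorted(items, key=score, reverse=True)
    return [item_id for item_id, _ in ranked[:top_n]]

catalog = [("v1", {"soccer", "news"}), ("v2", {"knitting"}), ("v3", {"cooking"})]
# A user who mostly watches soccer and cooking videos sees those items first.
print(recommend(catalog, ["soccer", "soccer", "cooking"]))  # ['v1', 'v3', 'v2']
```

Take that ranking step away, as this bill would require for anyone under 18, and the same content doesn’t disappear; it just shows up unsorted.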

Also, it seems that under this law, websites would have to create a different kind of service for those under 18 and for those over 18, and carefully track how old those users are, which seems silly. Indeed, it would seem like this bill should raise pretty serious privacy concerns, because now companies are going to have to much more aggressively track age information, meaning they need to be much more intrusive. Age verification is a difficult problem to solve, and with a bill like this, making a mistake (and every website will make mistakes) will be costly.

But, the reality is that the politicians pushing this bill know how ridiculous and silly it is, and how algorithms are actually useful. Want to know how I know? Because the bill has a very, very, very telling exemption:

Exceptions. User-created content that is created by a federal, state, or local government or by a public or private school, college, or university is exempt from this section.

Algorithms recommending content are bad, you see, except if it’s recommending content from us, your loving, well-meaning leaders. For us, keep on recommending our content and only our content.

Filed Under: algorithms, for the children, minnesota, recommendations

House Democrats Decide To Hand Facebook The Internet By Unconstitutionally Taking Section 230 Away From Algorithms

from the this-is-not-a-good-idea dept

We’ve been pointing out for a while now that mucking with Section 230 as an attempt to “deal” with how much you hate Facebook is a massive mistake. It’s also exactly what Facebook wants, because as it stands right now, Facebook is actually losing users of its core product, and the company has realized that burdening competitors with regulations — regulations that Facebook can easily handle with its massive bank account — is a great way to stop competition and lock in Facebook’s dominant position.

And yet, for reasons that still make no sense, regulators (and much of the media) seem to believe that Section 230 is the only regulation to tweak to get at Facebook. This is both wrong and shortsighted, but alas, we now have a bunch of House Democrats getting behind a new bill that claims to be narrowly targeted to just remove Section 230 from algorithmically promoted content. The full bill, the “Justice Against Malicious Algorithms Act of 2021,” is poorly targeted, poorly drafted, and shows a near total lack of understanding of how basically anything on the internet works. I believe it’s well-meaning, but it was clearly drafted without talking to anyone who understands either the legal realities or the technical realities. It’s an embarrassing release from four House members of the Energy & Commerce Committee who should know better (and at least 3 of the 4 have done good work in the past on important tech-related bills): Frank Pallone, Mike Doyle, Jan Schakowsky, and Anna Eshoo.

The key part of the bill is that it removes Section 230 for “personalized recommendations.” It would insert the following “exception” into 230.

(f) PERSONALIZED RECOMMENDATION OF INFORMATION PROVIDED BY ANOTHER INFORMATION CONTENT PROVIDER.—

(1) IN GENERAL.—Subsection (c)(1) does not apply to a provider of an interactive computer service with respect to information provided through such service by another information content provider if—

(A) such provider of such service—

(i) knew or should have known such provider of such service was making a personalized recommendation of such information; or

(ii) recklessly made a personalized recommendation of such information; and

(B) such recommendation materially contributed to a physical or severe emotional injury to any person.

So, let’s start with the basics. I know there’s been a push lately among some — including the whistleblower Frances Haugen — to argue that the real problem with Facebook is “the algorithm” and how it recommends “bad stuff.” The evidence to support this claim is actually incredibly thin, but we’ll leave that aside for now. But at its heart, “the algorithm” is simply a set of recommendations, and recommendations are opinions and opinions are… protected expression under the 1st Amendment.

Exempting Section 230 from algorithms cannot change this underlying fact about the 1st Amendment. All it means is that rather than getting a quick dismissal of the lawsuit, you’ll have a long, drawn out, expensive lawsuit on your hands, before ultimately finding out that of course algorithmic recommendations are protected by the 1st Amendment. For much more on the problem of regulating “amplification,” I highly, highly recommend reading Daphne Keller’s essay on the challenges of regulating amplification (or listen to the podcast I did with Daphne about this topic). It’s unfortunately clear that none of the drafters of this bill read Daphne’s piece (or if they did, they simply ignored it, which is worse). Supporters of this bill will argue that in simply removing 230 from amplification/algorithms, this is a “content neutral” approach. Yet as Daphne’s paper detailed, that does not get you away from the serious Constitutional problems.

Another way to think about this: this is effectively telling social media companies that they can be sued for their editorial choices of which things to promote. If you applied the same thinking to the NY Times or CNN or Fox News or the Wall Street Journal, you might quickly recognize the 1st Amendment problems here. I could easily argue that the NY Times’ constant articles misrepresenting Section 230 subject me to “severe emotional injury.” But of course, any such lawsuit would get tossed out as ridiculous. Does flipping through a magazine and seeing advertisements of products I can’t afford subject me to severe emotional injury? How is that different than looking at Instagram and feeling bad that my life doesn’t seem as cool as some lame influencer?

Furthermore, this focus on “recommendations” is… kinda weird. It ignores all the reasons why recommendations are often quite good. I know that some people have a kneejerk reaction against such recommendations but nearly every recommendation engine I use makes my life much better. Nearly every story I write on Techdirt I find via Twitter recommending tweets to me or Google News recommending stories to me — both based on things I’ve clicked on in the past. And both are (at times surprisingly) good at surfacing stories I would be unlikely to find otherwise, and doing so quickly and efficiently.

Yet, under this plan, all such services would be at significant risk of incredibly expensive litigation over and over and over again. The sensible thing for most companies to do in such a situation is to make sure that only bland, uncontroversial stuff shows up in your feed. This would be a disaster for marginalized communities. Black Lives Matter? That can’t be allowed as it might make people upset. Stories about bigotry, or about civil rights violations? Too “controversial” and might contribute to emotional injury.

The backers of this bill also argue that the bill is narrowly tailored and won’t destroy the underlying Section 230, but that too is incorrect. As Cathy Gellis just pointed out, removing the procedural benefits of Section 230 takes away all the benefits. Section 230 helps get you out of these cases much more quickly. But under this bill, everyone will add a claim under this clause that the “recommendation” caused “emotional injury,” and now you have to litigate whether or not you’re even covered by Section 230. That means no more procedural benefit of 230.

The bill has a “carve out” for “smaller” companies, but again gets all that wrong. It seems clear that they either did not read, or did not understand, this excellent paper by Eric Goldman and Jess Miers about the important nuances of regulating internet services by size. In this case, the “carve out” is for sites that have 5 million or fewer “unique monthly visitors or users for not fewer than 3 of the preceding 12 months.” Leaving aside the rather important point that there really is no agreed upon notion of what a “unique monthly visitor” actually is (seriously, every stats package will give you different results, and now every site will have incentive to use a stats package that lies and gives you lower results to get beneath the number), that number is horrifically low.
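To illustrate the measurement problem, here’s a small hypothetical (the log entries are invented; this is not any real stats package): the same month of traffic produces three different “unique visitor” totals depending on what you decide counts as a visitor, and the bill never says which method controls.

```python
# Hypothetical request log for one month. Counting "unique visitors" by IP,
# by cookie, or by (IP, user agent) gives three different answers for the
# exact same traffic -- and a site near the 5 million threshold has every
# incentive to pick whichever method yields the smallest number.
requests = [
    {"ip": "10.0.0.1", "cookie": "abc", "ua": "Firefox"},
    {"ip": "10.0.0.1", "cookie": "def", "ua": "Chrome"},   # two people, one household
    {"ip": "10.0.0.2", "cookie": "abc", "ua": "Firefox"},  # same person, now on mobile data
    {"ip": "10.0.0.3", "cookie": None,  "ua": "Safari"},   # cookies blocked entirely
]

by_ip     = len({r["ip"] for r in requests})                     # 3
by_cookie = len({r["cookie"] for r in requests if r["cookie"]})  # 2
by_ip_ua  = len({(r["ip"], r["ua"]) for r in requests})          # 4

print(by_ip, by_cookie, by_ip_ua)  # 3 2 4 -- pick whichever you like
```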

Earlier this year, I suggested a test suite of websites that any internet regulation bill should be run against, highlighting that bills like these impact way more than Facebook and Google. And lots and lots of the sites I mention get way beyond 5 million monthly views.

So under this bill, a company like Yelp would face real risk in recommending restaurants to you. If you got food poisoning, that would be an injury you could now sue Yelp over. Did Netflix recommend a movie to you that made you sad? Emotional injury!

As Berin Szoka notes in a Twitter thread about the bill, this bill from Democrats actually gives Republican critics of 230 exactly what they wanted: a tool to launch a million “SLAM” suits — Strategic Lawsuits Against Moderation. And, as such, he notes that this bill would massively help those who use the internet to spread baseless conspiracy theories, because THEY WOULD NOW GET TO SUE WEBSITES for their moderation choices. This is just one example of how badly the drafters of the bill misunderstand Section 230 and how it functionally works. It’s especially embarrassing that Rep. Eshoo would be a co-sponsor of a bill like this, since it would create a lawsuit free-for-all for companies in her district.

10/ In short, Republicans have long aimed to amend #Section230 to enable Strategic Lawsuits Against Moderation (SLAMs)

This new Democratic bill would do the same

Who would benefit? Those who use the Internet to spread hate speech and lies about elections, COVID, etc

— Berin Szóka (@BerinSzoka) October 14, 2021

Another example of the wacky drafting in the bill is the “scienter” bit. Scienter is basically whether or not the defendant had knowledge that what they were doing was wrongful. So in a bill like this, you’d expect that the scienter would require the platforms to know that the information they were recommending was harmful. That’s the only standard that would even make sense (though it would still be constitutionally problematic). However, that’s not how it is in the bill. Instead, the scienter is… that the platform knows it recommends stuff. That’s it. In the quote above, the line that matters is:

such provider of a service knew or should have known such provider of a service was making a personalized recommendation of such information

In other words, the scienter here… is that you knew you were recommending stuff personally. Not that it was bad. Not that it was dangerous. Just that you were recommending stuff.

Another drafting oddity is the definition of a “personalized recommendation.” It just says it’s a personalized recommendation if it uses a personalized algorithm. And the definition of “personalized algorithm” is this bit of nonsense:

The term ‘personalized algorithm’ means an algorithm that relies on information specific to an individual.

“Information specific to an individual” could include things like… location. I’ve seen some people suggest that Yelp’s recommendations wouldn’t be covered by this law because they’re “generalized” recommendations, not “personal” ones, but if Yelp is recommending stuff to me based on my location (kinda necessary), then that’s information specific to me, and thus no more 230 for the recommendation.
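Here’s a sketch of the kind of “generalized” recommendation people presumably have in mind when they say Yelp wouldn’t be covered (the function and data are hypothetical, not Yelp’s actual system). The ranking logic is identical for every user; the only individual-specific input is the user’s location, and that one input appears to be enough to satisfy the bill’s definition of a “personalized algorithm.”

```python
import math

# Hypothetical restaurant ranking: take the closest places, then surface the
# best-rated among them. Nothing here is tailored to the user's tastes, but
# it "relies on information specific to an individual" (their location),
# which is all the bill's definition requires.
def nearby_top_rated(restaurants, user_location, top_n=3):
    """restaurants: list of dicts with 'name', 'rating', 'lat', 'lon' keys."""
    lat, lon = user_location  # <- the individual-specific input
    closest = sorted(restaurants, key=lambda r: math.hypot(r["lat"] - lat, r["lon"] - lon))[:10]
    return sorted(closest, key=lambda r: r["rating"], reverse=True)[:top_n]

places = [{"name": "Pho 98", "rating": 4.6, "lat": 44.95, "lon": -93.10},
          {"name": "Burger Barn", "rating": 3.2, "lat": 44.96, "lon": -93.11}]
print(nearby_top_rated(places, (44.95, -93.10)))  # best-rated nearby spots, location-dependent
```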

It also seems like this would be hell for spam filters. I train my spam filter, so the algorithm it uses is specific to me and thus personalized. But I’m pretty sure that under this bill a spammer whose emails are put into a spam filter can now sue, claiming injury. That’ll be fun.

Meanwhile, if this passes, Facebook will be laughing. The services that have successfully taken a bite out of Facebook’s userbase over the last few years have tended to be the ones with a better algorithm for recommending things, like TikTok. The one Achilles’ heel Facebook has — its recommendations aren’t as good as those of newer upstarts — gets protected by this bill.

Almost nothing here makes any sense at all. It misunderstands the problem. It prescribes the wrong solution. It totally misunderstands Section 230. It creates massive downside consequences for competitors to Facebook and for users. It enables those who are upset about moderation choices to sue companies (helping conspiracy theorists and misinformation peddlers). I can’t see a single positive thing that this bill does. Why the hell is any politician supporting this garbage?

Filed Under: algorithms, anna eshoo, frank pallone, intermediary liability, jan schakowsky, mike doyle, news feeds, personalized recommendations, recommendations, section 230
Companies: facebook, yelp