tiktok – Techdirt

Trump May Kill America’s Performative TikTok Ban For The Benefit Of His Billionaire Buddy

from the you-know,-for-freedom dept

We’ve noted more times than I can count how the U.S. ban of TikTok (yes, yes I know, it’s not a ban, it’s a forced divestment ByteDance was never going to agree to) was pointless fucking performance art.

Not only was it unconstitutional, it did nothing to actually address the privacy and national security issues it professed to fix. We’re a country too corrupt to pass even a baseline privacy law. We’re too corrupt to even regulate data brokers that routinely hoover up oceans of sensitive consumer data and then sell it to any nitwit with two nickels to rub together (including domestic extremists and foreign intelligence).

Hyperventilating about a single Chinese app in an ocean of dodgy and unregulated consumer surveillance was always more about greed and protecting Facebook and U.S. tech companies from competition than it ever was about seriously addressing U.S. privacy, NatSec, or propaganda concerns.

With that as backdrop, Trump is telling his allies (for whatever that’s ultimately worth) that he wants to reverse the U.S. ban on TikTok. The law, passed last April, gave ByteDance until January 19 to find a U.S. buyer or face getting kicked out of the country.

“The president-elect has not yet announced a decision on if, or how to proceed, but some advisers expect him to intervene on TikTok’s behalf if necessary — including Conway and three others, who spoke on the condition of anonymity to discuss private conversations. Trump promised during the campaign to protect the app even though he also signed an executive order in his first term that would have effectively banned it: “I’m gonna save TikTok,” he said in one of his first videos on the app this June.”

Trump of course isn’t operating with any sort of genuine, good faith policy or intellectual curiosity here. He correctly believes TikTok can be useful for Republicans’ massive online propaganda efforts, and that, like most feckless U.S. tech companies, it can ultimately be bullied away from competently moderating right wing propaganda and race-baiting bile on the internet if it wants to keep doing business here.

It’s also just about money. In 2020, Trump wanted to ban TikTok when he thought there was a chance he could offload it to his buddies Larry Ellison and Safra Catz at Oracle. In 2024, Trump’s motivation is cozying up to Jeffrey Yass, a major billionaire Trump donor and creator of the conservative Club for Growth, who holds a 15% stake in TikTok’s Chinese parent company ByteDance.

A Trump reversal of a TikTok ban (which the Post explains won’t be easy) will result in all sorts of entertaining chaos among his bobble-headed brigadiers. Kellyanne Conway now works for Yass and Club for Growth, defending TikTok in the press. In contrast, Trump’s likely FCC boss Brendan Carr has spent the last four years crying about TikTok to please Trump and get his face on cable TV.

As Conway’s quote to the Post makes clear, Yass and Trump want to frame this self-serving reversal as something profoundly more noble than it actually is, leveraging the fact that this ban was always a giant political turd for Democrats:

“He appreciates the breadth and reach of TikTok, which he used masterfully along with podcasts and new media entrants to win,” said Kellyanne Conway, who ran Trump’s first presidential campaign, served in the White House and remains close to him and now also advocates for TikTok. “There are many ways to hold China to account outside alienating 180 million U.S. users each month. Trump recognized early on that Democrats are the party of bans — gas-powered cars, menthol cigarettes, vapes, plastic straws and TikTok — and to let them own that draconian, anti-personal-choice space.”

Then of course you’ve got Mark Zuckerberg and Facebook, who, ahead of the ban, were caught seeding no limit of bogus moral panics in DC and among press outlets for anticompetitive reasons (which oddly gets omitted from most press coverage of this story).

Anybody who thinks any of these folks care about protecting consumer privacy or national security is deluding themselves. The U.S. refusal to regulate data brokers or pass a privacy law makes it repeatedly, painfully clear that this country has prioritized making money over consumer privacy and public safety. Any pretense we care about fighting propaganda is even more laughable in the wake of this election.

Another major reason the U.S. government doesn’t want to seriously tackle consumer privacy is because the dysfunctional and unaccountable data broker space allows it to spy on Americans without getting a pesky warrant. Banning TikTok was a performance that distracted the public from our broader widespread failures on propaganda, surveillance, consumer protection, privacy, and national security.

There certainly are privacy, propaganda, and national security concerns related to TikTok. The company will never be confused with an ethical one. But that’s never really been what any of this was about for this pit of self-serving vipers, who were primarily interested in using those issues (and xenophobia) as cover to prop up their varied and often conflicting financial ambitions.

Filed Under: donald trump, jeffrey yass, national security, privacy, propaganda, social media, surveillance, tiktok ban
Companies: bytedance, tiktok

Ctrl-Alt-Speech: Presidents & Precedents

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: australia, canada, content moderation, donald trump, social media
Companies: tiktok, twitter, x

Ctrl-Alt-Speech: An Appeal A Day Keeps The Censor Away

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: ai, artificial intelligence, child safety, content moderation, dsa, scams
Companies: facebook, instagram, kick, meta, tiktok, truth social

A Whole Bunch Of States File Garbage Grandstanding Lawsuits Against TikTok With The Main Complaint Being ‘Kids Like It’

from the what-a-waste-of-everyone's-time dept

You may have seen the news yesterday about 14 attorneys general filing lawsuits against TikTok. It was covered in a bunch of places, including Reuters, CNBC, the NY Times, the Washington Post, NPR, CNN and more. And, bizarrely, none of them seemed to include links to any actual complaint. It took me a little while to realize why. Partially it’s because the mainstream media often doesn’t link to actual complaints (though a lot of the sources named here are normally better about it). But more likely the issue was that this wasn’t “14 attorneys general team up to file a lawsuit” like we’re used to. It was “14 AGs coordinated to file individual lawsuits in their home state courts, alleging the same thing.”

We’ll get into a bit more detail shortly about what’s in the lawsuits, but the summary is “kids use TikTok too much, so the features kids like must be illegal.” It’s really that simplistically stupid.

And it’s actually fifteen AGs, because on Friday, Ken Paxton also filed a case against TikTok that has some similarities. This suggests that the organizers of all these cases approached Paxton to join in. Then, like the total asshole he is, he decided to file a few days early to try to steal the thunder.

So, anyway, here are thirteen of these complaints. I am providing them here, despite the fact that Techdirt’s entire budget probably doesn’t cover the cost of coffee in the newsrooms of all of the publications listed above who refused to do the work that just took me quite some time.

Just something to think about when you consider which kinds of news orgs you want to support. It’s fourteen (originally thirteen) instead of fifteen because (1) Oregon hasn’t actually filed its complaint yet, and says it will sometime today (a day after the rest), but it wasn’t available as I was writing this and (2) Kentucky doesn’t seem to have put out a press release about its filing (every other state did) (update: thanks to a reader for getting me Kentucky’s lawsuit, which has now been added).

Anyway… it’s not worth going through all the complaints other than to note that most of them are quite similar and can be summed up as “TikTok made a product kids like to use, and we’re sure that violates consumer protection laws somehow.”

I’ll pick on New York’s filing out of the batch because its table of contents lays out the argument in a way that makes it easy to see they’re literally saying “oh, providing features users like is getting kids to use the site more”:

I. TikTok’s Business Model is to Maximize Young Users’ Time on the Platform
II. TikTok is Designed to Be Addictive
  A. Conscious Exploitation of Dopamine in Young Users
  B. TikTok Uses Multiple Features to Manipulate Users into Compulsive and Excessive Use
    1. “For You” Feed
    2. Autoplay
    3. Endless Scroll
    4. Ephemeral Content: TikTok Stories and TikTok LIVE
    5. Push Notifications
    6. Likes, Comments, and Other Interactions
  C. TikTok Designs and Provides Beauty Filters That It Knows Harm Young Users
  D. TikTok Challenges Have Caused Deaths and Illegal Behavior
III. Minors Are Especially Susceptible to Compulsive Use of TikTok
  A. The United States Surgeon General’s Warning
  B. Teen Mental Health in New York has Declined for Years
  C. Social Media Addiction Compared to Substance Addiction

This goes on for another couple pages, but it doesn’t get much better. There’s a heading claiming that “journalists have reported on the harms on TikTok for years.” Which does not mean that TikTok is liable for kids doing stupid shit on TikTok. You would think these high-powered lawyers would also know that journalists aren’t always entirely accurate.

But just looking at the parts above, this lawsuit is laughable. First of all, any business’s focus is to try to maximize customers’ use. That’s… capitalism. Are all these states saying that restaurants that serve good food are violating consumer protection laws by getting people to want to come back frequently?

Again, features that people like are not illegal. We don’t let the government decide what features go into software for good reason, and you can’t do that just because you claim it’s a consumer protection issue with no actual evidence beyond whims.

As for the TikTok challenges, that’s not TikTok doing it. It’s TikTok users, such challenges long predate TikTok, and many of the reports of viral TikTok challenges are the media falling for myths and nonsense. Blaming TikTok for challenges is not just weird, it’s legally incomprehensible. Years ago such challenges would get sent around via email or Usenet forums or whatever. Did we sue email providers? Of course not.

That last section is also scientific nonsense. The Surgeon General’s report makes it quite clear that the scientific evidence does not say that social media is inherently harmful to mental health. And, no, “social media addiction” is nothing like substance addiction, which is literally a chemical addiction.

These lawsuits are embarrassing nonsense.

If you file a lawsuit, you have to explain your cause of action. You don’t get to just say “infinite scroll is bad, therefore it violates consumer protection laws.” Notably, the NY complaint spends 64 pages screaming about how evil TikTok is and only gets to the actual claims at the very end, with basically no explanation. It just vaguely states that all of the stuff people are mad about regarding TikTok violates laws against “fraudulent” and “deceptive” business conduct.

Honestly, these cases are some of the weakest lawsuits I’ve ever seen filed by a state AG.

In many ways, they’re quite similar to the many, many lawsuits filed over the last couple of years against social media companies by school districts. Those were embarrassing enough, but at least I could understand them: they were filed by greedy class action plaintiffs’ lawyers hoping to get a massive payday and not caring about the actual evidence.

These cases, by contrast, were filed by elected officials using taxpayer money. For what? Well, obviously for election season. Every single one of these AGs is at least a good enough lawyer to know that the lawsuits are absolute fucking garbage and embarrassing.

But golly, it’s one month from election day. So why not get that press release out there claiming that you’re “protecting kids from the evils of TikTok”?

It’s cynical fucking nonsense. All of the Attorneys General involved should be ashamed of wasting taxpayer money, as well as valuable court time and resources, on such junk. There are plenty of legitimate consumer protection issues to take up. But we’re wasting taxpayer money because TikTok has a “for you” feed that tries to recommend more interesting content?

Come on.

Filed Under: addictive feeds, california, child safety, consumer protection, dc, for you, ken paxton, letitia james, moral panic, new york, rob bonta, state attorneys general, texas
Companies: tiktok

Ctrl-Alt-Speech: Blunder From Down Under

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Riana Pfefferkorn, a Policy Fellow at the Stanford Institute for Human Centered AI. They cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: asio, australia, child safety, content moderation, first amendment, social media, utah
Companies: snap, tiktok

The Third Circuit’s Section 230 Decision In Anderson v. TikTok Is Pure Poppycock.

from the that's-not-how-any-of-this-works dept

Last week, the U.S. Court of Appeals for the Third Circuit concluded, in Anderson v. TikTok, that algorithmic recommendations aren’t protected by Section 230. Because they’re the platforms’ First Amendment-protected expression, the court reasoned, algorithms are the platforms’ “own first-party speech,” and thus fall outside Section 230’s liability shield for the publication of third-party speech.

Of course, a platform’s decision to host a third party’s speech at all is also First Amendment-protected expression. By the Third Circuit’s logic, then, such hosting decisions, too, are a platform’s “own first-party speech” unprotected by Section 230.

We’ve already hit (and not for the last time) the key problem with the Third Circuit’s analysis. “Given … that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms,” the court declared, “it follows that doing so amounts to first-party speech under [Section] 230, too.” No, it does not. Assuming a lack of overlap between First Amendment protection and Section 230 protection is a basic mistake.

Section 230(c)(1) says that a website shall not be “treated as the publisher” of most third-party content it hosts and spreads. Under the ordinary meaning of the word, a “publisher” prepares information for distribution and disseminates it to the public. Under Section 230, therefore, a website is protected from liability for posting, removing, arranging, and otherwise organizing third-party content. In other words, Section 230 protects a website as it fulfills a publisher’s traditional role. And one of Section 230’s stated purposes is to “promote the continued development of the Internet”—so the statute plainly envisions the protection of new, technology-driven publishing tools as well.

The plaintiffs in Anderson are not the first to contend that websites lose Section 230 protection when they use fancy algorithms to make publishing decisions. Several notable court rulings (all of them unceremoniously brushed aside by the Third Circuit, as we shall see) reject the notion that algorithms are special.

The Second Circuit’s 2019 decision in Force v. Facebook is especially instructive. The plaintiffs there argued that “Facebook’s algorithms make … content more ‘visible,’ ‘available,’ and ‘usable.’” They asserted that “Facebook’s algorithms suggest third-party content to users ‘based on what Facebook believes will cause the user to use Facebook as much as possible,’” and that “Facebook intends to ‘influence’ consumers’ responses to that content.” As in Anderson, the plaintiffs insisted that algorithms are a distinct form of speech, belonging to the platform and unprotected by Section 230.

The Second Circuit was unpersuaded. Nothing in the text of Section 230, it observed, suggests that a website “is not the ‘publisher’ of third-party information when it uses tools such as algorithms that are designed to match that information with a consumer’s interests.” In fact, it noted, the use of such tools promotes Congress’s express policy “to promote the continued development of the Internet.”

By “making information more available,” the Second Circuit wrote, Facebook was engaging in “an essential part of traditional publishing.” It was doing what websites have done “on the Internet since its beginning”—“arranging and distributing third-party information” in a manner that “forms ‘connections’ and ‘matches’ among speakers, content, and viewers of content.” It “would turn Section 230(c)(1) upside down,” the court concluded, to hold that Congress intended to revoke Section 230 protection from websites that, whether through algorithms or otherwise, “become especially adept at performing the functions of publishers.” The Second Circuit had no authority, in short, to curtail Section 230 on the ground that by deploying algorithms, Facebook had “fulfill[ed] its role as a publisher” too “vigorously.”

As the Second Circuit recognized, it would be exceedingly difficult, if not impossible, to draw logical lines, rooted in law, around how a website arranges third-party content. What in Section 230 would enable a court to distinguish between content placed in a “for you” box, content that pops up in a newsfeed, content that appears at the top of a homepage, and content that’s permitted to exist in the bowels of a site? Nothing. It’s the wrong question. The question is not how the website serves up the content; it’s what makes the content problematic. When, under Section 230, is third-party content also a website’s first-party content? Only, the Second Circuit explained, when the website “directly and materially contributed to what made the content itself unlawful.” This is the “crucial distinction”—presenting unlawful content (protected) versus creating unlawful content (unprotected).

Perhaps you think the problem of drawing non-arbitrary lines around different forms of presentation could be solved, if only we could get the best and brightest judges working on it? Well, the Supreme Court recently tried its luck, and it failed miserably. To understand the difficulties with excluding algorithmic recommendations from Section 230, all the Third Circuit had to do was meditate on the oral argument in Gonzalez v. Google. It was widely assumed that the justices took that case because at least some of them wanted to carve algorithms out of Section 230. How hard could it be? But once the rubber hit the road, once they had to look at the matter closely, the justices had not the faintest idea how to do that. They threw up their hands, remanding the case without reaching the merits.

The lesson here is that creating an “algorithm” rule would be rash and wrong—not least because it would involve butchering Section 230 itself—and that opinions such as Force v. Facebook are correct. But instead of taking its cues from the Gonzalez non-decision, the Third Circuit looked to the Supreme Court’s newly released decision in Moody v. NetChoice.

Moody confirms (albeit, alas, in dicta) that social media platforms have a First Amendment right to editorial control over their newsfeeds. The right to editorial control is the right to decide what material to host or block or suppress or promote, including by algorithm. These are all expressive choices. But the Third Circuit homed in on the algorithm piece alone. Because Moody declares algorithms a platform’s protected expression, the Third Circuit claims, a platform does not enjoy Section 230 protection when using an algorithm to recommend third-party content.

The Supreme Court couldn’t coherently separate algorithms from other forms of presentation, and the distinguishing feature of the Third Circuit’s decision is that it never even tries to do so. Moody confirms that choosing to host or block third-party content, too, is a platform’s protected expression. Are those choices “first-party speech” unprotected by Section 230? If so—and the Third Circuit’s logic requires that result—Section 230(c)(1) is a nullity.

This is nonsense. And it’s lazy nonsense to boot. Having treated Moody’s stray lines about algorithms like live hand grenades, the Third Circuit packs up and goes home. Moody doesn’t break new ground; it merely reiterates existing First Amendment principles. Yet the Third Circuit uses Moody as one neat trick to ignore the universe of Section 230 precedent. In a footnote (for some reason, almost all the decision’s analysis appears in footnotes) the court dismisses eight appellate rulings, including Force v. Facebook, that conflict with its ruling. It doesn’t contest the reasoning of these opinions; it just announces that they all “pre-dated [Moody v.] NetChoice.”

Moody roundly rejects the Fifth Circuit’s (bananas) First Amendment analysis in Paxton v. NetChoice. In that faulty decision, the Fifth Circuit wrote that Section 230 “reflects Congress’s factual determination that Platforms are not ‘publishers,’” and that they “are not ‘speaking’ when they host other people’s speech.” Here again is the basic mistake of seeing the First Amendment and Section 230 as mutually exclusive, rather than mutually reinforcing, mechanisms. The Fifth Circuit conflated not treating a platform as a publisher, for purposes of liability, with a platform’s not being a publisher, for purposes of the First Amendment. In reality, websites that disseminate third-party content both exercise First Amendment-protected editorial control and enjoy Section 230 protection from publisher liability.

The Third Circuit fell into this same mode of woolly thinking. The Fifth Circuit concluded that because the platforms enjoy Section 230 protection, they lack First Amendment rights. Wrong. The Supreme Court having now confirmed that the platforms have First Amendment rights, the Third Circuit concluded that they lack Section 230 protection. Wrong again. Congress could not revoke First Amendment rights wherever Section 230 protection exists, and Section 230 would serve no purpose if it did not apply wherever First Amendment rights exist.

Many on the right think, quite irrationally, that narrowing Section 230 would strike a blow against the bogeyman of online “censorship.” Anderson, meanwhile, involved the shocking death of a ten-year-old girl. (A sign, in the view of one conservative judge on the Anderson panel, that social media platforms are dens of iniquity. For a wild ride, check out his concurring opinion.) So there are distorting factors at play. There are forces—a desire to stick it to Big Tech; the urge to find a remedy in a tragic case—pressing judges to misapply the law. Judges engaging in motivated reasoning is bad in itself. But it is especially alarming here, where judges are waging a frontal assault on the great bulwark of the modern internet. These judges seem oblivious to how much damage their attacks, if successful, are likely to cause. They don’t know what they’re doing.

Corbin Barthold is internet policy counsel at TechFreedom.

Filed Under: 1st amendment, 3rd circuit, anderson v. tiktok, free speech, section 230
Companies: tiktok

Ctrl-Alt-Speech: The Platform To Prison Pipeline

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: brazil, content moderation, donald trump, mark zuckerberg, pavel durov, section 230, third circuit
Companies: telegram, tiktok, twitter, x

Third Circuit’s Section 230 TikTok Ruling Deliberately Ignores Precedent, Defies Logic

from the that's-not-how-any-of-this-works dept

Step aside, Fifth Circuit Court of Appeals: there’s a new contender in town for who will give us the most batshit crazy opinions regarding the internet. This week, a panel on the Third Circuit ruled that a lower court was mistaken in dismissing a case against TikTok on Section 230 grounds.

But, in order to do so, the court had to intentionally reject a very long list of prior caselaw on Section 230, misread some Supreme Court precedent, and (trifecta!) misread Section 230 itself. This may be one of the worst Circuit Court opinions I’ve read in a long time. It’s definitely way up the list.

The implications are staggering if this ruling stands. We just talked about some cases in the Ninth Circuit that poke some annoying and worrisome holes in Section 230, but this ruling takes a wrecking ball to 230. It basically upends the entire law.

At issue are the recommendations TikTok offers on its “For You Page” (FYP), which is the algorithmically recommended feed that a user sees. According to the plaintiff, the FYP recommended a “Blackout Challenge” video to a ten-year-old child, who mimicked what was shown and died. This is, of course, horrifying. But who is to blame?

We have some caselaw on this kind of thing even outside of the internet context. In Winter v. G.P. Putnam’s Sons, the court found that the publisher of an encyclopedia of mushrooms was not liable to “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.

In many ways, Section 230 was designed to speed up this analysis in the internet era, by making it explicit that a website publisher has no liability for harms that come from content posted by others, even if the publisher engaged in traditional publishing functions. Indeed, the point of Section 230 was to encourage platforms to engage in traditional publishing functions.

There is a long list of cases that say that Section 230 should apply here. But the panel on the Third Circuit says it can ignore all of those. There’s a very long footnote (footnote 13) that literally stretches across three pages of the ruling listing out all of the cases that say this is wrong:

We recognize that this holding may be in tension with Green v. America Online (AOL), where we held that § 230 immunized an ICS from any liability for the platform’s failure to prevent certain users from “transmit[ing] harmful online messages” to other users. 318 F.3d 465, 468 (3d Cir. 2003). We reached this conclusion on the grounds that § 230 “bar[red] ‘lawsuits seeking to hold a service provider liable for . . . deciding whether to publish, withdraw, postpone, or alter content.’” Id. at 471 (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)). Green, however, did not involve an ICS’s content recommendations via an algorithm and pre-dated NetChoice. Similarly, our holding may depart from the pre-NetChoice views of other circuits. See, e.g., Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1098 (9th Cir. 2019) (“[R]ecommendations and notifications . . . are not content in and of themselves.”); Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019) (“Merely arranging and displaying others’ content to users . . . through [] algorithms—even if the content is not actively sought by those users—is not enough to hold [a defendant platform] responsible as the developer or creator of that content.” (internal quotation marks and citation omitted)); Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 21 (1st Cir. 2016) (concluding that § 230 immunity applied because the structure and operation of the website, notwithstanding that it effectively aided sex traffickers, reflected editorial choices related to traditional publisher functions); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 407 (6th Cir. 2014) (adopting Zeran by noting that “traditional editorial functions” are immunized by § 230); Klayman v. Zuckerburg, 753 F.3d 1354, 1359 (D.C. Cir. 2014) (immunizing a platform’s “decision whether to print or retract a given piece of content”); Johnson v. Arden, 614 F.3d 785, 791-92 (8th Cir. 2010) (adopting Zeran); Doe v. MySpace, Inc., 528 F.3d 413, 420 (5th Cir. 2008) (rejecting an argument that § 230 immunity was defeated where the allegations went to the platform’s traditional editorial functions).

I may not be a judge (or even a lawyer), but even I might think that if you’re ruling on something and you have to spend a footnote that stretches across three pages listing all the rulings that disagree with you, at some point, you take a step back and ask:

[Principal Skinner meme. First panel, frowning, hand stroking chin: “Am I so out of touch that if every other circuit court ruling disagrees with me, I should reconsider?” Second panel, looking up: “No, it’s the other courts who are wrong.”]

As you might be able to tell from that awful footnote, the Court here seems to think that the ruling in Moody v. NetChoice has basically overturned those rulings and opened up a clean slate. This is… wrong. I mean, there’s no two ways about it. Nothing in Moody says this. But the panel here is somehow convinced that it does?

The reasoning here is absolutely stupid. It’s taking the obviously correct point that the First Amendment protects editorial decision-making, and saying that means that editorial decision-making is “first-party speech.” And then it’s making that argument even dumber. Remember, Section 230 protects an interactive computer service or user from being treated as the publisher (for liability purposes) of third party information. But, according to this very, very, very wrong analysis, algorithmic recommendations are magically “first-party speech” because they’re protected by the First Amendment:

Anderson asserts that TikTok’s algorithm “amalgamat[es] [] third-party videos,” which results in “an expressive product” that “communicates to users . . . that the curated stream of videos will be interesting to them[.]” ECF No. 50 at 5. The Supreme Court’s recent discussion about algorithms, albeit in the First Amendment context, supports this view. In Moody v. NetChoice, LLC, the Court considered whether state laws that “restrict the ability of social media platforms to control whether and how third-party posts are presented to other users” run afoul of the First Amendment. 144 S. Ct. 2383, 2393 (2024). The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment….

Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, id. at 2409, it follows that doing so amounts to first-party speech under § 230, too….

This is just flat out wrong. It is based on the false belief that any “expressive product” makes it “first-party speech.” That’s wrong on the law and it’s wrong on the precedent.

It’s a bastardization of an already wrong argument put forth by MAGA fools that Section 230 conflicts with the argument in Moody. The argument, as hinted at by Justices Thomas and Gorsuch, is that because NetChoice argues (correctly) that its editorial decision-making is protected by the First Amendment, it’s somehow in conflict with the idea that they have no legal liability for third-party speech.

But that’s only in conflict if you can’t read and/or don’t understand the First Amendment and Section 230 and how they interact. The First Amendment still protects any editorial actions taken by a platform. All Section 230 does is say that it can’t face liability for third party speech, even if it engaged in publishing that speech. The two things are in perfect harmony. Except to these judges in the Third Circuit.

The Supreme Court at no point says that editorial actions turn into first-party speech because they are protected by the First Amendment, contrary to what they say here. That’s never been true, as even the mushroom encyclopedia example shows above.

Indeed, reading Section 230 in this manner wipes out Section 230. It makes it the opposite of what the law was intended to do. Remember, the law was written in response to the ruling in Stratton Oakmont v. Prodigy, where a local judge found Prodigy liable for content it didn’t moderate, because it did moderate some content. As then-Reps. Chris Cox and Ron Wyden recognized, that would encourage no moderation at all, which made no sense. So they passed 230 to overturn that decision and make it so that internet services could feel free to engage in all sorts of publishing activity without facing liability for the underlying content when that content was provided by a third party.

But here, the Third Circuit has flipped that on its head and said that the second you engage in First Amendment-protected publishing activity around content (such as recommending it), you lose Section 230 protections because the content becomes first-party content.

That’s… the same thing that the court ruled in Stratton Oakmont, and which 230 overturned. It’s beyond ridiculous for the Court to say that Section 230 basically enshrined Stratton Oakmont, and it’s only now realizing that 28 years after the law passed.

And yet, that seems to be the conclusion of the panel.

Incredibly, Judge Paul Matey (a FedSoc favorite Trump appointee) has a concurrence/dissent where he would go even further in destroying Section 230. He falsely claims that 230 only applies to “hosting” content, not recommending it. This is literally wrong. He also falsely claims that Section 230 is a form of a “common carriage regulation” which it is not.

So he argues that the first Section 230 case, the Fourth Circuit’s important Zeran ruling, was decided incorrectly. The Zeran ruling established that Section 230 protected internet services from all kinds of liability for third-party content. Zeran has been adopted by most other circuits (as noted in that footnote of “all the cases we’re going to ignore” above). So in Judge Matey’s world, he would roll back Section 230 to only protect hosting of content and that’s it.

But that’s not what the authors of the law meant (they’ve told us, repeatedly, that the Zeran ruling was correct).

Either way, every part of this ruling is bad. It basically overturns Section 230 for an awful lot of publisher activity. I would imagine (hope?) that TikTok will request an en banc rehearing across all judges on the circuit and that the entire Third Circuit agrees to do so. At the very least, that would provide a chance for amici to explain how utterly backwards and confused this ruling is.

If not, then you have to think the Supreme Court might take it up, given that (1) they still seem to be itching for direct Section 230 cases and (2) this ruling basically calls out in that one footnote that it’s going to disagree with most other Circuits.

Filed Under: 1st amendment, 1st party speech, 3rd circuit, 3rd party speech, algorithms, fyp, liability, recommendations, section 230
Companies: tiktok

Ctrl-Alt-Speech: ChatGPT Told Us Not To Say This, But YOLO

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Daphne Keller, the Director of the Program on Platform Regulation at Stanford’s Cyber Policy Center. They cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: age appropriate design code, chatgpt, content moderation, dsa, kosa, ninth circuit
Companies: patreon, tiktok, twitter, x, yolo, youtube

Ctrl-Alt-Speech: I Bet You Think This Block Is About You

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Discord. In our Bonus Chat at the end of the episode, Mike speaks to Juliet Shen and Camille Francois about the Trust & Safety Tooling Consortium at Columbia School of International and Public Affairs, and the importance of open source tools for trust and safety.

Filed Under: child safety, content moderation, coppa, jim jordan, kosa, social media
Companies: google, tiktok, twitter, x