
Ctrl-Alt-Speech: Don’t Believe What This Podcast Says About Misinformation

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: ai, artificial intelligence, brazil, community notes, content moderation, labor rights, misinformation
Companies: meta, twitter, x

Supreme Court Rejects Jason ‘Pee King Of Facebook’ Fyk’s ‘But Muh Pee Videos’ Appeal

from the bro,-fb-isn't-prime-pee-video-territory dept

All hail Jason Fyk, one of the most aggrieved “failure to monetize piss videos” dudes ever. In fact, he might be the only person angered about his inability to turn pee into cash with third-party content featuring people urinating.

Anything that gives me a chance to embed this video (which also served as the ultimate piss take review of a Jet album by snarky music criticism overlords, Pitchfork) is welcomed, no matter how tangential the connection:

First, this is an ape, not a monkey. Second, while there’s definitely a market for videos of people urinating, it’s not on Facebook. It’s on any site that makes room for that particular kink, which means any porn site still in operation will host the content without complaint, even if it limits your monetization options.

Jason Fyk’s misplaced anger and long string of court losses stem from his unwillingness or inability to comprehend why any social media site might have a problem with this particular get-[slightly]-rich[er] scheme.

Fyk was already making plenty of money with his Facebook pages, if his own legal complaints are to be believed. Let’s check in with the author of this post, who has previously covered this extremely particular subject:

[T]hings were going good for Jason Fyk, at least as of a decade ago. He had 40 Facebook pages, 28 million “likes” and a potential audience of 260 million. Then it (allegedly)(partially) came crashing down. Fyk created a page Facebook didn’t like. Facebook took it down. That left Fyk with at least 39 other money-making pages but he still felt slighted to the extent he decided to start suing.

And sue he did! Of course, none of these lawsuits went anywhere. Not that Fyk hasn’t tried. He’s spent most of the last eight years hoping to smuggle a win out of federal court under the full-length dress of Lady Justice. Fyk lost and lost and lost and sued the government over Section 230 itself and lost and lost and lost.

Last year’s appellate Hail Mary from the would-be Pee King of Facebook was covered by Eric Goldman, who knows a thing or several about Section 230 and Section 230 lawsuits. Some Fyk fatigue was exhibited in Goldman’s December 2024 headline:

How Many Times Must the Courts Say “No” to This Guy?–Fyk v. Facebook

Goldman’s post suggested there might be a way to dissuade Fyk from increasing his losing streak:

Fyk argued that the law regarding anticompetitive animus had changed during his 6-year-long litigation quest, citing the Enigma v. Malwarebytes and Lemmon v. Snap decisions. However, the Ninth Circuit previously rejected the implications of Malwarebytes for Fyk’s case in its last ruling, and “Lemmon says nothing about whether Section 230(c)(1) shields social-media providers for content-moderation decisions made with anticompetitive animus.” Without any change in the relevant law, the court easily dismisses the case again. Remarkably, the court doesn’t impose any sanctions for what some courts might have felt was vexatious relitigation of resolved matters.

And that’s what Fyk does best: make arguments that make no sense, cite irrelevant court decisions, and generally waste everyone’s tax dollars and time. Here’s what the Ninth Circuit Court of Appeals said to Fyk the last time around:

The remaining cases Fyk cites are unpublished, dissenting, out-of-circuit, or district-court opinions, which are not binding in this circuit and therefore do not constitute a change in the law.

Fyk is nothing if not persistent. Despite being rejected by the Supreme Court in the final year of what was supposed to be Trump’s only presidential term, Fyk decided his latest loss in the Ninth Circuit demanded another swing at Supreme Court certiorari.

And despite certain Supreme Court justices getting super-weird about content moderation since it’s preventing their buddies from going Nazi on main, Fyk’s return to the top court in the land ends like his last one: a single line under the heading “Certiorari Denied” in SCOTUS’s most recent order list. Even justices sympathetic to bad people who want to be even worse online (so long as they hold certain “conservative views“) aren’t willing to die on Fyk’s piss-soaked hill, no matter how much urine of his own he sprays while wrongly correcting people about Section 230. His complaint is, once again, as dead as the banned account he’s been suing over for most of the last decade.

Filed Under: content moderation, frivolous lawsuit, jason fyk, lolsuit, section 230
Companies: facebook, meta, wtfnews

SCOTUS Refuses To Hear Anti-Vax Group’s Claim That Meta’s Private Actions Are 1A Violations

from the go-away dept

Back in the summer of 2020, RFK Jr. was leading the Children’s Health Defense organization, built on an anti-vaccination platform. The organization sued Facebook/Meta that year, along with several fact-checking organizations, for limiting the reach of, and otherwise fact-checking, its posts due to their inclusion of medical and scientific misinformation. CHD argued, in an incredibly stupid filing, that Meta was acting as an arm of the government due to Democratic lawmakers complaining about misinformation being published on the platform and, idiotically, because Section 230 exists. Mike’s takedown of the lawsuit was thorough and complete and very much worth your time if you’re not familiar with this case.

The District Court agreed, tossing this turd in the waste bin. Its explanation was clear: lawmakers complaining about what appears on Meta does not amount to Meta being a state actor, nor does Section 230 existing, and, finally, Meta is a private actor allowed to moderate its own platform as a function of its own speech rights.

That should have been the end of it. Instead, CHD appealed the ruling, making essentially the same arguments, for many of which it failed to provide legal precedent or any supporting evidence. The Ninth Circuit ruled against CHD again, and for all the same reasons.

That should have been the end of it… again. Instead, CHD appealed once more to the Supreme Court. A Supreme Court that is chockablock with conservative justices, a third of them appointed by President Trump. At a time when the GOP holds the majority of all branches of government. And, finally, at a time in which RFK Jr. is the head of HHS, having left CHD to pursue his career in federal government.

And with all of those factors in theory lining up in favor of CHD’s lawsuit… even this SCOTUS laughed the appeal out of the room.

The Supreme Court on Monday turned away without comment a claim brought by the group formerly run by Robert F. Kennedy Jr. alleging that its anti-vaccine speech was censored by the social media company Meta Platforms.

The justices left in place lower court rulings that tossed out the lawsuit, which claimed that Facebook, starting in 2019, colluded with the federal government to restrict access to its content. The issue came to a head during the Covid-19 pandemic, with Facebook removing the group’s page in 2022.

That will be the end of this. And hopefully it serves as a lesson to other, like-minded groups out there that don’t seem to understand that free speech laws protect them from government actions, not from privately held platforms, which in fact have their own free speech rights. If Meta, or other social media companies, want to fact-check your content, take down your pages, or limit the reach of your posts on their platform… well, they can. It’s theirs.

Unfortunately, Kennedy remains free to do his anti-vax, anti-science damage from the halls of government.

Filed Under: content moderation, free speech, rfk jr., supreme court
Companies: children's health defense, meta

Ctrl-Alt-Speech: Teen But Not Heard

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Bridget Todd, a technology and culture writer, speaker and trainer and host of two great podcasts, There are No Girls on the Internet and IRL: Online Life is Real Life. Together, they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: ai, artificial intelligence, content moderation, singapore, turkey

Ctrl-Alt-Speech: Outsourced But Not Out Of Mind

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Mercy Mutemi, lawyer and managing partner of Nzili & Sumbi Advocates. Together, they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: africa, community notes, content moderation, kenya, outsourcing, scams
Companies: facebook, meta, tiktok, twitter, x

Why Making Social Media Companies Liable For User Content Doesn’t Do What Many People Think It Will

from the how-stuff-works dept

Brazil’s Supreme Court appears close to ruling that social media companies should be liable for content hosted on their platforms—a move that appears to represent a significant departure from the country’s pioneering Marco Civil internet law. While this approach has obvious appeal to people frustrated with platform failures, it’s likely to backfire in ways that make the underlying problems worse, not better.

The core issue is that most people fundamentally misunderstand both how content moderation works and what drives platform incentives. There’s a persistent myth that companies could achieve near-perfect moderation if they just “tried harder” or faced sufficient legal consequences. This ignores the mathematical reality of what happens when you attempt to moderate billions of pieces of content daily, and it misunderstands how liability actually changes corporate behavior.

Part of the confusion, I think, stems from people’s failure to understand the impossibility of doing content moderation well at scale. There is a very wrong assumption that social media platforms could do perfect (or very good) content moderation if they just tried harder or had more incentive to do better. Without denying that some entities (*cough* ExTwitter *cough*) have made it clear they don’t care at all, most others do try to get this right, and discover over and over again how impossible that is.

Yes, we can all point to examples of platform failures that are depressing, where it seems obvious that things should have been done differently, but the failures are not there because “the laws don’t require it.” The failures are because it’s impossible to do this well at scale. Some people will always disagree with how a decision comes out, and other times there are no “right” answers. Also, sometimes, there’s just too much going on at once, and no legal regime in the world can possibly fix that.

Given all of that, what we really want are better overall incentives for the companies to do better. Some people (again, falsely) seem to think the only incentives are regulatory. But that’s not true. Incentives come in all sorts of shapes and sizes—and much more powerful than regulations are things like the users themselves, along with advertisers and other business partners.

Importantly, content moderation is also a constantly moving and evolving issue. People who are trying to game the system are constantly adjusting. New kinds of problems arise out of nowhere. If you’ve never done content moderation, you have no idea how many “edge cases” there are. Most people—incorrectly—assume that most decisions are easy calls and you may occasionally come across a tougher one.

But there are constant edge cases, unique scenarios, and unclear situations. Because of this, every service provider will make many, many mistakes every day. There’s no way around this. It’s partly the law of large numbers. It’s partly the fact that humans are fallible. It’s partly the fact that decisions need to be made quickly without full information. And a lot of it is that those making the decisions just don’t know what the “right” approach is.
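The "law of large numbers" point can be made concrete with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions for the sake of the sketch, not real platform data: even an implausibly accurate moderation system, applied to billions of daily decisions, still produces mistakes by the millions.

```python
# Illustrative sketch: moderation error at scale.
# All numbers are assumptions, not figures from any actual platform.

daily_items = 3_000_000_000  # assume ~3 billion pieces of content per day
accuracy = 0.999             # assume moderation gets 99.9% of calls "right"

mistakes_per_day = daily_items * (1 - accuracy)
print(f"{mistakes_per_day:,.0f} wrong calls per day")
```

Under these assumed numbers, that's roughly three million wrong calls every single day from a system that is "right" 99.9% of the time, which is the core of the argument that no liability regime can legislate the error rate away.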

The way to get better is constant adjusting and experimenting. Moderation teams need to be adaptable. They need to be able to respond quickly. And they need the freedom to experiment with new approaches to deal with bad actors trying to abuse the system.

Putting legal liability on the platform makes all of that more difficult

Now, here’s where my concerns about the potential ruling in Brazil come in: if there is legal liability, it creates a scenario that is actually less likely to lead to good outcomes. First, it effectively requires companies to replace moderators with lawyers. If your company is now making decisions that carry significant legal liability, those decisions likely require a much higher level of legal expertise. Even worse, it’s creating a job that most people with law degrees are unlikely to want.

Every social media company has at least some lawyers who work with their trust & safety teams to review the really challenging cases, but when legal liability could accrue for every decision, it becomes much, much worse.

More importantly, though, it makes it way more difficult for trust & safety teams to experiment and adapt. Once things include the potential of legal liability, then it becomes much more important for the companies to have some sort of plausible deniability—some way to express to a judge “look, we’re doing the same thing we always have, the same thing every company has always done” to cover themselves in court.

But that means that these trust & safety efforts get hardened into place, and teams are less able to adapt or to experiment with better ways to fight evolving threats. It’s a disaster for companies that want to do the right thing.

The next problem with such a regime is that it creates a real heckler’s veto-type regime. If anyone complains about anything, companies are quick to take it down, because the risk of ruinous liability just isn’t worth it. And we now have decades of evidence showing that increasing liability on platforms leads to massive overblocking of information. I recognize that some people feel this is acceptable collateral damage… right up until it impacts them.

This dynamic should sound familiar to anyone who’s studied internet censorship. It’s exactly how China’s Great Firewall originally operated—not through explicit rules about what was forbidden, but by telling service providers that the punishment would be severe if anything “bad” got through. The government created deliberate uncertainty about where the line was, knowing that companies would respond with massive overblocking to avoid potentially ruinous consequences. The result was far more comprehensive censorship than direct government mandates could have achieved.

Brazil’s proposed approach follows this same playbook, just with a different enforcement mechanism. Rather than government officials making vague threats, it would be civil liability creating the same incentive structure: when in doubt, take it down, because the cost of being wrong is too high.

People may be okay with that, but I would think that in a country with a history of dictatorships and censorship, they would like to be a bit more cautious before handing the government a similarly powerful tool of suppression.

It’s especially disappointing in Brazil, which a decade ago put together the Marco Civil, an internet civil rights law that was designed to protect user rights and civil liberties—including around intermediary liability. The Marco Civil remains an example of more thoughtful internet lawmaking (way better than we’ve seen almost anywhere else, including the US). So this latest move feels like backsliding.

Either way, the longer-term fear is that this would actually limit the ability of smaller, more competitive social media players to operate in Brazil, as it will be way too risky. The biggest players (Meta) aren’t likely to leave, but they have buildings full of lawyers who can fight these lawsuits (and often, likely, win). A study we conducted a few years back detailed how as countries ratcheted up their intermediary liability, the end result was, repeatedly, fewer online places to speak.

That doesn’t actually improve the social media experience at all. It just gives more of it to the biggest players with the worst track records. Sure, a few lawsuits may extract some cash from these companies for failing to be perfect, but it’s not like they can wave a magic wand and not let any “criminal” content exist. That’s not how any of this works.

Some responses to issues raised by critics

When I wrote about this on a brief Bluesky thread, I received hundreds of responses—many quite angry—that revealed some common misunderstandings about my position. I’ll take the blame for not expressing myself as clearly as I should have, and I’m hoping the points above lay out the argument more clearly regarding how this could backfire in dangerous ways. But, since some of the points were repeated to me over and over again (sometimes with clever insults), I thought it would be good to address some of the arguments directly:

But social media is bad, so if this gets rid of all of it, that’s good. I get that many people hate social media (though, there was some irony in people sending those messages to me on social media). But, really what most people hate is what they see on social media. And as I keep explaining, the way we fix that is with more experimentation and more user agency—not handing everything over to Mark Zuckerberg and Elon Musk or the government.

Brazil doesn’t have a First Amendment, so shut up and stop with your colonialist attitude. I got this one repeatedly and it’s… weird? I never suggested Brazil had a First Amendment, nor that it should implement the equivalent. I simply pointed out the inevitable impact of increasing intermediary liability on speech. You can decide (as per the comment above) that you’re fine with this, but it has nothing to do with my feelings about the First Amendment. I wasn’t suggesting Brazil import American free speech laws either. I was simply pointing out what the consequences of this one change to the law might create.

Existing social media is REALLY BAD, so we need to do this. This is the classic “something must be done, this is something, we will do this” response. I’m not saying nothing must be done. I’m just saying this particular approach will have significant consequences that it would help people to think through.

It only applies to content after it’s been adjudicated as criminal. I got that one a few times from people. But, from my reading, that’s not true at all. That’s what the existing law was. These rulings would expand it greatly from what I can tell. Indeed, the article notes how this would change things from existing law:

The current legislation states social media companies can only be held responsible if they do not remove hazardous content after a court order.

[….]

Platforms need to be pro-active in regulating content, said Alvaro Palma de Jorge, a law professor at the Rio-based Getulio Vargas Foundation, a think tank and university.

“They need to adopt certain precautions that are not compatible with simply waiting for a judge to eventually issue a decision ordering the removal of that content,” Palma de Jorge said.

You’re an anarchocapitalist who believes that there should be no laws at all, so fuck off. This one actually got sent to me a bunch of times in various forms. I even got added to a block list of anarchocapitalists. Really not sure how to respond to that one other than saying “um, no, just look at anything I’ve written for the past two and a half decades.”

America is a fucking mess right now, so clearly what you are pushing for doesn’t work. This one was the weirdest of all. Some people sending variations on this pointed to multiple horrific examples of US officials trampling on Americans’ free speech, saying “see? this is what you support!” as if I support those things, rather than consistently fighting back against them. Part of the reason I’m suggesting this kind of liability can be problematic is because I want to stop other countries from heading down a path that gives governments the power to stifle speech like the US is doing now.

I get that many people are—reasonably!—frustrated about the terrible state of the world right now. And many people are equally frustrated by the state of internet discourse. I am too. But that doesn’t mean any solution will help. Many will make things much worse. And the solution Brazil is moving towards seems quite likely to make the situation worse there.

Filed Under: brazil, content moderation, free speech, impossibility, intermediary liability, marco civil, platform liability, social media

Trump’s FTC Turns Consumer Protection Into MAGA Protection Racket

from the everything's-corruption dept

When Andrew Ferguson made his pitch to Donald Trump to take over the FTC, his one-page “pick me” plea talked about “ending” former FTC Chair Lina Khan’s “politically motivated investigations.” We pointed out at the time how hilarious it was that he then made it clear he fully intended to abuse the power of the FTC to, instead, launch “politically motivated investigations” on behalf of MAGA culture war interests.

Now we have two separate reports of the FTC going way beyond just launching bogus “politically motivated investigations” to using consent decrees for clearly partisan ends. This isn’t just garden-variety regulatory capture. It’s the transformation of a consumer protection agency into a protection racket for Trump loyalists and billionaire friends.

We’ve joked in the past that it’s become something of a rite of passage for large internet companies that they end up with a 20-year FTC consent decree at some point. Almost always, this is because of some gross violation of privacy by the company, leading to promises not to be so negligent and to be a lot more careful going forward. For a lot of companies it’s kind of the cost of becoming big enough to matter. Some, like Elon Musk, constantly whine about how unfair these consent decrees are.

But now Ferguson is clearly looking to weaponize consent decrees to help friends and punish enemies.

The Meta Shakedown: Pay Up For Exercising Editorial Rights

First up, a story from the NY Post ostensibly about how the big tech billionaires all kissed Donald Trump’s ass… for basically nothing in return. Trump and his allies are still abusing regulatory power to punish these companies. But, buried in that piece is this bit of ridiculous news:

Trump’s team, sources told me, are now pushing for aggressive measures, including a potential consent decree as part of an FTC deal that could force Meta to pay restitution to conservative users and businesses harmed by content moderation that was ratcheted up dramatically during covid.

It’s kind of shocking how ridiculous and inappropriate that would be. First of all, courts up to and including the Supreme Court have already made it abundantly clear that content moderation is protected by the First Amendment, noting that it is the same as the type of editorial discretion that enables Fox News to only spew bullshit and rarely post stories critical of Donald Trump.

Second, the FTC has zero authority to regulate speech or force companies to pay damages for exercising their editorial rights. Consumer protection agencies don’t get to second-guess private companies’ editorial decisions, even when those decisions upset powerful political constituencies.

Third, the predicate for this entire scheme—that Meta was biased against conservatives—is completely fabricated. Study after study after study has shown that Meta strongly favored conservative users rather than targeting them. Indeed, it had a separate set of rules that allowed MAGA types to violate its rules more frequently before facing any consequences, while deliberately limiting the reach of more liberal voices. This is why the platform is dominated by MAGA voices and has been for years.

In other words, Ferguson wants to force a company to pay damages to people who broke that company’s rules, based on a completely false premise about bias, in direct violation of both the First Amendment and the FTC’s statutory authority.

That seems… bad?

But, of course, with Zuckerberg so desperate to suck up to Trump, watch him actually agree to this bit of nonsense.

The Advertising Racket: Pay Elon Or No Deal

The second example is a NY Times article regarding the FTC’s review of the potential merger between advertising giants Omnicom and Interpublic. There are plenty of legitimate reasons to be concerned about this deal leading to even more consolidation in the advertising market, but that doesn’t seem to be the major concern of the Ferguson FTC.

Instead, the agency wants to use the merger review as leverage to force these companies to buy ads on Elon Musk’s flailing ExTwitter platform:

A proposed consent decree would prevent the merged company from boycotting platforms because of their political content by refusing to place their clients’ advertisements on them, according to two people briefed on the matter.

This sanitized language obscures what’s really happening here: a protection racket for Elon Musk. As we’ve covered, Elon Musk is very, very mad that he drove away the majority of ExTwitter’s advertisers. But rather than look inward at what he did to cause that, he’s blaming everyone else—to the point that he is suing advertisers directly for not advertising on ExTwitter (while demanding others advertise or be added to the suit). He’s also been trying to encourage government officials to spin up “investigations” into advertisers who won’t advertise on ExTwitter, claiming (ridiculously) it’s an illegal boycott.

Courts at both the district and appeals court levels have rejected this theory as an obvious attack on protected First Amendment activity (i.e., advertisers saying they don’t want their brands associated with neo-Nazi reactionary nonsense).

But, the Ferguson/Trump FTC launched a similarly bogus investigation anyway, in an effort to abuse the power of the FTC to browbeat firms into giving Elon Musk cash (I assume, so long as Elon stays in Trump’s good graces).

So when the FTC proposes a consent decree preventing ad agencies from “boycotting platforms because of their political content,” it’s essentially telling Omnicom and Interpublic: “If you want this merger approved, you’ll agree in writing to buy ads on ExTwitter, whether your clients want them or not.”

This is textbook corruption: using regulatory approval as leverage to benefit a specific company that happens to be owned by someone (for the moment) in the president’s inner circle.

A Pattern of Regulatory Abuse

What connects these two schemes is how far they stray from the FTC’s actual authority. The agency is supposed to protect consumers from unfair or deceptive business practices and prevent anticompetitive mergers. It’s not supposed to act as an enforcement arm for aggrieved conservatives or as a collection agency for politically connected billionaires.

But, as with Zuckerberg, it’s entirely possible that the ad firms may agree to such a condition just to get the merger done.

Ferguson promised to end “politically motivated investigations” and instead launched obviously political shakedown schemes that would make Al Capone proud. The transformation is complete: an agency created to protect consumers from corporate abuse has become a tool for extracting tribute from corporations on behalf of powerful political interests.

This isn’t just garden-variety corruption or regulatory capture. It’s the systematic transformation of consumer protection regulatory tools into weapons of political retribution and personal enrichment. And it’s happening so brazenly that these officials barely even bother to hide their motives anymore.

The corruption is so brazen because they know no one will stop them.

The real tragedy isn’t just that this undermines the rule of law or corrupts important regulatory institutions. It’s that when everything becomes nakedly political, we lose the ability to distinguish between legitimate regulatory action and partisan hackery. It creates increased cynicism and distrust of government organizations. And, perhaps that’s part of the point.

Filed Under: 1st amendment, advertising, andrew ferguson, anti-conservative bias, bias, boycott, consent decree, content moderation, elon musk, free association, free speech, ftc
Companies: interpublic, meta, omnicom, twitter, x

Ctrl-Alt-Speech: Outrage For The Machine

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: bonnie blue, content moderation, dsa, eu, france
Companies: bluesky, onlyfans, reddit, twitter, x

Ctrl-Alt-Speech: Algorithm Shrugged

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Zeve Sanderson, the founding Executive Director of the NYU Center for Social Media & Politics. Together, they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Modulate. In our Bonus Chat, we speak with Modulate CTO Carter Huffman about how their voice technology can actually detect fraud.

Filed Under: ai, algorithms, artificial intelligence, content moderation, media matters, regulation
Companies: anthropic

Rubio Announces Ban On Foreign ‘Censors’ Coming To The US, While Simultaneously Having Students Kidnapped For Their Speech

from the oh-come-on dept

Secretary of State Marco Rubio announced this week that he’s barring visas for foreign nationals who “censor Americans,” declaring that “free speech is among the most cherished rights we enjoy as Americans.”

This is yet another example of the most censorial administration falsely wrapping itself in the cloak of “free speech warriors” to defend censorship. Rubio has spent his tenure as Secretary of State conducting the most aggressive authoritarian censorship campaign in recent American history — literally having students kidnapped off the street for their speech and moved around the country to hide them from the courts. He’s declared foreign students “lunatics” for their opinions and yanked their visas without warning or due process.

So when Rubio positions himself as a free speech champion, it’s worth examining what he’s actually doing versus what he’s claiming to oppose.

The New Visa Policy: Extremely Selective “Anti-Censorship”

Wednesday’s announcement targets foreign officials who “censor” Americans:

Free speech is among the most cherished rights we enjoy as Americans. This right, legally enshrined in our constitution, has set us apart as a beacon of freedom around the world. Even as we take action to reject censorship at home, we see troubling instances of foreign governments and foreign officials picking up the slack. In some instances, foreign officials have taken flagrant censorship actions against U.S. tech companies and U.S. citizens and residents when they have no authority to do so.

Today, I am announcing a new visa restriction policy that will apply to foreign nationals who are responsible for censorship of protected expression in the United States. It is unacceptable for foreign officials to issue or threaten arrest warrants on U.S. citizens or U.S. residents for social media posts on American platforms while physically present on U.S. soil. It is similarly unacceptable for foreign officials to demand that American tech platforms adopt global content moderation policies or engage in censorship activity that reaches beyond their authority and into the United States. We will not tolerate encroachments upon American sovereignty, especially when such encroachments undermine the exercise of our fundamental right to free speech.

In isolation, protecting Americans’ speech rights might be worth considering. But this isn’t happening in isolation — it’s coming from the most censorial administration in recent memory.

The real tell is what constitutes “censorship” in Rubio’s framework. The policy specifically targets demands that “American tech platforms adopt global content moderation policies.” Translation: this is about protecting platforms like ExTwitter from having to follow rules, in places like the EU or Brazil, that Elon Musk doesn’t like. Meanwhile, Rubio’s own government is literally disappearing people for their speech.

The Selective Enforcement Game

Expect this policy to be applied with surgical precision against countries whose content policies displease the administration — likely targeting EU officials, Brazilian judges, and Australian regulators who’ve pressured social media companies. And, yes, all of those officials have done things we consider problematic, but banning them from the US entirely seems ridiculous and an attack on foreign sovereignty. We may disagree with their policies, but this is the US meddling in how other countries set those policies at home.

Meanwhile, will the visa ban apply to Recep Erdogan of Turkey? Narendra Modi of India? Vladimir Putin? All three regularly engage in actual social media censorship, but somehow I doubt they’ll face visa restrictions.

The tell here is that Rubio chose to announce this through an “exclusive” article from Michael Shellenberger, one of the leading voices in the “censorship industrial complex” mythology — who is now employed as a “professor” at a “university” that espouses “academic freedom” but fires people who post on social media in a way that challenges a funder’s ideology. Shellenberger has spent years misrepresenting basic content moderation concepts while claiming private companies enforcing their own rules constitutes government censorship. Now, faced with an administration literally kidnapping people for speech, he’s writing puff pieces celebrating their “anti-censorship” efforts.

To show just how upside-down this has become, in this article (which I’m not linking to, because fuck that) celebrating Rubio’s announcement as pro-free speech, Shellenberger closed by also celebrating Trump’s decision to revoke the security clearance of Chris Krebs. According to Shellenberger’s fantasy, Krebs was fired for “demanding” social media companies censor content — a thing that never happened. Krebs was the CISA director who accurately pointed out that the 2020 election was secure — and was fired for that speech, and is now being further retaliated against for that speech. Stripping security clearance from someone for contradicting your preferred narrative is textbook retaliation for speech, the exact kind of government censorship Shellenberger supposedly opposes.

The Same-Day Contradiction

As if to underscore the hypocrisy, on the very same day Rubio announced his anti-censorship visa policy, he also announced he’s revoking visas for Chinese students:

Under President Trump’s leadership, the U.S. State Department will work with the Department of Homeland Security to aggressively revoke visas for Chinese students, including those with connections to the Chinese Communist Party or studying in critical fields. We will also revise visa criteria to enhance scrutiny of all future visa applications from the People’s Republic of China and Hong Kong.

Notice the language: “including those with connections” means connected students are just a subset of the Chinese student visas being revoked. What constitutes “critical fields”? The administration won’t say, because the point is arbitrary power to punish anyone they want.

So on the same day Rubio claims to defend free speech, he’s revoking student visas based on nationality and academic interests. That’s not protecting speech — that’s targeting it.

The capstone came when Education Secretary Linda McMahon announced that universities should only receive federal funds if their research is “in sync with the administration.”

This is the opposite of everything conservatives claimed to stand for regarding academic freedom. It’s a direct assault on the First Amendment’s protection of intellectual inquiry.

The Real Pattern

The pattern here isn’t about protecting free speech — it’s about protecting the speech the administration likes while silencing the speech it doesn’t. When foreign students write op-eds critical of US policy, they get kidnapped. When foreign officials pressure US tech companies in ways that displease Musk, they get visa bans. When universities pursue research the administration dislikes, they lose funding.

But when Rubio wants to position himself as a free speech champion, he can count on useful idiots in the “free speech” movement to cheer him on, even as he’s conducting the most systematic attack on speech rights in recent American history.

If Rubio truly believed “free speech is among the most cherished rights we enjoy as Americans,” he’d stop being the biggest threat to that right in the US government.

Filed Under: academic freedom, censorship, content moderation, free speech, linda mcmahon, marco rubio, michael shellenberger, secretary of state, visas