disinformation – Techdirt
Ctrl-Alt-Speech: Sorry, This Episode Will Not Cheer You Up
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- These look like Harris ads. Trump backers bought them (Washington Post)
- Facebook Took More Than $1 Million For Ads Sowing Election Lies (Forbes)
- Election officials are outmatched by Elon Musk’s misinformation machine (CNN)
- Election Falsehoods Take Off on YouTube as It Looks the Other Way (New York Times)
- Exploiting Meta’s Weaknesses, Deceptive Political Ads Thrived on Facebook and Instagram in Run-Up to Election (ProPublica)
- The U.S. Spies Who Sound the Alarm About Election Interference (New Yorker)
- This Is What $44 Billion Buys You (The Atlantic)
- How Russia, China and Iran Are Interfering in the Presidential Election (New York Times)
- Can A.I. Be Blamed for a Teen’s Suicide? (New York Times)
- ‘Sickening’ Molly Russell chatbots found on Character.ai (BBC)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Concentrix, the technology and services leader driving trust, safety, and content moderation globally. In our Bonus Chat, Dom Sparkes, Trust and Safety Director for EMEA, and David Elliot, Head of Technology, try to lighten the mood by discussing how to make a compelling business case for online safety and the importance of measuring ROI.
Filed Under: politics, ai, artificial intelligence, content moderation, disinformation, election, misinformation
Companies: character.ai, facebook, meta
Lies, Damned Lies, And Elon Musk
from the we-used-to-believe-in-reality dept
What do you do when the misinformation is coming from inside the house?
In the recent book Character Limit, about Musk’s takeover of Twitter, there’s an anecdote in the introduction. A data scientist who worked at the company (and had survived the early purge) was horrified at how Musk had fallen for a blatantly obvious made-up conspiracy theory, and decided he’d take the opportunity Musk had offered to talk to anyone personally to explain just how gullible Musk seemed:
Musk’s assistant peeked back in and muttered that he had another meeting. “Do you have any final thoughts?” she asked.
“Yes, I want to say one thing,” the data scientist said. He took a deep breath and turned to Musk.
“I’m resigning today. I was feeling excited about the takeover, but I was really disappointed by your Paul Pelosi tweet. It’s really such obvious partisan misinformation and it makes me worry about you and what kind of friends you’re getting information from. It’s only really like the tenth percentile of the adult population who’d be gullible enough to fall for this.”
The color drained from Musk’s already pale face. He leaned forward in his chair. No one spoke to him like this. And no one, least of all someone who worked for him, would dare to question his intellect or his tweets. His darting eyes focused for a second directly on the data scientist.
“Fuck you!” Musk growled.
This is a pattern. Musk has all the money in the world. He has the ability to be one of the best informed people in the world. And he’s built for himself a snowglobe of confirmation bias, making sure that a randomly floating combination of grifters and morons continue to feed him the dumbest shit imaginable, rather than take the slightest effort to actually inform himself of reality.
We just recently had a post contrasting how science educator Hank Green approached some possibly damning information about voting, compared to how Elon Musk handled it. Green was concerned, but spent the time researching it, realized his original concerns were misplaced, and found that the institutions had actually done things in a smart way. Elon does no research, assumes the worst, and will simply retweet any nonsense he comes across so long as it confirms his (very, very confused) biases.
This keeps playing out day after day on ExTwitter, the website that Elon owns. Just recently, I saw Elon post a tweet that was so egregiously wrong, it got Community Noted twice, not that Elon ever acknowledged it was false. As I write this a few days later, the tweet is still up:
That’s a quote tweet from Elon Musk saying “They are literally foaming at their mouth” in response to a tweet from some rando saying “Completely insane story in The Atlantic today” with a faked screenshot of an Atlantic piece with the false headline “Trump is Literally Hitler.” Both the original tweet and Elon’s tweet have a Community Note on them saying “this is not a real article.”
But millions of people saw the original, without the Community Note, and seem to think it’s true.
The original tweeter later admitted that he faked the headline. But because Elon wanted to believe it was true and had to retweet it, giving it a bunch of attention, even The Atlantic was forced to put out a statement noting they never published any such article.
The Atlantic concludes that statement with the following rather straightforward point:
Anyone encountering these images can quickly verify whether something is real — or not — by visiting The Atlantic and searching our site.
The bare minimum effort, which Elon couldn’t be bothered with.
Amusingly, the same day my piece comparing how Green and Musk deal with such information came out, Green released another video, this time talking about how he looked at one day’s worth of Elon tweets and was shocked to see how blatantly Musk publishes easily disproven false information, with the implication that Musk believes it’s true. Green found six outright lies posted in just 24 hours.
Again, this is not a one-off thing. Recently, the NY Times looked at five days’ worth of Elon’s tweets and found almost one-third of them “were false, misleading, or missing vital context.”
Nearly a third of his posts last week were false, misleading or missing vital context. They included misleading posts claiming Democrats were making memes “illegal” and falsehoods that they want to “open the border” to gain votes from illegal immigrants. His misleading posts were seen more than 800 million times on X, underscoring Mr. Musk’s unique role as the platform’s most-followed account and a significant source of its misleading content.
[…]
His most-viewed post, seen more than 100 million times, was a misleading projection of the presidential race that showed Mr. Trump winning most battleground states. The data was based on an outdated forecast from Nate Silver, an election modeler. By the time Mr. Musk shared the data, Mr. Silver’s forecast had shifted, suggesting instead that Vice President Kamala Harris was faring better than Mr. Trump. Some users quickly noted that the data was wrong, but Mr. Musk did not remove the post or make a correction.
It wasn’t just one bad week. Bloomberg just came out with an even bigger report, looking at all of Elon Musk’s tweets from 2011 through this week. It’s a really fascinating piece of data-based journalism, showing how he was a pretty ordinary tweeter in the early days, but obviously things changed after he took over Twitter.
As the report notes, in recent weeks, Musk has become obsessed with false and disproven conspiracy theories about the election and immigration.
Musk’s posts about immigration primarily promote misleading narratives: that the election will be unfair because of migrants; that migrants are dangerous, and flooding unchecked into the country; that the vast majority of immigrants have not settled into the US in the “right” way; that migrants have gotten unreasonable, special treatment from the government; and that Democrats are responsible for ushering in large numbers of migrants who go on to commit crimes in the US.
Bloomberg ran a machine learning model on the posts to identify subjects that Musk most often discusses on X, and found that about 1,300 of Musk’s posts in 2024 revolved around immigration and voter fraud. Reporters then manually reviewed hundreds of them to ensure they were properly categorized. Posts were provided by researchers at Clemson University’s Media Forensics Hub and the data platform Bright Data.
Musk’s commentary on noncitizens voting is based on a “weak to non-existent” understanding of election law, said David Schultz, a professor of political science at Hamline University in St. Paul, Minnesota. Federal law bars non-US citizens from voting in presidential elections, and voters must legally swear, under penalty of criminal prosecution, that they’re eligible to cast a ballot.
In order to become a US citizen and vote, undocumented immigrants have only a few viable paths, some of which take years, such as securing asylum or successfully challenging a deportation order. Meanwhile, state-led investigations by both Republican and Democratic officials have repeatedly found that noncitizen voting is extraordinarily rare — and it’s never been shown to affect the outcome of any election. “Given what we know about how infrequently voter fraud has occurred over the last two or three elections in the US, the odds of drawing a random ballot, and that ballot being fraudulent, approach that of winning the Powerball,” Schultz said.
It also appears that there’s some element of “audience capture” going on. Musk seems to track closely what sort of response his posts get (which is partly why he ordered the company to make his tweets get more attention) and then responds accordingly:
Any time Musk talks about immigration on X, the reposts, replies and views reliably roll in. Though Musk has written about immigration and voter fraud issues in 2024 with about the same frequency as he’s written about Tesla, the automaker he is chief executive of, his immigration-related posts have amassed more than six times the number of reposts.
The article includes a lot more, like the reporters talking to some Trump supporters who are praising Musk while repeating the easily debunked nonsense that he regularly tweets, retweets, or engages with.
There is nothing illegal in what he’s doing, though presenting potentially harmful misinformation about elections, specifically around where and how to vote, can cross the line. But it’s quite striking how Musk, driven by his insatiable desire to be “liked” on his own platform, has shown that he has zero interest in actual truth, and is happy to push any lie that works for his current support for Donald Trump’s campaign.
It’s not new that greedy, disconnected billionaires will use the media to push lies to support their favored candidates. Yellow journalism has existed throughout American history. However, it’s pretty shocking just how frequently Elon will directly promote baseless claims or outright falsehoods, never ever taking responsibility or admitting to having promoted bullshit.
This is why, though, it’s important to call it out and for people to recognize what’s happening. Musk is actively miseducating people. And people are believing what he’s posting.
People can argue over why he’s doing this. Some say that he knows he’s spreading lies to idiots and cultists who follow him, because he knows he can get away with it. However, I don’t think that’s true. Having followed the guy and his statements for a while, I honestly think he believes the shit he’s tweeting, and believes the likes and the cultists cheering him on prove that he’s correct.
That he could easily find out the truth is not particularly important to him. It’s not about the truth. It’s about the truthiness of the world he wants to inhabit, where he is the only person who matters.
Whether or not this actually has an impact on the election or other important events in the future is impossible to predict. But it hardly seems like a good state of affairs.
Filed Under: disinformation, donald trump, elon musk, hank green, lies, misinformation, propaganda
Companies: twitter, x
Ctrl-Alt-Speech: Regulate, Rinse, Repeat
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- How Russian disinformation is reaching the U.S. ahead of the 2024 election (NBC News)
- The Rise of the Compliant Speech Platform (Lawfare)
- ExTwitter Makes It Official: Blocks Are No Longer Blocks (Techdirt)
- People are flocking to Bluesky as X makes more unwanted changes (The Verge)
- Instagram blames some moderation issues on human reviewers, not AI (TechCrunch)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our launch sponsor Modulate, which builds prosocial voice technology that combats online toxicity and elevates the health and safety of online communities. Mike Pappas joins us for our bonus chat, talking with Mike about the ever-important decision between building your own trust & safety tools and buying them from vendors.
Filed Under: ai, artificial intelligence, content moderation, disinformation, misinformation, regulation
Companies: bluesky, instagram, meta, twitter, x
Republicans, Musk, Pretend To Care About Media Consolidation… When George Soros (Read: A Jew) Is Involved
from the do-as-I-say,-not-as-I-do dept
Tue, Oct 8th 2024 05:25am - Karl Bode
Though Republicans are the worse of the two offenders, neither Democrats nor Republicans have cared much about preventing media consolidation. As a result, U.S. journalism and media have increasingly fallen into the hands of a handful of wealthy corporations and billionaires, and it’s routinely reflected in terrible journalism (especially political journalism) and bumbling companies like Time Warner.
In fact, you might remember that in 2017 the Trump FCC went comically out of its way to strip away what was left of media consolidation limits so that Sinclair Broadcasting — a right wing propaganda empire posing as a local news channel — could merge with Tribune Broadcasting. The irony: Sinclair lied so much during its merger application that even the Trump FCC had to ultimately block the deal.
That’s a long way of saying that Republicans historically couldn’t care less about media consolidation at the hands of rich billionaires and corporations. In fact, they routinely, actively encourage it. Unless, of course, George Soros is distantly involved, apparently.
In what would otherwise be a completely ignored deal, the FCC earlier this month voted 3-2 to approve the bankruptcy restructuring of Audacy, the nation’s second biggest owner of radio stations. Audacy was delisted from the New York Stock Exchange last May due to company incompetence and an overall downturn in interest in traditional media (though Audacy is also involved in a lot of podcasts).
The bankruptcy is expected to reduce Audacy’s debt by about 80%, to $350 million. As part of that restructuring, the post-bankruptcy Audacy will see a 57% ownership stake held by Laurel Tree Opportunities Corporation. It’s not really any sort of controversy; it’s the kind of restructuring that happens constantly.
But Laurel Tree Opportunities Corporation is owned by FPR Capital Holdings LLC, which in turn is managed by the Soros-funded Fund for Policy Reform. This, apparently, was enough to send Republicans and Elon Musk into an absolute tizzy over the last week, with folks like Musk lying for attention, claiming that Soros was buying the bankrupt radio company to “spread propaganda”:
Soros, as you might or might not know, is held up as a bogeyman and bizarre caricature by the right wing because he’s Jewish. Soros’ investments, like those of most extremely rich people, run the gamut of industries and businesses. But because this latest investment is in media, Republicans (as per a very classy tradition) immediately jumped to seed antisemitic tropes about Jewish control of media.
Highly consolidated, Rupert Murdoch-owned, right-wing news outlets, traditionally and quite correctly accused of spreading propaganda, got right to work pretending there was something illegal or nefarious about the FCC’s fairly routine bankruptcy restructuring vote:
FCC Commissioner Brendan Carr, who quite simply could not give any less of a shit about propaganda (if it’s coming from right-wing sources like Fox, Sinclair, OAN, Newsmax, Breitbart, or any of a million other conservative organizations pretending to do journalism), or about media consolidation (whether it’s Time Warner or Fox or Sinclair), also threw a little hissy fit, claiming that the FCC’s approval of the bankruptcy restructuring was somehow illegal:
The FCC, for its part, was forced to issue a polite statement that nothing about this transaction was illegal or even out of the ordinary, and that Republicans were being, well, fucking gross:
“The process we use to facilitate this license transfer is identical to the one recently used by the agency in the bankruptcy proceedings of Cumulus Media in 2018, iHeart Media in 2019, Liberman Television in 2019, Fusion Connect in 2019, Windstream Holdings in 2020, America-CV Station Group in 2021, and Alpha Media in 2021. To suggest otherwise is cynical and wrong, as this precedent clearly demonstrates. Our practice here and in these prior cases is designed to facilitate the prompt and orderly emergence from bankruptcy of a company that is a licensee under the Communications Act.”
Now it certainly is true that both parties have historically failed utterly to rein in corporate consolidation in radio, TV, and pretty much every other industry (with occasional exceptions). That’s resulted in no limit of harm to journalism and media, as competition and diverse voices are forced out of the market and Americans are bombarded by a rotating crop of corporatist and partisan mush.
But as is often the case, the Republican outrage here is entirely performative.
This is a party that routinely supports unchecked corporate power, monopolization, and consolidation at every turn. The only time it even tries to pretend otherwise is either in an instance like this when they’re seeding panic about Jewish ownership, or when they’re trying to gain leverage over companies for some reason (like when they pretended to care about “antitrust reform” for a few weeks to successfully scare tech companies away from moderating right wing racist propaganda on the internet).
These are not serious people. The GOP is not a serious party. It cares about two things: the power of rich white Christian men and unchecked wealth accumulation. Everything else is performance. Ironically, our highly consolidated press, more worried about maximum engagement and access than the truth, routinely fails to point this fact out to the American public, giving ignorant propaganda efforts like this one more traction and “legitimacy” than they might otherwise deserve.
Filed Under: antisemitism, bigotry, disinformation, elon musk, fcc, george soros, jessica rosenworcel, media consolidation, propaganda, radio
Companies: audacy, fpr capital holdings, laurel tree opportunities corporation
Ctrl-Alt-Speech: Moderation Has A Well-Known Reality Bias
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Professor Kate Klonick, who has studied and written about trust & safety for many years and is currently studying the DSA & DMA in the EU as a Fulbright Scholar. They cover:
- EU Commission’s Digital Fairness Fitness Check (European Commission)
- Differences in misinformation sharing can lead to politically asymmetric sanctions (Nature)
- Inside Two Years of Turmoil at Big Tech’s Anti-Terrorism Group (Wired)
- Big Tech’s Promise Never To Block Access To Politically Embarrassing Content Apparently Only Applies To Democrats (Techdirt)
- Someone Put Facial Recognition Tech onto Meta’s Smart Glasses to Instantly Dox Strangers (404 Media)
- Americans’ Views Mixed on Tech’s Role in Politics (Anchor Change with Katie Harbath)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor TaskUs, a leading company in the T&S field which provides a range of platform integrity and digital safety solutions. In our Bonus Chat at the end of the episode, Marlyn Savio, a psychologist and research manager at TaskUs, talks to Mike about a recent study they released regarding frontline moderators and their perceptions and experiences dealing with severe content.
Filed Under: content moderation, disinformation, eu, facial recognition, kate klonick, misinformation
Companies: google, meta, twitter, x
John Kerry Accurately Explains First Amendment, MAGA World Loses Its Mind
from the up-is-down dept
In this stupid partisan world we live in, the MAGA world has decided that simply accurately explaining that the First Amendment does not allow for the suppression of speech (which is a good thing!) is somehow a call for abolishing the First Amendment. This isn’t even “blaming the messenger.” It’s misinterpreting the messenger and demanding he be drawn and quartered.
We’ve pointed out a few times how ridiculous both Democrats and Republicans have been of late when it comes to the First Amendment. Unfortunately, both have been making arguments for trimming back our First Amendment rights. Donald Trump has called for jailing those who criticize the Supreme Court (something, I should note, he regularly does himself).
However, as we’ve pointed out, Democrats don’t have the best track record on speech either. They’ve been caught calling for jailing social media execs over their speech, punishing booksellers for selling books they dislike, and making certain kinds of misinformation illegal.
So, I was certainly concerned when I saw a few headlines this week about John Kerry’s conversation last week at a World Economic Forum event, in which he talked about the First Amendment as a “major block” to punishing companies that spread disinformation.
His word choice was awkward and could be interpreted as criticizing the First Amendment. However, after watching the video clip of him saying it, I realized he’s just accurately describing reality: the First Amendment is a block to removing disinformation.
Because… it is? And that’s generally a good thing.
He was asked about how to deal with disinformation online, and he said, factually, that you can’t use the law to suppress that speech:
“You know there’s a lot of discussion now about how you curb those entities in order to guarantee that you’re going to have some accountability on facts, etc. But look, if people only go to one source, and the source they go to is sick, and, you know, has an agenda, and they’re putting out disinformation, our First Amendment stands as a major block to be able to just, you know, hammer it out of existence…”
If he had then said “and that’s why we need to repeal the First Amendment,” then I’d be right there with the people concerned about this. And I would rather he had followed up that statement by saying something along the lines of “and it’s a good thing the First Amendment is a block to such things.” But he still doesn’t appear to be saying that the First Amendment needs to change. He appears to be explaining reality to a questioner from the audience who wants to suppress speech.
But, of course, the MAGAsphere has gone crazy over this. Fox News, the National Review, and RT (of course) are all hammering it. On YouTube, the MAGA nutjobs are going crazy over it. Just a few examples, starting with everyone’s most mocked Russian-paid troll victim, Tim Pool:
Except, nowhere does Kerry call for “ending” free speech at all. He just notes that the First Amendment blocks the government from suppressing speech. Which is true! You’d think that the Russian-paid Tim Pool would, you know, appreciate that?
There are a bunch of others just like this:
Again, if he had actually called to abolish the First Amendment or even to weaken it, I’d be here calling it out. And again, as mentioned above, there have been other Democrats that have, in fact, called for unconstitutional speech suppression.
From the descriptions I initially saw of what he said, I was all ready to write a piece slamming Kerry for this. But then I watched it. And he just was… explaining accurately that the First Amendment blocks the government from suppressing speech.
He doesn’t call for that to be changed. He certainly doesn’t (as some of the folks above claim) call for “abolishing” the First Amendment or for censorship. One of the screenshots above from one of Elon’s favorite Twitter trolls falsely quotes Kerry as saying that the First Amendment “stands as a major roadblock for us right now,” which is not what he said at all. That’s just false.
Since the question itself was regarding disinformation around climate change, he does say that the best way to deal with climate change is to “win the ground” and elect people who can “implement change.” But it’s clear that he’s talking about implementing change regarding the climate, not about changing the First Amendment.
Meanwhile, I’m pretty sure literally none of the people screaming about this have discussed Trump’s announced plans to jail people who criticize the Supreme Court (which is a legitimate First Amendment threat).
I wonder why?
Filed Under: 1st amendment, climate change, disinformation, free speech, john kerry, tim pool
Ctrl-Alt-Speech: Is This The Real Life? Is This Just Fakery?
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Cathryn Weems, who has held T&S roles at Yahoo, Google, Dropbox, Twitter and Epic Games. They cover:
- Google outlines plans to help you sort real images from fake (The Verge)
- Fake AI “podcasters” are reviewing my book and it’s freaking me out (Ars Technica)
- We Don’t Need Google to Help “Reimagine” Election Misinformation (Tech Policy Press)
- Social media owners top global survey of misinformation concerns (The Guardian)
- Expert Survey on the Global Information Environment (IPIE)
- X’s First Transparency Report Since Elon Musk’s Takeover Is Finally Here (Wired)
- Telegram will now provide some user data to authorities (BBC)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Concentrix, the technology and services leader driving trust, safety, and content moderation globally. In our Bonus Chat at the end of the episode, clinical psychologist Dr Serra Pitts, who leads the psychological health team for Trust & Safety at Concentrix, talks to Ben about how to keep moderators healthy and safe at work and the innovative use of heart rate variability technology to monitor their physical response to harmful content.
Filed Under: ai, artificial intelligence, content moderation, disinformation, elon musk, misinformation
Companies: google, telegram, twitter, x
Ex-Congressmen Pen The Most Ignorant, Incorrect, Confused, And Dangerous Attack On Section 230 I’ve Ever Seen
from the this-is-not-how-anything-works dept
In my time covering internet speech issues, I’ve seen some truly ridiculous arguments regarding Section 230. I even created my ever-handy “Hello! You’ve Been Referred Here Because You’re Wrong About Section 230 Of The Communications Decency Act” article four years ago, which still gets a ton of traffic to this day.
But I’m not sure I’ve come across a worse criticism of Section 230 than the one recently published by former House Majority Leader Dick Gephardt and former Congressional Rep. Zach Wamp. They put together the criticism for Democracy Journal, entitled “The Urgent Task of Reforming Section 230.”
There are lots of problems with the article, which we’ll get into. But first, I want to focus on the biggest, most brain-numbingly obvious problem, which is that they literally admit they don’t care about the solution:
People on both sides of the aisle want to reform Section 230, and there’s a range of ideas on how to do it. From narrowing its rules to sunsetting the provision entirely, dozens of bills have emerged offering different approaches. Some legislators argue that platforms should be liable for certain kinds of content—for example, health disinformation or terrorism propaganda. Others propose removing protections for advertisements or content provided by a recommendation algorithm. CRSM is currently bringing together tech, mental health, education, and policy experts to work on solutions. But the specifics are less important than the impact of the reform. We will support reform guided by commonsense priorities.
I have pointed out over and over again through the years that I am open to proposals on Section 230 reform, but the specifics are all that matter, because almost every proposal to date to “reform Section 230” does not understand Section 230 or (more importantly) how it interacts with the First Amendment.
So saying “well, any reform is what matters” isn’t just flabbergasting. It’s a sign of people who have never bothered to seriously sit with the challenges, trade-offs, and nuances of changing Section 230. The reality (as we’ve explained many times) is that changing Section 230 will almost certainly massively benefit some and massively harm others. Saying “meh, doesn’t matter, as long as we do it” suggests a near total disregard for the harm that any particular solution might do, and to whom.
Even worse, it disregards how nearly every solution proposed will actually cause real and significant harm to the people reformers insist they’re trying to protect. And that’s because they don’t care or don’t want to understand how these things actually work.
The rest of the piece only further cements the fact that Gephardt and Wamp have no experience with this issue and think in extremely simplistic terms. They think that (1) “social media is kinda bad these days,” (2) “Section 230 allows social media to be bad,” and thus (3) “reforming Section 230 will make social media better.” All three of these statements are wrong.
Hilariously, the article starts off by name-checking Prof. Jeff Kosseff’s book about Section 230. However, it then becomes clear that neither former Congressman read the book, because it would have corrected many of the errors in the piece. Then, they point out that both of them voted for CDA 230 and call it their “most regrettable” vote:
Law professor Jeff Kosseff calls it “the 26 words that created the internet.” Senator Ron Wyden, one of its co-authors, calls it “a sword and a shield” for online platforms. But we call it Section 230 of the 1996 Communications Decency Act, one of our most regrettable votes during our careers in Congress.
While that’s the title of Jeff’s book, he didn’t coin that phrase, so it’s even more evidence that they didn’t read it. Also, is that really such a “regrettable vote”? I see both of them voted for the Patriot Act. Wouldn’t that, maybe, be a bit more regrettable? Gephardt voted for the Crime Bill of 1994. I mean, come on.
Section 230 has enabled the internet to thrive, helped build out a strong US innovation industry online, and paved the way for more speech online. How is that worth “regretting”?
These two former politicians have to resort to rewriting history:
But the internet has changed dramatically since the 1990s, and the tech industry’s values have changed along with it. In 1996, Section 230 was protecting personal pages or small forums where users could talk about a shared hobby. Now, tech giants like Google, Meta, and X dominate all internet traffic, and both they and startups put a premium on growth. It is fundamental to their business model. They make money from advertising: Every new user means more profit. And to attract and maintain users, platforms rely on advanced algorithms that track our every online move, collecting data and curating feeds to our interests and demographics, with little regard for the reality that the most engaging content is often the most harmful.
When 230 was passed, it was in response to lawsuits involving two internet giants of the day (CompuServe, owned by accounting giant H&R Block at the time, and Prodigy, owned by IBM and Sears at the time), not some tiny startups. And yes, those companies also had advertisements and “put a premium on growth.” So it’s not clear why the authors of this piece think otherwise.
The claim that “the most engaging content is often the most harmful” rests on an implicit (and obsolete) assumption: that the companies Gephardt and Wamp are upset about optimize for “engagement.” While that may have been true over a decade ago, when companies first began experimenting with algorithmic recommendations, most companies pretty quickly realized that optimizing on engagement alone was actually bad for business.
It frustrates users over time, drives away advertisers, and does not make for a successful long-term strategy. That’s why every major platform has moved away from algorithms that focus solely on engagement: they know it’s not a good long-term strategy. Yet Gephardt and Wamp are living in the past and think that algorithms are solely focused on engagement. They’re not, because the market says that’s a bad idea.
Just like Big Tobacco, Big Tech’s profits depend on an addictive product, which is marketed to our children to their detriment. Social media is fueling a national epidemic of loneliness, depression, and anxiety among teenagers. Around three out of five teenage girls say they have felt persistently sad or hopeless within the last year. And almost two out of three young adults either feel they have been harmed by social media themselves or know someone who feels that way. Our fellow members of the Council for Responsible Social Media (CRSM) at Issue One know the harms all too well: Some of them have lost children to suicide because of social media. And as Facebook whistleblower Frances Haugen, another CRSM member, exposed, even when social media executives have hard evidence that their company’s algorithms are contributing to this tragedy, they won’t do anything about it—unless they are forced to change their behavior.
Where to begin on this nonsense? No, social media is not “addictive” like tobacco. Tobacco is a thing that includes nicotine, which is a physical substance that goes into your body and creates an addictive response in your bloodstream. Some speech online… is not that.
And, no, the internet is not “fueling a national epidemic of loneliness, depression, and anxiety among teenagers.” This has been debunked repeatedly. The studies do not support this. As for the stat that “three out of five teenage girls say they have felt persistently sad or hopeless,” well… maybe there are some other reasons for that which are not social media? Maybe we’re living through a time of upheaval and nonsense where things like climate change are a major concern? And our leaders in Congress (like the authors of the piece I’m writing about) are doing fuck all to deal with it?
Maybe?
But, no, it couldn’t be that our elected officials dicked around and did nothing useful for decades and fucked the planet.
Must be social media!
Also, they’re flat-out lying about what Haugen found. She found that the company was studying those issues to figure out how to fix them. The whole point of the study that everyone keeps pointing to was that a team at Facebook was trying to figure out whether the site was leading to bad outcomes among kids, in order to fix it.
Almost everything written by Gephardt and Wamp in this piece is active misinformation.
It’s not just our children. Our very democracy is at stake. Algorithms routinely promote extreme content, including disinformation, that is meant to sow distrust, create division, and undermine American democracy. And it works: An alarming 73 percent of election officials report an increase in threats in recent years, state legislatures across the country have introduced hundreds of harmful bills to restrict voting, about half of Americans believe at least one conspiracy theory, and violence linked to conspiracy theories is on the rise. We’re in danger of creating a generation of youth who are polarized, politically apathetic, and unable to tell what’s real from what’s fake online.
Blaming all of the above on Section 230 is literal disinformation. To claim that somehow what’s described here is 230’s fault is so disconnected from reality as to raise serious questions about the ability of the authors of the piece to do basic reasoning.
First, nearly all disinformation is protected by the First Amendment, not Section 230. Are Gephardt and Wamp asking to repeal the First Amendment? Second, threats towards election officials are definitely not a Section 230 issue.
But, sure, okay, let’s take them at their word that they think Section 230 is the problem and “reform” is needed. I know they say they don’t care what the reform is, just that it happens, but let’s walk through some hypotheticals.
Let’s start with an outright repeal. Will that make the US less polarized and stop disinformation? Of course not. It would make it worse! Because Section 230 gives platforms the freedom to moderate their sites as they see fit, utilizing their own editorial discretion without fear of liability.
Remove that, and you get companies who are less able to remove disinformation because the risk of a legal fight increases. So any lawyer would tell company leadership to minimize their efforts to cut down on disinformation.
Okay, some people say, “maybe just change the law so that ‘you’re now liable for anything on your site.’” Well, okay, but now you have a very big First Amendment problem and, again, you get worse results. Because existing case law on the First Amendment from the Supreme Court on down says that you can’t be liable for distributing content if you don’t know it violates the law.
So, again, our hypothetical lawyers in this hypothetical world will say, “okay, do everything to avoid knowledge.” That will mean less reviewing of content, less moderation.
Or, alternatively, you get massive over-moderation to limit the risk of liability. Perhaps that’s what Gephardt and Wamp really want: no more freedom for the filthy public to ever speak. Maybe all speaking should only occur on heavily limited TV. Maybe we go back to the days before civil rights were a thing, and it was just white men on TV telling us how everyone should live?
This is the problem. Gephardt and Wamp are upset about some vague things they claim are caused by social media, and only due to Section 230. They believe that some vague amorphous reform will fix it.
Except all of that is wrong. The problems they’re discussing are broader, societal-level problems that these two former politicians failed to do anything about when they were in power. Now they are blaming people exercising their own free speech for these problems, and demanding that we change some unrelated law to… what…? Make themselves feel better?
This is not how you solve problems.
In short, Big Tech is putting profits over people. Throughout our careers, we have both supported businesses large and small, and we believe in their right to succeed. But they can’t be allowed to avoid responsibility by thwarting regulation of a harmful product. No other industry works like this. After a door panel flew off a Boeing plane mid-flight in January, the Federal Aviation Administration grounded all similar planes and launched an investigation into their safety. But every time someone tries to hold social media companies accountable for the dangerous design of their products, they hide behind Section 230, using it as a get-out-of-jail-free card.
Again, airplanes are not speech. Just like tobacco is not speech. These guys are terrible at analogies. And yes, every other industry that involves speech does work like this. The First Amendment protects nearly all the speech these guys are complaining about.
Section 230 has never been a “get out of jail” card. This is a lazy trope spread by people who never have bothered to understand Section 230. Section 230 only says that the liability for violative content on an internet service goes to whoever created the content. That’s it. There’s no “get out of jail free.” Whoever creates the violative content can still go to jail (if that content really violates the law, which in most cases it does not).
If their concerns are about profits, well, did Gephardt and Wamp spend any time reforming how capitalism works when they were lawmakers? Did they seek to change things so that the fiduciary duty of company boards wasn’t to deliver increasing returns every three months? Did they do anything to push for companies to be able to take a longer term view? Or to support stakeholders beyond investors?
No? Then, fellas, I think we found the problem. It’s you and other lawmakers who didn’t fix those problems, not Section 230.
That wasn’t the intent of Section 230. It was meant to protect companies acting as good Samaritans, ensuring that if a user posts harmful content and the platform makes a good-faith effort to moderate or remove it, the company can’t be held liable.
If you remove Section 230, they will have even less incentive to remove that content.
We still agree with that principle, but Big Tech is far from acting like the good Samaritan. The problem isn’t that there are eating disorder videos, dangerous conspiracy theories, hate speech, and lies on the platforms—it’s that the companies don’t make a good-faith effort to remove this content, and that their products are designed to actually amplify it, often intentionally targeting minors.
This is now reaching levels of active disinformation. Yes, companies do, in fact, seek to remove that content. It violates all sorts of policies, but (1) it’s not as easy as people think to actually deal with that content (because it’s way harder to identify than ignorant fools with no experience think it is) and (2) studies have shown that removing that content often makes problems like eating disorders worse rather than better (because it’s a demand-side problem, and users looking for that content will keep looking for it and find it in darker and darker places online, whereas when it’s on mainstream social media, those sites can provide better interventions and guide people to helpful resources).
If Gephardt and Wamp spoke to literally any actual experts on this, they could have been informed about the realities, nuances, and trade-offs here. But they didn’t. They appear to have surrounded themselves with moral panic nonsense peddlers.
They’re former Congressmen who assume they must know the right answer, which is “let’s run with a false moral panic!”
Of course, you had to know that this ridiculous essay wouldn’t be complete without a “fire in a crowded theater” line, so of course it has that:
There is also a common claim from Silicon Valley that regulating social media is a violation of free speech. But free speech, as courts have ruled time and time again, is not unconditional. You can’t yell “fire” in a crowded theater where there is no fire because the ensuing stampede would put people in real danger. But this is essentially what social media companies are letting users do by knowingly building products that spread disinformation like wildfire.
Yup. These two former lawmakers really went there, using the trope that immediately identifies you as ignorant of the First Amendment. There are a few limited classes of speech that are unprotected, but the Supreme Court has signaled loud and clear that it is not expanding the list. The “fire in a crowded theater” line was dicta in a case about locking up someone protesting the draft (do Gephardt and Wamp think we should lock up people for protesting the draft?!?), a case that hasn’t been considered good law in seven decades.
Holding social media companies accountable for the amplification of harmful content—whether disinformation, conspiracy theories, or misogynistic messages—isn’t a violation of the First Amendment.
Yes, it literally is. I mean, there’s no two ways around it. All that content, with a very, very few possible exceptions, is protected under the First Amendment.
Even the platform X, formerly known as Twitter, agrees that we have freedom of speech, but not freedom of reach, meaning posts that violate the platform’s terms of service will be made “less discoverable.”
You absolute chuckleheads. The only reason sites can do “freedom of speech, but not freedom of reach” is because Section 230 allows them to moderate without fear of liability. If you remove that, you get less moderation.
In a lawsuit brought by the mother of a young girl who died after copying a “blackout challenge” that TikTok’s algorithm allegedly recommended to her, the Third Circuit Court of Appeals recently ruled that Section 230 does not protect TikTok from liability when the platform’s own design amplifies harmful content. This game-changing decision, if allowed to stand, could lead to a significant curtailing of Section 230’s shield. Traditional media companies are already held to these standards: They are liable for what they publish, even content like letters to the editor, which are written by everyday people.
First of all, that ruling is extremely unlikely to stand, because even many of Section 230’s vocal critics recognize that the reasoning there made no sense. But second, the court said that algorithmic recommendations are expressive. And the end result is that while such activity may not be immune under 230, it remains protected under the First Amendment, because the First Amendment protects expression.
This is why anyone who is going to criticize Section 230 absolutely has to understand how it intersects with the First Amendment. And anyone claiming that “you can’t shout fire in a crowded theater” is good law is so ignorant of the very basic concepts that it’s difficult to take them seriously.
If anything, Section 230 reforms could make platforms more pleasant for users; in the case of X, reforms could entice advertisers to come back after they fled in 2022-23 over backlash around hate speech. Getting rid of the vitriol could make space for creative and fact-based content to thrive.
I’m sorry, but are they claiming that “vitriol” is not protected under the First Amendment? Dick and Zach, buddies, pals, please have a seat. I have some unfortunate news for you that may make you sad.
But, don’t worry. Don’t blame me for it. It must be Section 230 making me make you sad when I tell you: vitriol is protected by the First Amendment.
The changes you suggest are not going to help advertisers come back to ExTwitter. Again, they will make things worse, because Elon is not going to want to deal with liability, so he will do even less moderation once changes to Section 230 increase the liability for moderation choices.
How can you not understand this?
But for now, these platforms are still filled with lies, extremism, and harmful content.
Which is protected by the First Amendment, and which won’t change if Section 230 is changed.
We know what it’s like to sit at the dinner table and watch our grandchildren, even those under ten years old, scroll mindlessly on their phones. We genuinely worry, every time they pick them up, what the devices are doing to them—and to all of us.
Which also has got nothing to do with Section 230 and won’t change no matter what you do to Section 230?
Also, um, have you tried… parenting?
This may really be the worst piece on Section 230 I have ever read. And I’ve gone through both Ted Cruz’s and Josh Hawley’s Section 230 proposals.
This entire piece misunderstands the problems, misunderstands the law, misunderstands the constitution, then lies about the causes, blames the wrong things, has no clear actual reform policy, and is completely ignorant of how the changes they seem to want would do more damage to the very things they’re claiming need fixing.
It’s a stunning display of ignorant solutionism by ignorant fools. It’s the type of thing that could really only be pulled off by overconfident ex-Congresspeople with no actual understanding of the issues at play.
Filed Under: 1st amendment, content moderation, dick gephardt, disinformation, free speech, moral panic, section 230, social media, zach wamp
Techdirt Podcast Episode 397: The People Who Turn Lies Into Reality
from the checking-in dept
It was over six years ago that we last had Renée DiResta on the podcast for a detailed discussion about misinformation and disinformation on social media. Since then, she’s not only led extensive research on the subject, she’s also become a central figure in the fever-dream conspiracy theories of online disinformation peddlers. Her new book, Invisible Rulers: The People Who Turn Lies Into Reality, dives deep into the modern ecosystem of online disinformation, and she joins us again on this week’s episode to discuss the many things that have changed in the past six years.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: disinformation, misinformation, podcast, renee diresta, social media
Ctrl-Alt-Speech: The Internet Is (Still) For Porn, With Yoel Roth
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Yoel Roth, former head of trust & safety at Twitter and now head of trust & safety at Match Group. Together they cover:
- X tweaks rules to formally allow adult content (TechCrunch)
- Temu joins Shein in facing stricter regulation in the EU (The Verge)
- Facebook’s Taylor Swift Fan Pages Taken Over by Animal Abuse, Porn, and Scams (404 Media)
- Post-January 6th deplatforming reduced the reach of misinformation on Twitter (Nature)
- Misunderstanding the harms of online misinformation (Nature)
- Israel Secretly Targets U.S. Lawmakers With Influence Campaign on Gaza War (NY Times)
- Twitch terminates all members of its Safety Advisory Council (CNBC)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: content moderation, deplatforming, digital services act, disinformation, dsa, eu, gaza, israel, misinformation
Companies: meta, shein, temu, twitch, twitter, x