
Ex-Congressmen Pen The Most Ignorant, Incorrect, Confused, And Dangerous Attack On Section 230 I’ve Ever Seen

from the this-is-not-how-anything-works dept

In my time covering internet speech issues, I’ve seen some truly ridiculous arguments regarding Section 230. I even created my ever-handy “Hello! You’ve Been Referred Here Because You’re Wrong About Section 230 Of The Communications Decency Act” article four years ago, which still gets a ton of traffic to this day.

But I’m not sure I’ve come across a worse criticism of Section 230 than the one recently published by former House Majority Leader Dick Gephardt and former Congressional Rep. Zach Wamp. They published it in Democracy Journal under the title “The Urgent Task of Reforming Section 230.”

There are lots of problems with the article, which we’ll get into. But first, I want to focus on the biggest, most brain-numbingly obvious problem, which is that they literally admit they don’t care about the solution:

People on both sides of the aisle want to reform Section 230, and there’s a range of ideas on how to do it. From narrowing its rules to sunsetting the provision entirely, dozens of bills have emerged offering different approaches. Some legislators argue that platforms should be liable for certain kinds of content—for example, health disinformation or terrorism propaganda. Others propose removing protections for advertisements or content provided by a recommendation algorithm. CRSM is currently bringing together tech, mental health, education, and policy experts to work on solutions. But the specifics are less important than the impact of the reform. We will support reform guided by commonsense priorities.

I have pointed out over and over again through the years that I am open to proposals on Section 230 reform, but the specifics are all that matter, because almost every proposal to date to “reform Section 230” does not understand Section 230 or (more importantly) how it interacts with the First Amendment.

So saying “well, any reform is what matters” isn’t just flabbergasting. It’s a sign of people who have never bothered to seriously sit with the challenges, trade-offs, and nuances of changing Section 230. The reality (as we’ve explained many times) is that changing Section 230 will almost certainly massively benefit some and massively harm others. Saying “meh, doesn’t matter, as long as we do it” suggests a near total disregard for the harm that any particular solution might do, and to whom.

Even worse, it disregards how nearly every solution proposed will actually cause real and significant harm to the people reformers insist they’re trying to protect. And that’s because they don’t care or don’t want to understand how these things actually work.

The rest of the piece only further cements the fact that Gephardt and Wamp have no experience with this issue and seem to simply think in extremely simplistic terms. They think that (1) “social media is kinda bad these days” (2) “Section 230 allows social media to be bad” and thus (3) “reforming Section 230 will make social media better.” All three of these statements are wrong.

Hilariously, the article starts off by name-checking Prof. Jeff Kosseff’s book about Section 230. However, it quickly becomes clear that neither former Congressman read the book, because it would have corrected many of the errors in the piece. They then point out that both of them voted for CDA 230 and call it their “most regrettable” vote:

Law professor Jeff Kosseff calls it “the 26 words that created the internet.” Senator Ron Wyden, one of its co-authors, calls it “a sword and a shield” for online platforms. But we call it Section 230 of the 1996 Communications Decency Act, one of our most regrettable votes during our careers in Congress.

While that’s the title of Jeff’s book, he didn’t coin that phrase, so it’s even more evidence that they didn’t read it. Also, is that really such a “regrettable vote”? I see both of them voted for the Patriot Act. Wouldn’t that, maybe, be a bit more regrettable? Gephardt voted for the Crime Bill of 1994. I mean, come on.

Section 230 has enabled the internet to thrive, helped build out a strong US innovation industry online, and paved the way for more speech online. How is that worth “regretting”?

These two former politicians have to resort to rewriting history:

But the internet has changed dramatically since the 1990s, and the tech industry’s values have changed along with it. In 1996, Section 230 was protecting personal pages or small forums where users could talk about a shared hobby. Now, tech giants like Google, Meta, and X dominate all internet traffic, and both they and startups put a premium on growth. It is fundamental to their business model. They make money from advertising: Every new user means more profit. And to attract and maintain users, platforms rely on advanced algorithms that track our every online move, collecting data and curating feeds to our interests and demographics, with little regard for the reality that the most engaging content is often the most harmful.

When 230 was passed, it was in response to lawsuits involving two internet giants of the day (CompuServe, then owned by accounting giant H&R Block, and Prodigy, then owned by IBM and Sears), not some tiny startups. And yes, those companies also had advertisements and “put a premium on growth.” So it’s not clear why the authors of this piece think otherwise.

The claim that “the most engaging content is often the most harmful” rests on an implicit (and obsolete) assumption: that the companies Gephardt and Wamp are upset about optimize for “engagement.” While that may have been true over a decade ago, when platforms first began experimenting with algorithmic recommendations, most companies pretty quickly realized that optimizing on engagement alone was actually bad for business.

It frustrates users over time, drives away advertisers, and does not make for a successful long-term strategy. That’s why every major platform has moved away from algorithms that focus solely on engagement. Yet Gephardt and Wamp are living in the past, still convinced that algorithms optimize for engagement alone. They don’t, because the market says that’s a bad idea.

Just like Big Tobacco, Big Tech’s profits depend on an addictive product, which is marketed to our children to their detriment. Social media is fueling a national epidemic of loneliness, depression, and anxiety among teenagers. Around three out of five teenage girls say they have felt persistently sad or hopeless within the last year. And almost two out of three young adults either feel they have been harmed by social media themselves or know someone who feels that way. Our fellow members of the Council for Responsible Social Media (CRSM) at Issue One know the harms all too well: Some of them have lost children to suicide because of social media. And as Facebook whistleblower Frances Haugen, another CRSM member, exposed, even when social media executives have hard evidence that their company’s algorithms are contributing to this tragedy, they won’t do anything about it—unless they are forced to change their behavior.

Where to begin on this nonsense? No, social media is not “addictive” like tobacco. Tobacco is a thing that includes nicotine, which is a physical substance that goes into your body and creates an addictive response in your bloodstream. Some speech online… is not that.

And, no, the internet is not “fueling a national epidemic of loneliness, depression, and anxiety among teenagers.” This has been debunked repeatedly. The studies do not support this. As for the stat that “three out of five teenage girls say they have felt persistently sad or hopeless” well… maybe there are some other reasons for that which are not social media? Maybe we’re living through a time of upheaval and nonsense where things like climate change are a major concern? And our leaders in Congress (like the authors of the piece I’m writing about) are doing fuck all to deal with it?

Maybe?

But, no, it couldn’t be that our elected officials dicked around and did nothing useful for decades and fucked the planet.

Must be social media!

Also, they’re flat-out lying about what Haugen found. She found that the company was studying those issues to figure out how to fix them. The whole point of the study everyone keeps pointing to was that a team at Facebook was trying to figure out whether the site was leading to bad outcomes among kids, in order to fix them.

Almost everything written by Gephardt and Wamp in this piece is active misinformation.

It’s not just our children. Our very democracy is at stake. Algorithms routinely promote extreme content, including disinformation, that is meant to sow distrust, create division, and undermine American democracy. And it works: An alarming 73 percent of election officials report an increase in threats in recent years, state legislatures across the country have introduced hundreds of harmful bills to restrict voting, about half of Americans believe at least one conspiracy theory, and violence linked to conspiracy theories is on the rise. We’re in danger of creating a generation of youth who are polarized, politically apathetic, and unable to tell what’s real from what’s fake online.

Blaming all of the above on Section 230 is literal disinformation. To claim that somehow what’s described here is 230’s fault is so disconnected from reality as to raise serious questions about the ability of the authors of the piece to do basic reasoning.

First, nearly all disinformation is protected by the First Amendment, not Section 230. Are Gephardt and Wamp asking to repeal the First Amendment? Second, threats towards election officials are definitely not a Section 230 issue.

But, sure, okay, let’s take them at their word that they think Section 230 is the problem and “reform” is needed. I know they say they don’t care what the reform is, just that it happens, but let’s walk through some hypotheticals.

Let’s start with an outright repeal. Will that make the US less polarized and stop disinformation? Of course not. It would make it worse! Because Section 230 gives platforms the freedom to moderate their sites as they see fit, utilizing their own editorial discretion without fear of liability.

Remove that, and you get companies that are less able to remove disinformation because the risk of a legal fight increases. So any lawyer would tell company leadership to minimize their efforts to cut down on disinformation.

Okay, some people say, “maybe just change the law so that you’re now liable for anything on your site.” Well, okay, but now you have a very big First Amendment problem and, again, you get worse results, because existing First Amendment case law, from the Supreme Court on down, says that you can’t be held liable for distributing content if you don’t know it violates the law.

So, again, our hypothetical lawyers in this hypothetical world will say, “okay, do everything to avoid knowledge.” That will mean less reviewing of content, less moderation.

Or, alternatively, you get massive over-moderation to limit the risk of liability. Perhaps that’s what Gephardt and Wamp really want: no more freedom for the filthy public to ever speak. Maybe all speaking should only occur on heavily limited TV. Maybe we go back to the days before civil rights were a thing, and it was just white men on TV telling us how everyone should live?

This is the problem. Gephardt and Wamp are upset about some vague things they claim are caused by social media, and only due to Section 230. They believe that some vague amorphous reform will fix it.

Except all of that is wrong. The problems they’re discussing are broader, societal-level problems that these two former politicians failed to do anything about when they were in power. Now they are blaming people exercising their own free speech for these problems, and demanding that we change some unrelated law to… what…? Make themselves feel better?

This is not how you solve problems.

In short, Big Tech is putting profits over people. Throughout our careers, we have both supported businesses large and small, and we believe in their right to succeed. But they can’t be allowed to avoid responsibility by thwarting regulation of a harmful product. No other industry works like this. After a door panel flew off a Boeing plane mid-flight in January, the Federal Aviation Administration grounded all similar planes and launched an investigation into their safety. But every time someone tries to hold social media companies accountable for the dangerous design of their products, they hide behind Section 230, using it as a get-out-of-jail-free card.

Again, airplanes are not speech. Just like tobacco is not speech. These guys are terrible at analogies. And yes, every other industry that involves speech does work like this. The First Amendment protects nearly all the speech these guys are complaining about.

Section 230 has never been a “get out of jail” card. This is a lazy trope spread by people who never have bothered to understand Section 230. Section 230 only says that the liability for violative content on an internet service goes to whoever created the content. That’s it. There’s no “get out of jail free.” Whoever creates the violative content can still go to jail (if that content really violates the law, which in most cases it does not).

If their concerns are about profits, well, did Gephardt and Wamp spend any time reforming how capitalism works when they were lawmakers? Did they seek to change things so that the fiduciary duty of company boards wasn’t to deliver increasing returns every three months? Did they do anything to push for companies to be able to take a longer term view? Or to support stakeholders beyond investors?

No? Then, fellas, I think we found the problem. It’s you and other lawmakers who didn’t fix those problems, not Section 230.

That wasn’t the intent of Section 230. It was meant to protect companies acting as good Samaritans, ensuring that if a user posts harmful content and the platform makes a good-faith effort to moderate or remove it, the company can’t be held liable.

If you remove Section 230, they will have even less incentive to remove that content.

We still agree with that principle, but Big Tech is far from acting like the good Samaritan. The problem isn’t that there are eating disorder videos, dangerous conspiracy theories, hate speech, and lies on the platforms—it’s that the companies don’t make a good-faith effort to remove this content, and that their products are designed to actually amplify it, often intentionally targeting minors.

This is now reaching levels of active disinformation. Yes, companies do, in fact, seek to remove that content. It violates all sorts of policies, but (1) it’s not as easy as people think to actually deal with that content (because it’s way harder to identify than ignorant fools with no experience think it is) and (2) studies have shown that removing that content often makes problems like eating disorders worse rather than better (because it’s a demand-side problem, and users looking for that content will keep looking for it and find it in darker and darker places online, whereas when it’s on mainstream social media, those sites can provide better interventions and guide people to helpful resources).

If Gephardt and Wamp spoke to literally any actual experts on this, they could have been informed about the realities, nuances, and trade-offs here. But they didn’t. They appear to have surrounded themselves with moral panic nonsense peddlers.

They’re former Congressmen who assume they must know the right answer, which is “let’s run with a false moral panic!”

Of course, you had to know that this ridiculous essay wouldn’t be complete without a “fire in a crowded theater” line, so of course it has that:

There is also a common claim from Silicon Valley that regulating social media is a violation of free speech. But free speech, as courts have ruled time and time again, is not unconditional. You can’t yell “fire” in a crowded theater where there is no fire because the ensuing stampede would put people in real danger. But this is essentially what social media companies are letting users do by knowingly building products that spread disinformation like wildfire.

Yup. These two former lawmakers really went there, using the trope that immediately identifies you as ignorant of the First Amendment. There are a few limited classes of speech that are unprotected, but the Supreme Court has signaled loud and clear that it is not expanding the list. The “fire in a crowded theater” line was dicta in a case about locking up someone for protesting the draft (do Gephardt and Wamp think we should lock up people for protesting the draft?!?), and that case hasn’t been considered good law in over half a century.

Holding social media companies accountable for the amplification of harmful content—whether disinformation, conspiracy theories, or misogynistic messages—isn’t a violation of the First Amendment.

Yes, it literally is. I mean, there’s no two ways around it. All that content, with a very, very few possible exceptions, is protected under the First Amendment.

Even the platform X, formerly known as Twitter, agrees that we have freedom of speech, but not freedom of reach, meaning posts that violate the platform’s terms of service will be made “less discoverable.”

You absolute chuckleheads. The only reason sites can do “freedom of speech, but not freedom of reach” is because Section 230 allows them to moderate without fear of liability. If you remove that, you get less moderation.

In a lawsuit brought by the mother of a young girl who died after copying a “blackout challenge” that TikTok’s algorithm allegedly recommended to her, the Third Circuit Court of Appeals recently ruled that Section 230 does not protect TikTok from liability when the platform’s own design amplifies harmful content. This game-changing decision, if allowed to stand, could lead to a significant curtailing of Section 230’s shield. Traditional media companies are already held to these standards: They are liable for what they publish, even content like letters to the editor, which are written by everyday people.

First of all, that ruling is extremely unlikely to stand, because even many of Section 230’s vocal critics recognize that the reasoning there made no sense. But second, the court said that algorithmic recommendations are expressive. And the end result is that while such activity may not be immune under 230, it remains protected under the First Amendment, because the First Amendment protects expression.

This is why anyone who is going to criticize Section 230 absolutely has to understand how it intersects with the First Amendment. And anyone claiming that “you can’t shout fire in a crowded theater” is good law is so ignorant of the very basic concepts that it’s difficult to take them seriously.

If anything, Section 230 reforms could make platforms more pleasant for users; in the case of X, reforms could entice advertisers to come back after they fled in 2022-23 over backlash around hate speech. Getting rid of the vitriol could make space for creative and fact-based content to thrive.

I’m sorry, but are they claiming that “vitriol” is not protected under the First Amendment? Dick and Zach, buddies, pals, please have a seat. I have some unfortunate news for you that may make you sad.

But, don’t worry. Don’t blame me for it. It must be Section 230 making me make you sad when I tell you: vitriol is protected by the First Amendment.

The changes you suggest are not going to help advertisers come back to ExTwitter. Again, they will make things worse, because Elon is not going to want to deal with liability, so he will do even less moderation, since changing Section 230 would increase liability for the moderation choices he makes.

How can you not understand this?

But for now, these platforms are still filled with lies, extremism, and harmful content.

Which is protected by the First Amendment, and which won’t change if Section 230 is changed.

We know what it’s like to sit at the dinner table and watch our grandchildren, even those under ten years old, scroll mindlessly on their phones. We genuinely worry, every time they pick them up, what the devices are doing to them—and to all of us.

Which also has got nothing to do with Section 230 and won’t change no matter what you do to Section 230?

Also, um, have you tried… parenting?

This may really be the worst piece on Section 230 I have ever read. And I’ve gone through both Ted Cruz and Josh Hawley’s Section 230 proposals.

This entire piece misunderstands the problems, misunderstands the law, misunderstands the Constitution, lies about the causes, blames the wrong things, has no clear actual reform policy, and is completely ignorant of how the changes its authors seem to want would do more damage to the very things they claim need fixing.

It’s a stunning display of ignorant solutionism by ignorant fools. It’s the type of thing that could really only be pulled off by overconfident ex-Congresspeople with no actual understanding of the issues at play.

Filed Under: 1st amendment, content moderation, dick gephardt, disinformation, free speech, moral panic, section 230, social media, zach wamp

Twitter’s Pre-Musk Plans Mirrored Elon’s Vision—Until He Abandoned, Trashed Or Ignored Them

from the so-much-missed-opportunity dept

Today, the new book by NY Times reporters Kate Conger and Ryan Mac, Character Limit: How Elon Musk Destroyed Twitter, comes out. If you’re at all interested in what went down, I can’t recommend it enough. It’s a well-written, deeply researched book with all sorts of details about the lead-up to the acquisition, the acquisition itself, and the aftermath of Elon owning Twitter.

Even if you followed the story closely as it played out (as I did), the book is a worthwhile read in multiple ways. First, it’s pretty incredible to pull it all together in a single book. There was so much craziness happening every day that it’s sometimes difficult to take a step back and take in the larger picture. This book gives readers a chance to do just that.

But second, and more important, there are plenty of details broken by the book, some of which are mind-boggling. If you want to read a couple of parts that have been published, both the NY Times and Vanity Fair have run excerpts. The NY Times one covers Elon’s infatuation with “relaunching” Twitter Blue as a paid verification scheme a week after he took over. The Vanity Fair one looks at the actual closing of the deal and how chaotic it was, including Elon coming up $400 million short and demanding that Twitter just give him the money to cover the cost of closing the deal.

Both excerpts give you a sense of the kinds of amazing stories told in the book.

But as I read an advance copy of the book, two things stood out to me. The first was Elon’s near total lack of understanding of the concept of Chesterton’s Fence. The second was how much the old regime at Twitter was already trying to do almost everything that Elon claimed he wanted to do. But as soon as he took over, he was so sure (1) that the old regime were complete idiots and (2) that he could reason his way into solving social media, that he not only ignored what people were telling him, he actively assumed they were trying to sabotage him, and did away with anyone who could be helpful.

Elon rips out some fences

If you’re unaware of the concept of Chesterton’s Fence, it’s that you shouldn’t remove something (such as a fence) if you don’t understand why it was put there in the first place. Over and over in the book, we see Elon dismiss all sorts of ideas, policies, and systems that were in place at Twitter without even caring to find out why they were there. Often, he seems to assume things were done for the dumbest of all reasons, but never bothered to understand why they were actually done. Indeed, he so distrusted legacy Twitter employees that he assumed most were lying to him or trying to sabotage him.

It’s perhaps not surprising that he trusts his own instincts, even if that doesn’t make it smart. With both Tesla and SpaceX, Elon bucked the conventional wisdom and succeeded massively. In both cases, he did things that many people said were impossible. And if that happens to you twice and makes you the world’s wealthiest person, you can see how you might start assuming that whenever people suggest that something is a bad idea or impossible, you should trust your gut over what people are telling you.

But the point of Chesterton’s Fence is not that you should never do things differently or never remove policies or technology that is in place. The point is that you should understand why they’re there. Elon never bothers to take that tiny step, and it’s a big part of his downfall.

In Character Limit, we see that Elon has almost no actual intellectual curiosity about social media. He has no interest in understanding how Twitter worked or why certain decisions were made. Propped up by a circle of sycophants and yes-men, he assumes that the previous regime at Twitter must have been totally stupid, and therefore there is no reason to listen to anything they had to say.

It is stunning how in story after story in the book, Elon has zero interest in understanding why anything works the way it does. He is sure that his own instincts, which are clouded by his unique position on the platform with tens of millions of followers, represent everyone’s experience.

He’s quite sure that his own instincts can get him to the right answers. This includes thinking he could (1) double advertising revenue in a few years (when he’s actually driven away over 80% of it) and (2) eclipse even that erroneously predicted increased advertising revenue by getting millions of people to pay for verification. In actuality, as the book details, a tiny fraction of users are willing to pay, and it’s bringing in just a few million dollars per quarter, doing little to staunch the losses of billions of dollars in advertising that Elon personally drove away.

The stories in the book are jaw-dropping. People who try to explain reality to him are fired. The people who stick around quickly learn the only thing to do is to lie to him and massage his ego. And thus, the book is full of stories of Elon ripping out the important pillars of what had been Twitter and then being perplexed when nothing works properly anymore.

He seems even more shocked that tons of people don’t seem to love him for his blundering around.

Old Twitter was already planning on doing what Elon wanted, but way better

Perhaps this is somewhat related to the last point, but the book details multiple ways in which Parag Agrawal, who had just taken over from Jack Dorsey a few months earlier, was already looking to do nearly everything Elon publicly claimed he wanted to do with Twitter.

When Elon first announced the deal to buy Twitter, I suggested a few (unlikely, but possible) ways in which Elon could actually improve Twitter. First up was that by taking the company private, Elon could remove Twitter from the whims of activist investors who were more focused on the short-term than the long-term.

The book goes into great detail about how much activist investors created problems for both Dorsey and Agrawal, pre-Musk. Specifically, their revenue and user demands actually made it somewhat more difficult to put in place a long-term vision.

In my original post, I talked about continuing Twitter’s actual commitment to free speech, which meant fighting government attempts to censor information (not just when you disagreed with the political leaders).

But beyond that, there were things like further investing in and supporting Bluesky (see disclaimer)* and its ATProtocol. After all, Elon claimed that he wanted to “open source” the algorithm.

Moving to an open protocol like ATProtocol would have not just allowed the open sourcing of the recommendation algorithm, it would have opened up the ability for anyone to create their own algorithm, both for recommendations and for moderation. Instead, that’s all happening on the entirely independent Bluesky app, which really only exists because Elon threw away Twitter’s deal to work with Bluesky.

Furthermore, the book reveals that well before Elon came on the scene, Parag and other top execs at the company were working on something called Project Saturn, which was discussed a bit in Kurt Wagner’s earlier book on this topic, but which is explained in more detail here.

The book reveals that Parag very much agreed with Elon (and Jack) that expecting companies to constantly completely remove problematic content was not a very effective solution.

So they created a plan to basically rearchitect everything around “freedom of speech, not freedom of reach.” Ironically, this is the very same motto that Elon claimed to embrace soon after taking over the company (and after firing Parag).


But Parag and others at Twitter had already been working on a system to operationalize that very idea. The plan was to use different “levels” and “circles,” in which users who were following the rules would have their content eligible for varying degrees of promotion within the algorithm. The more you violated the site’s rules, the further out you would be pushed into the system’s outer layers or rings (which is where the Project Saturn name came from). This would lead to less “reach,” but also less need for Twitter to fully remove accounts or tweets.
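To make the “rings” idea concrete, here is a minimal sketch of how a demotion-based system like that could work. To be clear, this is my own illustration, not Twitter’s actual Project Saturn code: the ring names, violation thresholds, and multipliers below are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical reach tiers, loosely inspired by the "Saturn rings" idea:
# the more rule violations on record, the further out the ring, and the
# less the ranking algorithm amplifies that account's posts.
RINGS = [
    # (max_violations, amplification_multiplier, label)
    (0, 1.0, "core"),        # clean accounts: fully eligible for promotion
    (2, 0.5, "inner ring"),  # a few strikes: reduced amplification
    (5, 0.1, "outer ring"),  # repeated violations: barely surfaced
]
DEFAULT = (None, 0.0, "exosphere")  # chronic violators: visible only to followers


@dataclass
class Account:
    handle: str
    violations: int  # count of recent policy strikes


def reach_multiplier(account: Account) -> tuple[float, str]:
    """Map an account's violation count to a (multiplier, ring) pair."""
    for max_violations, multiplier, label in RINGS:
        if account.violations <= max_violations:
            return multiplier, label
    return DEFAULT[1], DEFAULT[2]


def ranking_score(base_relevance: float, account: Account) -> float:
    """Demote rather than delete: the post stays up, but its score shrinks."""
    multiplier, _ = reach_multiplier(account)
    return base_relevance * multiplier


if __name__ == "__main__":
    for acct in [Account("clean_user", 0), Account("edgy_user", 3), Account("serial_violator", 9)]:
        mult, ring = reach_multiplier(acct)
        print(f"{acct.handle}: ring={ring}, score={ranking_score(1.0, acct):.2f}")
```

The design choice the book describes is exactly this kind of trade-off: demotion instead of deletion, so “reach” becomes a ranking parameter that shrinks as violations pile up, rather than an all-or-nothing ban decision.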

It was a big rethinking of how social media could work and how it could support free speech. In reading about it in the book, it sounds like exactly what Elon said he wanted. A small team within Twitter, pushed by Parag’s vision, had been working on it since way before Elon purchased his shares and started the takeover process. According to the book, even as Elon caused such a mess in the summer of 2022 trying to back out of the deal, Parag kept pushing the team to move forward with the idea.

Once Elon took over, it appears that a few remaining people at the company tried to show him Project Saturn and explain to him how it would match the ideals he had talked about.

But Elon ignored them, tossed out all the work they had done on it, and just randomly started unbanning people he thought belonged back on the platform without any plan on how to deal with those users if they started causing problems (and driving away advertisers). He assumed that his new verification plan would solve both the revenue issues for the company and all moderation issues.

Even the idea that Twitter was too bloated with excess employees and a lack of vision seemed to be part of Agrawal’s plans. Before Elon had made his move, the book reveals that Agrawal had drawn up plans to lay off approximately 25% of the company and greatly streamline everything with a focus on building out certain lines of business and users. He did move to lay off many senior leaders as part of that streamlining, though it wasn’t as clearly explained at the time what the larger plan was. Elon’s effort to buy Twitter outright (and then back out of the deal) forced Agrawal to put the layoff plans on hold, out of a fear that Elon would view those layoffs as an attempt to sabotage the company.

It’s truly striking how much of what Elon claimed he wanted to do, Parag and his exec team were already doing. They were making things more open, transparent, and decentralized with Bluesky. They were decreasing the reliance on “takedowns” as a trust & safety mechanism with Saturn. They were betting big on “freedom of speech, not reach” with Saturn. They were fighting for actual free speech with legal actions around the globe. They were cutting employee bloat.

But the company was doing all of those things thoughtfully and deliberately, with a larger strategy behind it.

As the book details, Elon came in and not only tore down Chesterton Fences everywhere he could, he dismissed, ignored, or cut loose all of those other projects that would have taken him far along the path he claimed he wanted to go.

So, now he’s left with a site that has trouble functioning, has lost nearly all of its revenue, and is generally seen as a laughingstock closed system designed just to push Elon’s latest political partisan brain farts, rather than enabling the world’s conversation.

Of course, in the wake of all that destruction, things like Bluesky have sprung forth, entirely independent of Twitter, to put some of these ideas into practice. Just this weekend, Bluesky passed 10 million users, helped along by Elon’s (again) hamfisted fight with Brazil, which (like so many other things Elon does) may have a good reason at its core (fighting against secretive government demands), but was handled in the dumbest way possible.

If there’s one thing that is painfully clear throughout the book, it is that Elon was correct that there were all sorts of ways that Twitter could be more efficient, more open, and less strict in takedowns. But he handled each in the worst way possible and destroyed what potential there was for the site.

Later today on the podcast, I’ll have an interview with Kate Conger about the book and Elon where we talk some more about all of this.

* As I’ve said before, I’m now on the board of Bluesky, which wouldn’t have been necessary if Elon hadn’t immediately cut Bluesky free from Twitter upon taking over the company.

Filed Under: character limit, chesterton fences, content moderation, elon musk, free speech, kate conger, parag agrawal, project saturn, ryan mac, social media
Companies: bluesky, twitter, x

Ctrl-Alt-Speech: Blunder From Down Under

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Riana Pfefferkorn, a Policy Fellow at the Stanford Institute for Human-Centered AI. They cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: asio, australia, child safety, content moderation, first amendment, social media, utah
Companies: snap, tiktok

California Politicians Embarrass Themselves By Calling For ‘Warning Labels’ On Social Media

from the stop-the-moral-panic dept

Can we add a warning label to the First Amendment that says “Actually reading this can cause extreme embarrassment to grandstanding politicians”?

California Attorney General Rob Bonta has just lost two separate cases in the Ninth Circuit regarding social media laws he strongly supported, which the court said violated the First Amendment. You would think that maybe, just maybe, he’d take a step back, brush up on how the First Amendment works, and figure out why he keeps getting these fairly basic things so wrong, and why the laws he backs keep turning out to be unconstitutional.

Tragically, he’s not.

Just the fact that he flat out lied to the public and declared victory in one of the cases he lost should give you a sense of Bonta’s priorities in spitting on the First Amendment. But now he’s doubling down.

Earlier this week, he “called on Congress” to pass a law requiring “warning labels” on social media.

California Attorney General Rob Bonta today joined a bipartisan coalition of 42 attorneys general in sending a letter to Congress supporting the United States Surgeon General’s recent call for Congress to require a surgeon general’s warning on social media platforms. Young people are facing a mental health crisis fueled by social media. The attorneys general argue that by mandating a surgeon general’s warning on algorithm-driven social media platforms, Congress can address the growing crisis and protect future generations of Americans.

“Social media companies have continuously demonstrated an unwillingness to tackle the youth mental health crisis, instead looking to dig in deeper for the sake of profits,” said Attorney General Bonta. “Warning labels on social media are a clear and frank way to communicate the risks that social media engagement poses to young users. Just like we are certain of the risk of alcohol or cigarette use, we are certain of the mental health risks of social media use. I urge Congress to adopt this commonsense step that complements California’s work to protect our children and teens.”

The problem is that (1) this is unconstitutional, and (2) this is all nonsense. Yes, the Surgeon General called for this, but as we explained, he was also confused. His own report on the matter showed that for many kids social media is actually beneficial, and there remains no evidence that he could find that social media is inherently harmful. The actual research on this stuff continues to find no actual evidence of harm.

Study after study after study looks at this and comes up empty. At best, they find that kids who need real mental health support and aren’t getting it may turn to social media and spend excessive amounts of time there. But this is a small percentage of kids, who would be better served by getting the mental health support they need and deserve.

For most others, there is no evidence of any kind of harm. And, as one of the leading researchers in this field, Candice Odgers, has pointed out, demonizing a tool that many people like to use or are expected to use does real harm. It makes kids feel worse about themselves for doing a very natural thing: trying to communicate with friends and family.

This is a giant moral panic, no different than similar moral panics about chess, the waltz, novels, bicycles, radio, television, pinball, dungeons & dragons, rock music, and more.

And it’s making people like Bonta look incredibly foolish.

As for why it’s unconstitutional, it’s a form of compelled speech. Yes, certain kinds of mandated warning labels have been found to be okay, but only in cases when the science is absolute and incontrovertible. That’s for things like actual toxins that literally poison your bloodstream.

Speech is not that.

Mandated “warning labels” about speech are so obviously unconstitutional that it’s embarrassing. Indeed, the idea of mandatory health warnings on websites is so ridiculous that even the crazy Fifth Circuit thought they were a bridge too far for just adult content websites. Even in the case that Rob Bonta lost just last week, the court highlighted to him directly that you can’t just mandate websites add speech about content on their site.

Did he read that decision? Did he understand it? Or did he just decide he could ignore it because it was embarrassing to the moral panic he supports?

People keep telling me that Bonta is a smart, thoughtful lawyer, but over and over again he seems to have fallen under the sway of a ridiculous moral panic, against all evidence and against the Constitution he’s supposed to be protecting and upholding.

Even worse, this nonsense is “trickling down” elsewhere. San Mateo County, where I live, work, and pay taxes, just unanimously passed a resolution supporting Bonta’s call. The county is also home to Meta and YouTube, and to tons of social media company employees.

San Mateo County, home to tech giant Meta, urged Congress to pass legislation requiring social media companies to add labels to their platforms warning people about their potential to harm users’ mental health.

The Board of Supervisors unanimously passed a resolution Tuesday, the same day 42 state attorneys general, including California’s Rob Bonta, called on Congress to address the mental health risks associated with social media.

Given where they are, you’d think that the San Mateo County Board of Supervisors would… maybe actually talk to some experts first? Hell, my office is literally blocks away from the County offices. I’d be happy to walk the Supervisors through a presentation of all of the evidence, including those found in the Surgeon General’s report, the American Psychological Association’s report, one from the National Academies of Science, and a massive meta-study from the Journal of Pediatrics.

That evidence doesn’t show any causal connection to harm, and it suggests many other reasons for the teen mental health crisis today.

David Canepa, the San Mateo County Supervisor who pushed this resolution, also seems wholly unfamiliar with how the First Amendment works:

“All politics is local,” Canepa said. “For example, if there’s something racist or anti-Semitic, there needs to be a label. As the county board, we’re asking them to address this problem.”

Canepa’s offices are, again, right down the street from my own. I’d love to come by his office and have him play Moderator Mayhem or Trust & Safety Tycoon, and then see if he still feels that (1) companies aren’t trying (because they are) or (2) there’s some easy way to “label” such content.

This stuff is way more difficult than bumbling, ignorant, grandstanding politicians recognize. The government can no more mandate that social media platforms place warnings on their services than it could demand that newspapers refuse to cover politicians’ opponents in elections. The First Amendment means the government has to stay out of this stuff.

Perhaps Rob Bonta himself needs a warning label: “repeated exposure to my lack of understanding of the Constitution or the facts may cause severe eye-rolling.” Because it appears that Bonta’s misunderstanding of some First Amendment fundamentals is trickling down to local politicians as well.

Filed Under: 1st amendment, california, david canepa, moral panic, rob bonta, san mateo county, social media, vivek murthy, warning labels

Utah’s ‘Protect The Kids Online!’ Law Rejected By Court

from the utah-does-it-again dept

Over the last few years, politicians in Utah have been itching to pass terrible internet legislation. Some of you may forget that in the earlier part of the century, Utah became somewhat famous for passing absolutely terrible internet laws that the courts then had to clean up. In the last few years, it’s felt like other states have passed Utah, and maybe its lawmakers were getting a bit jealous about losing their “we pass batshit crazy unconstitutional internet laws” crown.

So, two years ago, they started pushing a new round of such laws. Even as they were slamming social media as dangerous and evil, Utah Governor Spencer Cox proudly signed the new law, streaming it on all the social media sites he insisted were dangerous. When Utah was sued by NetChoice, the state realized that the original law was going to get laughed out of court and they asked for a do-over, promising that they were going to repeal and replace the law with something better. The new law changed basically nothing, though, and an updated lawsuit (again by NetChoice) was filed.

The law required social media companies to engage in “age assurance” (which is just a friendlier name for age verification, but still a privacy nightmare) and then restrict access to certain types of content and features for “minor accounts.”

Cox also somewhat famously got into a fight on ExTwitter with First Amendment lawyer Ari Cohn. When Cohn pointed out that the law clearly violates the First Amendment, Cox insisted: “Can’t wait to fight this lawsuit. You are wrong and I’m excited to prove it.” When Cohn continued to point out the law’s flaws, Cox responded “See you in court.”

[Image: The Twitter exchange between Cohn and Cox, as described above, with Cox concluding “see you in court.”]

In case you’re wondering how the lawsuit is going, last night Ari got to post an update:

[Image: Ari Cohn quote-tweeting Cox’s “See you in court” tweet and saying “ope,” alongside a screenshot of the court’s conclusion enjoining the law as unconstitutional.]

The law is enjoined. The court found it to likely be unconstitutional, just as Ari and plenty of other First Amendment experts expected. This case has been a bit of a roller coaster, though. A month and a half ago, the court said that Section 230 preemption did not apply to the case. The analysis on that made no sense. As we just saw, a court in Texas threw out a very similar law and said that since it tried to limit how sites could moderate content, it was preempted by Section 230. But, for a bunch of dumb reasons, the judge here, Robert Shelby, argued that the law wasn’t actually trying to impact content moderation (even though it clearly was).

But, that was only part of the case. The latest ruling found that the law almost certainly violates the First Amendment anyway:

NetChoice’s argument is persuasive. As a preliminary matter, there is no dispute the Act implicates social media companies’ First Amendment rights. The speech at issue in this case— the speech social media companies engage in when they make decisions about how to construct and operate their platforms—is protected speech. The Supreme Court has long held that “[a]n entity ‘exercis[ing] editorial discretion in the selection and presentation’ of content is ‘engage[d] in speech activity’” protected by the First Amendment. And this July, in Moody v. NetChoice, LLC, the Court affirmed these First Amendment principles “do not go on leave when social media are involved.” Indeed, the Court reasoned that in “making millions of . . . decisions each day” about “what third-party speech to display and how to display it,” social media companies “produce their own distinctive compilations of expression.”

Furthermore, following on the Supreme Court’s ruling earlier this year in Moody about whether or not the entire law can be struck down on a “facial” challenge, the court says “yes” (this issue has recently limited similar rulings in Texas and California):

NetChoice has shown it is substantially likely to succeed on its claim the Act has “no constitutionally permissible application” because it imposes content-based restrictions on social media companies’ speech, such restrictions require Defendants to show the Act satisfies strict scrutiny, and Defendants have failed to do so.

Utah tries to argue that this law is not about speech and content, but rather about conduct and “structure,” as California did in challenges to its “kids code” law. The court is not buying it:

Defendants respond that the Definition contemplates a social media service’s “structure, not subject matter.” However, Defendants’ argument emphasizes the elements of the Central Coverage Definition that relate to “registering accounts, connecting accounts, [and] displaying user-generated content” while ignoring the “interact socially” requirement. And unlike the premises-based distinction at issue in City of Austin, the social interaction-based distinction does not appear designed to inform the application of otherwise content-neutral restrictions. It is a distinction that singles out social media companies based on the “social” subject matter “of the material [they] disseminate[].” Or as Defendants put it, companies offering services “where interactive, immersive, social interaction is the whole point.”

The court notes that Utah seems to misunderstand the issue, and finds the idea that this law is content neutral to be laughable:

Defendants also respond that the Central Coverage Definition is content neutral because it does not prevent “minor account holders and other users they connect with [from] discuss[ing] any topic they wish.” But in this respect, Defendants appear to misunderstand the essential nature of NetChoice’s position. The foundation of NetChoice’s First Amendment challenge is not that the Central Coverage Definition restricts minor social media users’ ability to, for example, share political opinions. Rather, the focus of NetChoice’s challenge is that the Central Coverage Definition restricts social media companies’ abilities to collage user-generated speech into their “own distinctive compilation[s] of expression.”

Moreover, because NetChoice has shown the Central Coverage Definition facially distinguishes between “social” speech and other forms of speech, it is substantially likely the Definition is content based and the court need not consider whether NetChoice has “point[ed] to any message with which the State has expressed disagreement through enactment of the Act.”

Given all that, strict scrutiny applies, and there’s no way this law passes strict scrutiny. The first prong of the test is whether or not there’s a compelling state interest in passing such a law. And even though it’s about the moral panic of kids on the internet, the court says there’s a higher bar here. Because we’ve done this before, with California trying to regulate video games, which the Supreme Court struck down thirteen years ago:

To satisfy this exacting standard, Defendants must “specifically identify an ‘actual problem’ in need of solving.” In Brown v. Entertainment Merchants Association, for example, the Supreme Court held California failed to demonstrate a compelling government interest in protecting minors from violent video games because it lacked evidence showing a causal “connection between exposure to violent video games and harmful effects on children.” Reviewing psychological studies California cited in defense of its position, the Court reasoned research “show[ed] at best some correlation between exposure to violent entertainment” and “real-world effects.” This “ambiguous proof” did not establish violent videogames were such a problem that it was appropriate for California to infringe on its citizens’ First Amendment rights. Likewise, the Court rejected the notion that California had a compelling interest in “aiding parental authority.” The Court reasoned the state’s assertion ran contrary to the “rule that ‘only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to [minors].’”

While there’s lots of screaming and yelling about how social media is bad for kids’ mental health, as we directly told Governor Cox, the evidence just doesn’t support the claim. The court seems to recognize that the claims are a lot of hot air as well. Indeed, Utah submitted the Surgeon General’s report as “proof,” which apparently they didn’t even read. As we noted, contrary to the media reporting on that report, it contained a very nuanced analysis that does not show any causal harms to kids from social media.

The judge absolutely noticed that.

First, though the court is sensitive to the mental health challenges many young people face, Defendants have not provided evidence establishing a clear, causal relationship between minors’ social media use and negative mental health impacts. It may very well be the case, as Defendants allege, that social media use is associated with serious mental health concerns including depression, anxiety, eating disorders, poor sleep, online harassment, low self-esteem, feelings of exclusion, and attention issues. But the record before the court contains only one report to that effect, and that report—a 2023 United States Surgeon General Advisory titled Social Media and Youth Mental Health—offers a much more nuanced view of the link between social media use and negative mental health impacts than that advanced by Defendants. For example, the Advisory affirms there are “ample indicators that social media can . . . have a profound risk of harm to the mental health and well-being of children and adolescents,” while emphasizing “robust independent safety analyses of the impact of social media on youth have not yet been conducted.” Likewise, the Advisory observes there is “broad agreement among the scientific community that social media has the potential to both benefit and harm children and adolescents,” depending on “their individual strengths and vulnerabilities, and . . . cultural, historical, and socio-economic factors.” The Advisory suggests social media can benefit minors by “providing positive community and connection with others who share identities, abilities, and interest,” “provid[ing] access to important information and creat[ing] a space for self-expression,” “promoting help-seeking behaviors[,] and serving as a gateway to initiating mental health care.”

The court is also not at all impressed by a declaration Utah provided by Jean Twenge, who is Jonathan Haidt’s partner-in-crime in pushing the baseless moral panic narrative about kids and social media.

Moreover, a review of Dr. Twenge’s Declaration suggests the majority of the reports she cites show only a correlative relationship between social media use and negative mental health impacts. Insofar as those reports support a causal relationship, Dr. Twenge’s Declaration suggests the nature of that relationship is limited to certain populations, such as teen girls, or certain mental health concerns, such as body image.

Then the court points out (thank you!) that kids have First Amendment rights too:

Second, Defendants’ position that the Act serves to protect uninformed minors from the “risks involved in providing personal information to social media companies and other users” ignores the basic First Amendment principle that “minors are entitled to a significant measure of First Amendment Protection.” The personal information a minor might choose to share on a social media service—the content they generate—is fundamentally their speech. And the Defendants may not justify an intrusion on the First Amendment rights of NetChoice’s members with, what amounts to, an intrusion on the constitutional rights of its members’ users…

Furthermore, Utah fails to meet the second prong of strict scrutiny, that the law be “narrowly tailored.” Because it’s not:

To begin, Defendants have not shown the Act is the least restrictive option for the State to accomplish its goals because they have not shown existing parental controls are an inadequate alternative to the Act. While Defendants present evidence suggesting parental controls are not in widespread use, their evidence does not establish parental tools are deficient. It only demonstrates parents are unaware of parental controls, do not know how to use parental controls, or simply do not care to use parental controls. Moreover, Defendants do not indicate the State has tried, or even considered, promoting “the diverse supervisory technologies that are widely available” as an alternative to the Act. The court is not unaware of young people’s technological prowess and potential to circumvent parental controls. But parents “control[] whether their minor children have access to Internet-connected devices in the first place,” and Defendants have not shown minors are so capable of evading parental controls that they are an insufficient alternative to the State infringing on protected speech.

Also, this:

Defendants do not offer any evidence that requiring social media companies to compel minors to push “play,” hit “next,” and log in for updates will meaningfully reduce the amount of time they spend on social media platforms. Nor do Defendants offer any evidence that these specific measures will alter the status quo to such an extent that mental health outcomes will improve and personal privacy risks will decrease

The court also points out that the law targets social media only, and not streaming or sports apps, but if it was truly harmful, then the law would have to target all of those other apps as well. Utah tried to claim that social media is somehow special and different than those other apps, but the judge notes that they provide no actual evidence in support of this claim.

But Defendants simply do not offer any evidence to support this distinction, and they only compare social media services to “entertainment services.” They do not account for the wider universe of platforms that utilize the features they take issue with, such as news sites and search engines. Accordingly, the Act’s regulatory scope “raises seriously doubts” about whether the Act actually advances the State’s purported interests.

The court also calls out that NetChoice member Dreamwidth, run by the trust & safety expert known best online as @rahaeli, proves how stupid and mistargeted this law is:

Finally, Defendants have not shown the Act is not seriously overinclusive, restricting more constitutionally protected speech than necessary to achieve the State’s goals. Specifically, Defendants have not identified why the Act’s scope is not constrained to social media platforms with significant populations of minor users, or social media platforms that use the addictive features fundamental to Defendants’ well-being and privacy concerns. NetChoice member Dreamwidth, “an open source social networking, content management, and personal publishing website,” provides a useful illustration of this disconnect. Although Dreamwidth fits the Central Coverage Definition’s concept of a “social media service,” Dreamwidth is distinguishable in form and purpose from the likes of traditional social media platforms—say, Facebook and X. Additionally, Dreamwidth does not actively promote its service to minors and does not use features such as seamless pagination and push notification.

The court then also notes that if the law went into effect, companies would face irreparable injury, given the potential fines in the law.

This harm is particularly concerning given the high cost of violating the Act—$2,500 per offense—and the State’s failure to promulgate administrative rules enabling social media companies to avail themselves of the Act’s safe harbor provision before it takes effect on October 1, 2024.

Some users also sued to block the law, but the court rejected that request: those plaintiffs have no clear redressable injury yet, and thus no standing to sue at this point. That could change once the law started being enforced, but thanks to the injunction in the NetChoice portion of the case, the law is not going into effect.

Utah will undoubtedly waste more taxpayer money and appeal the case. But, so far, these laws keep failing in court across the country. And that’s great to see. Kids have First Amendment rights too, and one day, our lawmakers should start to recognize that fact.

Filed Under: 1st amendment, age assurance, age verification, content moderation, kids, protect the children, robert shelby, social media, utah
Companies: netchoice

Aussie Gov’t: Age Verification Went From ‘Privacy Nightmare’ To Mandatory In A Year

from the topsy-turvy-down-under dept

Over the last few years, it’s felt like the age verification debate has gotten progressively stupider. People keep insisting that it must be necessary, and when others point out that there are serious privacy and security concerns that will likely make things worse, not better, we’re told that we have to do it anyway.

Let’s head down under for just one example. Almost exactly a year ago, the Australian government released a report on age verification, noting that the technology was simply a privacy and security nightmare. At the time, the government felt that mandating such a technology was too dangerous:

“It is clear from the roadmap at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness or implementation issues,” the government’s response to the roadmap said.

The technology must work effectively without circumvention, must be able to be applied to pornography hosted outside Australia, and not introduce the risk to personal information for adults who choose to access legal pornography, the government stated.

“The roadmap makes clear that a decision to mandate age assurance is not yet ready to be taken.”

That’s why we were a bit surprised earlier this year when the government announced a plan to run a pilot program for age verification. However, as we pointed out at the time, just hours after that announcement, it came out that a mandated verification database used for bars and clubs in Australia had been breached, exposing sensitive data on over 1 million people.

You would think that might make the government pause and think more deeply about this. But apparently that’s not the way they work down under. The government is now exploring plans to officially age-gate social media.

The federal government could soon have the power to ban children from social media platforms, promising legislation to impose an age limit before the next election.

But the government will not reveal any age limit for social media until a trial of age-verification technology is complete.

The article is full of extremely silly quotes:

Prime Minister Anthony Albanese said social media was taking children away from real-life experiences with friends and family.

“Parents are worried sick about this,” he said.

“We know they’re working without a map. No generation has faced this challenge before.

“The safety and mental and physical health of our young people is paramount.

“Parents want their kids off their phones and on the footy field. So do I.”

This is ridiculous on all sorts of levels. Many families stay in touch via social media, so taking kids away from it may actually cut off their ability to connect with “friends and family.”

Yes, there are cases where some kids cannot put down phones and where obvious issues must be dealt with, as we’ve discussed before. But the idea that this is a universal, across-the-board problem is nonsense.

Hell, a recent study found that more people appear to be heading into the great outdoors because they see it glorified on social media. Indeed, some now worry that people have become too focused on the outdoors precisely because social media glorifies it so heavily.

Again, there’s a lot of nuance in the research suggesting this is not a simple matter of “if we cut kids off of social media, they’ll spend more time outside.” Some kids use social media to build up their social lives, which can lead to more outdoor activity, while others don’t. Kids won’t magically go outdoors and play sports just because they lose access to social media.

Then you combine that with the fact that the Australian government knows that age verification is inherently unsafe, and this whole plan seems especially dangerous.

But, of course, politicians love to play into the latest moral panic.

South Australian Premier Peter Malinauskas said getting kids off social media required urgent leadership.

“The evidence shows early access to addictive social media is causing our kids harm,” he said.

“This is no different to cigarettes or alcohol. When a product or service hurts children, governments must act.”

Except it’s extremely different from cigarettes and alcohol, both of which are physically consumed and introduce literal toxins into the bloodstream. Social media is speech. Speech can influence people, but it isn’t inherently a toxin, nor inherently good or bad.

The statement that “addictive social media is causing our kids harm” is literally false. The evidence is way more nuanced, and there remain no studies showing an actual causal relationship here. As we’ve discussed at length (backed up by multiple studies), if anything the relationship may go the other way, with kids who are already dealing with mental health problems resorting to spending more time on social media because of failures by the government to provide resources to help.

In other words, this rush to ban social media for kids is, in effect, an attempt by government officials to cover up their own failures.

The government could be doing all sorts of things to actually help kids. It could invest in better digital literacy, teaching kids how to use the technology more appropriately. It could provide better mental health resources for people of all ages. It could provide more spaces and opportunities for kids to freely spend time outdoors. These are all good uses of government power that would actually tackle the issues officials claim matter here.

Surveilling kids, collecting private data on them that everyone knows will eventually leak, and then banning them from spaces that many, many kids say make their lives and mental health better seems unlikely to help.

Of course, it’s only at the very end of the article linked above that the reporters include a few quotes from academics pointing out that age verification could create privacy and security problems, and that such laws could backfire. But the article never even mentions that the claims made by politicians are also full of shit.

Filed Under: age verification, anthony albanese, australia, kids, mental health, moral panic, peter malinauskas, privacy, security, social media

How ‘Analog Privilege’ Spares Elites From The Downsides Of Flawed AI Decision-Making Systems

from the that’s-not-fair dept

We live in a world where there are often both analog and digital versions of a product. For example, we can buy books or ebooks, and choose to listen to music on vinyl or via streaming services. The fact that digital goods can be copied endlessly and perfectly, while analog ones can’t, has led some people to prefer the latter for what often amounts to little more than snobbery. But a fascinating Tech Policy Press article by Maroussia Lévesque points out that in the face of increased AI decision-making, there are very real advantages to going analog:

The idea of “analog privilege” describes how people at the apex of the social order secure manual overrides from ill-fitting, mass-produced AI products and services. Instead of dealing with one-size-fits-all AI systems, they mobilize their economic or social capital to get special personalized treatment. In the register of tailor-made clothes and ordering off menu, analog privilege spares elites from the reductive, deterministic and simplistic downsides of AI systems.

One example given by Lévesque concerns the use of AI by businesses to choose new employees, monitor them as they work, and pass judgement on their performance. As she points out:

Analog privilege begins before high-ranking employees even start working, at the hiring stage. Recruitment likely occurs through headhunting and personalized contacts, as opposed to applicant tracking systems automatically sorting out through resumes. The vast majority of people have to jump automated one-size-fits all hoops just to get into the door, whereas candidates for positions at the highest echelons are ushered through a discretionary and flexible process.

Another example in the article involves the whitelisting of material on social networks when it is posted by high-profile users. Masnick’s Impossibility Theorem pointed out five years ago that moderation at scale is impossible to do well. One response to that problem has been the increasing use of AI to make quick but often questionable judgements about what is and isn’t acceptable online. This, in its turn, has led to another kind of analog privilege:

In light of AI’s limitations, platforms give prominent users special analog treatment to avoid the mistakes of crude automated content moderation. Meta’s cross-check program adds up to four layers of human review for content violation detection to shield high-profile users from inadvertent enforcement. For a fraction of one percent of users, the platform dedicates special human reviewers to tailor moderation decisions to each individual context. Moreover, the content stays up pending review.
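
To make the asymmetry concrete, here is a minimal, purely hypothetical sketch of the kind of routing logic Lévesque is describing. This is not Meta’s actual cross-check implementation; the class names, threshold, and queue semantics are all invented for illustration. The point is simply that a listed account’s flagged post stays up and goes to human reviewers, while everyone else gets the automated decision immediately.

```python
# Hypothetical illustration only: a toy router showing how "cross-check"-style
# whitelisting changes what happens when an automated classifier flags a post.
# Names, thresholds, and queue semantics are invented for this sketch.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    author_id: str
    violation_score: float  # output of some automated classifier, 0.0-1.0

@dataclass
class ModerationRouter:
    cross_check_accounts: set                    # the tiny fraction of "listed" users
    removal_threshold: float = 0.9
    human_review_queue: List[Post] = field(default_factory=list)

    def handle(self, post: Post) -> str:
        if post.violation_score < self.removal_threshold:
            return "leave up"
        if post.author_id in self.cross_check_accounts:
            # Listed users: content stays up pending layered human review.
            self.human_review_queue.append(post)
            return "leave up pending human review"
        # Everyone else gets the automated decision applied immediately.
        return "remove"

router = ModerationRouter(cross_check_accounts={"celebrity_123"})
print(router.handle(Post("p1", "ordinary_user", 0.95)))   # -> remove
print(router.handle(Post("p2", "celebrity_123", 0.95)))   # -> leave up pending human review
```

The analog privilege lives entirely in that second branch: the same classifier score produces a different outcome depending on who posted.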

In terms of addressing analog privilege, wherever it may be found, Lévesque suggests that creating a “right to be an exception” might be a good start. But she also notes that implementing such a right in AI laws won’t be enough, and that the creators of AI systems need to improve “intelligibility so people subject to AI systems can actually understand and contest decisions.” More generally:

looking at analog privilege and the detrimental effects of AI systems side by side fosters a better understanding of AI’s social impacts. Zooming out from AI harms and contrasting them with corresponding analog privileges makes legible a subtle permutation of longstanding patterns of exceptionalism. More importantly, putting the spotlight on privilege provides a valuable opportunity to interrupt unearned advantages and replace them with equitable, reasoned approaches to determining who should be subject to or exempt from AI systems.

Well, we can dream, can’t we?

Follow me @glynmoody on Mastodon and on Bluesky.

Filed Under: ai, analog, decision making, digital, elites, exceptions, masnick's impossibility theorem, moderation, privilege, recruitment, social media
Companies: meta

Ctrl-Alt-Speech: I Bet You Think This Block Is About You

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Discord. In our Bonus Chat at the end of the episode, Mike speaks to Juliet Shen and Camille Francois about the Trust & Safety Tooling Consortium at Columbia School of International and Public Affairs, and the importance of open source tools for trust and safety.

Filed Under: child safety, content moderation, coppa, jim jordan, kosa, social media
Companies: google, tiktok, twitter, x

Ctrl-Alt-Speech: Live At TrustCon 2024

from the ctrl-alt-speech-LIVE! dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In the first ever live recording of Ctrl-Alt-Speech, Mike and Ben are joined at TrustCon 2024 by Dona Bellow, Platform Safety Policy at Reddit, and Alice Hunsberger, PartnerHero’s VP of Trust & Safety and Content Moderation, to round up the latest news in online speech, content moderation and internet regulation, including:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor TaskUs, a leading company in the T&S field which provides a range of platform integrity and digital safety solutions. In our Bonus Chat at the end of the episode, also recorded live at TrustCon, Mike sits down with Rachel Guevara, TaskUs Division Vice President of Trust and Safety, to talk about her impressions of this year’s conference and her thoughts on the future of trust and safety.

You can also watch the video of the recording on our YouTube channel.

Filed Under: content moderation, social media, trust and safety
Companies: meta

Ctrl-Alt-Speech: Conspiracies Abhor A Vacuum

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Concentrix, the technology and services leader driving trust, safety, and content moderation globally. In our Bonus Chat at the end of the episode, Paul Danter, Global Head of Trust and Safety at Concentrix, talks about what he’s excited for at next week’s TrustCon event and the huge potential for industry collaboration.

Filed Under: ai, artificial intelligence, content moderation, donald trump, elon musk, free speech, podcasts, social media
Companies: spotify, tiktok, twitter, x