content moderation – Techdirt

Big Tech’s Promise Never To Block Access To Politically Embarrassing Content Apparently Only Applies To Democrats

from the hypocrites?-in-big-tech?-how-could-it-be? dept

It probably will not shock you to find out that big tech’s promises to never again suppress embarrassing leaked content about a political figure came with a catch. Apparently, it only applies when that political figure is a Democrat. If it’s a Republican, then of course the content will be suppressed, and the GOP officials who demanded that big tech never ever again suppress such content will look the other way.

A week and a half ago, the Senate Intelligence Committee held a hearing about the threat of foreign intelligence efforts to interfere with US elections. Senator Tom Cotton, who believes in using the US military to suppress American protests, used the opportunity to berate Meta and Google for supposedly (but not really) “suppressing” the Hunter Biden laptop story:

In that session — which I feel the need to remind you was just held on September 18th — both Nick Clegg from Meta and Kent Walker from Google were made to promise that they would never, ever engage in anything like the suppression of the Hunter Biden laptop story (Walker noted that Google had taken no effort to do so when that happened in the first place).

Clegg explicitly said that a similar demotion “would not take place today.”

Take a wild guess where this is going?

Exactly one week and one day after that hearing, Ken Klippenstein released the Trump campaign’s internal vetting dossier on JD Vance. It’s pretty widely accepted that the document was obtained via hacking by Iranian agents and had been shopped around to US news sites for months. Klippenstein, who will do pretty much anything for attention, finally bit.

In response, Elon immediately banned Ken’s ExTwitter account and blocked any and all links to not just the document, but to Ken’s Substack. He went way further than anyone ever did regarding the original Hunter Biden laptop story and the content revealed from that laptop. We noted the irony of how the scenario is nearly identical to the Hunter Biden laptop story, but everyone wants to flip sides in their opinion of it.

Elon being a complete fucking hypocrite is hardly new. It’s almost to be expected. That combined with his public endorsement (and massive funding) of the Trump/Vance campaign means it’s noteworthy, but not surprising, that he’d do much more to seek to suppress the Vance dossier than old Twitter ever did about the Hunter laptop story.

So, what about Meta and Google? After all, literally a week earlier, top execs from each company said in a Senate hearing under oath that they would never seek to suppress similar content this year.

And yet…

That’s the link to the dossier on Threads with a message saying “This link can’t be opened from Threads. It might contain harmful content or be designed to steal personal information.”

Ah. And remember, while Twitter did restrict links to the NY Post article for about 24 hours, Meta never restricted the links. It only set it so that the Facebook algorithm wouldn’t promote the story until they checked and made sure it was legit. But here, they’re blocking all links to the Vance dossier on all their properties. When asked, a Meta spokesperson told the Verge:

“Our policies do not allow content from hacked sources or content leaked as part of a foreign government operation to influence US elections. We will be blocking such materials from being shared on our apps under our Community Standards.”

Yeah, but again, literally a week ago, Nick Clegg said under oath that they wouldn’t do this. The “hacked sources” policy was the excuse Twitter had used to block the NY Post story.

Does anyone realize how ridiculous all of this looks?

And remember how Zuckerberg was just saying he regrets “censoring” political content? Just last week, there was a big NY Times piece arguing, ridiculously, that Zuck was done with politics. Apparently it’s only Democrat-politics that he’s done with.

As for Google, well, Walker told Senator Cotton that the Biden laptop story didn’t meet their standards to have it blocked or removed. But apparently the Vance dossier does. NY Times reporter Aric Toler found that you can’t store the document in your Google Drive, saying it violates their policies against “personal and confidential information”:

As we’ve said over and over again, neither of these things should have been blocked. The NY Post story shouldn’t have been blocked, and the Vance dossier shouldn’t have been blocked. Yes, there are reasons to be concerned about foreign interference in elections, but if something is newsworthy, it’s newsworthy. It’s not for these companies to determine what’s newsworthy at all.

It was understandable why, in the fog of the Hunter Biden story’s release, both Twitter and Meta said “let’s pump the brakes and see…” But given how much attention has been paid to all of that since, including literally one week before this hearing, immediately moving to block the Vance dossier raises a ton of questions.

Of course, the hypocrisy will stand, because the GOP, which has spent years pointing to the Hunter Biden laptop story as their shining proof of “big tech bias” (even though it was nothing of the sort), will immediately, and without any hint of shame or acknowledgment, insist that of course the Vance dossier must be blocked and it’s ludicrous to think otherwise.

And thus, we see the real takeaway from all that working of the refs over the years: embarrassing stuff about Republicans must be suppressed, because it’s doxxing or hacking or foreign interference. However, embarrassing stuff about Democrats must be shared, because any attempt to block it is election interference.

Got it?

Filed Under: content moderation, hunter biden laptop, hypocrisy, jd vance, jd vance dossier, ken klippenstein, nick clegg, tom cotton
Companies: google, meta, twitter, x

Ctrl-Alt-Speech: Is This The Real Life? Is This Just Fakery?

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Cathryn Weems, who has held T&S roles at Yahoo, Google, Dropbox, Twitter and Epic Games. They cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Concentrix, the technology and services leader driving trust, safety, and content moderation globally. In our Bonus Chat at the end of the episode, clinical psychologist Dr Serra Pitts, who leads the psychological health team for Trust & Safety at Concentrix, talks to Ben about how to keep moderators healthy and safe at work and the innovative use of heart rate variability technology to monitor their physical response to harmful content.

Filed Under: ai, artificial intelligence, content moderation, disinformation, elon musk, misinformation
Companies: google, telegram, twitter, x

Quick Let’s Watch Everyone Flip Sides On Twitter’s Handling Of The Hunter Biden Laptop vs. The JD Vance Dossier

from the watch-the-rationalizations-fly dept

It’s been all of [checks calendar] one freaking day since we wrote about Elon Musk’s hypocrisy on free speech compared to the old Twitter regime, and he has to go and make another example.

Twitter, under old management: Briefly limits sharing of (at the time) unverified Hunter Biden laptop story. Elon: “Outrageous censorship!” and possibly a “First Amendment violation!”

ExTwitter, under Elon: Blocks links to leaked JD Vance dossier. Also Elon: “Most egregious doxxing ever!” Hmm…

As we’ve discussed for years now, very few people fully understand what happened four years ago with Twitter and the NY Post’s story about the content of Hunter Biden’s laptop. Two years ago, we pieced together what actually happened based on information from lawsuits, but also from what Elon released after taking over Twitter (though he did so misleadingly).

In short, Twitter had a very, very broad policy (too broad!) regarding “hacked materials.” We had criticized how that policy had been used to hide news reports before the whole Hunter Biden laptop story came out, warning that the policy was too broad and resulted in blocking legitimate news based on leaks.

At the same time, there were widespread (legitimate) concerns that foreign entities might engage in “hack and dump” efforts to leak critical information, as had happened in 2016. The folks who had access to the details of the laptop had shopped the contents around to multiple news sources who all refused to publish it, including Fox News. Eventually, the NY Post bit on the story, though even the main author of it was so unsure of the story he asked for his name to be taken off the byline. The actual content revealed in the story was… not really particularly interesting or revelatory.

Given the general concerns about amplifying a “hack and dump” campaign perhaps by a foreign adversary, and with no direct communication by the government, Twitter had a quick internal discussion. Then, they decided to limit access to the NY Post’s story under the “hacked materials” policy (as they had done before) until they knew more about the provenance of the laptop content. At that point, users were unable to share the link to just that story.

The internal leaks from the company showed that the decision makers inside the company struggled with how to deal with this, but politics did not come into play. Instead, they noted that given it “is an emerging situation where the facts remain unclear” and the risks, they decided to err on the side of caution and limit the distribution.

This did not actually limit interest in the article (hello Streisand Effect), which got way more traffic once Twitter made that decision.

Just one day later, Twitter admitted it had made a mistake, changed the policy, and again began allowing users to share that story.

Following that, there have been years of nonsense. This includes a firm (false) belief that Twitter actively tried to stifle the story for political reasons, that it blocked the story for months, that it knew the story was real, that the FBI and/or the non-existent Biden administration (remember Trump was the President at the time) had ordered Twitter to suppress the story.

An election interference lawsuit was filed… and rejected. There were Congressional investigations from Jim Jordan, which turned up nothing (but which he still spun as exposing conspiratorial actions).

But to many, including Elon Musk and many of his most vocal fans, it is taken as fact that old evil Twitter deliberately censored that story for political reasons, possibly changing the course of the 2020 election (even though literally none of that is accurate).

When his own company released the fact that the Biden campaign (not administration) asked Twitter if it might remove five tweets that showed Hunter Biden dick pics that were revealed as a part of the leak, Elon claimed that this story was a quintessential “violation of the Constitution’s First Amendment,” even as the tweets clearly violated Twitter’s policy against the sharing of non-consensual nude images.

Image

Indeed, many people cite that false narrative as a reason they’re happy that “free speech absolutist” Elon took over to make sure such a thing would never happen again.

Fast forward to yesterday…

Hold onto your hats, folks. This year, there are widespread (legitimate) concerns about foreign interference in the election including “hack and dump” efforts. Over the last month, there have been tons of stories regarding how Iran had hacked Trump officials, obtained a bunch of things, and shopped them around to a variety of media sources, who all refused to publish it.

Eventually, one dipshit decided to publish at least some of it: the Trump internal dossier on JD Vance. In this case, the dipshit was Ken Klippenstein, an independent reporter, known for his terrible reporting as well as his willingness to beg for attention on social media.

The actual content revealed in the story was… not really particularly interesting or revelatory. It’s a dossier of all the reasons why Vance might be a bad VP choice. There’s little that’s surprising in there.

So, the scenario has an awful lot of similarities to the Hunter Biden laptop story, right? Almost eerily so. But this time, Elon Musk is in charge, right? And so, obviously, he left this up, right? And he let people share it, right? Free speech absolutism, right? Right? Elon?

Hahaha, of course not.

Image

And if you try to share the link to Ken’s article? According to multiple people who have tried, it does not work. Here’s one screenshot of a few that I saw showing what happens if you try:

Image

You also can’t share the link via DMs.

Image

Another user on Twitter notes that their own account was temporarily suspended not even for tweeting out a link to the Vance dossier story, but for tweeting a link to Ken’s post about getting suspended!

Image

ExTwitter Safety claims Ken’s suspension is “temporary” (just like Twitter’s temporary limit on the NY Post — though in that case they didn’t suspend the account, as they did here). And the reason given is that the dossier supposedly revealed Vance’s physical addresses and “the majority of his Social Security number.”

Image

As opposed to, say, Hunter Biden’s dick pics.

That said, the link posted to ExTwitter did not, in fact, reveal the addresses or partial SSN. It linked to an article that Ken wrote about the dossier, which then did include a link to the file, but it’s still two clicks away from ExTwitter.

Ken points out that this particular info (Vance’s addresses and partial SSN) is widely available online or via data brokers. That still seems a bit iffy, and it feels like he could have easily redacted that info, but chose not to. There are plenty of cases that many people consider to be “doxxing” that are little more than getting info from a data broker.

Elon, though, is insisting that this was “one of the most egregious, evil doxxing actions we’ve ever seen.” Which is laughably untrue.

Image

And, of course, unlike the old Twitter regime, which made no public displays of support for presidential candidates, Elon has publicly endorsed Donald Trump, become one of the largest donors to his campaign, and turned ExTwitter into a non-stop pro-Trump promotional media site. So, unlike the old Twitter regime, Elon has made it clear that he absolutely wants to use the site to elect his preferred candidate and would have political reasons for trying to suppress this marginally embarrassing dossier.

So… is Jim Jordan going to launch an investigation and hold hearings, like he did about Twitter and the NY Post over Hunter Biden’s laptop? Is he going to haul Elon before Congress and demand he explain what happened? Will Elon release the “X-Files” revealing the internal discussions he and his employees had over banning Ken and blocking the sharing of the link?

Or nah?

Already we’re seeing Musk’s biggest fans trying to come up with justifications for how these stories are totally different. But they’re literally not. On basically all important details they’re effectively identical.

Again, I said at the time (and even before the Biden laptop story came out) that I thought Twitter’s policy was bad and they were wrong to temporarily block the sharing of the link. I also think that Elon is wrong to suspend Ken and block the sharing of the links as well.

But watch the rank hypocrisy fly. The old Twitter regime at least struggled with this decision internally (as later revealed by Elon himself), recognized that it was making a quick call based on imperfect information, and quickly reversed course and apologized.

Somehow, I doubt Elon’s going to do any of that.

Filed Under: content moderation, elon musk, hunter biden, hunter biden laptop, jd vance, ken klippenstein
Companies: twitter, x

Ex-Congressmen Pen The Most Ignorant, Incorrect, Confused, And Dangerous Attack On Section 230 I’ve Ever Seen

from the this-is-not-how-anything-works dept

In my time covering internet speech issues, I’ve seen some truly ridiculous arguments regarding Section 230. I even created my ever-handy “Hello! You’ve Been Referred Here Because You’re Wrong About Section 230 Of The Communications Decency Act” article four years ago, which still gets a ton of traffic to this day.

But I’m not sure I’ve come across a worse criticism of Section 230 than the one recently published by former House Majority Leader Dick Gephardt and former Congressional Rep. Zach Wamp. They put together the criticism for Democracy Journal, entitled “The Urgent Task of Reforming Section 230.”

There are lots of problems with the article, which we’ll get into. But first, I want to focus on the biggest, most brain-numbingly obvious problem, which is that they literally admit they don’t care about the solution:

People on both sides of the aisle want to reform Section 230, and there’s a range of ideas on how to do it. From narrowing its rules to sunsetting the provision entirely, dozens of bills have emerged offering different approaches. Some legislators argue that platforms should be liable for certain kinds of content—for example, health disinformation or terrorism propaganda. Others propose removing protections for advertisements or content provided by a recommendation algorithm. CRSM is currently bringing together tech, mental health, education, and policy experts to work on solutions. But the specifics are less important than the impact of the reform. We will support reform guided by commonsense priorities.

I have pointed out over and over again through the years that I am open to proposals on Section 230 reform, but the specifics are all that matter, because almost every proposal to date to “reform Section 230” does not understand Section 230 or (more importantly) how it interacts with the First Amendment.

So saying “well, any reform is what matters” isn’t just flabbergasting. It’s a sign of people who have never bothered to seriously sit with the challenges, trade-offs, and nuances of changing Section 230. The reality (as we’ve explained many times) is that changing Section 230 will almost certainly massively benefit some and massively harm others. Saying “meh, doesn’t matter, as long as we do it” suggests a near total disregard for the harm that any particular solution might do, and to whom.

Even worse, it disregards how nearly every solution proposed will actually cause real and significant harm to the people reformers insist they’re trying to protect. And that’s because they don’t care or don’t want to understand how these things actually work.

The rest of the piece only further cements the fact that Gephardt and Wamp have no experience with this issue and seem to simply think in extremely simplistic terms. They think that (1) “social media is kinda bad these days” (2) “Section 230 allows social media to be bad” and thus (3) “reforming Section 230 will make social media better.” All three of these statements are wrong.

Hilariously, the article starts off by name-checking Prof. Jeff Kosseff’s book about Section 230. However, it then becomes clear that neither former Congressman actually read the book, because doing so would have corrected many of the errors in the piece. Then they point out that both of them voted for CDA 230 and call it their “most regrettable” vote:

Law professor Jeff Kosseff calls it “the 26 words that created the internet.” Senator Ron Wyden, one of its co-authors, calls it “a sword and a shield” for online platforms. But we call it Section 230 of the 1996 Communications Decency Act, one of our most regrettable votes during our careers in Congress.

While that’s the title of Jeff’s book, he didn’t coin that phrase, so it’s even more evidence that they didn’t read it. Also, is that really such a “regrettable vote”? I see both of them voted for the Patriot Act. Wouldn’t that, maybe, be a bit more regrettable? Gephardt voted for the Crime Bill of 1994. I mean, come on.

Section 230 has enabled the internet to thrive, helped build out a strong US innovation industry online, and paved the way for more speech online. How is that worth “regretting”?

These two former politicians have to resort to rewriting history:

But the internet has changed dramatically since the 1990s, and the tech industry’s values have changed along with it. In 1996, Section 230 was protecting personal pages or small forums where users could talk about a shared hobby. Now, tech giants like Google, Meta, and X dominate all internet traffic, and both they and startups put a premium on growth. It is fundamental to their business model. They make money from advertising: Every new user means more profit. And to attract and maintain users, platforms rely on advanced algorithms that track our every online move, collecting data and curating feeds to our interests and demographics, with little regard for the reality that the most engaging content is often the most harmful.

When 230 was passed, it was in response to lawsuits involving two internet giants of the day (CompuServe, owned by accounting giant H&R Block at the time, and Prodigy, owned by IBM and Sears at the time), not some tiny startups. And yes, those companies also had advertisements and “put a premium on growth.” So it’s not clear why the authors of this piece think otherwise.

The claim that “the most engaging content is often the most harmful” has an implicit (obsolete) assumption. The assumption is that the companies Gephardt and Wamp are upset about optimize for “engagement.” While that may have been true over a decade ago when they first began experiments with algorithmic recommendations, most companies pretty quickly realized that optimizing on engagement alone was actually bad for business.

It frustrates users over time, drives away advertisers, and does not make for a successful long-term strategy. That’s why every major platform has moved away from algorithms that focus solely on engagement. Because they know it’s not a good long-term strategy. Yet Gephardt and Wamp are living in the past and think that algorithms are solely focused on engagement. They’re not because the market says that’s a bad idea.
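
To make the distinction concrete, here is a minimal, purely hypothetical sketch of “engagement-only” ranking versus the kind of blended objective platforms moved toward. The signal names and weights are invented for illustration; they are not any platform’s actual formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_clicks: float   # engagement signal
    predicted_replies: float  # engagement signal
    predicted_reports: float  # integrity signal (higher is worse)
    source_quality: float     # 0..1 estimate of source/content quality

def engagement_only_score(p: Post) -> float:
    # Optimizing on engagement alone: content that draws clicks and replies
    # ranks highly even if users report it constantly.
    return p.predicted_clicks + 2.0 * p.predicted_replies

def blended_score(p: Post) -> float:
    # The direction platforms moved: engagement discounted by signals that
    # the content frustrates users or drives away advertisers.
    engagement = p.predicted_clicks + 2.0 * p.predicted_replies
    return engagement * p.source_quality - 5.0 * p.predicted_reports

if __name__ == "__main__":
    rage_bait = Post(9.0, 6.0, 3.0, 0.2)
    useful_post = Post(4.0, 2.0, 0.1, 0.9)
    for name, post in (("rage bait", rage_bait), ("useful post", useful_post)):
        print(name, engagement_only_score(post), blended_score(post))
```

Under the engagement-only score, the rage-bait post wins easily; under the blended score, it sinks well below the useful post. That, in toy form, is why pure engagement optimization lost out.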

Just like Big Tobacco, Big Tech’s profits depend on an addictive product, which is marketed to our children to their detriment. Social media is fueling a national epidemic of loneliness, depression, and anxiety among teenagers. Around three out of five teenage girls say they have felt persistently sad or hopeless within the last year. And almost two out of three young adults either feel they have been harmed by social media themselves or know someone who feels that way. Our fellow members of the Council for Responsible Social Media (CRSM) at Issue One know the harms all too well: Some of them have lost children to suicide because of social media. And as Facebook whistleblower Frances Haugen, another CRSM member, exposed, even when social media executives have hard evidence that their company’s algorithms are contributing to this tragedy, they won’t do anything about it—unless they are forced to change their behavior.

Where to begin on this nonsense? No, social media is not “addictive” like tobacco. Tobacco is a thing that includes nicotine, which is a physical substance that goes into your body and creates an addictive response in your bloodstream. Some speech online… is not that.

And, no, the internet is not “fueling a national epidemic of loneliness, depression, and anxiety among teenagers.” This has been debunked repeatedly. The studies do not support this. As for the stat that “three out of five teenage girls say they have felt persistently sad or hopeless” well… maybe there are some other reasons for that which are not social media? Maybe we’re living through a time of upheaval and nonsense where things like climate change are a major concern? And our leaders in Congress (like the authors of the piece I’m writing about) are doing fuck all to deal with it?

Maybe?

But, no, it couldn’t be that our elected officials dicked around and did nothing useful for decades and fucked the planet.

Must be social media!

Also, they’re flat out lying about what Haugen found. She found that the company was studying those issues to figure out how to fix them. The whole point of the study that everyone keeps pointing to was because there was a team at Facebook that was trying to figure out if the site was leading to bad outcomes among kids in order to try to fix it.

Almost everything written by Gephardt and Wamp in this piece is active misinformation.

It’s not just our children. Our very democracy is at stake. Algorithms routinely promote extreme content, including disinformation, that is meant to sow distrust, create division, and undermine American democracy. And it works: An alarming 73 percent of election officials report an increase in threats in recent years, state legislatures across the country have introduced hundreds of harmful bills to restrict voting, about half of Americans believe at least one conspiracy theory, and violence linked to conspiracy theories is on the rise. We’re in danger of creating a generation of youth who are polarized, politically apathetic, and unable to tell what’s real from what’s fake online.

Blaming all of the above on Section 230 is literal disinformation. To claim that somehow what’s described here is 230’s fault is so disconnected from reality as to raise serious questions about the ability of the authors of the piece to do basic reasoning.

First, nearly all disinformation is protected by the First Amendment, not Section 230. Are Gephardt and Wamp asking to repeal the First Amendment? Second, threats towards election officials are definitely not a Section 230 issue.

But, sure, okay, let’s take them at their word that they think Section 230 is the problem and “reform” is needed. I know they say they don’t care what the reform is, just that it happens, but let’s walk through some hypotheticals.

Let’s start with an outright repeal. Will that make the US less polarized and stop disinformation? Of course not. It would make it worse! Because Section 230 gives platforms the freedom to moderate their sites as they see fit, utilizing their own editorial discretion without fear of liability.

Remove that, and you get companies who are less able to remove disinformation because the risk of a legal fight increases. So any lawyer would tell company leadership to minimize their efforts to cut down on disinformation.

Okay, some people say, “maybe just change the law so that ‘you’re now liable for anything on your site.’” Well, okay, but now you have a very big First Amendment problem and, again, you get worse results. Because existing case law on the First Amendment from the Supreme Court on down says that you can’t be liable for distributing content if you don’t know it violates the law.

So, again, our hypothetical lawyers in this hypothetical world will say, “okay, do everything to avoid knowledge.” That will mean less reviewing of content, less moderation.

Or, alternatively, you get massive over-moderation to limit the risk of liability. Perhaps that’s what Gephardt and Wamp really want: no more freedom for the filthy public to ever speak. Maybe all speaking should only occur on heavily limited TV. Maybe we go back to the days before civil rights were a thing, and it was just white men on TV telling us how everyone should live?

This is the problem. Gephardt and Wamp are upset about some vague things they claim are caused by social media, and only due to Section 230. They believe that some vague amorphous reform will fix it.

Except all of that is wrong. The problems they’re discussing are broader, societal-level problems that these two former politicians failed to do anything about when they were in power. Now they are blaming people exercising their own free speech for these problems, and demanding that we change some unrelated law to… what…? Make themselves feel better?

This is not how you solve problems.

In short, Big Tech is putting profits over people. Throughout our careers, we have both supported businesses large and small, and we believe in their right to succeed. But they can’t be allowed to avoid responsibility by thwarting regulation of a harmful product. No other industry works like this. After a door panel flew off a Boeing plane mid-flight in January, the Federal Aviation Administration grounded all similar planes and launched an investigation into their safety. But every time someone tries to hold social media companies accountable for the dangerous design of their products, they hide behind Section 230, using it as a get-out-of-jail-free card.

Again, airplanes are not speech. Just like tobacco is not speech. These guys are terrible at analogies. And yes, every other industry that involves speech does work like this. The First Amendment protects nearly all the speech these guys are complaining about.

Section 230 has never been a “get out of jail” card. This is a lazy trope spread by people who never have bothered to understand Section 230. Section 230 only says that the liability for violative content on an internet service goes to whoever created the content. That’s it. There’s no “get out of jail free.” Whoever creates the violative content can still go to jail (if that content really violates the law, which in most cases it does not).

If their concerns are about profits, well, did Gephardt and Wamp spend any time reforming how capitalism works when they were lawmakers? Did they seek to change things so that the fiduciary duty of company boards wasn’t to deliver increasing returns every three months? Did they do anything to push for companies to be able to take a longer term view? Or to support stakeholders beyond investors?

No? Then, fellas, I think we found the problem. It’s you and other lawmakers who didn’t fix those problems, not Section 230.

That wasn’t the intent of Section 230. It was meant to protect companies acting as good Samaritans, ensuring that if a user posts harmful content and the platform makes a good-faith effort to moderate or remove it, the company can’t be held liable.

If you remove Section 230, they will have even less incentive to remove that content.

We still agree with that principle, but Big Tech is far from acting like the good Samaritan. The problem isn’t that there are eating disorder videos, dangerous conspiracy theories, hate speech, and lies on the platforms—it’s that the companies don’t make a good-faith effort to remove this content, and that their products are designed to actually amplify it, often intentionally targeting minors.

This is now reaching levels of active disinformation. Yes, companies do, in fact, seek to remove that content. It violates all sorts of policies, but (1) it’s not as easy as people think to actually deal with that content (because it’s way harder to identify than ignorant fools with no experience think it is) and (2) studies have shown that removing that content often makes problems like eating disorders worse rather than better (because it’s a demand-side problem, and users looking for that content will keep looking for it and find it in darker and darker places online, whereas when it’s on mainstream social media, those sites can provide better interventions and guide people to helpful resources).

If Gephardt and Wamp spoke to literally any actual experts on this, they could have been informed about the realities, nuances, and trade-offs here. But they didn’t. They appear to have surrounded themselves with moral panic nonsense peddlers.

They’re former Congressmen who assume they must know the right answer, which is “let’s run with a false moral panic!”

Of course, you had to know that this ridiculous essay wouldn’t be complete without a “fire in a crowded theater” line, so of course it has that:

There is also a common claim from Silicon Valley that regulating social media is a violation of free speech. But free speech, as courts have ruled time and time again, is not unconditional. You can’t yell “fire” in a crowded theater where there is no fire because the ensuing stampede would put people in real danger. But this is essentially what social media companies are letting users do by knowingly building products that spread disinformation like wildfire.

Yup. These two former lawmakers really went there, using the trope that immediately identifies you as ignorant of the First Amendment. There are a few limited classes of speech that are unprotected, but the Supreme Court has signaled loud and clear that it is not expanding the list. The “fire in a crowded theater” line was dicta in a case about locking up someone for protesting the draft (do Gephardt and Wamp think we should lock up people for protesting the draft?!?), and that case hasn’t been considered good law in over five decades.

Holding social media companies accountable for the amplification of harmful content—whether disinformation, conspiracy theories, or misogynistic messages—isn’t a violation of the First Amendment.

Yes, it literally is. I mean, there’s no two ways around it. All that content, with a very, very few possible exceptions, is protected under the First Amendment.

Even the platform X, formerly known as Twitter, agrees that we have freedom of speech, but not freedom of reach, meaning posts that violate the platform’s terms of service will be made “less discoverable.”

You absolute chuckleheads. The only reason sites can do “freedom of speech, but not freedom of reach” is because Section 230 allows them to moderate without fear of liability. If you remove that, you get less moderation.

In a lawsuit brought by the mother of a young girl who died after copying a “blackout challenge” that TikTok’s algorithm allegedly recommended to her, the Third Circuit Court of Appeals recently ruled that Section 230 does not protect TikTok from liability when the platform’s own design amplifies harmful content. This game-changing decision, if allowed to stand, could lead to a significant curtailing of Section 230’s shield. Traditional media companies are already held to these standards: They are liable for what they publish, even content like letters to the editor, which are written by everyday people.

First of all, that ruling is extremely unlikely to stand, because even many of Section 230’s vocal critics recognize that the reasoning there made no sense. But second, the court said that algorithmic recommendations are expressive. And the end result is that, while such a recommendation may not be immune under Section 230, it remains protected by the First Amendment, because the First Amendment protects expression.

This is why anyone who is going to criticize Section 230 absolutely has to understand how it intersects with the First Amendment. And anyone claiming that “you can’t shout fire in a crowded theater” is good law is so ignorant of the very basic concepts that it’s difficult to take them seriously.

If anything, Section 230 reforms could make platforms more pleasant for users; in the case of X, reforms could entice advertisers to come back after they fled in 2022-23 over backlash around hate speech. Getting rid of the vitriol could make space for creative and fact-based content to thrive.

I’m sorry, but are they claiming that “vitriol” is not protected under the First Amendment? Dick and Zach, buddies, pals, please have a seat. I have some unfortunate news for you that may make you sad.

But, don’t worry. Don’t blame me for it. It must be Section 230 making me make you sad when I tell you: vitriol is protected by the First Amendment.

The changes you suggest are not going to help advertisers come back to ExTwitter. Again, they will make things worse: Elon is not going to want to deal with liability, so he will do even less moderation, because changes to Section 230 would increase liability for the moderation choices a platform makes.

How can you not understand this?

But for now, these platforms are still filled with lies, extremism, and harmful content.

Which is protected by the First Amendment, and which won’t change if Section 230 is changed.

We know what it’s like to sit at the dinner table and watch our grandchildren, even those under ten years old, scroll mindlessly on their phones. We genuinely worry, every time they pick them up, what the devices are doing to them—and to all of us.

Which also has got nothing to do with Section 230 and won’t change no matter what you do to Section 230?

Also, um, have you tried… parenting?

This may really be the worst piece on Section 230 I have ever read. And I’ve gone through both Ted Cruz and Josh Hawley’s Section 230 proposals.

This entire piece misunderstands the problems, misunderstands the law, misunderstands the constitution, then lies about the causes, blames the wrong things, has no clear actual reform policy, and is completely ignorant of how the changes they seem to want would do more damage to the very things they’re claiming need fixing.

It’s a stunning display of ignorant solutionism by ignorant fools. It’s the type of thing that could really only be pulled off by overconfident ex-Congresspeople with no actual understanding of the issues at play.

Filed Under: 1st amendment, content moderation, dick gephardt, disinformation, free speech, moral panic, section 230, social media, zach wamp

Ctrl-Alt-Speech: Smells Like Teen Safety

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: ai, artificial intelligence, chatbots, child safety, content moderation, teen safety, thierry breton
Companies: instagram, meta, socialai

MAGA World’s Belief In Their Made Up Claim That Biden Is ‘Censoring’ Conservatives On Social Media May Kill KOSA

from the well,-if-that's-what-it-takes dept

MAGA world’s false belief that Joe Biden is “censoring conservatives” on social media may actually kill the Kids Online Safety Act (KOSA). As we mentioned earlier this week, while KOSA has already passed the Senate and advanced in a different form out of the House Energy and Commerce Committee, there were still big concerns among House leadership that likely prevented the bill from moving forward.

Some of those concerns were legit and some were not. It appears that House Leadership is leaning in on the concerns based on a myth that they made up and apparently now believe to be true.

House Majority Leader Steve Scalise made it clear that House leadership has some problems with the bill, in an interview with the Washington Times:

Mr. Scalise said there would not be action on the legislation before the Nov. 5 election and declined to predict whether it could advance later this year before the current Congress ends. He said he’s provided feedback to Energy and Commerce members leading the bills, and “everybody’s going to keep working,” but the concerns raised by various ideological GOP caucuses are “important to note.”

Among the outstanding concerns is that the bills, particularly KOSA, give too much power to the executive branch to regulate online content.

“You want to protect kids, but you don’t want to give more ability to the Biden administration to censor conservatives. And unfortunately, they’ve abused these powers in the past,” Mr. Scalise said. “And so you got to narrow it. You got to focus it just on kids.”

This is somewhat hilarious and stupid. Yes, KOSA could be used for censorship, which is why we’ve spent years calling out its many flaws. But the claim that the “Biden administration” has “abused these powers in the past” to “censor conservatives” is a myth. It’s a myth made up by the MAGA world.

We’ve gone over this before. Multiple studies have found no evidence to support the claims that social media companies engaged in politically biased content removals. Indeed, many of the studies have found that sites actually adjusted the rules to give Trump supporters more leeway in breaking the rules to avoid even the false appearance of bias.

Then there are the false claims that the Biden administration, in particular, engaged in censorship of conservatives. But that’s made-up fantasyland nonsense based on a misunderstanding of reality. It is true that the administration requested that social media companies do a better job dealing with COVID and election misinformation. However, the companies basically all either pushed back on those requests or ignored them entirely.

As the Supreme Court made clear in its Murthy ruling, there’s a huge difference between using the bully pulpit of the Presidency to encourage certain activities (perfectly legal and expected) and illegally coercing speech suppression (which would violate the First Amendment). The Supreme Court noted that the lower courts had mixed those things up, as had the plaintiffs in that case.

As the majority of the Court noted, all of the moderation scenarios presented in the lawsuit seemed perfectly normal content moderation decisions that platforms always make, exercising their own editorial discretion. The scenarios showed no signs of interference or coercion from the administration.

We reject this overly broad assertion. As already discussed, the platforms moderated similar content long before any of the Government defendants engaged in the challenged conduct. In fact, the platforms, acting independently, had strengthened their pre-existing content-moderation policies before the Government defendants got involved….

This evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment.

Yet MAGA world still wants to insist that this myth is true. They made the myth up out of whole cloth, based on cluelessness about how trust & safety actually works.

And now that myth might kill KOSA. Yay?

To be clear, there are all sorts of reasons that KOSA should go away. It includes problematic censorial powers that could be abused by any administration seeking to remove content for ideological reasons. And there are principled reasons why Republicans should reject KOSA. Senator Rand Paul recently laid out a compelling argument for why KOSA is bad that had nothing to do with culture war nonsense or made up fairy tales.

But here, it appears that the GOP’s leadership may have played themselves into making the right call for the right underlying reasons (the censorship powers), but based on a near total misunderstanding of how the world actually works.

Filed Under: 1st amendment, anti-conservative bias, censorship, content moderation, kosa, murthy v. missouri, steve scalise

Elon Rehires Lawyers In Brazil, Removes Accounts He Insisted He Wouldn’t Remove

from the was-there-no-strategy? dept

Elon Musk fought the Brazilian law, and it looks like the Brazilian law won.

After making a big show of how he was supposedly standing up for free speech, Elon caved yet again. Just as happened back in April when he first refused to comply with court orders from Supreme Court Justice Alexandre de Moraes, the Brazilian news org Folha reports that ExTwitter has (1) rehired a law firm in Brazil (though hasn’t yet designated a “legal representative” for the purpose of being a potential hostage) and (2) begun taking down accounts that it was ordered to remove (translated via Google Translate):

X (formerly Twitter) began complying with court orders from the Federal Supreme Court (STF) on Wednesday night (18) and took down accounts that Minister Alexandre de Moraes ordered to be suspended.

This week, the company rehired the Pinheiro Neto law firm to represent it before the Court. The firm had been dismissed last week. The STF says it will only recognize the new lawyers after X appoints a legal representative in the country.

This all comes right after the mess where ExTwitter switched its CDN provider, leading to the site briefly becoming available again in Brazil. According to Bloomberg, de Moraes appeared none too pleased about this and ordered another fine on the company:

Supreme Court Justice Alexandre de Moraes, who has been sparring with Musk for months, ordered a daily fine of 5 million reais ($922,250) against the social media site and accused it of attempting to “disobey” judicial orders.

An order published Thursday instructs the nation’s telecommunications regulator, Anatel, to ban X access through network providers such as Cloudflare, Fastly and EdgeUno, which were “created to circumvent the judicial decision to block the platform in national territory.”

From everything I’ve heard, it really does appear that the Cloudflare thing was unintentional and just happened because ExTwitter was in the process of moving from Fastly to Cloudflare for CDN services. This was for a variety of reasons and not to avoid the ban in Brazil. ExTwitter put out a statement saying it was unintentional as well:

“When X was shut down in Brazil, our infrastructure to provide service to Latin America was no longer accessible to our team. To continue providing optimal service to our users, we changed network providers. This change resulted in an inadvertent and temporary service restoration to Brazilian users.

While we expect the platform to be inaccessible again shortly, we continue efforts to work with the Brazilian government to return very soon for the people of Brazil.”

You can say that the company is lying, but that wouldn’t make much sense. Elon has had zero problems antagonizing and attacking de Moraes and the Brazilian government, so it wouldn’t make sense for him to lie about this. Especially if it is true that they had already begun the process of rehiring the law firm and banning some accounts.

Cloudflare quickly announced that it would segregate ExTwitter and make sure Brazilian traffic didn’t reach it. Anyone would have had to know this was the likely result if it really was intentional.
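
For those wondering how a CDN can do that, the mechanism itself is simple: the edge checks what country a request comes from and refuses to proxy it to the origin. Here is a minimal sketch of that general idea; the function, country list, and responses are assumptions for illustration, not Cloudflare’s actual API or configuration.

```python
# Purely illustrative: country-level blocking applied at a CDN edge.
# Everything here (names, responses, country list) is a hypothetical sketch.

BLOCKED_COUNTRIES = {"BR"}  # hypothetical: requests from Brazil are refused

def handle_edge_request(country_code: str, path: str) -> tuple[int, str]:
    """Return an (HTTP status, body) pair for a request hitting the edge."""
    if country_code.upper() in BLOCKED_COUNTRIES:
        # Refused at the edge: the request never reaches the origin servers,
        # no matter which CDN or network provider sits in front of them.
        return 451, "Unavailable for legal reasons."
    return 200, f"proxied {path} to origin"

if __name__ == "__main__":
    print(handle_edge_request("BR", "/home"))  # (451, 'Unavailable for legal reasons.')
    print(handle_edge_request("US", "/home"))  # (200, 'proxied /home to origin')
```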

So, all of this sounds like Elon realizing that his “oh, look at me, I’m a free speech absolutist” schtick cost ExTwitter a large chunk of its userbase, and now he’s back to playing ball again. Because, like so much that he’s done since taking over Twitter, he had no actual plan for dealing with these kinds of demands from countries.

Filed Under: alexandre de moraes, brazil, content moderation, elon musk, hostage employees, legal representative, takedowns
Companies: twitter, x

Twitter’s Pre-Musk Plans Mirrored Elon’s Vision—Until He Abandoned, Trashed Or Ignored Them

from the so-much-missed-opportunity dept

Today, the new book by NY Times reporters Kate Conger and Ryan Mac, _Character Limit: How Elon Musk Destroyed Twitter_, comes out. If you’re at all interested in what went down, I can’t recommend it enough. It’s a well-written, deeply researched book with all sorts of details about the lead-up to the acquisition, the acquisition itself, and the aftermath of Elon owning Twitter.

Even if you followed the story closely as it played out (as I did), the book is a worthwhile read in multiple ways. First, it’s pretty incredible to pull it all together in a single book. There was so much craziness happening every day that it’s sometimes difficult to take a step back and take in the larger picture. This book gives readers a chance to do just that.

But second, and more important, there are plenty of details broken by the book, some of which are mind-boggling. If you want to read a couple of parts that have been published, both the NY Times and Vanity Fair have run excerpts. The NY Times one covers Elon’s infatuation with “relaunching” Twitter Blue as a paid verification scheme a week after he took over. The Vanity Fair one looks at the actual closing of the deal and how chaotic it was, including Elon coming up $400 million short and demanding that Twitter just give him the money to cover the cost of closing the deal.

Both excerpts give you a sense of the kinds of amazing stories told in the book.

But as I read an advance copy of the book, two things stood out to me. The first was Elon’s near total lack of understanding of the concept of Chesterton’s Fence. The second was how much the old regime at Twitter was already trying to do almost everything that Elon claimed he wanted to do. But as soon as he took over, he was so sure (1) that the old regime were complete idiots and (2) that he could reason his way into solving social media, that he not only ignored what people were telling him, he actively assumed they were trying to sabotage him, and did away with anyone who could be helpful.

Elon rips out some fences

If you’re unaware of the concept of Chesterton’s Fence, it’s that you shouldn’t remove something (such as a fence) if you don’t understand why it was put there in the first place. Over and over in the book, we see Elon dismiss all sorts of ideas, policies, and systems that were in place at Twitter without even caring to find out why they were there. Often, he seems to assume things were done for the dumbest of all reasons, but never bothered to understand why they were actually done. Indeed, he so distrusted legacy Twitter employees that he assumed most were lying to him or trying to sabotage him.

It’s perhaps not that surprising to see why he would trust his own instincts, not that it makes it smart. With both Tesla and SpaceX, Elon bucked the conventional wisdom and succeeded massively. In both cases, he did things that many people said were impossible. And if that happens to you twice and makes you the world’s wealthiest person, you can see how you might start assuming that whenever people suggest that something is a bad idea or impossible, you should trust your gut over what people are telling you.

But the point of Chesterton’s Fence is not that you should never do things differently or never remove policies or technology that is in place. The point is that you should understand why they’re there. Elon never bothers to take that tiny step, and it’s a big part of his downfall.

In Character Limit, we see that Elon has almost no actual intellectual curiosity about social media. He has no interest in understanding how Twitter worked or why certain decisions were made. Propped up by a circle of sycophants and yes-men, he assumes that the previous regime at Twitter must have been totally stupid, and therefore there is no reason to listen to anything they had to say.

It is stunning how in story after story in the book, Elon has zero interest in understanding why anything works the way it does. He is sure that his own instincts, which are clouded by his unique position on the platform with tens of millions of followers, represent everyone’s experience.

He’s quite sure that his own instincts can get him to the right answers. This includes thinking he could (1) double advertising revenue in a few years (when he’s actually driven away over 80% of it) and (2) eclipse even that erroneously predicted increased advertising revenue by getting millions of people to pay for verification. In actuality, as the book details, a tiny fraction of users are willing to pay, and it’s bringing in just a few million dollars per quarter, doing little to staunch the losses of billions of dollars in advertising that Elon personally drove away.

The stories in the book are jaw-dropping. People who try to explain reality to him are fired. The people who stick around quickly learn the only thing to do is to lie to him and massage his ego. And thus, the book is full of stories of Elon ripping out the important pillars of what had been Twitter and then being perplexed when nothing works properly anymore.

He seems even more shocked that tons of people don’t seem to love him for his blundering around.

Old Twitter was already planning on doing what Elon wanted, but way better

Perhaps this is somewhat related to the last point, but the book details multiple ways in which Parag Agrawal, who had just taken over from Jack Dorsey a few months earlier, was already looking to do nearly everything Elon publicly claimed he wanted to do with Twitter.

When Elon first announced the deal to buy Twitter, I suggested a few (unlikely, but possible) ways in which Elon could actually improve Twitter. First up was that by taking the company private, Elon could remove Twitter from the whims of activist investors who were more focused on the short-term than the long-term.

The book goes into great detail about how much activist investors created problems for both Dorsey and Agrawal, pre-Musk. Specifically, their revenue and user demands actually made it somewhat more difficult to put in place a long-term vision.

In my original post, I talked about continuing Twitter’s actual commitment to free speech, which meant fighting government attempts to censor information (not just when you disagreed with the political leaders).

But beyond that, there were things like further investing in and supporting Bluesky (see disclaimer)* and its AT Protocol. After all, Elon claimed that he wanted to “open source” the algorithm.

Moving to an open protocol like the AT Protocol would have not just allowed the open sourcing of the recommendation algorithm, it would have opened up the ability for anyone to create their own algorithm, both for recommendations and for moderation. Instead, that’s all happening on the entirely independent Bluesky app, which really only exists because Elon threw away Twitter’s deal to work with Bluesky.

Furthermore, the book reveals that well before Elon came on the scene, Parag and other top execs at the company were working on something called Project Saturn, which was discussed a bit in Kurt Wagner’s earlier book on this topic, but which is explained in more detail here.

The book reveals that Parag very much agreed with Elon (and Jack) that expecting companies to constantly completely remove problematic content was not a very effective solution.

So they created a plan to basically rearchitect everything around “freedom of speech, not freedom of reach.” Ironically, this is the very same motto that Elon claimed to embrace soon after taking over the company (and after firing Parag).

Image

But Parag and others at Twitter had already been working on a system to operationalize that very idea. The plan was to use different “levels” and “circles,” in which users who followed the rules would have their content eligible for varying degrees of promotion within the algorithm. The more you violated the site’s rules, the further you would move toward the outer layers/rings of the system (which is where the Project Saturn name came from). That would mean less “reach,” but also less need for Twitter to fully remove accounts or tweets.
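
To make the mechanics concrete, here is a rough, hypothetical sketch of how a rings-based demotion system along those lines could work. Twitter never published Project Saturn’s actual design, so every threshold and multiplier below is an assumption.

```python
# A rough sketch of the "rings" concept described above, not Twitter's actual
# (never-published) Project Saturn implementation. Thresholds and reach
# multipliers are entirely made up for illustration.

def assign_ring(recent_violations: int) -> int:
    """More rule violations push an account toward an outer ring."""
    if recent_violations == 0:
        return 0   # innermost ring: fully eligible for algorithmic promotion
    if recent_violations <= 2:
        return 1
    if recent_violations <= 5:
        return 2
    return 3       # outermost ring: still visible, barely amplified

# Each ring gets a smaller share of algorithmic reach instead of removal.
REACH_MULTIPLIER = {0: 1.0, 1: 0.5, 2: 0.1, 3: 0.01}

def amplified_score(base_score: float, recent_violations: int) -> float:
    return base_score * REACH_MULTIPLIER[assign_ring(recent_violations)]

if __name__ == "__main__":
    print(amplified_score(100.0, 0))  # 100.0 -- full reach
    print(amplified_score(100.0, 4))  # 10.0  -- demoted, but not removed
```

The point of a design like that is that enforcement becomes a dial on reach rather than a binary keep-or-remove decision.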

It was a big rethinking of how social media could work and how it could support free speech. In reading about it in the book, it sounds like exactly what Elon said he wanted. A small team within Twitter, pushed by Parag’s vision, had been working on it since way before Elon purchased his shares and started the takeover process. According to the book, even as Elon caused such a mess in the summer of 2022 trying to back out of the deal, Parag kept pushing the team to move forward with the idea.

Once Elon took over, it appears that a few remaining people at the company tried to show him Project Saturn and explain to him how it would match the ideals he had talked about.

But Elon ignored them, tossed out all the work they had done on it, and just randomly started unbanning people he thought belonged back on the platform without any plan on how to deal with those users if they started causing problems (and driving away advertisers). He assumed that his new verification plan would solve both the revenue issues for the company and all moderation issues.

Even the idea that Twitter was too bloated with excess employees and a lack of vision seemed to be part of Agrawal’s plans. Before Elon had made his move, the book reveals that Agrawal had drawn up plans to lay off approximately 25% of the company and greatly streamline everything with a focus on building out certain lines of business and users. He did move to lay off many senior leaders as part of that streamlining, though it wasn’t as clearly explained at the time what the larger plan was. Elon’s effort to buy Twitter outright (and then back out of the deal) forced Agrawal to put the layoff plans on hold, out of a fear that Elon would view those layoffs as an attempt to sabotage the company.

It’s truly striking how much of what Elon claimed he wanted to do, Parag and his exec team were already doing. They were making things more open, transparent, and decentralized with Bluesky. They were decreasing the reliance on “takedowns” as a trust & safety mechanism with Saturn. They were betting big on “freedom of speech, not reach” with Saturn. They were fighting for actual free speech with legal actions around the globe. They were cutting employee bloat.

But the company was doing all of those things thoughtfully and deliberately, with a larger strategy behind it.

As the book details, Elon came in and not only tore down Chesterton Fences everywhere he could, but also dismissed, ignored, or cut loose all of the other projects that would have taken him far down the path he claimed he wanted to go.

So, now he’s left with a site that has trouble functioning, has lost nearly all of its revenue, and is generally seen as a laughingstock closed system designed just to push Elon’s latest political partisan brain farts, rather than enabling the world’s conversation.

Of course, in the wake of all that destruction, it has enabled things like Bluesky to spring forth entirely independent of Twitter, and to put some of this into practice. Just this weekend, Bluesky passed 10 million users, helped along by Elon's (again) hamfisted fight with Brazil, which (like so many other things Elon does) may have a good reason at its core (fighting against secretive government demands), but was handled in the dumbest way possible.

If there's one thing that is painfully clear throughout the book, it is that Elon was correct that there were all sorts of ways Twitter could be more efficient, more open, and less reliant on takedowns. But he handled each of them in the worst way possible and destroyed what potential the site had.

Later today on the podcast, I’ll have an interview with Kate Conger about the book and Elon where we talk some more about all of this.

* As I’ve said before, I’m now on the board of Bluesky, which wouldn’t have been necessary if Elon hadn’t immediately cut Bluesky free from Twitter upon taking over the company.

Filed Under: character limit, chesterton fences, content moderation, elon musk, free speech, kate conger, parag agrawal, project saturn, ryan mac, social media
Companies: bluesky, twitter, x

Zuckerberg Vows To Stop Apologizing To Bad Faith Politicians, Right After Doing Just That

from the yeah,-sure,-whatever dept

Two weeks ago, Mark Zuckerberg apologized for something he didn't actually do, in order to appease a bad faith actor demanding he take responsibility for something that didn't happen. This week, he's claiming that he's done falsely apologizing to bad faith actors demanding accountability for things he's not responsible for.

Pardon me, but I think I’ll wait for some actual evidence of this before I take it on faith that he’s a changed man.

There were plenty of times over the last decade when Mark Zuckerberg seemed both unwilling and unable to speak up about how content moderation / trust & safety actually worked. He was so easily battered by bad faith political actors into issuing pointless apologies that it became almost routine. Politicians began to realize they could capitalize on this kind of theater to their own benefit.

Over the course of that decade, there were many times when Zuck could have come out and more clearly explained the reality of these things: content moderation is impossible to do well at scale, mistakes will always be made, and some people will always disagree with some of the choices made. As such, there are times when people will have reasonable criticisms of decisions the company has made, or policies it has chosen to prioritize, but that has nothing to do with bad faith, or partisan politics, or the woke mind virus, or anything like that at all.

It just has to do with the nature of content moderation at scale. There are many malicious actors out there, many calls are subjective in nature, and operationalizing rules across tens of thousands of content moderators to protect the health and safety of users on a site is going to be fraught with decisions people disagree with.

Zuckerberg could have taken that stance at basically any point in the last decade. He could have tried to share some of the nuances and trade-offs inherent in these choices. Yet, each and every time, he seemed to fold and play politics.

So, there's one side of me that thinks it's nice to hear him suggest, in a recent appearance on some podcast, that he's done apologizing and is now focused on being more open and honest.

The founder of Facebook has spent a lot of time apologizing for Facebook’s content moderation issues. But when reflecting on the biggest mistakes of his career, Zuckerberg said his largest one was a “political miscalculation” that he described as a “20-year mistake.” Specifically, he said, he’d taken too much ownership for problems allegedly out of Facebook’s control.

“Some of the things they were asserting that we were doing or were responsible for, I don’t actually think we were,” said Zuckerberg. “When it’s a political problem… there are people operating in good faith who are identifying a problem and want something to be fixed, and there are people who are just looking for someone to blame.”

Of course, that would be a hell of a lot more compelling if, literally two weeks ago, Zuckerberg hadn't sent one of the most bad faith "just looking for someone else to blame" actors around, Jim Jordan, a totally spineless and craven apology for things that didn't even happen.

So it’s a little difficult to believe that Zuck has actually turned over a new leaf regarding political posturing, caving, and apologizing for things he wasn’t actually responsible for. It just looks like he’s shifted which bad faith actors he’s willing to cave to.

The problem in all of this is that there are (obviously!) plenty of things that social media companies and their CEOs could do better to provide a better overall environment. And there are (obviously!) plenty of things that social media companies and their CEOs could do better to explain and educate the public about the realities of social media, trust & safety, and society itself.

There are all sorts of problems that are pinned on social media that are really society-level problems that governments have failed to deal with going back centuries. A real leader would strive to highlight the differences between the things that are societal level problems and platform level problems. A real leader would highlight ways in which society should be attacking some of those problems, and where and how social media platforms could assist.

But Zuckerberg isn’t doing any of that. He’s groveling before bad faith actors… and pretending that he’s done doing so. Mainly because those very same bad faith actors keep insisting (in a bad faith way) that Zuck’s previous apologies were because of other bad faith actors conspiring with Zuck to silence certain voices. Except that didn’t happen.

So forgive me for being a bit skeptical that Zuck is "done" apologizing or "done" caving to bad faith actors. The claim he's making here really just amounts to caving to a new and different batch of them.

Filed Under: apologies, bad faith actors, content moderation, jim jordan, mark zuckerberg, politics, trade offs
Companies: meta