matt taibbi – Techdirt

Supporting Free Speech Means Supporting Victims Of SLAPP Suits, Even If You Disagree With The Speakers

from the no-free-speech-tourism dept

When Elon filed his recent ridiculous SLAPP suit against Media Matters, it was noteworthy (but not surprising) to me to see people who not only claimed to be “free speech supporters,” but who made that a key part of their persona, cheering on the lawsuit, even though its only purpose was to use the power of the state to stifle and suppress free speech.

Matt Taibbi, for example, has spent the last year insisting that the Twitter Files, which he has totally misread and misrepresented, are one of the biggest “free speech” stories of our times. Indeed, he just won some made-up new award worth $100k for “excellence in investigative journalism,” and in his “acceptance speech” he argued that free speech was under attack and needed to be defended.

Just a few weeks later, he was cheering on Musk’s decision to sue Media Matters over journalism Taibbi didn’t like and didn’t support. Taibbi argues that the lawsuit is okay because it accuses Media Matters of “creating a news story, reporting on it, _then propagandizing it to willing partners in the mainstream media._” Except, um, dude, that’s exactly the same thing you did with the Twitter Files story, propagandizing it to Fox News and other nonsense peddling networks.

Of course, Taibbi has every right to be terrible at the job of being a journalist. He has every right to not understand the documents put in front of him. He has every right to leave out the important context that explains what he’s seeing (either because he’s too clueless to understand it, or because of motivated reasoning and the need to feed an audience of ignorant clods who pay him). He even has every right to make the many, many factual mistakes he made in his reporting, which undermine the central premise of that reporting. That’s free speech.

If Taibbi were sued over his reporting, I’d stand up and point out how ridiculous it is and how such a lawsuit is an attack on his free speech rights, and a clear SLAPP suit. Taibbi may deserve to be ridiculed for his ignorance and credulity, but he should never face legal consequences for it.

But, according to Taibbi himself, it’s okay for someone who is the victim of that kind of bad reporting to sue and run journalists through the destructive process of a SLAPP suit, because if a story is “propagandized” then it’s fair game. He even seems kinda gleeful about it, suggesting that all sorts of reporting from the past few years deserves similar treatment, whether it was reporting about Donald Trump’s alleged connections to Russia or things about COVID — that all of it is now fair game if it was misleadingly sensationalized (again, the very same thing he, himself, has been doing, just for a different team).

That’s not supporting free speech. That’s exactly what they accuse others of doing: supporting free speech only when you agree with it. And it’s cheering on an actual, blatant, obvious abuse of state power to try to stifle speech.

So, let’s go back to Musk’s other SLAPP suit, which he filed earlier this year against the Center for Countering Digital Hate, claiming that their reporting about hateful content on ExTwitter somehow violated a contract (originally, Musk’s personal lawyer threatened CCDH with defamation, but that’s not what they filed).

As I made clear at the time, I think CCDH is awful. I think their research methodologies are terrible. I think they play fast and loose with the details in their rush to blame social media for all sorts of ills in the world. Hell, just weeks before the lawsuit I dismantled a CCDH study that was being relied upon by California legislators to try to pass a terrible bill regarding kids and social media. The study was egregiously bad, to the point of arguing that photos of gum on social media were “eating disorder content.”

I would never trust anything that CCDH puts out. I think that their slipshod methodology undermines everything they do to the point that I’d never rely on anything they said as being accurate, because they have zero credibility with me.

But, unlike Taibbi, I would never cheer on a SLAPP suit against them. I still stand up for CCDH’s free speech rights to publish the studies they publish and to speak up about what they believe without then being sued and facing legal consequences for stating their opinions.

That’s why I was happy that the Public Participation Project, a non-profit working to pass more anti-SLAPP laws across the country, where I am a board member, decided to work with the Harvard Cyberlaw clinic to file an amicus brief in support of CCDH, calling out how Musk’s lawsuit against them is an obvious SLAPP suit. Full disclosure: the Public Participation Project did ask me how I felt about the organization submitting this brief before deciding to take it on, knowing my own reservations about CCDH’s work. I told them I supported the filing wholeheartedly and am proud to see the organization doing the right thing and standing up against a SLAPP suit, even if (perhaps especially because!) I disagree with what CCDH says.

The filing, written by the amazing Kendra Albert, makes some key points. The original lawsuit was framed as a “breach of contract” lawsuit in a thinly veiled attempt to avoid an anti-SLAPP claim, since ExTwitter will certainly claim contract issues have nothing to do with speech.

But as the Public Participation Project’s filing makes clear, there is no way to read that lawsuit without realizing it’s an attempt to punish CCDH for its speech and silence similar such speech:

By suing the Center for Countering Digital Hate (“CCDH”), X Corp. (formerly Twitter), seeks to silence critique rather than to counter it. X Corp.’s claims may sound in breach of contract, intentional interference with contractual relations, and inducing breach of contract (hereinafter “state law claims”), rather than being explicitly about CCDH’s speech. But its arguments and its damages calculations rest on the decisions of advertisers to no longer work with X Corp. as a result of CCDH’s First Amendment protected activity. In brief, this is a classic Strategic Lawsuit Against Public Participation (SLAPP). X Corp. aims not to win but to chill.

Fortunately, the California anti-SLAPP statute, Cal. Civ. Proc. Code § 425.16, provides protection against abuse of the legal system with the goal of suppressing speech. X Corp.’s claims arise from CCDH’s protected activity and relate to a matter of public interest, making it appropriate for the anti-SLAPP statute to apply. Indeed, the anti-SLAPP statute’s purpose requires its application to these claims.

No doubt recognizing this, X Corp. seeks to do through haphazardly constructed contractual claims what the First Amendment does not permit it to do through speech torts. The harm that the anti-SLAPP statute aims to prevent will be realized if this court allows X Corp.’s claims to continue. The statute provides little help if plaintiffs can easily plead around it.

Indeed, X Corp.’s legal theories, if accepted by this court, could chill broad swathes of speech. If large online platforms can weaponize their terms of service against researchers and commentators they dislike, they may turn any contract of adhesion into a “SLAPP trap.” Organizations of all types could hold their critics liable for loss of revenue from the organization’s own bad acts, so long as a contractual breach might have occurred somewhere in the causal chain.

X Corp.’s behavior has already substantially chilled researchers and advocates, and it shows no signs of stopping. Since this litigation was filed, its Chief Technology Officer has threatened organizations with litigation because they have commented on X Corp.’s policies in a way that he dislikes. And on November 20, 2023, X Corp. filed another lawsuit against a non-profit organization for reporting its findings about hate speech on X. Organizations and individuals must be able to engage in research around the harms and benefits of social media platforms and they must be able to publish that research without fear of a federal lawsuit if their message is too successful.

Standing up for free speech means standing up against using the power of the state, including the courts, to attack and suppress speech you dislike. As much as I disagree with CCDH’s conclusions and the methodology behind many of its reports, unlike supposed “free speech warriors” cheering on Musk’s lawsuits against the likes of CCDH and Media Matters, I’m proud that an organization I’m associated with is willing to stand on principle and argue for actual free speech.

Hopefully, the courts will recognize this as well. And hopefully more people will begin to realize just how thin and fake the claims of other “free speech” supporters are. Not through lawsuits, but just by seeing what they actually do in practice when true threats to free speech arrive.

Filed Under: anti-slapp, california, elon musk, free speech, matt taibbi, slapp suits
Companies: ccdh, twitter, x

Stop Letting Nonsense Purveyors Cosplay As Free Speech Martyrs

from the oh,-i-do-declare-westminster dept

A few people have been asking me about last week’s release of something called the “Westminster Declaration,” which is a high and mighty sounding “declaration” about freedom of speech, signed by a bunch of journalists, academics, advocates and more. It reminded me a lot of the infamous “Harper’s Letter” from a few years ago that generated a bunch of controversy for similar reasons.

In short, both documents take a few very real concerns about attacks on free expression, but commingle them with a bunch of total bullshit huckster concerns that only pretend to be about free speech, in order to legitimize the latter with the former. The argument form is becoming all too common these days, in which you nod along with the obvious stuff, but any reasonable mind should stop and wonder why the nonsense part was included as well.

It’s like saying war is bad, and we should all seek peace, and my neighbor Ned is an asshole who makes me want to be violent, so since we all agree that war is bad, we should banish Ned.

The Westminster Declaration is sort of like that, but the parts about war are about legitimate attacks on free speech around the globe (nearly all of which we cover here), and the parts about my neighbor Ned are… the bogus Twitter Files.

The Daily Beast asked me to write up something about it all, so I wrote a fairly long analysis of just how silly the Westminster Declaration turns out to be when you break down the details. Here’s a snippet:

I think there is much in the Westminster Declaration that is worth supporting. We’re seeing laws pushed, worldwide, that seek to silence voices on the internet. Global attacks on privacy and speech-enhancing encryption technologies are a legitimate concern.

But the Declaration—apparently authored by Michael Shellenberger and Matt Taibbi, along with Andrew Lowenthal, according to their announcement of the document—seeks to take those legitimate concerns and wrap them tightly around a fantasy concoction. It’s a moral panic of their own creation, arguing that separate from the legitimate concern of censorial laws being passed in numerous countries, there is something more nefarious—what they have hilariously dubbed “the censorship-industrial complex.”

To be clear, this is something that does not actually exist. It’s a fever dream from people who are upset that they, or their friends, violated the rules of social media platforms and faced the consequences.

But, unable to admit that private entities determining their own rules is an act of free expression itself (the right not to associate with speech is just as important as the right to speak), the crux of the Westminster Declaration is an attempt to commingle legitimate concerns about government censorship with grievances about private companies’ moderation decisions.

You can click through to read the whole thing.

Filed Under: free speech, matt taibbi, michael shellenberger, moral panic, twitter files, westminster declaration
Companies: twitter

Twitter Admits in Court Filing: Elon Musk Is Simply Wrong About Government Interference At Twitter

from the confirmation-idiocy dept

It is amazing the degree to which some people will engage in confirmation bias and believe absolute nonsense, even as the facts show the opposite is true. Over the past few months, we’ve gone through the various “Twitter Files” releases, and pointed out over and over again how the explanations people gave for them simply don’t match up with the underlying documents.

To date, not a single document revealed has shown what people now falsely believe: that the US government and Twitter were working together to “censor” people based on their political viewpoints. Literally none of that has been shown at all. Instead, what’s been shown is that Twitter had a competent trust & safety team that debated tough questions around how to apply policies for users on their platform and did not seem at all politically motivated in their decisions. Furthermore, while various government entities sometimes did communicate with the company, there’s little evidence of any attempt by government officials to compel Twitter to moderate in any particular way, and Twitter staff regularly and repeatedly rebuffed any attempt by government officials to go after certain users or content.

Now, as you may recall, two years ago, a few months after Donald Trump was banned from Twitter, Facebook, and YouTube, he sued the companies, claiming that the banning violated the 1st Amendment. This was hilariously stupid for many reasons, not the least of which is that at the time of the banning Donald Trump was the President of the United States, and these companies were very much private entities. The 1st Amendment restricts the government, not private entities, and it absolutely does not restrict private companies from banning the President of the United States should the President violate a site’s rules.

As expected, the case went poorly for Trump, leading to it being dismissed. It is currently on appeal. However, in early May, Trump’s lawyers filed a motion to effectively try to reopen the case at the district court, arguing that the Twitter Files changed everything, and that now there was proof that Trump’s 1st Amendment rights were violated.

In October of 2022, after the entry of this Court’s Judgment, Twitter was acquired by Elon Musk. Shortly thereafter, Mr. Musk invited several journalists to review Twitter’s internal records. Allowing these journalists to search for evidence that Twitter censored content that was otherwise compliant with Twitter’s “TOS”, the journalists disclosed their findings in a series of posts on Twitter collectively known as the Twitter Files. As set out in the attached Rule 60 motion, the Twitter Files confirm Plaintiffs’ allegations that Twitter engaged in a widespread censorship campaign that not only violated the TOS but, as much of the censorship was the result of unlawful government influence, violated the First Amendment.

I had been thinking about writing this up as a story, but things got busy, and last week Twitter (which, again, is now owned by Elon Musk, who has repeatedly made ridiculously misleading statements about what the Twitter Files showed) filed its response, in which it says (with the risk of sanctions on the line) that this is all bullshit and nothing in the Twitter Files says what Trump (and Elon, and a bunch of his fans) claim it says. This is pretty fucking damning to anyone who believed the nonsense Twitter Files narrative.

The new materials do not plausibly suggest that Twitter suspended any of Plaintiffs’ accounts pursuant to any state-created right or rule of conduct. As this Court held, Lugar’s first prong requires a “clear,” government-imposed rule. Dkt. 165 at 6. But, as with Plaintiffs’ Amended Complaint, the new materials contain only a “grab-bag” of communications about varied topics, none establishing a state-imposed rule responsible for Plaintiffs’ challenged content-moderation decisions. The new materials cover topics ranging, for example, from Hunter Biden’s laptop, Pls.’ Exs. A.14 & A.27-A.28, to foreign interference in the 2020 election, Pls.’ Exs. A.13 at, e.g., 35:15-41:4, A.22, A.37, A.38, to techniques used in malware and ransomware attacks, Pls.’ Ex. A.38. As with the allegations in the Amended Complaint, “[i]t is … not plausible to conclude that Twitter or any other listener could discern a clear state rule” from such varied communications. Dkt. 165 at 6. The new materials would not change this Court’s dismissal of Plaintiffs’ First Amendment claims for this reason alone.

Moreover, a rule of conduct is imposed by the state only if backed by the force of law, as with a statute or regulation. See Sutton v. Providence St. Joseph Med. Ctr., 192 F.3d 826, 835 (9th Cir. 1999) (regulatory requirements can satisfy Lugar’s first prong). Here, nothing in the new materials suggests any statute or regulation dictating or authorizing Twitter’s content-moderation decisions with respect to Plaintiffs’ accounts. To the contrary, the new materials show that Twitter takes content-moderation actions pursuant to its own rules and policies. As attested to by FBI Agent Elvis Chan, when the FBI reported content to social media companies, they would “alert the social media companies to see if [the content] violated their terms of service,” and the social media companies would then “follow their own policies” regarding what actions to take, if any. Pls.’ Ex. A.13 at 165:9-22 (emphases added); accord id. at 267:19-23, 295:24-296:4. And general calls from the Biden administration for Twitter and other social media companies to “do more” to address alleged misinformation, see Pls.’ Ex. A.47, fail to suggest a state-imposed rule of conduct for the same reasons this Court already held the Amended Complaint’s allegations insufficient: “[T]he comments of a handful of elected officials are a far cry from a ‘rule of decision for which the State is responsible’” and do not impose any “clear rule,” let alone one with the force of law. Dkt. 165 at 6. The new materials thus would not change this Court’s determination that Plaintiffs have not alleged any deprivation caused by a rule of conduct imposed by the State.

Later on it goes further:

Plaintiffs appear to contend (Pls.’ Ex. 1 at 16-17) that the new materials support an inference of state action in Twitter’s suspension of Trump’s account because they show that certain Twitter employees initially determined that Trump’s January 2021 Tweets (for which his account was ultimately suspended) did not violate Twitter’s policy against inciting violence. But these materials regarding Twitter’s internal deliberations and disagreements show no governmental participation with respect to Plaintiffs’ accounts. See Pls.’ Exs. A.5.5, A-49-53.5

Plaintiffs are also wrong (Ex. 1 at 15-16) that general calls from the Biden administration to address alleged COVID-19 misinformation support a plausible inference of state action in Twitter’s suspensions of Cuadros’s and Root’s accounts simply because they “had their Twitter accounts suspended or revoked due to Covid-19 content.” For one thing, most of the relevant communications date from Spring 2021 or later, after Cuadros and Roots’ suspensions in 2020 and early 2021, respectively, see Pls.’ Ex. A.46-A.47; Am. Compl. ¶¶124, 150. Such communications that “post-date the relevant conduct that allegedly injured Plaintiffs … do not establish [state] action.” Federal Agency of News LLC v. Facebook, Inc., 432 F. Supp. 3d 1107, 1125-26 (N.D. Cal. 2020). Additionally, the new materials contain only general calls on Twitter to “do more” to address COVID-19 misinformation and questions regarding why Twitter had not taken action against certain other accounts (not Plaintiffs’). Pls.’ Exs. A.43-A.48. Such requests to “do more to stop the spread of false or misleading COVID-19 information,” untethered to any specific threat or requirement to take any specific action against Plaintiffs, is “permissible persuasion” and not state action. Kennedy v. Warren, 66 F.4th 1199, 1205, 1207-12 (9th Cir. 2023). As this Court previously held, government actors are free to “urg[e]” private parties to take certain actions or “criticize” others without giving rise to state action. Dkt. 165 at 12-13. Because that is the most that the new materials suggest with respect to Cuadros and Root, the new materials would not change this Court’s dismissal of their claims.

Twitter’s filing is like a beat-by-beat debunking of the conspiracy theories pushed by the dude who owns Twitter. It’s really quite incredible.

First, the simple act of receiving information from the government, or of deciding to act upon that information, does not transform a private actor into a state actor. See O’Handley, 62 F.4th at 1160 (reports from government actors “flagg[ing] for Twitter’s review posts that potentially violated the company’s content-moderation policy” were not state action). While Plaintiffs have attempted to distinguish O’Handley on the basis of the repeated communications reflected in the new materials, (Ex. 1 at 13), O’Handley held that such “flag[s]” do not suggest state action even where done “on a repeated basis” through a dedicated, “priority” portal. Id. The very documents on which Plaintiffs rely establish that when governmental actors reported to social media companies content that potentially violated their terms of service, the companies, including Twitter, would “see if [the content] violated their terms of service,” and, “[i]f [it] did, they would follow their own policies” regarding what content-moderation action was appropriate. Pls.’ Ex. A.13 at 165:3-17; accord id. at 296:1-4 (“[W]e [the FBI] would send information about malign foreign influence to specific companies as we became aware of it, and then they would review it and determine if they needed to take action.”). In other words, Twitter made an independent assessment and acted accordingly.

Moreover, the “frequen[t] [] meetings” on which Plaintiffs rely heavily in attempting to show joint action fall even farther short of what was alleged in O’Handley because, as discussed supra at 7, they were wholly unrelated to the kinds of content-moderation decisions at issue here.

Second, contrary to Plaintiffs’ contention (Ex. 1 at 11-12), the fact that the government gave certain Twitter employees security clearance does not transform information sharing into state action. The necessity for security clearance reflects only the sensitive nature of the information being shared— i.e., efforts by “[f]oreign adversaries” to “undermine the legitimacy of the [2020] election,” Pls.’ Ex. A.22. It says nothing about whether Twitter would work hand-in-hand with the federal government. Again, when the FBI shared sensitive information regarding possible election interference, Twitter determined whether and how to respond. Pls.’ Ex. A.13 at 165:3-17, 296:1-4.

Third, Plaintiffs are also wrong (Ex. 1 at 12-13) that Twitter became a state actor because the FBI “pay[ed] Twitter millions of dollars for the staff [t]ime Twitter expended in handling the government’s censorship requests.” For one thing, the communication on which Plaintiffs rely in fact explains that Twitter was reimbursed $3 million pursuant to a “statutory right of reimbursement for time spent processing” “legal process” requests. Pls.’ Ex. A.34 (emphasis added). The “statutory right” at issue is that created under the Stored Communications Act for costs “incurred in searching for, assembling, reproducing, or otherwise providing” electronic communications requested by the government pursuant to a warrant. 18 U.S.C. § 2706(a), see also id. § 2703(a). The reimbursements were not for responding to requests to remove any accounts or content and thus are wholly irrelevant to Plaintiffs’ joint-action theory

And, in any event, a financial relationship supports joint action only where there is complete “financial integration” and “indispensability.” Vincent v. Trend W. Tech. Corp., 828 F.2d 563, 569 (9th Cir. 1987) (quotation marks omitted). During the period in which Twitter recovered $3 million (late 2019 through early 2021), the company was valued at approximately $30 billion. Even Plaintiffs do not argue that a $3 million payment would be indispensable to Twitter.

I mean, if you read Techdirt, you already knew about all this, because we debunked the nonsense “government paid Twitter to censor” story months ago, even as Elon Musk was falsely tweeting exactly that. And now, Elon’s own lawyers are admitting that the company’s owner is completely full of shit or too stupid to actually read any of the details in the Twitter files. It’s incredible.

It goes on. Remember how Elon keeps insisting that the government coerced Twitter to make content moderation decisions? Well, Twitter’s own lawyers say that’s absolute horseshit. I mean, much of the following basically is what my Techdirt posts have explained:

The new materials do not evince coercion because they contain no threat of government sanction premised on Twitter’s failure to suspend Plaintiffs’ accounts. As this Court already held, coercion requires “a concrete and specific government action, or threatened action” for failure to comply with a governmental dictate. Dkt. 165 at 11. Even calls from legislators to “do something” about Plaintiffs’ Tweets (specifically, Mr. Trump’s) do not suggest coercion absent “any threatening remark directed to Twitter.” Id. at 7. The Ninth Circuit has since affirmed the same basic conclusion, holding in O’Handley that “government officials do not violate the First Amendment when they request that a private intermediary not carry a third party’s speech so long as the officials do not threaten adverse consequences if the intermediary refuses to comply.” 62 F.4th at 1158. Like the Amended Complaint, the new materials show, at most, attempts by the government to persuade and not any threat of punitive action, and thus would not alter the Court’s dismissal of Plaintiffs’ First Amendment claims.

FBI Officials. None of the FBI’s communications with Twitter cited by Plaintiffs evince coercion because they do not contain a specific government demand to remove content—let alone one backed by the threat of government sanction. Instead, the new materials show that the agency issued general updates about their efforts to combat foreign interference in the 2020 election. For example, one FBI email notified Twitter that the agency issued a “joint advisory” on recent ransomware tactics, and another explained that the Treasury department seized domains used by foreign actors to orchestrate a “disinformation campaign.” Pls.’ Ex. A.38. These informational updates cannot be coercive because they merely convey information; there is no specific government demand to do anything—let alone one backed by government sanction.

So too with respect to the cited FBI emails flagging specific Tweets. The emails were phrased in advisory terms, flagging accounts they believed may violate Twitter’s policies—and Twitter employees received them as such, independently reviewing the flagged Tweets. See, e.g., Pls.’ Exs. A.30 (“The FBI San Francisco Emergency Operations Center sent us the attached report of 207 Tweets they believe may be in violation of our policies.”), A.31, A.40. None even requested—let alone commanded—Twitter to take down any content. And none threatened retaliatory action if Twitter did not remove the flagged Tweets. As in O’Handley, therefore, the FBI’s “flags” cannot amount to coercion because there was “no intimation that Twitter would suffer adverse consequences if it refused.” 62 F.4th at 1158. What is more, unlike O’Handley, not one of the cited communications contains a request to take any action whatsoever with respect to any of Plaintiffs’ accounts.6

Plaintiffs’ claim (Ex. 1 at 14) that the FBI’s “compensation of Twitter for responding to its requests” had coercive force is meritless. As a threshold matter, as discussed supra at 10, the new materials demonstrate only that Twitter exercised its statutory right—provided to all private actors—to seek reimbursement for time it spent processing a government official’s legal requests for information under the Stored Communications Act, 18 U.S.C. § 2706; see also id. § 2703. The payments therefore do not concern content moderation at all—let alone specific requests to take down content. And in any event, the Ninth Circuit has made clear that, under a coercion theory, “receipt of government funds is insufficient to convert a private [actor] into a state actor, even where virtually all of the [the party’s] income [i]s derived from government funding.” Heineke, 965 F.3d at 1013 (quotation marks omitted) (third alteration in original). Therefore, Plaintiffs’ reliance on those payments does not evince coercion.

What about the pressure from Congress? That too is garbage, admits Twitter:

Congress. The new materials do not contain any actionable threat by Congress tied to Twitter’s suspension of Plaintiffs’ accounts. First, Plaintiffs place much stock (Ex. 1 at 14-15) in a single FBI agent’s opinion that Twitter employees may have felt “pressure” by Members of Congress to adopt a more proactive approach to content moderation, Pls.’ Ex. A13 at 117:15-118:6. But a third-party’s opinion as to what Twitter’s employees might have felt is hardly dispositive. And in any event, “[g]enerating public pressure to motivate others to change their behavior is a core part of public discourse,” and is not coercion absent a specific threatened sanction for failure to comply….

White House Officials. The new materials do not evince any actionable threat by White House officials either. Plaintiffs rely (Ex. 1 at 16) on a single statement by a Twitter employee that “[t]he Biden team was not satisfied with Twitter’s enforcement approach as they wanted Twitter to do more and to deplatform several accounts,” Pls.’ Ex. A.47. But those exchanges took place in December 2022, id.— well after Plaintiffs’ suspensions, and so could not have compelled Twitter to suspend their accounts. Furthermore, the new materials fail to identify any threat of government sanction arising from the officials’ “dissatisfaction”; indeed, Twitter was only asked to join “other calls” to continue the dialogue

Basically, Twitter’s own lawyers are admitting in a court filing that the guy who owns their company is spewing utter nonsense about what the Twitter Files revealed. I don’t think I’ve ever seen anything quite like this.

Guy takes over company because he’s positive that there are awful things happening behind the scenes. Gives “full access” to a bunch of very ignorant journalists who are confused about what they find. Guy who now owns the company falsely insists that they proved what he believed all along, leading to the revival of a preternaturally stupid lawsuit… only to have the company’s lawyers basically tell the judge “ignore our stupid fucking owner, he can’t read or understand any of this.”

Filed Under: coercion, content moderation, donald trump, elon musk, elvis chan, fbi, matt taibbi, twitter files
Companies: twitter

After Matt Taibbi Leaves Twitter, Elon Musk ‘Shadow Bans’ All Of Taibbi’s Tweets, Including The Twitter Files

from the a-show-in-three-acts dept

The refrain to remember with Twitter under Elon Musk: it can always get dumber.

Quick(ish) recap:

On Thursday, Musk’s original hand-picked Twitter Files scribe, Matt Taibbi, went on Mehdi Hasan’s show (an appearance Taibbi explicitly demanded after Hasan asked about Taibbi’s opinion of Musk blocking accounts for the Modi government in India). The interview did not go well for Taibbi in the same manner that finding an iceberg did not go well for the Titanic.

One segment of the absolutely brutal interview involves Hasan asking Taibbi the very question that Taibbi had said he wanted to come on the show to answer: what was his opinion of Musk blocking Twitter accounts in India, including those of journalists and activists, that were critical of the Modi government? Hasan notes that Taibbi has talked up how he believes Musk is supporting free speech, and asked Taibbi if he’d like to criticize the blocking of journalists.

Taibbi refused to do so, and claimed he didn’t really know about the story, even though it was the very story that Hasan initially tweeted about that resulted in Taibbi saying he’d tell Hasan his opinion on the story if he was invited on the show. It was, well, embarrassing to watch Taibbi squirm as he knew he couldn’t say anything critical about Musk. He had already seen how the second Twitter Files scribe, Bari Weiss, was excommunicated from the Church of Musk for criticizing Musk’s banning of journalists.

The conversation was embarrassing in real time:

Hasan: What’s interesting about Elon Musk is that, we’ve checked, you’ve tweeted over thirty times about Musk since he announced he was going to buy Twitter last April, and not a word of criticism about him in any of those thirty plus tweets. Musk is a billionaire who’s been found to have violated labor laws multiple times, including in the past few days. He’s attacked labor unions, reportedly fired employees on a whim, slammed the idea of a wealth tax. Told his millions of followers to vote Republican last year, and in response to a right-wing coup against Bolivian leftist President Evo Morales tweeted “we’ll coup whoever we want.”

And yet, you’ve been silent on all that.

How did you go, Matt, from being the scourge of Wall St. The man who called Goldman Sachs the Vampire Squid, to be unwilling to say anything critical at all about this right wing reactionary anti-union billionaire.

Taibbi: Look….[long pause… then a sigh]. So… so… I like Elon Musk. I met him. This is part of the calculation when you do one of these stories. Are they going to give you information that’s gonna make you look stupid. Do you think their motives are sincere about doing x or y…. I did. I thought his motives were sincere about the Twitter Files. And I admired them. I thought he did a tremendous public service in opening the files up. But that doesn’t mean I have to agree with him about everything.

Hasan: I agree with you. But you never disagree with him. You’ve gone silent. Some would say that’s access journalism.

Taibbi: No! No. I haven’t done… I haven’t reported anything that limits my ability to talk about Elon Musk…

Hasan: So will you criticize him today? For banning journalists, for working with Modi government to shut down speech, for being anti-union. You can go for it. I’ll give you as much time as you’d like. Would you like to criticize Musk now?

Taibbi: No, I don’t particularly want to… uh… look, I didn’t criticize him really before… uh… and… I think that what the Twitter Files are is a step in the right direction…

Hasan: But it’s the same Twitter he’s running right now…

Taibbi: I don’t have to disagree with him… if you wanna ask… a question in bad faith…

[crosstalk]

Hasan: It’s not in bad faith, Matt!

Taibbi: It absolutely is!

Hasan: Hold on, hold on, let me finish my question. You saying that he’s good for Twitter and good for speech. I’m saying that he’s using Twitter to help one of the most rightwing governments in the world censor speech. I will criticize that. Will you?

Taibbi: I have to look at the story first. I’m not looking at it now!

By Friday, that exchange became even more embarrassing. Because, due to a separate dispute that Elon was having with Substack (more on that in a bit), he decided to arbitrarily bar anyone from retweeting, replying, or even liking any tweet that had a Substack link in it. But Taibbi’s vast income stems from having one of the largest paying Substack subscriber bases. So, in rapid succession he announced that he was leaving Twitter, and would rely on Substack, and that this would likely limit his ability to continue working on the Twitter Files. Minutes later, Elon Musk unfollowed Taibbi on Twitter.

Quite a shift in the Musk/Taibbi relationship in 24 hours.

Then came Saturday. First Musk made up some complete bullshit about both Substack and Taibbi, claiming that Taibbi was an employee of Substack, and also that Substack was violating their (rapidly changing to retcon whatever petty angry outburst Musk has) API rules.

Somewhat hilariously, the Community Notes feature — which old Twitter had created, though once Musk changed its name from “Birdwatch” to “Community Notes,” he acted as if it was his greatest invention — is correcting Musk:

That’s because, either late Friday or early Saturday, Musk had added substack.com to Twitter’s list of “unsafe” URLs, suggesting that it may contain malicious links that could steal information. Of course, the only malicious one here was Musk.

Also correcting Musk? Substack founder Chris Best:

Then, a little later on Saturday, people realized that searching for Matt Taibbi’s account… turned up nothing. Taibbi wrote on Substack that he believed all his Twitter Files had been “removed,” as first pointed out by William LeGate:

But, if you dug into Taibbi’s Twitter account, you could still find them. Mashable’s Matt Binder solved the mystery and revealed, somewhat hilariously, that Taibbi’s account appears to have been “max deboosted” or, in Twitter’s terms, had the highest level of visibility filters applied, meaning you can’t find Taibbi in search. Or, in the parlance of today, Musk shadowbanned Matt Taibbi.

Again, this shouldn’t be a surprise, even though the irony is super thick. Early Twitter Files revealed that Twitter had long used visibility filtering to limit the spread of certain accounts. Musk screamed about how this was horrible shadowbanning… but then proceeded to use those tools to suppress the speech of people he disliked. And now he’s using the tool, at max power, to hide Taibbi and the very files that we were (falsely) told “exposed” how old Twitter shadowbanned people.

This is way more ironic than the Alanis song.

So, yes, we went from Taibbi praising Elon Musk for supporting free speech and supposedly helping to expose the evil shadowbanning of the old regime, and refusing to criticize Musk on anything, to Taibbi leaving Twitter, and Musk not just unfollowing him but shadowbanning him and all his Twitter Files.

In about 48 hours.

Absolutely incredible.

Just a stunning show of leopards eating faces.

Not much happened then on Sunday, though Twitter first added a redirect on any searches for “substack” to “newsletters” (what?) and then quietly stopped throttling links to Substack, though no explanation was given. And as far as I can tell, Taibbi’s account is still “max deboosted.”

Anyway, again, to be clear: Elon Musk is perfectly within his rights to be as arbitrary and capricious as he wants to be with his own site. But can people please stop pretending his actions have literally anything to do with “free speech”?

Filed Under: content moderation, elon musk, matt taibbi, mehdi hasan, shadowbanning, twitter files, visibility filtering
Companies: substack, twitter

Mehdi Hasan Dismantles The Entire Foundation Of The Twitter Files As Matt Taibbi Stumbles To Defend It

from the i'd-like-to-report-a-murder dept

So here’s the deal. If you think the Twitter Files are still something legit or telling or powerful, watch this 30-minute interview that Mehdi Hasan did with Matt Taibbi (at Taibbi’s own demand):

Hasan came prepared with facts. Lots of them. Many of which debunked the core foundation on which Taibbi and his many fans have built the narrative regarding the Twitter Files.

We’ve debunked many of Matt’s errors over the past few months, and a few of the errors we’ve called out (though not nearly all, as there are so, so many) show up in Hasan’s interview, while Taibbi shrugs, sighs, and makes it clear he’s totally out of his depth when confronted with facts.

Since the interview, Taibbi has been scrambling to claim that the errors Hasan called out are small side issues, but they’re not. They’re literally the core pieces on which he’s built the nonsense framing that Stanford, the University of Washington, some non-profits, the government, and social media have formed a “censorship-industrial complex” to stifle the speech of Americans.

As we keep showing, Matt makes very sloppy errors at every turn, doesn’t understand the stuff he has found, and is confused about some fairly basic concepts.

The errors that Hasan highlights matter a lot. A key one is Taibbi’s claim that the Election Integrity Partnership flagged 22 million tweets for Twitter to take down in partnership with the government. This is flat out wrong. The EIP, which was focused on studying election interference, flagged less than 3,000 tweets for Twitter to review (2,890 to be exact).

And they were quite clear in their report on how all this worked. EIP was an academic project to track election interference information and how it flowed across social media. The 22 million figure shows up in the report, but it was just a count of how many tweets they tracked in trying to follow how this information spread, not seeking to remove it. And the vast majority of those tweets weren’t even related to the ones they did explicitly create tickets on.

In total, our incident-related tweet data included 5,888,771 tweets and retweets from ticket status IDs directly, 1,094,115 tweets and retweets collected first from ticket URLs, and 14,914,478 from keyword searches, for a total of 21,897,364 tweets.

Tracking how information spreads is… um… not a problem now, is it? Is Taibbi really claiming that academics shouldn’t track the flow of information?

Either way, Taibbi overstated the number of tweets that EIP reported by 21,894,474 tweets. In percentage terms, the actual number of reported tweets was 0.013% of the number Taibbi claimed.
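
If you want to check that arithmetic yourself, here’s a quick back-of-the-envelope sketch, using only the figures quoted above from the EIP report (nothing here is new data, just the same numbers run through Python):

```python
# Back-of-the-envelope check using only the figures quoted from the EIP report
tracked = 5_888_771 + 1_094_115 + 14_914_478  # tweets EIP merely *tracked*
reported = 2_890                              # tweets EIP actually flagged to Twitter

print(f"{tracked:,}")                # 21,897,364 -- the "22 million" Taibbi cites
print(f"{tracked - reported:,}")     # 21,894,474 -- the size of the overstatement
print(f"{reported / tracked:.3%}")   # 0.013% -- flagged tweets as a share of tracked
```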

Okay, you say, but STILL, if the government is flagging even 2,890 tweets, that’s still a problem! And it would be if it was the government flagging those tweets. But it’s not. As the report details, basically all of the tickets in the system were created by non-government entities, mainly from the EIP members themselves (Stanford, University of Washington, Graphika, and Digital Forensics Lab).

This is where the second big error that Taibbi makes knocks down a key pillar of his argument. Hasan notes that Taibbi falsely turned the non-profit Center for Internet Security (CIS) into the government agency the Cybersecurity and Infrastructure Security Agency (CISA). Taibbi did this by assuming that when someone at Twitter noted information came from CIS, they must have meant CISA, and therefore he appended the A in brackets as if he was correcting a typo:

Taibbi admits that this was a mistake and has now tweeted a correction (though this point was identified weeks ago, and he claims he only just learned about it). I’ve seen Taibbi and his defenders claim that this is no big deal, that he just “messed up an acronym.” But, uh, no. Having CISA report tweets to Twitter was a key linchpin in the argument that the government was sending tweets for Twitter to remove. But it wasn’t the government, it was an independent non-profit.

The thing is, this mistake also suggests that Taibbi never even bothered to read the EIP report on all of this, which lays out extremely clearly where the flagged tweets came from, noting that CIS (which was not an actual part of the EIP) sent in 16% of the total flagged tweets. It even pretty clearly describes what those tweets were:

Compared to the dataset as a whole, the CIS tickets were (1) more likely to raise reports about fake official election accounts (CIS raised half of the tickets on this topic), (2) more likely to create tickets about Washington, Connecticut, and Ohio, and (3) more likely to raise reports that were about how to vote and the ballot counting process—CIS raised 42% of the tickets that claimed there were issues about ballots being rejected. CIS also raised four of our nine tickets about phishing. The attacks CIS reported used a combination of mass texts, emails, and spoofed websites to try to obtain personal information about voters, including addresses and Social Security numbers. Three of the four impersonated election official accounts, including one fake Kentucky election website that promoted a narrative that votes had been lost by asking voters to share personal information and anecdotes about why their vote was not counted. Another ticket CIS reported included a phishing email impersonating the Election Assistance Commission (EAC) that was sent to Arizona voters with a link to a spoofed Arizona voting website. There, it asked voters for personal information including their name, birthdate, address, Social Security number, and driver’s license number.

In other words, CIS was raising pretty legitimate issues: people impersonating election officials, and phishing pages. This wasn’t about “misinformation.” These were seriously problematic tweets.

There is one part that perhaps deserves some more scrutiny regarding government organizations, as the report does say that a tiny percentage of reports came from the GEC, which is a part of the State Department, but the report suggests that this was probably less than 1% of the flags. 79% of the flags came from the four organizations in the partnership (not government). Another 16% came from CIS (contrary to Taibbi’s original claim, not government). That leaves 5%, which came from six different organizations, mostly non-profits. Though it does list the GEC as one of the six organizations. But the GEC is literally focused entirely on countering (not deleting) foreign state propaganda aimed at destabilizing the US. So, it’s not surprising that they might call out a few tweets to the EIP researchers.

Okay, okay, you say, but even so this is still problematic. It was still, as a Taibbi retweet suggests, these organizations who are somehow close to the government trying to silence narratives. And, again, that would be bad if true. But, that’s not what the information actually shows. First off, we already discussed how some of what they targeted was just out and out fraud.

But, more importantly, regarding the small number of tweets that EIP did report to Twitter… it never suggested what Twitter should do about them, and Twitter left the vast majority of them up. The entire purpose of the EIP program, as laid out in everything that the EIP team has made clear from before, during, and after the election, was just to be another set of eyes looking out for emerging trends and documenting how information flows. In the rare cases (again less than 1%) where things looked especially problematic (phishing attempts, impersonation) they might alert the company, but made no effort to tell Twitter how to handle them. And, as the report itself makes clear, Twitter left up the vast majority of them:

We find, overall, that platforms took action on 35% of URLs that we reported to them. 21% of URLs were labeled, 13% were removed, and 1% were soft blocked. No action was taken on 65%. TikTok had the highest action rate: actioning (in their case, their only action was removing) 64% of URLs that the EIP reported to their team.

They don’t break it out by platform, but across all platforms no action was taken on 65% of the reported content. And considering that TikTok seemed quite aggressive in removing 64% of flagged content, that means that all of the other platforms, including Twitter, took action on way less than 35% of the flagged content. And then, even within the “took action” category, the main action taken was labeling.

In other words, the top two main results of EIP flagging this content were:

  1. Nothing
  2. Adding more speech

The report also notes that the category of content that was most likely to get removed was the out and out fraud stuff: “phishing content and fake official accounts.” And given that TikTok appears to have accounted for a huge percentage of the “removals,” this means that Twitter removed significantly less than 13% of the tweets that EIP flagged for them. So not only is it not 22 million tweets, it’s that EIP flagged less than 3,000 tweets, and Twitter ignored most of them and removed probably less than 10% of them.
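
To be clear, that “probably less than 10%” is an inference, not a number in the report, since the report doesn’t break removals out by platform. Here’s a rough sketch of the reasoning, where TikTok’s share of all reported URLs is a hypothetical input (the report doesn’t give it):

```python
# Rough sketch: the report says 13% of all reported URLs were removed, and that
# TikTok removed 64% of the URLs flagged to it, but gives no per-platform URL
# counts. TikTok's share of reported URLs below is a hypothetical input.
def non_tiktok_removal_rate(tiktok_share, overall=0.13, tiktok=0.64):
    """Implied removal rate across all the other platforms combined."""
    return (overall - tiktok * tiktok_share) / (1 - tiktok_share)

for share in (0.05, 0.10, 0.15):
    print(f"TikTok share {share:.0%} -> others removed ~{non_tiktok_removal_rate(share):.1%}")
# 5% -> ~10.3%, 10% -> ~7.3%, 15% -> ~4.0%: the bigger TikTok's slice of the
# reported URLs, the lower the implied removal rate for Twitter and the rest.
```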

When looked at in this context, basically the entire narrative that Taibbi is pushing melts away.

The EIP is not part of the “industrial censorship complex.” It’s a mostly academic group that was tracking how information flows across social media, which is a legitimate area of study. During the election they did exactly that. In the tiny percentage of cases where they saw stuff they thought was pretty worrisome, they’d simply alert the platforms with no push for the platforms to take any action, and (indeed) in most cases the platforms took no action whatsoever. In a few cases, they added more speech.

In a tiny, tiny percentage of the already tiny percentage, when the situation was most extreme (phishing, fake official accounts) then the platforms (entirely on their own) decided to pull down that content. For good reason.

That’s not “censorship.” There’s no “complex.” Taibbi’s entire narrative turns to dust.

There’s a lot more that Taibbi gets wrong in all of this, but the points that Hasan got him to admit he was wrong about are literally core pieces in the underlying foundation of his entire argument.

At one point in the interview, Hasan also does a nice job pointing out that the posts that the Biden campaign (note: not the government) flagged to Twitter were of Hunter Biden’s dick pics, not anything political (we’ve discussed this point before) and Taibbi stammers some more and claims that “the ordinary person can’t just call up Twitter and have something taken off Twitter. If you put something nasty about me on Twitter, I can’t just call up Twitter…”

Except… that’s wrong. In multiple ways. First off, it’s not just “something nasty.” It’s literally non-consensual nude photos. Second, actually, given Taibbi’s close relationship with Twitter these days, uh, yeah, he almost certainly could just call them up. But, most importantly, the claim about “the ordinary” person not being able to have non-consensual nude images taken off the site? That’s wrong.

You can. There’s a form for it right here. And I’ll admit that I’m not sure how well staffed Twitter’s trust & safety team is to handle those reports today, but it definitely used to have a team of people who would review those reports and take down non-consensual nude photos, just as they did with the Hunter Biden images.

As Hasan notes, Taibbi left out this crucial context to make his claims seem way more damning than they were. Taibbi’s response is… bizarre. Hasan asks him if he knew that the URLs were nudes of Hunter Biden and Taibbi admits that “of course” he did, but when Hasan asks him why he didn’t tell people that, Taibbi says “because I didn’t need to!”

Except, yeah, you kinda do. It’s vital context. Without it, the original Twitter Files thread implied that the Biden campaign (again, not the government) was trying to suppress political content or embarrassing content that would harm the campaign. The context that it’s Hunter’s dick pics is totally relevant and essential to understanding the story.

And this is exactly what the rest of Hasan’s interview (and what I’ve described above) lays out in great detail: Taibbi isn’t just sloppy with facts, which is problematic enough. He leaves out the very important context that highlights how the big conspiracy he’s reporting is… not big, not a conspiracy, and not even remotely problematic.

He presents it as a massive censorship operation, targeting 22 million tweets, with takedown demands from government players, seeking to silence the American public. When you look through the details, correcting Taibbi’s many errors, and putting it all in context, you see that it was an academic operation to study information flows, one that sent the more blatant issues it came across to Twitter with no suggestion that the company do anything about them, and the vast majority of which Twitter ignored. In some minority of cases, Twitter applied its own speech to add more context to some of the tweets, and in a very small number of cases, where it found phishing attempts or people impersonating election officials (clear terms of service violations, and potentially actual crimes), it removed them.

There remains no there there. It’s less than a Potemkin village. There isn’t even a façade. This is the Emperor’s New Clothes for a modern era. Taibbi is pointing to a naked emperor and insisting that he’s clothed in all sorts of royal finery, whereas anyone who actually looks at the emperor sees he’s naked.

Filed Under: censorship, content moderation, eip, elections, matt taibbi, mehdi hasan, twitter files
Companies: twitter

Jim Jordan Weaponizes The Subcommittee On The Weaponization Of The Gov’t To Intimidate Researchers & Chill Speech

from the where-are-the-jordan-files? dept

As soon as it was announced, we warned that the new “Select Subcommittee on the Weaponization of the Federal Government” (which Kevin McCarthy agreed to create to convince some Republicans to support his speakership bid) was going to be not just a clown show, but one that would, itself, be weaponized to suppress speech (the very thing it claimed it would be “investigating”).

To date, the subcommittee, led by Jim Jordan, has lived down to its expectations, hosting nonsense hearings in which Republicans on the subcommittee accidentally destroy their own talking points and reveal themselves to be laughably clueless.

Anyway, it’s now gone up a notch beyond just performative beclowning to active maliciousness.

This week, Jordan sent information requests to Stanford University, the University of Washington, Clemson University, and the German Marshall Fund, demanding they reveal a bunch of internal information that serves no purpose other than to intimidate and suppress speech. You know, the very thing that Jim Jordan pretends his committee is “investigating.”

House Republicans have sent letters to at least three universities and a think tank requesting a broad range of documents related to what it says are the institutions’ contributions to the Biden administration’s “censorship regime.”

As we were just discussing, the subcommittee seems taken in by Matt Taibbi’s analysis of what he’s seen in the Twitter Files, despite nearly every one of his “reports” on them containing glaring, ridiculous factual errors that a high school newspaper reporter would likely catch. I mean, here he claims that the “Disinformation Governance Board” (an operation we mocked for the abject failure of the administration in how it rolled out an idea it never adequately explained) was somehow “replaced” by Stanford University’s Election Integrity Partnership.

Except the Disinformation Governance Board was announced, and then disbanded, in April and May of 2022. The Election Integrity Partnership was very, very publicly announced in July of 2020. Now, I might not be as decorated a journalist as Matt Taibbi, but I can count on my fingers to realize that 2022 comes after 2020.

Look, I know that time has no meaning since the pandemic began. And that journalists sometimes make mistakes (we all do!), but time is, you know, not that complicated. Unless you’re so bought into the story you want to tell you just misunderstand basically every last detail.

The problem, though, goes beyond just getting simple facts wrong (and the list of simple facts that Taibbi gets wrong is incredibly long). It’s that he gets the less simple, more nuanced facts even more wrong. Taibbi still can’t seem to wrap his head around the idea that this is how free speech and the marketplace of ideas actually works. Private companies get to decide the rules for how anyone gets to use their platform. Other people get to express their opinions on how those rules are written and enforced.

As we keep noting, the big revelation so far (if you read the actual documents in the Twitter Files, and not Taibbi’s bizarrely disconnected-from-what-he’s-commenting-on commentary) is that Twitter’s Trust and Safety team was… surprisingly (almost boringly) competent. I expected way more awful things to come out in the Twitter Files. I expected dirt. Awful dirt. Embarrassing dirt. Because every company of any significant size has that. They do stupid things for stupid fucking reasons, and bend over backwards to please certain constituents.

But… outside of a few tiny dumb decisions, Twitter’s team has seemed… remarkably competent. They put in place rules. If people bent the rules, they debated how to handle it. They sometimes made mistakes, but seemed to have careful, logical debates over how to handle those things. They did hear from outside parties, including academic researchers, NGOs, and government folks, but they seemed quite likely to mock/ignore those who were full of shit (in a manner that pretty much any internal group would do). It’s shockingly normal.

I’ve spent years talking to insiders working on trust and safety teams at big, medium, and small companies. And, nothing that’s come out is even remotely surprising, except maybe how utterly non-controversial Twitter’s handling of these things was. There’s literally less to comment on than I expected. Nearly every other company would have a lot more dirt.

Still, Jordan and friends seem driven by the same motivation as Taibbi, and they’re willing to do exactly the things that they claim they’re trying to stop: using the power of the government to send threatening intimidation letters that are clearly designed to chill academic inquiry into the flow of information across the internet.

By demanding that these academic institutions turn over all sorts of documents and private communications, Jordan must know that he’s effectively chilling the speech not just of those institutions, but of any academic institution or civil society organization that wants to study how false information (sometimes deliberately pushed by political allies of Jim Jordan) flows across the internet.

It’s almost (almost!) as if Jordan wants to use the power of his position as the head of this subcommittee… to create a stifling, speech-suppressing, chilling effect on academic researchers engaged in a well-established field of study.

Can’t wait to read Matt Taibbi’s report on this sort of chilling abuse by the federal government. It’ll be a real banger, I’m sure. I just hope he uses some of the new Substack revenue he’s made from an increase in subscribers to hire a fact checker who knows how linear time works.

Filed Under: academic research, chilling effects, congress, intimidation, jim jordan, matt taibbi, nonsense peddlers, research, twitter files, weaponization subcommittee
Companies: clemson university, german marshall fund, stanford, twitter, university of washington

Matt Taibbi Can’t Comprehend That There Are Reasons To Study Propaganda Information Flows, So He Insists It Must Be Nefarious

from the not-how-any-of-this-works dept

Over the last few months, Elon Musk’s handpicked journalists have continued revealing less and less with each new edition of the “Twitter Files,” to the point that even those of us who write about this area have mostly been skimming each new release, confirming that yet again these reporters have no idea what they’re talking about, are cherry picking misleading examples, and then misrepresenting basically everything.

It’s difficult to decide if it’s even worth giving these releases any credibility at all by going through the actual work of debunking them, but sometimes a few out of context snippets from the Twitter Files, mostly from Matt Taibbi, get picked up by others, and it becomes necessary to dive back into the muck to clean up the mess that Matt has made yet again.

Unfortunately, this seems like one of those times.

Over the last few “Twitter Files” releases, Taibbi has been pushing hard on the false claim that, okay, maybe he can’t find any actual evidence that the government tried to force Twitter to remove content, but he can find… information about how certain university programs and non-governmental organizations received government grants… and they set up “censorship programs.”

It’s “censorship by proxy!” Or so the claim goes.

Except, it’s not even remotely accurate. The issue, again, goes back to some pretty fundamental concepts that seem to escape Taibbi entirely. Let’s go through them.

Point number one: Studying misinformation and disinformation is a worthwhile field of study. That’s not saying that we should silence such things, or that we need an “arbiter of truth.” But the simple fact remains that some have sought to use misinformation and disinformation to try to influence people, and studying and understanding how and why that happens is valuable.

Indeed, I personally tend to lean towards the view that most discussions regarding mis- and disinformation are overly exaggerated moral panics. I think the terms are overused, and often misused (frequently just to attack factual news that people dislike). But, in part, that’s why it’s important to study this stuff. And part of studying it is to actually understand how such information is spread, which includes across social media.

Point number two: It’s not just an academic field of interest. For fairly obvious reasons, companies that are used to spread such information have a vested interest in understanding this stuff as well. To date, though, it’s mostly been the social media companies that have shown interest in understanding these things, rather than, say, cable news, even as some of the evidence suggests cable news is a bigger vector for spreading such things than social media.

Still, the companies have an interest in understanding this stuff, and sometimes that includes these organizations flagging content they find and sharing it with the companies for the sole purpose of letting those companies evaluate if the content violates existing policies. And, once again, the companies regularly did nothing after noting that the flagged accounts didn’t violate any policies.

Point number three: Governments also have an interest in understanding how such information flows, in part to help combat foreign influence campaigns designed to cause strife and even violence.

Note what none of these three points says: that censorship is necessary or even desired. But it’s not surprising that the US government has funded some programs to better understand these things, and that includes bringing in a variety of experts from academia, civil society, and NGOs. It’s also no surprise that some of the social media companies are interested in what these research efforts find, because it might be useful.

And, really, that’s basically everything that Taibbi has found out in his research. There are academic centers and NGOs that have received some grants from various government agencies to study mis- and disinformation flows. Also, that sometimes Twitter communicated with those organizations. Notably, many of his findings actually show that Twitter employees absolutely disagreed with the conclusions of those research efforts. Indeed, some of the revealed emails show Twitter employees being somewhat dismissive of the quality of the research.

What none of this shows is a grand censorship operation.

However, that’s what Taibbi and various gullible culture warriors in Congress are arguing, because why not?

So, some of the organizations in question have decided they finally need to do some debunking on their own. I especially appreciate the University of Washington (UW), which did a step-by-step debunker that, in any reasonable world, would completely embarrass Matt Taibbi for the very obvious fundamental mistakes he made:

False impression: The EIP orchestrated a massive “censorship” effort. In a recent tweet thread, Matt Taibbi, one of the authors of the “Twitter Files” claimed: “According to the EIP’s own data, it succeeded in getting nearly 22 million tweets labeled in the runup to the 2020 vote.” That’s a lot of labeled tweets! It’s also not even remotely true. Taibbi seems to be conflating our team’s post-hoc research mapping tweets to misleading claims about election processes and procedures with the EIP’s real-time efforts to alert platforms to misleading posts that violated their policies. The EIP’s research team consisted mainly of non-expert students conducting manual work without the assistance of advanced AI technology. The actual scale of the EIP’s real-time efforts to alert platforms was about 0.01% of the alleged size.

Now, that’s embarrassing.
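Just to spell out the scale of that error, here’s the back-of-the-envelope math, using only the two numbers in UW’s own statement (a rough check, nothing more):

```python
# Rough check using only the figures in UW's statement quoted above.
alleged = 22_000_000       # tweets Taibbi claimed the EIP "succeeded in getting labeled"
actual_fraction = 0.0001   # UW: real-time flagging was "about 0.01%" of that

print(f"~{alleged * actual_fraction:,.0f} posts flagged in real time")
# => ~2,200 posts: off from Taibbi's figure by four orders of magnitude
```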

There’s a lot more that Taibbi misunderstands as well. For example, the freak-out over CISA:

False impression: The EIP operated as a government cut-out, funneling censorship requests from federal agencies to platforms. This impression is built around falsely framing the following facts: the founders of the EIP consulted with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) office prior to our launch, CISA was a “partner” of the EIP, and the EIP alerted social media platforms to content EIP researchers analyzed and found to be in violation of the platforms’ stated policies. These are all true claims — and in fact, we reported them ourselves in the EIP’s March 2021 final report. But the false impression relies on the omission of other key facts. CISA did not found, fund, or otherwise control the EIP. CISA did not send content to the EIP to analyze, and the EIP did not flag content to social media platforms on behalf of CISA.

There are multiple other false claims that UW debunks as well, including that it was a partisan effort, that it happened in secret, or that it did anything related to content moderation. None of those are true.

The Stanford Internet Observatory (SIO), which works with UW on some of these programs, ended up putting out a similar debunker statement as well. For whatever reason, the SIO seems to play a central role in Taibbi’s fever dream of “government-driven censorship.” He focuses on projects like the Election Integrity Partnership or the Virality Project, both of which were focused on looking at the flows of viral misinformation.

In Taibbi’s world, these were really government censorship programs. Except, as SIO points out, they weren’t funded by the government:

Does the SIO or EIP receive funding from the federal government?

As part of Stanford University, the SIO receives gift and grant funding to support its work. In 2021, the SIO received a five-year grant from the National Science Foundation, an independent government agency, awarding a total of $748,437 over a five-year period to support research into the spread of misinformation on the internet during real-time events. SIO applied for and received the grant after the 2020 election. None of the NSF funds, or any other government funding, was used to study the 2020 election or to support the Virality Project. The NSF is the SIO’s sole source of government funding.

They also highlight how the Virality Project’s work on vaccine disinformation was never about “censorship.”

Did the SIO’s Virality Project censor social media content regarding coronavirus vaccine side-effects?

No. The VP did not censor or ask social media platforms to remove any social media content regarding coronavirus vaccine side effects. Theories stating otherwise are inaccurate and based on distortions of email exchanges in the Twitter Files. The Project’s engagement with government agencies at the local, state, or federal level consisted of factual briefings about commentary about the vaccine circulating on social media.

The VP’s work centered on identification and analysis of social media commentary relating to the COVID-19 vaccine, including emerging rumors about the vaccine where the truth of the issue discussed could not yet be determined. The VP provided public information about observed social media trends that could be used by social media platforms and public health communicators to inform their responses and further public dialogue. Rather than attempting to censor speech, the VP’s goal was to share its analysis of social media trends so that social media platforms and public health officials were prepared to respond to widely shared narratives. In its work, the Project identified several categories of allegations on Twitter relating to coronavirus vaccines, and asked platforms, including Twitter, which categories were of interest to them. Decisions to remove or flag tweets were made by Twitter.

In other words, as was obvious to anyone who actually had followed any of this while these projects were up and running, these are not examples of “censorship” regimes. Nor are they efforts to silence anyone. They’re research programs on information flows. That’s also clear if you don’t read Taibbi’s bizarrely disjointed commentary and just look at the actual things he presents.

In a normal world, the level of just outright nonsense and mistakes in Taibbi’s work would render his credibility completely shot going forward. Instead, he’s become a hero to a certain brand of clueless troll. It’s the kind of transformation that would be interesting to study and understand, but I assume Taibbi would just build a grand conspiracy theory about how doing that was just an attempt by the illuminati to silence him.

Filed Under: academic research, censorship, cisa, disinformation, information flows, matt taibbi, misinformation, propaganda, twitter files
Companies: stanford, twitter, university of washington

Details Of FTC’s Investigation Into Twitter And Elon Musk Emerge… And Of Course People Are Turning It Into A Nonsense Culture War

from the closing-in dept

Back in the fall we were among the first to highlight that Elon Musk might face a pretty big FTC problem. Twitter, of course, is under a 20-year FTC consent decree over some of its privacy failings. And, less than a year ago (while still under old management), Twitter was hit with a $150 million fine and a revised consent decree. Both are specifically about how it handles users’ private data. Musk has made it abundantly clear that he doesn’t care about the FTC, but that seems like a risky move. While I think this FTC has made some serious strategic mistakes in the antitrust world, the FTC tends not to fuck around with privacy consent decrees.

However, now the Wall Street Journal has a big article with some details about the FTC’s ongoing investigation into Elon’s Twitter (based on a now-released report from the Republican-led House Judiciary Committee, which frames the whole thing as a political battle by the FTC to attack a company Democrats don’t like — despite the evidence included not really showing anything to support that narrative).

The Federal Trade Commission has demanded Twitter Inc. turn over internal communications related to owner Elon Musk, as well as detailed information about layoffs—citing concerns that staff reductions could compromise the company’s ability to protect users, documents viewed by the Wall Street Journal show.

In 12 letters sent to Twitter and its lawyers since Mr. Musk’s Oct. 27 takeover, the FTC also asked the company to “identify all journalists” granted access to company records and to provide information about the launch of the revamped Twitter Blue subscription service, the documents show.

The FTC is also seeking to depose Mr. Musk in connection with the probe.

I will say that some of the demands from the FTC appear potentially overbroad, which should be a concern:

The FTC also asked for all internal Twitter communications “related to Elon Musk,” or sent “at the direction of, or received by” Mr. Musk.

I mean… that seems to be asking for way more than is reasonable. I’ve heard some discussion that these requests are an attempt to figure out who Musk is delegating to handle privacy issues at the company (as required in the consent decree), but it seems that such a request can (and should) be more tailored to that point. Otherwise, it appears (and will be spun, as the House Judiciary Committee is doing…) as an overly broad fishing expedition.

Either way, as we predicted in our earlier posts, the FTC seems quite concerned about whether or not Twitter is conducting required privacy reviews before releasing new features.

The FTC also pressed Twitter on whether it was conducting in-depth privacy reviews before implementing product changes such as the new version of Twitter Blue, as required under the 2022 order. The agency sought detailed records on how product changes were communicated to Twitter users.

It asked Twitter to explain how it handled a recently reported leak of Twitter user-profile data, to account for changes made to the way users authenticate their accounts, and to describe how it scrubbed sensitive data from sold office equipment.

Another area that is bound to be controversial (and Matt Taibbi is, in his usual fashion, misleadingly misrepresenting things and whining about it) is that the FTC asked to find out which outside “journalists” had been granted access to Twitter systems:

On Dec. 13, the FTC asked about Twitter’s decision to give journalists access to internal company communications, a project Mr. Musk has dubbed the “Twitter Files” and that he says sheds light on controversial decisions by previous management.

The agency asked Twitter to describe the “nature of access granted each person” and how allowing that access “is consistent with your privacy and information security obligations under the Order.” It asked if Twitter conducted background checks on the journalists, and whether the journalists could access Twitter users’ personal messages.

Given the context, this request actually seems reasonable. The consent decree is pretty explicit about how Twitter needs to place controls on access to private information, and the possibility that Musk gave outside journalists access to private info was a concern that many people raised. Since then, Twitter folks have claimed that it never gave outside journalists full access to internal private information, but rather tasked employees with sharing requested files (this might still raise some questions about private data, but it’s not as freewheeling as some initially worried). If Twitter really did not provide outside journalists access to internal private data, then it can respond to that request by showing what kind of access it did provide.

But, Taibbi is living down to his reputation and pretending it’s something different:

Matt Taibbi tweet to WSJ article saying: "which journalists a company or its executives talks to is not remotely the government's business. This is an insane overreach."

At best, Taibbi seems to be conflating two separate requests here. The request for all of Musk’s communications definitely does seem too broad, and it seems like Twitter’s lawyers (assuming any remain, or outside counsel that is still having its bills paid) could easily respond and push back on the extensiveness of the request to narrow it down to communications relevant to the consent decree. That’s… how this process normally works.

As for the claim that which journalists an executive talks to is not the government’s business, that is correct, but it lacks context. It becomes the government’s business if part of the conversation with the journalist involves violating the law. And… it’s that point that the FTC is trying to determine. If they didn’t violate the consent decree, then, problem solved.

Thus, the request regarding how much access Musk gave to journalists seems like a legitimate question to determine if the access violated the consent decree. One hopes that Twitter was careful enough in how this was set up that the answer is “no, it did not violate the consent decree, and all access was limited and carefully monitored to protect user data,” but that’s kinda the reason that the investigation is happening in the first place.

Indeed, in the House Judiciary Committee report, in which they try to turn this into a much bigger deal, they do reveal a small snippet of the FTC’s requests to Twitter on this topic that suggests that Taibbi is (yet again) totally misrepresenting things (it’s crazy how often that’s the case with that guy), and that their concern is literally limited to the single point implicated by the consent decree: did Twitter give outside journalists access to internal Twitter systems that might have revealed private data?

I would be concerned if the request actually were (as Taibbi falsely implies) for Musk to reveal every journalist he’s talking to. But the request (as revealed by the Committee) appears to only be about “journalists and other members of the media to whom” Elon has “granted any type of access to the Companies internal communications.” And, given that the entire consent decree is about restricting access to internal systems and others’ communications, that seems directly on point and not, as the Judiciary Committee and Taibbi complain, an attack on the 1st Amendment.

It remains entirely possible that the FTC finds nothing at all here. Or that, if it tries to file claims against Twitter, Twitter wins. Unlike some people, I am not rushing to assume that the FTC is going to bring Twitter to account. But there are some pretty serious questions about whether or not Musk is abiding by the consent decree, and violating a consent decree is just pleading for the FTC to make an expensive example of you.

Filed Under: consent decree, elon musk, ftc, investigation, matt taibbi, privacy, subpoena
Companies: twitter

No, The FBI Is NOT ‘Paying Twitter To Censor’

from the not-how-any-of-this-works dept

Sigh.

Look. I want to stop writing about Twitter. I want to write about lots of other stuff. I have a huge list of other stories that I’m trying to get through, but then Elon Musk does something dumb again, and people run wild with it, and (for reasons that perplex me) much of the media either run with what Musk said, or just ignore it completely. But Musk is either deliberately lying about stuff or too ignorant to understand what he’s talking about, and I don’t know which is worse, though neither is a good look.

Today, his argument is that “the FBI has been paying Twitter to censor,” and he suggests this is a big scandal.

This would be a big scandal if true. But, it’s not. It’s just flat out wrong.

As with pretty much every one of these misleading statements regarding the very Twitter that he runs (where people, or I guess maybe just former people, could explain to him why he’s wrong), it takes way more time and detail to debunk his claims than it takes him to push out misleading lines that will now be taken as fact.

But, since at least some of us still believe in facts and truth, let’s walk through this.

First up, we already did a huge, long debunker on the idea that the FBI (or any government entity) was in any way involved in the Twitter decision to block links to the Hunter Biden laptop story. Most of the people who believed that have either ignored that there was no evidence to support it, or have simply moved on to this new lie, suggesting that “the FBI” was “sending lists” to Twitter of people to censor.

The problem is that, once again, that’s not what “the Twitter Files” show, even as the reporters working on it — Matt Taibbi, Bari Weiss, and Michael Shellenberger — either don’t understand what they’re looking at or are deliberately misrepresenting it. I’m no fan of the FBI, and have spent much of the two and a half decades here at Techdirt criticizing it. But… there’s literally no scandal here (or if there is one, it’s something entirely different, which we’ll get to at the end of the article).

What the files show is that the FBI would occasionally (not very often, frankly) use reporting tools to alert Twitter to accounts that potentially violated Twitter’s rules. When the FBI did so, it was pretty clear that it was just flagging these accounts for Twitter to review, and had no expectation that the company would or would not do anything about it. In fact, they are explicit in their email that the accounts “may potentially constitute violations of Twitter’s Terms of Service” and that Twitter can take “any action or inaction deemed appropriate within Twitter policy.”

Email from FBI to Twitter saying: "FBI San Francisco is notifying you of the below accounts which may potentially constitute violations of Twitter's Terms of Service for any action or inaction deemed appropriate within Twitter policy."

That is not a demand. There is no coercion associated with the email, and it certainly appears that Twitter frequently rejected these flags from the US government. Twitter’s most recent transparency report lists all of the “legal demands” the company received for content removals in the US, and its compliance rate is 40.6%. In other words, it complied with well under half of the government’s content removal demands.

Indeed, even as presented (repeatedly) by Taibbi and Shellenberger as if it’s proof that Twitter closely cooperated with the FBI, over and over again, if you read the actual screenshots, they show Twitter (rightly!) pushing back on the FBI. Here, for example, Michael Shellenberger shows Twitter’s Yoel Roth rejecting a request from the FBI to share information, saying they need to take the proper legal steps to request that info (depending on the situation, likely getting a judge to approve the request).

Now, we could have an interesting discussion (and I actually do think it’s an interesting discussion) about whether or not the government should be flagging accounts for review as terms of service violations. Right now, anyone can do this. You or I can go on Twitter and, if we see something that we think violates a content policy, we can flag it for Twitter to review. Twitter will then review the content and determine whether or not it’s violative, and then decide what the remedy should be if it is.

That opens up an interesting question in general: should government officials and entities also be allowed to do the same type of flagging? Considering that anyone else can do it, and the company still reviews against its own terms of service and (importantly) feels free to reject those requests when they do not appear to violate the terms, I’m hard pressed to see the problem here on its own.
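To make the structural point concrete, here is a minimal sketch of the review flow described above. Everything in it is hypothetical (invented names, not Twitter’s actual systems); the property worth noticing is that the outcome turns on the platform’s own policy evaluation, never on who filed the flag:

```python
from dataclasses import dataclass

# Hypothetical sketch of a flag-review flow. The key structural property:
# the decision depends only on the content and the platform's own policy,
# not on whether the reporter is a user, an NGO, or a government agency.

@dataclass
class Flag:
    content_id: str
    reporter: str         # "user", "researcher", "government", ...
    claimed_policy: str   # the policy the reporter thinks was violated

def violates_policy(content_id: str, policy: str) -> bool:
    """Placeholder for the platform's own review against its published rules."""
    return False  # stub; a real system would evaluate the content here

def review(flag: Flag) -> str:
    # A government-sourced flag gets the same policy test as anyone else's,
    # and the platform remains free to reject it outright.
    if violates_policy(flag.content_id, flag.claimed_policy):
        return "apply remedy per policy (label, limit, or remove)"
    return "no action"

print(review(Flag("tweet/123", "government", "civic-integrity")))  # => no action
```

If there were coercion, it would show up as the reporter’s identity changing the outcome, which is exactly what the released files don’t show.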

If there were evidence that there was some pressure, coercion, or compulsion for the company to comply with the government requests, that would be a different story. But, to date, there remains none (at least in the US).

As for the accounts that were flagged, from everything revealed to date in the Twitter Files, it mostly appears to be accounts that were telling a certain segment of the population (sometimes Republicans, sometimes Democrats) to vote on Wednesday, the day after Election Day, rather than Tuesday. Twitter had announced long before the election that any such tweets would violate policy. It does appear that a number of those tweets were meant as jokes, but as is the nature of content moderation, it’s difficult to tell what’s a joke from what’s not a joke, and quite frequently malicious actors will try to hide behind “but I was only joking…” when fighting back against an enforcement action. So, under that context, a flat “do not suggest people vote the day after Election Day” rule seems reasonable.

Given all that, to date, the only “evidence” that people can look at regarding “the FBI sent a list to censor” is that the FBI flagged (just as you or I could flag) accounts that were pretty clearly violating Twitter policies in a way that could undermine the US election, and left it entirely up to Twitter to decide what to do about it — and Twitter chose to listen to some requests and ignore others.

That doesn’t seem so bad in context, does it? It actually kinda seems like the sort of thing people would want the FBI to do to support election integrity.

But the payments!

So, there’s no evidence of censorship. But what about these payments? Well, that’s Musk’s hand-chosen reporters, Musk himself, and his fans totally misunderstanding some very basic stuff that any serious reporter with knowledge of the law would not mess up. Here’s Shellenberger’s tweet from yesterday that has spun up this new false argument:

Tweet from Shellenberger saying: "The FBI's influence campaign may have been helped by the fact that it was paying Twitter millions of dollars for its staff time. 'I am happy to report we have collected $3,415,323 since October 2019!' reports an associate of Jim Baker in early 2021."

That’s Shellenberger saying:

The FBI’s influence campaign may have been helped by the fact that it was paying Twitter millions of dollars for its staff time.

“I am happy to report we have collected $3,415,323 since October 2019!” reports an associate of Jim Baker in early 2021.

But this is a misreading/misunderstanding of how things work. This had nothing to do with any “influence campaign.” The law already says that if the FBI is legally requesting information for an investigation under a number of different legal authorities, the companies receiving those requests can be reimbursed for fulfilling them.

(a) Payment.—

Except as otherwise provided in subsection (c), a governmental entity obtaining the contents of communications, records, or other information under section 2702, 2703, or 2704 of this title shall pay to the person or entity assembling or providing such information a fee for reimbursement for such costs as are reasonably necessary and which have been directly incurred in searching for, assembling, reproducing, or otherwise providing such information. Such reimbursable costs shall include any costs due to necessary disruption of normal operations of any electronic communication service or remote computing service in which such information may be stored.

But note what this is limited to. These are investigatory requests for information, or so-called 2703(d) requests, which require a court order.

Now, there are reasons to be concerned about the 2703(d) program. I mean, going back to 2013, when it was revealed that the 2703(d) program was abused as part of an interpretation of the Patriot Act to allow the DOJ/NSA to collect data secretly from companies, we’ve highlighted the many problems with the program.

So, by the way, did old Twitter. More than a decade ago, Twitter went to court to challenge the claim that a Twitter user had no standing to challenge a 2703(d) order. Unfortunately, Twitter lost and the feds are still allowed to use these orders (which, again, require a judge to sign off on them).

I do think it remains a scandal the way that 2703(d) orders work, and the inability of users to push back on them. But that is the law. And it has literally nothing whatsoever to do with “censorship” requests. It is entirely about investigations by the FBI into Twitter users based on evidence of a crime. If you want, you can read the DOJ’s own guidelines regarding what they can request under 2703(d).

DOJ's "quick reference guide" to what can be obtained under a (d) order.

Looking at that, you can see that if they can get a 2703(d) order (again, signed by a judge) they can seek to obtain subscriber info, transaction records, retrieved communications, and unretrieved communications stored for more than 180 days (in the past, we’ve long complained about the whole 180 days thing, but that’s another issue).

You know what’s not on that list? “Censoring people.” It’s just not a thing. The reimbursement that is talked about in that email is about complying with these information production orders that have been reviewed and signed by a judge.

It’s got nothing at all to do with “censorship demands.” And yet Musk and friends are going hog wild pushing this utter nonsense.

Meanwhile, Twitter’s own transparency report already reveals data on these orders as part of its “data information requests” list, where it shows that in the latest period reported (second half of 2021) it received 2.3k requests specifying 11.3k accounts, and complied with 69% of the requests.

This was actually down a bit from 2020. But since the period the email covers runs from 2019 through 2020, you can see that there were a fair number of information requests from the FBI.

Given all that, it looks like there were probably in the range of 8,000 requests for information, covering who knows how many accounts, that Twitter had to comply with. And so the $3 million reimbursement seems pretty reasonable, assuming you would need a decent-sized, skilled team to review the orders, collect the information, and respond appropriately.
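A quick sanity check on that, using the $3,415,323 figure from the quoted email and the ~8,000-request ballpark above (both are estimates, so treat the output as an order of magnitude):

```python
# Sanity check on the reimbursement math. The dollar figure is from the email
# Shellenberger posted; the request count is the rough estimate above.
total_reimbursed = 3_415_323  # dollars, October 2019 to early 2021
estimated_requests = 8_000    # legal information requests over that window

print(f"~${total_reimbursed / estimated_requests:,.0f} per request")
# => ~$427 per request: plausible staff time to review a court order, pull
#    the records, and respond; nothing resembling a payment "to censor"
```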

If there’s any scandal at all, it remains the lack of more detailed transparency about the (d) orders, or the ability of companies like Twitter to have standing to challenge them on behalf of users. Also, there are reasonable arguments for why judges are too quick to approve (d) orders as valid under the 4th Amendment.

But literally none of that is “the FBI paid Twitter to censor people.”

And yet, here’s Elon.

Filed Under: censorship, content moderation, doj, elon musk, fbi, investigations, matt taibbi, michael shellenberger, terms of service
Companies: twitter

Hello! You’ve Been Referred Here Because You’re Wrong About Twitter And Hunter Biden’s Laptop

from the just-helping-you-out-here dept

Hello! Someone has referred you to this post because you’ve said something quite wrong about Twitter and how it handled something to do with Hunter Biden’s laptop. If you’re new here, you may not know that I’ve written a similar post for people who are wrong about Section 230. If you’re being wrong about Twitter and the Hunter Biden laptop, there’s a decent chance that you’re also wrong about Section 230, so you might want to read that too! Also, these posts are using a format blatantly swiped from lawyer Ken “Popehat” White, who wrote one about the 1st Amendment. Honestly, you should probably read that one too, because there’s some overlap.

Now, to be clear, I’ve explained many times before, in other posts, why people who freaked out about how Twitter handled the Hunter Biden laptop story are getting confused, but it’s usually been a bit buried. I had already started a version of this post last week, since people keep bringing up Twitter and the laptop, but then on Friday, Elon (sorta) helped me out by giving a bunch of documents to reporter Matt Taibbi.

So, let’s review some basics before we respond to the various wrong statements people have been making. Since 2016, there have been concerns raised about how foreign nation states might seek to interfere with elections, often via the release of hacked or faked materials. It’s no secret that websites have been warned to be on the lookout for such content in the leadup to the election — not with demands to suppress it, but just to consider how to handle it.

Partly in response to that, social media companies put in place various policies on how they were going to handle such material. Facebook set up a policy to limit certain content from trending in its algorithm until it had been reviewed by fact-checkers. Twitter put in place a “hacked materials” policy, which forbade the sharing of leaked or hacked materials. There were — clearly! — some potential issues with that policy. In fact, in September of 2020 (a month before the NY Post story) we highlighted the problems of this very policy, including somewhat presciently noting the fear that it would be used to block the sharing of content in the public interest and could be used against journalistic organizations (indeed, that case study highlights how the policy was enforced to ban DDOSecrets for leaking police chat logs).

The morning the NY Post story came out, there was a lot of concern about the validity of the story. Other news organizations, including Fox News, had refused to touch it. NY Post reporters refused to put their names on it. There were other oddities, including the provenance of the hard drive data, which apparently had been in Rudy Giuliani’s hands for months. There were concerns about how the data was presented (specifically, how the emails were converted into images and PDFs, losing their header info and metadata).

The fact that, much later on, many elements of the laptop’s history and provenance were confirmed as legitimate (with some open questions) is important, but it does not change the simple fact that the morning the NY Post story came out, the situation was extremely unclear (in either direction) except to extreme partisans in both camps.

Based on that, both Twitter and Facebook reacted somewhat quickly. Twitter implemented its hacked materials policy in exactly the manner that we had warned might happen a month earlier: blocking the sharing of the NY Post link. Facebook implemented other protocols, “reducing its distribution” until it had gone through a fact check. Facebook didn’t ban the sharing of the link (like Twitter did), but rather limited the ability for it to “trend” and get recommended by the algorithm until fact checkers had reviewed it.

To be clear, the decision by Twitter to do this was, in our estimation, pretty stupid. It was exactly what we had warned about just a month earlier regarding this exact policy. But this is the nature of trust & safety. People need to make very rapid decisions with very incomplete information. That’s why I’ve argued ever since then that while the policy was stupid, it was no giant scandal that it happened, and given everything, it was not a stretch to understand how it played out.

Also, importantly, the very next day Twitter realized it fucked up, admitted so publicly, and changed the hacked materials policy, saying that it would no longer block links to news sources based on this policy (though it might add a label to such stories). The next month, Jack Dorsey, in testifying before Congress, was pretty transparent about how all of this went down.

All of this seemed pretty typical for any kind of trust & safety operation. As I’ve explained for years, mistakes in content moderation (especially at scale) are inevitable. And, often, the biggest reason for those mistakes is the lack of context. That was certainly true here.

Yet, for some reason, the story has persisted for years now that Twitter did something nefarious, engaging in election interference that was possibly at the behest of “the deep state” or the Biden campaign. For years, as I’ve reported on this, I’ve noted that there was literally zero evidence to back any of that up. So, my ears certainly perked up last Friday when Elon Musk said that he was about to reveal “what really happened with the Hunter Biden story suppression.”

Certainly, if there was evidence of something nefarious behind closed doors, that would be important and worth covering. And if it turned out that every single one of the dozens of Twitter employees I’ve talked to over the past few years had lied to me about what happened, well, that would also be useful for me to know.

And then Taibbi revealed… basically nothing of interest. He revealed a few internal communications that… simply confirmed everything that was already public in statements made by Twitter, in Jack Dorsey’s Congressional testimony, and in declarations made as part of a Federal Election Commission investigation into Twitter’s actions. There were general concerns about foreign state influence campaigns, including “hack and leak” operations in the lead-up to the election, and there were questions about the provenance of this particular data, so Twitter made a quick (cautious) judgment call and implemented a (bad) policy. Then it admitted it fucked up and changed things a day later. That’s… basically it.

And, yet, the story has persisted over and over and over again. Incredibly, even after the details of Taibbi’s Twitter thread revealed nothing new, many people started pretending that it had revealed something major, with even Elon Musk insisting that this was proof of some massive 1st Amendment violation:

Elon Musk tweet stating: "If this isn’t a violation of the Constitution’s First Amendment, what is?"

Now, apparently more files are going to be published, so something may change, but so far it’s been a whole lot of utter nonsense. But when I say that both here on Techdirt and on Twitter, I keep seeing a few very, very wrong arguments being made. So, let’s get to the debunking:

1. If you said Twitter’s decision to block links to the NY Post was election interference…

You’re wrong. Very much so. First off, there was, in fact, a complaint to the FEC about this very point, and the FEC investigated and found no election interference at all. It didn’t even find evidence of it being an “in-kind” contribution. It found no evidence that Twitter engaged in politically motivated decision making, but rather handled this in a non-partisan manner consistent with its business objectives:

Twitter acknowledges that, following the October 2020 publication of the New York Post articles at issue, Twitter blocked users from sharing links to the articles. But Twitter states that this was because its Site Integrity Team assessed that the New York Post articles likely contained hacked and personal information, the sharing of which violated both Twitter’s Distribution of Hacked Materials and Private Information Policies. Twitter points out that although sharing links to the articles was blocked, users were still permitted to otherwise discuss the content of the New York Post articles because doing so did not directly involve spreading any hacked or personal information. Based on the information available to Twitter at the time, these actions appear to reflect Twitter’s stated commercial purpose of removing misinformation and other abusive content from its platform, not a purpose of influencing an election

All of this is actually confirmed by the Twitter Files from Taibbi/Musk, even as both seem to pretend otherwise. Taibbi revealed some internal emails in which various employees (going increasingly up the chain) discussed how to handle the story. Not once does anyone in what Taibbi revealed suggest anything even remotely politically motivated. There was legitimate concern internally about whether or not it was correct to block the NY Post story, which makes sense, because they were (correctly) concerned about making a decision that went too far. I mean, honestly, the discussion is not only without political motive, but shows that the trust & safety apparatus at Twitter was concerned with getting this correct, including employees questioning whether or not these were legitimately “hacked materials” and questioning whether other news stories on the hard drive should get the same treatment.

Twitter employees discuss whether it's appropriate to block the NY Post story based on the Hacked Materials policy.

There are more discussions of this nature, with people questioning whether or not the material was really “hacked” and initially deciding on taking the more cautious approach until they knew more. Twitter’s Yoel Roth notes that “this is an emerging situation where the facts remain unclear. Given the SEVERE risks here and lessons of 2016, we’re erring on the side of including a warning and preventing this content from being amplified.”

More debate among Twitter employees about how to handle the content, but saying that "this is an emerging situation where the facts remain unclear" and "given the severe risks here and the lessons of 2016, we're erring on the side of including a warning and preventing this content from being amplified."

Again, exactly as has been noted, given the lack of clarity Twitter reasonably decided to pump the brakes until more was known. There was some useful back-and-forth among employees — the kind that happens in any company regarding major trust & safety decisions, in which Twitter’s then VP of comms questioned whether or not this was the right decision. This shows a productive discussion — not anything along the lines of pushing for any sort of politically motivated outcome.

A message from Twitter's comms VP Brandon Borrman stating "can we truthfully claim that this is a part of the policy."

And then deputy General Counsel Jim Baker (more on him later, trust me…) chimes in to again highlight exactly what everyone has been saying: that this is a rapidly evolving situation, and it makes sense to be cautious until more is known. Baker’s message is important:

I support the conclusion that we need more facts to assess whether the materials were hacked. At this stage, however, it is reasonable for us to assume that they may have been and that caution is warranted. There are some facts that indicate that the materials may have been hacked, while there are others indicating that the computer was either abandoned and/or the owner consented to allow the repair shop to access it for at least some purposes. We simply need more information.

Again, all of this is… exactly what everyone has said ever since the day after it happened. This was an emerging story. The provenance was unclear. There were some sketchy things about it, and so Twitter enacted the policy because they just weren’t sure and didn’t have enough info yet. It turned out to be a bad call, but in content moderation, you’re going to make some bad calls.

What is missing entirely is any evidence that politics entered this discussion at all. Not even once.

2. But Twitter’s decision to “suppress” the story was a big deal and may have swung the election to Biden!

I’m sorry, but there remains no evidence to support that silly claim either. First off, Twitter’s decision actually seemed to get the story a hell of a lot more attention. Again, as noted above, Twitter did nothing to stop discussion of the story. It only blocked links to one story in the NY Post, and only for that one day. And the very fact that Twitter did this (and Facebook took other action) caused a bit of a Streisand Effect (hey!), which got the underlying story a lot more attention.

The reality, though, is that the story just wasn’t that big of a deal for voters. Hunter Biden wasn’t the candidate. His father was. Everyone already pretty much knew that Hunter is a bit of a fuckup and clearly personally profiting off of the situation, but there was no actual big story in the revelations (I mean, yeah, there are still some people who insist there are, but they’re the same people who misunderstood the things we’re debunking here today). And, if we’re going to talk about kids of Presidents profiting off of their last name, well, there’s a pretty long list to go down….

But don’t take my word for it; let’s look at the evidence. As reporter Philip Bump recently noted, there’s actual evidence in Google search trends that Twitter and Facebook’s decisions really did generate a lot more interest in the story. It was well after both companies took action that searches on Google for Hunter Biden shot upward:

Chart showing Google Trends for searches on "Hunter Biden" the day the NY Post story dropped, showing a big increase hours after FB and Twitter limited the story.
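If you want to eyeball the same spike yourself, the underlying data is public. Here’s a minimal sketch using the unofficial pytrends library (my choice for illustration; it’s not what Bump used, and the library and its parameters are the only assumptions here):

```python
# Pull Google Trends interest for "Hunter Biden" around Oct 14, 2020, the day
# the NY Post story ran and Twitter/Facebook limited its spread.
# Requires: pip install pytrends (an unofficial Google Trends client)
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["Hunter Biden"], timeframe="2020-10-12 2020-10-18", geo="US")
interest = pytrends.interest_over_time()
# Values are indexed 0-100; the peak lands after the platforms acted.
print(interest["Hunter Biden"])
```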

Also, soon after, Twitter reversed its policy, and there was widespread discussion of the laptop in the next three weeks leading up to the election. The brief blip in time in which Twitter and Facebook limited the story seemed to have only fueled much more interest in it, rather than “suppressing” it.

Indeed, another document in the “Twitter Files” highlights how a Democratic member of the House, Ro Khanna, actually reached out to Twitter to point this out and to question Twitter’s decision (if this was really a big Democratic conspiracy, you’d think he’d be supportive of the move, rather than critical of it, but the reverse was true). Rep. Khanna’s email to Twitter noted:

I say this as a total Biden partisan and convinced he didn’t do anything wrong. But the story has now become more about censorship than relatively innocuous emails and it’s become a bigger deal than it would have been.

So again, the evidence actually suggests that the story wasn’t suppressed at all. It got more attention. It didn’t swing the election, because most people didn’t find the story particularly revealing.

3. The government pressured Twitter/Facebook to block this story, and that’s a huge 1st Amendment violation / treason / crime of the century / etc.

Yeah, so, that’s just not true. I’ve spent years calling out government pressure on speech, from Democrats (and more Democrats) to Republicans (and more Republicans). So I’m pretty focused on watching when the government goes over the line — and quick to call it out. And there remains no evidence at all of that happening here. At all. Taibbi admits this flat out:

Matt Taibbi tweet noting "there's no evidence - that I've seen - of any government involvement in the laptop story."

Incredibly, I keep seeing people on Twitter claim that Taibbi said the exact opposite. And you have people like Glenn Greenwald who insist that Taibbi only meant “foreign” governments here, despite all the evidence to the contrary. If he had found evidence that there was US government pressure here… why didn’t he post it? The answer: because it almost certainly does not exist.

Some people point to Mark Zuckerberg’s appearance over the summer on Joe Rogan’s podcast as “proof” that the FBI directed both companies to suppress the story, but that’s not at all what Zuckerberg said if you listened to his actual comments. Zuckerberg admits that they make mistakes, and that it feels terrible when they do. He goes into a pretty detailed explanation of some of how trust & safety works in determining whether or not a user is authentic. Then Rogan asks about the laptop story, and Zuckerberg says:

So, basically, the background here, is the FBI basically came to us, some folks on our team, and were like “just so you know, you should be on high alert, we thought there was a lot of Russian propaganda in the 2016 election, we have it on notice, basically, that there’s about to be some kind of dump that’s similar to that. So just be vigilant.”

This does not say that the FBI came to Facebook and said “suppress the Hunter Biden laptop story.” It was just a general warning that the FBI had intelligence that there might be some foreign influence operations, and to “be vigilant.”

This is nearly identical to what Twitter’s then head of “site integrity,” Yoel Roth, noted in his declaration in the FEC case discussed above:

“[F]ederal law enforcement agencies communicated that they expected ‘hack-and-leak operations’ by state actors might occur in the period shortly before the 2020 presidential election . . . . I also learned in these meetings that there were rumors that a hack-and-leak operation would involve Hunter Biden.”

Basically the FBI is saying, in general, they have some intelligence that this kind of attack may happen, so be careful. It did not say to censor the info. It didn’t involve any threats. It wasn’t specifically about the laptop story.

And, in fact, as of earlier this week, we now have the FBI’s version of these events as well! That’s because of the somewhat silly lawsuit that Missouri and Louisiana filed against the Biden administration over Twitter’s decision to block the NY Post story. Just this week, Missouri released the deposition of FBI agent Elvis Chan, who is often found at the center of conspiracy theories regarding “government censorship.”

And Chan tells basically the same story with a few slight differences, mostly in terms of framing. Specifically, Chan says that he never told the companies to “expect” a hack-and-leak attack, but rather to be aware of the possibility, slightly contradicting Roth’s declaration:

Yeah, I don’t know what Mr. Roth meant, but what I’m letting you know is that from my recollection — I don’t believe we would have worded it so strongly to say that we expected there to be hacks. I would have worded it to say that there was the potential for hacks, and I believe that is how anyone from our side would have framed the comment.

And the reason I believe that is because I and the FBI, for that matter the U.S. intelligence community, was not aware of any successful hacks against political organizations or political campaigns.

You don’t think that intelligence officials described it in the way that Mr. Roth does here in this sentence in the affidavit?

Yeah, I would not have — I do not believe that the intelligence community would have expected it. I said that they would have been concerned about the potential for it.

In the deposition, Chan repeats (many, many times) that he wouldn’t have used the language saying such an effort would be “expected” but that it was something to look out for.

He also doesn’t recall Hunter Biden’s name even coming up, though he does say they warned them to be on the lookout for discussions on “hot button” issues, and notes that the companies themselves would often ask about certain scenarios:

So from my recollection, the social media companies, who include Twitter, would regularly ask us, “Hey, what kind of content do you think the nation state actors, the Russians would post,” and then they would provide examples. Like, “Would it be X” or “Would it be Y” or “Would it be Z.” And then we — I and then the other FBI officials would say, “We believe that the Russians will take advantage of any hot-button issue.” And we — I do not remember us specifically saying “Hunter Biden” in any meeting with Twitter.

Later on he says:

Yeah, in my estimation, we never discussed Hunter Biden specifically with Twitter. And so the way I read that is that there are hack-and-leak operations, and then at the time — at the time I believe he flagged one of the potential current events that were happening ahead of the elections.

You believe that he, Yoel Roth, flagged Hunter Biden in one of these meetings?

No. I believe — I don’t believe he flagged it during one of the meetings. I just think that — so I don’t know. I cannot read his mind, but my assessment is because I don’t remember discussing Hunter Biden at any of the meetings with Twitter, that we didn’t discuss it.

So this would have been something that he would have just thought of as a hot-button issue on his own that happened in October.

He goes into great detail about meeting with tons of companies, but notes that mostly he’d talk to them about cybersecurity threats, not disinformation. He talks a bit about Russian disinformation campaigns, highlighting the well-known Internet Research Agency, which specialized in pushing divisive messaging on US social media platforms. However, he basically confirms that he never discussed the laptop with anyone at any of these companies, and the deposition makes it pretty clear that if anyone at the FBI had done so, it either would have been Chan himself or done with Chan’s knowledge.

As for the NY Post story, and the laptop itself, he notes he found out about it through the media, just like everyone else. And then he says that he didn’t talk with anyone at Twitter or Facebook about it, despite being their main contact on these kinds of issues.

Q. It’s your testimony that those news articles are the first time that you became aware that — you became aware of Hunter Biden’s laptop in any connection?

Yes. I don’t remember if it was a New York Post article or if it was another media outlet, but it was on multiple media outlets, and I can’t remember which article I read.

And before that day, October 14th, 2020, were you aware — were you aware of Hunter Biden — had anyone ever mentioned Hunter Biden’s laptop to you?

No.

[…]

Do you know if anyone at Twitter reached out to anyone at the FBI to check or verify anything about the Hunter Biden story?

I am not aware of any communications between Yoel Roth and the FBI about this topic.

Are you aware of any communications between anyone at Twitter and anyone in the federal government about the decision to suppress content relating to the Hunter Biden laptop story once the story had broken?

I am not aware of Mr. Roth’s discussions with any other federal agency. As I mentioned, I am not aware of any discussions with any FBI employees about this topic as well. But I only know who I know. So I don’t — he may have had these conversations, but I was not aware of it.

You mentioned Mr. Roth. How about anyone else at Twitter, did anyone else at Twitter reach out, to your knowledge, to anyone else in the federal government?

So I can only answer for the FBI. To my knowledge, I am not aware of any Twitter employee reaching out to any FBI employee regarding this topic.

How about Facebook, other than that meeting you referred to where an analyst asked the FBI to comment on the Hunter Biden investigation, are you aware of any communications between anyone at Facebook and anyone at the FBI related to the Hunter Biden laptop story?

No.

How about any other social media platform?

No.

How about Apple or Microsoft?

No.

Basically, the exact same story emerges no matter how you look at it. The FBI, along with CISA, held various meetings with internet companies, mainly to warn them about cybersecurity (i.e., hacking) threats, but also to mention the general possibility of hack-and-leak attempts, with a warning to be on the lookout for such things and a note that they might touch on “hot button” social and news topics. Nowhere is there any indication of pressure, or of attempts to tell the companies what to do or how they should handle it. Just straight-up information sharing.

When you look at all three statements — Zuckerberg’s, Roth’s, and Chan’s — basically the same not-very-interesting story emerges. The US government held some general meetings, of the kind it has with lots of big companies, to warn them about various potential cybersecurity threats, and the possibility of hack-and-leak campaigns came up only in general terms, with no real specifics and no warnings about any particular story.

And no one communicated with the companies directly about the NY Post story.

Given all that, I honestly don’t see how there’s any reasonable concern here. There’s certainly no clear 1st Amendment concern. There appears to be zero in the way of government involvement or pressure. There’s no coercion or even implied threats. There’s literally nothing at all (no matter how Missouri’s Attorney General completely misrepresents it).

Indeed, the only thing revealed so far that might be concerning regarding the 1st Amendment is Taibbi’s claim that the Trump administration made demands of Twitter.

Taibbi tweet saying: "Both parties had access to these tools. For instance, in 2020, requests from both the Trump White House and the Biden campaign were received and honored."

If the Trump administration actually had sent requests to “remove” tweets (as Taibbi claims in an earlier tweet), that would most likely be a 1st Amendment issue. However, Taibbi reveals no such requests, which is really quite remarkable. It is also possible that Taibbi is overselling these claims, since they appear to be part of a discussion (which we’ll get to in the next section) about Twitter’s flagging tools, which anyone, including you or me, can use to flag content for Twitter to review against the company’s terms of service. While there are certainly some concerns about the government’s use of such tools, unless there’s some sort of threat or coercion, and as long as Twitter is free to judge the content for itself and determine how to handle it under its own terms, there’s probably no 1st Amendment issue.

Indeed, some people have highlighted the fact that the government gets “special treatment” in having its flags reviewed. But, from people I’ve spoken to, that actually cuts against the “1st Amendment violation!” argument: many social media companies set up special systems for government agents not because they want to enable “moar censorship!” but because they know they have to be extra vigilant in reviewing those requests, so as not to mistakenly take down content based on a government request.
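To make that distinction concrete, here is a minimal sketch, in Python, of how such a flag-review pipeline might work. This is purely illustrative and assumes nothing about Twitter’s actual systems; every name here (`Flag`, `review_flag`, `log_for_escalated_review`) is hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Source(Enum):
    ORDINARY_USER = auto()
    CAMPAIGN = auto()
    GOVERNMENT = auto()

class Decision(Enum):
    LEAVE_UP = auto()
    REMOVE = auto()

@dataclass
class Flag:
    tweet_id: str
    source: Source
    reason: str

def log_for_escalated_review(flag: Flag) -> None:
    # Hypothetical escalation step: record the government request so a
    # senior reviewer signs off, rather than the takedown happening
    # reflexively because the state asked for it.
    print(f"Escalated government flag on tweet {flag.tweet_id}: {flag.reason}")

def review_flag(flag: Flag, violates_policy) -> Decision:
    """Route a flag to the appropriate review lane.

    The "special treatment" for government flags is extra scrutiny,
    not extra power: whatever the source, the outcome turns on
    whether the content violates the platform's own policies.
    """
    if flag.source is Source.GOVERNMENT:
        log_for_escalated_review(flag)

    # Every flag, from any source, ends in an independent policy
    # check: flagging is a request to review, not an order to delete.
    if violates_policy(flag.tweet_id):
        return Decision.REMOVE
    return Decision.LEAVE_UP
```

The point of the sketch is the same one made in the text: in a system like this, a flag (whether from the government, a campaign, or you or me) only ever triggers a review, and the removal decision still rests entirely on the platform’s own policy judgment.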

So, sorry, so far there appears to be no government intrusion, and certainly no 1st Amendment violation.

4. The Biden campaign / Democrats demanded Twitter censor the NY Post! And that’s a 1st Amendment violation / treason / the crime of the century / etc.

So, again, the only way that there’s a 1st Amendment violation is if the government issued the demand. And in October of 2020, the Biden campaign and the Democratic National Committee… were not the government. The 1st Amendment does not restrict their ability, as private citizens (even while campaigning for public office), to flag content for Twitter to review against its policies. Hilariously, Elon Musk seems kinda confused about how time works. That tweet we screenshotted above about the “1st Amendment” violation is in response to an internal email Taibbi revealed, covering what Taibbi (misleadingly) calls “requests from connected actors to delete tweets,” followed by a screenshot of Twitter employees listing out some tweets, saying “more to review from the Biden team,” with someone responding “handled these.”

The next tweet in the thread showed a similar exchange, this time a set of two tweets sent over by the Democratic National Committee (rather than the Biden campaign, as in the first one). This includes a tweet from the actor James Woods, which the Twitter team calls special attention to for being “high profile.”

Taibbi tweets described in the paragraph before this image.

Except, as a few enterprising folks discovered when looking up the tweets listed, they were… basically Hunter Biden nude images found on the laptop hard drive, which clearly violated Twitter’s terms of service (and likely multiple state laws regarding the sharing of nonconsensual nude images). Among them was the James Woods tweet, which contained a fake Biden campaign ad showing a naked picture of Hunter Biden lying on a bed with his (only slightly blurred) penis quite visible. I’m not going to share a link to the image.

A good investigative reporter might have looked up what was in those tweets before publishing a conspiratorial thread implying that these were attempts by the campaign to remove the NY Post story or some other important information. But Taibbi did not. Nor has he commented on it since.

On top of that, while Taibbi claims these were “requests to delete,” the Twitter email quite clearly says they were sent over for Twitter to “review.” In other words, they were flagged for Twitter to review against its policies, which the naked images clearly violated.

So, there’s clearly no 1st Amendment concern here. First, despite Musk’s understanding of the space-time continuum, there was no Biden administration in the White House in October of 2020. Second, even if we’re concerned about political campaigns asking for content to be deleted, flagging content for a company to review against its policies is not (in any way) the same as demanding it be deleted. Anyone can flag content. The company then reviews it and makes a determination.

Even more importantly, nothing revealed so far suggests that the campaign had anything to say to Twitter regarding the NY Post story or any story regarding the laptop. Literally the only concerns raised were about the naked pictures.

Finally, as noted above, the only other Democrat mentioned so far in the Twitter files is Rep. Ro Khanna, who told Twitter it was wrong to stop the links to the NY Post article, and urged the company to rescind the decision in the name of free speech. That does not sound like Democrats secretly pressuring the company to block the story. It kinda sounds like the exact opposite.

So, despite what everyone keeps yelling on Twitter (including Elon Musk), this still doesn’t appear to be evidence of “censorship” or even “suppression of the Hunter Biden laptop story.” The requests were focused entirely on the nonconsensual sharing of Hunter’s naked images.

As a side note, Woods has now said he’s going to sue over this, though for the life of me I have no idea what sort of claim he thinks he has, or how it will go over in court when he argues that his rights were violated because he couldn’t share Hunter’s dick pic.

5. But Jim Baker! He worked for the FBI! And he was in charge of the Twitter files! Clearly he’s covering up stuff!

Here we are, ripping from the stupidity headlines. This one came out just last night, as Taibbi added a “supplement” to the Twitter files, again seemingly confused about how basically anything works. According to Taibbi, in a very unclear and awkwardly worded thread, he and Bari Weiss (another opinion columnist with whom Musk has decided to share the files) were having some sort of “complication” in accessing the files. Taibbi claims that Twitter’s Deputy General Counsel, Jim Baker, was reviewing the files, and that somehow this was a problem (he does not explain why or how, though there’s a lot of conjecture).

Baker is, in fact, the former General Counsel at the FBI. It made news when he was hired.

Baker was the subject of a bunch of conspiracy theory stuff a few years ago regarding the FBI and some of the sillier theories about the Trump campaign, including the Steele Dossier and the even sillier “Alfa Bank” story (which was always silly, and which lots of people, including us, mocked when it came out).

But despite all that, there’s really little evidence that Baker has done anything particularly noteworthy here. The stuff about his actions while at the FBI is totally overblown partisan hackery. People talk about the so-called “criminal investigation” he faced over his work looking into Russian interference in the 2016 election, but that appears to be something mostly cooked up by extreme Trumpists in the House, and it appears to have gone nowhere. And, yes, he was a witness at the Michael Sussmann trial, which was sorta connected to the Alfa Bank stuff, but his testimony supported John Durham, not Michael Sussmann, in that he claimed Sussmann made a false statement to him, the claim on which the entire case hinged (and, for what it’s worth, the trial ended in acquittal).

In other words, almost all of the FBI-related accusations against Baker are entirely “guilt by association” type claims, with nothing at all legitimate to back them up.

As for Twitter, we already highlighted Baker’s email that Taibbi revealed, which shows a normal, thoughtful, cautious discussion of a normal trust & safety debate, with nothing even remotely political.

The latest claims from Taibbi and Weiss also don’t make much sense. Elon Musk has told his company to hand over a bunch of internal documents to reporters. Any corporate lawyer would naturally do a fairly standard document review before doing so to make sure that they’re not handing over any private information or something else that might create legal issues for Musk. And since a large chunk of the legal team has left the company, it wouldn’t be all that surprising if the task ended up on Baker’s desk.

Now, you can argue (as Taibbi and others now imply) that there’s some massive conflict of interest here, but, uh… that’s not at all clear, and not really how conflict of interest works. And, again, there’s little indication that Baker had a major role here at all, beyond being one of many who weighed in on this matter (and did so in a perfectly reasonable manner).

Honestly, had Baker not reviewed the documents first, he could have put himself in professional jeopardy for failing at the very basic function of his job: making sure the company he worked for didn’t expose itself to serious legal liability by revealing things that might create huge problems for Musk and the company.

Either way, late Tuesday, Musk announced that Baker had “exited” from the company, and when asked by a random Twitter user if he had been “asked to explain himself first,” Musk claimed that Baker’s “explanation was… unconvincing.”

Musk tweets as described in the paragraph above

And perhaps there’s something more here that will be revealed by Weiss now that the shackles have been removed. But, based on what’s been stated so far, a perfectly plausible explanation is that Musk confronted Baker wanting to know why he was holding back the files and what his role was in “suppressing” the NY Post story. And Baker told him, truthfully, that his role was exactly as was revealed in the email (giving his general thoughts on the proper approach to handling the story) and that he was reviewing documents because that’s his job, and Musk got mad and fired him.

Somewhat incredibly, Musk also seemed to imply he only learned of Baker’s involvement on Sunday.

Some people are claiming that Musk is saying he only discovered that Baker worked for him on Sunday, which is possible but seems unlikely. Conspiracy theorists had pointed out Baker’s role at the company to Musk as far back as April. A more charitable explanation is that Musk only discovered on Sunday that Baker was handling the document review. And I guess that’s plausible, but, again, it really only reflects extremely poorly on Musk.

If he’s going to reveal internal documents to reporters, especially ones that Musk himself keeps claiming implicate him in potential criminal liability (yes, it happened before his time, but Musk purchased the liabilities of the company as well), it’s not just perfectly normal, but kinda necessary to have lawyers do some document review. Again, as a more charitable explanation, perhaps Musk just wanted a different lawyer to do the review, and my only answer there is maybe he shouldn’t have gotten rid of so many lawyers from the legal team. Might have helped.

So, look, there could be a possible issue here, but given how much has been totally misrepresented throughout this whole process, and without any actual evidence to support the “Jim Baker mastermind” theory, it’s difficult to take it even remotely seriously when there’s a perfectly normal, non-nefarious explanation for how all of this went down.

The absence of evidence is not evidence that there’s a coverup. It might just be evidence that you’re prone to believing in unsubstantiated conspiracy theories, though.

6. Still, all this proved that Twitter is “illegally” biased towards Democrats!

Taibbi made a big deal out of the fact that Twitter employees overwhelmingly donated to Democrats in their political contributions, which is not exactly new or surprising. Musk commented on this as well, sarcastically suggesting it was proof of bias at Twitter, but he left out that the chart he was commenting on also included Tesla, where over 90% of employee donations went to Democrats.

But, more importantly, it’s not surprising in the least. Employees of many companies lean left. Executives (who donate way more money) tend to lean right. I mean, you can look at a similar chart of executive donations that shows they overwhelmingly go to Republicans. Neither is illegal, or even a problem. It’s just reality.

And companies making editorial decisions are… in fact… allowed to have bias in their political viewpoints. I would bet that if you looked at donations by employees at the NY Post or Fox News, they would generally favor Republicans. Indeed, imagine what would happen if someone took over Fox News and suddenly started revealing (1) communications between Fox News execs and Republican politicians and campaigns and (2) internal editorial meeting notes regarding what to promote. Don’t you think it would be way more biased than what the Twitter files revealed?

Here’s the important point on that: Fox News’ clear bias is not illegal either. And, indeed, if Democrats in Congress held hearings on “Fox News’ bias” and demanded that its top executives appear and explain their editorial decision making in promoting GOP talking points, people should be outraged over the clear intimidation factor, which would obviously be problematic from a 1st Amendment angle. Yet I don’t expect people to get all that worked up about the same thing happening to Twitter, even though it’s actually the same issue.

Companies are allowed to be biased. But the amazing thing revealed in the Twitter files is just how little evidence there is that any bias was a part of the debate on how to handle this stuff. Everything appeared to be about perfectly reasonable business decisions.

And… that’s it. I fear that this story is going to live on for years and years and years. And the narrative full of nonsense is already taking shape. However, I like to work off of actual facts and evidence, rather than fever dreams and misinterpretations. And I hope that you’ll read this and start doing the same.

Filed Under: 1st amendment, biden campaign, content moderation, election interference, elon musk, elvis chan, free speech, hunter biden, hunter biden laptop, jim baker, joe biden, journalism, mark zuckerberg, matt taibbi, reporting, yoel roth
Companies: ny post, twitter