Fascinating New Study Suggests (Again) That Twitter Moderation Is Biased Against Misinformation, Not Conservatives

from the if-you-don't-want-to-get-banned-stop-sharing-bullshit dept

Behold! An actually interesting academic study exploring whether or not Twitter moderation has an anti-conservative bias! This is something many of us have been asking for for a while, but it’s a very difficult thing to study in any meaningful way. The results are really fascinating, but it’s important to dig into some of the background first.

I know, I know, it’s become a key part of “the narrative” that social media websites have an “anti-conservative bias” in how they moderate. As we’ve pointed out over and over again, there remains little evidence to support this. Content moderation is impossible to do well at scale. Mistakes are always going to be made, and pointing to this or that anecdote, without context, doesn’t prove anything about bias. Indeed, every single “example” I’ve seen people trot out as “evidence” of anti-conservative bias falls apart upon closer inspection.

Take, for example, the popular claim that Twitter blocking a NY Post tweet about Hunter Biden’s laptop is proof of bias. However, as we discussed at the time, Twitter very clearly had a policy forbidding links to “hacked” documents. And Twitter had actually used that same policy to shut down DDoSecrets’ account for… publishing documents that exposed law enforcement wrongdoing. So, here we have evidence that the same policy was used to block links to articles about police misconduct (generally a liberal talking point, not a conservative one) as well as the Biden laptop article.

Now, to be clear, we always thought this policy was stupid and were happy that Twitter changed it soon after. But the company did nothing to stop actual discussions of Biden’s laptop (just links to that one story), and it had already shown that it enforced the policy against publications that would mostly be seen as “left-leaning” as well. That’s not proof of bias. Just bad policy.

There have been a few attempts to “study” whether or not anti-conservative bias is actually happening, but they have all come up empty. I mean, there was one ridiculous and non-scientific study claiming that Twitter’s decision to remove accounts like the American Nazi Party, along with some noted white supremacists, proved an anti-conservative bias. But if conservatives are self-identifying with the American Nazi Party, the argument about bias already has some issues.

There was another study looking at Facebook, performed by a subsidiary of Facebook (though the data all seemed legit), that suggested the company was willing to promote Trumpist voices more than anti-Trump voices. But even that didn’t prove very much.

There certainly have been other reports about what’s going on inside these companies, including how Mark Zuckerberg had Facebook change its rules to better protect Trumpists (again suggesting the opposite of anti-conservative bias), or how Twitter had to dial back an algorithmic change that would have suppressed white supremacists because the algorithm was having trouble distinguishing neo-Nazis from prominent Republicans (see the American Nazi Party report above).

Separately, there’s the issue that when conservatives (really, Trump-supporting conservatives) are suspended… they tend to yell more loudly about it. Over the weekend, in a discussion responding to a very prominent investor declaring it obvious that Twitter was biased against conservatives, I saw a few self-identifying conservatives say they’d never heard of any non-conservative getting suspended from Twitter… while also admitting they didn’t follow any (without recognizing how this might bias their own views). The fact is that non-conservative users are frequently suspended as well — often for things like calling out racism or pushing back against homophobia. But those suspensions don’t get blasted all over Fox News.

Anyway, that finally takes us to this new study, done by four researchers from different universities (mainly MIT and Yale): Qi Yang, Mohsen Mosleh, Tauhid Zaman, and David Rand. It’s important to note that the study specifically looked at political speech (the area people are most concerned about, even though it’s a tiny fraction of what most content moderation efforts deal with), and it did find that a noticeably larger share of Republicans than Democrats had their accounts banned (with a decently large sample size). However, that did not mean it showed bias. Indeed, the study is quite clever, in that it controlled for the sharing of generally agreed-upon false information. The conclusion is that Twitter’s content moderation is biased against agreed-upon misinformation, not against a political viewpoint. It’s just that Republicans were shown to be much, much, much more willing to share such misinformation.

Social media companies are often accused of anti-conservative bias, particularly in terms of which users they suspend. Here, we evaluate this possibility empirically. We begin with a survey of 4,900 Americans, which showed strong bipartisan support for social media companies taking actions against online misinformation. We then investigated potential political bias in suspension patterns and identified a set of 9,000 politically engaged Twitter users, half Democratic and half Republican, in October 2020, and followed them through the six months after the U.S. 2020 election.

During that period, while only 7.7% of the Democratic users were suspended, 35.6% of the Republican users were suspended. The Republican users, however, shared substantially more news from misinformation sites – as judged by either fact-checkers or politically balanced crowds – than the Democratic users. Critically, we found that users’ misinformation sharing was as predictive of suspension as was their political orientation. Thus, the observation that Republicans were more likely to be suspended than Democrats provides no support for the claim that Twitter showed political bias in its suspension practices. Instead, the observed asymmetry could be explained entirely by the tendency of Republicans to share more misinformation. While support for action against misinformation is bipartisan, the sharing of misinformation – at least at this historical moment – is heavily asymmetric across parties. As a result, our study shows that it is inappropriate to make inferences about political bias from asymmetries in suspension rates.

Now, I know some people are going to rush straight to the topline result, the differing number of Republican accounts suspended compared to Democratic accounts, but as the authors of the study make abundantly clear, that’s a mistake.

I suggest reading the study, where the methodology seems quite sound. The key finding concerns what best predicts whether an account will be suspended: it’s not the political orientation or beliefs of the tweeter, it’s whether or not they’re sharing blatant misinformation. In fact, the study found that using toxic or offensive language was even less of a predictor. Twitter allows for vigorous and even angry debate (as shouldn’t surprise anyone who is on the site regularly). But if you’re regularly pushing total nonsense, you might get suspended.
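To make that comparison concrete, here’s a minimal sketch, in Python with entirely hypothetical column names (the paper’s actual data and code aren’t reproduced here), of the kind of analysis involved: fit a model predicting suspension from partisanship, misinformation sharing, and toxicity together, and see which predictor carries the weight.

```python
# Minimal sketch of the study's core comparison, NOT the authors' actual code.
# Assumes a hypothetical DataFrame with one row per user:
#   suspended     - 1 if the account was suspended, else 0
#   is_republican - 1 for Republican users, 0 for Democratic users
#   misinfo_share - fraction of shared links from low-quality news domains
#   toxicity      - average toxicity score of the user's tweets
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("users.csv")  # hypothetical input file

# Fit a logistic regression with all three candidate predictors at once.
model = smf.logit(
    "suspended ~ is_republican + misinfo_share + toxicity", data=df
).fit()
print(model.summary())

# If the study's finding holds, the coefficient on misinfo_share stays large
# and significant, while is_republican's coefficient shrinks once
# misinformation sharing is controlled for -- i.e., partisanship adds little
# predictive power beyond what misinformation sharing already explains.
```

The point of fitting all the predictors jointly is exactly the study’s argument: a raw gap in suspension rates between parties tells you nothing until you check whether some other variable, like misinformation sharing, accounts for it.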

Now, I can already hear some people screaming that “misinformation” is in the eye of the beholder, so a study like this could inaccurately count certain content favored by, let’s just say, Republicans as misinformation. However, even there, the researchers appeared to bend over backwards to make this as fair as possible. They used ratings from earlier studies that involved many different raters, including politically balanced crowds, to judge which sources were reliable and which were not (i.e., they didn’t just pick which sources they favored).
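To illustrate what a crowd-based source rating can look like in practice, here’s a small hypothetical sketch: each domain gets a quality score averaged across a politically balanced panel of raters, and a user’s misinformation score is simply the fraction of their shared links pointing to low-rated domains. The domains, scores, and cutoff below are all made up for illustration.

```python
# Hypothetical illustration of crowd-based source ratings, not the paper's data.
# Each domain's quality score is the mean trust rating from a politically
# balanced panel (equal numbers of Democratic and Republican raters).
domain_quality = {
    "example-news.com": 0.82,       # made-up domains and scores
    "totally-real-facts.net": 0.12,
    "partisan-herald.org": 0.35,
}

LOW_QUALITY_CUTOFF = 0.4  # arbitrary threshold for this sketch

def misinfo_share(shared_domains: list[str]) -> float:
    """Fraction of a user's shared links that point to low-quality domains."""
    rated = [d for d in shared_domains if d in domain_quality]
    if not rated:
        return 0.0
    low = sum(1 for d in rated if domain_quality[d] < LOW_QUALITY_CUTOFF)
    return low / len(rated)

print(misinfo_share(["example-news.com", "totally-real-facts.net"]))  # 0.5
```

Because the panel is balanced across parties, neither side’s partisans get to unilaterally define which outlets count as “misinformation” — that’s the fairness property the researchers were after.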

Another interesting piece of the study: the researchers also surveyed both Democrats and Republicans on whether social media sites should try to reduce misinformation, and found strong agreement, even among Republicans, that reducing misinformation was the right approach.

We begin by assessing public attitudes about whether social media companies should take actions against misinformation on their platforms, and how these attitudes vary by respondent partisanship (results are qualitatively equivalent when examining variation by respondent ideology). When N = 1,228 respondents were asked whether or not social media companies should try to reduce the spread of misinformation and fake news on their platforms, 80.0% responded “Yes”. Is this support for platform action bipartisan? While support for reducing misinformation did correlate with respondent partisanship, such that Republicans were less supportive (r(1226) = -0.18, p < 0.001; regression including controls for age, gender, education, and ethnicity: β = -0.17, t = -5.84, p < 0.001), even a substantial majority (67.2%) of strong Republicans believe social media platforms should try to reduce the spread of misinformation (Figure 1a). Thus, there is strong bipartisan support for interventions against misinformation.
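For readers curious what those statistics mean mechanically, here’s a minimal sketch, with hypothetical variable names rather than the authors’ actual code, of how the two reported numbers are computed: a simple correlation between partisanship and support for moderation, plus a regression adding demographic controls.

```python
# Sketch of the survey analysis quoted above, using hypothetical data.
#   support      - 1 if respondent says platforms should reduce misinformation
#   partisanship - e.g., a 1-7 scale from strong Democrat to strong Republican
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical N = 1,228 survey responses

# Simple correlation (the paper reports r(1226) = -0.18, p < 0.001).
r = df["support"].corr(df["partisanship"])
print(f"r = {r:.2f}")

# Regression with demographic controls (sketch of the paper's β = -0.17;
# exact values depend on how the variables are coded and standardized).
model = smf.ols(
    "support ~ partisanship + age + gender + education + ethnicity", data=df
).fit()
print(model.params["partisanship"])
```

The negative sign on the partisanship coefficient is what “Republicans were less supportive” means; the point is that even with that gradient, a solid majority on both sides supports intervention.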

In other words, stripped of culture war buzzwords, two things stand out. First, the vast majority of people want social media websites to intervene to slow the spread of misinformation (contrary to what you might hear out there). Second, the evidence pretty strongly shows that spreading misinformation is the leading predictor of whether you’ll get banned by a social media platform.

So, even if more Republicans than Democrats end up getting banned, the evidence again suggests that it’s not anti-conservative bias at work; it’s just that Republicans are significantly more likely to spread bullshit. If they stopped doing that, they wouldn’t face the same moderation pressures. You can find the whole study at this link or below.

Filed Under: anti-conservative bias, bias, content moderation, study
Companies: twitter