Removing Terrorist Content Isn't Helping Win The War On Terror
from the misguided-efforts dept
The terrorists are winning.
This shouldn’t come as a surprise. The War on Drugs hasn’t made a dent in drug distribution. Why should the War on Terror be any different? Two decades and several billion dollars later, what do we have to show for it? Just plenty of enemies foreign and domestic.
While politicians rail against “terrorist content,” encryption, and the right of people to remain generally unmolested by their governments, they’re leaning hard on social media platforms to eradicate this content ASAP.
And social media companies are doing all they can. Moderation is hard. It’s impossible when you’re serving millions of users at once. Nonetheless, the content goes down. Some of it is actual “terrorist content.” Some of it is journalism. Some of it is stuff no one would consider terroristic. But it all goes down because time is of the essence and the world is watching.
But to what end? As was noted here all the way back in 2017, efforts made to take down “terrorist content” resulted in the removal of evidence of war crimes. Not much has changed since then. This unfortunate side effect was spotted again in 2019. Target all the terrorist content you want, but destroying it destroys evidence that could be used to identify, track, and, ultimately, prosecute terrorists.
Sure, there’s some concern that unmoderated terrorist content contains the inherent power to radicalize internet randos. It’s a valid concern, but it might be outweighed by the positives of keeping the content live. To go further, it might be a net gain for society if terrorist content was accessible and easily shared. This seems counterintuitive, but there’s a growing body of research showing terrorists + internet use = thwarted terrorist plots.
Call me crazy, but this sounds like a better deal for the world’s population than dozens of surveillance agencies slurping up everything that isn’t nailed down by statute. This comes from Joe Whittaker at Lawfare, who summarizes research suggesting swift removal of “terrorist content” isn’t helping win the War on Terror.
In my sample, the success of an attempted terrorist event—defined as conducting an attack (regardless of fatalities), traveling to the caliphate, or materially supporting other actors by providing funds or otherwise assisting their event—is negatively correlated with a range of different internet behaviors, including interacting with co-ideologues and planning their eventual activity. Furthermore, those who used the internet were also significantly more likely to be known to the security services prior to their event or arrest. There is support for this within the literature; researchers at START found that U.S.-based extremists who were active on social media had lower chances of success than those who were not. Similarly, research on U.K.-based lone actors by Paul Gill and Emily Corner found that individuals who used the internet to plan their actions were significantly less likely to kill or injure a target. Despite the operational affordances that the internet can offer, terrorist actors often inadvertently telegraph their intentions to law enforcement. Take Heather Coffman, whose Facebook profile picture of an image of armed men with the text “VIRTUES OF THE MUJIHADEEN” alerted the FBI, which deployed an undercover agent and eventually led to her arrest.
Correlation isn’t causation, but there’s something to be said about visibility. This has been a noticeable problem ever since some law enforcement-adjacent grandstanders started nailing every online service with personal ads to the judicial wall for supposedly facilitating sex trafficking. Ads were pulled. Services were halted. And sex traffickers became increasingly difficult to track down.
As this research notes, radicalization might occur faster with heavier social media use. But this isn’t necessarily a bad thing. Greater visibility means easier tracking and better prevention.
Out in the open also means encryption isn’t nearly as much of an issue. Terrorist organizations appear to be voluntarily moving away from open platforms, sacrificing expeditious radicalization for privacy and security. But even that doesn’t appear to pose nearly as much of a problem as politicians and law enforcement officials suggest.
When looking at the Islamic State cohort in the United States, unlike other online behaviors, there is not a significant relationship between the use of end-to-end encryption and event success. Terrorists who used it were just as likely to be successful as those who did not.
Unfortunately, there are no easy answers here. While driving terrorists underground results in limited visibility for those seeking to thwart their plans, allowing them to take full advantage of open platforms increases the number of possible terrorists law enforcement must keep an eye on.
The downsides of aggressive moderation, however, are clear. Visibility decreases as the possibility for over-moderation increases. Evidence needed for investigations and prosecutions vanishes into the ether over the deafening roar of calls to “do more.”
Filed Under: content moderation, content removals, open source intelligence, terrorism, terrorist content