ExTwitter Is Such Hot Garbage Even British Cops No Longer Want Anything To Do With It
from the just-not-the-kind-of-extremism-we-like dept
Of course, Elon Musk had to take Twitter private. If he had to answer to shareholders, he would have been ousted months ago for his systematic, single-minded destruction of the company’s value.
Pretty much every move he’s made has been some level of bad, ranging from confusing to annoying to infuriating to catastrophic. By alienating long-term users, chasing away advertisers, amplifying the voices of some of the worst people in the world, kowtowing to foreign dictators Donald Trump considers to be great leaders, turning verification into pay-to-play, and generally just being an all-around asshole, Musk has managed to turn a social media pioneer into a toxic dumpster fire in record time.
It’s not just advertisers fleeing the platform. It’s also the public sector. Government agencies use services like Twitter to reach constituents, and they’re beginning to see why it might be a bad idea to send out their messages via a service awash in hate, misinformation, extremism, and grifting.
Even entities that often align themselves with the sort of authoritarians Musk and his Trump-loving fanboys dig the most are finding ExTwitter to have moved a bit too far to the extremist side of the spectrum to continue doing (government) business with the platform. Here are Andy Bruce and Muvija M reporting for Reuters about the latest collateral damage produced by the Musk regime:
Reuters contacted all 45 territorial police forces and British Transport Police by email. Of the 33 to give details about their policy, 10 forces who collectively police nearly 13 million people said they were actively reviewing their presence on X, while 13 said they frequently reviewed all their social media platforms.
[…]
Yet of these 23 forces, six said they were cutting their presence to just one or two X accounts. One, North Wales Police, serving nearly 700,000 residents, stopped using X completely in August.
“We … felt that the platform was no longer consistent with our values and therefore we have withdrawn our use of it,” Chief Constable Amanda Blakeman said, adding that they would continue to monitor and review alternative platforms.
That quote could be applied to a large number of former Twitter users, many of whom began leaving the platform after Musk made it clear he’d prefer to be surrounded by conspiracy theorists, domestic extremists, far-right racists, and people willing to turn to violence rather than respect a peaceful transition of power. Then there are the Nazis. Lots of them. And all of this is surrounded by the non-stop gibbering of blue-checked asshats either trying to foist their bigotry on others or simply cluttering up threads with auto-generated replies pushing whatever crypto scam they happen to be participating in.
North Wales pulled the plug. It looks like another police agency may soon be headed for the exit door as well.
Gwent Police said they were reviewing X because of questions about “the tone of the platform and whether that is the right place to reach our communities”. All Gwent’s individual officer accounts have been removed.
It’s not just cops no longer seeing the value in maintaining ExTwitter accounts, although it’s always a surprise to see law enforcement agencies exit a platform that caters to so many of their biggest fans: bigots, white nationalists, and others who will lick any boot they see so long as it’s someone they hate being pinned under the heel.
Interacting with and informing the public is an important function of social media services. But other government agencies are now deciding it’s not worth wading through a cesspool just to hand out a few extra digital pamphlets.
Of 32 ambulance and fire services surveyed by Reuters, nine said they had actively reviewed their presence on X. England’s North East Ambulance Service announced in July that it had stopped posting there.
[…]
In recent months, some British charities and health and educational establishments have said they will no longer post to X.
Every day brings more news like this. Soon, ExTwitter will be of interest to no one but cultural anthropologists. Even those demanding the resurrection of banned accounts are bound to get bored with seeing nothing but their own bigotry and stupidity reflected back at them by a bunch of blue checks with similar interests and a similar lack of anything actually interesting to say.
For public services that seek to serve the entirety of their public, exiting X is the smart move, even if it means temporarily losing a little bit of reach. There’s nothing to be gained by being one of the last rational voices on a service that is pretty much just 4chan but with a more attractive UI.
Filed Under: content moderation, elon musk, extremism, extremist content, police
Companies: twitter, x
Flip Side To 'Stopping' Terrorist Content Online: Facebook Is Deleting Evidence Of War Crimes
from the not-for-the-first-time dept
Just last week, we talked about the new Christchurch Call, and how a bunch of governments and social media companies have made some vague agreements to try to limit and take down “extremist” content. As we pointed out last week, however, there appeared to be little to no exploration by those involved in how such a program might backfire and hide content that is otherwise important.
We’ve been making this point for many, many years, but every time people freak out about “terrorist content” on social media sites and demand that it gets deleted, what really ends up happening is that evidence of war crimes gets deleted as well. This is not an “accident” or a case of these systems being misapplied; it’s a consequence of the simple fact that terrorist propaganda often is important evidence of war crimes. It’s things like this that make the idea of the EU’s upcoming Terrorist Content Regulation so destructive. You can’t demand that terrorist propaganda get taken down without also removing important historical evidence.
It appears that more and more people are finally starting to come to grips with this. The Atlantic recently had an article bemoaning the fact that tech companies are deleting evidence of war crimes, highlighting how such videos have actually been really useful in tracking down terrorists, so long as people can watch them before they get deleted.
In July 2017, a video capturing the execution of 18 people appeared on Facebook. The clip opened with a half-dozen armed men presiding over several rows of detainees. Dressed in bright-orange jumpsuits and black hoods, the captives knelt in the gravel, hands tied behind their back. They never saw what was coming. The gunmen raised their weapons and fired, and the first row of victims crumpled to the earth. The executioners repeated this act four times, following the orders of a confident young man dressed in a black cap and camouflage trousers. If you slowed the video down frame by frame, you could see that his black T-shirt bore the logo of the Al-Saiqa Brigade, an elite unit of the Libyan National Army. That was clue No. 1: This happened in Libya.
Facebook took down the bloody video, whose source has yet to be conclusively determined, shortly after it surfaced. But it existed online long enough for copies to spread to other social-networking sites. Independently, human-rights activists, prosecutors, and other internet users in multiple countries scoured the clip for clues and soon established that the killings had occurred on the outskirts of Benghazi. The ringleader, these investigators concluded, was Mahmoud Mustafa Busayf al-Werfalli, an Al-Saiqa commander. Within a month, the International Criminal Court had charged Werfalli with the murder of 33 people in seven separate incidents, from June 2016 to the July 2017 killings that landed on Facebook. In the ICC arrest warrant, prosecutors relied heavily on digital evidence collected from social-media sites.
The article notes, accurately, that this whole situation is kind of a mess. Governments (and some others in the media and elsewhere) are out there screaming about “terrorist content” online, but pushing companies to take it all down is having the secondary impact of both deleting that evidence from existence and making it that much more difficult to find those terrorists. And when people raise this concern, they’re mostly being ignored:
These concerns are being drowned out by a counterargument, this one from governments, that tech companies should clamp down harder. Authoritarian countries routinely impose social-media blackouts during national crises, as Sri Lanka did after the Easter-morning terror bombings and as Venezuela did during the May 1 uprising. But politicians in healthy democracies are pressing social networks for round-the-clock controls in an effort to protect impressionable minds from violent content that could radicalize them. If these platforms fail to comply, they could face hefty fines and even jail time for their executives.
As the article notes, the companies’ rush to appease governments demanding such content get taken down has already made the job of those open source researchers much more difficult, and has actually helped to hide more terrorists:
Khatib, at the Syrian Archive, said the rise of machine-learning algorithms has made his job far more difficult in recent months. But the push for more filters continues. (As a Brussels-based digital-rights lobbyist in a separate conversation deadpanned, “Filters are the new black, essentially.”) The EU’s online-terrorism bill, Khatib noted, sends the message that sweeping unsavory content under the rug is okay; the social-media platforms will see to it that nobody sees it. He fears the unintended consequences of such a law: that in cracking down on content that’s deemed off-limits in the West, it could have ripple effects that make life even harder for those residing in repressive societies, or worse, in war zones. Any further crackdown on what people can share online, he said, “would definitely be a gift for all authoritarian regimes. It would be a gift for Assad.”
Of course, this is no surprise. We see this in lots of contexts. For example, going after platforms for sex trafficking with FOSTA hampered the ability of police to find actual traffickers and victims by hiding that material from view. Indeed, just this week, a guy was sentenced for sex trafficking a teenager, and the way he was found was via Backpage.
This is really the larger point we’ve been trying to make for the better part of two decades. Focusing on putting liability and control on the intermediary may seem like the “easiest” solution to the fact that there is “bad” content online, but it creates all sorts of downstream effects that we might not like at all. It’s reasonable to say that we don’t want terrorists to be able to easily recruit new individuals to their cause, but if that makes it harder to stop actual terrorism, shouldn’t we be analyzing the trade-offs there? To date, that almost never happens. Instead, we get the form of a moral panic: this content is bad, therefore we need to stop this content, and the only way to do that is to make the platforms liable for it. That assumes — often incorrectly — a few different things, including the idea that magically disappearing the content makes the activity behind it go away. Instead, as this article notes, it often does the opposite and makes it more difficult for officials and law enforcement to track down those actually responsible.
It really is a question of whether or not we want to address the underlying problem (those actually doing bad stuff) or sweep it under the rug by deleting it and pretending it doesn’t happen. All of the efforts to put liability on intermediaries really turn into an effort to sweep the bad stuff under the rug, to look the other way and pretend that if we can’t find it on a major platform, it’s not really happening.
Filed Under: christchurch call, content moderation, evidence, extremist content, terrorist content, war crimes
Companies: facebook, google, twitter
UK's New 'Extremist Content' Filter Will Probably Just End Up Clogged With Innocuous Content
from the hashtags-or-no dept
The UK government has rolled out an auto-flag tool for terrorist video content, presumably masterminded by people who know it when they (or their machine) see it and can apply the “necessary hashtags.” The London firm behind it is giving its own product a thumbs-up, vouching for its nigh invincibility.
London-based firm ASI Data Science was handed £600,000 by government to develop the unnamed algorithm, which uses machine learning to analyse Daesh propaganda videos.
According to the Home Office, tests have shown the tool automatically detects 94 per cent of Daesh propaganda with 99.995 per cent accuracy.
The department claimed the algorithm has an “extremely high degree of accuracy”, with only 50 out of a million randomly selected videos requiring additional human review.
This tool won’t be headed to any big platforms. Most of those already employ algorithms of their own to block extremist content. The Home Office is hoping this will be used by smaller platforms which may not have the budget or in-house expertise to pre-moderate third party content. They’re also hoping it will be used by smaller platforms that have zero interest in applying algorithmic filters to user uploads because it’s more likely to anger their smaller userbase than bring an end to worldwide terrorism.
The Home Office’s hopes are only hopes for the moment. But if there aren’t enough takers, it will become mandated reality.
[Amber] Rudd told the Beeb the government would not rule out taking legislative action “if we need to do it”.
In a statement she said: “The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society. We know that automatic technology like this, can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images.”
Is such an amazing tool really that amazing? It depends on who you ask. The UK government says it’s so great it may not even need to mandate its use. The developers also think their baby is pretty damn cute. But what does “94% blocking with 99.995% accuracy” actually mean when scaled? Well, The Register did some math and noticed it adds up to a whole lot of false positives.
Assume there are 100 Daesh videos uploaded to a platform, among a batch of 100,000 vids that are mostly cat videos and beauty vlogs. The algorithm would accurately pick out 94 terror videos and miss six, while falsely identifying five. Some people might say that’s a fair enough trade-off.
But if it is fed with 1 million videos, and there are still only 100 Daesh ones in there, it will still accurately pick out 94 and miss six – but falsely identify 50.
So if the algorithm was put to work on one of the bigger platforms like YouTube or Facebook, where uploads could hit eight-digit figures a day, the false positives could start to dwarf the correct hits.
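To make The Register’s scaling point concrete, here’s a minimal back-of-envelope sketch (in Python, not anything published by the Home Office or ASI Data Science) that plugs the claimed figures, a 94 percent detection rate and a 0.005 percent false-positive rate, into upload batches of different sizes:

```python
# Back-of-envelope reproduction of The Register's arithmetic.
# Assumed inputs are the Home Office's own claims quoted above:
# the tool catches 94% of Daesh videos and wrongly flags 0.005%
# of everything else (the "99.995% accuracy" figure).

DETECTION_RATE = 0.94
FALSE_POSITIVE_RATE = 0.00005

def flag_counts(total_uploads, daesh_uploads=100):
    """Estimate hits, misses, and false flags for one batch of uploads."""
    innocent = total_uploads - daesh_uploads
    caught = DETECTION_RATE * daesh_uploads
    falsely_flagged = FALSE_POSITIVE_RATE * innocent
    return round(caught), round(daesh_uploads - caught), round(falsely_flagged)

for batch in (100_000, 1_000_000, 10_000_000):
    caught, missed, false_flags = flag_counts(batch)
    print(f"{batch:>10,} uploads: {caught} caught, {missed} missed, "
          f"{false_flags} wrongly flagged")
# ->    100,000 uploads: 94 caught, 6 missed, 5 wrongly flagged
# ->  1,000,000 uploads: 94 caught, 6 missed, 50 wrongly flagged
# -> 10,000,000 uploads: 94 caught, 6 missed, 500 wrongly flagged
```

The false flags grow linearly with upload volume while the correct hits stay flat, which is why the math looks tolerable for a small forum and ugly for a YouTube-scale service.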
This explains the government’s pitch (the one with latent legislative threat) to smaller platforms. Fewer uploads mean fewer false positives. Larger platforms with their own software likely aren’t in the market for something government-made that works worse than what they have.
Then there’s the other problem. Automated filters, backed by human review, may limit the number of false positives. But once the government-ordained tool declares something extremist content, what are the options for third parties whose uploaded content has just been killed? There doesn’t appear to be a baked-in appeals process for wrongful takedowns.
“If material is incorrectly removed, perhaps appealed, who is responsible for reviewing any mistake? It may be too complicated for the small company,” said Jim Killock, director of the Open Rights Group.
“If the government want people to use their tool, there is a strong case that the government should review mistakes and ensure that there is an independent appeals process.”
For now, it’s a one-way ride. Content deemed “extremist” vanishes and users have no vehicle for recourse. Even if one were made available, how often would it be used? Given that this is a government process, rather than a private one, wrongful takedowns will likely remain permanent. As Killock points out, no one wants to risk being branded as a terrorist sympathizer for fighting back against government censorship. Nor do third parties using these platforms necessarily have the funds to back a formal legal complaint against the government.
No filtering system is going to be perfect, but the UK’s new toy isn’t any better than anything already out there. At least in the case of the social media giants, takedowns can be contested without having to face down the government. It’s users against the system — something that rarely works well, but at least doesn’t add the possibility of being added to a “let’s keep an eye on this one” list.
And if it’s a system, it will be gamed. Terrorists will figure out how to sneak stuff past the filters while innocent users pay the price for algorithmic proxy censorship. Savvy non-terrorist users will also game the system, flagging content they don’t like as questionable, possibly resulting in even more non-extremist content being removed from platforms.
The UK government isn’t wrong to try to do something about recruitment efforts and terrorist propaganda. But they’re placing far too much faith in a system that will generate false positives nearly as frequently as it will block extremist content.
Filed Under: algorithms, censorship, extremist content, filters, terrorism, uk
Insanity: Theresa May Says Internet Companies Need To Remove 'Extremist' Content Within 2 Hours
from the a-recipe-for-censorship dept
It’s fairly stunning just how much people believe it’s easy for companies to moderate content online. Take, for example, this random dude who assumes it’s perfectly reasonable for Facebook, Google and Twitter to “manually review all content” on their platforms (and since Google is a search engine, I imagine this means basically all public web content that can be found via its search engine). This is, unfortunately, a complete failure of basic comprehension about the scale of these platforms and how much content flows through them.
Tragically, it’s not just random Rons on Twitter with this idea. Ron’s tweet was in response to UK Prime Minister Theresa May saying that internet platforms must remove “extremist” content within two hours. This is after the UK’s Home Office noted that they see links to “extremist content” remaining online for an average of 36 hours. Frankly, 36 hours seems incredibly low. That’s pretty fast for platforms to be able to discover such content, make a thorough analysis of whether or not it truly is “extremist content” and figure out what to do about it. Various laws on takedowns usually have statements about a “reasonable” amount of time to respond — and while there are rarely set numbers, the general rule of thumb seems to be approximately 24 hours after notice (which is pretty aggressive).
But for May to now be demanding two hours is crazy. It’s a recipe for widespread censorship. Already we see lots of false takedowns from these platforms as they try to take down bad content — we write about them all the time. And when it comes to “extremist” content, things can get particularly ridiculous. A few years back, we wrote about how YouTube took down an account that was documenting atrocities in Syria. And the same thing happened just a month ago, with YouTube deleting evidence of war crimes.
So, May calling for these platforms to take down extremist content in two hours confuses two important things. First, it shows a near total ignorance of the scale of content on these platforms. There is no way to actually monitor all of this stuff. Second, it shows a real ignorance about the whole concept of “extremist” content. There is no clear definition of it, and without a clear definition, wrong decisions will be made. Frequently. Especially if you’re not giving the platforms any time to actually investigate. At best, you’re going to end up with a system with weak AI flagging certain things, and then low-paid, poorly trained individuals in far off countries making quick decisions.
And since the “penalty” for leaving content up will be severe, the incentives will all push towards taking down the content and censorship. The only pushback against this is the slight embarrassment if someone makes a stink about mistargeted takedowns.
Of course, Theresa May doesn’t care about that at all. She’s been bleating about censoring the internet to stop terrorists for quite some time now — and appears willing to use any excuse and make ridiculous demands along the way. It doesn’t appear she has any interest in understanding the nature of the problem, as it’s much more useful to her to blame others for terrorist attacks on her watch than to actually do anything legitimate to stop them. Censoring the internet isn’t a solution, but it allows her to cast blame on foreign companies.
Filed Under: censorship, extremist content, theresa may, uk
Companies: facebook, google, twitter