This Will Backfire: Google/Facebook Using Copyright Tools To Remove 'Extremist' Content

from the slippery-slippery-slope dept

They’ve been pressured to do this for a while, but according to a Reuters report over the weekend, both Google and Facebook have started repurposing some of their automation tools to automatically remove “extremist” content. Both are apparently using modifications to their copyright takedown technology:

The technology was originally developed to identify and remove copyright-protected content on video sites. It looks for “hashes,” a type of unique digital fingerprint that internet companies automatically assign to specific videos, allowing all content with matching fingerprints to be removed rapidly.

In other words, the companies aren’t (yet) using these tools to automatically determine what’s “extremist” and block it; rather, once a piece of content has been flagged as extremist, its hash is used to keep that same content from being posted again. Of course, we’re all quite familiar with how badly this can fail in the copyright context, and it’s quite likely the same thing will happen in this context as well. Remember, in the past, under pressure from a US Senator, YouTube took down a Syrian watchdog’s channel, confusing its documentation of atrocities with extremist content. And, hell, the same day that this was reported, a reporter on Twitter noted that her own Facebook account was suspended because she posted a picture of a friend of hers who had been killed in Syria.
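To make the mechanism concrete, here’s a minimal sketch of how hash-based blocklisting works in general. This is purely an illustration, not the companies’ actual implementation: the function names are hypothetical, and real systems reportedly rely on perceptual fingerprints that survive re-encoding, whereas this toy version uses a plain cryptographic digest for simplicity.

```python
import hashlib

# Hypothetical blocklist of fingerprints for content already flagged as "extremist".
# Real systems likely use perceptual fingerprints rather than exact digests;
# SHA-256 is used here only to keep the sketch simple.
BLOCKED_FINGERPRINTS: set[str] = set()


def fingerprint(video_bytes: bytes) -> str:
    """Return a digest standing in for the video's 'hash' fingerprint."""
    return hashlib.sha256(video_bytes).hexdigest()


def flag_as_extremist(video_bytes: bytes) -> None:
    """After human review flags a video, record its fingerprint."""
    BLOCKED_FINGERPRINTS.add(fingerprint(video_bytes))


def allow_upload(video_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches previously flagged content."""
    return fingerprint(video_bytes) not in BLOCKED_FINGERPRINTS
```

The key point is that the matching is context-blind: the same bytes are blocked whether they appear in a propaganda clip, a news report, or a human-rights archive, which is exactly why the failure modes below matter.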

And that’s a big part of the issue here: context totally matters. One person’s extremist content may be quite informative/useful in other contexts.

Yes, I know that there’s a big push for “countering violent extremism” online these days. And the government, in particular, has been putting lots of pressure on the big tech companies to “do something.” But I’m curious what anyone thinks this is actually accomplishing. The people who want to see these videos will still see these videos. It still seems like a fairly exaggerated threat to think that someone will suddenly decide to join ISIS just because they watched some YouTube videos. And, if that is the case, it seems like a much better response is counterspeech — put up other videos that rebut the claims in the “extremist” videos, rather than blocking them across the board. Of course, if videos are being matched with this ContentID-style system, even someone offering commentary to debunk a video’s claims may suddenly find their own uploads being taken down as well. I can’t see how that’s at all helpful.

Filed Under: censorship, contentid, copyright, isis, platforms, radical extremism, videos
Companies: facebook, google