Content Moderation At Scale Is Impossible To Do Well: Series About Antisemitism Removed By Instagram For Being Antisemitic
from the hate-vs.-reporting-on-hate dept
I’ve written a lot about the impossibility of doing content moderation well at scale, and there are lots of reasons for that. But one of the most common is the difficulty both AI and human reviewers have in distinguishing hateful/trollish/harassing behavior from reporting on that behavior. We’ve pointed this out over and over again in a variety of contexts. One classic example is social media websites pulling down posts from human rights activists highlighting war crimes, claiming they are “terrorist content.” Another is the many examples of people on social media talking about racism, and about being victims of racist attacks, having their accounts and posts shut down over claims of racism.
And now we have another similar example. A new video series about antisemitism posted its trailer to Instagram… where it was removed for violating community guidelines.
You can see the video on YouTube, and it’s not difficult to figure out how this happened. The message from Instagram says the trailer violates its community guidelines against “violence or dangerous organizations.” The video in question, all about antisemitism, does include some Nazi imagery, obviously to make the point that in its extreme form, antisemitism can lead to the murder of Jews. But Instagram has banned all Nazi content, in part in response to complaints about antisemitism on Instagram.
And that leads to a dilemma. If you’re banning Nazi content, you also have to realize how that might lead to content about Nazis (to criticize them and to warn about what they might do) also getting banned. And, again, this isn’t new. Earlier this year we had a case study on how YouTube’s similar ban took down historical and educational videos about the Holocaust.
The point here is that there is no easy answer. You can say that it should be obvious to anyone reviewing this that the trailer (highlighting how bad antisemitism is) is different from actual antisemitism, but it’s a lot harder in practice at massive scale. First you need reviewers who actually understand the difference, and you have to be able to write rules that can go out to thousands of moderators in a form simple enough to make those differences explicit. You also need to give reviewers enough time to actually understand the context, which is effectively impossible given the volume of content that needs to be reviewed. In such situations, what often gets written is the “simpler” version of the rule: “No Nazi content.” That’s clear and scalable, but it leads to these kinds of “mistakes.”
Filed Under: antisemitism, community standards, content moderation, hate speech, nazi content, scale
Companies: facebook, instagram
Freaking Out About Nazi Content On The Internet Archive Is Totally Missing The Point
from the moral-panics dept
The moral panics around anyone finding “bad” content online are getting out of control. The latest is a truly silly article in the San Francisco Chronicle whining about the fact that there is Nazi content available on the Internet Archive, written by the executive director of the Middle East Media Research Institute, Steven Stalinsky, who is quite perturbed that his own personal content moderation desires are not how the Internet Archive moderates.
For the past decade, Middle East Media Research Institute (MEMRI) research has been exposing the Internet Archive’s enabling of Al-Qaeda, ISIS and other jihadi propaganda efforts and its function as a database for their distribution of materials, recruitment campaigns, incitement of violence, fundraising and even daily radio programs. We wrote that ISIS liked the platform because there was no way to flag objectionable content for review and removal — unlike on other platforms such as YouTube. Today, the Internet Archive enables neo-Nazis and white supremacists in the same ways, and its terms of use still deny responsibility for content uploaded to it.
Right, so let’s stop right there. Yes, for over a decade we’ve written about the ongoing complaints from the pearl-clutching crew that you could find terrorist content online, along with their demands that websites pull it down, demands that led to social media sites shutting down the accounts of human rights groups documenting war crimes committed by terrorist organizations.
There’s a key point in this: just because this information is available does not mean it is only used for nefarious purposes. Indeed, it is often used for important and valuable purposes — such as documenting crimes, or creating historical archives that show truly horrific crimes and ignorant thinking. Deleting that and sweeping it under the rug is not a reasonable approach either. But this entire article by Stalinsky seems premised on the idea that every bit of evidence of Nazism should disappear. That seems incredibly counterproductive.
A recent two-year study I co-authored reviews the massive amount of content being uploaded, downloaded and shared by these groups on the Internet Archive and how it is used for recruitment and radicalization. This includes historical Nazi content such as copies of Der Sturmer, the virulently antisemitic Nazi-era propaganda newspaper, and speeches and writings by Adolph Hitler, Nazi propaganda minister Joseph Goebbels and other Nazi figures.
This historical material is interspersed with neo-Nazi content, including tens of thousands of pages with titles such as “Adolf Hitler: The Ultimate Red Pill,” “666 Adolf Hitler Quotes” and “Joseph Goebbels, Master of Propaganda, Heil Hitler,” and videos and writings by convicted Holocaust deniers.
And the answer to this content is… to set it all on fire? Like that won’t come back to bite you?
Extremist works are available on the platform for download — and for radicalization — including seminal white supremacist books, training manuals for carrying out attacks, recruitment videos and several manifestos of white supremacist mass shooters.
And it’s also available for activists, journalists, researchers and more to study it and figure out how to counter it.
Yes, these are serious issues, and I can understand the concerns about how this information could be misused (though, despite an attention-grabbing headline about how the website is a “favorite” for “neo-Nazis,” the actual article supplies little to no evidence to support that claim). But simply hiding the information doesn’t make it go away, nor does it deal with any of the underlying reasons such information might be appealing to some ignorant people. It is brushing a serious issue under the rug, and doing so in a way that can have seriously bad consequences — as we’ve seen with social media sites deleting evidence of war crimes.
Everyone seems to think that content moderation is easy — just do what I would do — without ever thinking through the actual trade-offs and challenges of having to make these decisions. The article here seems to be written in incredibly bad faith, assuming that removing these historical documents is the only possible and acceptable solution, without bothering to grapple with the serious difficulties and trade-offs involved in making such decisions. Are there ways that the Internet Archive could better handle this content? Probably! Will being scolded as a “favorite” of “neo-Nazis” help make that work better? That seems unlikely.
Filed Under: archive, content moderation, library, nazi content, steven stalinsky, terrorist content, user generated content
Companies: internet archive