
Stories filed under: "terrorist content"

New Israeli Law Makes Consuming ‘Terrorist’ Content A Criminal Offense

from the well-that's-a-mess dept

It’s amazing just how much war and conflict can change a country. On October 7th, Hamas blitzed Israel with an attack that was plainly barbaric. Yes, this is a conflict that has been simmering with occasional flashpoints for decades. No, neither side can even begin to claim it has entirely clean hands as a result of those decades of conflict. We can get the equivocating out of the way. October 7th was different: the worst single day of murder of the Jewish community since the Holocaust. And even in the immediate aftermath, those outside of Israel and those within knew that the attack was going to result in both an immediate reaction from Israel and longstanding changes within its borders. And those of us from America, or those who witnessed how our country reacted to 9/11, knew precisely how much danger this period of change represented.

It’s already started. First, Israel loosened the reins to allow once-blacklisted spyware companies to use their tools to help Israel find the hundreds of hostages Hamas claims to have taken. While that goal is perfectly noble, of course, it marked the beginning of a willingness to reach for more nefarious tools in pursuit of that end. And now we learn that Israel’s government has taken the next step, amending its counterterrorism laws to make the consumption of “terrorist” content a criminal offense, punishable with jail time.

The bill, which was approved by a 13-4 majority in the Knesset, is a temporary two-year measure that amends Article 24 of the counterterrorism law to ban the “systematic and continuous consumption of publications of a terrorist organization under circumstances that indicate identification with the terrorist organization”.

It identifies the Palestinian group Hamas and the ISIL (ISIS) group as the “terrorist” organisations to which the offence applies. It grants the justice minister the authority to add more organisations to the list, in agreement with the Ministry of Defence and with the approval of the Knesset’s Constitution, Law, and Justice Committee.

Make no mistake, this is the institution of thought crime. Read those two paragraphs one more time and realize just how much the criminalization of consumption of materials relies on the judgement and interpretation of those enforcing it. What is systematic in terms of this law? What is a publication? What constitutes a “terrorist organization,” not in the case of Hamas and ISIL, but in that ominous bit at the end of the second paragraph, where more organizations can — and will — be added to this list?

And most importantly, how in the world is the Israeli government going to determine “circumstances that indicate identification with the terrorist organization?”

“This law is one of the most intrusive and draconian legislative measures ever passed by the Israeli Knesset since it makes thoughts subject to criminal punishment,” said Adalah, the Legal Centre for Arab Minority Rights in Israel. It warned that the amendment would criminalise “even passive social media use” amid a climate of surveillance and curtailment of free speech targeting Palestinian citizens of Israel.

“This legislation encroaches upon the sacred realm of an individual’s personal thoughts and beliefs and significantly amplifies state surveillance of social media use,” the statement added. Adalah is sending a petition to the Supreme Court to challenge the bill.

This has all the hallmarks of America’s overreaction to the 9/11 attacks. We still haven’t unwound, not even close, all of the harm that was done in the aftermath of those attacks, all in the name of safety. Our civil liberties remain at a net loss because of that overreaction. President Biden even reportedly warned Israel not to repeat our mistakes, but it is doing so anyway.

And circling back to the first quotation and the claim that this is a temporary, two-year measure: that’s just not how this works. If this law is allowed to continue to exist, it will be extended, and then extended again. The United States is still operating under the 2001 Authorization for Use of Military Force, which the Biden administration used to conduct strikes in Somalia two decades later.

The right to speech and thought is as bedrock a thing as exists for a democracy. If we accept that premise, then it is simply impossible to “protect a democracy” by limiting the rights of speech and thought. And that’s precisely what this new law in Israel does: it chips away at the democracy of the state in order to protect it.

That’s not how Israel wins this war, if that is in fact the goal.

Filed Under: hamas, israel, palestine, terrorism, terrorist content

Websites Now Have One Hour To Remove “Terrorist Content” Online Or Face Massive Fines. What Could Go Wrong?

from the this-won't-be-abused-at-all dept

We spent a few years warning people about the terrible EU Terrorist Content Regulation, but as of this week, it’s now in effect, and websites will have one hour to remove any terrorist content that is flagged to them by any government official. If they fail to remove the content, they could face fines of up to an astounding 4% of global revenue.

And the definition of what could count is incredibly broad:

Such material includes text, images, sound recordings and videos, as well as live transmissions of terrorist offences…

Oh. Just that.

And, don’t worry, the law says to take context into account.

When assessing whether material constitutes terrorist content within the meaning of this Regulation, competent authorities and hosting service providers should take into account factors such as the nature and wording of statements, the context in which the statements were made and their potential to lead to harmful consequences in respect of the security and safety of persons.

But, remember, if you don’t take it down, the government may fine you billions of dollars. So basically, companies have one hour to assess the wider context… but a mistaken decision to leave something up could cost the company billions of dollars. Guess what’s going to happen? Everyone is going to take down reported content.

So, you might say, if it’s really encouraging terrorist activity, maybe it’s better to take it down, right? Except the evidence suggests otherwise. Actual research on the topic suggests that removing terrorist content doesn’t do a damn thing to stop terrorist acts. In fact, the research suggests that having such content out in the open enables better responses to terrorist threats.

Also, perhaps you think that the officials making these reports are trustworthy and won’t ever make mistakes? Except we know that’s not how this works. Facebook was pressured into removing more terrorist content, and in the process ended up taking down lots of accounts from activists and journalists who were reporting on terrorist activity. Or how about the time that YouTube was ordered to remove terrorist content… and ended up removing documentation of war crimes instead.

Oops.

Oh, and how can we forget the time, just a couple years ago, when French officials declared that much of the Internet Archive was actually terrorist content and needed to be removed?

I’m sure that this new regulation is going to go just great.

Filed Under: censorship, content moderation, eu, terrorism, terrorist content, terrorist content regulation

Freaking Out About Nazi Content On The Internet Archive Is Totally Missing The Point

from the moral-panics dept

The moral panics around anyone finding “bad” content online are getting out of control. The latest is a truly silly article in the San Francisco Chronicle whining about the fact that there is Nazi content available on the Internet Archive, written by the executive director of the Middle East Media Research Institute, Steven Stalinsky, who is quite perturbed that his own personal content moderation desires are not how the Internet Archive moderates.

For the past decade, Middle East Media Research Institute (MEMRI) research has been exposing the Internet Archive’s enabling of Al-Qaeda, ISIS and other jihadi propaganda efforts and its function as a database for their distribution of materials, recruitment campaigns, incitement of violence, fundraising and even daily radio programs. We wrote that ISIS liked the platform because there was no way to flag objectionable content for review and removal — unlike on other platforms such as YouTube. Today, the Internet Archive enables neo-Nazis and white supremacists in the same ways, and its terms of use still deny responsibility for content uploaded to it.

Right, so let’s stop right there. Yes, for over a decade, we’ve written about ongoing complaints among the pearl clutching crew that you could find terrorist content online, along with their demands that websites pull it down, leading to social media sites shutting down the accounts of human rights groups who were documenting war crimes committed by terrorist organizations.

There’s a key point in this: just because this information is available does not mean it is only used for nefarious purposes. Indeed, it is often used for important and valuable purposes — such as documenting crimes, or creating historical archives that show truly horrific crimes and ignorant thinking. Deleting that and sweeping it under the rug is not a reasonable approach either. But this entire article by Stalinsky seems premised on the idea that every bit of evidence of Nazism should disappear. That seems incredibly counterproductive.

A recent two-year study I co-authored reviews the massive amount of content being uploaded, downloaded and shared by these groups on the Internet Archive and how it is used for recruitment and radicalization. This includes historical Nazi content such as copies of Der Sturmer, the virulently antisemitic Nazi-era propaganda newspaper, and speeches and writings by Adolph Hitler, Nazi propaganda minister Joseph Goebbels and other Nazi figures.

This historical material is interspersed with neo-Nazi content, including tens of thousands of pages with titles such as “Adolf Hitler: The Ultimate Red Pill,” “666 Adolf Hitler Quotes” and “Joseph Goebbels, Master of Propaganda, Heil Hitler,” and videos and writings by convicted Holocaust deniers.

And the answer to this content is… to set it all on fire? Like that won’t come back to bite you?

Extremist works are available on the platform for download — and for radicalization — including seminal white supremacist books, training manuals for carrying out attacks, recruitment videos and several manifestos of white supremacist mass shooters.

And it’s also available for activists, journalists, researchers and more to study it and figure out how to counter it.

Yes, these are serious issues, and I can understand the concerns about how this information could be misused (though, despite an attention-grabbing headline about how the website is a “favorite” for “neo-Nazis”, the actual article supplies little to no evidence to support that claim). But simply hiding the information doesn’t make it go away — nor does it deal with any of the underlying reasons such information might be appealing to some ignorant people. It is brushing a serious issue under the rug, and doing so in a way that can have seriously bad consequences — as we’ve seen with social media sites deleting evidence of war crimes.

Everyone seems to think that content moderation is easy — just do what I would do — without ever thinking through the actual trade-offs and challenges of having to actually make these decisions. The article here seems to be written in incredibly bad faith, assuming that removing these historical documents is the only possible and acceptable solution, without bothering to grapple with the serious difficulties and trade-offs involved in making such decisions. Are there ways that the Internet Archive could better handle this content? Probably! Will being scolded as a “favorite” of “neo-Nazis” help make that work better? That seems unlikely.

Filed Under: archive, content moderation, library, nazi content, steven stalinsky, terrorist content, user generated content
Companies: internet archive

Content Moderation Case Study: Facebook's Moderation Of Terrorist Content Results In The Removal Of Journalists' And Activists' Accounts (June 2020)

from the context-matters dept

Summary: In almost every country in which it offers its service, Facebook has been asked — sometimes via direct regulation — to limit the spread of “terrorist” content.

But moderating this content has proven difficult. It appears the more aggressively Facebook approaches the problem, the more collateral damage it causes to journalists, activists, and others studying and reporting on terrorist activity.

Because documenting and reporting on terrorist activity necessitates posting content considered to be “extremist,” journalists and activists are being swept up in Facebook’s attempts to purge its website of content considered a violation of its terms of service, if not actually illegal.

The same thing happened in another country frequently targeted by terrorist attacks.

In the space of one day, more than 50 Palestinian journalists and activists had their profile pages deleted by Facebook, alongside a notification saying their pages had been deactivated for “not following our Community Standards.”

“We have already reviewed this decision and it can’t be reversed,” the message continued, prompting users to read more about Facebook’s Community Standards.

There appears to be no easy solution to Facebook’s over-moderation of terrorist content. With algorithms doing most of the work, it’s left up to human moderators to judge the context of the posts to see if they’re glorifying terrorists or simply providing information about terrorist activities.

Decisions to be made by Facebook:

Questions and policy implications to consider:

Resolution: Facebook continues to struggle to eliminate terrorist-linked content from its platform. It appears to have no plan in place to reduce the collateral damage caused by its less-than-nuanced approach to a problem that appears — at least at this point — unsolvable. In fact, its own algorithms have generated extremist content by auto-generating “year in review” videos utilizing “terrorist” content uploaded by users, but apparently never removed by Facebook.

Facebook’s ongoing efforts with the Global Internet Forum to Counter Terrorism (GIFCT) probably aren’t going to limit the collateral damage to activists and journalists. Hashes of content designated “extremist” are uploaded to GIFCT’s database, making it easier for algorithmic moderation to detect and remove unwanted content. But utilizing hashes and automatic moderation won’t solve the problem facing Facebook and others: the moderation of extremist content uploaded by extremists and similar content uploaded by users who are reporting on extremist activity. The company continues to address the issue, but it seems likely this collateral damage will continue until more nuanced moderation options are created and put in place.
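To make the mechanics concrete, here is a minimal sketch of how hash-based matching of flagged content works in general. This is not GIFCT’s or Facebook’s actual system: the database contents and function names below are hypothetical, and real deployments typically rely on perceptual hashes (which tolerate resizing and re-encoding) rather than the exact cryptographic digest used here for simplicity.

```python
import hashlib

# Hypothetical shared database: hex digests of files that participating
# platforms have flagged as "extremist." In practice this would be a shared
# service populated with perceptual hashes, not exact SHA-256 digests.
SHARED_HASH_DB = {
    "9f2b5c...",  # placeholder entry, not a real hash
}

def file_digest(path: str) -> str:
    """Compute an exact SHA-256 digest of an uploaded file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def should_flag(path: str) -> bool:
    """Flag an upload if its digest matches something in the shared database."""
    return file_digest(path) in SHARED_HASH_DB
```

The limitation described above falls straight out of this design: a hash match says only that a file has been seen and flagged before. It carries no information about whether the uploader is a propagandist or a journalist documenting the same event, which is why hash-based removal keeps sweeping up reporting alongside propaganda.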

Filed Under: content moderation, context, journalism, terrorist content
Companies: facebook

Gullible Maine & DHS Intel Officers Believed Teen TikTok Video Was Serious Terrorist Threat

from the this-just-keeps-getting-dumber dept

Thu, Aug 6th 2020 06:29am - Karl Bode

We’ve been noting for a few weeks that much of the hysteria surrounding TikTok is kind of dumb. For one, banning TikTok doesn’t really do much to thwart Chinese spying, given that our privacy and security incompetence leaves us vulnerable on countless fronts. Most of the folks doing the heaviest pearl-clutching over TikTok have opposed efforts at any meaningful internet privacy rules, have opposed funding election security reform, and have been utterly absent or apathetic in the quest for better security and privacy practices overall (the SS7 flaw, cellular location data scandals, etc.).

Even the idea that banning TikTok meaningfully thwarts Chinese spying — given the country’s total lack of scruples, bottomless hacking budget, and our own security and privacy incompetence (the IoT comes quickly to mind) — is fairly laughable. Banning TikTok to thwart Chinese spying is kind of like spitting at a thunderstorm in the hopes of preventing rain. Genuine privacy and security reform starts by actually engaging in serious privacy and security reform, not (waves in the general direction of Trump’s bizarre, extortionist TikTok agenda) whatever the hell this is supposed to be.

I see the entire TikTok saga as little more than bumbling, performative nonsense by wholly unserious people more interested in money, politics, leverage, and power than privacy or national security. Case in point: desperate to create the idea that TikTok is a serious threat, a new document leak reveals that the Department of Homeland Security has spent a good chunk of this year circulating the claim that a nineteen-year-old girl was somehow “training terrorists” via a comedy video she posted to TikTok.

According to Mainer, the video in question was sent to police departments across Maine by the Maine Information and Analysis Center (MIAC), part of the DHS network of so-called “Fusion Centers” tasked with sharing and distributing information about “potential terrorist threats.” The problem: when you dig through the teen in question’s TikTok posts, it’s abundantly clear after about four minutes of watching that she’s not a threat. The tweet itself appears to have been deleted, but it too (duh) wasn’t anything remotely resembling a genuine terrorist threat or security risk:

“In the TikTok clip, Weirdsappho first displays a satirical tweet from the stand-up comedian Jaboukie Young-White, a correspondent for The Daily Show, that ‘thanks’ police for ‘bringing in the army’ to combat peaceful protests against police brutality. The tweet encourages protestors to throw ‘water balloons filled w sticky liquids (esp some sort of sugar/milk/syrup combo)’ at tanks, in order to ‘support our troops.’”

And yet, after the clip got picked up and spread around by a handful of Qanon conspiracy cultists, it was, in turn, picked up and spread around by utterly unskeptical and uncritical agents at DHS and MIAC, who have a bit of a blind spot when it comes to far-right extremism (for what should be obvious reasons), but can be easily worked into a lather where the vile menace “antifa” is concerned:

“Fusion Centers like MIAC, which is headquartered in Augusta and run by the Maine State Police, are engaged in a pattern of spreading misinformation, based on far-right rumors, that raise fears of leftist violence at peaceful protests against police brutality. Earlier this month, Mainer exposed how two social media posts by unreliable sources became fodder for official warnings about anarchist ‘plots’ to leave stacks of bricks at protest sites for use as weapons against police.

“In a July 15 article based on the BlueLeaks files, The Intercept revealed how DHS and its fusion centers are hyping far-fetched plots by alleged anti-fascist ‘antifa’ activists despite evidence that far-right extremists pose actual threats to law enforcement personnel and protesters.”

The idea that law enforcement and “intelligence officials” can’t (or just won’t) differentiate between joking political teen videos and serious terrorism threats should be terrifying to anybody with a whit of common sense. But it’s not just typical of a law enforcement and intel community that apparently can’t behave or think objectively; it’s also par for the course for this wave of TikTok hysteria that’s not based on much in the way of, you know, facts.

Filed Under: comedy, dhs, maine, satire, terrorist content, terrorist threat
Companies: tiktok

The UK's Entire Approach To 'Online Harms' Is Backwards… And No One Cares

from the this-is-not-a-good-idea dept

Back in April, the UK (with Theresa May making the announcement) released a plan to fine internet companies if they allowed “online harms” in the form of “abhorrent content.” This included “legal” content. As we noted at the time, this seemed to create all sorts of problems. Since then, the UK has been seeking “comments” on this proposal, and many are coming in. However, the most incredible thing is that the UK’s plan takes so much for granted that the comments it’s asking for are basically “how do we tweak this proposal around the edges,” rather than “should we do this at all?”

Various organizations have been engaging, as they should. However, reading the Center for Democracy & Technology’s set of comments to the UK in response to its questions is a really frustrating experience. CDT knows how dumb this plan is. However, the specific questions that the UK government is asking don’t even let commenters really lay out the many, many problems with this approach.

And, of course, we just wrote about some new research that suggests a focus on “removing” terrorist content has actually harmed the efforts against terrorism, in large part by hiding from law enforcement and intelligence agencies what’s going on. In short, in this moral panic about “online harms”, we’re effectively sweeping useful evidence under the rug to pretend that if we hide it, nothing bad happens. Instead, the reality is that letting clueless people post information about their dastardly plans online seems to make it much easier to stop those plans from ever being brought to fruition.

But the UK’s “online harms” paper and approach don’t even seem to take that possibility into account — instead they assume that it’s obviously a good thing to censor this content, and that the only real questions are who has the power to do so and how.

The fact that they don’t even seem to be open to the idea that this entire approach may be counterproductive and damaging suggests that the momentum behind this proposal is unlikely to be stoppable — and we’re going to end up with a really dangerous, censorial regulation with little concern for all the harm it will cause, even with respect to actual harms like terrorist attacks.

Filed Under: content moderation, harm, online harms, terrorist content, uk

Removing Terrorist Content Isn't Helping Win The War On Terror

from the misguided-efforts dept

The terrorists are winning.

This shouldn’t come as a surprise. The War on Drugs hasn’t made a dent in drug distribution. Why should the War on Terror be any different? Two decades and several billion dollars later, what do we have to show for it? Just plenty of enemies foreign and domestic.

While politicians rail against “terrorist content,” encryption, and the right for people to remain generally unmolested by their governments, they’re leaning hard on social media platforms to eradicate this content ASAP.

And social media companies are doing all they can. Moderation is hard. It’s impossible when you’re serving millions of users at once. Nonetheless, the content goes down. Some of it is actual “terrorist content.” Some of it is journalism. Some of it is stuff no one would consider terroristic. But it all goes down because time is of the essence and the world is watching.

But to what end? As was noted here all the way back in 2017, efforts made to take down “terrorist content” resulted in the removal of evidence of war crimes. Not much has changed since then. This unfortunate side effect was spotted again in 2019. Target all the terrorist content you want, but destroying it destroys evidence that could be used to identify, track, and, ultimately, prosecute terrorists.

Sure, there’s some concern that unmoderated terrorist content contains the inherent power to radicalize internet randos. It’s a valid concern, but it might be outweighed by the positives of keeping the content live. To go further, it might be a net gain for society if terrorist content were accessible and easily shared. This seems counterintuitive, but there’s a growing body of research showing terrorists + internet use = thwarted terrorist plots.

Call me crazy, but this sounds like a better deal for the world’s population than dozens of surveillance agencies slurping up everything that isn’t nailed down by statute. This comes from Joe Whittaker at Lawfare, who summarizes research suggesting swift removal of “terrorist content” isn’t helping win the War on Terror.

In my sample, the success of an attempted terrorist event—defined as conducting an attack (regardless of fatalities), traveling to the caliphate, or materially supporting other actors by providing funds or otherwise assisting their event—is negatively correlated with a range of different internet behaviors, including interacting with co-ideologues and planning their eventual activity. Furthermore, those who used the internet were also significantly more likely to be known to the security services prior to their event or arrest. There is support for this within the literature; researchers at START found that U.S.-based extremists who were active on social media had lower chances of success than those who were not. Similarly, research on U.K.-based lone actors by Paul Gill and Emily Corner found that individuals who used the internet to plan their actions were significantly less likely to kill or injure a target. Despite the operational affordances that the internet can offer, terrorist actors often inadvertently telegraph their intentions to law enforcement. Take Heather Coffman, whose Facebook profile picture of an image of armed men with the text “VIRTUES OF THE MUJIHADEEN” alerted the FBI, which deployed an undercover agent and eventually led to her arrest.

Correlation isn’t causation, but there’s something to be said for visibility. This has been a noticeable problem ever since some law enforcement-adjacent grandstanders started nailing every online service with personal ads to the judicial wall for supposedly facilitating sex trafficking. Ads were pulled. Services were halted. And sex traffickers became increasingly difficult to track down.

As this research notes, radicalization might occur faster with heavier social media use. But this isn’t necessarily a bad thing. Greater visibility means easier tracking and better prevention.

Out in the open also means encryption isn’t nearly as much of an issue. Terrorist organizations appear to be voluntarily moving away from open platforms, sacrificing expeditious radicalization for privacy and security. But even that doesn’t appear to pose nearly as much of a problem as politicians and law enforcement officials suggest.

When looking at the Islamic State cohort in the United States, unlike other online behaviors, there is not a significant relationship between the use of end-to-end encryption and event success. Terrorists who used it were just as likely to be successful as those who did not.

Unfortunately, there are no easy answers here. While driving terrorists underground results in limited visibility for those seeking to thwart their plans, allowing them to take full advantage of open platforms increases the number of possible terrorists law enforcement must keep an eye on.

The downsides of aggressive moderation, however, are clear. Visibility decreases as the possibility for over-moderation increases. Evidence needed for investigations and prosecutions vanishes into the ether over the deafening roar of calls to “do more.”

Filed Under: content moderation, content removals, open source intelligence, terrorism, terrorist content

Flip Side To 'Stopping' Terrorist Content Online: Facebook Is Deleting Evidence Of War Crimes

from the not-for-the-first-time dept

Just last week, we talked about the new Christchurch Call, and how a bunch of governments and social media companies have made some vague agreements to try to limit and take down “extremist” content. As we pointed out last week, however, there appeared to be little to no exploration by those involved in how such a program might backfire and hide content that is otherwise important.

We’ve been making this point for many, many years, but every time people freak out about “terrorist content” on social media sites and demand that it gets deleted, what really ends up happening is that evidence of war crimes gets deleted as well. This is not an “accident” or a case of such systems being misapplied; it is a simple consequence of the fact that terrorist propaganda often is important evidence of war crimes. It’s things like this that make the idea of the EU’s upcoming Terrorist Content Regulation so destructive. You can’t demand that terrorist propaganda get taken down without also removing important historical evidence.

It appears that more and more people are finally starting to come to grips with this. The Atlantic recently had an article bemoaning the fact that tech companies are deleting evidence of war crimes, highlighting how such videos have actually been really useful in tracking down terrorists, so long as people can watch them before they get deleted.

In July 2017, a video capturing the execution of 18 people appeared on Facebook. The clip opened with a half-dozen armed men presiding over several rows of detainees. Dressed in bright-orange jumpsuits and black hoods, the captives knelt in the gravel, hands tied behind their back. They never saw what was coming. The gunmen raised their weapons and fired, and the first row of victims crumpled to the earth. The executioners repeated this act four times, following the orders of a confident young man dressed in a black cap and camouflage trousers. If you slowed the video down frame by frame, you could see that his black T-shirt bore the logo of the Al-Saiqa Brigade, an elite unit of the Libyan National Army. That was clue No. 1: This happened in Libya.

Facebook took down the bloody video, whose source has yet to be conclusively determined, shortly after it surfaced. But it existed online long enough for copies to spread to other social-networking sites. Independently, human-rights activists, prosecutors, and other internet users in multiple countries scoured the clip for clues and soon established that the killings had occurred on the outskirts of Benghazi. The ringleader, these investigators concluded, was Mahmoud Mustafa Busayf al-Werfalli, an Al-Saiqa commander. Within a month, the International Criminal Court had charged Werfalli with the murder of 33 people in seven separate incidents — from June 2016 to the July 2017 killings that landed on Facebook. In the ICC arrest warrant, prosecutors relied heavily on digital evidence collected from social-media sites.

The article notes, accurately, that this whole situation is kind of a mess. Governments (and some others in the media and elsewhere) are out there screaming about “terrorist content” online, but pushing companies to take it all down is having the secondary impact of both deleting that evidence from existence and making it that much more difficult to find those terrorists. And when people raise this concern, they’re mostly being ignored:

These concerns are being drowned out by a counterargument, this one from governments, that tech companies should clamp down harder. Authoritarian countries routinely impose social-media blackouts during national crises, as Sri Lanka did after the Easter-morning terror bombings and as Venezuela did during the May 1 uprising. But politicians in healthy democracies are pressing social networks for round-the-clock controls in an effort to protect impressionable minds from violent content that could radicalize them. If these platforms fail to comply, they could face hefty fines and even jail time for their executives.

As the article notes, the companies’ rush to appease governments demanding such content get taken down has already made the job of those open source researchers much more difficult, and has actually helped to hide more terrorists:

Khatib, at the Syrian Archive, said the rise of machine-learning algorithms has made his job far more difficult in recent months. But the push for more filters continues. (As a Brussels-based digital-rights lobbyist in a separate conversation deadpanned, “Filters are the new black, essentially.”) The EU’s online-terrorism bill, Khatib noted, sends the message that sweeping unsavory content under the rug is okay; the social-media platforms will see to it that nobody sees it. He fears the unintended consequences of such a law — that in cracking down on content that’s deemed off-limits in the West, it could have ripple effects that make life even harder for those residing in repressive societies, or worse, in war zones. Any further crackdown on what people can share online, he said, “would definitely be a gift for all authoritarian regimes. It would be a gift for Assad.”

Of course, this is no surprise. We see this in lots of contexts. For example, the focus on going after platforms for sex trafficking with FOSTA hampered the ability of police to find actual traffickers and victims by hiding that material from view. Indeed, just this week, a guy was sentenced for sex trafficking a teenager, and the way he was found was via Backpage.

This is really the larger point we’ve been trying to make for the better part of two decades. Focusing on putting liability and control on the intermediary may seem like the “easiest” solution to the fact that there is “bad” content online, but it creates all sorts of downstream effects that we might not like at all. It’s reasonable to say that we don’t want terrorists to be able to easily recruit new individuals to their cause, but if that makes it harder to stop actual terrorism, shouldn’t we be analyzing the trade-offs there? To date, that almost never happens. Instead, we get the form of a moral panic: this content is bad, therefore we need to stop this content, and the only way to do that is to make the platforms liable for it. That assumes — often incorrectly — a few different things, including the idea that magically disappearing the content makes the activity behind it go away. Instead, as this article notes, it often does the opposite and makes it more difficult for officials and law enforcement to track down those actually responsible.

It really is a question of whether we want to address the underlying problem (those actually doing bad stuff) or sweep it under the rug by deleting it and pretending it doesn’t happen. All of the efforts to put the liability on intermediaries really turn into an effort to sweep the bad stuff under the rug, to look the other way and pretend that if we can’t find it on a major platform, it’s not really happening.

Filed Under: christchurch call, content moderation, evidence, extremist content, terrorist content, war crimes
Companies: facebook, google, twitter

Governments And Internet Companies Agree On Questionable Voluntary Pact On Extremist Content Online

from the well-meaning-but-misguided dept

Yesterday, there was a big process, called the Christchurch Call, in which a bunch of governments and big social media companies basically agreed to take a more proactive role in dealing with terrorist and violent extremist content online. To its credit, the effort did include voices from civil society/public interest groups that raised issues about how these efforts might negatively impact freedom of expression and other human rights issues around the globe. However, it’s not clear that the “balance” they came to is a good one.

A free, open and secure internet is a powerful tool to promote connectivity, enhance social inclusiveness and foster economic growth.

The internet is, however, not immune from abuse by terrorist and violent extremist actors. This was tragically highlighted by the terrorist attacks of 15 March 2019 on the Muslim community of Christchurch — terrorist attacks that were designed to go viral.

The dissemination of such content online has adverse impacts on the human rights of the victims, on our collective security and on people all over the world.

The “Call” is not binding on anyone. It’s just a set of “voluntary commitments” to try to “address the issue of terrorist and violent extremist content online and to prevent the abuse of the internet….” There are a set of commitments from governments and a separate set from social media companies. On the government side the commitments are:

Counter the drivers of terrorism and violent extremism by strengthening the resilience and inclusiveness of our societies to enable them to resist terrorist and violent extremist ideologies, including through education, building media literacy to help counter distorted terrorist and violent extremist narratives, and the fight against inequality.

Ensure effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content, in a manner consistent with the rule of law and international human rights law, including freedom of expression.

Encourage media outlets to apply ethical standards when depicting terrorist events online, to avoid amplifying terrorist and violent extremist content.

Support frameworks, such as industry standards, to ensure that reporting on terrorist attacks does not amplify terrorist and violent extremist content, without prejudice to responsible coverage of terrorism and violent extremism.

Consider appropriate action to prevent the use of online services to disseminate terrorist and violent extremist content, including through collaborative actions, such as:

* Awareness-raising and capacity-building activities aimed at smaller online service providers;
* Development of industry standards or voluntary frameworks;
* Regulatory or policy measures consistent with a free, open and secure internet and international human rights law.

That mostly seems to stop short of demanding content be taken down, though that last point teeters on the edge. On the social media side, there is the following list of commitments:

Take transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, including its immediate and permanent removal, without prejudice to law enforcement and user appeals requirements, in a manner consistent with human rights and fundamental freedoms. Cooperative measures to achieve these outcomes may include technology development, the expansion and use of shared databases of hashes and URLs, and effective notice and takedown procedures.

Provide greater transparency in the setting of community standards or terms of service, including by:

* Outlining and publishing the consequences of sharing terrorist and violent extremist content;
* Describing policies and putting in place procedures for detecting and removing terrorist and violent extremist content.

Enforce those community standards or terms of service in a manner consistent with human rights and fundamental freedoms, including by:

* Prioritising moderation of terrorist and violent extremist content, however identified;
* Closing accounts where appropriate;
* Providing an efficient complaints and appeals process for those wishing to contest the removal of their content or a decision to decline the upload of their content.

Implement immediate, effective measures to mitigate the specific risk that terrorist and violent extremist content is disseminated through livestreaming, including identification of content for real-time review.

Implement regular and transparent public reporting, in a way that is measurable and supported by clear methodology, on the quantity and nature of terrorist and violent extremist content being detected and removed.

Review the operation of algorithms and other processes that may drive users towards and/or amplify terrorist and violent extremist content to better understand possible intervention points and to implement changes where this occurs. This may include using algorithms and other processes to redirect users from such content or the promotion of credible, positive alternatives or counter-narratives. This may include building appropriate mechanisms for reporting, designed in a multi-stakeholder process and without compromising trade secrets or the effectiveness of service providers’ practices through unnecessary disclosure.

Work together to ensure cross-industry efforts are coordinated and robust, for instance by investing in and expanding the GIFCT, and by sharing knowledge and expertise.

Facebook put up its own list of actions that it’s taking in response to this, but as CDT’s Emma Llanso points out, it’s missing some fairly important stuff about making sure these efforts don’t lead to censorship, especially of marginalized groups and individuals.

In response to all of this, the White House refused to join with the other countries who signed on to the voluntary commitments of the Christchurch Call, noting that it had concerns about whether it was appropriate and consistent with the First Amendment. That’s absolutely accurate and correct. Even if the effort is voluntary and non-binding, and even if it makes references to protecting freedom of expression, once a government gets involved in advocating for social media companies to take down content, it’s crossing a line. The Washington Post quoted law professor James Grimmelmann, who makes this point concisely:

“It’s hard to take seriously this administration’s criticism of extremist content, but it’s probably for the best that the United States didn’t sign,” said James Grimmelmann, a Cornell Tech law professor. “The government should not be in the business of ‘encouraging’ platforms to do more than they legally are required to — or than they could be required to under the First Amendment.”

“The government ought to do its ‘encouraging’ through laws that give platforms and users clear notice of what they’re allowed to do, not through vague exhortations that can easily turn into veiled threats,” Grimmelmann said.

And he’s also right that it’s difficult to take this administration’s position seriously, especially given that the very same day it refused to join this effort, it was also pushing forward with its sketchy plan to force social media companies to deal with non-existent “conservative bias.” So, on the one hand, the White House says it believes in the First Amendment and doesn’t want governments to get involved, and at the very same time, it’s suggesting that it can pressure social media into acting the way it wants. And, of course, this is also the same White House that has made other efforts to get social media companies to remove content from governments it dislikes, such as Iran’s.

So, yes, we should be wary of governments telling social media companies what content should and should not be allowed, so it’s good that the White House declined to support the Christchurch Call. But it’s difficult to believe it was doing so for any particularly principled reasons.

Filed Under: censorship, christchurch, christchurch call, extremism, free speech, human rights, social media, terrorist content, voluntary, white house
Companies: facebook, google, microsoft, twitter, youtube

EU Parliament Votes To Require Internet Sites To Delete 'Terrorist Content' In One Hour (By 3 Votes)

from the eu's-ongoing-attack-on-the-internet dept

A bit of deja vu here. Once again, the EU Parliament has done a stupid thing for the internet. As we’ve been discussing over the past few months, the EU has been pushing a really dreadful “EU Terrorist Content Regulation” with the main feature being a requirement that any site that can be accessed from the EU must remove any content deemed “terrorist content” by any vaguely defined “competent authority” within one hour of being notified. The original EU Commission version also included a requirement for filters to block reuploads and a provision that effectively turned websites’ terms of service documents into de facto law. In moving the Regulation to the EU Parliament, the civil liberties committee LIBE stripped the filters and the terms of service parts from the proposal, but kept in the one hour takedown requirement.

In a vote earlier today, the EU Parliament approved the version put forward by the committee, rejecting (bad) amendments to bring back the upload filters and the empowered terms of service, but also rejecting — by just three votes — an amendment to remove the insane one hour deadline.

Since this version is different from the absolutely bonkers one pushed by the European Commission, this now needs to go through a trilogue negotiation to reconcile the different versions, which will eventually lead to another vote. Of course, what that vote will look like may be anyone’s guess, given that the EU Parliamentary elections are next month, so it will be a very different-looking Parliament by the time this comes back around.

Either way, this whole concept is a very poorly thought out knee-jerk moral panic from people scared of the internet and who don’t understand how it works. Actually implementing this in law would be disastrous for the EU and for internet security. The only way, for example, that we could comply with the law would be to hand over backend access to our servers to strangers in the EU and empower them to delete whatever they wanted. This is crazy and not something we would ever agree to do. It is unclear how any company — other than the largest companies — could possibly even pretend to try to comply with the one hour deadline, and even then (as the situation with the Christchurch video showed) there is simply no way for even the largest and best resourced teams out there to remove this kind of content within one hour. And that’s not even touching on the questions around who gets to determine what is “terrorist content,” how it will be abused, and also what this will mean for things like historical archives or open source intelligence.

This entire idea is poorly thought out, poorly implemented and a complete mess. So, of course, the EU Parliament voted for it. Hopefully, in next month’s elections we get a more sensible cohort of MEPs.

Filed Under: censorship, eu, eu parliament, free speech, one hour, takedowns, terrorist content, terrorist content regulation