The UK's Entire Approach To 'Online Harms' Is Backwards… And No One Cares

from the this-is-not-a-good-idea dept

Back in April, the UK (with Theresa May making the announcement) released a plan to fine internet companies if they allowed “online harms” in the form of “abhorrent content.” This included “legal” content. As we noted at the time, this seemed to create all sorts of problems. Since then, the UK has been seeking “comments” on this proposal, and many are coming in. However, the most incredible thing is how much the plan simply assumes: the questions the UK is asking basically amount to “how do we tweak this proposal around the edges?” rather than “should we do this at all?”

Various organizations have been engaging, as they should, but reading the Center for Democracy & Technology’s set of comments to the UK in response to its questions is a really frustrating experience. CDT knows how dumb this plan is, yet the specific questions the UK government is asking don’t even let commenters really lay out the many, many problems with this approach.

And, of course, we just wrote about some new research suggesting that a focus on “removing” terrorist content has actually harmed efforts against terrorism, in large part by hiding what’s going on from law enforcement and intelligence agencies. In short, in this moral panic about “online harms,” we’re effectively sweeping useful evidence under the rug and pretending that if we hide it, nothing bad happens. Instead, the reality is that letting clueless people post information about their dastardly plans online seems to make it much easier to stop those plans from ever being brought to fruition.

But the UK’s “online harms” paper doesn’t even seem to take that possibility into account. Instead, it assumes that censoring this content is obviously a good thing, and that the only real questions are who has the power to do so and how.

The fact that the government doesn’t even seem open to the idea that this entire approach may be counterproductive and damaging suggests the momentum behind this proposal is unlikely to be stoppable. We’re going to end up with a really dangerous, censorial regulation, with little concern for all the harm it will cause, even when it comes to actual harms like terrorist attacks.

Filed Under: content moderation, harm, online harms, terrorist content, uk