DSA Framers Insisted It Was Carefully Calibrated Against Censorship; Then Thierry Breton Basically Decided It Was An Amazing Tool For Censorship

from the our-new-truth-czar-is-overreaching dept

A few weeks ago, I highlighted how the EU's chief Digital Services Act enforcer, Thierry Breton, was making a mess of things by sending broadly threatening letters (which have since been followed up with the opening of official investigations) to all the big social media platforms. His initial letter highlighted the DSA's requirements regarding takedowns of illegal content, but very quickly blurred the line between illegal content and disinformation.

Following the terrorist attacks carried out by Hamas against Israel, we have indications that your platform is being used to disseminate illegal content and disinformation in the EU.

I noted that the framers of the DSA have insisted up, down, left, right, and center that the DSA was carefully designed such that it couldn’t possibly be used for censorship. I’ve highlighted throughout the DSA process how this didn’t seem accurate at all, and a year ago when I was able to interview an EU official, he kept doing a kind of “of course it’s not for censorship, but if there’s bad stuff online, then we’ll have to do something, but it’s not censorship” dance.

Some people (especially on social media and especially in the EU) got mad about my post regarding Breton’s letters, either saying that he was just talking about illegal content (he clearly is not!) or defending the censorship of disinformation as necessary (one person even told me that censorship means something different in the EU).

However, it appears I'm not the only one alarmed by how Breton has taken the DSA and presented it as a tool for him to crack down on legal information that he personally finds problematic. Fast Company ran an article highlighting experts who said they were similarly unnerved by Breton's approach to this whole thing.

“The DSA has a bunch of careful, procedurally specific ways that the Commission or other authorities can tell platforms what to do. That includes ‘mitigating harms,’” Keller says. The problem with Breton’s letters, she argues, is that they “blow right past all that careful drafting, seeming to assume exactly the kind of unconstrained state authority that many critics in the Global South warned about while the DSA was being drafted.”

Meanwhile, others are (rightfully!) noting that these threat letters are likely to lead to the suppression of important information as well:

Ashkhen Kazaryan, senior fellow of free speech and peace at the nonprofit Stand Together, objects to the implication in these letters that the mere existence of harmful, but legal, content suggests companies aren’t living up to their obligations under the DSA. After all, there are other interventions, including warning labels and reducing the reach of content, that platforms may be using rather than removing content altogether. Particularly in times of war, Kazaryan, who is a former content policy manager for Meta, says these alternative interventions can be crucial in preserving evidence to be used later on by researchers and international tribunals. “The preservation of [material] is important, especially for things like actually verifying it,” Kazaryan says, pointing to instances where evidence of Syrian human rights offenses has been deleted en masse.

The human rights civil society group Access Now similarly came out with concerns about Breton's move-fast-and-break-speech approach.

Firstly, the letters establish a false equivalence between the DSA’s treatment of illegal content and “disinformation.” “Disinformation” is a broad concept and encompasses varied content which can carry significant risk to human rights and public discourse. It does not automatically qualify as illegal and is not per se prohibited by either European or international human rights law. While the DSA contains targeted measures addressing illegal content online, it more appropriately applies a different regulatory approach with respect to other systemic risks, primarily consisting of VLOPs’ due diligence obligations and legally mandated transparency. However, the letters strongly focus on the swift removal of content rather than highlighting the importance of due diligence obligations for VLOPs that regulate their systems and processes. We call on the European Commission to strictly respect the DSA’s provisions and international human rights law, and avoid any future conflation of these two categories of expression.

Secondly, the DSA does not contain deadlines for content removals or time periods under which service providers need to respond to notifications of illegal content online. It states that providers have to respond in a timely, diligent, non-arbitrary, and objective manner. There is also no legal basis in the DSA that would justify the request to respond to you or your team within 24 hours. Furthermore, by issuing such public letters in the name of DSA enforcement, you risk undermining the authority and independence of DG Connect’s DSA Enforcement Team.

Thirdly, the DSA does not impose an obligation on service providers to “consistently and diligently enforce [their] own policies.” Instead, it requires all service providers to act in a diligent, objective, and proportionate manner when applying and enforcing the restrictions based on their terms and conditions and for VLOPs to adequately address significant negative effects on fundamental rights stemming from the enforcement of their terms and conditions. Terms and conditions often go beyond restrictions permitted under international human rights standards. State pressure to remove content swiftly based on platforms’ terms and conditions leads to more preventive over-blocking of entirely legal content.

Fourthly, while the DSA obliges service providers to promptly inform law enforcement or judicial authorities if they have knowledge or suspicion of a criminal offence involving a threat to people’s life or safety, the law does not mention a fixed time period for doing so, let alone one of 24 hours. The letters also call on Meta and X to be in contact with relevant law enforcement authorities and EUROPOL, without specifying serious crimes occurring in the EU that would provide sufficient legal and procedural ground for such a request.

Freedom of expression and the free flow of information must be vigorously defended during armed conflicts. Disproportionate restrictions of fundamental rights may distort information that is vital for the needs of civilians caught up in the hostilities and for recording documentation of ongoing human rights abuses and atrocities that could form the basis for evidence in future judicial proceedings. Experience shows that shortsighted solutions that hint at the criminal nature of “false information” or “fake news” — without further qualification — will disproportionately affect historically oppressed groups and human rights defenders fighting against aggressors perpetrating gross human rights abuses.

No one is suggesting that the spread of mis- and disinformation regarding the crisis is a good thing, but the ways to deal with it are tricky, nuanced, and complex. And having a bumbling, egotistical blowhard like Breton acting as the dictator of social media speech is going to cause a hell of a lot more problems than it solves.

Filed Under: censorship, digital services act, disinformation, dsa, eu, thierry breton
Companies: meta, tiktok, twitter, x, youtube