thierry breton – Techdirt

Ctrl-Alt-Speech: Smells Like Teen Safety

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: ai, artificial intelligence, chatbots, child safety, content moderation, teen safety, thierry breton
Companies: instagram, meta, socialai

EU’s Top Censor Out Of A Job

from the gosh-what-a-shame dept

Thierry Breton has finally taken the next logical step in his role as the EU’s censor: he’s talked himself out of his job.

Thierry Breton, the former CEO of France Telecom, has spent the last few years as the Commissioner for the Internal Market in the EU, where he has positioned himself as a sort of tech regulatory czar. But now he’s out. While the press is describing it as a “resignation,” his own letter admits that it came because the President of the European Commission, Ursula von der Leyen, had requested that France propose someone else as its designated candidate for a European Commission position.

For the past few years, Breton has constantly boosted his own profile in pursuit of tech policy results that he, personally, wanted. As we’ve described, he has a long history of both self-promotion and inflating his job as Commissioner into basically being a full tech czar. He has repeatedly interpreted the Digital Services Act (DSA) to mean that he can demand certain content be removed, which has only served to piss off EU colleagues, who keep insisting the DSA is not a censorship law.

He has also been leading the charge in the EU to hamstring AI tools, gleefully pushing a bill that was mostly conceived of prior to the generative AI boom, and then trying to retrofit the law to that world, resulting in quite a regulatory mess.

Still, it had been assumed that he would keep his job as the digital regulator as von der Leyen prepared to present her new slate of Commissioners. Each member state gets to designate someone to be a Commissioner, and the bigger countries (e.g., Germany, France) often get the higher-profile, more important roles.

Emmanuel Macron had designated Breton to continue in that role, though there had been some concerns that Breton was a liability. Breton had also pissed off von der Leyen back in March, posting a mocking tweet about how she was outvoted by her own party and suggesting maybe she shouldn’t be leading the EU.

On Monday, Breton tweeted out his “resignation.” It’s clear that von der Leyen had asked Macron for someone other than Breton, promising Macron that France would get a better, more influential Commissioner job if it was anyone but Breton. Hence Breton’s resignation:

On 25 July, President Emmanuel Macron designated me as France’s official candidate for a second mandate in the College of Commissioners as he had already publicly announced on the margins of the European Council on 28 June. A few days ago, in the very final stretch of negotiations on the composition of the future College, you asked France to withdraw my name for personal reasons that in no instance you have discussed directly with me and offered, as a political trade-off, an allegedly more influential portfolio for France in the future College. You will now be proposed a different candidate.

And thus, he submitted his resignation “effective immediately.”

Somehow, he posted this letter without including a photo of himself. This is surprising, given the vast majority of his tweets include self-portraits (not selfies).

Macron quickly designated France’s outgoing foreign minister, Stéphane Séjourné, as Breton’s replacement. This doesn’t mean that Séjourné will get Breton’s assignment either, or that whoever does eventually get it won’t be even worse than Breton was. But it should serve as a reminder, yet again, of how much power an individual European Commissioner has over how some of these laws are interpreted.

On Tuesday, von der Leyen rolled out her proposed slate of Commissioners. Séjourné is up for “Prosperity and Industrial Strength,” whatever that means. It’s not at all clear where the tech policy portfolio will land, as there are a few places it might end up. Finland’s Henna Virkkunen is given “tech sovereignty,” which again is not clear. Either way, Breton is out, and we’ll see what comes of the new Commission.

Filed Under: emmanuel macron, eu, eu commission, france, thierry breton, ursula von der leyen

EU Officials Seem Pretty Pissed Off At Thierry Breton For His Censorial Letter To Elon

from the eu's-censorship-czar dept

Yesterday, we wrote about EU Commissioner Thierry Breton’s preposterously stupid letter to Elon Musk, warning him that Elon had to make sure his Spaces conversation with Donald Trump did not include any “harmful content” or it might violate the DSA. He also demanded that Elon tell Eurocrats what ExTwitter was doing to make sure that no “harmful” speech from Trump could possibly reach Europeans’ brains.

It was very stupid.

Elon’s response — posting a meme telling Breton to “fuck yourself in the face” — while not exactly a masterclass in diplomatic communication, at least made his feelings on the matter abundantly clear. It also made the point that Breton appeared to be using the DSA in a manner that Europeans insisted the DSA would never enable: to order companies to censor content.

Since then, I’ve had (mostly) Europeans in the comments to my post and on Bluesky making some version of the standard argument I hear every time I point out the censorial nature of EU laws: “the EU doesn’t have a First Amendment, you stupid American, and we’re just trying to stop Elon from enabling more violence.”

This is not a particularly compelling response. I may be a stupid American, but I still think I’m allowed to discuss the actual impacts of laws that will be used to regulate speech. I’m quite aware that the EU has no First Amendment. All I’m pointing out is how that enables problematic and concerning censorship, such as what Breton sought to do this week.

Indeed, it appears that other EU officials agree that Breton went too far. The Financial Times covered the story by noting that other EU officials were wholly unaware that Breton was going to send that letter, and they sound displeased about it:

On Tuesday the European Commission denied that Breton had approval from its president Ursula von der Leyen to send the letter.

“The timing and the wording of the letter were neither co-ordinated or agreed with the president nor with the [commissioners],” it said.

An EU official, who asked not to be named, said: “Thierry has his own mind and way of working and thinking.”

Now, remember, EU Commissioners are not elected. After each EU election, each country gets a Commissioner, whose portfolio is assigned through some sort of opaque process involving the European Council and the Commission President-elect. The President-elect gets to submit a full slate of Commissioners, which the EU Parliament only gets a yes/no vote on (on the entire slate, not individual Commissioners).

So now we have an unelected bureaucrat who was put in charge of a law that enables him to kick off investigations that could result in massive fines to (almost entirely) American companies, and who appears to have total free rein to send threatening letters based on his own personal censorship desires.

It seems less than great.

In Politico, another unnamed EU official is even more direct about how bad this looks for Breton and the DSA.

Four separate EU officials, speaking on the condition of anonymity, said Breton’s warning to Musk had surprised many within the Commission. The bloc’s enforcers were still investigating the platform for potential wrongdoing and the EU did not want to be seen as potentially interfering in the U.S. presidential election.

“The EU is not in the business of electoral interference,” said one of those officials. “DSA implementation is too important to be misused by an attention-seeking politician in search of his next big job.”

So, it sure looks as though some others in the EU are similarly disturbed by Breton using the DSA as his own personal censorship tool. Perhaps, if the various Europeans want to insist that the DSA won’t be used for such censorship, they should fix things so that an “attention-seeking politician in search of his next big job” can’t abuse the law that way? The DSA needs safeguards to prevent abuse by overzealous regulators, or it risks enabling the very censorship many insisted it would prevent. Unless, of course, they want more EU officials being told to perform anatomically challenging acts.

Filed Under: censorship, donald trump, dsa, election interference, elon musk, eu commission, free speech, harmful content, thierry breton
Companies: twitter, x

Breton Wields DSA As Censorship Tool, Musk Tells Him To ‘Go Fuck His Own Face’

from the meme-diplomacy dept

I know that many Elon Musk supporters assume that my mockery of the many stupid things that Elon does means that I won’t give him a fair shake. But when he does something good, I’m happy to highlight it and give him kudos.

In this case, he’s right (if a bit provocative) in telling EU Commissioner Thierry Breton to, well, [checks notes] go fuck his own face.

Let’s take a step back, because this requires some background. We’ve been warning for many years that the EU’s Digital Services Act (DSA) would be abused for censorship by the government. EU officials and supporters of the DSA kept insisting that we were overreacting. But Thierry Breton has made it clear that, since the DSA falls under his purview as a Commissioner, he treats it as his own personal censorship tool for anything he dislikes online.

When Elon first sought to buy Twitter, Breton had a sit-down meeting with him. He got Musk to stupidly give a full-throated endorsement of the DSA. We warned him at the time that he (1) looked foolish doing so and (2) would regret it. Now that ExTwitter has been accused of violating the DSA, it looks like our warning was prescient.

Anyway, that brings us to yesterday. As you might have heard (or, I guess, given the massive technical difficulties, perhaps you didn’t hear), Donald Trump joined Elon Musk for a conversation on “Spaces,” the extremely buggy real-time audio chat feature on ExTwitter. Before that happened, however, Thierry Breton posted one of his typically smug open letters that more or less warns Elon that if Trump said anything bad, the EU might seek to take action against ExTwitter.

[Image: Breton’s open letter to Musk]

There’s a lot of text in that letter. If you really want to read it, here’s a larger version, but the short version is “Hey, Elon, I hear you’re going to have Donald Trump on. Under the DSA, I’m warning you that you better stop him from saying anything that I consider ‘harmful.’ In the meantime, I need you to waste a bunch of your time and tell me how you plan to block Trump from saying such things.”

While Breton sprinkles the phrase “illegal content” throughout the letter, he’s not really warning about that. First of all, Trump is not saying anything that is actually illegal, no matter how much nonsense he spews. But more importantly, Breton very clearly calls out “harmful content” in the third paragraph:

This notably means ensuring, on one hand, that freedom of expression and of information, including media freedom and pluralism, are effectively protected and, on the other hand, that all proportionate and effective mitigation measures are put in place regarding the amplification of harmful content in connection with relevant events, including live streaming, which, if unaddressed, might increase the risk profile of X and generate detrimental effects on civic discourse and public security. This is important against the background of recent examples of public unrest brought about by the amplification of content that promotes hatred, disorder, incitement to violence, or certain instances of disinformation.

That is flat out demanding that Musk and ExTwitter have tools in place to silence Donald Trump if he says something that Breton and other EU technocrats believe is “harmful.”

And that’s bullshit.

Especially coming right after he says “freedom of expression… are effectively protected.”

And therefore, I actually appreciate Elon going into meme form to tell Breton what he thought of his letter:

[Image: Musk’s meme response to Breton]

I mean, I wouldn’t necessarily call it the most diplomatic approach. Nor is it one likely to endear Musk to many in the EU ruling class. But, honestly, someone should be calling out Breton and his repeated use of the DSA as a tool for issuing personal threats over speech he disagrees with.

I’m no fan of either Trump or Musk, but (1) the idea that anything said in the conversation would violate the law is crazy and (2) the idea that it’s any of the EU’s business is beyond stupid. And at least someone is calling it out.

Filed Under: censorship, donald trump, dsa, elon musk, eu, free speech, thierry breton
Companies: twitter, x

Musk’s DSA Debacle: From ‘Exactly Aligned’ To Accused Of Violations

from the what-comes-around dept

Elon Musk declaring the EU DSA regulation as “exactly aligned with my thinking” and agreeing with “everything” it mandates is looking pretty hilarious at this point.

Elon Musk loves endorsing things he clearly doesn’t understand and then lashes out when they backfire. Last week, we had the story of how he was demanding criminal prosecution of the Global Alliance for Responsible Media (GARM) just one week after ExTwitter announced it had “excitedly” rejoined GARM. But now he’s really outdone himself.

Two years ago, soon after Elon announced his bid to take over Twitter because (he claimed) he was a “free speech absolutist,” he met with the EU’s Internal Market Commissioner, Thierry Breton, and gave a full-throated endorsement of the EU’s Digital Services Act (DSA). At the time, we pointed out how ridiculous this was, as the DSA, at its heart, is an attack on free speech and the rights of companies to moderate as they wish.

At the time, we pointed out how it showed just how incredibly naïve and easily played Elon was. He was endorsing a bill that clearly went against everything he had been saying about “free speech” on social media. Indeed, the previous management of Twitter — the one so many people mocked as being against free speech — had actually done important work pushing back on the worst aspects of the DSA when it was being negotiated. And then Musk came in and endorsed the damn thing.

So, of course, the EU has been on the attack ever since he’s taken over the company. Almost immediately Breton publicly started lashing out at Musk over his moderation decisions, and insisting that they violated the DSA. As we highlighted at the time, this seemed ridiculously censorial and extremely problematic regarding free expression.

But, of course, the whole thing was pretty much a foregone conclusion. And late last week, the EU formally charged ExTwitter with violating the DSA, the very law Elon originally called great and said he agreed with.

The Commission has three findings, and each of them seems problematic in the typical simplistic, paternalistic EU manner, written by people who have never had to manage social media.

To be clear, in all three cases, I do wish that ExTwitter were doing what the EU is demanding, because I think it would be better for users and the public. But, I don’t see how it’s any business of EU bureaucrats to demand that ExTwitter do things the way they want.

First, they don’t like how Elon changed the setup of the “blue check” verified accounts.

And, I mean, I’ve written a ton of words about why Elon doesn’t understand verification, and why his various attempts to change the verification system have been absurd and counterproductive. But that doesn’t mean it “deceives users.” Nor does it mean that the government needs to step in. Let Elon fall flat on his face over and over again. This entire approach is based on Breton and EU technocrats assuming that the public is too stupid to realize how broken ExTwitter has become.

As stupid as I think Musk’s approach to verification is, the fact that it doesn’t “correspond to industry practice” shouldn’t matter. That’s how experimentation happens. Sometimes that experimentation is stupid (as we see with Musk’s constantly changing and confusing verification system), but sometimes it allows for something useful and new.

Here the complaint from the EU seems ridiculously elitist: how dare it be that “everyone” can get verified?

Are there better ways to handle verification? Absolutely. Do I trust EU technocrats to tell platforms the one true way to do so? Absolutely not.

Second, the EU is mad about ExTwitter’s apparent lack of advertising transparency:

I wish there were more details on this because it’s not entirely clear what the issue is here. Transparency is a good thing, but as we’ve said over and over again, mandated transparency leads to very real problems.

There are serious tradeoffs with transparency, and having governments require it can lead to problematic outcomes regarding privacy and competition. It’s quite likely that ExTwitter’s lack of a searchable repository has more to do with (1) Elon having a barebones engineering staff that only focuses on the random things he’s interested in and that doesn’t include regulatory compliance, (2) Elon really, really hates it when the media is able to point out that ads are showing up next to awful content, and (3) a repository might give more of a view into how the quality of ads on the site has gone from top end luxury brands to vapes and crypto scams.

So, yes, in general, more transparency on ads is a good thing, but I don’t think it’s the kind of thing the government should be mandating, beyond the basic requirements that ads need to be disclosed.

Finally, the last item is similar to the second one in some ways, regarding researcher access to data:

And, again, in general, I do wish that ExTwitter were better at giving researchers access to data. I wish that it made it possible for researchers to have API access for free, instead of trying to charge them $42,000 per month.

But, again, there’s a lot of nuance here that the EU doesn’t understand or care about. Remember that Cambridge Analytica began as an “academic research project” using the Facebook API. Then it turned into one of the biggest (though, quite over-hyped) privacy scandals related to social media in the last decade.

I have no doubt that if ExTwitter opened up its API access to researchers and another Cambridge Analytica situation happened, the very same EU Commissioners issuing these charges would immediately condemn the company for the sin of making that data available.

Meanwhile, Elon is claiming in response to all of this that the Commission offered him an “illegal secret deal”: if ExTwitter “quietly censored speech without telling anyone, they would not fine” the company. Musk also claimed that other companies accepted that deal, while ExTwitter did not.

[Image: Musk’s post alleging a secret deal]

So, this is yet another situation in which both sides are being misleading and confusing. Again, the structure of the DSA is such that its very nature is censorial. This is what we’ve been pointing out for years, and why we were horrified that Elon so loudly endorsed the DSA two years ago.

But the suggestion that the EU Commission would offer “secret deals” to companies to avoid fines does not at all match how things actually work. Thierry Breton’s explanation that there was no “secret deal” with anyone, and that it was ExTwitter’s own staff who asked what terms might settle the complaint, rings very true.

[Image: Breton’s response to Musk’s claims]

In the end, both sides are guilty of overblown dramatics. Elon Musk continues to flounder spectacularly at managing a social media platform, making a series of blunders that even his fiercest advocates can’t overlook. However, the EU’s role is equally questionable. Their enforcement of the DSA seems overly paternalistic and censorial, enforcing best practices that may not even be best and reeking of condescension.

The allegations of an “illegal secret deal” are just another smoke screen in this complex spectacle. It’s far more likely that the EU Commission pointed to the DSA and offered standard settlement terms that ExTwitter rebuffed, turning it into a grandiose narrative.

This debacle offers no real heroes — just inflated egos and problematic regulations. What we’re left with is an unending mess where no one truly wins. Musk’s mistaken endorsement of the DSA was a red flag from the beginning, showing that hasty alliances in the tech-policy arena often lead to chaos rather than clarity.

There are a ton of nuances and tradeoffs in the tech policy space, and neither Musk nor Breton seem to care about those details. It’s all about the grandstanding and the spectacle.

So, here we stand: a free speech absolutist who endorsed censorship regulations and a regulatory body enforcing broad and suspect mandates. It’s a circus of hypocrisy and heavy-handedness, proving that in the clash between tech giants and bureaucratic juggernauts, the rest of us become unwilling spectators.

Filed Under: ads, blue checks, dsa, elon musk, eu, regulations, research, thierry breton, transparency, verification
Companies: twitter, x

The EU’s Investigation Of ExTwitter Is Ridiculous & Censorial

from the bureaucrats-should-not-be-determining-speech-policy dept

People keep accusing me of criticizing Elon Musk because I “hate” him. But I don’t hate him, nor do I criticize him out of any personal feelings at all, beyond thinking that he is often hypocritical in his decision-making and makes choices that defy common sense and logic. But when he does the right thing, I’m equally happy to call it out positively.

And while I’ve seen some people cheering on the EU’s new investigation of ExTwitter under the DSA (Digital Services Act), I think it’s extremely problematic and hope that Elon fights it. As we’ve explained, the DSA — while more thoughtful and careful in its approach than most US legislation about social media — remains a tool that can be abused for censoring speech.

Supporters of the DSA kept insisting to me that it would never be used that way, while wink-wink-nudge-nudging that if it didn’t magically stop ill-defined bad content online then it had somehow failed.

And thus, it was quite notable when the EU’s unelected technocrat enforcer, Thierry Breton, started threatening ExTwitter and other Silicon Valley companies earlier this year. The most notable thing was that Breton lumped together illegal content (which the sites are required to take down) and “disinformation,” which (in theory!) they’re not required to take down, but are supposed to have some form of best practices for responding to.

By lumping the two together, Breton falsely suggested that websites were required to remove disinformation under the DSA, which would be quite problematic, given that there is no agreed-upon definition of disinformation, and there are often extremely conflicting beliefs about what is and is not disinformation.

And, yet, this new investigation seems focused on exactly that, among other things:

This feels extremely heavy handed and really none of the EU’s business. Community Notes, while not a replacement for a full trust & safety effort, is a really unique and worthwhile experiment (and one that I’d like to see other sites implement as well). How exactly does one judge the “effectiveness” of the system and how is that the EU’s business?

Similarly, this seems really sketchy as well:

I mean, yes, Elon fucked up the whole “blue check as a marker of authority” concept by selling them, rather than using it as part of an actual verification system, but again, calling it “deceptive design” seems like a ridiculous statement, and suggests that the EU now feels it’s reasonable to critique product choices by companies.

Even if we think Elon’s choices around this were dumb and wholly counterproductive, that really shouldn’t be for the government to step in and decide.

And, of course, by kicking off this investigation over such silly things, it really undermines what might be legitimate concerns and areas of investigation, making the whole process — and the DSA itself — appear to be less credible.

Still, I can’t help but close this story with a bit of a “told ya so” directed at Elon. Remember, weeks after announcing his intention to purchase Twitter, Elon sat down with Breton and gave a full-throated endorsement of the DSA approach. At the time, we warned him that if he really supported free speech, he’d actually be speaking out about the risks for free speech under the DSA (something old Twitter did in pushing back against earlier drafts of it). But instead, he told Breton that he agreed with this approach. And now he’s its first victim.

I hope that he has ExTwitter fight back against this intrusion, as that would help make it clear that the DSA’s rules should not get this deep into the level of tinkering with content on a site or with random features of a site the EU dislikes.

Filed Under: community notes, deceptive design, dsa, elon musk, free speech, investigation, thierry breton, verification
Companies: twitter, x

DSA Framers Insisted It Was Carefully Calibrated Against Censorship; Then Thierry Breton Basically Decided It Was An Amazing Tool For Censorship

from the our-new-truth-czar-is-overreaching dept

A few weeks ago, I highlighted how the EU’s chief Digital Services Act enforcer, Thierry Breton, was making a mess of things by sending broadly threatening letters (which have since been followed up with official investigations) to all the big social media platforms. His initial letter highlighted the DSA’s requirements regarding takedowns of illegal content, but very quickly blurred the line between illegal content and disinformation.

Following the terrorist attacks carried out by Hamas against Israel, we have indications that your platform is being used to disseminate illegal content and disinformation in the EU.

I noted that the framers of the DSA have insisted up, down, left, right, and center that the DSA was carefully designed such that it couldn’t possibly be used for censorship. I’ve highlighted throughout the DSA process how this didn’t seem accurate at all, and a year ago when I was able to interview an EU official, he kept doing a kind of “of course it’s not for censorship, but if there’s bad stuff online, then we’ll have to do something, but it’s not censorship” dance.

Some people (especially on social media and especially in the EU) got mad about my post regarding Breton’s letters, either saying that he was just talking about illegal content (he clearly is not!) or defending the censorship of disinformation as necessary (one person even told me that censorship means something different in the EU).

However, it appears I’m not the only one alarmed by how Breton has taken the DSA and presented it as a tool for him to crack down on legal information that he personally finds problematic. Fast Company had an article highlighting experts saying they were similarly unnerved by Breton’s approach to this whole thing.

“The DSA has a bunch of careful, procedurally specific ways that the Commission or other authorities can tell platforms what to do. That includes ‘mitigating harms,’” Keller says. The problem with Breton’s letters, she argues, is that they “blow right past all that careful drafting, seeming to assume exactly the kind of unconstrained state authority that many critics in the Global South warned about while the DSA was being drafted.”

Meanwhile, others are (rightfully!) noting that these threat letters are likely to lead to the suppression of important information as well:

Ashkhen Kazaryan, senior fellow of free speech and peace at the nonprofit Stand Together, objects to the implication in these letters that the mere existence of harmful, but legal, content suggests companies aren’t living up to their obligations under the DSA. After all, there are other interventions, including warning labels and reducing the reach of content, that platforms may be using rather than removing content altogether. Particularly in times of war, Kazaryan, who is a former content policy manager for Meta, says these alternative interventions can be crucial in preserving evidence to be used later on by researchers and international tribunals. “The preservation of [material] is important, especially for things like actually verifying it,” Kazaryan says, pointing to instances where evidence of Syrian human rights offenses have been deleted en masse.

The human rights civil society group Access Now similarly came out with concerns about how Breton’s move-fast-and-break-speech approach might come across.

Firstly, the letters establish a false equivalence between the DSA’s treatment of illegal content and “disinformation.” “Disinformation” is a broad concept and encompasses varied content which can carry significant risk to human rights and public discourse. It does not automatically qualify as illegal and is not per se prohibited by either European or international human rights law. While the DSA contains targeted measures addressing illegal content online, it more appropriately applies a different regulatory approach with respect to other systemic risks, primarily consisting of VLOPs’ due diligence obligations and legally mandated transparency. However, the letters strongly focus on the swift removal of content rather than highlighting the importance of due diligence obligations for VLOPs that regulate their systems and processes. We call on the European Commission to strictly respect the DSA’s provisions and international human rights law, and avoid any future conflation of these two categories of expression.

Secondly, the DSA does not contain deadlines for content removals or time periods under which service providers need to respond to notifications of illegal content online. It states that providers have to respond in a timely, diligent, non-arbitrary, and objective manner. There is also no legal basis in the DSA that would justify the request to respond to you or your team within 24 hours. Furthermore, by issuing such public letters in the name of DSA enforcement, you risk undermining the authority and independence of DG Connect’s DSA Enforcement Team.

Thirdly, the DSA does not impose an obligation on service providers to “consistently and diligently enforce [their] own policies.” Instead, it requires all service providers to act in a diligent, objective, and proportionate manner when applying and enforcing the restrictions based on their terms and conditions and for VLOPs to adequately address significant negative effects on fundamental rights stemming from the enforcement of their terms and conditions. Terms and conditions often go beyond restrictions permitted under international human rights standards. State pressure to remove content swiftly based on platforms’ terms and conditions leads to more preventive over-blocking of entirely legal content.

Fourthly, while the DSA obliges service providers to promptly inform law enforcement or judicial authorities if they have knowledge or suspicion of a criminal offence involving a threat to people’s life or safety, the law does not mention a fixed time period for doing so, let alone one of 24 hours. The letters also call on Meta and X to be in contact with relevant law enforcement authorities and EUROPOL, without specifying serious crimes occurring in the EU that would provide sufficient legal and procedural ground for such a request.

Freedom of expression and the free flow of information must be vigorously defended during armed conflicts. Disproportionate restrictions of fundamental rights may distort information that is vital for the needs of civilians caught up in the hostilities and for recording documentation of ongoing human rights abuses and atrocities that could form the basis for evidence in future judicial proceedings. Experience shows that shortsighted solutions that hint at the criminal nature of “false information” or “fake news” — without further qualification — will disproportionately affect historically oppressed groups and human rights defenders fighting against aggressors perpetrating gross human rights abuses.

No one is suggesting that the spread of mis- and disinformation regarding the crisis is a good thing, but the ways to deal with it are tricky, nuanced, and complex. And having a bumbling, egotistical blowhard like Breton acting like the dictator for social media speech is going to cause a hell of a lot more problems than it solves.

Filed Under: censorship, digital services act, disinformation, dsa, eu, thierry breton
Companies: meta, tiktok, twitter, x, youtube

Would Elon Pull ExTwitter Out Of The EU To Avoid The DSA Overreach?

from the perhaps-he-should,-but-he-won’t dept

This course of events was all too predictable. In May of 2022, while Elon was still in the “trying to buy Twitter” stage, we pointed out the absolute ridiculousness of him meeting with the EU’s Thierry Breton and saying that he fully endorsed the EU’s DSA approach. As we noted at the time, the whole framework of the DSA was set up to enable the EU to force websites like Twitter to suppress speech, and Elon’s endorsement of it (while claiming to be a “free speech absolutist”) suggested that it would be ridiculously easy for politicians around the globe to play Musk for a fool on speech suppression (something that has since been proven true on multiple occasions).

In fact, if you go back and watch the original video of Breton and Musk, you can almost see Breton snickering at knowing how much he had played Musk.

Of course, last week, Breton started shaking his censor stick at a bunch of social media companies, sending many of them letters effectively demanding they remove disinformation or face massive fines under the DSA. As we noted, this was a dangerous and stupid thing for Breton to do, even if we agree that Elon’s been terrible for exTwitter and is wholly unprepared for dealing with the kind of disinfo flowing during a modern crisis. That’s no excuse for the government to demand censorship, however.

A day after sending his threat letter to Musk (not, by the way, Linda Yaccarino), Breton took things up a notch, initiating an official investigation into exTwitter under the DSA.

Today the European Commission services formally sent X a request for information under the Digital Services Act (DSA). This request follows indications received by the Commission services of the alleged spreading of illegal content and disinformation, in particular the spreading of terrorist and violent content and hate speech. The request addresses compliance with other provisions of the DSA as well.

Following its designation as Very Large Online Platform, X is required to comply with the full set of provisions introduced by the DSA since late August 2023, including the assessment and mitigation of risks related to the dissemination of illegal content, disinformation, gender-based violence, and any negative effects on the exercise of fundamental rights, rights of the child, public security and mental well-being.

In this particular case, the Commission services are investigating X’s compliance with the DSA, including with regard to its policies and actions regarding notices on illegal content, complaint handling, risk assessment and measures to mitigate the risks identified. The Commission services are empowered to request further information to X in order to verify the correct implementation of the law.

Now, according to Insider, Musk is considering just closing off the EU from exTwitter rather than deal with this.

I actually think this is the right move. Breton is throwing around his censorial weight, and it would be great if Musk actually did push back a little bit. At the very least, this could establish some boundaries on what the DSA actually enables an unelected bureaucrat like Breton to do regarding internet speech.

In recent weeks Elon Musk has suggested Twitter could stop being accessible in Europe in order to avoid new regulation enacted by the European Commission.

Musk is increasingly frustrated with having to comply with the Digital Services Act, according to a person familiar with the company. The Tesla billionaire, who acquired Twitter, now called X, a year ago for $44 billion, has discussed simply removing the app’s availability in the region, or blocking users in the European Union from accessing it, the person said.

That said, I find it difficult to believe he’d actually do it. As we’ve highlighted, traffic is down. Ad revenue is way, way down. No one’s signing up for “Premium,” and his new $1/year plan is likely to go down in flames as well.

Is he really going to cut off over 400 million EU residents? It… seems unlikely.

Filed Under: content moderation, disinformation, dsa, elon musk, eu, thierry breton
Companies: twitter, x

Sure, There’s Disinfo On ExTwitter, But The EU Should Not Be Demanding Censorship

from the eu-censors-take-over dept

Some of us have been warning about the dangers of the Digital Services Act (DSA) in the EU for quite some time, and pointed out that Elon Musk was effectively endorsing censorship in May of 2022 (after announcing his plans to purchase then-Twitter) by meeting with the EU’s Thierry Breton and saying that the DSA was “exactly aligned” with his thinking about his plans for Twitter content moderation. As we pointed out at the time, this was crazy, because the DSA is set up to position the EU government as ultimate censors.

Nearly a year ago, I got to moderate a panel at the EU’s brand new offices in San Francisco (set up for the new EU censors to be closer to the internet platforms), where I was told repeatedly by the top EU official in that office, Gerard de Graaf, that there was no way that the DSA would be used for censorship, and that it was only about “best practices” (while then admitting that if bad content was still online, they’d have to crack down on companies). It was clear that the EU officials were doing a nonsense two-step in these discussions. They will insist up and down that the DSA isn’t about censorship, but then immediately point out that if you leave up content they don’t want, it will violate the DSA.

Indeed, as the DSA has now gone into effect, last month EU officials released a document that reveals the DSA is very much about censorship. The boring sounding “Application of the risk management framework to Russian disinformation campaigns” basically says that failing to delete Kremlin disinformation likely violates the DSA.

No matter what you think of Russian disinformation tactics, we should be very, very concerned when governments step in and tell companies how they must moderate, with threats of massive fines. That never ends well. And the EU is already making it clear that they view the DSA as a weapon to hold over the heads of websites.

On Tuesday, the very same Thierry Breton who Elon Musk insisted he was “aligned” with tweeted a letter addressed to Musk (notably not company “CEO” Linda Yaccarino) basically telling him that exTwitter needs to remove disinformation about the Hamas attacks in Israel.

Now, there’s no doubt that there have been tremendous amounts of disinformation about the attacks flooding across exTwitter (and if I can find the time to finish it, I have another article about it coming). But no matter what you think of that, it should never be the job of the government to step in and threaten websites over their moderation practices. That never leads to good results, and always (always, always) leads to abuse of power by the governments to silence dissent and marginalized voices.

So, this kind of language from Breton’s letter is dangerous nonsense:

Following the terrorist attacks carried out by Hamas against Israel, we have indications that your platform is being used to disseminate illegal content and disinformation in the EU.

If the content is illegal, then show which laws are being broken, and have law enforcement go after the perpetrators. If it’s disinformation, which is not illegal, then the government can respond to the disinformation and seek to debunk it. But this letter is very clearly threatening Musk, telling him that he has “very precise obligations regarding content moderation.”

And while Breton (like de Graaf) then tapdances around the issue by talking about “transparency” and “effective mitigation,” the throughline is clear: if you allow disinformation about topics that the EU government doesn’t want spoken about, it will accuse you of violating the DSA.

I’m all for websites figuring out the best way to deal with disinformation on their own platforms. That can include a variety of responses such as responding to and debunking the misinformation, making it less visible, or any number of other measures up to and including removing the content or banning accounts. But it should be up to the sites themselves, and not the government.

In response to Breton’s tweet, Musk did some tap dancing himself, saying:

Our policy is that everything is open source and transparent, an approach that I know the EU supports. Please list the violations you allude to on 𝕏, so that the public can see them.

Breton responded:

You are well aware of your users’ — and authorities’ — reports on fake content and glorification of violence. Up to you to demonstrate that you walk the talk. My team remains at your disposal to ensure DSA compliance, which the EU will continue to enforce rigorously.

Which is… nonsense. Again, this is basically the way the Great Firewall in China originally was set up. Officials would tell ISPs “don’t let anything bad through… or else” without ever defining what was bad and what wasn’t allowed. The end result was that ISPs in China went aggressively towards overblocking content to avoid potential liability.

That doesn’t mean Musk’s response is great either. Asking EU officials to publicly post what disinfo they find problematic directly to Musk himself is… not a reasonable process. The DSA actually has requirements for a process enabling governments to flag content via “trusted flaggers.” Under such a program, exTwitter should then be able to evaluate the content and determine how to deal with it, and then be transparent about what it’s doing (including if it decides the content is fine and should be left alone). But, having an EU official tag Elon in a tweet is… um… not that at all. It’s just all silly posturing by both sides.

Again, I think that Musk could have done many, many things to better deal with disinformation on exTwitter. But it’s not the government’s place to step in and threaten him over speech.

Filed Under: censorship, content moderation, disinformation, dsa, elon musk, free speech, hamas, israel, russia, thierry breton
Companies: twitter, x

Large EU Internet Retailer Whines That It Shouldn’t Have To Comply With The DSA’s Most Stringent Rules

from the wait,-no-one-told-us-this-might-apply-to-eu-companies-too! dept

A few months ago, when the EU designated 17 companies as “VLOPs” — Very Large Online Platforms — subject to the most stringent regulations, one name that confused a lot of folks in the US was Zalando, which is a large EU-focused online retailer. It was also one of only two companies actually based in the EU to be designated as such (Booking.com was the other). And it seems that Zalando was just as surprised as everyone else, as it has now sued to challenge that designation.

Germany’s Zalando on Tuesday contested the labelling methodology and took its case to the Luxembourg-based Court of Justice of the European Union, Europe’s top court.

The company said the Commission had failed to take into account the hybrid nature of its business model and the fact it does not present a systemic risk of disseminating harmful or illegal content from third parties.

“The European Commission misinterpreted our user numbers and failed to acknowledge our mainly retail business model. The number of European visitors who connect with our Partners is far below the DSA’s threshold to be considered as a VLOP,” Zalando CEO Robert Gentz said in a statement.

Of course, it does seem at least somewhat eyebrow-raising to see an EU company as the first one to challenge this law. It almost feels like they naturally assumed that this law was only supposed to apply to those foreign internet companies, rather than the EU’s own companies.

The response from the EU’s internet czar, Thierry Breton (who always comes off as a bit too smug and gleeful about his power to suppress speech) is pretty laughable as well, claiming that he thinks companies should be happy to face his ridiculous, speech suppressing, compliance-nightmare-inducing regulations:

“Complying with the DSA is not a punishment – I encourage all platforms to see it as an opportunity to reinforce their brand value and reputation as a trustworthy site,” he said in a statement.

That’s just disconnected from reality. If it were simply an “opportunity to reinforce their brand value and reputation as a trustworthy site,” then you wouldn’t need to have a regulation with massive potential fines backing it up. Companies already have plenty of incentive to “reinforce their brand value” without having to hire a shitload of compliance lawyers.

Filed Under: designation, dsa, eu, thierry breton, vlop
Companies: zalando