digital services act – Techdirt

Ctrl-Alt-Speech: The Internet Is (Still) For Porn, With Yoel Roth

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Yoel Roth, former head of trust & safety at Twitter and now head of trust & safety at Match Group. Together they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: content moderation, deplatforming, digital services act, disinformation, dsa, eu, gaza, israel, misinformation
Companies: meta, shein, temu, twitch, twitter, x

Ctrl-Alt-Speech: Won’t Someone Please Think Of The Adults?

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: age estimation, age verification, digital services act, dsa, ireland, nist, social media
Companies: nextdoor, telegram, tiktok, twitter, x

Ctrl-Alt-Speech: Do You Really Want The Government In Your DMs?

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: content moderation, deepfakes, digital services act, eu, india, ofcom, section 230, singapore
Companies: facebook, instagram, meta, tiktok, youtube

DSA Framers Insisted It Was Carefully Calibrated Against Censorship; Then Thierry Breton Basically Decided It Was An Amazing Tool For Censorship

from the our-new-truth-czar-is-overreaching dept

A few weeks ago, I highlighted how the EU’s chief Digital Services Act enforcer, Thierry Breton, was making a mess of things by sending broadly threatening letters (which have since been followed up with the opening of official investigations) to all the big social media platforms. His initial letter highlighted the DSA’s requirements regarding takedowns of illegal content, but very quickly blurred the line between illegal content and disinformation.

Following the terrorist attacks carried out by Hamas against Israel, we have indications that your platform is being used to disseminate illegal content and disinformation in the EU.

I noted that the framers of the DSA have insisted up, down, left, right, and center that the DSA was carefully designed such that it couldn’t possibly be used for censorship. I’ve highlighted throughout the DSA process how this didn’t seem accurate at all, and a year ago when I was able to interview an EU official, he kept doing a kind of “of course it’s not for censorship, but if there’s bad stuff online, then we’ll have to do something, but it’s not censorship” dance.

Some people (especially on social media and especially in the EU) got mad about my post regarding Breton’s letters, either saying that he was just talking about illegal content (he clearly is not!) or defending the censorship of disinformation as necessary (one person even told me that censorship means something different in the EU).

However, it appears I’m not the only one alarmed by how Breton has taken the DSA and presented it as a tool for him to crack down on legal information that he personally finds problematic. Fast Company published an article highlighting experts who said they were similarly unnerved by Breton’s approach to this whole thing.

“The DSA has a bunch of careful, procedurally specific ways that the Commission or other authorities can tell platforms what to do. That includes ‘mitigating harms,’” Keller says. The problem with Breton’s letters, she argues, is that they “blow right past all that careful drafting, seeming to assume exactly the kind of unconstrained state authority that many critics in the Global South warned about while the DSA was being drafted.”

Meanwhile, others are (rightfully!) noting that these threat letters are likely to lead to the suppression of important information as well:

Ashkhen Kazaryan, senior fellow of free speech and peace at the nonprofit Stand Together, objects to the implication in these letters that the mere existence of harmful, but legal, content suggests companies aren’t living up to their obligations under the DSA. After all, there are other interventions, including warning labels and reducing the reach of content, that platforms may be using rather than removing content altogether. Particularly in times of war, Kazaryan, who is a former content policy manager for Meta, says these alternative interventions can be crucial in preserving evidence to be used later on by researchers and international tribunals. “The preservation of [material] is important, especially for things like actually verifying it,” Kazaryan says, pointing to instances where evidence of Syrian human rights offenses have been deleted en masse.

The human rights civil society group Access Now similarly came out with concerns about Breton’s move-fast-and-break-speech approach.

Firstly, the letters establish a false equivalence between the DSA’s treatment of illegal content and “disinformation.” “Disinformation” is a broad concept and encompasses varied content which can carry significant risk to human rights and public discourse. It does not automatically qualify as illegal and is not per se prohibited by either European or international human rights law. While the DSA contains targeted measures addressing illegal content online, it more appropriately applies a different regulatory approach with respect to other systemic risks, primarily consisting of VLOPs’ due diligence obligations and legally mandated transparency. However, the letters strongly focus on the swift removal of content rather than highlighting the importance of due diligence obligations for VLOPs that regulate their systems and processes. We call on the European Commission to strictly respect the DSA’s provisions and international human rights law, and avoid any future conflation of these two categories of expression.

Secondly, the DSA does not contain deadlines for content removals or time periods under which service providers need to respond to notifications of illegal content online. It states that providers have to respond in a timely, diligent, non-arbitrary, and objective manner. There is also no legal basis in the DSA that would justify the request to respond to you or your team within 24 hours. Furthermore, by issuing such public letters in the name of DSA enforcement, you risk undermining the authority and independence of DG Connect’s DSA Enforcement Team.

Thirdly, the DSA does not impose an obligation on service providers to “consistently and diligently enforce [their] own policies.” Instead, it requires all service providers to act in a diligent, objective, and proportionate manner when applying and enforcing the restrictions based on their terms and conditions and for VLOPs to adequately address significant negative effects on fundamental rights stemming from the enforcement of their terms and conditions. Terms and conditions often go beyond restrictions permitted under international human rights standards. State pressure to remove content swiftly based on platforms’ terms and conditions leads to more preventive over-blocking of entirely legal content.

Fourthly, while the DSA obliges service providers to promptly inform law enforcement or judicial authorities if they have knowledge or suspicion of a criminal offence involving a threat to people’s life or safety, the law does not mention a fixed time period for doing so, let alone one of 24 hours. The letters also call on Meta and X to be in contact with relevant law enforcement authorities and EUROPOL, without specifying serious crimes occurring in the EU that would provide sufficient legal and procedural ground for such a request.

Freedom of expression and the free flow of information must be vigorously defended during armed conflicts. Disproportionate restrictions of fundamental rights may distort information that is vital for the needs of civilians caught up in the hostilities and for recording documentation of ongoing human rights abuses and atrocities that could form the basis for evidence in future judicial proceedings. Experience shows that shortsighted solutions that hint at the criminal nature of “false information” or “fake news” — without further qualification — will disproportionately affect historically oppressed groups and human rights defenders fighting against aggressors perpetrating gross human rights abuses.

No one is suggesting that the spread of mis- and disinformation regarding the crisis is a good thing, but the ways to deal with it are tricky, nuanced, and complex. And having a bumbling, egotistical blowhard like Breton acting as the dictator of social media speech is going to cause a hell of a lot more problems than it solves.

Filed Under: censorship, digital services act, disinformation, dsa, eu, thierry breton
Companies: meta, tiktok, twitter, x, youtube

from the cluelessness-über-alles dept

Back in September 2021, Techdirt covered an outrageous legal attack by Sony Music on Quad9, a free, recursive, anycast DNS platform. Quad9 is part of the Internet’s plumbing: it converts the domain names people type into the numerical IP addresses computers actually use. It is operated by the Quad9 Foundation, a Swiss public-benefit, not-for-profit organization. Sony Music claims that Quad9 is implicated in alleged copyright infringement on the sites it resolves. That’s clearly ridiculous, but unfortunately the Regional Court of Hamburg agreed with Sony Music’s argument and issued an interim injunction against Quad9. The German Society for Civil Rights (Gesellschaft für Freiheitsrechte e.V., or “GFF”) summarizes the court’s thinking:

In its interim injunction the Regional Court of Hamburg asserts a claim against Quad9 based on the principles of the German legal concept of “Stoererhaftung” (interferer liability), on the grounds that Quad9 makes a contribution to a copyright infringement that gives rise to liability, in that Quad9 resolves the domain name of website A into the associated IP address. The German interferer liability has been criticized for years because of its excessive application to Internet cases. German lawmakers explicitly abolished interferer liability for access providers with the 2017 amendment to the German Telemedia Act (TMG), primarily to protect WIFI operators from being held liable for costs as interferers.

As that indicates, this is a case of a law that is a poor fit for modern technology. Just as the liability no longer applies to WIFI operators, who are simply providing Internet access, so the German law should also not catch DNS resolvers like Quad9. The GFF post notes that Quad9 has appealed to the Hamburg Higher Regional Court against the lower court’s decision. Unfortunately, another regional court has just handed down a similar ruling against the company, reported here by Heise Online (translation by DeepL):

the Leipzig Regional Court has sentenced the Zurich-based DNS service Quad9. On pain of an administrative fine of up to 250,000 euros or up to 2 years’ imprisonment, the small resolver operator was prohibited from translating two related domains into the corresponding IP addresses. Via these domains, users can find the tracks of a Sony music album offered via Shareplace.org.

The GFF has already announced that it will be appealing along with Quad9 to the Dresden Higher Regional Court against this new ruling. It says that the Leipzig Regional Court has made “a glaring error of judgment”, and explains:

“If one follows this reasoning, the copyright liability of completely neutral infrastructure services like Quad9 would be even stricter than that of social networks, which fall under the infamous Article 17 of the EU Copyright Directive,” criticizes Felix Reda, head of the Control © project of the Society for Civil Rights. “The [EU] Digital Services Act makes it unequivocally clear that the liability rules for Internet access providers apply to DNS services. We are confident that this misinterpretation of European and German legal principles will be overturned by the Court of Appeals.”

Let’s hope so. If it isn’t, we can expect companies providing the Internet’s basic infrastructure in the EU to be bombarded with demands from the copyright industry and others for domains to be excluded from DNS resolution. The likely result is that perfectly legal sites and their contents will be ghosted by DNS companies, which will prefer to err on the side of caution rather than risk becoming the next Quad9.
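Since much of this fight turns on what a DNS resolver actually does, here is a minimal sketch of the lookup step at issue, in Python. It assumes the third-party dnspython library is installed; Quad9’s public resolver address (9.9.9.9) is real, but the queried domain is just a placeholder:

    # A minimal sketch of the resolution step at issue: asking Quad9's
    # public resolver to translate a domain name into an IP address.
    # Assumes the third-party dnspython package ("pip install dnspython");
    # "example.com" is a placeholder, not one of the disputed domains.
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)  # ignore the system's resolver settings
    resolver.nameservers = ["9.9.9.9"]                 # Quad9's public resolver

    answer = resolver.resolve("example.com", "A")      # request IPv4 (A) records
    for record in answer:
        print(record.address)

That translation step, a domain name in and an IP address out, is the entire service the Hamburg and Leipzig rulings target; the resolver never hosts the allegedly infringing files themselves.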

Follow me @glynmoody on Mastodon or Twitter.

Filed Under: article 17, copyright, digital services act, dns, eu copyright directive, felix reda, germany, hamburg, leipzig, liability, quad9, sony music, switzerland, wifi
Companies: quad9, sony music

Techdirt Podcast Episode 336: The DSA Is A Mess, But Will Now Rule The Internet

from the big-regulation dept

There are big internet regulatory changes coming in the EU, with the Digital Services Act and the Digital Markets Act. Each is a huge bundle of new rules that could drastically change the future of the entire internet, and today we’re focusing on the DSA, which is set to come into force in 2024. Emma Llansó from the Center for Democracy & Technology and Daphne Keller from Stanford’s Cyber Policy Center join us on this week’s episode to dig into the DSA and its many, many implications.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Filed Under: daphne keller, digital services act, dsa, emma llanso, eu, intermediary liability, podcast, regulation

EU To Open New Silicon Valley Office To Figure Out Better And Better Ways To Destroy The Internet

from the new-sheriff-in-town dept

The EU is well on its way to fundamentally destroying the internet. Two giant new regulations are set to become law soon: the Digital Services Act and the Digital Markets Act. And while neither is ridiculous in the same way that laws in the US, the UK, and some other places are (mere pandering to grandstanding nonsense), that doesn’t make these laws good. Both regulations went through long, convoluted bureaucratic processes… and came out as long, convoluted bureaucratic regulations that simply don’t match an internet designed to be an open system for innovation.

Between the DSA and the DMA, the EU is basically setting up a fundamental shift in the internet, away from the permissionless innovation that allowed anyone to experiment, and iterate, and figure out what people wanted… to a new, “mother, may I” approach to innovation overseen by fearful bureaucrats.

To basically put an exclamation point on how these two new regulations are designed to bring Silicon Valley companies to heel, the European Commission has announced that it’s setting up shop in Silicon Valley, to allow its bureaucrats to get closer to the services they wish to tie down with overly burdensome compliance requirements.

The European Commission, the executive branch of the European Union government, is opening a San Francisco office on Sept. 1 that will liaise with Silicon Valley companies affected by EU tech regulation.

While the Commission is trying to frame this as a chance for “improved relationships” between the Commission and tech companies — and a chance for both sides to learn from each other — it sure feels a lot like a foreign regulator setting up shop to watch over its new regulated industry.

A central part of Mr. de Graaf’s work in San Francisco will be meeting with companies that must comply with EU tech rules because they do business in the 27-nation bloc. Big tech companies often bring more than a dozen lawyers to meetings, and Mr. de Graaf said he could help ensure companies adopt a more strategic approach to EU laws and not one that is driven by lawyers alone. Still, he expects companies to file lawsuits against coming tech legislation. “A relationship between the regulator and the regulated is always a bit complicated. A regulator is always like a bit of a policeman,” he said.

Yes. A regulator is always like a bit of a policeman. So, welcome to Silicon Valley, our new internet overlords.

Filed Under: digital markets act, digital services act, dma, eu, regulators

Because Of Course: Rightsholders Pushing To Turn Digital Services Act Into Another Anti-Piracy Tool

from the they-can't-resist dept

It never fails. We’ve been talking about the EU’s Digital Services Act for a few years now, looking at how the EU’s technocratic desire to overregulate the internet is going to cause real problems. And while at least they took a more systematic approach to figuring out how to write the law, the end result still struck us as a disaster in waiting. And, because this is how any internet regulation attempt always turns out, after all the back-and-forth discussions and careful weighing of different ideas, someone always has to come in at the very end and try to make everything much worse. In this case, the issue is that the EU Parliament, which gave us the terrible and broken Copyright Directive, is now trying to sneak more bad copyright ideas into the DSA.

The note expressed support for the proposal of the European Commission, which defines search engines as an ad hoc category and introduces a notice and action (N&A) mechanism under which they would be required to take down illegal content once it is flagged to them. At the same time, Didier proposed a few changes to the Commission’s text.

The note asked to remove a part in the text’s preamble providing examples of ‘mere conduit’, ‘caching’ and ‘hosting’ services, categories with different liability regimes established in the eCommerce Directive, the predecessor of the DSA. These examples were overly descriptive for the rightsholders, who prefer a case-by-case approach in court.

Another change would mandate that if illegal content is flagged, not just the relevant web pages but the entire website should be delisted, namely removed from the search results. In the most extreme case, that would mean that if a video is illegally uploaded on YouTube, Google would have to remove the entire platform from its search results.

Finally, a modification to an article would oblige search engines to remove all search results referring to the flagged illegal content, not only the specific website. In other words, the platforms would have to monitor all websites searching for unlawful content.

Basically, they’re throwing out whatever semi-balanced (not really, but they’d like to believe it is) intermediary liability rules they could come up with, and inserting a maximum punishment for sites and content deemed illegal (i.e., infringing). Incredible.

I mean, literally, they want this to say that if content is “flagged” (not adjudicated) as illegal, entire websites should be blocked from search results. That seems extreme, and extremely censorial, effectively giving just about anyone who wants some content taken down tremendous power to silence speech.

Filed Under: content blocking, copyright, digital services act, dsa, due process, eu, rightsholders, search, site blocking

New Yorker’s Famed Fact Checking Crew Apparently Unaware Of The 1st Amendment?

from the it's-literally-the-1st-amendment dept

The New Yorker magazine is famous for its fact checking effort. Indeed, the New Yorker itself has written multiple pieces about how ridiculously far its fact checking team will go. And when people want to present the quintessential example of how “fact checking” should work, they often point to The New Yorker. Of course, I don’t doubt that the magazine does more fact checking than almost any other publication out there, but that doesn’t mean they’re necessarily that good at it. Remember, it once published an article that heavily implied that a game I helped create to better understand the role of technology in elections was actually created by a billionaire nonsense peddler to relive the glory of influencing elections.

Anyway, recently, the New Yorker had a provocatively titled article, How Congress Can Prevent Elon Musk from Turning Twitter Back into an Unfettered Disinformation Machine, by John Cassidy. I clicked with interest, because while I don’t want Twitter to turn into an “unfettered disinformation machine,” I even more strongly don’t want Congress determining what speech is allowed on any website — and I’m pretty sure that the 1st Amendment means that Congress can’t prevent Musk from turning Twitter back into an unfettered disinformation machine. If he wants to, he absolutely can. And there’s nothing Congress can or should do about it, because if it can, then it can also do that for any other media organization, and we have the 1st Amendment to stop that.

Bizarrely, Cassidy’s article doesn’t even mention the 1st Amendment. Instead, it points to the (already extremely problematic) Digital Services Act in the EU, which is taking a very paternalistic approach to content moderation and internet website regulation. It is a sweepingly different approach, enabling governments to demand the removal of content.

Regulating content in a manner consistent with protecting free speech may be a trickier proposition, but the E.U. has just provided a road map for how it could be done: by putting the onus on social-media companies to monitor and remove harmful content, and hit them with big fines if they don’t. The Digital Services Act is “nothing short of a paradigm shift in tech regulation,” Ben Scott, the executive director of the advocacy group Reset, told the Associated Press. “It’s the first major attempt to set rules and standards for algorithmic systems in digital media markets.”

Musk would surely object to the U.S. adopting a regulatory system like the one that the Europeans are drawing up, but that’s too bad. The health of the Internet—and, most important, democracy—is too significant to leave to one man, no matter how rich he is.

It’s not that it’s a “trickier proposition”; it’s that the 1st Amendment literally would not allow a law like the DSA to exist here (or, at least, not for long before a court tossed it out as unconstitutional). Reno v. ACLU exists. You’d think that the New Yorker’s fact checkers might have come across it. Or, at least, the 1st Amendment.

I get that people are worried about what Musk might do with Twitter. I get that people are frustrated about what they perceive as the rampant flow of mis- and disinformation (even though I’m pretty sure most misunderstand it and how such misinformation actually flows). But this weird rush to simply throw out the 1st Amendment (or, in this case, to ignore it) is especially bizarre coming from a media organization that heavily relies on the 1st Amendment to do what it does.

There’s this unfortunate belief among too many people that if something is “bad” it must be “regulated.” But when it comes to speech there are really important reasons why it can’t be regulated — in part because any attempt to regulate it will be widely abused by the powerful against the powerless. We know this because it has happened basically over and over again throughout history.

We’d get much further by recognizing that there are other approaches to dealing with bad stuff in the world — especially bad speech — than demanding that the government step in and make it illegal. Because when you do that, you create massive problems that you can’t just fact check away, no matter how good your fact checking department is.

Filed Under: 1st amendment, content moderation, content regulation, digital services act, dsa, elon musk, free speech, john cassidy, new yorker

EU Parliament's 'More Thoughtful' Approach To Regulating The Internet Still A Complete Disaster

from the regulating-human-behavior dept

For a while now, the EU has been working on its latest big update to internet regulations, mostly under the umbrella of the Digital Services Act (DSA). Multiple people who have been following the process there have noted how much more thoughtful it has been for the DSA as compared to internet regulatory attempts in the US, which seem to be driven mostly by which senator thinks they can get the biggest headlines for misrepresenting this week’s particular outrage. A more careful, thoughtful approach is definitely appreciated, but that doesn’t mean the results will be any good. Last week, the EU Parliament approved the latest version of the DSA in what has been seen as something of a mixed bag.

Pirate Party MEP Patrick Breyer described the final vote as having both “huge success and major setbacks.” I’m actually a bit surprised that the EFF seems mostly happy with the result (with a few caveats), though that seems to mainly be because a few really bad ideas didn’t make the cut. But, it still seems like an awful lot of bad ideas did make it through.

The good parts are that the new DSA mostly retains the E-Commerce Directive’s “conditional liability regime” and rejected a proposal that would require “general monitoring” (i.e., faulty filters to try to screen out “bad stuff”). There was an attempt to go even further and ban upload filters entirely, but that was rejected. Similarly, a proposal to say that courts could not require ISPs to engage in full site blocking was rejected.

Also on the good side, this version of the DSA includes a right to pay for digital services anonymously. On the bad side, it rejected a proposal to require a court order before governments can snoop through your data. It also rejected a proposal that would require a court order to remove content, which would have banned the practice of government agencies ordering content removals on their own. That rejection is extremely unfortunate, and an attack on due process.

There’s a lot more in there that’s a mix of good and bad, and the whole thing isn’t truly final yet either. But, I still think that overall the DSA will have a hugely negative impact on internet freedoms and free speech, even if it got some small things at the margin right.

In the end, I do think that any big “sweeping” set of internet regulations — whether prepared thoughtfully or not — is always going to be a disaster. Such regulations can’t take into account how complex the world is, can’t take into account context, and can’t take into account the general dynamism of the internet — and how quickly things change. Not only that, but just the very process of opening up such sweeping regulations that cover so much of how the internet works for users is going to get hijacked by special interests who want this or that thing included in the final regulation.

Is the process more reality-based than the US’s grandstand-o-rama? Sure. Will the end results be any better? Doesn’t seem like it.

Filed Under: anonymity, conditional liability, data, digital services act, dsa, eu, filters, intermediary liability, internet regulations, privacy