Good News: Canada Passes Major New ‘Right To Repair’ Reforms
from the fix-your-own-shit dept
The world might be going to hell, but at least activists’ efforts to protect consumers’ rights to affordable and easy tech repair continue to gain steam.
The most recent win came in Canada, where the country’s Copyright Act was amended by two different bills allowing the circumvention of technological protection measures (TPMs) if done for the purposes of “maintaining or repairing a product, including any related diagnosing,” and “to make the program or a device in which it is embedded interoperable with any other computer program, device or component.”
These TPMs take a variety of shapes, whether it’s password-protected access to administrative functions or the need for a USB dongle to unlock access to copyrighted parts of software. Initially implemented to “fight piracy,” such restrictions were quickly leveraged to help companies monopolize repair. As in the U.S., Canadian copyright law broadly bars circumvention, which is why these new repair and interoperability exceptions matter.
Overall, Canada’s legal updates should be a great boon to independent repair shops looking to provide affordable repair options to Canadian consumers, and to tinkerers wanting to repair devices and hardware they own. iFixit calls the amendments a “huge step forward” for right to repair:
“These bills are a huge step forward for the right to repair, giving Canadians more freedom to repair their own devices without breaking the law. They make Canada the first country to tackle copyright law’s digital locks at a federal level in favor of repair access.”
iFixit notes there’s still work left to be done, given that Canada’s latest legal updates do nothing to help improve access to the needed repair tools:
“While Canadians can now legally bypass TPMs to fix their own devices, they can’t legally sell or share tools designed for that purpose. This means Canadian consumers and repair pros still face technical and legal hurdles to access the necessary repair tools, much like in the US.”
Here in the States, any hopes for a federal right to repair law have been crushed by Trump’s electoral win. Activists have, however, had considerable luck passing numerous state right to repair laws.
Last March, Oregon became the seventh state to pass “right to repair” legislation making it easier, cheaper, and more convenient to repair technology you own. The bill’s passage came on the heels of legislation passed in Massachusetts (in 2012 and 2020), Colorado (in 2022 and 2023), New York (2023), Minnesota, Maine, and California. All told, 30 states contemplated such bills in 2024.
The problem: I’ve yet to see any examples of these laws actually being enforced. And with Trumpism ushering in a whole bunch of new life-and-death legal struggles playing out at the state level (immigration, the dismantling of federal consumer protection), I strongly suspect that going toe-to-toe with major companies over right to repair won’t be a priority for state officials with limited resources.
Filed Under: canada, copyright, drm, hardware, locks, right to repair, software, tpms
Ctrl-Alt-Speech: Presidents & Precedents
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Pennsylvania Becomes Hot Spot for Election Disinformation (NY Times)
- After Trump Took the Lead, Election Deniers Went Suddenly Silent (NY Times)
- X Is a White-Supremacist Site (The Atlantic)
- Papers, Please? The Republican Plan to Wall Off the Internet (Tech Policy Press)
- What Trump’s Victory Means for Internet Policy (CNET)
- The government plans to ban under-16s from social media platforms. Here’s what we know so far (ABC Australia)
- Canada orders shutdown of TikTok’s Canadian business, app access to continue (Reuters)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: australia, canada, content moderation, donald trump, social media
Companies: tiktok, twitter, x
A Rare Copyright Win For The Public; But A Small One, Only In Canada, And Possibly Temporary
from the sometimes-the-public-actually-wins dept
It is extraordinary that within the copyright world it is accepted dogma that legal protections for this intellectual monopoly should always get stronger – creating a kind of copyright ratchet. One of the manifestations of this belief was the WIPO Copyright Treaty, signed in 1996, which extended copyright in important ways. A key element was the prohibition of any circumvention of copyright protection systems for any reason – even if it were for a legal purpose. This meant, for example, that if a work’s copyright had expired, it would nonetheless be illegal to access this public domain work if doing so required circumvention of any protection that had been applied. In effect, copyright term would become infinite.
In the US, the WIPO Copyright Treaty was implemented in the 1998 Digital Millennium Copyright Act (DMCA). The EU followed a few years later with the 2001 Information Society Directive. In Canada, the relevant law, the Copyright Modernization Act, was not passed until 2012. As the Canadian copyright expert Michael Geist explains, the law was controversial, not least because of fears that it might restrict perfectly legal activities:
The classic example was that a user might be entitled to copy a portion of a chapter in a book, but if the book became an e-book with a digital lock, the publisher could use technology to stop copying that was otherwise permitted under the law. If the user sought to circumvent or by-pass the technology to assert their rights, that act of circumvention would itself become an infringement even if the underlying copying itself was permitted.
This issue has been at the heart of a case that has been heard in multiple Canadian courts during the last eight years. The Federal Court in Canada has recently issued what Geist calls a “landmark decision” on copyright’s anti-circumvention rules, which concludes that digital locks should not “trump” fair dealing:
Rather, the two must co-exist in harmony, leading to an interpretation that users can still rely on fair dealing even in cases involving those digital locks. The decision could have enormous implications for libraries, education, and users more broadly as it seeks to restore the copyright balance in the digital world. The decision also importantly concludes that merely requiring a password does not meet the standard needed to qualify for copyright rules involving technological protection measures.
Geist’s post explains the background to the decision in more detail, and notes that the case could still be appealed. He points out that “for now the court has restored a critical aspect of the copyright balance after more than a decade of uncertainty and concern”. It is extraordinary that it has taken so long merely to achieve something as mild as “balance”. What’s even more ridiculous is that this rare win for the public only applies in Canada. In most other countries, and in general, it is still illegal to circumvent digital locks to carry out perfectly legal activities with copyright material.
Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.
Filed Under: canada, copyright, digital locks, drm, fair dealing
Canada Imposes 5% Tax On Streaming To Fund Local News, Diverse Content
from the get-ready-to-pay-more dept
Thu, Jun 13th 2024 05:27am - Karl Bode
Canadian regulators are leaning on new authority built into the 2023 Online Streaming Act to impose a new 5 percent tax on streaming TV and music services like Netflix and Spotify, money the regulator says will then be used to help fund Canadian broadcasting.
According to the Canadian Radio-television and Telecommunications Commission (CRTC) announcement, the plan should drive $200 million in new funding annually to local news and a variety of other public interest content:
“The funding will be directed to areas of immediate need in the Canadian broadcasting system, such as local news on radio and television, French-language content, Indigenous content, and content created by and for equity-deserving communities, official language minority communities, and Canadians of diverse backgrounds.”
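For a rough sense of scale, note what those two numbers imply together: if a 5 percent levy is projected to raise roughly $200 million a year, the covered streaming revenue base works out to about $4 billion annually. Here’s a minimal back-of-the-envelope sketch; the rate and the funding projection come from the CRTC announcement above, while the derived figure is just arithmetic, not anything the CRTC has published:

```python
# Back-of-the-envelope: what revenue base would a 5% levy need
# in order to raise the CRTC's projected CAD $200M per year?
LEVY_RATE = 0.05            # 5 percent, per the CRTC announcement
PROJECTED_FUNDING = 200e6   # CAD $200 million annually, per the CRTC

# Implied covered revenue = projected funding / levy rate
implied_base = PROJECTED_FUNDING / LEVY_RATE
print(f"Implied covered streaming revenue: CAD ${implied_base / 1e9:.1f}B per year")
# -> Implied covered streaming revenue: CAD $4.0B per year
```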
The fee system effectively mirrors the fees already imposed on local broadcasters. Past efforts on this front (in the U.S. and Canada) haven’t been received particularly well by streaming giants, and the same applies here. The Digital Media Association, which represents Amazon Music, Apple Music, and Spotify, insisted in a statement that the new tax will only deepen what they’re calling an “affordability crisis”:
“As Canada’s affordability crisis remains a significant challenge, the government needs to avoid adding to this burden. This is especially true for younger Canadians who are the predominant users of audio streaming services.”
Huge contracts for the likes of Joe Rogan and Wall Street’s insatiable demand for quarterly growth have more to do with streaming affordability than anything else, though in this case the services are correct that they’ll simply pass the cost of the new taxes directly on to users. Facing slowing subscriber growth, streaming giants have already been relentlessly raising rates.
That said, real journalism (especially independent and minority-owned) consistently faces a funding crisis, and much of the conversation (both in the U.S. and Canada) tends to center on what isn’t possible or shouldn’t be done (usually framed around the interests of giant corporations), as opposed to actually fixing the problem.
At the same time, similar efforts are often derailed by corruption, and there’s no guarantee the money earmarked for useful things actually finds its way to its intended destination.
Efforts to tax streaming companies to help fund broadband deployment in the States, for example, risk being hijacked by telecom giants looking to exploit corrupt policymakers simply to pad their wallets. Municipalities in Texas have also tried to tax Netflix with a fairly broad disdain for existing law and no particular public interest initiative in mind.
Filed Under: canada, crtc, journalism, streaming, tax, video
Link Taxes Backfire: Canadian News Outlets Lose Out, Meta Unscathed
from the what-a-surprise dept
As California (and possibly Congress) once again revisit instituting link taxes in the US, it’s worth highlighting that our prediction about the Canadian link tax has now been shown to be correct. It didn’t harm Meta one bit to remove news.
The entire premise behind these link taxes/bargaining codes is that social media gets “so much free value” from news orgs that they must pay up. Indeed, a ridiculously bad study that came out last fall, and was widely passed around, argued that Google and Meta had stripped $14 billion worth of value from news orgs and should offer to pay up that amount.
$14 billion. With a “b.”
No one who understands anything believes that’s true. Again, social media is not taking value away from news orgs. It’s giving them free distribution and free circulation, things that, historically, cost media organizations a ton of money.
But now a study in Canada is proving that social media companies get basically zero value from news links. Meta, somewhat famously, blocked links to news in Canada in response to that country’s link tax. This sent many observers into a tizzy, claiming that it was somehow unfair for Meta both to link to news orgs AND to not link to news orgs.
Yes, media organizations are struggling. Yes, the problems facing the news industry are incredibly important to solve to help protect democracy. Yes, we should be thinking and talking about creative solutions for funding.
But, taxing links to force internet companies to pay media companies is simply a terrible solution.
Thanks to Meta not giving in to Canada and instead blocking links to news, we now have some data on what happens when a link tax approach is put in place. New research from McGill University and the University of Toronto has found that Meta didn’t lose much engagement from the lack of news links. But media orgs lost out big time.
Laura Hazard Owen has a good summary at Nieman Lab.
“We expected the disappearance of news on Meta platforms to have caused a major shock to the Canadian information ecosystem,” the paper’s authors — Sara Parker, Saewon Park, Zeynep Pehlivan, Alexei Abrahams, Mika Desblancs, Taylor Owen, Jennie Phillips, and Aengus Bridgman — write. But the shock appears to have been one-sided. While “the ban has significantly impacted Canadian news outlets,” the authors write, “Meta has deprived users of the affordance of news sharing without suffering any loss in engagement of their user base.”
What the researchers found is that users are still using Meta platforms just as much, and still getting news from those platforms. They’re just no longer following links back to the sources. This has done particular harm to smaller local news organizations:
This remarkable stability in Meta platform users’ continued use of the platforms for politics and current affairs anticipates the findings from the detailed investigation into engagement and posting behaviour of Canadians. We find that the ban has significantly impacted Canadian news outlets but had little impact on Canadian user behaviour. Consistent with the ban’s goal, we find a precipitous decline in engagement with Canadian news and consequently the posting of news content by Canadian news outlets. The effect is particularly acute for local news outlets, while some news outlets with national or international scope have been able to make a partial recovery after a few months. Additionally, posting by and engagement with alternative sources of information about Canadian current affairs appears unmoved by the ban. We further find that Groups focused on Canadian politics enjoy the same frequency of posting and diversity of engagement after the ban as before. While link sharing declines, we document a complementary uptick in the sharing of screenshots of Canadian news in a sample of these political Groups, and confirm by reviewing a number of such posts where users deliberately circumvented the link-sharing ban by posting screenshots. Although the screenshots do not compensate for the total loss of link sharing, the screenshots themselves garner the same total amount of engagement as news links previously had.
I feel like I need to keep pointing this out, but: when you tax something, you get less of it. Canada decided to tax news links, so it gets fewer news links. But users still want to talk about news, so they’re replacing the links with screenshots and discussions. And it’s pretty impressive how quickly users switched over.
Meaning the only ones losing out here are the news publishers themselves, who claimed to have wanted this law so badly.
The impact on Canadian news orgs appears to be quite dramatic.
But the activity on Meta platform groups dedicated to news doesn’t seem to have changed that much.
If “news links” were so valuable to Meta, then, um, wouldn’t that have declined once Meta blocked links?
One somewhat incredible finding in the paper is that “misinformation” links also declined after Meta banned news links:
Surprisingly, the number of misinformation links in political and local community Groups decreased after the ban.
Political Groups:
- Prior to the ban: 2.8% of links (5,612 out of 198,587) were misinformation links
- After the ban: 1.4% of links (5,306 out of 379,202) were misinformation links
Though the paper admits that this could just be a function of users recognizing they can’t share links.
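The quoted counts themselves are worth a second look: the absolute number of misinformation links barely moved, and the share halved mostly because total link volume in the sampled Groups nearly doubled. Here’s a quick sanity check of that arithmetic; the counts are the paper’s as quoted above, and the comparison assumes the pre- and post-ban observation windows are roughly comparable:

```python
# Sanity-check the misinformation-link shares quoted from the study.
before = {"misinfo": 5_612, "total": 198_587}  # prior to the ban
after = {"misinfo": 5_306, "total": 379_202}   # after the ban

for label, d in (("before the ban", before), ("after the ban", after)):
    print(f"{label}: {d['misinfo'] / d['total']:.1%} of links were misinformation")

# The share halved, but mostly via the denominator:
delta_misinfo = after["misinfo"] / before["misinfo"] - 1  # about -5.5%
delta_total = after["total"] / before["total"] - 1        # about +91%
print(f"misinfo links: {delta_misinfo:+.1%}, total links: {delta_total:+.1%}")
```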
This is still quite early research, but it is notable, especially given that the US continues to push for this kind of law as well. Maybe, just maybe, we should take a step back and recognize that taxing links is not helpful for news orgs and misunderstands the overall issue.
It’s becoming increasingly clear that the approach taken by Canada and other countries, forcing platforms like Meta to pay for news links, is misguided and counterproductive. These laws reduce the reach and engagement of news organizations while doing little to address the underlying challenges facing the industry. Policymakers need to take a more nuanced, evidence-based approach that recognizes the complex dynamics of the online news ecosystem.
Filed Under: c-18, canada, link tax, link taxes, news links
Companies: meta
NSO Group Continues To Use The Lawsuit Filed Against It By WhatsApp To Harass Canadian Security Researchers
from the if-you-can't-beat-'em,-fuck-with-'em dept
Israeli malware manufacturer NSO Group spent years making good money selling to bad people. Its only concern for the longest time was how long it would take nearby autocrats and totalitarians to start targeting Israeli citizens.
To be fair, the Israeli government shares at least some of the blame. Surrounded by entities that would love to see it erased from the earth, the government helped broker deals with unfriendly countries — a perverse form of diplomacy that allowed some of its worst enemies to gain access to extremely powerful spyware.
NSO is no longer the local darling in Israel. In fact, none of its competitors are either. The country achieved terminal embarrassment velocity following the leak of documents that appeared to show many of NSO’s customers were abusing access to its Pegasus spyware to target journalists, dissidents, human rights lawyers, political opponents, and even the occasional ex-wife and her lawyer.
NSO has also been sued multiple times. The first tech firm to sue NSO was WhatsApp. Backed by Meta, WhatsApp took NSO to court for using WhatsApp’s US-based servers to deliver malware packages to users targeted by NSO’s absolute shitlist of customers.
Some of what WhatsApp observed might have been due to the FBI taking a bespoke version of NSO’s Pegasus for a spin before deciding it would be pretty much impossible to use it without doing a ton of damage to the Fourth Amendment.
This lawsuit has not gone well for NSO. It invoked a variety of defenses, including sovereign immunity, reasoning that it was a stand-in for the governments it sold to. And, as such, it was entitled to the same immunity often granted foreign governments by US courts.
This tactic didn’t work. Not only did multiple courts (district, appellate, the Top Court in the Land) reject NSO immunity overtures, but the original court handling this lawsuit ordered the company to turn over its code to WhatsApp. And that order meant all the code, not just the stuff involving NSO’s flagship spyware, Pegasus.
Far from the nation’s courts, Canadians have been giving NSO (and its competitors) fits for years. Citizen Lab — a group of Canadian malware researchers linked to the University of Toronto — has been examining NSO’s malware for years. More importantly, it’s been detecting infections and allowing those targeted by NSO spyware to rid themselves of these infections. In every case, Citizen Lab has exposed the targeting of the usual people: dissidents, opposition leaders, journalists, lawyers, diplomats, etc. The company continues to pretend this malware is sold to target the most dangerous criminals despite all evidence to the contrary.
With NSO now being asked to turn over its source code, it has decided to drag a non-party into the mix by going after Citizen Lab repeatedly during this lawsuit. (This is something its financial backers did years before NSO was a defendant in multiple lawsuits and an international pariah.)
As Shawn Musgrave reports for The Intercept, NSO appears to be engaged in a campaign of harassment against Citizen Lab… presumably because it has run out of believable defenses and/or solid litigation strategies.
For years, cybersecurity researchers at Citizen Lab have monitored Israeli spyware firm NSO Group and its banner product, Pegasus. In 2019, Citizen Lab reported finding dozens of cases in which Pegasus was used to target the phones of journalists and human rights defenders via a WhatsApp security vulnerability.
Now NSO, which is blacklisted by the U.S. government for selling spyware to repressive regimes, is trying to use a lawsuit over the WhatsApp exploit to learn “how Citizen Lab conducted its analysis.”
[…]
With the lawsuit now moving forward, NSO is trying a different tactic: demanding repeatedly that Citizen Lab, which is based in Canada, hand over every single document about its Pegasus investigation. A judge denied NSO’s latest attempt to get access to Citizen Lab’s materials last week.
While it’s good to see a court shut down this obvious attempt to turn Citizen Lab into a co-litigant, the fact remains that Citizen Lab has never been a party to this lawsuit. This is nothing more than NSO attempting to obtain information it has no legal reason to request, possibly because it’s still aching from being ordered to turn over its own information, i.e., its source code.
It may also be even more petty than the previous hypothetical: NSO may be trying to get Citizen Lab to burn up some of its limited resources fighting stupid requests for stuff NSO shouldn’t even be asking for, much less expecting a judge to sign off on.
Whatever it is, it certainly isn’t good litigation. This reeks of desperation. These are the acts of a litigant that has run out of options. NSO is just flailing, hoping to drag down a non-party with it as it heads toward a seemingly inevitable loss.
And this certainly isn’t a winning strategy. It’s not even capable of maintaining the miserable status quo NSO Group is currently mired in. Citizen Lab (obviously) refused these demands for information (justifiably!) and the judge handling the case has made it clear there’s almost zero chance of NSO being able to drag anything out of this particular thorn in its side.
Citizen Lab opposed NSO’s demands on numerous grounds, particularly given “NSO’s animosity” toward its research.
In the latest order, Hamilton concluded that NSO’s demand was “plainly overbroad.” She left open the possibility for NSO to try again, but only if it can point to evidence that specific individuals that Citizen Lab categorized as “civil society” targets were actually involved in “criminal/terrorist activity.”
lol at that last sentence. Does anyone think anyone, much less an aggrieved NSO Group, has any evidence Citizen Lab is involved in “criminal/terrorist activity?” All it has done is expose abuse of malware sold by NSO Group to governments with long histories of corruption and/or human rights abuses.
NSO is just going to keep on losing. Reap/sow. Lie down with dogs. The foreseeable consequences of actions. Etc. Etc. Etc. Citizen Lab will keep performing its important work. And, with any luck, NSO will soon collapse under the weight of its hubris. Hope the (temporary) shekels were worth it.
Filed Under: canada, discovery, harassment, source code, spyware, surveillance
Companies: citizen lab, meta, nso group, whatsapp
Ctrl-Alt-Speech: This One Weird Trick To Save The Open Internet
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Alex Feerst, former General Counsel and head of trust & safety at Medium, and co-founder of the Digital Trust & Safety Partnership. Together they cover:
- Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability? (Techdirt)
- EU Commission opens formal proceedings against Facebook and Instagram under the Digital Services Act (European Commission)
- OnlyFans Investigated over its duties to protect under-18s from restricted materials (Ofcom)
- Canadian banks need to do more to stop abusive e-transfers, survivors say (CBC)
- TikTok And Meta Aren’t Labeling State Propaganda About The War In Gaza (Forbes)
- How we fought bad apps and bad actors in 2023 (Google security blog)
- Meta’s oversight body prepares to lay off workers (Washington Post)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: canada, content moderation, eu, eu commission, oversight board, section 230
Companies: google, meta, onlyfans, tiktok
Restricting Flipper Is A Zero Accountability Approach To Security
from the moral-panic dept
On February 8, François-Philippe Champagne, the Canadian Minister of Innovation, Science and Industry, announced Canada would ban devices used in keyless car theft. The only device mentioned by name was the Flipper Zero—the multitool device that can be used to test, explore, and debug different wireless protocols such as RFID, NFC, infrared, and Bluetooth.
While it is useful as a penetration testing device, the Flipper Zero is impractical in comparison to other, more specialized devices for car theft. It’s possible social media hype around the Flipper Zero has led people to believe that this device offers easier hacking opportunities for car thieves*. But government officials are also consuming such hype, which leads to policies that don’t secure systems, but rather impede important research that exposes potential vulnerabilities the industry should fix. Even with Canada walking back the original statement outright banning the devices, restricting devices and sales to “move forward with measures to restrict the use of such devices to legitimate actors only” is troublesome for security researchers.
This is not the first government seeking to limit access to Flipper Zero, and we have explained before why this approach is not only harmful to security researchers but also leaves the general population more vulnerable to attacks. Security researchers may not have the specialized tools car thieves use at their disposal, so more general tools come in handy for catching and protecting against vulnerabilities. Broad purpose devices such as the Flipper have a wide range of uses: penetration testing to facilitate hardening of a home network or organizational infrastructure, hardware research, security research, protocol development, use by radio hobbyists, and many more. Restricting access to these devices will hamper development of strong, secure technologies.
When Brazil’s national telecoms regulator Anatel refused to certify the Flipper Zero, and as a result prevented the national postal service from delivering the devices, it was responding to media hype. With a display and controls reminiscent of portable video game consoles, the compact form factor and range of hardware (including an infrared transceiver, RFID reader/emulator, SDR, and Bluetooth LE module) made the device an easy target to demonize. While conjuring imagery of point-and-click car theft was easy, citing examples of this actually occurring proved impossible. Over a year later, you’d be hard-pressed to find a single instance of a car being stolen with the device. The number of cars stolen with the Flipper seems to amount to, well, zero (pun intended). It is the same media hype and pure speculation that has led Canadian regulators to err in their judgment to ban these devices.
Still worse, law enforcement in other countries have signaled their own intentions to place owners of the device under greater scrutiny. The Brisbane Times quotes police in Queensland, Australia: “We’re aware it can be used for criminal means, so if you’re caught with this device we’ll be asking some serious questions about why you have this device and what you are using it for.” We assume other tools with similar capabilities, as well as Swiss Army Knives and Sharpie markers, all of which “can be used for criminal means,” will not face this same level of scrutiny. Just owning this device, whether as a hobbyist or professional—or even just as a curious customer—should not make one the subject of overzealous police suspicions.
It wasn’t too long ago that proficiency with the command line was seen as a dangerous skill that warranted intervention by authorities. And just as with those fears of decades past, the small grain of truth embedded in the hype and fears gives it an outsized power. Can the command line be used to do bad things? Of course. Can the Flipper Zero assist criminal activity? Yes. Can it be used to steal cars? Not nearly as well as many other (and better, from the criminals’ perspective) tools. Does that mean it should be banned, and that those with this device should be placed under criminal suspicion? Absolutely not.
We hope Canada wises up to this logic, and comes to view the device as just one of many in the toolbox that can be used for good or evil, but mostly for good.
*Though concerns have been raised about Flipper Devices’ connection to the Russian state apparatus, no unexpected data has been observed escaping to Flipper Devices’ servers, and much of the dedicated security and pen-testing hardware which hasn’t been banned also suffers from similar problems.
Originally posted to the EFF Deeplinks blog.
Filed Under: canada, flipper zero, francois-philippe champagne, hacking, pentesting
Even The Most Well-Meaning Internet Regulations Can Cause Real Harm
from the regulating-speech-never-ends-well dept
Here’s how people advocating for internet regulations for “bad speech” think things will work: an enlightened group of pure-minded, thoughtful individuals will carefully outlaw dangerous speech that invokes hatred, or encourages bad behavior. Then, that speech will magically and cleanly disappear from the internet, and the internet will be a better place.
What could go wrong? Turns out, pretty much everything.
Here’s how such things work in reality: laws are written that are broadly applicable to appease a variety of interests. Then those in power get upset about some sort of speech. And, even if they’re well-meaning politicians, they realize, ‘surely, what’s the harm in using these laws to stop this kind of speech?’ And that’s how hate speech laws are used to jail French citizens for comparing their President to Hitler. Or to force a website to take down speech calling Germany’s Justice Minister “an idiot.” Or to jail critics of the government. Or how there’s a push to make criticizing the police into hate speech. The list goes on and on.
That’s not to say those creating the laws aren’t well-intentioned. Some of them really are. Many people keep telling me that we need to outlaw bad speech online or to force websites to “be responsible” for the bad speech online.
And I keep trying to point out that the well-intentioned people calling for such laws are unlikely to be the people determining what is “good speech” and what is “bad speech,” and, over and over again, they end up disliking the choices made by those who do get to decide how those laws are used.
Laws to crack down on “bad” speech, even well-intentioned ones, are open to very serious abuse. It is important to think about this, because we do see all sorts of ill-intentioned laws, and we criticize those all the time as well. But understanding how even well-intentioned laws can go wrong is equally important, to avoid enabling dangerous abuses of power.
That brings me to a recent piece in Foreign Affairs by David Kaye, former UN Special Rapporteur on freedom of expression, about the Risks of Internet Regulation. We talked about this article a bit on last week’s Ctrl-Alt-Speech podcast, but I wanted to write some more about it. Kaye, who also wrote the excellent book “Speech Police: the global struggle to govern the internet,” is not some random person writing about this. He’s spent years writing and thinking about these issues, and his piece in Foreign Affairs should stand as a warning to those rushing in to regulate the internet.
In response to public pressure to clean up the Internet, policymakers in Brussels, London, Washington, and beyond are following a path that, in the wrong hands, could lead to censorship and abuse. Some in Brussels, including in EU institutions and civil society, speak of “an Orban test,” according to which lawmakers should ask themselves whether they would be comfortable if legislation were enforced by Hungary’s authoritarian and censorial Prime Minister Viktor Orban, or someone like him. This is a smart way to look at things, particularly for those in the United States concerned about the possibility of another term for former U.S. President Donald Trump (who famously, and chillingly, referred to independent media as enemies of the people). Rather than expanding government control over Internet speech, policymakers should focus on the kinds of steps that could genuinely promote a better Internet.
As the opening suggests, Kaye’s piece details efforts in the US, EU, and the UK to regulate the internet. It highlights how each has a very real risk of going wrong in the wrong hands, even when done “thoughtfully.”
At the heart of Brussels’ approach to online content is the Digital Services Act (DSA). When negotiations over the DSA concluded in April 2022, European Commission Executive Vice President Margrethe Vestager exulted that “democracy is back.” For Vestager and her allies, the DSA asserts the EU’s public authority over private platforms. It restates existing EU rules that require platforms to take down illegal content when they are notified of its existence. In its detailed bureaucratic way, the DSA also goes further, seeking to establish how the platforms should deal with speech that, though objectionable, is not illegal. This category includes disinformation, threats to “civic discourse and electoral processes,” most content deemed harmful to children, and many forms of hate speech. The DSA disclaims specific directives to the companies. It does not require, for instance, the removal of disinformation or legal content harmful to children. Instead, it requires the largest platforms and search engines to introduce transparent due diligence and reporting. Such a step would give the Commission oversight power to evaluate whether these companies are posing systemic risks to the public.
Politicization, however, threatens the DSA’s careful approach, a concern that emerged soon after Hamas’s October 7 terrorist attacks on Israel. Posts glorifying Hamas or, conversely, promising a brutal Israeli vengeance immediately began circulating online. Thierry Breton, the European commissioner responsible for implementing the DSA, saw an opportunity and, three days after the attacks, sent a letter to X CEO Elon Musk and then to Meta, TikTok, and YouTube. “Following the terrorist attacks carried out by Hamas against Israel,” Breton wrote to Musk, “we have indications that your platform is being used to disseminate illegal content and disinformation in the EU.” He urged the platforms to ensure that they had in place mechanisms to address “manifestly false or misleading information” and requested a “prompt, accurate and complete response” to the letters within 24 hours. Breton gave the impression that he was acting in accordance with the DSA, but he went much further, taking on a bullying approach that seemed to presuppose that the platforms were enabling illegal speech. In fact, the DSA authorizes Commission action only after careful, technical review.
[…]
Breton showed that the DSA’s careful bureaucratic design can be abused for political purposes. This is not an idle concern. Last July, during riots in France following the police shooting of a youth, Breton also threatened to use the DSA against social media platforms if they continued to post “hateful content.” He said that the European Commission could impose a fine and even “ban the operation [of the platforms] on our territory,” which are steps beyond his authority and outside the scope of the DSA.
European legal norms and judicial authorities, and the commission rank-and-file’s commitment to a successful DSA, may check the potential for political abuse. But this status quo may not last. It is possible that June’s European Parliament elections will tilt leadership in directions hostile to freedom of expression online. New commissioners could take lessons from Breton’s political approach to DSA enforcement and issue new threats to the social media companies. Indeed, Breton’s actions may have legitimized politicization in ways that could be used to limit public debate, rather than going through the careful, if technical, approaches of DSA risk assessment, researcher access, and transparency.
And, indeed, the same day that Kaye’s piece came out, the Financial Times reported that the EU is pushing out a series of hastily crafted “guidelines” for how social media must handle “election disinformation” in the run-up to June’s EU elections, guidelines that will be enforced under the DSA.
While the guidelines are broadly drafted, the code is legally enforceable as part of the Digital Services Act, a core piece of legislation aimed at setting the rules on how Big Tech should police the internet.
“Social media platforms need to show that they are complying or explain what else they are doing to mitigate risks,” said one EU official. “If they don’t explain, we issue a fine.”
Once again, this is proving how the DSA is very much structured to be a tool that can be used to suppress speech. I keep pointing this out, and EU officials keep insisting it’s not… even as they promulgate rules that are clearly designed to suppress speech.
Kaye’s piece also talks about the UK’s Online Safety Act, which we’ve also discussed quite a bit.
One concern is that the UK legislation defines content harmful to children so broadly that it could cause companies to block legitimate health information, such as that related to gender identity or reproductive health, that is critical to childhood development and those who study it. Moreover, the act requires companies to conduct age verification, a difficult process that may oblige a user to present some form of official identification or age assurance, perhaps by using biometric measures. This is a complicated area involving a range of approaches that will have to be the focus of Ofcom’s attention since the act does not specify how companies should enforce this. But, as the French data privacy regulator has found, age verification and assurance schemes pose serious privacy concerns for all users, since they typically require personal data and enable tracking of online activity. These schemes also often fail to meet their objectives, instead posing new barriers to access to information for everyone, not just children.
The Online Safety Act gives Ofcom the authority to require a social media platform to identify and swiftly remove publicly posted terrorist or child sexual abuse content. This is not controversial, since such material should not be anywhere on the Internet; child sexual abuse content in particular is vile and illegal, and there are public tools designed to facilitate its detection, investigation, and removal. But the act also gives Ofcom the authority to order companies to apply technology to scan private, user-to-user content for child sexual abuse material. It sounds legitimate, but doing so would require monitoring private communications, at the risk of disrupting the encryption that is fundamental to Internet security generally. If required, it would open the door to the type of monitoring that would be precisely the tool authoritarians would like in order to gain access to dissident communications. The potential for such interference with digital security is so serious that the heads of Signal and WhatsApp, the world’s leading encrypted messaging services, indicated that they would leave the British market if the provision were to be enforced. For them, and those who use the services, encryption is a guarantee of privacy and security, particularly in the face of criminal hacking and interference by authoritarian governments. Without encryption, all communications would be potentially subject to snooping. So far, it seems that Ofcom is steering clear of such demands. Yet the provision stands, leaving many uncertain about the future of digital security in the UK.
Nor is the US spared (though I doubt anyone would consider the US’s approaches to online regulation to be carefully considered or thoughtful).
Yet at its core, KOSA regards the Internet as a threat from which young people ought to be protected. The bill does not develop a theory for how an Internet for children, with its vast access to information, can be promoted, supported, and safeguarded. As such, critics including the Electronic Frontier Foundation, the American Civil Liberties Union, and many advocates for LGBTQI communities still rightly argue that KOSA could undermine broader rights to expression, access to information, and privacy. For example, the bill would require platforms to take reasonable steps to prevent or mitigate a range of harms, pushing them to filter content that could be said to harm minors. The threat of litigation would be ever present as an incentive for companies to take down even lawful, if awful, content. This could be mitigated if enforcement were in the hands of a trustworthy, neutral body that, like Ofcom, is independent. But KOSA places enforcement not only in the hands of the Federal Trade Commission but also, for some provisions, of state attorneys general—elected officials who have become increasingly partisan in national political debates in recent years. Thus, it will be politicians in each state who could wield power over KOSA’s enforcement. When Blackburn said that her bill pursued the goal of “protecting minor children from the transgender in this culture,” she was not reassuring those fearing politicized implementation.
There’s much more in the piece as well, but you get the idea. Each of these laws has very real risks of serious harm. And that’s true even when the law is more carefully thought out. Last week, Tim Cushing wrote here about some of the problems of Canada’s similar attempt to regulate the internet, and we’re seeing ever greater concern there. The Globe and Mail recently ran a column by Andrew Coyne also calling out problems of the bill, even after noting how this was supposed to be “the ‘good’ bill” that carefully addressed what seemed like real problems.
And yet:
It soon became clear, however, that there was much more to the bill than just that. And the more closely it was examined, the worse it appeared.
Most obviously out of bounds are a suite of amendments to the Criminal Code. Any attempt to criminalize speech ought to be viewed with extreme suspicion, and kept to the narrowest possible grounds. The onus should always be on the state to prove the necessity of any exception to the general rule of free speech – to prove not merely that the speech is objectionable or offensive, but demonstrably harmful.
[…]
The most remarkable part of this is the timing. At the very moment when everyone and his dog is accusing someone else of genocide, or of promoting it – as Israel’s defenders say of Hamas’s supporters, as the Palestinians’ say of Israel’s, as Ukraine’s say of Russia’s – the government proposes that the penalty for being on the losing side of such controversies should be life in prison? I have my views on these questions, and you have yours, but I would not throw you in jail for your opinions, and I hope you would not do the same to me – not for five years, and certainly not for life.
Hardly better is the proposal to create a new hate crime – that is, for acts motivated by hatred. Whether the state should be punishing people for their motives, rather than for their crimes, is perhaps too rarefied a debate: We take motives into account, for example, with regard to crimes committed in self-defence. And hatred has long been considered an aggravating factor at sentencing.
But the new proposal is to set up a whole separate category for crimes motivated by hatred. Well, not just crimes. The new crime would apply not only to offences under the Criminal Code but “any other Act of Parliament.” Got that? It doesn’t matter how obscure or trivial the law: anyone who breaks it for reasons of hate would be guilty of a crime. And the punishment? Once again, up to life imprisonment.
As Kaye stated at the top of his article, run such laws through the “Orban test” or the “Trump test.” Or, if you somehow like Orban or Trump, run them through the “Biden test.”
These laws can be abused. They can suppress all kinds of speech. This doesn’t mean that there aren’t bad things online. This doesn’t mean that social media companies have your best interests at heart. This doesn’t mean that there aren’t other regulations that might be useful. But if you expect your government to regulate speech online, recognize the many, many ways in which it will be abused.
Filed Under: canada, david kaye, dsa, eu, free speech, hate speech, online harms act, online safety act, speech, uk
Ctrl-Alt-Speech: Murthy, Reddit, and the Speech Deciders
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Amazon Music, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s online speech, content moderation and internet regulation round-up, Mike and Ben cover:
- Supreme Court Seems Skeptical Of The Claims That The Federal Government Coerced Social Media To Moderate (Techdirt)
- Reddit’s I.P.O. Is a Content Moderation Success Story (New York Times)
- Elon Musk’s X Is Suspending Accounts That Reveal a Neo-Nazi Cartoonist’s Alleged Identity (Wired)
- The Risks of Internet Regulation (Foreign Affairs)
- EU to impose election safeguards on Big Tech (Financial Times)
- Canada’s Online Harms Act is revealing itself to be staggeringly reckless (Globe and Mail)
The episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Block Party, which builds privacy and anti-harassment tools to increase user control, protection, and safety. In our Bonus Chat at the end of the episode, Block Party founder and CEO Tracy Chou discusses the impact of harassment on self-censorship and explains how she is making navigating privacy and safety settings on major platforms easier for users through her tool, Privacy Party.
Filed Under: canada, content moderation, elon musk, eu, murthy, supreme court
Companies: reddit, twitter, x