global censorship – Techdirt
Phew: EU Court Of Justice Says Right To Be Forgotten Is Not A Global Censorship Tool (Just An EU One)
from the dodged-a-bullet dept
Over the past few years, an important legal battle has been playing out concerning the jurisdictional reach of the EU’s terrible “right to be forgotten” laws. France decided that Google needed to not just block such content within the EU, but globally. In response, Google pointed out that French regulators shouldn’t be able to censor the global internet. The question made it to the EU Court of Justice (CJEU) last year, and the ruling has finally come down saying that Google was right after all. The right to be forgotten may exist in the EU, but that does not mean it can be applied globally.
For once, the CJEU actually seemed to recognize that the RTBF and freedom of expression are often in conflict — and that different countries may want to set the “balance” (if you can call it that) between the two in different places:
… the balance between the right to privacy and the protection of personal data, on the one hand, and the freedom of information of internet users, on the other, is likely to vary significantly around the world.
Indeed, the ruling notes that data protection is not an “absolute right.”
The processing of personal data should be designed to serve mankind. The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality.
Of course, I have a bit of trouble with the idea of things that are considered fundamental rights — such as freedom of expression — being “balanced” against things that are not fundamental rights, like “protection of personal data.” It seems like the fundamental right should always win out in such circumstances. In the US, for the most part, we’ve decided that the 1st Amendment doesn’t have a “balancing” test.
Still, it’s good to see the CJEU at least put some limits on the right to be forgotten and the ability of it to be used elsewhere against perfectly legal, truthful speech. It still sucks for people in the EU, but at least they can’t fully export their censorship.
Filed Under: censorship, cjeu, cnil, eu, france, free speech, global censorship, jurisdiction, right to be forgotten, rtbf
Companies: cnil, google
European Court Of Justice Suggests Maybe The Entire Internet Should Be Censored And Filtered
from the oh-come-on dept
The idea of an open “global” internet keeps taking a beating — and the worst offender is not, say, China or Russia, but rather the EU. We’ve already discussed things like the EU Copyright Directive and the Terrorist Content Regulation, but it seems like every day there’s something new and more ridiculous — and the latest may be coming from the Court of Justice of the EU (CJEU), which frequently is a bulwark against overreaching laws regarding the internet, but sometimes (too frequently, unfortunately) gets things really, really wrong (saying the “Right to be Forgotten” applied to search engines was one terrible example).
And now, the CJEU’s Advocate General has issued a recommendation in a new case that would be hugely problematic for the idea of a global open internet that isn’t weighted down with censorship filters. The Advocate General’s recommendations are just that: recommendations for the CJEU to consider before making a final ruling. However, as we’ve noted in the past, the CJEU frequently accepts the AG’s recommendations. Not always. But frequently.
The case here involves an attempt to get Facebook to delete posts critical of a politician in Austria under Austrian law. In the US, of course, social media companies are not required to delete such information. The content itself is usually protected by the 1st Amendment, and the platforms are then protected by Section 230 of the Communications Decency Act, which prevents them from being held liable even if the content in question does violate the law (though, importantly, most platforms will still remove such content if it’s been determined by a court to violate the law).
In the EU, the intermediary liability scheme is significantly weaker. Under the E-Commerce Directive’s rules, there is an exemption of liability, but it’s much more similar to the DMCA’s safe harbors for copyright-infringing material in the US. That is, the liability exemptions only occur if the platform doesn’t have knowledge of the “illegal activity” and if they do get such knowledge, they need to remove the content. There is also a prohibition on a “general monitoring” requirement (i.e., filters).
The case at hand involved someone on Facebook posting a link to an article about an Austrian politician, Eva Glawischnig-Piesczek, and adding some comments along with the link. Specifically:
That user also published, in connection with that article, an accompanying disparaging comment about the applicant accusing her of being a ‘lousy traitor of the people’, a ‘corrupt oaf’ and a member of a ‘fascist party’.
In the US — some silly lawsuits notwithstanding — such statements would be clearly protected by the 1st Amendment. Apparently not so much in Austria. But then there’s the question of Facebook’s responsibility.
An Austrian court ordered Facebook to remove the content, which it complied with by blocking access to it for anyone in Austria. The original demand would also have required Facebook to prevent “equivalent content” from appearing. On appeal, a court rejected Facebook’s argument that it only had to comply within Austria, but held that the “equivalent content” obligation was limited to cases where someone alerted Facebook to the “equivalent content” being posted (and, thus, not a general monitoring requirement).
From there, the case went to the CJEU, which was asked to determine whether such blocking needs to be global and how the “equivalent content” question should be handled.
And, then, basically everything goes off the rails. First up, the Advocate General seems to think that — like many misguided folks concerning CDA 230 — there’s some sort of “neutrality” requirement for internet platforms, and that doing any sort of monitoring might cost them their safe harbors for no longer being neutral. This is mind-blowingly stupid.
It should be observed that Article 15(1) of Directive 2000/31 prohibits Member States from imposing a general obligation on, among others, providers of services whose activity consists in storing information to monitor the information which they store or a general obligation actively to seek facts or circumstances indicating illegal activity. Furthermore, it is apparent from the case-law that that provision precludes, in particular, a host provider whose conduct is limited to that of an intermediary service provider from being ordered to monitor all (9) or virtually all (10) of the data of all users of its service in order to prevent any future infringement.
If, contrary to that provision, a Member State were able, in the context of an injunction, to impose a general monitoring obligation on a host provider, it cannot be precluded that the latter might well lose the status of intermediary service provider and the immunity that goes with it. In fact, the role of a host provider carrying out general monitoring would no longer be neutral. The activity of that host provider would not retain its technical, automatic and passive nature, which would imply that that host provider would be aware of the information stored and would monitor it.
Say what now? It’s right that general monitoring is not required (and explicitly rejected) in the law, but the corollary that deciding to do general monitoring wipes out your safe harbors is… crazy. Here, the AG is basically saying we can’t have a general monitoring obligation (good) because that would overturn the requirement of platforms to be neutral (crazy):
Admittedly, Article 14(1)(a) of Directive 2000/31 makes the liability of an intermediary service provider subject to actual knowledge of the illegal activity or information. However, having regard to a general monitoring obligation, the illegal nature of any activity or information might be considered to be automatically brought to the knowledge of that intermediary service provider and the latter would have to remove the information or disable access to it without having been aware of its illegal content. (11) Consequently, the logic of relative immunity from liability for the information stored by an intermediary service provider would be systematically overturned, which would undermine the practical effect of Article 14(1) of Directive 2000/31.
In short, the role of a host provider carrying out such general monitoring would no longer be neutral, since the activity of that host provider would no longer retain its technical, automatic and passive nature, which would imply that the host provider would be aware of the information stored and would monitor that information. Consequently, the implementation of a general monitoring obligation, imposed on a host provider in the context of an injunction authorised, prima facie, under Article 14(3) of Directive 2000/31, could render Article 14 of that directive inapplicable to that host provider.
I thus infer from a reading of Article 14(3) in conjunction with Article 15(1) of Directive 2000/31 that an obligation imposed on an intermediary service provider in the context of an injunction cannot have the consequence that, by reference to all or virtually all of the information stored, the role of that intermediary service provider is no longer neutral in the sense described in the preceding point.
So the AG comes to a good result through horrifically bad reasoning.
However, while rejecting general monitoring, the AG then goes on to talk about why more specific monitoring and censorship is probably just fine and dandy, with a somewhat odd aside about how the “duration” of the monitoring can make it okay. However, the key point is that the AG has no problem with saying, once something is deemed “infringing,” that it can be a requirement on the internet platform to have to remove new instances of the same content:
In fact, as is clear from my analysis, a host provider may be ordered to prevent any further infringement of the same type and by the same recipient of an information society service. (24) Such a situation does indeed represent a specific case of an infringement that has actually been identified, so that the obligation to identify, among the information originating from a single user, the information identical to that characterised as illegal does not constitute a general monitoring obligation.
To my mind, the same applies with regard to information identical to the information characterised as illegal which is disseminated by other users. I am aware of the fact that this reasoning has the effect that the personal scope of a monitoring obligation encompasses every user and, accordingly, all the information disseminated via a platform.
Nonetheless, an obligation to seek and identify information identical to the information that has been characterised as illegal by the court seised is always targeted at the specific case of an infringement. In addition, the present case relates to an obligation imposed in the context of an interlocutory order, which is effective until the proceedings are definitively closed. Thus, such an obligation imposed on a host provider is, by the nature of things, limited in time.
And then, based on nothing at all, the AG pulls out the “magic software will make this work” reasoning, insisting that software tools will make sure that the right content is properly censored:
Furthermore, the reproduction of the same content by any user of a social network platform seems to me, as a general rule, to be capable of being detected with the help of software tools, without the host provider being obliged to employ active non-automatic filtering of all the information disseminated via its platform.
This statement… is just wrong. First off, it acts as if using software to scan for the same content is somehow not a filter. But it is. And then it shows a real misunderstanding about the effectiveness of filters (and the ability of some people to trick them). And there’s no mention of false positives. I mean, in this case, a politician was called a corrupt oaf. How is Facebook supposed to block that? Is any use of the phrase “corrupt oaf” now blocked? Perhaps it’s only posts that pair “corrupt oaf” with the politician’s name, Eva Glawischnig-Piesczek, that need to be blocked. But, in that case, does it mean that this article itself cannot be posted on Facebook? So many questions…
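To make the false positive problem concrete, here’s a minimal, purely illustrative sketch in Python of the kind of exact-match scanning the AG’s reasoning seems to assume (the phrases and matching logic are assumptions for this post, not how Facebook actually filters anything). It catches the adjudicated insult, but it also catches a news story quoting the ruling, and it misses a trivially altered repost:

```python
# Purely illustrative toy filter; the phrases and the matching logic are
# assumptions for this post, not Facebook's actual system.

ADJUDICATED_PHRASE = "corrupt oaf"                 # phrase a court deemed illegal in context
ADJUDICATED_SUBJECT = "eva glawischnig-piesczek"   # the person the ruling concerns

def should_block(post_text: str) -> bool:
    """Flag a post that repeats the adjudicated phrase about the same person."""
    text = post_text.lower()
    return ADJUDICATED_SUBJECT in text and ADJUDICATED_PHRASE in text

# The original insult is caught...
print(should_block("Eva Glawischnig-Piesczek is a corrupt oaf"))            # True

# ...but so is a news article (or this very blog post) quoting the ruling...
print(should_block(
    "Court orders Facebook to delete post calling "
    "Eva Glawischnig-Piesczek a 'corrupt oaf'"))                            # True (false positive)

# ...while a trivially altered repost slips straight through.
print(should_block("Eva Glawischnig-Piesczek is a c0rrupt oaf"))            # False (evasion)
```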
The AG then insists that somehow this isn’t too burdensome (based on what, exactly?) and seems to make the mistake of many non-technical people, who think that filters are (a) much better than they are, and (b) not dealing with significant gray areas all the time.
First of all, seeking and identifying information identical to that which has been characterised as illegal by a court seised does not require sophisticated techniques that might represent an extraordinary burden.
And, I mean, perhaps that’s true for Facebook — but it certainly could represent a much bigger burden for lots of other, smaller providers. Like us, for example.
Hilariously, as soon as the AG is done saying the filtering is easy, the recommendation notes that (oh right!) context may be important:
Last, such an obligation respects internet users’ fundamental right to freedom of expression and information, guaranteed in Article 11 of the Charter, in so far as the protection of that freedom need not necessarily be ensured absolutely, but must be weighed against the protection of other fundamental rights. As regards the information identical to the information that was characterised as illegal, it consists, prima facie and as a general rule, in repetitions of an infringement actually characterised as illegal. Those repetitions should be characterised in the same way, although such characterisation may be nuanced by reference, in particular, to the context of what is alleged to be an illegal statement.
Next up is the question of blocking “equivalent content.” The AG properly notes that determining what is, and what is not, “equivalent” represents quite a challenge — and at least seeks to limit what may be ordered blocked, saying that it should only apply to content from the same user, and that any injunction must be quite specific about what needs to be blocked:
I propose that the answer to the first and second questions, in so far as they relate to the personal scope and the material scope of a monitoring obligation, should be that Article 15(1) of Directive 2000/31 must be interpreted as meaning that it does not preclude a host provider operating a social network platform from being ordered, in the context of an injunction, to seek and identify, among all the information disseminated by users of that platform, the information identical to the information that was characterised as illegal by a court that has issued that injunction. In the context of such an injunction, a host provider may be ordered to seek and identify the information equivalent to that characterised as illegal only among the information disseminated by the user who disseminated that illegal information. A court adjudicating on the removal of such equivalent information must ensure that the effects of its injunction are clear, precise and foreseeable. In doing so, it must weigh up the fundamental rights involved and take account of the principle of proportionality.
Then, finally, it gets to the question of global blocking — and basically says that nothing in EU law prevents a member state, such as Austria, from ordering global blocking, and therefore, that it can do so — but that local state courts should consider the consequences of ordering such global takedowns.
… as regards the territorial scope of a removal obligation imposed on a host provider in the context of an injunction, it should be considered that that obligation is not regulated either by Article 15(1) of Directive 2000/31 or by any other provision of that directive and that that provision therefore does not preclude that host provider from being ordered to remove worldwide information disseminated via a social network platform. Nor is that territorial scope regulated by EU law, since in the present case the applicant’s action is not based on EU law.
Regarding the consequences:
To conclude, it follows from the foregoing considerations that the court of a Member State may, in theory, adjudicate on the removal worldwide of information disseminated via the internet. However, owing to the differences between, on the one hand, national laws and, on the other, the protection of the private life and personality rights provided for in those laws, and in order to respect the widely recognised fundamental rights, such a court must, rather, adopt an approach of self-limitation. Therefore, in the interest of international comity, (51) to which the Portuguese Government refers, that court should, as far as possible, limit the extraterritorial effects of its injunctions concerning harm to private life and personality rights. (52) The implementation of a removal obligation should not go beyond what is necessary to achieve the protection of the injured person. Thus, instead of removing the content, that court might, in an appropriate case, order that access to that information be disabled with the help of geo-blocking.
That is a wholly unsatisfying answer, given that we all know how little many governments think about “self-limitation” when it comes to censoring critics globally.
And now we have to wait to see what the court says. Hopefully it does not follow these recommendations. As intermediary liability expert Daphne Keller from Stanford notes, there are some serious procedural problems with how all of this shakes out. In particular, because of the nature of the CJEU, it will only hear from some of the parties whose rights are at stake (a lightly edited quote of her tweetstorm):
The process problems are: (1) National courts don’t have to develop a strong factual record before referring the case to the CJEU, and (2) Once cases get to the CJEU, experts and public interest advocates can’t intervene to explain the missing info. That’s doubly problematic when, as in every intermediary liability case, the court hears only from (1) the person harmed by online expression and (2) the platform but NOT (3) the users whose rights to seek and impart information are at stake. That’s an imbalanced set of inputs. On the massively important question of how filters work, the AG is left to triangulate between what plaintiff says, what Facebook says, and what some government briefs say. He uses those sources to make assumptions about everything from technical feasibility to costs.
And, in this case in particular, that leads to some bizarre results — including quoting a fictional movie as evidence.
In the absence of other factual sources, he also just gives up and quotes from a fictional movie, The Social Network, about the permanence of online info.
That, in particular, is most problematic here. It is literally the first line of the AG’s opinion:
‘The internet’s not written in pencil, it’s written in ink’, says a character in an American film released in 2010. I am referring here, and it is no coincidence, to the film The Social Network.
But a film quote that is arguably not even true seems like an incredibly weak basis for a legal interpretation that could fundamentally lead to massive global censorship filters across the internet. Again, one hopes that the CJEU goes in a different direction, but I wouldn’t hold my breath.
Filed Under: advocate general, cjeu, corrupt oaf, defamation, e-commerce directive, eu, eva glawischnig-piesczek, filters, global censorship, intermediary liability, jurisdiction, monitoring
Companies: facebook
Google Fights In EU Court Against Ability Of One Country To Censor The Global Internet
from the this-is-important dept
For quite some time now we’ve been talking about French regulators and their ridiculous assertion that Google must apply its “Right to be Forgotten” rules globally rather than just in France. Earlier this week, the company presented its arguments to the EU Court of Justice, which will eventually rule on this issue in a way that will have serious ramifications for the global internet.
In a hearing at the EU Court of Justice, Google said extending the scope of the right all over the world was “completely unenvisagable.” Such a step would “unreasonably interfere” with people’s freedom of expression and information and lead to “endless conflicts” with countries that don’t recognize the right to be forgotten.
“The French CNIL’s global delisting approach seems to be very much out on a limb,” Patrice Spinosi, a French lawyer who represents Google, told a 15-judge panel at the court in Luxembourg on Tuesday. It is in “utter variance” with recent judgments.
Even if you absolutely despise everything about Google, the argument of French regulators should be of massive concern to you. France’s argument is that if a French regulator determines that some content should be disappeared from the internet, it must be memory-holed entirely and permanently, with the regulator literally calling such deletion of history “a breath of fresh air.”
“For the person concerned, the right to delisting is a breath of fresh air,” Jean Lessi, who represents France’s data protection authority CNIL, told the court. Google’s policy “doesn’t stop the infringement of this fundamental right which has been identified, it simply reduces the accessibility. But that is not satisfactory.”
Where one can be at least marginally sympathetic to the French regulator’s argument, it is in the issue of circumvention. If Google is only required to suppress information in France, then if someone really wants to, they can still find that information by presenting themselves as surfing from somewhere else. Which is true. But that limited risk — which would likely only occur in the very narrowest of circumstances in which someone already knew that some information was being hidden and then went on a quest to search it out — is a minimal “risk” compared to the very, very real risk of lots of truthful, historical information completely being disappeared into nothingness. And that is dangerous.
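For what it’s worth, the geo-limiting approach at issue is not hard to picture. Here is a deliberately simplified sketch in Python (the addresses, URLs, and lookup table are all invented for illustration; this is not Google’s actual implementation) of delisting keyed to the requester’s apparent location, and why routing traffic through another country defeats it:

```python
# Deliberately simplified sketch; the IP addresses, URLs, and geolocation
# table below are invented for illustration, not any real service's data.

DELISTED_IN = {"FR"}  # "forgotten" links hidden only for users who appear to be in France

# Toy stand-in for a real GeoIP lookup (addresses are from documentation ranges).
GEOIP = {"192.0.2.10": "FR", "198.51.100.7": "US"}

def show_result(url: str, ip_address: str, delisted_urls: set) -> bool:
    """Return True if a search result may be shown to the user at this address."""
    if url in delisted_urls and GEOIP.get(ip_address) in DELISTED_IN:
        return False
    return True

delisted = {"https://example.com/old-news-story"}

print(show_result("https://example.com/old-news-story", "192.0.2.10", delisted))   # False: hidden in France
print(show_result("https://example.com/old-news-story", "198.51.100.7", delisted)) # True: visible elsewhere

# The scheme keys entirely off the requester's apparent location: route the
# same query through a VPN exit outside France and the request arrives from a
# non-FR address, so the "forgotten" link reappears.
```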
The broader impact of such global censorship demands can easily be understood if you just recognize that it won’t just be the French looking to memory hole content they don’t like. Other governments — such as Russia, China, Turkey, and Iran — certainly wouldn’t mind making some information disappear. And if you think that various internet platforms will be able to say “well, we abide by French demands to disappear content, but ignore Russian ones,” well, how does that work in actual practice? Not only that, but such rules could clearly violate the US First Amendment. Ordering companies to take down content that is perfectly legal in the US would have significant ramifications.
But, it also means that we’re likely moving to a more fragmented internet — in which the very nature of the global communications network is less and less global, because to allow that to happen means allowing the most aggressive censor and the most sensitive dictator to make the rules concerning which content is allowed. And, as much as people rightfully worry about Mark Zuckerberg or Jack Dorsey deciding whose speech should be allowed online, we should be much, much, much more concerned when it’s people like Vladimir Putin or Recep Erdogan.
Filed Under: censorship, france, global censorship, jurisdiction, right to be forgotten
Companies: google
Canadian Privacy Commissioner Report Says Existing Law Already Gives Canadians A Right To Be Forgotten
from the which-means-the-United-States-now-has-a-RTBF-apparently dept
The Privacy Commissioner of Canada is proposing something dangerous. Given the Canadian Supreme Court’s ruling in the Equustek case — which basically said Canada’s laws are now everybody’s laws — a recent report issued by the Commissioner that reads something into existing Canadian law should be viewed with some concern. Michael Geist has more details.
The Privacy Commissioner of Canada waded into the debate on Friday with a new draft report concluding that Canadian privacy law can be interpreted to include a right to de-index search results with respect to a person’s name that are inaccurate, incomplete, or outdated. The report, which arises from a 2016 consultation on online reputation, sets the stage for potential de-indexing requests in Canada and complaints to the Privacy Commissioner should search engines refuse to comply.
The Commissioner envisions a system that would allow Canadians to file de-indexing requests with leading search engines, who would be required to evaluate the merits of the claim and, where appropriate, remove the link from the search index or lower its rank to obscure the search result. Moreover, the commissioner would require search engines to actively block Canadians from accessing the offending links by using geo-identifying technologies to limit access in Canada to the results.
In other words, the Commissioner is looking to import Europe’s right-to-be-forgotten law, but without having to amend or rewrite any Canadian laws. The report interprets existing Canadian privacy protections as offering RTBF to Canadian citizens. And if it offers it to Canadians, it can be enforced worldwide, despite there being no local statutory right to be forgotten.
Geist notes there are several problems with the troubling conclusion the Commissioner has drawn. First, the privacy protections included in PIPEDA (Personal Information Protection and Electronic Documents Act) cover commercial activity only, regulating use of users’ personal data. When it comes to search results, no commercial transaction takes place. The search engine simply returns results the user asks for. Search engines display ads with the results, but there’s no purchase involved, nor is there necessarily a relinquishment of user info.
Just as importantly, the Commissioner’s conclusion — even if statutorily sound (though it isn’t) — runs directly contrary to the comments received from numerous stakeholders, including privacy groups.
The feedback from leading Internet services, media companies, academics, and civil society groups cautioned against creating a right to be forgotten in Canada. Without a foundation for its approach arising from the consultation, participants can be forgiven for wondering whether the report’s recommendations were a foregone conclusion.
As Geist points out, a right-to-be-forgotten, raised unbidden from existing privacy laws, turns search engines into tools of government micromanagement. Despite its noble aim, it will be abused more often than it is legitimately used. Fortunately, Google and other search engines have been actively challenging dubious requests. And the rest of the private sector has pitched in, with journalistic entities informing readers when convicted criminals, political figures, and other abusers of the system attempt to eradicate factual recountings of their misdeeds.
Filed Under: canada, censorship, data protection, free speech, global censorship, jurisdiction, privacy, right to be forgotten
Top European Court To Consider If EU Countries Can Censor The Global Internet
from the it's-spreading... dept
Last month we wrote about the tragic and hugely problematic ruling in Canada that said a Canadian court could order global censorship of content it deems to be illegal. As lots of people pointed out, that is going to have dangerous consequences for speech around the world. If you accept that Canada can censor the global internet, what’s to stop China, Iran or Russia from claiming the same rights?
And now we’ll get to find out if the EU similarly believes in the ability of one country to demand global censorship online. In another case that we’ve been following, French data protection officials had been demanding Google censor content globally, and Google had been refusing. Now, the issue has been sent to the EU Court of Justice, the very same court that created this mess three years ago by saying that Google was subject to “right to be forgotten” claims. Google had reasonably interpreted the law to apply just within the EU (where the jurisdiction existed). But now the same court will decide if EU officials can censor globally.
One hopes that the sheer absurdity of the situation may lead the CJEU to start to recognize just how problematic its ruling was back in 2014, but somehow, that’s unlikely. We’ll certainly be paying attention to this case…
Filed Under: censorship, cjeu, eu, france, free speech, global censorship, jurisdiction, right to be forgotten, rtbf
Companies: google
Austrian Court's 'Hate Speech' Ruling Says Facebook Must Remove Perfectly Legal Posts All Over The World
from the one-court-to-rule-them-all dept
The European anti-hate speech machinery rolls on, with each successive demand for social media platform responsiveness being greeted by Facebook’s “Thank you, may I have another?” Mark Zuckerberg informed the German chancellor in 2015 that Facebook’s often-blundering proxy censorship team was all about removing hate speech. In appreciation for Facebook’s efforts, German officials spent the following year trying to find a way to hold the company criminally liable for third party postings determined to be hate speech under German law.
Right next door, an Austrian court has just declared that Facebook is required to stamp out locally-defined hate speech… all over the globe.
Facebook must remove postings deemed as hate speech, an Austrian court has ruled, in a legal victory for campaigners who want to force social media companies to combat online “trolling”.
The case – brought by Austria’s Green party over insults to its leader – has international ramifications as the court ruled the postings must be deleted across the platform and not just in Austria, a point that had been left open in an initial ruling.
Not only will Facebook need to delete original posts and reposts, but it’s apparently supposed to track down anything that quotes the offending posts verbatim and delete those as well. Simply blocking them in Austria isn’t sufficient, though. Whatever one aggrieved Austrian political party thinks is hate speech can now affect all Facebook users, regardless of their location or level of free speech protections.
But that’s not all Austria’s Greens want: they want this ruling expanded to grant the Austrian government additional power over Facebook’s moderation efforts.
The Greens hope to get the ruling strengthened further at Austria’s highest court. They want the court to demand Facebook remove similar – not only identical – postings, and to make it identify holders of fake accounts.
These are dangerous powers to hand over to any government entity, but especially to recently-offended government officials with a half-dozen axes to grind. If this ruling holds up, Facebook — and by extension, its users — will be subservient to a foreign government that appears to like the sort of thing it sees in more authoritarian regimes where insults to government officials are met with harsh punishments. The worst thing about the ruling — it contains many bad aspects — is that it allows the Austrian government to determine what the rest of the world gets to see on Facebook.
Filed Under: austria, censorship, free speech, global censorship, hate speech
Companies: facebook
French Regulating Body Says Google Must Honor Right To Be Forgotten Across All Of Its Domains
from the CNIL:-WE-ARE-THE-WORLD dept
France’s privacy regulator thinks it should be able to control what the world sees in Google’s search results. Back in June, the regulator said Google must apply the “right to be forgotten” ruling across all of its domains, not just Google.fr, etc.
Google rightly responded, “Go fuck forget yourself” (but in appeal form), as Jennifer Baker of The Register reports.
Google had argued that around 97 per cent of French users use Google.fr rather than Google.com, that CNIL was trying to apply French law extra-territorially and that applying the RTBF on its global domains would impede the public’s right to information and would be a form of censorship.
But France seems intent on standing up for the 3%. The regulating body has rejected Google’s appeal and declared its intent to bend the world to its interpretation of the RTBF ruling. As it sees it, what’s good for France is good for the rest of the connected world. And since all roads lead through Google, a link delisted at Google.fr must also be delisted at Google.com.
Geographical extensions are only paths giving access to the processing operation. Once delisting is accepted by the search engine, it must be implemented on all extensions, in accordance with the judgment of the ECJ.
If this right was limited to some extensions, it could be easily circumvented: in order to find the delisted result, it would be sufficient to search on another extension (e.g. searching in France using google.com), namely to use another form of access to the processing. This would equate stripping away the efficiency of this right, and applying variable rights to individuals depending on the internet user who queries the search engine and not on the data subject.
In any case, the right to delisting never leads to deletion of the information on the internet; it merely prevents some results to be displayed following a search made on the sole basis of a person’s name. Thus, the information remains directly accessible on the source website or through a search using other terms. For instance, it is impossible to delist an event.
Yes, delisting at one domain means it’s still accessible at others. That’s the way these things are supposed to work. Perhaps the government bodies involved in this decision might have considered the unintended side effects before deciding RTBF was a great idea with minimal flaws.
The general tone of the regulator’s response is that Google is being deliberately obtuse when it claims compliance at Google.fr (for example) is following the letter of the law. The French governing body wants Google to follow the spirit of the law, which means basically anticipating various governments’ next moves after another hole in their “forget me now” plan presents itself.
CNIL then makes this disingenuous statement about its decision.
Finally, contrary to what Google has stated, this decision does not show any willingness on the part of the CNIL to apply French law extraterritorially. It simply requests full observance of European legislation by non European players offering their services in Europe.
If this is what it’s actually requesting, complying at French domains would be all that was required of Google. But it isn’t. It’s asking for “full observance” and then leaving it up to Google to comply with requests in countries where the Right to Be Forgotten isn’t recognized as an actual “right.”
Those behind the push for a right to be forgotten should have seen this coming. They also should have recognized the limits of their desires. Pushing Google to delist any RTBF request across all domains allows Europe to decide what can and can’t be seen (at least through Google’s search engine) by the rest of the world. And yet, the regulating body calling for this ridiculous “solution” has the gall to claim it’s not actually applying its decision extra-territorially, but that Google’s global reach somehow obliges it to do this “voluntarily,” if only to maintain the consistency regulators had in mind when they started enforcing the “right to be forgotten.”
The deflectionary reminder that the content isn’t actually deleted from the web is a cheap dodge. What’s never acknowledged in these rulings is that removing links from search engine results is pretty much the same thing as removing them from the original websites. If search engines can’t “find” it, it ceases to exist for all intents and purposes. Giving people the power to selectively edit the web without even acquiring a court order was — and is — a bad idea. The EU continues to assert the general public has the right to rewrite their own history, and now, with decisions like these, it’s forcing the rest of the world to play along with these edited narratives.
Filed Under: france, free speech, global censorship, jurisdiction, right to be forgotten
Companies: google