
Yet Another EU Data Protection Authority Says Google Analytics Violates The Law

from the this-is-why-we-can't-have-nice-things dept

It’s kind of weird that, in some convoluted way, the NSA may be killing Google Analytics, at least in the EU. You may recall that back in 2020, Max Schrems won his second big data privacy case, this one against the EU/US Privacy Shield agreement, which allowed data from people in the EU to be transferred to US companies under certain conditions. The “Privacy Shield” was a concept the EU and US cooked up after their earlier setup, the EU/US “safe harbor” framework, was tossed out in a previous case brought by Schrems. In both cases, a key underlying issue was the NSA’s ability to conduct mass surveillance on the internet. The failure to fix that between the safe harbor framework and the Privacy Shield meant that the Privacy Shield was doomed from the start.

Earlier this year, the US and EU announced a new version of the Privacy Shield, though details were still lacking. Assuming the NSA isn’t giving up its powers to surveil much of the internet, the new agreement doesn’t seem likely to survive Schrems’ next challenge.

In the meantime, though, it’s causing all sorts of issues. And many of those issues are basically: Google Analytics. Most recently, Italy’s data protection authority said that using Google Analytics violates the GDPR by sending data overseas, something that can’t be done without a new Privacy Shield (or equivalent) agreement between the US and the EU.

As TechCrunch points out, this decision is just the latest in an increasingly long line of similar rulings:

Earlier this month, France’s data protection regulator issued updated guidance warning over illegal use of Google Analytics — following a similar finding of fault with a local website’s use of the software in February.

[…]

Austria’s DPA also upheld a similar complaint over a site’s use of Google Analytics in January.

While the European Parliament found itself in hot water over the same core issue at the start of the year.

Leaving aside the ongoing irony of the EU Parliament’s own website violating the GDPR, at the heart of all this remains a simple fact: the NSA has basically screwed up Google Analytics for the EU.

Now, there are all sorts of reasons to dislike Google Analytics — we ditched it ourselves — but it’s important to remember that at the core of this is the NSA basically making things impossible for a number of American internet companies. This is one of many reasons (and certainly lower in importance than basic civil rights and liberties) why it’s still amazing that we’ve more or less allowed the NSA to continue its surveillance efforts with only minor modifications in the decade or so since Ed Snowden leaked the details.

Filed Under: data sharing, eu, gdpr, google analytics, italy, nsa, privacy shield, safe harbor, surveillance
Companies: google

Academic Paper Shows How Badly The Mainstream Media Misled You About Section 230

from the proving-the-media-distortion-field dept

We’ve had to publish many, many articles highlighting just how badly the mainstream media has misrepresented Section 230, with two of the worst culprits being the NY Times and the Wall Street Journal. Professor Eric Goldman now points us to an impressive 200-page master’s thesis by a journalism student at UNC named Kathryn Alexandria Johnson, analyzing just how badly both the NYT and the WSJ flubbed their reporting on Section 230.

The paper is actually more than just that, though. It includes a really useful description of Section 230 itself, along with its history and some of the often confused nuances around the law. Johnson clearly did her homework here, and it is one of the best summaries of the issues around 230 I’ve seen. The paper is worth reading for that section (the first half of the paper) alone.

But then we get to the analysis. Johnson notes that the Times and the Journal are basically the most powerful “agenda setting” newspapers in the US, so how they cover issues like Section 230 can have a huge impact on actual policy. And they failed. Badly.

The thesis explores the data in multiple ways, but one chart stands out: when talking about the impact of 230, both newspapers almost always frame the law as having a negative impact. They almost never describe it as having a positive impact.

That is, out of 116 articles in the NY Times that talk about the impact of Section 230, 107 described it negatively. Another six gave a combination of negative and positive, and only two (two!) described the impact positively. For the WSJ, it’s basically the same story: 88 articles discussing the impact of Section 230, 80 of them purely negative. Another four with a combination of negative and positive, and just three describing the law’s impact positively. That means, grand total, 91.7% of the articles in these two agenda-setting newspapers described the law’s impact as negative, with another 4.9% describing both negative and positive impacts, and just 2.5% describing the impact positively.
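As a quick sanity check, the aggregate percentages follow directly from the per-paper counts above (the dictionary layout here is ours, not the thesis’s):

```python
# Per-paper counts of how articles framed Section 230's impact,
# as reported in Johnson's thesis.
nyt = {"total": 116, "negative": 107, "mixed": 6, "positive": 2}
wsj = {"total": 88, "negative": 80, "mixed": 4, "positive": 3}

total = nyt["total"] + wsj["total"]  # 204 articles across both papers

def pct(frame):
    """Combined share of articles using a given frame, as a percentage."""
    return round(100 * (nyt[frame] + wsj[frame]) / total, 1)

print(pct("negative"), pct("mixed"), pct("positive"))  # 91.7 4.9 2.5
```

The combined shares come out to 91.7% negative, 4.9% mixed, and 2.5% positive, matching the figures above.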

That’s pretty amazing. Now, some may argue that if you truly believe the impact of Section 230 is negative, then these two publications are only being accurate in their descriptions. But for those of us who have studied Section 230 and understand its broadly positive aspects, the whole thing seems crazy.

I’ve had many people argue over the years that the big newspapers like the Times and the Journal have an institutional interest in trashing social media and the internet, because it takes away from their gatekeeping powers. And I’ve always brushed that aside as an exaggeration. But the numbers here are pretty damn stark.

The paper also explores how these newspapers sought to frame Section 230, and found that they did a very poor job explaining how it has multiple functions, often choosing to focus on one framing — rather than a more accurate framing of how Section 230 is structured to encourage multiple things. It protects websites from being held liable as a publisher of third party content, which encourages more websites to allow for more speech, and it protects them from content moderation decisions creating liability, enabling them to cultivate their communities in the way they see fit. Understanding both of these is kind of important to understanding Section 230, but it appears that these papers rarely gave a complete description. Also, perhaps oddly (or perhaps because they’re just super confused themselves), they often used the publisher framing, even though they were really talking about the content moderation function — which may very well be why so many others, including politicians, are so confused about 230.

As previously discussed, the majority of definitions including only the “publisher” frame. Interestingly, despite a majority of definitions referencing only platforms’ protection from liability for the content posted by third-parties (59.5%), a large majority of articles were focused on the societal impacts of censorship and deplatforming. Such issues most closely map to the “content moderation” frame. And despite many of the articles’ focus on censorship and deplatforming, very few articles included definitions with only the “content moderation” frame.

For the purposes of creating the most informed electorate, the most helpful definitions are those that present both of Section 230’s functions. These articles were coded as “Both” when discussed above. Only a third of the definitions of Section 230 included both the publisher and content moderation frame, indicating a weakness in journalists’ reporting on this issue. Coverage in The Wall Street Journal more frequently defined Section 230 in terms of both publisher activity and content moderation activity than The New York Times, but coverage in The Wall Street Journal still mentioned both legal frames less than half the time. Coverage could be improved by including definitions that explain both legal frames associated with Section 230, regardless of the focus of the article.

Then there’s the question of how often these two famed newspapers just flat out got things wrong about Section 230. The rate may be lower than you might expect (Johnson found it happened 16.2% of the time), but that’s still kind of astounding. This is a fundamental issue that has gotten a ton of attention, and to still get it wrong in about one out of every six articles is indefensible.

It is interesting, though, to note that the WSJ misrepresented the law at nearly double the rate of the NY Times. Again, people have pointed out that Rupert Murdoch, who owns the WSJ, has more or less declared war on the entire internet, and noted that could impact the coverage of things like Section 230. I always assumed that would be a stretch, but the data here is, once again, noteworthy.

As Johnson notes in her paper, many of the misrepresentations were not necessarily outright falsehoods (though there were some of those), but “rather statements lacking enough important context or requiring clarification.”

Then there’s this:

Every misrepresentation identified in the entire sample could be credited to an unattributed source. Therefore, journalists themselves were the source of each misrepresentation. This finding suggests that either journalists themselves do not fully understand the nuance of how Section 230 is applied or that journalists do understand how Section 230 functions but are not accurately conveying that knowledge to the reader.

For what it’s worth, it may also be the fault of the editors, rather than the journalists. I am familiar with at least one situation in which a major newspaper misrepresented Section 230, and the journalist later explained to me that they had fought for the correct representation, but their editor insisted on running a misleading one.

Johnson’s paper also highlights how these misrepresentations can lead to further misunderstanding of Section 230.

Understanding that the First Amendment, and not Section 230, enables platforms to moderate content is important to social understanding regarding how platforms would function if Section 230 was reformed or repealed. Without the portion of Section 230 that precludes publisher liability, platforms would still be able to remove content that, for example, violated their community standards; however, platforms would be less likely to do so because they would once again be liable for any unlawful content that they did not remove.

Johnson also, correctly, summarizes what would actually happen with the removal of Section 230: there would be fewer places to speak online.

In fact, Australia’s high court recently ruled that news media outlets are to be treated as “publishers” of the unlawful content that is posted in comments sections on social media. In response, news media outlets began disabling their comments sections due to their inability to constantly moderate all comments. Removing the comments section was the easiest way to protect themselves from legal liability. This anecdote suggests that if Section 230 was changed and platforms were treated as publishers of third-party content, platforms would begin restricting users’ ability to post on their sites—severely stifling the ability of the public to share content and ideas online. Limiting the public’s ability to communicate online has negative implications for self-governance beyond just debate and discussion regarding Section 230. The internet provides a forum for citizens to ask questions, seek answers, and engage in debate about important policy issues. As a “vast democratic forum[ ]” the internet has democratized speech by lowering the barrier of entry for individuals to speak, be heard, and engage in debates about important issues facing society. In this way, Section 230 creates a causality dilemma. Section 230 is necessary to create the speech environment online that is required for individuals to debate and discuss issues related to Section 230.

Johnson’s paper also highlights how many stories about 230 inaccurately refer to it as a “safe harbor” rather than an “immunity.” As the paper notes, this is an important distinction. DMCA 512 is a safe harbor, and in order to make use of it, you need to meet a bunch of qualifications. This is why there is a long history of case law involving extensive litigation about a bunch of different factors to determine if a site qualifies for the DMCA safe harbor or if it “loses” the safe harbor. But 230 is an immunity, which is different. You can’t lose an immunity. You don’t have to take any steps to get the immunity. And one of the biggest misconceptions about 230 is that sites can take some sort of action that loses them the protections. That’s not true, but when news organizations report on it as a safe harbor, they support that misconception.

There’s much, much more in the paper, but it’s quite an excellent thesis, incredibly detailed, including getting a lot of very nuanced and complex topics correct that (as the paper itself shows) journalists often get very, very wrong. And it also adds clear data to the discussion. Just an all around excellent piece of scholarship.

Filed Under: 1st amendment, immunity, kathryn johnson, misleading, narrative, reporting, safe harbor, section 230
Companies: ny times, wall street journal, wsj

from the fine,-just-drop-Article-13 dept

Last week, as the last round of “trilogue” negotiations was getting underway in the EU on the EU Copyright Directive, we noted a strange thing. While tech companies and public interest groups have been speaking out loudly against Article 13, a strange “ally” also started complaining about it: a bunch of TV, movie and sports organizations started complaining that Article 13 was a bad idea. But… for very different reasons. Their concern was that regulators had finally begun to understand the ridiculousness of Article 13 and had been trying to add some “safe harbors” into the law. Specifically, the safe harbors would make it clear that if platforms followed certain specific steps to try to keep infringing works off their platforms, they would avoid liability. But, according to these organizations, safe harbors of any kind are a non-starter.

Those same groups are back with a new letter that’s even more unhinged and more explicit about this. The real issue is that they recently got a ruling out of a German court that basically said platforms are already liable for any infringement, and they’re now afraid that Article 13 will “soften” that ruling by enabling safe harbors.

In a letter of 1 December we alerted the three EU institutions that the texts under discussion would undermine current case law of the Court of Justice of the European Union (CJEU) which already makes it clear that online content sharing service providers (OCSSPs) communicate to the public and are not eligible for the liability privilege of Article 14 E-Commerce Directive (ECD). The proposal would further muddy the waters of jurisprudence in this area in light of the pending German Federal Court of Justice (Bundesgerichtshof) referral to the CJEU in a case involving YouTube/Google and certain rightholders, addressing this very issue. The initial goal of Article 13 was to codify the existing case-law in a way that would enable right holders to better control the exploitation of their content vis a vis certain OCSSPs which currently wrongfully claim they benefit from the liability privilege of Article 14 ECD. Unfortunately, the Value Gap provision has mutated in such a way that it now creates a new liability privilege for big platforms and therefore even further strengthens the role of OCSSPs to the direct detriment of rightholders.

First of all, it is complete and utter bullshit to claim that Article 13 was meant “to codify existing case law.” Article 13 was designed to create an entirely new liability regime that deliberately sought to avoid Article 14 of the E-Commerce Directive (ECD). The ECD functions somewhat akin to the DMCA’s safe harbors in the US, in that it includes intermediary liability protections for sites that comply with takedown notices in a reasonable manner. The entire point of Article 13 in the EU Copyright Directive was to take copyright out of the E-Commerce Directive and to remove those safe harbors. To claim otherwise is laughable.

It is, of course, hilarious that these companies, having just gotten a good ruling in their favor on this point, are suddenly freaking out that any safe harbor might exist for internet platforms. And here they’re explicit about how against a safe harbor they are:

Last week, we proposed a balanced and sound compromise solution consisting in guidance on the issue of OCSSP liability with reference to the existing jurisprudence of the CJEU. This solution would ensure rightholder collaboration in furtherance of the deployment of appropriate and proportionate measures as well as addressing the potential liability of uploaders where the platform has concluded a license, without the creation of any new safe harbours for big platforms. We continue to believe that this reasonable approach would have broad support, including in the rightholders community and could at the same time conciliate different views of Member States and different political groups in the European Parliament, without the need to give powerful active platforms the gift of a new liability privilege which goes beyond the stated intent of the proposed copyright reform. We also indicated that if, on the contrary, any new safe harbour/“mitigation of liability” would be part of a final trilogue agreement, we want to be excluded from the entire value gap provision.

It’s also hilarious that they refer to this as “the value gap provision.” The “value gap” is a made up concept by some legacy copyright companies to complain that their business models aren’t as all powerful as they used to be, and therefore the government must step in to force other companies to give them money.

Also note the messaging here: they don’t talk about what would be best for the public. Just for “the rightsholder community.”

Anyway, if they want to be “excluded” from Article 13 entirely, I think that’s fine. The best solution here is the obvious one: the EU can drop Article 13 entirely.

Filed Under: article 13, copyright, eu, eu copyright directive, intermediary liability, safe harbor
Companies: mpa, premier league

Facebook Asked To Change Terms Of Service To Protect Journalists

from the a-chance-to-fix-things dept

There are plenty of things to be concerned about regarding Facebook these days, and I’m sure we’ll be discussing them for years to come, but the Knight First Amendment Center is asking Facebook to make a very important change as soon as possible: creating a safe harbor for journalists who are researching public interest stories on the platform. Specifically, the concern is that basic tools used for reporting likely violate Facebook’s terms of service, which could allow Facebook to go after reporters under the CFAA. From the letter:

Digital journalism and research are crucial to the public’s understanding of Facebook’s platform and its influence on our society. Many of the most important stories written about Facebook and other social media platforms in recent months have relied on basic tools of digital investigation. For example, research published by an analyst with the Tow Center for Digital Journalism, and reported in The Washington Post, uncovered the true reach of the Russian disinformation campaign on Facebook. An investigation by Gizmodo showed how Facebook’s “People You May Know” feature problematically exploits “shadow” profile data in order to recommend friends to users. A story published by ProPublica revealed that Facebook’s self-service ad platform had enabled advertisers of rental housing to discriminate against tenants based on race, disability, gender, and other protected characteristics. And a story published by the New York Times exposed a vast trade in fake Twitter followers, some of which impersonated real users.

Facebook’s terms of service limit this kind of journalism and research because they ban tools that are often necessary to it — specifically, the automated collection of public information and the creation of temporary research accounts. Automated collection allows journalists and researchers to generate statistical insights into patterns, trends, and information flows on Facebook’s platform. Temporary research accounts allow journalists and researchers to assess how the platform responds to different profiles and prompts.

Journalists and researchers who use tools in violation of Facebook’s terms of service risk serious consequences. Their accounts may be suspended or disabled. They risk legal liability for breach of contract. The Department of Justice and Facebook have both at times interpreted the Computer Fraud and Abuse Act to prohibit violations of a website’s terms of service. We are unaware of any case in which Facebook has brought legal action against a journalist or researcher for a violation of its terms of service. In multiple instances, however, Facebook has instructed journalists or researchers to discontinue important investigative projects, claiming that the projects violate Facebook’s terms of service. As you undoubtedly appreciate, the mere possibility of legal action has a significant chilling effect. We have spoken to a number of journalists and researchers who have modified their investigations to avoid violating Facebook’s terms of service, even though doing so made their work less valuable to the public. In some cases, the fear of liability led them to abandon projects altogether.

This is a big deal, as succinctly described above. We’ve talked in the past about how Facebook has used the CFAA to sue useful services and how damaging that is. But the issues here have to do with actual reporters trying to better understand aspects of Facebook, for which there is tremendous and urgent public interest, as the letter lays out. Also, over at Gizmodo, Kashmir Hill has a piece about how Facebook threatened the site over its investigation of Facebook’s “People You May Know” feature, showing that this is not just a theoretical concern:

In order to help conduct this investigation, we built a tool to keep track of the people Facebook thinks you know. Called the PYMK Inspector, it captures every recommendation made to a user for however long they want to run the tool. It’s how one of us discovered Facebook had linked us with an unknown relative. In January, after hiring a third party to do a security review of the tool, we released it publicly on Github for users who wanted to study their own People You May Know recommendations. Volunteers who downloaded the tool helped us explore whether you’ll show up in someone’s People You May Know after you look at their profile. (Good news for Facebook stalkers: Our experiment found you won’t be recommended as a friend just based on looking at someone’s profile.)

Facebook wasn’t happy about the tool.

The day after we released it, a Facebook spokesperson reached out asking to chat about it, and then told us that the tool violated Facebook’s terms of service, because it asked users to give it their username and password so that it could sign in on their behalf. Facebook’s TOS states that, “You will not solicit login information or access an account belonging to someone else.” They said we would need to shut down the tool (which was impossible because it’s an open source tool) and delete any data we collected (which was also impossible because the information was stored on individual users’ computers; we weren’t collecting it centrally).

The proposal in the letter is that Facebook amend its terms of service to create a “safe harbor” for journalism. While Facebook recently agreed to open up lots of data to third party academics, it’s important to note that journalists and academics are not the same thing.

The safe harbor we envision would permit journalists and researchers to conduct public-interest investigations while protecting the privacy of Facebook’s users and the integrity of Facebook’s platform. Specifically, it would provide that an individual does not violate Facebook’s terms of service by collecting publicly available data by automated means, or by creating and using temporary research accounts, as part of a news-gathering or research project, so long as the project meets certain conditions.

First, the purpose of the project must be to inform the general public about matters of public concern. Projects designed to inform the public about issues like echo chambers, misinformation, and discrimination would satisfy this condition. Projects designed to facilitate commercial data aggregation and targeted advertising would not.

Second, the project must protect Facebook’s users. Those who wish to take advantage of the safe harbor must take reasonable measures to protect user privacy. They must store data obtained from the platform securely. They must not use it for any purpose other than to inform the general public about matters of public concern. They must not sell it, license it, or transfer it to, for example, a data aggregator. And they must not disclose any information that would readily identify a user without the user’s consent, unless the public interest in disclosure would clearly outweigh the user’s interest in privacy.

There are a few more conditions in the proposal, including not interfering with the proper working of Facebook. The letter includes a draft amendment as well.

While some may hesitate at anything that seems to carve out different rules for a special class of people, I appreciate that the approach here is focused on carving out a safe harbor for journalism rather than journalists. That is, as currently structured, anyone could qualify for the safe harbor if they are engaged in acts of journalism, and there is no silly requirement about being attached to a well-known media organization or anything like that. The entire setup seems quite reasonable, so now we’ll see how Facebook responds.

Filed Under: cfaa, data collection, journalism, reporting, safe harbor, tools
Companies: facebook, knight 1st amendment center

EU Politicians Tell European Commission To Suspend Privacy Shield Data Transfer Framework

from the US-must-try-harder dept

A couple of months ago, we wrote about an important case at the Court of Justice of the European Union (CJEU), the region’s highest court. The final judgment is expected to rule on whether the Privacy Shield framework for transferring EU personal data to the US is legal under EU data protection law. Many expect the CJEU to throw out Privacy Shield, which does little to address the earlier criticisms of the preceding US-EU agreement: the Safe Harbor framework, struck down by the same court in 2015. However, that’s not the only problem that Privacy Shield is facing. One of the European Parliament’s powerful committees, which helps determine policy related to civil liberties, has just issued a call to the European Commission to suspend the Privacy Shield agreement unless the US tries harder:

The data exchange deal should be suspended unless the US complies with it by 1 September 2018, say MEPs, adding that the deal should remain suspended until the US authorities comply with its terms in full.

There are a couple of reasons why the European Parliament’s committee has taken this unusual step. One is the recent furore surrounding Cambridge Analytica‘s use of personal data collected by Facebook, which the EU politicians incorrectly call a “data breach”. However, as they correctly point out, both companies were certified under Privacy Shield, which doesn’t seem to have prevented the data from being misused:

Following the Facebook-Cambridge Analytica data breach, Civil Liberties MEPs emphasize the need for better monitoring of the agreement, given that both companies are certified under the Privacy Shield.

MEPs call on the US authorities to act upon such revelations without delay and if needed, to remove companies that have misused personal data from the Privacy Shield list. EU authorities should also investigate such cases and if appropriate, suspend or ban data transfers under the Privacy Shield, they add.

The other concern is the recently-passed Clarifying Lawful Overseas Use of Data Act (CLOUD Act), which grants the US and foreign police access to personal data across borders. This undermines the effectiveness of the privacy protections of the data transfer scheme, since it would allow the personal data of EU citizens to be accessed more easily. The head of the civil liberties committee, Claude Moraes, is quoted as saying:

While progress has been made to improve on the Safe Harbor agreement, the Privacy Shield in its current form does not provide the adequate level of protection required by EU data protection law and the EU Charter. It is therefore up to the US authorities to effectively follow the terms of the agreement and for the Commission to take measures to ensure that it will fully comply with the GDPR.

The mention of the new GDPR there is significant, since it raises the bar for the Privacy Shield framework’s compliance with EU data protection laws. That greater stringency makes it more likely that the European Commission will suspend the deal, and that the CJEU will strike it down permanently at some point.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: cjeu, data, eu, eu commission, eu parliament, privacy, privacy shield, safe harbor, surveillance, us
Companies: facebook

Top EU Data Protection Body Asks US To Fix Problems Of 'Privacy Shield' Or Expect A Referral To Region's Highest Court

from the please-don't-make-us-do-this dept

The Privacy Shield framework is key to allowing personal data to flow legally across the Atlantic from the EU to the US. As we’ve noted several times this year, there are a number of reasons to think that the EU’s highest court, the Court of Justice of the European Union (CJEU), could reject Privacy Shield just as it threw out its predecessor, the Safe Harbor agreement. An obscure but influential advisory group of EU data protection officials has just issued its first annual review of Privacy Shield (pdf). Despite its polite, bureaucratic language, it’s clear that the privacy experts are not happy with the lack of progress in dealing with problems pointed out by them previously. As the “Article 29 Data Protection Working Party” — the WP29 for short — explains:

Based on the concerns elaborated in its previous opinions … the WP29 focused on the assessment of both the commercial aspects of the Privacy Shield and on the government access to personal data transferred from the EU for the purposes of Law Enforcement and National Security, including the legal remedies available to EU citizens. The WP29 assessed whether these concerns have been solved and also whether the safeguards provided under the EU-U.S. Privacy Shield are workable and effective.

As far as the commercial aspects of Privacy Shield are concerned, the WP29 is unhappy about a number of important “unresolved” issues such as “the lack of guidance and clear information on, for example, the principles of the Privacy Shield, on onward transfers [of personal data] and on the rights and available recourse and remedies for data subjects.” The issue of US government access to the personal data of EU citizens is even thornier. Although the WP29 welcomed efforts by the US government to become more “transparent on their use of their surveillance powers”, the collection of and access to personal data for national security purposes under both section 702 of FISA and Executive Order 12333 were still a problem. On the former, WP29 suggests:

Instead of authorizing surveillance programs, section 702 should provide for precise targeting, along with the use of the criteria such as that of “reasonable suspicion”, to determine whether an individual or a group should be a target of surveillance, subject to stricter scrutiny of individual targets by an independent authority ex-ante.

As regards Executive Order 12333, WP29 wants the Privacy and Civil Liberties Oversight Board (PCLOB) “to finish and issue its awaited report on EO 12333 to provide information on the concrete operation of this Executive Order and on its necessity and proportionality with regard to interferences brought to data protection in this context.” That’s likely to be a bit tricky, because the PCLOB is understaffed due to unfilled vacancies, and possibly moribund. In conclusion, the WP29 “acknowledges the progress of the Privacy Shield in comparison with the invalidated Safe Harbor Decision”, but underlines that the EU group has “identified a number of significant concerns that need to be addressed by both the [European] Commission and the U.S. authorities.” It spells out what will happen if they aren’t sorted out:

In case no remedy is brought to the concerns of the WP29 in the given time frames, the members of WP29 will take appropriate action, including bringing the Privacy Shield Adequacy decision to national courts for them to make a reference to the CJEU for a preliminary ruling.

That is, it will ask the EU’s highest court to rule on the so-called “adequacy decision” of the European Commission, where it decided that Privacy Shield offered enough protection for EU personal data moving to the US. There’s a clear implication that WP29 doubts the CJEU’s ruling will be favorable unless all the changes it has requested are made soon. And without the Privacy Shield framework, it will be much harder to transfer personal data legally across the Atlantic. Moreover, the EU’s data protection laws are about to become even more stringent next year, when the new General Data Protection Regulation (GDPR) is enforced. Organizations in breach of the GDPR can be fined up to 4% of annual global turnover, which means even the biggest Internet companies will have a strong incentive to comply.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: eu, executive order 12333, privacy, privacy shield, safe harbor, section 702, surveillance

from the so-dumb dept

If you run any kind of website, it’s super important that you file with the Copyright Office to officially register a DMCA agent. This is a key part of the DMCA. If you want to make use of the DMCA’s safe harbors — which shield websites from liability for infringing material posted by users — then you have to first register with the Copyright Office. Larger corporate sites already know this, but many, many smaller sites do not. This is why for years we’ve posted messages reminding anyone who has a blog to just go and register with the Copyright Office to get basic DMCA protections (especially after a copyright troll went after some smaller blogs who had not done so).

A few months back, we noted, with alarm, that the Copyright Office was considering a plan to revamp how it handled DMCA registrations, which had some good points — mainly making the registration process cheaper — but also a really horrific idea: requiring sites to re-register every three years or lose their safe harbor protections. Despite many people warning the Copyright Office how this would be a disaster, on Monday the Office announced it was going ahead with the plan anyway, not even acknowledging that thousands of sites are likely to get fucked over by this move:

The United States Copyright Office has completed development of a new electronic system to designate and search for agents to receive notifications of claimed infringement, as required under the Digital Millennium Copyright Act (DMCA). Accordingly, the Office is publishing a final rule in the Federal Register tomorrow to implement that system, replacing an interim rule that the Office had adopted after the DMCA’s enactment. A prepublication version of the rule is available for public inspection here. The rule is effective on December 1, 2016, the date that the new online registration system and directory will be launched. In the meantime, users can begin to acquaint themselves with the new system by watching the video tutorials available here. Any service provider that has previously designated an agent with the Office will have until December 31, 2017, to submit a new designation electronically through the new online registration system.

Got that? Even if you followed the law and registered before, if you don’t re-register (and pay another fee), then you will be kicked off the list of registered DMCA agents, meaning that you will lose your DMCA safe harbors. Basically, the Copyright Office is announcing that it is kicking EVERY SINGLE site that registered for DMCA safe harbor protection OFF THE SAFE HARBOR LIST. That’s horrendous. You have until the end of 2017, but really, how many sites (especially smaller sites or one-person blogs) that don’t closely follow copyright law are going to realize they need to do this? This is a recipe for disaster, and is basically the Copyright Office giving a giant middle finger to the DMCA’s safe harbors for the sites that need them the most: smaller blogs and forum sites.

The Copyright Office’s stated reason for doing this is nonsensical. It complains about outdated information in its database:

Since the DMCA’s enactment in 1998, online service providers have designated agents with the Copyright Office via paper filings, and the Office has made scanned copies of these filings available to the public by posting them on the Office’s website. Although the DMCA requires service providers to update their designations with the Office as information changes, an examination of a large sample of existing designations found that 22 percent were for defunct service providers, while approximately 65 percent of nondefunct service providers’ designations had inaccurate information (when compared to the information provided by service providers on their own websites).

The correct way to deal with this is to create a campaign to encourage sites to update their info — not to kick everyone off. This is the goddamn Copyright Office, whose whole job is about “registering” information about people. Does the Copyright Office threaten to dump someone’s copyright if a copyright holder’s information is out of date? Of course not. That would be ridiculous. But it’s now going to do that to any website that doesn’t remember to constantly re-register a DMCA agent. This is a recipe for disaster, and for no good reason at all. And the idea that false or outdated info is a problem doesn’t make much sense either. If a site has incorrect info, then they risk missing DMCA takedown notifications, meaning that they already face problems in not keeping their info up to date. The “solution” is not to make life worse for lots of other sites.

The final rule insists that this is no big deal because outdated info is “functionally equivalent to not designating an agent.” However, as Eric Goldman points out, that’s a total non sequitur. While it’s true that those with outdated info may not have an official registration any more, that’s no excuse for kicking off all those sites that do have valid registration. Those sites are fucked, for no reason other than the Copyright Office decided it has the right and power to just dump the entire list as of December 1.

As Goldman notes in his write-up of this clusterfuck, there’s a simple solution here that the Copyright Office could have done, but didn’t:

And when the Copyright Office disingenuously says the renewal requirement “should in many cases actually assist service providers in retaining their safe harbor, rather than serving to deprive them of it,” it mockably conflates its ability to communicate helpful information to registrants with draconian substantive policy effects. Here’s an alternative: send out the reminder notices but don’t make the consequences of non-renewal COMPLETE FORFEITURE OF THE DMCA SAFE HARBORS. Similarly, the analogies to other recurring obligations, like business licenses, ignore (1) the frequently less draconian consequences of non-compliance, and (2) the reality that many of these recurring obligations are nevertheless frequently mismanaged by both big and small companies.

Goldman also notes another horrifying part of the plan. The Copyright Office — ostensibly to “help” those sites who forget to renew their DMCA agent — will list those sites on a public website of “lapsed” safe harbor registrants. Except, it’s likely that instead of “helping” those sites, it will actually just be presenting a target list for copyright trolls to go searching for any infringing material on any site with a lapsed safe harbor registration.

There are questions about what can be done here. It’s possible with new management coming to the Copyright Office that it may reverse this move, but that seems unlikely. It’s quite likely, however, that there will be some lawsuit over this, especially as the Copyright Office is single-handedly removing protections for a whole bunch of sites. Another option is that Congress could get involved, but that would likely create an even bigger mess (Congress touching copyright law — especially anything to do with the DMCA’s safe harbors — is not likely to end well).

This just seems like yet another example (in a ridiculously long list) of how screwed up the Copyright Office is these days.

Filed Under: 512 safe harbors, copyright, copyright office, dmca, dmca 512, dmca registration, safe harbor

EU And US Come To 'Agreement' On Safe Harbor, But If It Doesn't Stop Mass Surveillance, It Won't Fly

from the separate-out-the-issues dept

Back in October, we noted that it was a really big deal that the European Court of Justice had said that the EU/US Safe Harbor framework violated data protection rules, because it had become clear that the NSA was scooping up lots of the data. The issue, if you’re not aware of it, is that under the safe harbor framework, US internet companies could have European customers and users, with their information and data stored on US servers. Without the safe harbor framework, there are at least some cases where many companies would be forced to set up separate data centers in Europe, and make sure European information is kept there.

Many privacy activists are actually supportive of keeping the data in Europe altogether, but I still think that would be a disaster for lots of internet companies and services — especially smaller ones. The big guys — Google, Facebook, Microsoft, Yahoo, Twitter, etc. — can afford to have separate European data centers. A small company — like Techdirt — cannot. Requiring separate data centers and careful separation of the data would ensure less competition and fewer startups to take on the big guys. That’s a problem. Beyond that, having those separate data centers could actually lead to even less privacy in the long run, because having many jurisdictions in which data is kept means that, inevitably, some of those jurisdictions will fall into states that have even worse surveillance and fewer data protections — and also leaves open the opportunity for different data center setups, which may lead to more vulnerabilities. Remember, when the NSA broke into Google and Yahoo’s datacenters, they were the ones outside the US, which may have had weaker security. And, despite many Europeans not wishing to believe this, many European countries have many fewer restrictions on the kind of surveillance their intelligence agencies are able to do on local data and citizens.

The real issue here is mass surveillance overall. The only real way to fix this issue is to stop mass surveillance and go back to saying that intelligence agencies and law enforcement need to go back to doing targeted surveillance using warrants and true oversight. But, instead, the EU and the US keep trying to paper over this by coming up with a new agreement. That agreement was supposed to have been concluded by a fake “deadline” set for yesterday, but after missing that and claiming that progress had been made on a new agreement, a new deal was finally announced a few hours ago, with the ridiculous name “The EU-US Privacy Shield.”

Here’s the key part of the announcement:

* Strong obligations on companies handling Europeans’ personal data and robust enforcement: U.S. companies wishing to import personal data from Europe will need to commit to robust obligations on how personal data is processed and individual rights are guaranteed. The Department of Commerce will monitor that companies publish their commitments, which makes them enforceable under U.S. law by the U.S. Federal Trade Commission. In addition, any company handling human resources data from Europe has to commit to comply with decisions by European DPAs.

* Clear safeguards and transparency obligations on U.S. government access: For the first time, the US has given the EU written assurances that the access of public authorities for law enforcement and national security will be subject to clear limitations, safeguards and oversight mechanisms. These exceptions must be used only to the extent necessary and proportionate. The U.S. has ruled out indiscriminate mass surveillance on the personal data transferred to the US under the new arrangement. To regularly monitor the functioning of the arrangement there will be an annual joint review, which will also include the issue of national security access. The European Commission and the U.S. Department of Commerce will conduct the review and invite national intelligence experts from the U.S. and European Data Protection Authorities to it.

* Effective protection of EU citizens’ rights with several redress possibilities: Any citizen who considers that their data has been misused under the new arrangement will have several redress possibilities. Companies have deadlines to reply to complaints. European DPAs can refer complaints to the Department of Commerce and the Federal Trade Commission. In addition, Alternative Dispute resolution will be free of charge. For complaints on possible access by national intelligence authorities, a new Ombudsperson will be created.

The key thing here? The claim that the US “has ruled out indiscriminate mass surveillance on the personal data transferred to the US.” I’m curious about how much bullshit the NSA will be able to sneak under “indiscriminate.” I’m also curious as to what kind of real oversight there will be. The EU Commission and the Department of Commerce will be able to review, but we all know how good the NSA is at hiding what it’s actually doing from oversight bodies. Finally, the “ombudsperson” only matters if they have actual power, and that seems incredibly unlikely.

And as Max Schrems, who brought the original case that took down the safe harbors, is saying (over and over again), as it stands right now, it looks like this new deal will lose again in the EU courts.

And that brings us back to the underlying point. The effort to kill off the safe harbor agreement wasn’t really about the safe harbor agreement at all, but to force the hand of the US government (and hopefully European governments as well) to recognize that they need to stop doing mass surveillance. The claim above about no indiscriminate mass surveillance pays lip service to that idea, but there needs to be some real and concrete change to make that happen. And that’s going to take more than the “exchange of letters” between the EU and the US that serves as the basis of this deal. It’s going to need actual surveillance reform, not just the “surveillance reform lite” we saw with the USA FREEDOM Act.

Again, I think having the ability to transfer data from the EU to the US is hugely important — which not everyone agrees with. Fragmenting the internet by requiring that data stays in certain countries seems as silly to me as geoblocking content. But the underlying issue here is not about where the data is stored — it’s about mass surveillance. Focusing the agreement on how to allow data transfers without actually tackling how to stop mass surveillance is inevitably a fake solution.

Filed Under: data protection, data transfers, eu, mass surveillance, privacy, privacy shield, safe harbor, us

Think Tank That Proposed SOPA Now Argues That US Should Encourage Countries To Censor The Pirate Bay

from the censorship dept

On Tuesday, the House Judiciary Committee held a hearing on what sounds like a boring topic: “International Data Flows: Promoting Digital Trade in the 21st Century.” However, as we’ve discussed, this seemingly boring topic can have a profound impact on how the internet functions, and whether it remains a global platform for free expression — or becomes a fragmented system used for widespread censorship, surveillance and control. In other words, this is important.

The hearing was mostly pretty bland (as Congressional hearings tend to be), but at one point, Robert Atkinson, the President of the Information Technology and Innovation Foundation (ITIF) argued that the US should be encouraging global censorship if it’s for sites like The Pirate Bay. You can watch the portion of the video below (it should start at the right moment, but if not, jump to 1 hour, 27 minutes and 40 seconds):

It starts with Rep. Jerry Nadler reading a question someone else clearly prepared for him, directed at Atkinson about how to handle situations in which different countries have different laws regarding free speech and content, and what that should mean for “data flows” across borders. In short, this is a question about “what should we do with countries who want to censor the internet — and should we allow that sort of thing.” Atkinson’s answer is a bit rambling, but he basically starts off by saying that we’ll never agree with some other countries on free speech and such… but then says no matter what, one thing we should all agree on is that it’s good to censor sites like the Pirate Bay and the US should encourage such blatant censorship worldwide:

I think it’s an untenable project that we would end up with “global harmony” on every single rule with regard to the internet. We’re not going to be able to do that. And we’re certainly not going to be able to do that with free speech. There are certain countries, particularly more traditional, religious countries that find pornography objectionable. We don’t with our… or at least we have free speech, we may find it objectionable, but we allow it. We’re not going to agree on that. And for certain things like that, countries are going to do that and I think we just have to be okay with that.

Another example is in Germany, you’re not allowed to download a copy of Mein Kampf. In the US, we can. Again, we’re not going to change the German view. I don’t know if they’re right or wrong. It doesn’t make any difference.

Where we can and should, though, take action, is there are some things that are clearly illegal under the WTO framework for intellectual property, for example piracy and intellectual property theft can be prosecuted. So when countries engage in steps, for example, to block certain websites that are clear piracy sites — like, for example, a web or a domain called “the pirate bay” that should be quite… you know we should be encouraging that. That’s quite different than blocking, say, Facebook or something like that, or blocking some site just because you don’t want competition.

Where to start? Well, how about I let Atkinson debunk Atkinson. In the question immediately preceding this one about blocking websites, Nadler had asked Atkinson about backdooring encryption. And there, Atkinson gave a much better answer, noting that it was a terrible idea (he’s right!), but then notes:

If they try to mandate that, they’re setting a dangerous precedent, for example, by letting the Chinese government do the exact same thing.

Uh. Yeah. And having the US government “encourage” censoring websites also sets a dangerous precedent by letting the Chinese government (and lots of other governments) point to the US as doing the same thing they do. But, as Atkinson and other copyright system supporters will undoubtedly scream, “that’s different — this is about copyright, not censorship.” Yeah, well, you’re not paying attention if you don’t recognize how copyright is used for political censorship as well. Remember how Russia was using copyright law to intimidate its critics? What you might not remember is that when China first set up its massive online censorship system, known as the Great Firewall of China, one of its key justifications to the outside world was that it would be used to stop piracy online. And, of course, during the big SOPA/PIPA fight, the Chinese were laughing at those of us in America who whined about their Great Firewall, while we were debating a proposal to set up an identical system.

Of course, it’s no surprise that Atkinson is making this argument. The organization he runs, ITIF, is frequently credited with first proposing the ideas behind SOPA in a white paper that came out right before the SOPA push. And ITIF famously argued in favor of SOPA by pointing to authoritarian countries who censor the internet as proof that SOPA wasn’t that harmful. Yes, Atkinson’s own firm suggested that the US should emulate China, Saudi Arabia, Iran, Syria and a number of other countries in censoring the internet. But, you know, “just for copyright.”

And this doesn’t even get to the issue of Atkinson’s assured statement that certain sites are “clear piracy sites.” Except, as we’ve noted over and over again, almost every great innovation around content delivery was decried as a “tool for piracy” originally. Radio, TV, cable TV, the photocopier, the VCR, the DVR, the mp3 player and YouTube and similar sites were all attacked as piracy tools originally. And yet every one of them actually opened up new and important arenas for content creation, distribution and monetization. What looks like a piracy tool in the early days often becomes a massive and legitimate business opportunity soon after (again: it was just four years after the MPAA’s Jack Valenti declared the VCR the “Boston Strangler” of the film industry that home video revenues surpassed box office revenues).

Either way, what Atkinson was saying here is both shocking and dangerous. He’s outright advocating a censorship regime based on his belief of what is and is not appropriate — and suggesting that the US should “encourage” other countries to censor the web without legal due process, without consideration for innovation, because he has decided which sites are bad. At the end he says that blocking The Pirate Bay is not like blocking Facebook. Yet, there are many people who argue that Facebook is, similarly, a giant piracy site. Whose definition is right in that context? And the same question can be asked about YouTube. Viacom sued YouTube claiming that it was just as bad as the Pirate Bay. Would Atkinson support countries blocking all access to YouTube “under the WTO”?

There is a rather astounding level of cognitive dissonance that some people, such as Atkinson, have around issues related to copyright and censorship. They assume, incorrectly, that copyright is some magical fairy tale world where it’s never used for censorship, and thus it’s fine to block “bad sites” where people like Atkinson get to decide what is and what is not bad. But all he’s doing is encouraging internet censorship, and giving massive amounts of cover to authoritarian regimes who want to censor the internet for all sorts of reasons. They can easily take Atkinson’s claims that we must encourage censorship over copyright and either abuse copyright for that purpose, or even just twist it slightly to note “well, blocking infringement is important to the US, and we feel the same way about political unrest.”

Atkinson’s ITIF lost its battle for SOPA nearly four years ago. It shouldn’t try to reintroduce the idea of a global platform for internet censorship today.

Filed Under: censorship, congress, copyright, data flows, digital trade, free flow of information, free speech, judiciary committee, jurisdiction, robert atkinson, safe harbor, sopa
Companies: itif

How NSA Surveillance May Result In Fragmenting The Internet: EU Court Leaning Towards Ending 'Privacy Safe Harbor'

from the this-could-be-a-mess dept

If you haven’t encountered it before, the “EU-US data protection safe harbor” is somewhat confusing to deal with. The basics, however, are that under an agreement between the US and the EU, if US companies wish to transfer data out of Europe and to American servers, they have to abide by this “safe harbor” process, whereby they agree to take certain steps to keep that data safe and out of prying eyes. The process itself is something of a joke (we at Techdirt have actually gone through it to make sure we weren’t violating the law — though I imagine many small American internet companies don’t even know it exists). You basically have to pay a company to declare you in compliance, which in reality often just means that the company reviews your terms of service/privacy policy to make sure it has specific language in it. There have been plenty of (potentially reasonable) complaints out of the EU that the safe harbor process doesn’t actually do much to protect Europeans’ data. That may be true, but the flipside of it isn’t great either. Without the safe harbor framework, it’s possible that it would be much more difficult for American internet companies to operate in Europe — or for Europeans to use American internet companies. Some in Europe may think that’s a good idea, until they suddenly can’t use large parts of the internet.

Either way, the whole safe harbor system has come under attack on a variety of fronts, and it looks close to breaking… all because of the NSA. Max Schrems, who made news back in 2011 by asking Facebook for a copy of all the data it had on him, argued that the NSA’s PRISM surveillance program violated EU data protection rules. The European Court of Justice’s Advocate General, Yves Bot, has now sided with Schrems and basically said that the NSA surveillance has made the safe harbor process invalid.

The European Court of Justice still needs to come out with its final decision, but it usually (though not always!) agrees with the Advocate General’s recommendation. Here, the Advocate General basically says that NSA surveillance has completely undermined the idea that the US can keep Europeans’ data safe, and thus the safe harbor cannot stand.

According to the Advocate General, that interference with fundamental rights is contrary to the principle of proportionality, in particular because the surveillance carried out by the United States intelligence services is mass, indiscriminate surveillance. Indeed, the access which the United States intelligence authorities may have to the personal data covers, in a generalised manner, all persons and all means of electronic communication and all the data transferred (including the content of the communications), without any differentiation, limitation or exception according to the objective of general interest pursued. The Advocate General considers that, in those circumstances, a third country cannot in any event be regarded as ensuring an adequate level of protection, and this is all the more so since the safe harbour scheme as defined in the Commission decision does not contain any appropriate guarantees for preventing mass and generalised access to the transferred data. Indeed, no independent authority is able to monitor, in the United States, breaches of the principles for the protection of personal data committed by public actors, such as the United States security agencies, in respect of citizens of the EU.

In short, thanks to indiscriminate mass surveillance by the NSA, we may witness a fractured and fragmented internet. That’s a big deal.

The EU Commission and the US have been negotiating for a while to change the EU-US Safe Harbor setup anyway, so it’s possible that even if the court follows the Advocate General’s suggestion, a new, more acceptable, safe harbor process will be put in place. But, in the short term, this could create quite a mess for the internet. Once again, we see how the NSA’s actions, which it claims are to “protect” America could end up doing massive economic damage to the internet.

Filed Under: advocate general, cjeu, data privacy, eu, eu court of justice, eucj, fragmentation, localization, max schrems, nsa, privacy, safe harbor, surveillance