German Parliament Rejects EU Commission Call For Client-Side Scanning
from the not-happening-here-if-we-can-help-it dept
Everybody agrees child sexual abuse material is a serious problem. Unfortunately, far too many supposedly serious people are coming up with very unserious “solutions” to the problem.
Pressure applied by lawmakers and law enforcement led to Apple deciding to get out ahead of the seemingly-impending mandates to “do something” about the problem. In August 2021, it declared its intent to engage in client-side scanning of users’ content, which would search for illegal material on users’ devices as well as in their cloud storage accounts. After receiving a ton of backlash, Apple backpedaled, putting its scanning plans on ice for the foreseeable future.
Apple recognized the problem, albeit after the fact. Legislators pushing for client-side scanning don’t appear to be getting any smarter about the issue, despite having a real-world example to learn from. A bunch of security researchers wrote a report detailing all the security and privacy issues client-side scanning introduces, noting that any tradeoffs in effectiveness of shutting down CSAM were extremely limited.
This too has been ignored. Government officials all over the world still think the best thing for the children is something that would reduce the security and privacy of children who own smartphones and are almost always connected to the internet. Two GCHQ employees wrote a paper suggesting the smart thing to do was mandate client-side scanning wherever it was needed. Bundled with that proposal was the implicit suggestion that end-to-end encryption was no longer an option — not when there are children to protect.
Less than a month after this paper was published, an EU commissioner composed an incomprehensible defense of client-side scanning, one presumably provoked by the EU Data Protection Board’s rejection of the entire premise, which pointed out the numerous violations of enshrined personal privacy rights client-side scanning would result in.
Somehow, despite all of this, the EU Commission is trying to move forward with mandated client-side scanning. Here’s what at least some members of the Commission want, as described by Hanna Bozakov in a blog post at Tutanota:
The EU proposal covers three types of sexualized abuse, such as depictions of abuse, previously unknown material, but also so-called grooming, i.e. targeted contact with minors with the intention of abuse.
The draft law is currently in the European process of becoming a law. If passed in its current form, it would force online service providers to scan all chat messages, emails, file upload, chats during games, video conferences etc. for child sexual abuse material. This would undermine everybody’s right to privacy and weaken the level of security online for all EU citizens.
Broad. Sweeping. Dangerous. These are all suitable terms for this proposal. And let’s not forget the children it’s supposed to help, who will be subjected to the same privacy invasions as everyone else the law covers.
Fortunately, there’s already some strong opposition to this proposal. The German Parliament has soundly rejected this push for client-side scanning, saying there’s no way it’s willing to inflict this privacy invasion on its constituents.
While the German Parliament itself is not directly involved with the EU Commission’s proposal to make client-side scanning of encrypted communication mandatory for online services, the hearing was still a great success for digital rights groups and privacy activists.
The draft law itself is being negotiated between the EU Commission, the European Parliament and the member states in the Council of Ministers. In this context, the German government can have a deciding influence in the Council of Ministers.
And, at the very least, the German government wants the removal of client-side scanning, i.e. the examination of communications content on end devices, from the proposal.
So, if the EU Commission ratifies this proposal, the German government likely won’t enforce it. In fact, it will probably challenge the law in Europe’s courts, which will almost certainly find it a violation of rights guaranteed by other EU laws. This is a losing proposal for several reasons, but especially on a continent where this same commission has created sweeping privacy protections for European residents. It can’t just undo those because it wants to solve a problem it didn’t consider while erecting those other privacy protections.
Now that an entire country has rejected client-side scanning, the EU Commission needs to go back to the drawing board. Yes, CSAM is a problem that needs to be addressed. But it simply can’t be solved by turning everyone accessing the internet into a suspect.
Filed Under: client side scanning, csam, eu, gchq, germany, privacy, security, surveillance
New Book Says NSA Pressured GCHQ To Shut Down Publication Of Snowden Leaks By UK Journalists
from the unexpected-but-also,-sadly,-unsurprising dept
A new book written by journalist Richard Kerbaj, detailing the history of the so-called “Five Eyes” surveillance collaboration between the NSA and surveillance agencies in the UK, Australia, Canada, and New Zealand, is revealing a few more postscripts to the Ed Snowden story.
Snowden’s first leak appeared nearly a decade ago. Since then, spy agencies have been reformed, sued, discussed heavily, and, ultimately, emerged largely unscathed.
The new book contains a couple of revelations that don’t appear to have been published previously. Perhaps the most shocking (but maybe not all that shocking) is that the NSA applied pressure to its UK counterpart in hopes of preventing UK journalists from committing journalism.
The US National Security Agency (NSA) tried to persuade its British counterpart to stop the Guardian publishing revelations about secret mass data collection from the NSA contractor, Edward Snowden, according to a new book.
Sir Iain Lobban, the head of Government Communications Headquarters (GCHQ), was reportedly called with the request in the early hours of 6 June 2013 but rebuffed the suggestion that his agency should act as a censor on behalf of its US partner in electronic spying.
The head of GCHQ felt comfortable rejecting the NSA’s request to somehow stop publication of the first Snowden leaks. But it wasn’t so resistant a few weeks later, when its own government apparently talked it into showing up at The Guardian’s offices and forcing employees to destroy hard drives that supposedly contained leaked NSA documents.
What’s not shocking about this is that the NSA would likely have done anything to stop the leaks from being published, especially if it could persuade a third party located in a different country to apply the pressure. That would free it from legal liability and allegations of rights violations, and make another spy agency look like the one that couldn’t handle the pressure and pulled the trigger on an outrageous attempt to save itself at the expense of journalistic freedom.
The NSA’s supreme self-interest is further exposed in the book. NSA officials kept the agency’s closest so-called “partner” in the dark about the source of the leaks, allowing GCHQ to find out the name of the source at the same time the rest of us not employed by the NSA found out.
Kerbaj reports that the US-UK intelligence relationship was further strained when the head of the NSA, Gen Keith Alexander, failed to inform Lobban that the Americans had identified Snowden, a Hawaii-based government contractor, as the source of the stories, leaving the British agency investigating its own ranks in the search for the leaker. GCHQ did not discover Snowden’s identity until he went public in a Guardian interview.
Yikes. Apparently, the NSA thought this was the best solution to its own problem. Making matters worse, Ed Snowden’s outing of himself further enraged GCHQ officials, who could not believe a mere government contractor (rather than an official NSA employee) had access to this wealth of classified information.
Despite their differences, the spy agencies remain united. They both agree the public shouldn’t know any more than they’re willing to officially release about spy programs that inadvertently or deliberately target citizens. They will both continue to go on joint fishing expeditions, pulling communications and data from offshore cables to remain out of reach of local laws. And presumably, they both still agree Snowden is the actual villain here, no matter how often they’ve ignored rights and regulations to engage in spying. But hopefully they both realize history will ultimately vindicate Snowden while the jury remains out on the effectiveness of counterterrorism programs that involve dragnet collections.
Filed Under: censorship, ed snowden, gchq, keith alexander, mass surveillance, nsa
Two GCHQ Employees Suggest The Solution To CSAM Distribution Is… More Client-Side Scanning
from the offloading-law-enforcement-work-to-the-private-sector dept
The font of not-great ideas continues to overflow at Lawfare. To be fair, this often-overflowing font is due to its contributors, who include current and former members of spy agencies that have violated rights, broken laws, and otherwise done what they can to make internet communications less secure.
We’ve heard from these contributors before. Ian Levy and Crispin Robinson are both GCHQ employees. A few years ago, as companies like Facebook started tossing around the idea of end-to-end encryption, Levy and Robinson suggested a workaround that would have done the same amount of damage as mandated backdoors, even if the pitch was slightly different than the suggestions offered by consecutive FBI directors.
What was suggested then was some sort of parallel communication network that would allow spies and law enforcement to eavesdrop on communications. The communications would still be encrypted. It’s just that the “good guys” would have their own encrypted channel to listen in on these communications. Theoretically, communications would still be secure, unable to be accessed by criminals. But opening a side door is not a whole lot different from opening a back door. A blind CC may be a bit more secure than undermining encryption entirely, but it still opens up another communication channel, one that might be left open and unguarded by interceptors who would likely consider whatever bad things result from that acceptable because (lol) spy agencies only target dangerous enemies of the state.
The pair are back at it. In this post for Lawfare, Levy and Robinson suggest a “solution” that has already been proposed (and discarded) by the company that attempted it first: Apple. The “solution” is apparently trivially easy to exploit and prone to false positives/negatives, but that isn’t stopping these GCHQ reps from suggesting it be given another spin.
According to the paper [PDF] published by these two GCHQ employees, the key to fighting CSAM (child sexual abuse material) in the era of end-to-end encryption is… more client-side scanning of content. And it goes beyond matching local images to known hashes stored by agencies that combat the sexual exploitation of children.
For example, one of the approaches we propose is to have language models running entirely locally on the client to detect language associated with grooming. If the model suggests that a conversation is heading toward a risky outcome, the potential victim is warned and nudged to report the conversation for human moderation. Since the models can be tested and the user is involved in the provider’s access to content, we do not believe this sort of approach attracts the same vulnerabilities as others.
Well, no vulnerabilities except for the provider’s access to what are supposed to be end-to-end encrypted communications. If this is the solution, the provider may as well not offer encryption at all, since it apparently won’t actually be encrypted at both ends. The provider will have access to the client side in some form, which opens a security hole that would not be present otherwise. The only mitigating factor is that the provider will not have its own copy of the communications. And if it doesn’t have that, of what use is it to law enforcement?
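For context, the baseline technique the paper builds on, matching local content against a database of known hashes, can be sketched in a few lines. This is a deliberately simplified illustration (the hash value and database are hypothetical); real deployments use perceptual hashes such as PhotoDNA so that re-encoded or slightly altered copies still match, whereas the plain SHA-256 shown here only matches byte-identical files:

```python
import hashlib

# Hypothetical database of hashes of known illegal images, as would be
# maintained by child-protection organizations. Illustrative entry only:
# this happens to be SHA-256 of b"test".
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_outgoing(data: bytes) -> bool:
    """Return True if the content matches a known hash (i.e. would be flagged)."""
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_HASHES

# A one-byte change defeats exact matching entirely, which is one reason
# critics note such systems are prone to false negatives.
assert scan_outgoing(b"test")       # matches the entry above
assert not scan_outgoing(b"test!")  # trivially evaded
```

The same brittleness cuts the other way with perceptual hashing: loosening the match to catch altered copies is what introduces the false positives and adversarial collisions discussed below.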
The proposal (which the authors note is not to be viewed as representative of GCHQ or the UK government) operates largely on faith.
[W]e believe that a robust evidence-based approach to this problem can lead to balanced solutions that ensure privacy and safety for all. We also believe that a framework for evaluating the benefits and disbenefits is needed.
“Disbenefits” is a cool word. A more intellectually honest writer might use a word like “drawback” or “flaw” or “negative side effects.” But that’s the Newspeak offered by the two GCHQ employees.
The following sentence makes it clear the authors don’t know whether any of their proposals will work, nor how many [cough] disbenefits they will cause. Just spitballing, I guess, but with the innate appeal to authority that comes from their positions and a tastefully-formatted PDF.
We don’t provide one in this paper but note that the U.K.’s national Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN) is doing so as part of the U.K. government’s Safety Tech Challenge Fund, although this will require interpretation in the context of national data protection laws and, in the U.K., guidance from the Information Commissioner’s Office.
[crickets.wav]
The authors do admit client-side scanning (whether of communications or content) is far from flawless. False negatives and false positives will be an ongoing problem. The system can easily be duped into ok’ing CSAM. That’s why they want to add client-side scanning of written communications to the mix, apparently in hopes that a combination of the two will reduce the “disbenefits.”
Supposedly this can be accomplished with tech magic crafted by people nerding harder and a system of checks and balances that will likely always remain theoretical, even if it’s hard-coded into moderation guidelines and law enforcement policies.
For example, offenders often send existing sexually explicit images of children to potential victims to try to engender trust (hoping that victims reciprocate by sending explicit images of themselves). In this case, there is no benefit whatsoever in an offender creating an image that is classified as child abuse material (but is not), since they are trying to affect the victim, not the system. This weakness could also be exploited by sending false-positive images to a target, hoping they are somehow investigated or tracked. This is mitigated by the reality of how the moderation and reporting process works, with multiple independent checks before any referral to law enforcement.
This simply assumes such “multiple independent checks” exist or will exist. They may not. It may be policy for tech companies to simply forward everything questionable to law enforcement and allow the “pros” to sort it out. That “solution” is easiest for tech companies and since they’ll be operating in good faith, legal culpability for adverse law enforcement reactions will be minimal.
That assumptive shrug that robust policies exist, will exist, or will be followed thousands of times a day leads directly into another incorrect assumption: that harm to innocent people will be mitigated because of largely theoretical checks and balances on both ends of the equation.
The second issue is that there is no way of proving which images a client-side scanning algorithm is seeking to detect, leaving the possibility of “mission creep” where other types of images (those not related to child sexual abuse) are also detected. We believe this is relatively simple to fix through a small change to how the global child protection non-governmental organizations operate. We would have a consistent list of known bad images, with cryptographic assurances that the databases contain only child sexual abuse images that can be attested to publicly and audited privately. We believe these legitimate privacy concerns can be mitigated technically and the legal and policy challenges are likely harder, but we believe they are soluble.
The thing is, we already have a “consistent list of known bad images.” If we’re not already doing the other things in that sentence (a verifiable database that can be “attested to publicly and audited privately”), then the only thing more client-side content scanning can do is produce more false positives and negatives. Again, the authors assume these things are already in place. And they use these assumptions to buttress their claims that the “disbenefits” will be limited by what they assume will happen (“multiple independent checks”) or assume has already happened (an independently verifiable database of known CSAM images).
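The paper never specifies how those “cryptographic assurances” would work, but a common way to make a list publicly attestable and privately auditable is a Merkle-tree commitment: the NGO publishes a single root hash, and anyone holding the full list can recompute it. A minimal sketch, with invented list contents purely for illustration:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root committing to an ordered list of image hashes."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if odd
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# The NGO publishes the root; auditors holding the full list recompute it.
db = [b"hash-of-image-1", b"hash-of-image-2", b"hash-of-image-3"]
published_root = merkle_root(db)

# Any silent addition to the database changes the root, so "mission creep"
# is detectable by anyone comparing against the published commitment.
assert merkle_root(db + [b"hash-of-something-else"]) != published_root
```

Note what this does and doesn’t buy: a commitment proves the list hasn’t changed silently, but auditors still need independent access to the underlying (highly restricted) material to verify what the hashes actually represent, which is exactly the legal and policy problem the authors wave away as “soluble.”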
That’s a big ask. The other big ask is the paper proposes the private sector do all of the work. Companies will be expected to design and implement client-side scanning. They will be expected to hire enough people to provide human backstops for AI-guided flagging of content. They will need to have specialized personnel in place to act as law enforcement liaisons. And they will need to have solid legal teams in place to deal with the blowback (I’m sorry, “disbenefits”) of false positives and negatives.
If all of this is in place, and law enforcement doesn’t engage in mission creep, it will work the way the authors suggest it will work: a non-encryption-breaking solution to the distribution of CSAM via end-to-end encrypted communications platforms. To be fair, the paper does admit all the pieces need to come together to make this work. But the proposal raises far more questions than it answers. And yet the authors seem to believe it will work simply because it’s possible.
Through our research, we’ve found no reason as to why client-side scanning techniques cannot be implemented safely in many of the situations society will encounter. That is not to say that more work is not needed, but there are clear paths to implementation that would seem to have the requisite effectiveness, privacy, and security properties.
It’s still a long way from probable, though. And that’s not even in the same neighborhood as theoretically possible if everything else goes right.
Filed Under: client side scanning, csam, encryption, gchq, surveillance
Europe's Human Rights Court Says UK Mass Surveillance Violated Rights, Unlawfully Obtained Journalists' Communications
from the leaking-works dept
Another court case prompted by the Snowden leaks has reached its conclusion. And the findings are that Snowden’s revelations were accurate: the NSA’s Five Eyes partners were breaking laws and ignoring people’s rights when engaging in mass surveillance. That’s just a natural side effect of grabbing communications and data in bulk and pretending it’s lawful if you sort through it after you’ve already acquired it.
The UK spy agency GCHQ’s methods for bulk interception of online communications violated the right to privacy and the regime for collection of data was unlawful, the grand chamber of the European court of human rights has ruled.
In what was described as a “landmark victory” by Liberty, one of the applicants, the judges also found the bulk interception regime breached the right to freedom of expression and contained insufficient protections for confidential journalistic material but said the decision to operate a bulk interception regime did not of itself violate the European convention on human rights.
Another one of the claimants — the Bureau of Investigative Journalism — says this is a win for journalists all over Europe. The ruling institutes more protections for news gatherers, which should hopefully prevent some of these violations from reoccurring.
The UK government violated the freedom of the press for decades under its mass spying programme and must now seek independent permission to access any confidential journalistic material, the European Court of Human Rights has ruled.
In a significant victory for press freedom, the new protections will apply to all confidential material collected by journalists in their reporting, not just the identity of sources. The judgment covers state authorities across Europe, including intelligence agencies, government departments and the police.
Unfortunately, it takes lots of plaintiffs and millions of dollars just to obtain a common sense ruling that intercepting journalists’ communications and metadata is a violation of rights. It also takes whistleblowers willing to come forward and expose what they’ve seen while working for surveillance agencies. It’s very unlikely a case would have been brought — much less won — without the Snowden leaks. This legal process began in 2013, shortly after Snowden started releasing material to journalists. And some of the recipients of these leaks saw themselves attacked and surveilled by their governments for reporting on these documents.
From now on, any surveillance targeting journalists or inadvertent collection of their communications and data must be run by an independent entity before it can be used in investigations or prosecutions. The spy agencies must prove their interests are greater than the public’s interest in whatever the journalists are reporting on. They also must show there’s no other way to obtain this same information without utilizing data and communications obtained from journalists. Yes, this may encourage parallel construction and other data laundering, but at least it should deter the direct targeting of journalists.
There may be more favorable rulings in the future. One of the complainants says this win will allow some of its other legal challenges against mass surveillance to move forward. And some of the 17(!) dissenting judges said this ruling — while a landmark decision — does not go far enough to protect journalists and other innocent people from bulk surveillance.
Then there’s the UK’s “Snooper’s Charter” — one that’s been in the works for nearly a half-decade — which would expand surveillance powers in the UK. The UK exited the European Union, making it unclear what effect this decision would have on domestic surveillance. And the UK government’s ongoing war on encryption makes it clear many of those currently writing and approving legislation don’t really care if rights are violated — not if those rights stand in the way of law enforcement investigations and vague national security interests.
But it’s still a significant win. While it may do little for UK journalists now or in the future, it does erect additional protections for journalists located elsewhere in Europe. And it shows whistleblowing works and, indirectly, why far too many governments have decided whistleblowers are threats to be eradicated, rather than protected.
Filed Under: ed snowden, eu, european court of human rights, gchq, human rights, snowden leaks, surveillance, uk
GCHQ Propose A 'Going Dark' Workaround That Creates The Same User Trust Problem Encryption Backdoors Do
from the wiretaps-but-for-Whatsapp dept
Are we “going dark?” The FBI certainly seems to believe so, although its estimation of the size of the problem was based on extremely inflated numbers. Other government agencies haven’t expressed nearly as much concern, even as default encryption has spread to cover devices and communications platforms.
There are solutions out there, if it is as much of a problem as certain people believe. (It really isn’t… at least not yet.) But most of these solutions ignore workarounds like accessing cloud storage or consensual searches in favor of demanding across-the-board weakening/breaking of encryption.
A few more suggestions have surfaced over at Lawfare. The caveat is that both authors, Ian Levy and Crispin Robinson, work for GCHQ. So that should give you some idea of which shareholders are being represented in this addition to the encryption debate.
The idea (there’s really only one presented here) isn’t as horrible as others suggested by law enforcement and intelligence officials. But that doesn’t mean it’s a good one. And there’s simply no way to plunge into this without addressing an assertion made without supporting evidence towards the beginning of this Lawfare piece.
Any functioning democracy will ensure that its law enforcement and intelligence methods are overseen independently, and that the public can be assured that any intrusions into people’s lives are necessary and proportionate.
By that definition, the authors’ home country is excluded from the list of “functioning democracies.” Multiple rulings have found GCHQ’s surveillance efforts in violation of UK law. And a number of leaks over the past half-decade have shown its oversight is mostly ornamental.
The same can be said for the “functioning democracy” on this side of the pond. Leaked documents and court orders have shown the NSA frequently ignores its oversight when not actively hiding information from Congress, the Inspector General, and the FISA court. Oversight of our nation’s law enforcement agencies is a patchwork of dysfunction, starting with friendly magistrates who care little about warrant affidavit contents and ending with various police oversight groups that are either filled with cops or cut out of the process by the agencies they nominally oversee. We can’t even get a grip on routine misconduct, much less ensure “necessary and proportionate intrusions into people’s lives.”
According to the two GCHQ reps, there’s a simple solution to eavesdropping on encrypted communications. All tech companies have to do is keep targets from knowing their communications are no longer secure.
In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved – they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.
We’re not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we’re normally talking about suppressing a notification on a target’s device, and only on the device of the target and possibly those they communicate with. That’s a very different proposition to discuss and you don’t even have to touch the encryption.
Suppressing notifications might be less harmful than key escrow or backdoors. It wouldn’t require a restructuring of the underlying platform or its encryption. If everything is in place — warrants, probable cause, exhaustion of less intrusive methods — it could give law enforcement a chance to play man-in-the-middle with targeted communications.
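To see why this is structural rather than incidental, consider a toy model of a provider-run group chat (the class and names here are invented for illustration; real messengers negotiate keys with far more ceremony). Because the provider controls the identity system, nothing but policy stops it from adding a member whose join notification is simply never delivered:

```python
class GroupChat:
    """Toy model: the provider controls the member list and join notices."""

    def __init__(self):
        self.members = []          # everyone who receives message keys
        self.visible_members = []  # everyone users are *told* about

    def add_member(self, name: str, notify: bool = True):
        self.members.append(name)
        if notify:
            self.visible_members.append(name)

    def deliver(self, message: str) -> dict:
        # Each member gets a copy keyed to them (encryption elided here);
        # the end-to-end property is never "broken", just redirected.
        return {m: f"<encrypted for {m}> {message}" for m in self.members}

chat = GroupChat()
chat.add_member("alice")
chat.add_member("bob")
chat.add_member("intercept", notify=False)  # the silent extra "end"

# Alice and Bob see a two-person chat...
assert chat.visible_members == ["alice", "bob"]
# ...but every message is also keyed to the hidden participant.
assert "intercept" in chat.deliver("hello")
```

The single `notify=False` flag is the entire proposal: the cryptography is untouched, and the only thing protecting users is the gap between `members` and `visible_members` never being exploited beyond authorized targets.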
But there’s a downside, one that isn’t referenced in the Lawfare post. If both ends of a conversation are targeted, this may be workable. But what if one of the participants isn’t a target? This leaves them unprotected, because the suppressed notifications wouldn’t inform non-target parties that the conversation is no longer secure. Obviously it wouldn’t do to let anyone a target converses with know things are no longer normal on the target’s end, as it’s likely one of those participants would let the target know they’ve encountered a security warning while talking to them.
In that respect, it is analogous to a wiretap on someone’s phones. It will capture innocent conversations irrelevant to the investigation. In those cases, investigators are told to stop eavesdropping. It’s unclear how the same practice will work when the communications are being harvested digitally via unseen government additions to private conversations.
This proposal seems at odds with the authors’ suggested limitations, especially this one:
Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.
When a service provider starts suppressing warning messages, the trust relationship is going to be fundamentally altered. Even if users are made aware this is only happening in rare instances involving targets of investigations, the fact that their platform provider has chosen to mute these messages means they really can’t trust a lack of warnings to mean everything is still secure.
On the whole, it’s a more restrained solution than others have proposed — but it still has the built-in exploitation avenue key escrow does. It’s better than a backdoor but not by much. And the authors of this proposal shouldn’t pretend the solution lives up to the expectations they set for it. Their own proposal falls short of their listed ideals… and the whole thing is delivered under the false pretense that law enforcement and intelligence agencies are subject to robust oversight.
Filed Under: backdoors, encryption, gchq, going dark, third parties, vulnerabilities
European Court Of Human Rights: UK Surveillance Revealed By Snowden Violates Human Rights
from the well-of-course-it-does dept
Yet another vindication of Ed Snowden. Soon after some of the documents he leaked as a whistleblower revealed that the UK’s GCHQ was conducting mass surveillance, a variety of human rights groups filed complaints with the European Court of Human Rights. It’s taken quite some time, but earlier today the court ruled that the surveillance violated human rights, though perhaps in a more limited way than many people had hoped.
At issue were three specific types of surveillance: bulk interception of communications, sharing what was collected with foreign intelligence agencies, and obtaining communications data (metadata) from telcos. The key part of the ruling was to find that the bulk interception of communications violated Article 8 of the European Convention on Human Rights (roughly, but not exactly, analogous to the US 4th Amendment). It was not a complete victory, as the court didn’t say that bulk interception by itself violated human rights, but that the lack of oversight over how this was done made the surveillance “inadequate.” The court also rejected any claims around GCHQ sharing the data with foreign intelligence agencies.
In short, the court found that bulk interception could fit within a human rights framework if there was better oversight, and that obtaining data from telcos could be acceptable if there were safeguards to protect certain information, such as journalist sources. But the lack of such oversight and safeguards doomed the surveillance activity that Snowden revealed.
Operating a bulk interception scheme was not per se in violation of the Convention and Governments had wide discretion (?a wide margin of appreciation?) in deciding what kind of surveillance scheme was necessary to protect national security. However, the operation of such systems had to meet six basic requirements, as set out in Weber and Saravia v. Germany. The Court rejected a request by the applicants to update the Weber requirements, which they had said was necessary owing to advances in technology.
The Court then noted that there were four stages of an operation under section 8(4): the interception of communications being transmitted across selected Internet bearers; the using of selectors to filter and discard ? in near real time ? those intercepted communications that had little or no intelligence value; the application of searches to the remaining intercepted communications; and the examination of some or all of the retained material by an analyst.
While the Court was satisfied that the intelligence services of the United Kingdom take their Convention obligations seriously and are not abusing their powers, it found that there was inadequate independent oversight of the selection and search processes involved in the operation, in particular when it came to selecting the Internet bearers for interception and choosing the selectors and search criteria used to filter and select intercepted communications for examination. Furthermore, there were no real safeguards applicable to the selection of related communications data for examination, even though this data could reveal a great deal about a person's habits and contacts.
Such failings meant section 8(4) did not meet the "quality of law" requirement of the Convention and could not keep any interference to that which was "necessary in a democratic society". There had therefore been a violation of Article 8 of the Convention.
The court also found that acquiring data from telcos violated Article 8 as well, for similar reasons.
It first rejected a Government argument that the applicants' application was inadmissible, finding that as investigative journalists their communications could have been targeted by the procedures in question. It then went on to focus on the Convention concept that any interference with rights had to be "in accordance with the law".
It noted that European Union law required that any regime allowing access to data held by communications service providers had to be limited to the purpose of combating "serious crime", and that access be subject to prior review by a court or independent administrative body. As the EU legal order is integrated into that of the UK and has primacy where there is a conflict with domestic law, the Government had conceded in a recent domestic case that a very similar scheme introduced by the Investigatory Powers Act 2016 was incompatible with fundamental rights in EU law because it did not include these safeguards. Following this concession, the High Court ordered the Government to amend the relevant provisions of the Act. The Court therefore found that as the Chapter II regime also lacked these safeguards, it was not in accordance with domestic law as interpreted by the domestic authorities in light of EU law. As such, there had been a violation of Article 8.
Both of those elements also ran afoul of Article 10’s protection of free expression because journalists’ communications had been swept up in the bulk data collection:
In respect of the bulk interception regime, the Court expressed particular concern about the absence of any published safeguards relating both to the circumstances in which confidential journalistic material could be selected intentionally for examination, and to the protection of confidentiality where it had been selected, either intentionally or otherwise, for examination. In view of the potential chilling effect that any perceived interference with the confidentiality of journalists' communications and, in particular, their sources might have on the freedom of the press, the Court found that the bulk interception regime was also in violation of Article 10.
When it came to requests for data from communications service providers under Chapter II, the Court noted that the relevant safeguards only applied when the purpose of such a request was to uncover the identity of a journalist's source. They did not apply in every case where there was a request for a journalist's communications data, or where collateral intrusion was likely. In addition, there were no special provisions restricting access to the purpose of combating "serious crime". As a consequence, the Court also found a violation of Article 10 in respect of the Chapter II regime.
On the final issue of passing on the info to foreign intelligence agencies, the court didn’t find any human rights issues there:
The Court found that the procedure for requesting either the interception or the conveyance of intercept material from foreign intelligence agencies was set out with sufficient clarity in the domestic law and relevant code of practice. In particular, material from foreign agencies could only be searched if all the requirements for searching material obtained by the UK security services were fulfilled. The Court further observed that there was no evidence of any significant shortcomings in the application and operation of the regime, or indeed evidence of any abuse.
It would have been nice if there was more of a blanket recognition of the problems of bulk interception and mass surveillance. Unfortunately, the court didn't go that far. But at the very least this has to be seen as a pretty massive vindication of Snowden's whistleblowing on the lack of oversight to protect privacy and the lack of safeguards around the information telcos shared with the government.
Filed Under: bulk collection, echr, ed snowden, european court of human rights, gchq, human rights, mass surveillance, surveillance
UK Tribunal Says GCHQ Engaged In Illegal Telco Collection Program For More Than A Decade
from the eleven-years,-zero-accountability dept
UK’s NSA — GCHQ — has lost legal battle after legal battle in recent years, most of those triggered by the Snowden leaks. The UK Appeals Court ruled its bulk collection of internet communications metadata illegal earlier this year. This followed a 2015 loss in a lawsuit filed over the interception of privileged communications, resulting in a destruction order targeting everything collected by GCHQ that fell under that heading.
Some battles are still ongoing, with several of them spearheaded by Privacy International. PI’s work — and multiple lawsuits — have led to the exposure of GCHQ’s oversight as completely toothless and a declaration that the agency’s surveillance agreement with the NSA was illegal… at least up to 2014’s codification of illegal spy practices. (This codification was ultimately ruled illegal earlier this year.)
Thanks to another PI legal challenge, the Investigatory Powers Tribunal has found GCHQ engaged in even more illegal spying… for more than a decade. The expansion of surveillance powers following the September 11, 2001 terrorist attacks gave GCHQ more ways to collect data from telcos. This was supposed to be directed and overseen by the UK Foreign Secretary, but the lawsuit showed the oversight did nearly nothing and there were virtually no limits to what could be collected from phone companies.
The Investigatory Powers Tribunal (IPT) – set up to investigate complaints about how personal data is handled by public bodies – ruled that most of the directions given between 2001 and 2012 had been unlawful.
The tribunal was critical of the way the government handed on requests to GCHQ, partly because phone and internet providers “would not be in any position to question the scope of the requirement” because they “would have no knowledge of the limited basis upon which the direction had been made”.
That being said, the IPT also somehow came to the conclusion GCHQ had never abused the apparently illegal privilege. It had “carte blanche” power to demand data, but the IPT saw “no evidence” it had collected more than the Foreign Secretary had approved. But that’s not all that heartening (or convincing) considering the Foreign Secretary had delegated that responsibility to GCHQ, allowing the agency to determine what it needed without input or oversight.
Supposedly everything is all better now with the rules put in place in 2014. The Data Retention and Investigatory Powers Act of 2014 was a weak attempt at surveillance reforms following the steady stream of leaked documents triggered by Snowden in the summer of 2013. So weak were the reforms, the EU Court declared the act incompatible with EU law, making GCHQ’s collection efforts targeting other Europeans illegal everywhere but the UK.
The collection power was illegal, at least under previous versions of the UK’s Snoopers’ Charter. Whether or not GCHQ wrote itself blank collection checks with the signed checkbook handed to it by the Foreign Secretary is still an open question, despite the court’s determination. It’s not like surveillance agencies have ever hidden questionable collections from their oversight or found a way to avoid delivering incriminating evidence against themselves until years after issues were first contested.
Filed Under: gchq, illegal surveillance, surveillance, telco collection, uk tribunal
UK Appeals Court Says GCHQ's Mass Collection Of Internet Communications Is Illegal
from the of-course,-when-you're-the-government,-you-just-have-the-laws-changed dept
The UK’s mass surveillance programs haven’t been treated kindly by the passing years (2013-onward). Ever since Snowden began dumping details on GCHQ surveillance, legal challenges to the lawfulness of UK bulk surveillance have been flying into courtrooms. More amazingly, they’ve been coming out the other side victorious.
In 2015, a UK tribunal ruled GCHQ had conducted illegal surveillance and ordered it to destroy intercepted communications between detainees and their legal reps. In 2016, the UK tribunal declared GCHQ’s bulk collection of communications metadata illegal. However, the tribunal did not order destruction of this collection, meaning GCHQ is likely still making use of illegally-collected metadata.
A second loss in 2016 — this time at the hands of the EU Court of Justice — saw GCHQ’s collection of European communications declared illegal due to the “indiscriminate” (untargeted) nature of the collection process. The UK government appealed this decision, taking the ball back to its home court. And, again, it has been denied a victory.
The court of appeal ruling on Tuesday said the powers in the Data Retention and Investigatory Powers Act 2014, which paved the way for the snooper’s charter legislation, did not restrict the accessing of confidential personal phone and web browsing records to investigations of serious crime, and allowed police and other public bodies to authorise their own access without adequate oversight.
The three judges said Dripa was “inconsistent with EU law” because of this lack of safeguards, including the absence of “prior review by a court or independent administrative authority”.
Hey, the elimination of privacy safeguards is just the price that has to be paid when the nation’s security can only be guaranteed by rushed, liberty-violating legislation dropped onto the floor shortly before closing time. If power is going to be consolidated, it needs to be done with as little debate as possible. Built-in safeguards for citizens’ privacy are something that can be relegated to an afterthought. And that afterthought need never be brought up again.
Those powers — granted by DRIPA — have been declared illegal. That’s going to cause problems for the Snooper’s Charter, which is DRIPA’s surveillance state successor. Chances are the problem will be dealt with by erecting a few minimal privacy protections while codifying prior surveillance abuses. And since this only upholds an EU court decision, it will mean less than nothing once Britain completes its exit from the Union.
The good news is the court’s decision backs up what critics have been saying for years: bulk interception of communications violates UK law, and the supposed oversight these collections receive falls far short of what’s required to make the collections legal again.
Filed Under: cjeu, dripa, gchq, mass surveillance, privacy, surveillance, uk
GCHQ Knew FBI Wanted To Arrest MalwareTech, Let Him Fly To The US To Be Arrested There
from the so-much-for-those-'flight-risk'-fears dept
It looks like the UK found an easy way to avoid another lengthy extradition battle. Its intelligence agency, GCHQ, knew something security researcher Marcus Hutchins didn’t — and certainly didn’t feel obliged to tell him. Not only that, but it let a criminal suspect fly out of the country with zero pre-flight vetting. (Caution: registration wall ahead.)
Officials at the intelligence agency knew that Marcus Hutchins, from Devon, who was hailed as a hero for helping the NHS, would be walking into a trap when he flew to the US in July for a cyber-conference.
Hutchins’s arrest by the FBI on August 2 while he was returning from Las Vegas freed the British government from the “headache of an extradition battle” with their closest ally, say sources familiar with the case.
Certainly no one expected GCHQ to give Hutchins a heads-up on the legal troubles awaiting him on the other side of the pond, but there’s something a bit mean-spirited about allowing a UK citizen to walk into custody in another country. And as for the “headache,” too bad. That’s just part of the deal when you make promises to other countries you’ll ship them your citizens to face an uphill battle in an unfamiliar judicial system while facing charges for laws that may not apply the same way — or as harshly — at home.
This is even more disconcerting given that it was Hutchins who was instrumental in killing off the WannaCry ransomware that wreaked havoc pretty much everywhere earlier this year. In gratitude for his efforts, a few publications outed the person behind the “MalwareTech” pseudonym, which probably made it a bit easier to tie Hutchins to various online personas.
As Marcy Wheeler pointed out on Twitter, it works out pretty well for the UK. It gets to outsource its prosecutions to a nation where punishments for malicious hacking are much, much higher. It also gets to dodge the publicity black eye of handing over its (inadvertent) WannaCry hero to the feds and their threat of a few decades in jail. It also suggests the Five Eyes partnership is paying off in questionable ways and, sooner or later, it’s going to be an American citizen walking into the same sort of trap overseas.
Filed Under: doj, extradition, fbi, gchq, malwaretech, marcus hutchins, uk
Aussie Prime Minister Says The Laws Of Math Don't Apply In Australia When It Comes To Encryption
from the good-luck-with-that,-mate dept
Oh boy. It’s no secret that the Australian government — led by George Brandis (who has made it abundantly clear he has no clue what a VPN is or what metadata is) — is pushing strongly for mandated backdoors to encryption. At this point, it’s beating a dead horse, but this is a very, very bad idea for a whole host of reasons — mainly having to do with making absolutely everyone significantly less safe.
And it appears that Brandis’ ignorance has moved up the chain of command. Australian Prime Minister Malcolm Turnbull has now put out what may be the single dumbest statement on encryption yet (and that’s a pretty high bar). After being told yet again that safe encryption backdoors violate basic mathematics, Turnbull became super patriotic about the ability of Australian law to trump mathematics:
“The laws of Australia prevail in Australia, I can assure you of that,” he said on Friday. “The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.”
And, then he pulled out the “nerd harder, nerds” argument:
“I’m not a cryptographer, but what we are seeking to do is to secure their assistance,” Turnbull said. “They have to face up to their responsibility. They can’t just, you know, wash their hands of it and say it’s got nothing to do with them.”
“I am sure they know morally they should. Morally they should.”
So after admitting that he doesn’t understand how this works, he’s saying that the “moral” responsibility of cryptographers — who have basically all told him his plan will make people less safe — is to make people less safe.
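The mathematical point the cryptographers keep making isn't rhetorical flourish. As a toy illustration (deliberately not any real messenger's protocol), a one-time pad shows why: when the key is truly random and as long as the message, the ciphertext alone is statistically independent of the plaintext, so no amount of legal compulsion applied to the ciphertext recovers the message — only possession of the key does. A minimal Python sketch:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # truly random, used once, never reused
ciphertext = xor_bytes(message, key)

# With the key, decryption is trivial (XOR is its own inverse):
recovered = xor_bytes(ciphertext, key)
print(recovered == message)

# Without the key, any same-length plaintext is an equally plausible decryption:
wrong = xor_bytes(ciphertext, secrets.token_bytes(len(message)))
print(wrong)  # indistinguishable from random noise
```

Real messengers use practical ciphers rather than one-time pads, but the asymmetry is the same: the security rests on the secrecy of the key material, not on anyone's goodwill, and a law cannot legislate that dependency away without mandating access to the keys themselves.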
Turnbull seems to think he can get around the whole problem by… semantics. You see, if we just redefine things and say we’re not asking for “backdoors” then it’s fine:
“A back door is typically a flaw in a software program that perhaps the — you know, the developer of the software program is not aware of and that somebody who knows about it can exploit,” he said. “And, you know, if there are flaws in software programs, obviously, that’s why you get updates on your phone and your computer all the time.”
“So we’re not talking about that. We’re talking about lawful access.”
That bit of word salad suggests that at least a tiny smidgen of actual knowledge made it into his brain. A backdoor is an exploit. But “lawful access” is a backdoor. Pretending they are different suggests a fairly staggering level of ignorance.
Not to be outdone, Brandis then took his own turn at the podium to spew more ignorance:
Asked how Australia’s proposed regime would allow local authorities to read messages sent with either WhatsApp or Signal, Brandis said “Last Wednesday I met with the chief cryptographer at GCHQ … And he assured me that this was feasible.”
Right. It’s pretty well known that intelligence communities can frequently hack into things to get messages, but not because of backdoors to encryption but through other flaws. This includes things like keyloggers or other spyware that effectively route around the encryption. But that’s entirely different than demanding backdoors. And, of course, this all comes about a week after GCHQ’s own former boss argued that attacking the end points was a better strategy than backdoors. It’s almost certain that what GCHQ told Brandis is that they can be pretty successful in attacking those endpoints, without undermining encryption — and that message got twisted in Brandis’ mind to believe that it meant that there were already backdoors in WhatsApp and Signal (there are not).
This whole thing is a somewhat tragic comedy of errors with completely clueless politicians making policy badly, potentially putting everyone at risk… while astoundingly claiming that laws can trump basic mathematics. What a joke.
Filed Under: australia, encryption, gchq, george brandis, going dark, malcolm turnbull, math, nerd harder
Companies: signal, whatsapp