
Nevada Is In Court This Morning Looking To Get A Temporary Restraining Order Blocking Meta From Using End-To-End Encryption

from the protect-encryption-now dept

There have been plenty of silly lawsuits against tech companies over the last few years, but a new one from Nevada against Meta may be the craziest — and most dangerous — we’ve seen so far. While heavily redacted, the basics fit the pattern of all of these lawsuits: vague claims of harms to children from social media, with lots of handwaving and conclusory statements insisting that certain harms are directly traceable back to choices Meta made (despite a near total lack of evidence to support those claims).

But, rather than go through the many, many, many problems of the lawsuit (you can read it yourself at the link above or embedded below), let’s jump ahead to a hearing that is happening today. Nevada has asked the court to issue a temporary restraining order, blocking Meta from using end-to-end encryption on messages, claiming that such encryption is harmful to children.

That sounds hyperbolic, but it’s exactly what’s happening:

With this Motion, the State seeks to enjoin Meta from using end-to-end encryption (also called “E2EE”) on Young Users’ Messenger communications within the State of Nevada. 1 This conduct—which renders it impossible for anyone other than a private message’s sender and recipient to know what information the message contains—serves as an essential tool of child predators and drastically impedes law enforcement efforts to protect children from heinous online crimes, including human trafficking, predation, and other forms of dangerous exploitation. Under such circumstances, the Nevada Supreme Court makes clear that to obtain the injunctive relief sought by this Motion, the State need only show “a reasonable likelihood that the statute was violated and that the statute specifically allows injunctive relief.” State ex rel. Off. of Att’y Gen., Bureau of Consumer Prot. v. NOS Commc’ns, Inc., 120 Nev. 65, 69, 84 P.3d 1052, 1055 (2004) (emphasis added). The State’s Complaint is replete with indisputable factual allegations detailing this harm and explaining—with specificity—how Meta’s conduct in this matter violates the Nevada Unfair and Deceptive Trade Practices Act, N.R.S. §§ 598.0903 through 598.0999 (“NDTPA”). And, because the NDTPA expressly authorizes the Attorney General to seek, inter alia, injunctive relief, the State’s Motion should be granted.

It’s no secret that lazy cops like the FBI’s Chris Wray (and, before him, James Comey) have always hated encryption and wanted it banned for making it slightly more difficult to read everyone’s messages. But at least they mostly talked about requiring magic backdoors that would let encryption keep working for normal people while breaking whenever the cops came asking (this is not a thing, of course; any such backdoor breaks the encryption for everyone).
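
To make concrete why that's the case, here's a minimal sketch in Python (using the PyNaCl library; the scenario and names are hypothetical, not any real system's design) of what "exceptional access" amounts to cryptographically: every message also gets encrypted to an escrow key, and whoever holds that one key can read everything.

# Hypothetical key-escrow sketch: a "lawful access" backdoor is just a
# second recipient. Assumes PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, SealedBox

message = b"meet at 6"

bob_key = PrivateKey.generate()     # generated on Bob's device
escrow_key = PrivateKey.generate()  # the single "exceptional access" key

# The sender must encrypt a copy to the escrow key alongside the real one.
to_bob = SealedBox(bob_key.public_key).encrypt(message)
to_escrow = SealedBox(escrow_key.public_key).encrypt(message)

# Bob reads his copy, as intended.
assert SealedBox(bob_key).decrypt(to_bob) == message

# But whoever obtains the escrow key (cops, hackers, hostile governments)
# reads every message ever sent this way. One leak breaks it for everyone.
assert SealedBox(escrow_key).decrypt(to_escrow) == message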

Here, the state of Nevada is literally just saying “fuck it, ban all encryption, because it might make it harder for us to spy on people.”

The TRO request is full of fearmongering language. I mean:

And, as of December 2023, Meta reconfigured Messenger to make E2EE—child predators’ main preferred feature—the default for all communications.

The TRO request also more or less admits that Nevada cops are too fucking lazy to go through basic due process, and the fact that the 4th Amendment, combined with encryption, means they have to take an extra step to spy on people is simply a bridge too far:

As set forth in the Declaration Anthony Gonzales, the use of end-to-end encryption in Messenger makes it impossible to obtain the content of a suspect’s (or defendant’s) messages via search warrant served on Meta. See Ex. 2 (Gonzales Decl.) at ¶¶ 9-16. Instead, investigators are only able to obtain “information provided [that] has been limited to general account information about a given suspect and/or metadata and/or log information about the Messenger communications of that suspect.” Id. at ¶ 14. Once again, this is the equivalent of trying to divine the substance of a letter between two parties by only using the visible information on the outside of a sealed envelope.

Instead, the State is forced to try to obtain the device that the suspect used to send communications via Messenger—which itself requires separate legal process—and then attempt to forensically extract the data using sophisticated software. See Ex. 1 (Defonseka Decl.) at ¶¶ 5-8. Even this time-consuming technique has its limits. For example, it is not possible to obtain the critical evidence if the device is “locked,” or if the suspect has deleted data prior to relinquishing his phone. Id. at ¶ 8; see also Ex. 2 (Gonzales Decl.) at ¶ 19 (describing commonplace “destruction of the evidence sought by investigators” when trying to acquire Messenger communications).

Just because you’re a cop does not mean you automatically get access to all communications.

As for the actual legal issues at play, the state claims that Meta using encryption to protect everyone is a “deceptive trade practice.” I shit you not. Apparently Nevada has a newish state law (from 2022) that makes it an additional crime to engage in “unlawful use of encryption.” And the state’s argument is that because Meta has turned on encryption for messages, and some people may use that encryption to commit crimes, Meta has engaged in a deceptive trade practice by enabling the unlawful use of encryption. Really.

As a threshold matter, the State alleges that Meta “willfully committed . . . deceptive trade practices by violating one or more laws relating to the sale or lease of goods or services” in violation of NRS § 598.0923(1)(c). Compl. ¶ 473. Nevada law states that “[a] person shall not willfully use or attempt to use encryption, directly or indirectly, to: (a) Commit, facilitate, further or promote any criminal offense; (b) Aid, assist or encourage another person to commit any criminal offense; (c) Conceal the commission of any criminal offense; (d) Conceal or protect the identity of a person who has committed any criminal offense; or (e) Delay, hinder or obstruct the administration of the law.”…. This amounts to both direct and indirect aiding and abetting of child predators, via the use of E2EE, in violation of NRS § 205.486(1)(a)-(d). And, as demonstrated in the Gonzales Declaration, Meta knows that E2EE drastically limits the ability of law enforcement to obtain critical evidence in their investigations—namely, the substance of a suspect’s Messenger communications—which is in violation of NRS § 205.486(1)(e).

Furthermore, Nevada claims that Meta engaged in deceptive trade practices by promoting encryption as a tool to keep people safer.

Meta “represent[ed] that Messenger was safe and not harmful to Young Users’ wellbeing when such representations were untrue, false, and misleading…..

Similarly, Meta publicly touted its use of end-to-end encryption as a positive for users, meant to protect them from harm—going so far as to call it an “extra layer of security” for users

This is a full-on attack on encryption. If Nevada succeeds here, then it’s opening up courts across the country to outlaw encryption entirely. This is a massive, dangerous attack on security and deserves much more attention.

Meta’s response to the motion is worth reading as well, if only for the near exasperation of the company’s lawyers as to why suddenly, now, end-to-end encryption for messaging — a technology that has been available for many, many years — has become so scary and so problematic that it needs to be stopped immediately.

Meta Platforms, Inc. (“Meta”) has offered end-to-end encryption (“E2EE”) as an option on its Messenger app since 2016. Compl. ¶ 202. E2EE technology is commonplace and has been hailed as “vital” by privacy advocates for protecting users’ communications with each other. The only change Meta made in December 2023 was to announce that the Messenger app would transition all messages to E2EE (rather than an option), id.—which is what Apple iMessage, Signal and numerous other messaging services already do.

These facts completely disprove the State’s assertion that it is entitled to temporary injunctive relief. E2EE has been available as an option on Meta’s Messenger app for eight years, and Meta began rolling out E2EE for all messages on Messenger months ago. The State cannot properly assert that it requires emergency injunctive relief—on two days’ notice—blocking Meta’s use of E2EE, when that feature has been in use on Messenger for years and began to be rolled out for all messages more than two months ago. The State’s delay—for years—to bring any enforcement action related to Meta’s use of E2EE (or other providers’ use of E2EE) demonstrates why its request for the extraordinary relief of a TRO should be denied.

The response also points out that for the state to argue it’s in such a rush to ban Meta from using end-to-end encryption, it sure isn’t acting like it’s in a rush:

The State admits that E2EE has been available as a feature on Messenger for eight years. See Mot. 10 (“Since 2016, Meta has allowed users the option of employing E2EE for any private messages they send via Messenger.” (emphasis added)). On December 6, 2023—ten weeks ago—Meta began making E2EE the standard for all messages on Messenger, rather than a setting to which users could opt in. In doing so, Messenger joined other services, including Apple’s iMessage, which has deployed E2EE as a standard feature since 2011, and FaceTime, for which E2EE has been standard since at least 2013. Yet the State waited until January 20, 2024—six weeks after the new default setting was announced, and eight years after E2EE first became available on Messenger—to file its Complaint. It then inexplicably waited another three weeks to serve Meta with the Complaint. As such, before yesterday, Meta had not even been able to review the full scope of the State’s allegations. Mot. 14. Concurrently with its lengthy Complaint, the State served the present motion, along with two supporting declarations that purport to justify enjoining a practice that was announced two months ago (and was available for years as a nondefault setting and as a feature in other services, such as Apple’s iMessage).

The State’s delays demonstrate the fundamental unfairness of requiring Meta to prepare this Opposition on one day’s notice. There is no emergency that requires this accelerated timetable. Quiroga v. Chen, 735 F. Supp. 2d 1226, 1228 (D. Nev. 2010) (“The temporary restraining order should be restricted to serving its underlying purpose of preserving the status quo and preventing irreparable harm just so long as is necessary to hold a hearing, and no longer.” (cleaned up)). Meta has not been given sufficient time to identify and prepare responses to the myriad assertions and misstatements in the State’s Motion. Moreover, the State apparently seeks to present live testimony from its witnesses. See Mot. at 6. In this unfairly accelerated and truncated timetable, Meta has not been given a fair chance to develop responses to the State’s witnesses, nor to develop and present its own witnesses and evidence. In short, there is no exigency that warrants this highly accelerated and unfairly compressed timetable for Meta’s Opposition to the TRO motion—in contrast to a motion for preliminary injunction that can be noticed, briefed and heard under a reasonable schedule that allows Meta a fair opportunity to be heard.

Meta also points out that Nevada itself recognizes the value of encryption:

Indeed, Nevada law recognizes the value of encryption, requiring data collectors to encrypt personal information. See Nev. Rev. Stat. 603A.215. A seismic shift that would fundamentally challenge the use of E2EE should not be undertaken with a 24-hour turnaround on briefing that does not afford Meta a fair and reasonable opportunity to develop a full response to the State’s arguments.

Nevada’s position here, including the haste with which it is moving (after doing nothing about encryption for years), is astounding, dangerous, and disconnected from reality. Hopefully the court recognizes this.

Filed Under: encryption, end to end encryption, messenger, nevada, tro
Companies: meta

Meta Finally Launches Default End-To-End Encryption In Messenger

from the finally dept

For many, many years we’ve been calling on companies to enable end-to-end encryption by default on any messaging/communications tools. It’s important to recognize that doing so correctly is difficult, but not impossible (similarly, it’s important to recognize that doing so poorly is dangerous, as it will lead people to believe their communications are secure when they are most certainly not).

So, over the years we’ve been hopeful as Meta made moves towards implementing end-to-end encryption in Facebook Messenger. However, over and over during the past decade or so, those working on the issue have told us that while Meta really wants to set it up, the practical realities of doing it correctly are way more complex than most people think. And that’s ignoring the fact that law enforcement, intelligence agencies, and even random shareholders have tried to get Meta to move away from its encryption plans.

And, now, finally, Meta has announced that Facebook Messenger is end-to-end encrypted by default.

Today I’m delighted to announce that we are rolling out default end-to-end encryption for personal messages and calls on Messenger and Facebook, as well as a suite of new features that let you further control your messaging experience. We take our responsibility to protect your messages seriously and we’re thrilled that after years of investment and testing, we’re able to launch a safer, more secure and private service.

Since 2016, Messenger has had the option for people to turn on end-to-end encryption, but we’re now changing private chats and calls across Messenger to be end-to-end encrypted by default. This has taken years to deliver because we’ve taken our time to get this right. Our engineers, cryptographers, designers, policy experts and product managers have worked tirelessly to rebuild Messenger features from the ground up. We’ve introduced new privacy, safety and control features along the way like delivery controls that let people choose who can message them, as well as app lock, alongside existing safety features like report, block and message requests. We worked closely with outside experts, academics, advocates and governments to identify risks and build mitigations to ensure that privacy and safety go hand-in-hand.

The extra layer of security provided by end-to-end encryption means that the content of your messages and calls with friends and family are protected from the moment they leave your device to the moment they reach the receiver’s device. This means that nobody, including Meta, can see what’s sent or said, unless you choose to report a message to us.
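
For readers who want to see what that guarantee means mechanically, here's a deliberately minimal sketch using the PyNaCl library (purely illustrative; Messenger's actual protocol is Signal-based and far more elaborate): the server relays ciphertext that it has no key to open.

# Minimal illustration of the end-to-end property, using PyNaCl (libsodium).
# Conceptual sketch only, not Messenger's actual implementation.
from nacl.public import PrivateKey, Box

# Keys are generated on each user's device; private keys never leave it.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key plus the recipient's public key.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"hello")

# The server only ever handles `ciphertext`; without one of the two private
# keys there is no way to recover the plaintext from it.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
assert plaintext == b"hello"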

It’s extremely rare that I’d offer kudos to Meta, but this is a case where it absolutely deserves it. Even if some of us kept pushing the company to move faster, they did get there, and it looks like they got there by doing it carefully and appropriately (rather than the half-assed attempts of certain other companies).

I am sure that we’ll hear reports of law enforcement and politicians whining about this, but this is an unquestionably important move towards protecting privacy and private communications.

Filed Under: encryption, end-to-end encryption, facebook messenger, messenger
Companies: facebook, meta

Social Responsibility Organization Says Meta’s Embrace Of Encryption Is Important For Human Rights

from the encryption-protects-human-rights dept

Encryption is under attack from all over the world. Australia already has a law on the books trying to force companies to backdoor encryption. The UK is pushing its Online Safety Bill, which would be an attack on encryption (the UK government has made it clear it wants an end to encryption). In the US, we have the EARN IT Act, whose author, Senator Richard Blumenthal, has admitted he sees it as a necessary attack on companies who “hide behind” encryption.

All over the world, politicians and law enforcement officials insist that they need to break encryption to “protect” people. This has always been false. If you want to protect people, you want them to have (and use) encryption.

Against this backdrop, we have Meta/Facebook. While the company has long supported end-to-end encryption in WhatsApp, it’s been rolling it out on the company’s other messaging apps as well. Even if part of the reason for enabling encryption is about protecting itself, getting more encryption out to more people is clearly a good thing.

And now there’s more proof of that. Business for Social Responsibility is a well-respected organization, which was asked by Meta to do a “human rights assessment” of Meta’s expanded use of end-to-end encryption. While the report was paid for by Meta, BSR’s reputation is unimpeachable. It’s not the kind of organization that throws away its reputation because a company paid for some research. The end result is well worth reading, but, in short, BSR notes that the expansion of end-to-end encryption is an important step in protecting human rights.

The paper is thorough and careful, details its methodology, and basically proves what many of us have been saying all along: if you’re pushing to end or diminish end-to-end encryption, you are attacking human rights. The key point:

Privacy and security while using online platforms should not only be the preserve of the technically savvy and those able to make proactive choices to opt into end-to-end encrypted services, but should be democratized and available for all.

The report notes that we’re living in a time of rising authoritarianism, and end-to-end encryption is crucial in protecting people fighting back against such efforts. The report is careful and nuanced, and isn’t just a one-sided “all encryption must be good” kind of thing. It does note that there are competing interests.

The reality is much more nuanced. There are privacy and security concerns on both sides, and there are many other human rights that are impacted by end-to-end encrypted messaging, both positively and negatively, and in ways that are interconnected. It would be easy, for example, to frame the encryption debate not only as “privacy vs. security” but also as “security vs. security,” because the privacy protections of encryption also protect the bodily security of vulnerable users. End-to-end encryption can make it more challenging for law enforcement agencies to access the communications of criminals, but end-to-end encryption also makes it more challenging for criminals to access the communications of law-abiding citizens.

As such, the report highlights the various tradeoffs involved in encrypting more communications, but notes:

Meta’s expansion of end-to-end encrypted messaging will directly result in the increased realization of a range of human rights, and will address many human rights risks associated with the absence of ubiquitous end-to-end encryption on messaging platforms today. The provision of end-to-end encrypted messaging by Meta directly enables the right to privacy, which in turn enables other rights such as freedom of expression, association, opinion, religion, and movement, and bodily security. By contrast, the human rights harms associated with end-to-end encrypted messaging are largely caused by individuals abusing messaging platforms in ways that harm the rights of others—often violating the service terms that they have agreed to. However, this does not mean that Meta should not address these harms; rather, Meta’s relationship to these harms can help identify the types of leverage Meta has available to address them.

The report notes that worries about Meta enabling more bad actors by turning on end-to-end encryption do not seem to be supported by evidence, since bad actors already have a plethora of encrypted communications channels at their disposal:

If Meta decided not to implement end-to-end encryption, the most sophisticated bad actors would likely choose other end-to-end encrypted communications platforms. Sophisticated tech use is increasingly part of criminal tradecraft, and the percentage of criminals without the knowledge and skills to use end-to-end encryption will continue to decrease over time. For this reason, if Meta chose not to provide end-to-end encryption, this choice would likely not improve the company’s ability to help law enforcement identify the most sophisticated and motivated bad actors, who can choose to use other end-to-end encrypted messaging products.

While the report notes that things like child sexual abuse material (CSAM) are a serious issue, focusing solely on scanning everything and trying to block it is not the only (or even the best) way of addressing the issue. Someone should send this to the backers of the EARN IT Act, which is predicated on forcing more companies to scan more communications.

Content removal is just one way of addressing harms. Prevention methods are feasible in an end-to-end encrypted environment, and are essential for achieving better human rights outcomes over time. The public policy debate about end-to-end encryption often focuses heavily or exclusively on the importance of detecting and removing problematic, often illegal content from platforms, whether that be CSAM or terrorist content. Content removal is important, but harm can also be prevented from occurring in end-to-end encrypted messaging through the use of behavioral signals, public platform information, user reports, and metadata to identify and interrupt problematic behavior before it occurs.

The report also, correctly, calls out how the “victims” in this debate are most often vulnerable groups — the kind of people who really could use much more access to private communications. It also notes that while some have suggested “technical mitigations” that could be used to identify illegal content in encrypted communications, these mitigations are “not technically feasible today.” This includes the much-discussed “client-side scanning” idea that Apple has toyed with.

Methods such as client-side scanning of a hash corpus, trained neural networks, and multiparty computation including partial or fully homomorphic encryption have all been suggested by some as solutions to enable messaging apps to identify, remove, and report content such as CSAM. They are often collectively referred to as “perceptual hashing” or “client-side scanning,” even though they can also be server-side. Nearly all proposed client-side scanning approaches would undermine the cryptographic integrity of end-to-end encryption, which because it is so fundamental to privacy would constitute significant, disproportionate restrictions on a range of rights, and should therefore not be pursued.
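
For context on the jargon: "perceptual hashing" means fingerprinting media so that visually similar files yield similar fingerprints, which a scanner then compares against a corpus of known-bad hashes. Here's a deliberately toy sketch of the idea in Python (a simple average hash; real systems are far more sophisticated, and nothing here reflects any actual deployment), which also hints at where the "matching errors" mentioned below come from.

# Toy "perceptual hash" (average hash) to illustrate the technique the
# report discusses. Real systems (e.g., PhotoDNA) are far more complex.

def average_hash(pixels):
    """pixels: an 8x8 grayscale image flattened to 64 ints (0-255).
    Returns a 64-bit fingerprint: 1 where a pixel beats the average."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A scanner flags anything within some distance of a known-bad hash. That
# threshold is exactly where false positives and evasion both creep in.
corpus = {average_hash([10] * 32 + [200] * 32)}
candidate = average_hash([12] * 32 + [198] * 32)  # slightly altered copy
flagged = any(hamming(candidate, h) <= 10 for h in corpus)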

The report also notes that even if someone came up with a backdoor technology that allowed Meta to scan encrypted communications, the risks to human rights would be great, given that such technology could be repurposed in dangerous ways.

For example, if Meta starts detecting and reporting universally illegal content like CSAM, some governments are likely to exploit this capability by requiring Meta to block and report legitimate content they find objectionable, thereby infringing on the privacy and freedom of expression rights of users. It is noteworthy that even some prior proponents of homomorphic encryption have subsequently altered their perspective for this reason, concluding that their proposals would be too easily repurposed for surveillance and censorship. In addition, these solutions are not foolproof; matching errors can occur, and bad actors may take advantage of the technical vulnerabilities of these solutions to circumvent or game the system.

The report notes that there are still ways that encrypted communications can be at risk, even name-checking NSO Group’s infamous Pegasus spyware.

How about all the usual complaints from law enforcement about how greater use of encryption will destroy their ability to solve crimes? BSR says “not so fast…”

While a shift to end-to-end encryption may reduce law enforcement agency access to the content of some communications, it would be wrong to conclude that law enforcement agencies are faced with a net loss in capability overall. Trends such as the collection and analysis of significantly increased volumes of metadata, the value of behavioral signals, and the increasing availability of artificial intelligence-based solutions run counter to the suggestion that law enforcement agencies will necessarily have less insight into the activities of bad actors than they did in the past. Innovative approaches can be deployed that may deliver similar or improved outcomes for law enforcement agencies, even in the context of end-to-end encryption. However, many law enforcement entities today lack the knowledge or the resources to take advantage of these approaches and are still relying on more traditional techniques.

Still, the report does note that Meta should take responsibility in dealing with some of the second- and third-order impacts of ramping up encryption. To that end, it does suggest some “mitigation measures” Meta should explore — though noting that a decision not to implement end-to-end encryption “would also more closely connect Meta to human rights harm.” In other words, if you want to protect human rights, you should encrypt. In fact, the report is pretty bluntly direct on this point:

If Meta were to choose not to implement end-to-end encryption across its messaging platforms in the emerging era of increased surveillance, hacking, and cyberattacks, then it could be considered to be “contributing to” many adverse human rights impacts due to a failure to protect the privacy of user communications.

Finally, the paper concludes with a series of recommendations for Meta on how to “avoid, prevent, and mitigate the potential adverse human rights impacts from the expansion of end-to-end encryption, while also maximizing the beneficial impact end-to-end encryption will have on human rights.”

The report has 45 specific (detailed and thoughtful) recommendations to that end. Meta has already committed to fully implementing 34 of them, partly implementing four more, and assessing six others. Meta rejected just one: a recommendation that it “continue investigating” client-side scanning techniques (the approach the report itself was already nervous about, as noted above) in case a method is eventually developed without all the problems detailed above. Meta says it sees no reason to continue exploring such a technology. From Meta’s response:

As the HRIA highlights, technical experts and human rights stakeholders alike have raised significant concerns about such client-side scanning systems, including impacts on privacy, technical and security risks, and fears that governments could mandate they be used for surveillance and censorship in ways that restrict legitimate expression, opinion, and political participation that is clearly protected under international human rights law.

Meta shares these concerns. Meta believes that any form of client-side scanning that exposes information about the content of a message without the consent and control of the sender or intended recipients is fundamentally incompatible with an E2EE messaging service. This would be the case even with theoretical approaches that could maintain “cryptographic integrity” such as via a technology like homomorphic encryption—which the HRIA rightly notes is a nascent technology whose feasibility in this context is still speculative.

People who use E2EE messaging services rely on a basic premise: that only the sender and intended recipients of a message can know or infer the contents of that message. As a result, Meta does not plan to actively pursue any such client-side scanning technologies that are inconsistent with this user expectation.

We spend a lot of time criticizing Facebook/Meta around these parts, as the company often seems to trip over itself in trying to do the absolutely wrongest thing over and over again. But on this it’s doing a very important and good thing. The BSR report confirms that.

Filed Under: client-side scanning, csam, encryption, end to end encryption, human rights, messenger
Companies: facebook, instagram, meta, whatsapp

DOJ Boss Joins UK, Australian Gov't In Asking Facebook To Ditch Its End-To-End Encryption Plan

from the [stacks-exploited-bodies-higher]-MR-FACEBOOK-PLEASE dept

The DOJ seems to be handling its anti-encryption (a.k.a. “going dark“) grief badly. I doubt it will ever reach “acceptance,” but it is accelerating through the rest of the stages with alarming speed.

It went through shock first, personified by former FBI director Jim Comey, who insisted tech companies were offering encryption to:

A. Give the feds the middle finger
B. Enable all sorts of dangerous criminals
C. Act like children in a roomful of adults

“Denial” seems to have been bypassed completely. Instead, Comey (and others) repeated the “shock” stage, banging the table louder and louder in hopes of convincing everyone they were right.

They weren’t right and encryption deployments continued.

The FBI and DOJ shifted quickly to anger. This was first displayed during the legal fight over the San Bernardino shooter’s iPhone. The DOJ insisted a law nearly 230 years old gave it permission to force Apple to break encryption. Apple disagreed. The court disagreed. The FBI insisted this would be the death of us all and ignored outside offers to crack the phone while pursuing precedent it would never obtain.

The phone was eventually cracked by a third party and the FBI moved on, still clinging to its “going dark” narrative, even as vendor after vendor stepped up to provide phone-cracking tools. It also overstated the number of “uncrackable” devices in its possession by at least 6,000 devices. It has been nearly 17 months since the FBI promised to correct this count. It still has yet to provide an updated number.

The DOJ’s new boss is carrying the (apparently unlit) torch for the FBI. He has demonized both end-to-end encryption and citizens who don’t believe cops are blameless white knights standing between us and the collapse of civilization.

Now, he’s moving the feds on to the next stage of grief: bargaining. A letter sent to Facebook — sporting Barr’s signature, along with those of other stalwart encryption foes like UK Home Secretary Priti Patel and Australian Home Affairs Minister Peter Dutton — begs Facebook to please please please stop adding encryption to its services.

BuzzFeed obtained a draft of the letter, which appears to be the charm offensive preceding the new US-UK data sharing agreement that targets encrypted communications. The letter contains some loaded language about child porn and its victims, suggesting Barr isn’t done leaning on victimized children to advance his anti-encryption efforts. Hey, it didn’t work for Comey, but maybe Bill Barr will get the horrific crime he needs to turn the public against their own best interests.

Here are some excerpts from the letter, as first published by BuzzFeed.

Dear Mr. Zuckerberg,

OPEN LETTER: FACEBOOK’S “PRIVACY FIRST” PROPOSALS

We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens.

So, this is a request for a backdoor. (But one no government agency will refer to as a “backdoor.”) “Lawful access” is law enforcement slang for “backdoor,” kind of like “officer-involved shooting” is slang for “homicide” and “detected the odor of marijuana” is slang for “Fourth Amendment violation.”

Barr (and his anti-encryption warriors) then attempt to call Zuck’s bluff… um… I guess??

In your post of 6 March 2019, “A Privacy-Focused Vision for Social Networking,” you acknowledged that “there are real safety concerns to address before we can implement end-to-end encryption across all our messaging services.” You stated that “we have a responsibility to work with law enforcement and to help prevent” the use of Facebook for things like child sexual exploitation, terrorism, and extortion. We welcome this commitment to consultation. As you know, our governments have engaged with Facebook on this issue, and some of us have written to you to express our views. Unfortunately, Facebook has not committed to address our serious concerns about the impact its proposals could have on protecting our most vulnerable citizens.

And there it is. “Our most vulnerable citizens.” Apparently that demographic group doesn’t contain Facebook users. Facebook users will be fine, I guess, even if any number of malicious hackers/governments want access to communications no one on Facebook actually wants to share with them. “For the children” is the game here, and Barr forges forward with contradictory statements and terrible logic.

We support strong encryption, which is used by billions of people every day for services such as banking, commerce, and communications.

(But, pointedly, not Facebook communications.)

We also respect promises made by technology companies to protect users’ data. Law abiding citizens have a legitimate expectation that their privacy will be protected.

(Except from us.)

However, as your March blog post recognized, we must ensure that technology companies protect their users and others affected by their users’ online activities. Security enhancements to the virtual world should not make us more vulnerable in the physical world. We must find a way to balance the need to secure data with public safety and the need for law enforcement to access the information they need to safeguard the public, investigate crimes, and prevent future criminal activity. Not doing so hinders our law enforcement agencies’ ability to stop criminals and abusers in their tracks.

Ah, the famous tradeoff government officials always pitch, but one that isn’t actually the tradeoff being made. It’s not privacy vs. the security of the nation as a whole. It’s personal security vs. government access that also grants access to criminals and state-sponsored hackers.

What people want is security. They aren’t really interested in trading security for government access. That does nothing for them. The government may solve a few more crimes, but the government was solving crimes long before cellphones, social media platforms, and end-to-end encryption.

Now, multiple governments feel they can’t solve crimes without on-demand access to people’s communications — something they have never had in the history of crime-solving and communications. But here we are, listening to Barr and his buddies make a pitch for encryption backdoors while standing on the backs of child porn victims.

Barr makes this pitch while acknowledging that Facebook probably does far more than all US and UK law enforcement agencies combined to combat child porn.

Facebook currently undertakes significant work to identify and tackle the most serious illegal content and activity by enforcing your community standards. In 2018, Facebook made 16.8 million reports to the US National Center for Missing & Exploited Children (NCMEC) – more than 90% of the 18.4 million total reports that year. As well as child abuse imagery, these referrals include more than 8,000 reports related to attempts by offenders to meet children online and groom or entice them into sharing indecent imagery or meeting in real life. The UK National Crime Agency (NCA) estimates that, last year, NCMEC reporting from Facebook will have resulted in more than 2,500 arrests by UK law enforcement and almost 3,000 children safeguarded in the UK.

And yet, Barr wants to complain. Barr and his UK/Aussie counterparts want to claim this isn’t enough. What’s really needed is insecure communications on a platform used by billions. And to make this claim, Barr again points to something Facebook does as evidence that Facebook isn’t doing enough.

While these statistics are remarkable, mere numbers cannot capture the significance of the harm to children. To take one example, Facebook sent a priority report to NCMEC, having identified a child who had sent self-produced child sexual abuse material to an adult male. Facebook located multiple chats between the two that indicated historical and ongoing sexual abuse. When investigators were able to locate and interview the child, she reported that the adult had sexually abused her hundreds of times over the course of four years, starting when she was 11. He also regularly demanded that she send him sexually explicit imagery of herself. The offender, who had held a position of trust with the child, was sentenced to 18 years in prison. Without the information from Facebook, abuse of this girl might be continuing to this day.

Here’s what Barr thinks will happen if Facebook deploys end-to-end encryption. Facebook will no longer be able to “read” messages sent between users, which will result in an increase in abused children that authorities will be powerless to help.

Our understanding is that much of this activity, which is critical to protecting children and fighting terrorism, will no longer be possible if Facebook implements its proposals as planned. NCMEC estimates that 70% of Facebook’s reporting – 12 million reports globally – would be lost. This would significantly increase the risk of child sexual exploitation or other serious harms. You have said yourself that “we face an inherent tradeoff because we will never find all of the potential harm we do today when our security systems can see the messages themselves”. While this tradeoff has not been quantified, we are very concerned that the right balance is not being struck, which would make your platform an unsafe space, including for children.

“For children.” That’s the leverage. Barr wants Facebook to abandon its encryption plans to save children. Sure, that’s admirable, if you’re willing to overlook the considerable downside of creating a backdoor for governments or simply removing the encryption offering altogether. Facebook’s encryption plans offer a whole new layer of security for lawful users — some of whom are targeted by authoritarian/corrupt governments. Many governments around the world pose as much of a threat to their citizens as criminals do. And a great many people believe their communications should be private, which means not being read/scanned by Facebook, much less any government that happens to stroll by waving some paperwork.

All Barr wants is for Facebook to abandon its encryption plans. He wants Facebook to be able to access the content of its users’ messages. He wants every government in the world to be able to access the content of users’ messages. He may only be aligned with three-fifths of the Five Eyes in this letter, but ensuring US/UK/Australian “lawful access” means giving every other two-bit dictatorship the same level of access to users’ communications.

This isn’t standard government bullshit. This is heinous, dangerous bullshit. This is a conglomerate of Western governments, on the eve of the deployment of a mysterious “data-sharing” agreement, portraying the implementation of encryption for communications as aiding and abetting the sexual abuse of children. This is a not-very-subtle smearing of every tech company that deploys encryption to protect its users from criminals and governments that behave like criminals. This is the abuse of the phrase “lawful access” to portray the possession of a warrant as a golden ticket to everything law enforcement wishes to obtain.

To be historically clear, a warrant has NEVER guaranteed access to communications. It has only allowed law enforcement to search for them. The implementation of encryption doesn’t change this equation. But Barr and others keep pushing this in hopes of persuading the public — and the tech companies they patronize — that secret communications are something new and far more dangerous than anything law enforcement has ever encountered prior to the rise of social media and smartphones.

Filed Under: doj, encryption, mark zuckerberg, messenger, peter dutton, priti patel, privacy, security, snooping, william barr
Companies: facebook, instagram, whatsapp

Facebook Experiments With End To End Encryption In Messenger

from the good-to-see dept

This has been rumored before, and perhaps isn't a huge surprise given WhatsApp's use of end-to-end encryption, but Facebook has launched a trial of end-to-end encryption in Facebook Messenger, under a program it's calling "Secret Conversations" (which also allows for expiring conversations).

It’s encrypted messages, end-to-end, so that in theory no one—not a snoop on your local network, not an FBI agent with a warrant, not even Facebook itself—can intercept them. For now, the feature will be available only to a small percentage of users for testing; everyone with Facebook Messenger gets it later this summer or in early fall.

What's good to see is that Facebook is directly admitting that offering end-to-end encryption is a necessary feature if you're in the messaging business today.

“It’s table stakes in the industry now for messaging apps to offer this to people,” says Messenger product manager Tony Leach. “We wanted to make sure we’re doing what we can to make messaging private and secure.”

This is a good sign. For years, tech companies more or less pooh-poohed requests for encryption, basically suggesting it was only tinfoil-hat-wearing paranoids who really wanted such things. But now they're definitely coming around (something you can almost certainly thank Ed Snowden for inspiring). And, not surprisingly, Facebook is using the Signal protocol, which is quickly becoming the de facto standard for end-to-end encrypted messaging. It's open source, well-known and well-tested, which doesn't mean it's perfect (nothing is!), but it's at least not going to have the massively obvious encryption errors that pop up when people try to roll their own.
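
For the curious, the primitive at the bottom of Signal-style protocols is an authenticated key agreement: each side derives the same secret from its own private key plus the other's public key, while an eavesdropper who sees only the public keys cannot. A stripped-down sketch using Python's cryptography package (just X25519, HKDF, and an AEAD cipher; the real protocol layers identity keys, prekeys, and the Double Ratchet on top of this):

# Sketch of the key-agreement core underneath Signal-style messaging.
# Not the full protocol: no identity keys, prekeys, or ratcheting.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

def derive(own_priv, peer_pub):
    # Diffie-Hellman exchange, then stretch the shared secret into a key.
    shared = own_priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo session").derive(shared)

# Both sides arrive at the same 32-byte key independently.
alice_key = derive(alice_priv, bob_priv.public_key())
bob_key = derive(bob_priv, alice_priv.public_key())
assert alice_key == bob_key

# That key drives an authenticated cipher for the actual messages.
nonce = os.urandom(12)
ct = ChaCha20Poly1305(alice_key).encrypt(nonce, b"hi bob", None)
assert ChaCha20Poly1305(bob_key).decrypt(nonce, ct, None) == b"hi bob"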

Some security folks have been complaining, though, that Facebook decided to make this “opt-in” rather than default. This same complaint cropped up recently when Google announced that end to end encryption would be an “option” on its new Allo messaging app. Some security folks argue — perhaps reasonably — that being optional rather than default almost certainly means that it won’t get enough usage, and some users may be fooled into thinking messages are encrypted when they are not.

Facebook's Chief Security Officer, Alex Stamos (who knows his shit on these things), took to Twitter (not Facebook?) to explain why it's optional, and makes a fairly compelling set of arguments (which also suggest that there's a chance that end-to-end encryption will eventually move toward being the default). A big part of it is that the way end-to-end encryption works (mainly the need to store your key on your local device) makes it quite difficult to deploy on a system, like Facebook Messenger, that people use from a variety of interfaces. Moxie Marlinspike, the driving force behind Signal, has already pointed out that the Signal protocol does support multi-device, so hopefully Facebook will figure it out eventually. But in the short term, it would definitely change the way people use Messenger, and it's at least somewhat understandable that Facebook would be moderately cautious in deploying a change like this that would end up removing some features and potentially confusing/upsetting many users of the service. Over time, hopefully, end-to-end encryption can be simplified and rolled out further.
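
To illustrate that multi-device wrinkle: because private keys live on individual devices, "sending to Bob" really means encrypting a separate copy for each device key Bob has registered. A hypothetical fan-out sketch (again with PyNaCl; a simplification, not how Messenger actually handles it):

# Illustrative fan-out: with E2EE, "Bob" isn't one key but one key per
# device, so each message is encrypted once per registered device.
from nacl.public import PrivateKey, SealedBox

bob_devices = {name: PrivateKey.generate() for name in ("phone", "laptop", "web")}

def send_to_bob(message):
    # One ciphertext per device. Note a brand-new device has no key yet,
    # so it can't read anything sent before it was registered.
    return {name: SealedBox(key.public_key).encrypt(message)
            for name, key in bob_devices.items()}

inbox = send_to_bob(b"hey")
assert SealedBox(bob_devices["phone"]).decrypt(inbox["phone"]) == b"hey"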

As some cryptographers have noted, this is a good start for a company with hundreds of millions of users on an existing platform, moving them towards encryption. A ground-up solution probably should have end-to-end encryption enabled by default, but for a massive platform making the shift, this is a reasonable approach and a good move to protect our privacy and security.

Anyway, anyone have the countdown clock running on how long until someone from the FBI or Congress whines about Facebook doing this?

Filed Under: encryption, end-to-end encryption, messenger, signal protocol
Companies: facebook

Politicians Investigating Leaks Sites… Not Leaks

from the you're-doing-it-wrong dept

It was rumored recently that some politicians were going to investigate Wikileaks for some leaked documents that were posted there. The details weren't clear, and I was hoping something was lost in translation, and that they meant the politicians would be investigating the leaks, not the site Wikileaks for posting them. No such luck, apparently. Three Congressional Reps have asked Homeland Security what can be done about sites that post leaked documents, including not just Wikileaks but Cryptome as well. In their letter to Homeland Security, they basically suggest that, if needed, they'll put forth legislation that would make reposting such content illegal, which could create one hell of a First Amendment legal battle at some point. Either way, these politicians are focused on the wrong things. The problem isn't these sites, which are just service providers for the information. The problem is the leaks of the info themselves.

Filed Under: blame, cryptome, homeland security, leaks, messenger, wikileaks

Where's The Line Between Exploiting A Security Flaw And Alerting People To The Flaw?

from the blurry-lines dept

Over the years we've seen so many stories of messengers being blamed for finding security holes that you would think most folks would realize how dangerous it is to do so. After all, it just encourages those who find security holes to keep quiet, resulting in huge vulnerabilities left wide open for those with malicious intent to exploit. However, what happens in cases where someone alerts those responsible for the flaw, but is also exploiting the flaw in some way? Do the lines get blurry?

For example, there’s a story making the rounds about a 15-year-old student who has been charged with various crimes after accessing data on school employees. Apparently the school misconfigured its servers, meaning that plenty of students could have gotten access to the file. What’s unclear, however, is the student’s motive. In the article linked above, it just says that one of the two students who accessed the data “alerted the principal” of the security hole, sending a semi-anonymous email signed from “a student.” However, the kid was quickly tracked down and promptly arrested.

On reading that story, it certainly sounds like yet another case of "blame the messenger." But it's not clear if that's really accurate. A local newspaper's version of the story is somewhat different: it claims that the "alert" to the principal was the student sending an email saying "look what I have," as if he were gloating, rather than alerting the school to a security breach. The police officer involved in the case also claims that the kid "was looking to profit from his criminal act." There aren't any details provided to back that up, but it certainly sounds like there may be more to this story than just a kid alerting officials to a security breach.

Filed Under: blame, flaw, messenger, security

Google Attacks The Messenger Over Android Vulnerability

from the not-very-friendly dept

There was plenty of news over the weekend about a security flaw found in Google's Android mobile operating system that could allow certain websites to run attack code and access sensitive data. The security researchers have said they won't reveal the details of the flaw, even though it's apparently a known flaw in some of the open source code in Android that Google did not update. However, that didn't stop Google from attacking the messenger, claiming that the security researcher who discovered the flaw broke some "unwritten rules" concerning disclosure. First of all, there is no widespread agreement on any such "unwritten rules," and many security researchers believe that revealing such flaws is an effective means of getting companies to patch software. Considering that Android's source code was released last week, it's quite reasonable to assume that many malicious hackers had already figured out this vulnerability, and making the news public seems to serve a valuable purpose. It's unfortunate that Google chose to point fingers rather than thanking the researcher and focusing on patching the security hole.

Filed Under: android, flaw, messenger, security
Companies: google