end to end encryption – Techdirt
Nevada Is In Court This Morning Looking To Get A Temporary Restraining Order Blocking Meta From Using End-To-End Encryption
from the protect-encryption-now dept
There have been plenty of silly lawsuits against tech companies over the last few years, but a new one from Nevada against Meta may be the craziest — and most dangerous — we’ve seen so far. While heavily redacted, the basics fit the pattern of all of these lawsuits: vague claims of harms to children from social media, lots of handwaving, and conclusory statements insisting, with no basis, that certain harms are directly traceable to choices Meta made (despite a near total lack of evidence to support those claims).
But, rather than go through the many, many, many problems of the lawsuit (you can read it yourself at the link above or embedded below), let’s jump ahead to a hearing that is happening today. Nevada has asked the court to issue a temporary restraining order, blocking Meta from using end-to-end encryption on messages, claiming that such encryption is harmful to children.
That sounds hyperbolic, but it’s exactly what’s happening:
With this Motion, the State seeks to enjoin Meta from using end-to-end encryption (also called “E2EE”) on Young Users’ Messenger communications within the State of Nevada. 1 This conduct—which renders it impossible for anyone other than a private message’s sender and recipient to know what information the message contains—serves as an essential tool of child predators and drastically impedes law enforcement efforts to protect children from heinous online crimes, including human trafficking, predation, and other forms of dangerous exploitation. Under such circumstances, the Nevada Supreme Court makes clear that to obtain the injunctive relief sought by this Motion, the State need only show “a reasonable likelihood that the statute was violated and that the statute specifically allows injunctive relief.” State ex rel. Off. of Att’y Gen., Bureau of Consumer Prot. v. NOS Commc’ns, Inc., 120 Nev. 65, 69, 84 P.3d 1052, 1055 (2004) (emphasis added). The State’s Complaint is replete with indisputable factual allegations detailing this harm and explaining—with specificity—how Meta’s conduct in this matter violates the Nevada Unfair and Deceptive Trade Practices Act, N.R.S. §§ 598.0903 through 598.0999 (“NDTPA”). And, because the NDTPA expressly authorizes the Attorney General to seek, inter alia, injunctive relief, the State’s Motion should be granted.
It’s no secret that lazy cops like the FBI’s Chris Wray (and before him, James Comey) have always hated encryption and wanted it banned for making it just slightly more difficult to read everyone’s messages. But at least they mostly talked about requiring magic backdoors that would let encryption work for normal people yet break when the cops came asking (this is not a thing, of course, since any such backdoor would break the encryption for everyone).
Here, the state of Nevada is literally just saying “fuck it, ban all encryption, because it might make it harder for us to spy on people.”
The TRO request is full of fearmongering language. I mean:
And, as of December 2023, Meta reconfigured Messenger to make E2EE—child predators’ main preferred feature—the default for all communications.
The TRO request also more or less admits that Nevada cops are too fucking lazy to go through basic due process, and the fact that the 4th Amendment, combined with encryption, means they have to take an extra step to spy on people is simply a bridge too far:
As set forth in the Declaration Anthony Gonzales, the use of end-to-end encryption in Messenger makes it impossible to obtain the content of a suspect’s (or defendant’s) messages via search warrant served on Meta. See Ex. 2 (Gonzales Decl.) at ¶¶ 9-16. Instead, investigators are only able to obtain “information provided [that] has been limited to general account information about a given suspect and/or metadata and/or log information about the Messenger communications of that suspect.” Id. at ¶ 14. Once again, this is the equivalent of trying to divine the substance of a letter between two parties by only using the visible information on the outside of a sealed envelope.
Instead, the State is forced to try to obtain the device that the suspect used to send communications via Messenger—which itself requires separate legal process—and then attempt to forensically extract the data using sophisticated software. See Ex. 1 (Defonseka Decl.) at ¶¶ 5- 8. Even this time-consuming technique has its limits. For example, it is not possible to obtain the critical evidence if the device is “locked,” or if the suspect has deleted data prior to relinquishing his phone. Id. at ¶ 8; see also Ex. 2 (Gonzales Decl.) at ¶ 19 (describing commonplace “destruction of the evidence sought by investigators” when trying to acquire Messenger communications).
Just because you’re a cop does not mean you automatically get access to all communications.
As for the actual legal issues at play, the state claims that Meta using encryption to protect everyone is a “deceptive trade practice.” I shit you not. Apparently Nevada has a newish state law (from 2022) that makes it an additional crime to engage in “unlawful use of encryption.” And the state’s argument is that because Meta has turned on encryption for messages, and some people may use that encryption to commit crimes, Meta has engaged in a deceptive trade practice by enabling the unlawful use of encryption. Really.
As a threshold matter, the State alleges that Meta “willfully committed . . . deceptive trade practices by violating one or more laws relating to the sale or lease of goods or services” in violation of NRS § 598.0923(1)(c). Compl. ¶ 473. Nevada law states that “[a] person shall not willfully use or attempt to use encryption, directly or indirectly, to: (a) Commit, facilitate, further or promote any criminal offense; (b) Aid, assist or encourage another person to commit any criminal offense; (c) Conceal the commission of any criminal offense; (d) Conceal or protect the identity of a person who has committed any criminal offense; or (e) Delay, hinder or obstruct the administration of the law.”…. This amounts to both direct and indirect aiding and abetting of child predators, via the use of E2EE, in violation of NRS § 205.486(1)(a)-(d). And, as demonstrated in the Gonzales Declaration, Meta knows that E2EE drastically limits the ability of law enforcement to obtain critical evidence in their investigations—namely, the substance of a suspect’s Messenger communications—which is in violation of NRS § 205.486(1)(e).
Furthermore, Nevada claims that Meta engaged in deceptive trade practices by promoting encryption as a tool to keep people safer.
Meta “represent[ed] that Messenger was safe and not harmful to Young Users’ wellbeing when such representations were untrue, false, and misleading…..
Similarly, Meta publicly touted its use of end-to-end encryption as a positive for users, meant to protect them from harm—going so far as to call it an “extra layer of security” for users
This is a full-on attack on encryption. If Nevada succeeds here, it will open the door for courts across the country to outlaw encryption entirely. This is a massive, dangerous attack on security and deserves much more attention.
Meta’s response to the motion is worth reading as well, if only for the near exasperation of the company’s lawyers as to why suddenly, now, end-to-end encryption for messaging — a technology that has been available for many, many years — has become so scary and so problematic that it needs to be stopped immediately.
Meta Platforms, Inc. (“Meta”)1 has offered end-to-end encryption (“E2EE”) as an option on its Messenger app since 2016. Compl. ¶ 202. E2EE technology is commonplace and has been hailed as “vital” by privacy advocates for protecting users’ communications with each other.2 The only change Meta made in December 2023 was to announce that the Messenger app would transition all messages to E2EE (rather than an option), id.—which is what Apple iMessage, Signal and numerous other messaging services already do.
These facts completely disprove the State’s assertion that it is entitled to temporary injunctive relief. E2EE has been available as an option on Meta’s Messenger app for eight years, and Meta began rolling out E2EE for all messages on Messenger months ago. The State cannot properly assert that it requires emergency injunctive relief—on two days’ notice—blocking Meta’s use of E2EE, when that feature has been in use on Messenger for years and began to be rolled out for all messages more than two months ago. The State’s delay—for years—to bring any enforcement action related to Meta’s use of E2EE (or other providers’ use of E2EE) demonstrates why its request for the extraordinary relief of a TRO should be denied.
The response also points out that for the state to argue it’s in such a rush to ban Meta from using end-to-end encryption, it sure isn’t acting like it’s in a rush:
The State admits that E2EE has been available as feature on Messenger for eight years. See Mot. 10 (“Since 2016, Meta has allowed users the option of employing E2EE for any private messages they send via Messenger.” (emphasis added)). On December 6, 2023—ten weeks ago— Meta began making E2EE the standard for all messages on Messenger, rather than a setting to which users could opt in. 3 In doing so, Messenger joined other services, including Apple’s iMessage, which has deployed E2EE as a standard feature since 2011, 4 and FaceTime, for which E2EE has been standard since at least 2013. 5 Yet the State waited until January 20, 2024—six weeks after the new default setting was announced, and eight years after E2EE first became available on Messenger—to file its Complaint. It then inexplicably waited another three weeks to serve Meta with the Complaint.6 As such, before yesterday, Meta had not even been able to review the full scope of the State’s allegations.7 Mot. 14. Concurrently with its lengthy Complaint, the State served the present motion, along with two supporting declarations that purport to justify enjoining a practice that was announced two months ago (and was available for years as a nondefault setting and as a feature in other services, such as Apple’s iMessage).
The State’s delays demonstrate the fundamental unfairness of requiring Meta to prepare this Opposition on one day’s notice. There is no emergency that requires this accelerated timetable. Quiroga v. Chen, 735 F. Supp. 2d 1226, 1228 (D. Nev. 2010) (“The temporary restraining order should be restricted to serving its underlying purpose of preserving the status quo and preventing irreparable harm just so long as is necessary to hold a hearing, and no longer.” (cleaned up)). Meta has not been given sufficient time to identify and prepare responses to the myriad assertions and misstatements in the State’s Motion. Moreover, the State apparently seeks to present live testimony from its witnesses. See Mot. at 6. In this unfairly accelerated and truncated timetable, Meta has not been given a fair chance to develop responses to the State’s witnesses, nor to develop and present its own witnesses and evidence. In short, there is no exigency that warrants this highly accelerated and unfairly compressed timetable for Meta’s Opposition to the TRO motion—in contrast to a motion for preliminary injunction that can be noticed, briefed and heard under a reasonable schedule that allows Meta a fair opportunity to be heard.
Meta also points out that Nevada itself recognizes the value of encryption:
Indeed, Nevada law recognizes the value of encryption, requiring data collectors to encrypt personal information. See Nev. Rev. Stat. 603A.215. A seismic shift that would fundamentally challenge the use of E2EE should not be undertaken with a 24-hour turnaround on briefing that does not afford Meta a fair and reasonable opportunity to develop a full response to the State’s arguments.
Nevada’s position here, including the haste with which it is moving (after doing nothing about encryption for years), is astounding, dangerous, and disconnected from reality. Hopefully the court recognizes this.
Filed Under: encryption, end to end encryption, messenger, nevada, tro
Companies: meta
Apple’s Nonsensical Attack On Beeper For Making Apple’s Own Users Safer
from the weakening-security-to-lock-up-users dept
Apple has spent the past few years pushing the marketing message that it, alone among the big tech companies, is dedicated to your privacy. This has always been something of an exaggeration, but certainly less of Apple’s business is based around making use of your data, and the company has built some useful encryption into its services (both for data at rest and data in transit). But its actions over the past few days call all of that into question, and suggest that Apple’s commitment to privacy is much more a commitment to walled gardens and Apple’s bottom line than to the privacy of Apple’s users.
First, some background:
Back in September, we noted that the EU had designated which services were going to be “gatekeepers” under the Digital Markets Act (DMA), which would put on them various obligations, including regarding some level of interoperability. Apple had been fighting the EU over whether or not iMessage would qualify, and just a few days ago there were reports that the EU would not designate iMessage as a gatekeeper. But that’s not final yet. This also came a few weeks after Apple revealed that, after years of pushing back on the idea, it might finally support RCS for messaging (though an older version that doesn’t support end-to-end encryption).
Separately, for years, there has been some debate over Apple’s setup in which messaging from Android phones shows up in “green bubbles” vs. iMessage’s “blue bubbles.” The whole green vs. blue argument is kind of silly, but some people reasonably pointed out that by not allowing Android users to actually use iMessage itself, it was making communications less secure. That’s because messages within the iMessage ecosystem can be end-to-end encrypted. But messages between iMessage and an Android phone are not. If Apple actually opened up iMessage to other devices, messaging for iPhone users and the people they spoke to would be much more protected.
But, instead of doing that, Apple has generally made snarky “just buy an iPhone” comments when asked about its unwillingness to interoperate securely.
That’s why Apple’s actions over the last week have been so stupidly frustrating.
For the past few years, some entrepreneurs (including some of the folks who built the first great smartwatch, the Pebble), have been building Beeper, a universal messaging app that is amazing. I’ve been using it since May and have sworn by it and gotten many others to use it as well. It creates a very nice, very usable single interface for a long list of messaging apps, reminiscent of earlier such services like Trillian or Pidgin… but better. It’s built on top of Matrix, the open-source decentralized messaging platform.
Over the last few months I’ve been talking up Beeper to lots of folks as the kind of app the world needs more of. It fits with my larger vision of a world in which protocols dominate over siloed platforms. It’s also an example of the kind of adversarial interoperability that used to be standard, and which Cory Doctorow rightfully argues is a necessary component of stopping the enshittification curve of walled garden services.
Of course, as we’ve noted, the big walled gardens are generally not huge fans of things that break down their walls, and have fought back over the years, including with terrible CFAA lawsuits against similar aggregators (the key one being Facebook’s lawsuit against Power.com). And ever since I started using Beeper, I wondered if anyone (and especially Apple) might take the same approach and sue.
There have been some reasonable concerns about how Beeper handled end-to-end encrypted messaging services like Signal, WhatsApp, and iMessage. It originally did this by basically setting up a bunch of servers that it controls, which have access to your messages. In some ways, Beeper is an “approved” man-in-the-middle attack on your messages, with some safeguards, but built in such a way that those messages are no longer truly end-to-end encrypted. Beeper has taken steps to do this as securely as possible, and many users will think those tradeoffs are acceptable for the benefit. But, still, those messages have not been truly end-to-end encrypted. (For what it’s worth, Beeper open-sourced this part of its code, so if you were truly concerned, you could host the bridge yourself and basically man-in-the-middle yourself to make Beeper work, but I’m guessing very few people did that).
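For a rough feel of why this setup breaks the “end-to-end” guarantee, here is a conceptual sketch (this is not Beeper’s code; the class, key handling, and names are made up for illustration). Any bridge hosted by a third party has to hold working session keys for both networks, which means plaintext necessarily exists on the bridge host while a message is relayed:

```python
from dataclasses import dataclass
from cryptography.fernet import Fernet


@dataclass
class HostedBridge:
    # Hypothetical per-network session keys that the bridge operator holds.
    imessage_side: Fernet
    matrix_side: Fernet

    def relay(self, ciphertext_from_imessage: bytes) -> bytes:
        plaintext = self.imessage_side.decrypt(ciphertext_from_imessage)
        # Right here, the bridge operator (or anyone who compromises this
        # server) can read the message: it is no longer end-to-end encrypted.
        return self.matrix_side.encrypt(plaintext)


# Self-hosting the bridge runs this same code on your own machine, which moves
# the trust boundary onto hardware you control rather than removing it.
bridge = HostedBridge(Fernet(Fernet.generate_key()), Fernet(Fernet.generate_key()))
relayed = bridge.relay(bridge.imessage_side.encrypt(b"hello from iMessage"))
```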
That said, from early on Beeper has made it clear that it would like to move away from this setup to true end-to-end encryption, but that requires interoperable end-to-end encrypted APIs, which (arguably) the DMA may mandate.
Or… maybe it just takes a smart hacking teen.
Over the summer, a 16-year-old named James Gill reached out to Beeper’s Eric Migicovsky and said he’d reimplemented iMessage in a project he’d released called Pypush. Basically, he reverse engineered iMessage and created a system by which you could message securely in a truly end-to-end encrypted manner with iMessage users.
If you want to understand the gory details, and why this setup is actually secure (and not just secure-like), Snazzy Labs has a great video:
Over the last few months, Beeper had upgraded the bridge setup it used for iMessage within its offering to make use of Pypush. Beeper also released a separate new app for Android, called Beeper Mini, which is just for making iMessage available for Android users in an end-to-end encrypted manner. It also allows users (unlike the original Beeper, now known as Beeper Cloud) to communicate with iMessage users just via their phone number, and not via an AppleID (Beeper Cloud requires the Apple ID). Beeper Mini costs $2/month (after a short free trial), and apparently there was demand for it.
I spoke to Migicovsky on Sunday and he told me they had over 100k downloads in the first two days it was available, and that it’s the most successful launch of a paid Android app ever. It was a clear cut example of why interoperability without permission (adversarial interoperability) is so important, and folks like Cory Doctorow rightfully cheered this on.
But all that attention also seems to have finally woken up Apple. On Friday, users of both Beeper Cloud and Beeper Mini found that they could no longer message people via iMessage. If you watch the Snazzy Labs YouTube video above, he explains why it’s not that easy for Apple to block the way Beeper Mini works. But Apple still has more resources at its disposal than just about anyone else, and it devoted some of them to doing exactly what Snazzy Labs (and Beeper) thought it was unlikely to do: blocking Beeper Mini from working.
So… with that all as background, the key thing to understand here is that Beeper Mini was making everyone’s messaging more secure. It certainly better protected Android users in making sure their messages to iPhone users were encrypted. And it similarly better protected Apple users, in making sure their messages to Android users were also encrypted. Which means that Apple’s response to this whole mess underscores the lie that Apple cares about users’ privacy.
Apple’s PR strategy is often to just stay silent, but it actually did respond to David Pierce at the Verge and put out a PR statement that is simply utter nonsense, claiming it did this to “protect” Apple users.
At Apple, we build our products and services with industry-leading privacy and security technologies designed to give users control of their data and keep personal information safe. We took steps to protect our users by blocking techniques that exploit fake credentials in order to gain access to iMessage. These techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks. We will continue to make updates in the future to protect our users.
Almost everything here is wrong. Literally, Beeper Mini’s interoperable setup better protected the privacy of Apple’s customers than Apple itself did. Beeper Mini’s setup absolutely did not “pose significant risks to user security and privacy.” It effectively piggybacked onto Apple’s end-to-end encryption system to make sure that it was extended to messages between iOS users and Android users, better protecting both of them.
When I spoke to Eric on Sunday he pledged that if Apple truly believed that Beeper Mini somehow put Apple users at risk, he was happy to agree to have the software fully audited by an independent third party security auditor that the two organizations agreed upon to see if it created any security vulnerabilities.
For many years people like myself and Cory Doctorow have been talking up the importance of interoperability, open protocols, and an end to locked-down silos. Big companies, including Apple, have often made claims about “security” and “privacy” to argue against such openness. But this seems like a pretty clear case in which that’s obviously bullshit. The security claims here are weak given that, from the way Beeper Mini is constructed, it appears significantly more secure than Apple’s own setup, which leaves messages between iOS and Android users without end-to-end protection.
And for Apple to do this just as policymakers are looking for more and more ways to ensure openness and interoperability seems like a very stupid self-own. We’ll see if the EU decides to exempt iMessage from the DMA’s “gatekeeper” classification and its interop requirements, but policymakers elsewhere are certainly noticing.
While I often think that Elizabeth Warren’s tech policy plans are bonkers, she’s correctly calling out this effort by Apple.
She’s correct. Chatting between different platforms should be easy and secure, and Apple choosing to weaken the protections of its users while claiming it’s doing the opposite is absolute nonsense, and should be called out as such.
Filed Under: adversarial interoperability, blue bubbles, end to end encryption, green bubbles, imessage, interoperability, messaging, open source, protocols, pypush, silos
Companies: apple, beeper
After Passing Online Safety Bill, UK Government Gets Back To Harassing Meta About Its End-To-End Encryption
from the yeah-well-we-still-want-the-thing-we-always-wanted dept
Last week, it appeared, ever so briefly, that the UK government might finally be giving up on its desire to legislate away at least one end of messaging services’ end-to-end encryption. Having faced resistance from nearly every encrypted service (all of which threatened to exit the UK if anti-encryption mandates were put in place), as well as internal reports strongly suggesting that undermining encryption would be a truly terrible idea, it seemed those pushing the Online Safety Bill were finally willing to accept the uncomfortable fact that breaking encryption only results in broken encryption. What it doesn’t do is end the online harms the UK government felt this bill addressed.
But the concession wasn’t much of a concession. Nothing changed in the wording of the bill. All that really happened is a couple of proponents suggested the UK government wouldn’t pull the trigger on encryption-breaking demands immediately. This concession was surrounded by statements suggesting government officials truly thought the only thing standing between it and “safely” broken encryption was recalcitrant techies working for services like WhatsApp and Signal.
Parkinson said that Ofcom, the tech regulator, would only require companies to scan their networks when a technology was developed that was capable of doing so.
[…]
“As has always been the case, as a last resort, on a case-by-case basis and only when stringent privacy safeguards have been met, [the legislation] will enable Ofcom to direct companies to either use, or make best efforts to develop or source, technology to identify and remove illegal child sexual abuse content — which we know can be developed,” the government said.
The Online Safety Bill has now been passed by the UK Parliament. And, as Glyn Moody recently pointed out, all the anti-encryption language remains intact.
[UK] Technology Secretary Michelle Donelan insisted that nothing had changed in the long-awaited legislation, after privacy campaigners earlier this month claimed a victory following widespread reports of a shift in the Government stance on encryption.
Now that the bill has passed with the anti-encryption mandates still intact, it looks like the UK government is going back to leaning hard on uncooperative tech companies in hopes of pressuring them into abandoning encryption plans prior to the implementation of the new law. Facebook has long been the target of criticism from governments around the world that seem to feel they’re entitled to demand Meta not protect its Facebook Messenger service with end-to-end encryption.
Years of ignored requests are culminating in a last-minute push by UK legislators, as Natasha Lomas reports for TechCrunch:
In an interview on BBC Radio 4’s Today Program this morning, [Home Secretary] Suella Braverman claimed the vast majority of online child sexual abuse activity that U.K. law enforcement is currently able to detect is taking place on Facebook Messenger and Instagram. She then hit out at Meta’s proposal to expand its use of E2EE “without safety measures” to the two services — arguing the move would “disable and prohibit law enforcement agencies from accessing this criminal activity [i.e. CSAM]”.
Saying that one of the most popular messaging services is responsible for the most CSAM reports doesn’t really say anything more than the service has a lot of users. It doesn’t mean Meta somehow cares less about limiting the sharing of CSAM than other, less popular services. And I have no idea what “safety measures” Braverman thinks can be attached to E2EE services without, you know, removing at least an E or two.
Braverman doesn’t know or doesn’t care. Or both. Her further comments indicate she’d prefer Meta just maintained its less-than-secure status quo, sacrificing users’ privacy and security in favor of government gains.
First, there’s the stick:
Asked by the BBC what the government would do if Meta goes ahead with its E2EE rollout without the additional measures she wants, Braverman confirmed Ofcom has powers to fine Meta up to 10% of its global annual turnover if it fails to comply with the Online Safety Bill.
Then there’s the carrot — Braverman says she wants to “work constructively” with Meta to create some sort of magical form of encryption Meta can break at will without compromising user security.
Then there’s the insanity:
“My job is fundamentally to protect children not paedophiles, and I want to work with Meta so that they roll out the technology that enables that objective to be realised. That protects children but also protects their commercial interests,” she said. “We know that technology exists…”
Really? Where is it? Can you point to any examples of this encryption that remains secure despite deliberately introduced flaws? Have you tried it out? Have you performed a security audit on it? SHOW ME ON THE PUBLICLY RELEASED GOVERNMENT REPORT WHERE THIS TECHNOLOGY ALREADY EXISTS.
While it’s true tech exists to detect hashes that match known CSAM, no tech exists to perform hash-matching on E2EE communication services. The only way to do this is to perform scanning on one side of the communication. And to do that, you have to remove the encryption from one end. Some have suggested this is a solution to the problem. But the only tech company that considered moving forward with voluntary client-side scanning abandoned that plan shortly after hearing from everyone (anti-encryption legislators excepted, of course) what a bad idea that would be.
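To make concrete what “hash-matching” means here, the sketch below shows the kind of lookup a client-side scanner would have to run on the device itself, before a message is encrypted. It is deliberately naive and purely illustrative: real systems such as PhotoDNA use perceptual hashes so that resized or recompressed copies still match, whereas an exact cryptographic hash like the one below is trivially evaded, and the hash list here is a made-up placeholder.

```python
import hashlib

# Hypothetical placeholder list of known-bad image hashes (hex digests).
# Real deployments build, distribute, and query such lists very differently.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"stand-in bytes for a known image").hexdigest(),
}


def flag_before_encrypting(attachment: bytes) -> bool:
    """Return True if the attachment matches a hash on the known-bad list.

    In a client-side scanning design, this check runs on the sender's device,
    i.e., on one side of the conversation before E2EE is applied -- exactly the
    "remove the encryption from one end" tradeoff described above.
    """
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_HASHES
```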
So, in a sense, the tech does exist. But it’s not something anyone truly concerned about safety, security, or privacy would consider to be a real solution to the CSAM problem. But that’s what the UK government wants: insecure services that allow it to take a look at anyone’s communications. And that should never be considered an acceptable outcome.
Filed Under: encryption, end to end encryption, ofcom, online safety bill, suella braverman, uk
Companies: facebook, meta
The STOP CSAM Act Is An Anti-Encryption Stalking Horse
from the durbin,-this-is-disturbin dept
Recently, I wrote for Lawfare about Sen. Dick Durbin’s new STOP CSAM Act bill, S.1199. The bill text is available here. There are a lot of moving parts in this bill, which is 133 pages long. (Mike valiantly tries to cover them here.) I am far from done with reading and analyzing the bill language, but already I can spot a couple of places where the bill would threaten encryption, so those are what I’ll discuss today.
According to Durbin, online service providers covered by the bill would have “to produce annual reports detailing their efforts to keep children safe from online sex predators, and any company that promotes or facilitates online child exploitation could face new criminal and civil penalties.” Child safety online is a worthy goal, as is improving public understanding of how influential tech companies operate. But portions of the STOP CSAM bill pose risks to online service providers’ ability to use end-to-end encryption (E2EE) in their service offerings.
E2EE is a widely used technology that protects everyone’s privacy and security by encoding the contents of digital communications and files so that they’re decipherable only by the sender and intended recipients. Not even the provider of the E2EE service can read or hear its users’ conversations. E2EE is built in by default to popular apps such as WhatsApp, iMessage, FaceTime, and Signal, thereby securing billions of people’s messages and calls for free. Default E2EE is also set to expand to Meta’s Messenger app and Instagram direct messages later this year.
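For readers who want to see the core mechanic, here is a minimal sketch (using the Python `cryptography` library) of how two endpoints can derive a shared key that the relaying server never learns. It is a toy illustration only; real E2EE protocols like Signal’s add identity verification, key ratcheting, and forward secrecy on top of this basic exchange, and the key handling below is deliberately simplified.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates its own key pair; only the *public* halves are ever
# sent through the provider's servers.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Both sides compute the same shared secret from their own private key and the
# other side's public key. The relaying server never sees either private key,
# so it cannot derive this secret.
shared_alice = alice_private.exchange(bob_private.public_key())
shared_bob = bob_private.exchange(alice_private.public_key())


def message_key(shared_secret: bytes) -> bytes:
    """Stretch the raw shared secret into a 32-byte symmetric key."""
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"toy-e2ee-demo").derive(shared_secret)


assert message_key(shared_alice) == message_key(shared_bob)

# Alice encrypts; only someone holding the derived key (i.e., Bob) can decrypt.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(message_key(shared_alice)).encrypt(nonce, b"hello", None)
print(ChaCha20Poly1305(message_key(shared_bob)).decrypt(nonce, ciphertext, None))
```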
E2EE’s growing ubiquity seems like a clear win for personal privacy, security, and safety, as well as national security and the economy. And yet E2EE’s popularity has its critics – including, unfortunately, Sen. Durbin. Because it’s harder for providers and law enforcement to detect malicious activity in encrypted environments than unencrypted ones (albeit not impossible, as I’ll discuss), law enforcement officials and lawmakers often demonize E2EE. But E2EE is a vital protection against crime and abuse, because it helps to protect people (children included) from the harms that happen when their personal information and private conversations fall into the wrong hands: data breaches, hacking, cybercrime, snooping by hostile foreign governments, stalkers and domestic abusers, and so on.
That’s why it’s so important that national policy promote rather than dissuade the use of E2EE – and why it’s so disappointing that STOP CSAM has turned out to be just the opposite: yet another misguided effort by lawmakers in the name of online safety that would only make us all less safe.
First, STOP CSAM’s new criminal and civil liability provisions could be used to hold E2EE services liable for CSAM and other child sex offenses that happen in encrypted environments. Second, the reporting requirements look like a sneaky attempt to tee up future legislation to ban E2EE outright.
STOP CSAM’s New Civil and Criminal Liability for Online Service Providers
Among the many, many things it does in 133 pages, STOP CSAM creates a new federal crime, “liability for certain child exploitation offenses.” It also creates new civil liability by making a carve-out from Section 230 immunity to allow child exploitation victims to bring lawsuits against the providers of online services, as well as the app stores that make those services available. Both of these new forms of liability, criminal and civil, could be used to punish encrypted services in court.
The new federal crime is for a provider of an interactive computer service (an ICS provider, as defined in Section 230) “to knowingly (1) host or store child pornography or make child pornography available to any person; or (2) otherwise knowingly promote or facilitate a violation of” certain federal criminal statutes that prohibit CSAM and child sexual exploitation (18 U.S.C. §§ 2251, 2251A, 2252, 2252A, or 2422(b)).
This is rather duplicative: It’s already illegal under those laws to knowingly possess CSAM or knowingly transmit it over the Internet. That goes for online service providers, too. So if there’s an online service that “knowingly hosts or stores” or transmits or “makes available” CSAM (whether on its own or by knowingly letting its users do so), that’s already a federal crime under existing law, and the service can be fined.
So why propose a new law that says “this means you, online services”? It’s the huge size of the fines that could be imposed on providers: up to $1 million, or $5 million if the provider’s conduct either causes someone to be harmed or “involves a conscious or reckless risk of serious personal injury.” Punishing online service providers specifically with enormous fines, for their users’ child sex offenses, is the point of re-criminalizing something that’s already a crime.
The new civil liability for providers comes from removing Section 230’s applicability to civil lawsuits by the victims of CSAM and other child sexual exploitation crimes. There’s a federal statute, 18 U.S.C. § 2255, that lets those victims sue the perpetrator(s). Section 230 currently bars those lawsuits from being brought against providers. That is, Congress has heretofore decided that if online services commit the aforementioned child sex offenses, the sole enforcer should be the Department of Justice, not civil plaintiffs. STOP CSAM would change that. (More about that issue here.)
Providers would now be fair game for 2255 lawsuits by child exploitation victims. Victims could sue for “child exploitation violations” under an enumerated list of federal statutes. They could also sue for “conduct relating to child exploitation.” That phrase is defined with respect to two groups: ICS providers (as defined by Section 230), and “software distribution services” (think: app stores, although the definition is way broader than that).
Both ICS providers and software distribution services could be sued for one type of “conduct relating to child exploitation”: “the intentional, knowing, reckless, or negligent promotion or facilitation of conduct that violates” an enumerated list of federal child exploitation statutes. And, ICS providers alone (but not software distribution services) could be sued for a different type of conduct: “the intentional, knowing, reckless, or negligent hosting or storing of child pornography or making child pornography available to any person.”
So, to sum up: STOP CSAM
(1) creates a new crime when ICS providers knowingly promote or facilitate CSAM and child exploitation crimes, and
(2) exposes ICS providers to civil lawsuits by child exploitation victims if they intentionally, knowingly, recklessly, or negligently (a) host/store/make CSAM available, or (b) promote or facilitate child exploitation conduct (for which app stores can be liable too).
Does E2EE “Promote or Facilitate” Child Exploitation Offenses?
Here, then, is the literally million-dollar question: Do E2EE service providers “promote or facilitate” CSAM and other child exploitation crimes, by making their users’ communications unreadable by the provider and law enforcement?
It’s not clear what “promote or facilitate” even means! That same phrase is also found in a 2018 law, SESTA/FOSTA, that carved out sex trafficking offenses from providers’ general immunity against civil lawsuits and state criminal charges under Section 230. And that same phrase is currently being challenged in court as unconstitutionally vague and overbroad. Earlier this year, a panel of federal appeals judges appeared skeptical of its constitutionality at oral argument, but they haven’t issued their written opinion yet. Why Senator Durbin thinks it’s a great idea to copy language that’s on the verge of being held unconstitutional, I have no clue.
If a court were to hold that E2EE services “promote or facilitate” child sex offenses (whatever that means), then the E2EE service provider’s liability would turn on whether the case was criminal or civil. If it’s criminal, then federal prosecutors would have to prove the service knowingly promoted or facilitated the crime by being E2EE. “Knowing” is a pretty high bar to prove, which is appropriate for a crime.
In a civil lawsuit, however, there are four different mental states the plaintiff could choose from. Two of them – recklessness or negligence – are easier to prove than the other two (knowledge or intent). They impose a lower bar to establishing the defendant’s liability in a civil case than the DOJ would have to meet in a federal criminal prosecution. (See here for a discussion of these varying mental-state standards, with helpful charts.)
Is WhatsApp negligently facilitating child exploitation because it’s E2EE by default? Is Zoom negligently facilitating child exploitation because users can choose to make a Zoom meeting E2EE? Are Apple and Google negligently facilitating child exploitation by including WhatsApp, Zoom, and other encrypted apps in their app stores? If STOP CSAM passes, we could expect plaintiffs to immediately sue all of those companies and argue exactly that in court.
That’s why STOP CSAM creates a huge disincentive against offering E2EE. It would open up E2EE services to a tidal wave of litigation by child exploitation victims for giving all their users a technology that is indispensable to modern cybersecurity and data privacy. The clear incentive would be for E2EE services to remove or weaken their end-to-end encryption, so as to make it easier to detect child exploitation conduct by their users, in the hopes that they could then avoid being deemed “negligent” on child safety because, ironically, they used a bog-standard cybersecurity technology to protect their users.
It is no accident that STOP CSAM would open the door to punishing E2EE service providers. Durbin’s February press release announcing his STOP CSAM bill paints E2EE as antithetical to child safety. The very first paragraph predicts that providers’ continued adoption of E2EE will cause a steep reduction in the volume of (already mandated) reports of CSAM they find on their services. It goes on to suggest that deploying E2EE treats children as “collateral damage,” framing personal privacy and child safety as flatly incompatible.
The kicker is that STOP CSAM never even mentions the word “encryption.” Even the EARN IT Act – a terrible bill that I’ve decried at great length, which was reintroduced in the Senate on the same day as STOP CSAM – has a weak-sauce provision that at least kinda tries halfheartedly to protect encryption from being the basis for provider liability. STOP CSAM doesn’t even have that!
Teeing Up a Future E2EE Ban
Even leaving aside the “promote or facilitate” provisions that would open the door to an onslaught of litigation against the providers of E2EE services, there’s another way in which STOP CSAM is sneakily anti-encryption: by trying to get encrypted services to rat themselves out to the government.
The STOP CSAM bill contains mandatory transparency reporting provisions, which, as my Lawfare piece noted, have become commonplace in the recent bumper crop of online safety bills. The transparency reporting requirements apply to a subset of the online service providers that are required to report CSAM they find under an existing federal law, 18 U.S.C. § 2258A. (That law’s definition of covered providers has a lot of overlap, in practice, with Section 230’s “ICS provider” definition. Both of these definitions plainly cover apps for messaging, voice, and video calls, whether they’re E2EE or not.) In addition to reporting the CSAM they find, those covered providers would also separately have to file annual reports about their efforts to protect children.
Not every provider that has to report CSAM would have to file these annual reports, just the larger ones: specifically, those with at least one million unique monthly visitors/users and over $50 million in annual revenue. That’s a distinction from the “promote or facilitate” liability discussed above, which doesn’t just apply to the big guys.
Covered providers must file an annual report with the Attorney General and the Federal Trade Commission that provides information about (among other things) the provider’s “culture of safety.” This means the provider must describe and assess the “measures and technologies” it employs for protecting child users and keeping its service from being used to sexually abuse or exploit children.
In addition, the “culture of safety” report must also list “[f]actors that interfere with the provider’s ability to detect or evaluate instances of child sexual exploitation and abuse,” and assess those factors’ impact.
That provision set off alarm bells in my head. I believe this reporting requirement is intended to force providers to cough up internal data and create impact assessments, so that the federal government can then turn around and use that information as ammunition to justify a subsequent legislative proposal to ban E2EE.
This hunch arises from Sen. Durbin’s own framing of the bill. As I noted above, his February press release about STOP CSAM spends its first two paragraphs claiming that E2EE would “turn off the lights” on detecting child sex abuse online. Given this framing, it’s pretty straightforward to conclude that the bill’s “interfering factors” report requirement has E2EE in mind.
So: In addition to opening the door to civil and/or criminal liability for E2EE services without ever mentioning the word “encryption” (as explained above), STOP CSAM is trying to lay the groundwork for justifying a later bill to more overtly ban providers from offering E2EE at all.
But It’s Not That Simple, Durbin
There’s no guarantee this plan will succeed, though. If this bill passes, I’m skeptical that its ploy to fish for evidence against E2EE will play out as intended, because it rests on a faulty assumption. The policy case for outlawing or weakening E2EE rests on the oft-repeated premise that online service providers can’t fight abuse unless they can access the contents of users’ files and communications at will, a capability E2EE impedes. However, my own research has proved this assumption untrue.
Last year, I published a peer-reviewed article analyzing the results of a survey I conducted of online service providers, including some encrypted messaging services. Many of the participating providers would likely be covered by the STOP CSAM bill. The survey asked participants to describe their trust and safety practices and rank how useful they were against twelve different categories of online abuse. Two categories pertained to child safety: CSAM and child sexual exploitation (CSE) such as grooming and enticement.
My findings show that CSAM is distinct from other kinds of online abuse. What currently works best to detect CSAM isn’t what works best against other abuse types, and vice versa. For CSAM, survey participants considered scanning for abusive content to be more useful than other techniques (user reporting and metadata analysis) that — unlike scanning — don’t rely on at-will provider access to user content. However, that wasn’t true of any other category of abuse — not even other child safety offenses.
For detecting CSE, user reporting and content scanning were considered equally useful for abuse detection. In most of the remaining 10 abuse categories, user reporting was deemed more useful than any other technique. Many of those categories (e.g., self-harm and harassment) affect children as well as adults online. In short, user reports are a critically important tool in providers’ trust and safety toolbox.
Here’s the thing: User reporting — the best weapon against most kinds of abuse, according to providers themselves — can be, and is, done in E2EE environments. That means rolling out E2EE doesn’t nuke a provider’s abuse-fighting capabilities. My research debunks that myth.
My findings show that E2EE does not affect a provider’s trust and safety efforts uniformly; rather, E2EE’s impact will likely vary depending on the type of abuse in question. Even online child safety is not a monolithic problem (as was cogently explained in another recent report by Laura Draper of American University). There’s simply no one-size-fits-all answer to solving online abuse.
From these findings, I conclude that policymakers should not pass laws regulating encryption and the Internet based on the example of CSAM alone, because CSAM poses such a unique challenge.
And yet that’s just what I suspect Sen. Durbin has in mind: to collect data about one type of abusive content as grounds to justify a subsequent law banning providers from offering E2EE to their users. Never mind that such a ban would affect all content and all users, whether abusive or not.
That’s an outcome we can’t afford. Legally barring providers from offering strong cybersecurity and privacy protections to their users wouldn’t keep children safe; it would just make everybody less safe, children included. As a recent report from the Child Rights International Network and DefendDigitalMe describes, while E2EE can be misused, it is nevertheless a vital tool for protecting the full range of children’s rights, from privacy to free expression to protection from violence (including state violence and abusive parents). That’s in addition to the role strong encryption plays in protecting the personal data, financial information, sensitive secrets, and even bodily safety of domestic violence victims, military servicemembers, journalists, government officials, and everyone in between.
Legislators’ tunnel-vision view of E2EE as nothing but a threat requires casting all those considerations aside — treating them as “collateral damage,” to borrow Sen. Durbin’s phrase. But the reality is that billions of people use E2EE services every day, of whom only a tiny sliver use them for harm — and my research shows that providers have other ways to deal with those bad actors. As I conclude in my article, anti-E2EE legislation just makes no sense.
Given the crucial importance of strong encryption to modern life, Sen. Durbin shouldn’t expect the providers of popular encrypted services to make it easy for him to ban it. Those major players covered by the STOP CSAM bill? They have PR departments, lawyers, and lobbyists. Those people weren’t born yesterday. If I can spot a trap, so can they. The “culture of safety” reporting requirements are meant to give providers enough rope to hang themselves. That’s like a job interviewer asking a candidate what their greatest weakness is and expecting a raw and damning response. The STOP CSAM bill may have been crafted as a ticking time bomb for blowing up encryption, but E2EE service providers won’t be rushing to light the fuse.
From my research, I know that providers’ internal child-safety efforts are too complex to be reduced to a laundry list of positives and negatives. If forced to submit the STOP CSAM bill’s mandated reports, providers will seize upon the opportunity to highlight how their E2EE services help protect children and describe how their panoply of abuse-detection measures (such as user reporting) help to mitigate any adverse impact of E2EE. While its opponents try to caricature E2EE as a bogeyman, the providers that actually offer E2EE will be able to paint a fuller picture.
Will It Even Matter What Providers’ “Culture of Safety” Reports Say?
Unfortunately, given how the encryption debate has played out in recent years, we can expect Congress and the Attorney General (a role recently held by vehemently anti-encryption individuals) to accuse providers of cherry-picking the truth in their reports. And they’ll do so even while they themselves cherry-pick statistics and anecdotes that favor their pre-existing agenda.
I’m basing that prediction on my own experience of watching my research, which shows that online trust and safety is compatible with E2EE, get repeatedly cherry-picked by those trying to outlaw E2EE. They invariably highlight my anomalous findings regarding CSAM while leaving out all the other findings and conclusions that are inconvenient to their false narrative that E2EE wholly precludes trust and safety enforcement. As an academic, I know I can’t control how my work product gets used. But that doesn’t mean I don’t keep notes on who’s misusing it and why.
Providers can offer E2EE and still effectively combat the misuse of their services. Users do not have to accept intrusive surveillance as the price of avoiding untrammeled abuse, contrary to what anti-encryption officials like Sen. Durbin would have us believe.
If the STOP CSAM bill passes and its transparency reporting provisions go into effect, providers will use them to highlight the complexity of their ongoing efforts against online child sex abuse, a problem that is as old as the Internet. The question is whether that will matter to congressmembers who have already made up their minds about the supposed evils of encryption and the tech companies that offer it — or whether those annual reports were always intended as an exercise in futility.
What’s Next for the STOP CSAM Bill?
It took two months after that February press release for Durbin to actually introduce the bill in mid-April, and it took even longer for the bill text to actually appear on the congressional bill tracker. Durbin chairs the Senate Judiciary Committee, where the bill was supposed to be considered in committee meetings during each of the last two weeks, but it got punted out both times. Now, the best guess is that it will be discussed and marked up this coming Thursday, May 4. However, it’s quite possible it will get delayed yet again. On the one hand, Durbin as the committee chair has a lot of power to move his own bill along; on the other hand, he hasn’t garnered a single co-sponsor yet, and might take more time to get other Senators on board before bringing it to markup.
I’m heartened that Durbin hasn’t gotten any co-sponsors and has had to slow-roll the bill. STOP CSAM is very dense, it’s very complicated, and in its current form, it poses a huge threat to the security and privacy of the Internet by dissuading E2EE. There may be some good things in the bill, as Mike wrote, but at 133 pages long, it’s hard to figure out what the bill actually does and whether those would be good or bad outcomes. I’m sure I’ll be writing more about STOP CSAM as I continue to read and digest it. Meanwhile, if you have any luck making sense of the bill yourself, and your Senator is on the Judiciary Committee, contact their office and let them know what you think.
Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory. A version of this piece originally appeared on the Stanford CIS blog.
Filed Under: dick durbin, encryption, end to end encryption, liability, section 230, stop csam act
Musk Does Have Some Good Ideas: Encrypting DMs Would Be Huge, But…
from the it's-not-as-easy-as-he-thinks dept
We’ve been somewhat critical of Elon Musk‘s tenure as Twitter owner and CEO (I think for fairly good reasons), but he does have a few good ideas. Chief among them: wanting to enable encrypted direct messages (DMs). He’s mentioned it before, but also had this slide in a recent internal presentation he gave:
There’s not much to go on with that slide, given that… it just says “Encrypted DMs” and appears to have an image of… existing, unencrypted DMs.
However, Jane Manchun Wong, who is basically a wizard in sniffing out new features and new code being tested on Twitter (and elsewhere) notes that she’s seen snippets of code referencing Signal Protocol for encrypted DMs already showing up inside the Twitter iOS app.
Of course, it appears that’s old code. Like so many things that Elon trots out, these were ideas that Twitter was already exploring, though it did appear that the encrypted DM work had been shelved. Jane had also spotted encrypted DM testing all the way back in early 2018 as well.
That said, it looks like the new code… is just the old code that Twitter had worked on being dusted off. Former Twitter engineer Brandon Carpenter notes that the code that Jane spotted was really his own code from that 2018 test, quote tweeting Jane and noting “Oh look! Some code I wrote four years ago.”
For what it’s worth, Brandon also laid out one of the issues they had back in 2018: in the process of trying to obtain a license from Signal, Moxie Marlinspike, Signal’s founder, ghosted them for weeks when he decided to go sailing without telling anyone. I’ve seen some people question why anyone would need a license from Signal, considering that the Signal Protocol is an open protocol that anyone can use. But it’s not easy to do it right, and there are many, many reasons to get Signal’s seal of approval before trusting the encryption.
On a… let’s say related note… Twitter’s former Chief Information Security Officer, Lea Kissner, wrote a very interesting and useful thread about the general pitfalls of trying to implement end-to-end encryption, especially in a web app. Suffice it to say, it is not easy, and it is not something you can rush through without things going very, very badly. There are big questions to consider, including how you handle lost keys, how you handle stolen keys, how you handle abuse, and much, much more.
This has all proven challenging for others as well, including Facebook’s very slow efforts to roll out more end-to-end encryption among its various messaging products with a much larger team.
Still, it’s good that Elon considers this important, and one hopes that he can actually get it done, and at least implement less bad answers to some of the many questions that have stymied other teams looking to implement end-to-end encryption. Of course, it may also mean being willing to stand up against government demands and threats regarding encryption, something that we don’t know if Elon is actually willing to do.
On the whole, though, even as he’s made many other mistakes, it’s worth celebrating his stated support for more encrypted messaging.
Filed Under: e2ee, elon musk, encryption, end to end encryption, signal protocol
Companies: signal, twitter
Data Privacy Matters: Facebook Expands Encryption Just After Facebook Messages (Obtained Via Search Warrant) Used To Charge Teen For Abortion
from the encrypt-all-the-things dept
In the wake of the Dobbs decision overturning Roe v. Wade, there has been plenty of attention paid to the kinds of data that companies keep on us, and how they could be exposed, including to law enforcement. Many internet companies seemed somewhat taken by surprise regarding all of this, which is a bit ridiculous, given that (1) they had plenty of time to prepare for this sort of thing, and (2) it’s not like plenty of us haven’t been warning companies about the privacy problems of having too much data.
Anyway, this week, a story broke that is re-raising many of these concerns, as it’s come out that a teenager in Nebraska has been charged with an illegal abortion, after Meta turned over messages on Facebook Messenger pursuant to a search warrant, which was approved following an affidavit from Norfolk Police Detective Ben McBride.
This is raising all sorts of alarms, for all sorts of good reasons. While many are blaming Meta, that’s somewhat misplaced. As the company notes (and as you can confirm by looking at the linked documents above), the search warrant that was sent to the company said it was an investigation into the illegal burning and burial of a stillborn infant, not something to do with abortion. Given that, it’s not difficult to see why Meta provided the information requested.
Of course, there’s a bigger question here: why Meta should even have access to that information in the first place. And, it appears that Meta agrees. Just days after this all came out, the company announced that it is (finally) testing a much more encrypted version of Messenger (something the company has been talking about for a while, but which has proven more complicated to implement). The new features include encrypted backups of messages and also making end-to-end encrypted chats the default for some users.
While the timing is almost certainly a coincidence, many observers are making the obvious connection to this story.
While the Nebraska story is horrifying in many ways, it’s also a reminder of why full end-to-end encryption is so incredibly important, and how leaving unencrypted data with third parties means your data is always, inherently at risk.
Arguably, Facebook should have encrypted its messaging years ago, but it’s been a struggle for a variety of reasons, as Casey Newton laid out in a fascinating piece. Facebook has certainly faced technical challenges and (perhaps more importantly) significant political pushback from governments and law enforcement who like the ability to snoop on everyone.
But also part of the problem is the end users themselves:
The first is that end-to-end encryption can be a pain to use. This is often the tradeoff we make in exchange for more security, of course. But average people may be less inclined to use a messaging app that requires them to set a PIN to restore old messages, or displays information about the security of their messages that they find confusing or off-putting.
The second, related challenge is that most people don’t know what end-to-end encryption is. Or, if they’ve heard of it, they might not be able to distinguish it from other, less secure forms of encryption. Gmail, among many other platforms, encrypts messages only when a message is in transit between Google’s servers and your device. This is known as transport layer security, and it offers most users good protection, but Google — or law enforcement — can still read the contents of your messages.
Meta’s user research has shown that people grow concerned when you tell them you’re adding end-to-end encryption, one employee told me, because it scares them that the company might have been reading their messages before now. Users also sometimes assume new features are added for Meta’s benefit, rather than their own — that’s one reason the company labeled the stored-message feature “secure storage,” rather than “automatic backups,” so as to emphasize security in the branding.
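Those two pain points, a recovery PIN and the difference between transport encryption and end-to-end encryption, are easier to see in a short sketch. This is a simplified illustration in Python using the cryptography package, not Meta’s actual “secure storage” design:

```python
# Simplified illustration of PIN-protected, end-to-end encrypted backups.
# Not Meta's actual design; it just shows why the server can't help a user
# who forgets their PIN.
import os
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_pin(pin: str, salt: bytes) -> bytes:
    """Stretch a short user-chosen PIN into a 32-byte encryption key."""
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000
    )
    return base64.urlsafe_b64encode(kdf.derive(pin.encode()))

salt = os.urandom(16)
backup_key = key_from_pin("483911", salt)   # PIN known only to the user
ciphertext = Fernet(backup_key).encrypt(b"chat history goes here")

# The provider stores only (salt, ciphertext). Unlike transport-only encryption
# (TLS between your device and the server), it never holds a readable copy, so
# neither the provider nor law enforcement can read it -- and neither can a
# user who forgets the PIN.
```

The usability cost Casey describes falls directly out of that design: the only way to make the backup recoverable without the PIN is to give the server a key, which is exactly what end-to-end encryption is supposed to avoid.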
It’s also interesting to note that Casey’s piece says that Meta’s user survey found that most of their users don’t think encrypting their own data is that much of a priority, as they’re just not that concerned. This does not surprise me at all — as we’ve now had decades of revealed preferences that show that, contrary to what many in the media suggest, most people don’t actually care that much about their privacy.
And, yet, as this story shows, they really should. But if we’ve learned anything over the past couple of decades, it’s that no amount of horror stories about revealed data will convince the majority of people to take proactive steps to better secure their data. So, on that front, it’s actually a positive move that Meta is pushing forward with effectively moving people over to fully encrypted messaging — hopefully in a user-friendly manner.
Data privacy does matter, and the answer has to come from making encryption widely available in a consumer-friendly form — even when that’s a really difficult challenge. Laws are not going to protect privacy. Remember, governments seem more interested in banning or breaking end-to-end encryption than encouraging it. And while perhaps the rise of data abuse post-Dobbs will expand the number of people who proactively seek out encryption and take their own data privacy more seriously, history has shown that most people will still take the most convenient way forward.
And that means it’s actually good news that Facebook is finally moving forward with efforts to make that most convenient path… still end-to-end encrypted.
Filed Under: abortion, data, data privacy, encrypted backups, encryption, end to end encryption, facebook messenger, privacy
Companies: facebook, meta
Social Responsibility Organization Says Meta’s Embrace Of Encryption Is Important For Human Rights
from the encryption-protects-human-rights dept
Encryption is under attack from all over the world. Australia already has a law on the books trying to force companies to backdoor encryption. The UK is pushing its Online Safety Bill, which would be an attack on encryption (the UK government has made it clear it wants an end to encryption). In the US, we have the EARN IT Act, whose author, Senator Richard Blumenthal, has admitted he sees it as a necessary attack on companies who “hide behind” encryption.
All over the world, politicians and law enforcement officials insist that they need to break encryption to “protect” people. This has always been false. If you want to protect people, you want them to have (and use) encryption.
Against this backdrop, we have Meta/Facebook. While the company has long supported end-to-end encryption in WhatsApp, it’s been rolling it out on the company’s other messaging apps as well. Even if part of the reason for enabling encryption is about protecting itself, getting more encryption out to more people is clearly a good thing.
And now there’s more proof of that. Business for Social Responsibility is a well-respected organization, which was asked by Meta to do a “human rights assessment” of Meta’s expanded use of end-to-end encryption. While the report was paid for by Meta, BSR’s reputation is unimpeachable. It’s not the kind of organization that throws away its reputation because a company paid for some research. The end result is well worth reading, but, in short, BSR finds that the expansion of end-to-end encryption is an important step in protecting human rights.
The paper is thorough and careful, details its methodology, and basically proves what many of us have been saying all along: if you’re pushing to end or diminish end-to-end encryption, you are attacking human rights. The key point:
Privacy and security while using online platforms should not only be the preserve of the technically savvy and those able to make proactive choices to opt into end-to-end encrypted services, but should be democratized and available for all.
The report notes that we’re living in a time of rising authoritarianism, and end-to-end encryption is crucial in protecting people fighting back against such efforts. The report is careful and nuanced, and isn’t just a one-sided “all encryption must be good” kind of thing. It does note that there are competing interests.
The reality is much more nuanced. There are privacy and security concerns on both sides, and there are many other human rights that are impacted by end-to-end encrypted messaging, both positively and negatively, and in ways that are interconnected. It would be easy, for example, to frame the encryption debate not only as “privacy vs. security” but also as “security vs. security,” because the privacy protections of encryption also protect the bodily security of vulnerable users. End-to-end encryption can make it more challenging for law enforcement agencies to access the communications of criminals, but end-to-end encryption also makes it more challenging for criminals to access the communications of law-abiding citizens.
As such, the report highlights the various tradeoffs involved in encrypting more communications, but notes:
Meta’s expansion of end-to-end encrypted messaging will directly result in the increased realization of a range of human rights, and will address many human rights risks associated with the absence of ubiquitous end-to-end encryption on messaging platforms today. The provision of end-to-end encrypted messaging by Meta directly enables the right to privacy, which in turn enables other rights such as freedom of expression, association, opinion, religion, and movement, and bodily security. By contrast, the human rights harms associated with end-to-end encrypted messaging are largely caused by individuals abusing messaging platforms in ways that harm the rights of others—often violating the service terms that they have agreed to. However, this does not mean that Meta should not address these harms; rather, Meta’s relationship to these harms can help identify the types of leverage Meta has available to address them.
The report notes that the worry that, by enabling end-to-end encryption, Meta will empower more bad actors does not seem to be supported by evidence, since bad actors already have a plethora of encrypted communications channels at their disposal:
If Meta decided not to implement end-to-end encryption, the most sophisticated bad actors would likely choose other end-to-end encrypted communications platforms. Sophisticated tech use is increasingly part of criminal tradecraft, and the percentage of criminals without the knowledge and skills to use end-to-end encryption will continue to decrease over time. For this reason, if Meta chose not to provide end-to-end encryption, this choice would likely not improve the company’s ability to help law enforcement identify the most sophisticated and motivated bad actors, who can choose to use other end-to-end encrypted messaging products.
While the report notes that things like child sexual abuse material (CSAM) are a serious issue, focusing solely on scanning everything and trying to block it is not the only (or even the best) way of addressing the issue. Someone should send this to the backers of the EARN IT Act, which is predicated on forcing more companies to scan more communications.
Content removal is just one way of addressing harms. Prevention methods are feasible in an end-to-end encrypted environment, and are essential for achieving better human rights outcomes over time. The public policy debate about end-to-end encryption often focuses heavily or exclusively on the importance of detecting and removing problematic, often illegal content from platforms, whether that be CSAM or terrorist content. Content removal is important, but it is also possible to prevent harm from occurring in end-to-end encrypted messaging through the use of behavioral signals, public platform information, user reports, and metadata to identify and interrupt problematic behavior before it occurs.
The report also, correctly, calls out how the “victims” in this debate are most often vulnerable groups — the kind of people who really could use much more access to private communications. It also notes that while some have suggested “technical mitigations” that could be used to identify illegal content in encrypted communications, these mitigations are “not technically feasible today.” This includes the much-discussed “client-side scanning” idea that Apple has toyed with.
Methods such as client-side scanning of a hash corpus, trained neural networks, and multiparty computation including partial or fully homomorphic encryption have all been suggested by some as solutions to enable messaging apps to identify, remove, and report content such as CSAM. They are often collectively referred to as “perceptual hashing” or “client-side scanning,” even though they can also be server-side. Nearly all proposed client-side scanning approaches would undermine the cryptographic integrity of end-to-end encryption, which because it is so fundamental to privacy would constitute significant, disproportionate restrictions on a range of rights, and should therefore not be pursued.
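To see what those proposals amount to in practice, here is a toy sketch of hash-corpus matching, the mechanism at the core of most client-side scanning schemes. The hash values, threshold, and matching logic are invented for illustration; real systems (PhotoDNA, PDQ, and the like) are far more involved:

```python
# Toy illustration of "hash corpus" matching, the core of most client-side
# scanning proposals. All values and the threshold are invented for this
# example; real perceptual hashing systems are far more sophisticated.
KNOWN_HASHES = {0b1011_0010_1110_0001, 0b0110_1101_0001_1010}  # hypothetical corpus
THRESHOLD = 3  # maximum Hamming distance still treated as a "match"

def hamming(a: int, b: int) -> int:
    """Count the bits that differ between two hash values."""
    return bin(a ^ b).count("1")

def matches_corpus(candidate_hash: int) -> bool:
    """Flag content whose hash is 'close enough' to any known hash."""
    return any(hamming(candidate_hash, known) <= THRESHOLD for known in KNOWN_HASHES)

# A near-duplicate is flagged on the device before the message is encrypted
# and sent -- and a fuzzy threshold means false matches are always possible.
print(matches_corpus(0b1011_0010_1110_0011))  # True: one bit away from a known hash
```

The key point is that the check runs on the user’s device, against the content itself, before encryption ever happens, which is why the report treats it as undermining the cryptographic guarantees of E2EE, and why a fuzzy matching threshold inevitably allows false positives.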
The report also notes that even if someone came up with a backdoor technology that allowed Meta to scan encrypted communications, the risks to human rights would be great, given that such technology could be repurposed in dangerous ways.
For example, if Meta starts detecting and reporting universally illegal content like CSAM, some governments are likely to exploit this capability by requiring Meta to block and report legitimate content they find objectionable, thereby infringing on the privacy and freedom of expression rights of users. It is noteworthy that even some prior proponents of homomorphic encryption have subsequently altered their perspective for this reason, concluding that their proposals would be too easily repurposed for surveillance and censorship. In addition, these solutions are not foolproof; matching errors can occur, and bad actors may take advantage of the technical vulnerabilities of these solutions to circumvent or game the system.
The report notes that there are still ways that encrypted communications can be at risk, even name-checking NSO Group’s infamous Pegasus spyware.
How about all the usual complaints from law enforcement about how greater use of encryption will destroy their ability to solve crimes? BSR says “not so fast…”
While a shift to end-to-end encryption may reduce law enforcement agency access to the content of some communications, it would be wrong to conclude that law enforcement agencies are faced with a net loss in capability overall. Trends such as the collection and analysis of significantly increased volumes of metadata, the value of behavioral signals, and the increasing availability of artificial intelligence-based solutions run counter to the suggestion that law enforcement agencies will necessarily have less insight into the activities of bad actors than they did in the past. Innovative approaches can be deployed that may deliver similar or improved outcomes for law enforcement agencies, even in the context of end-to-end encryption. However, many law enforcement entities today lack the knowledge or the resources to take advantage of these approaches and are still relying on more traditional techniques.
Still, the report does note that Meta should take responsibility in dealing with some of the second- and third-order impacts of ramping up encryption. To that end, it does suggest some “mitigation measures” Meta should explore — though noting that a decision not to implement end-to-end encryption “would also more closely connect Meta to human rights harm.” In other words, if you want to protect human rights, you should encrypt. In fact, the report is pretty bluntly direct on this point:
If Meta were to choose not to implement end-to-end encryption across its messaging platforms in the emerging era of increased surveillance, hacking, and cyberattacks, then it could be considered to be “contributing to” many adverse human rights impacts due to a failure to protect the privacy of user communications.
Finally, the paper concludes with a series of recommendations for Meta on how to “avoid, prevent, and mitigate the potential adverse human rights impacts from the expansion of end-to-end encryption, while also maximizing the beneficial impact end-to-end encryption will have on human rights.”
The report has 45 specific (detailed and thoughtful) recommendations to that end. Meta has already committed to fully implementing 34 of them, while partly implementing four more and assessing six others. Meta rejected only one recommendation, which has to do with “client-side scanning,” an approach the report itself was already nervous about (see above). That recommendation suggested that Meta “continue investigating” client-side scanning techniques to see whether a method might eventually be developed that avoids the problems detailed above, but Meta says it sees no reason to keep exploring such a technology. From Meta’s response:
As the HRIA highlights, technical experts and human rights stakeholders alike have raised significant concerns about such client-side scanning systems, including impacts on privacy, technical and security risks, and fears that governments could mandate they be used for surveillance and censorship in ways that restrict legitimate expression, opinion, and political participation that is clearly protected under international human rights law.
Meta shares these concerns. Meta believes that any form of client-side scanning that exposes information about the content of a message without the consent and control of the sender or intended recipients is fundamentally incompatible with an E2EE messaging service. This would be the case even with theoretical approaches that could maintain “cryptographic integrity” such as via a technology like homomorphic encryption—which the HRIA rightly notes is a nascent technology whose feasibility in this context is still speculative.
People who use E2EE messaging services rely on a basic premise: that only the sender and intended recipients of a message can know or infer the contents of that message. As a result, Meta does not plan to actively pursue any such client-side scanning technologies that are inconsistent with this user expectation.
We spend a lot of time criticizing Facebook/Meta around these parts, as the company often seems to trip over itself in trying to do the absolutely wrongest thing over and over again. But on this it’s doing a very important and good thing. The BSR report confirms that.
Filed Under: client-side scanning, csam, encryption, end to end encryption, human rights, messenger
Companies: facebook, instagram, meta, whatsapp