encryption – Techdirt
Apple Snuck In Code That Automatically Reboots Idle iPhones And Cops Are Not Happy About It
from the phone-cracking-now-has-a-countdown-timer dept
Detroit law enforcement officials got a bit of a shock last week when some seized iPhones rebooted themselves, despite being in airplane mode and, in one case, stored inside a Faraday bag. Panic — albeit highly localized — ensued. It was covered by Joseph Cox for 404 Media, who detailed not only the initial panic, but the subsequent responses to this unexpected development.
Law enforcement officers are warning other officials and forensic experts that iPhones which have been stored securely for forensic examination are somehow rebooting themselves, returning the devices to a state that makes them much harder to unlock, according to a law enforcement document obtained by 404 Media.
The exact reason for the reboots is unclear, but the document authors, who appear to be law enforcement officials in Detroit, Michigan, hypothesize that Apple may have introduced a new security feature in iOS 18 that tells nearby iPhones to reboot if they have been disconnected from a cellular network for some time. After being rebooted, iPhones are generally more secure against tools that aim to crack the password of and take data from the phone.
The problem (for the cops, not iPhone owners) is that the reboot takes the phone out of After First Unlock (AFU) state — a state where current phone-cracking tech can still be effective — and places it back into Before First Unlock (BFU) state, which pretty much renders phone-cracking tech entirely useless.
The speculation as to the source of these unexpected reboots was both logical and illogical. The logical assumption was that Apple had, at some point, added some new code to the latest iOS version without informing the public this new feature had been added.
The other guesses were just kind of terrible and, frankly, a bit worrying, considering their source: law enforcement professionals tasked with finding technical solutions to technical problems.
The law enforcement officials’ hypothesis is that “the iPhone devices with iOS 18.0 brought into the lab, if conditions were available, communicated with the other iPhone devices that were powered on in the vault in AFU. That communication sent a signal to devices to reboot after so much time had transpired since device activity or being off network.” They believe this could apply to iOS 18.0 devices that are not just entered as evidence, but also personal devices belonging to forensic examiners.
These are phones, not Furbies. There needs to be some avenue for phone-to-phone communication, which can’t be achieved if the phones are not connected to any networks and/or stored in Faraday cages/bags. The advisory tells investigators to “take action to isolate” iOS 18 devices to keep them from infecting (I guess?) other seized phones currently awaiting cracking.
Fortunately, a day later, most of this advisory was rendered obsolete after actual experts took a look at iOS 18’s code. Some of those experts work for Magnet Forensics, which now owns Grayshift, the developer of the GrayKey phone cracker. This was also covered by Joseph Cox for 404 Media.
In a law enforcement and forensic expert only group chat, Christopher Vance, a forensic specialist at Magnet Forensics, said “We have identified code within iOS 18 and higher that is an inactivity timer. This timer will cause devices in an AFU state to reboot to a BFU state after a set period of time which we have also identified.”
[…]
“The reboot timer is not tied to any network or charging functions and only tied to inactivity of the device since last lock [sic],” he wrote.
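The behavior Vance describes amounts to a simple local watchdog. Here is a minimal sketch of that idea in Python, purely illustrative: the class, the threshold value, and the check method are all invented for this sketch, not Apple's actual code, and Apple has documented none of the real values.

```python
import time

# Hypothetical sketch of the inactivity timer described above. The
# threshold below is an assumption for illustration only.
INACTIVITY_LIMIT_SECONDS = 3 * 24 * 60 * 60  # assumed: a few days

class InactivityWatchdog:
    def __init__(self, limit=INACTIVITY_LIMIT_SECONDS, clock=time.monotonic):
        self.limit = limit
        self.clock = clock              # injectable clock, for testing
        self.last_unlock = self.clock()

    def record_unlock(self):
        # Any successful unlock resets the countdown.
        self.last_unlock = self.clock()

    def should_reboot(self):
        # A purely local check, matching Vance's description: no network,
        # charging, or nearby-device state is consulted at all.
        return self.clock() - self.last_unlock >= self.limit

wd = InactivityWatchdog()
assert not wd.should_reboot()  # clock has barely moved since creation
```

Note the design consequence Vance highlights: because the trigger is local inactivity rather than any radio signal, a Faraday bag does nothing to stop it. Only keeping the device unlocked, or cracking it before the timer fires, would.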
It’s an undocumented feature in the latest version of iOS, apparently. And one that isn’t actually a bug dressed in “feature” clothing. This was intentional, as, presumably, was Apple’s decision to keep anyone from knowing about it until it was discovered. Apple has issued no statement confirming or denying the stealthy insertion of this feature.
Law enforcement officials and the tech contractors they work with aren’t saying much either. Everything published by 404 Media was based on screenshots taken from a law enforcement-only group chat or secured from a source in the phone forensics field. Magnet Forensics has only offered a “no comment,” along with an acknowledgement that the company is aware this problem now exists.
This means iPhones running the latest iOS version will need to be treated like time bombs by investigators. The clock will start running the moment they remove the phones from the networks they use.
This isn’t great news for cops, but it’s definitely great news for iPhone owners. And not just the small percentage who are accused criminals. Everyone benefits from this. The feature will also deter criminals from targeting iPhones, since thieves are even less likely to beat the clock with their own phone-cracking tech. And anything that makes electronic devices less attractive to thieves will also frustrate law enforcement, because both groups — to one degree or another — know the true value of a seized/stolen phone isn’t so much the phone itself as it is the wealth of information it contains.
Filed Under: device cracking, device encryption, device security, encryption, law enforcement, security
Companies: apple, grayshift, magnet forensics
Wyden: CALEA Hack Proves Dangers Of Government-Mandated Backdoors
from the backdoors-are-bad,-full-stop dept
When Congress passed the Communications Assistance for Law Enforcement Act (CALEA) in 1994, it was assured by then-FBI Director Louis Freeh that the mandated wiretap backdoors posed no security risks. Fast forward to today: following the news of a massive CALEA hack, Senator Ron Wyden is reminding the DOJ of that history, while urging the Attorney General to better protect Americans’ security, in part by no longer demanding backdoors in encryption systems.
Last week, we wrote about the bombshell story of the Chinese hacking group Salt Typhoon apparently having “months or longer” access to the mandated wiretapping system found within our phone system. We noted how this story should put an end to the idea — often pushed by lawmakers and law enforcement — that surely we can put similar “backdoors” into encrypted communications.
Senator Ron Wyden has now sent a letter to the FCC and the DOJ highlighting a bit of the history behind CALEA, the statute that mandated wiretapping of the phone lines. In particular, Wyden points out that cybersecurity professionals warned Congress at the time that CALEA would lead to massive vulnerabilities in our phone system and could put everyone’s communications at risk.
These telecommunications companies are responsible for their lax cybersecurity and their failure to secure their own systems, but the government shares much of the blame. The surveillance systems reportedly hacked were mandated by federal law, through the Communications Assistance for Law Enforcement Act (CALEA). CALEA, which was enacted in 1994 at the urging of the Federal Bureau of Investigations (FBI), forced phone companies to install wiretapping technology into then-emerging digital phone networks. In 2006, acting on a request from the FBI, the Federal Communications Commission (FCC) expanded this backdoor mandate to broadband internet companies.
During the Congressional hearings for CALEA, cybersecurity experts warned that these backdoors would be prime targets for hackers and foreign intelligence services. However, these concerns were dismissed by then-FBI Director Louis J. Freeh, who testified to Congress that experts’ fears of increased vulnerability were “unfounded and misplaced.” Congress, relying on the FBI Director’s assurances that the security risks experts warned about could be addressed, passed the law mandating backdoors. The Department of Justice (DOJ) received $1 billion in today’s dollars to provide industry grants for the development and purchase of new wiretapping technology.
The letter suggests that the DOJ should use this to start pushing back on efforts to backdoor encryption:
DOJ must stop pushing for policies that harm Americans’ privacy and security by championing surveillance backdoors in other communications technologies, like encrypted messaging apps. There is, and has long been, broad consensus among cybersecurity experts that wiretapping capabilities undermine the security of communications technology and create an irresistible target for hackers and spies. Even so, law enforcement officials, including your predecessor, as well as the current and former FBI Directors, have denied this reality, spread disinformation about non-existent secure backdoors, and sought to pressure companies to weaken the security of their products.
The letter also asks the FCC to issue rules regarding security on CALEA wiretaps. The FCC has had the ability to do this for decades, but has mostly chosen to stay out of it:
Chairwoman Rosenworcel, your agency has the authority to require strong cybersecurity defenses in these systems today. The FCC should initiate a rulemaking process to update the CALEA regulations to fully implement the system security requirements in the law. At a minimum, these updated regulations should establish baseline cybersecurity standards for telecommunications carriers, enforced by steep fines; require independent, annual third-party cybersecurity audits; require board-level cybersecurity expertise; and require senior executives annually sign certifications of compliance with the cybersecurity standards.
Overall, this is a good letter. It would be nice if the DOJ, at least, started pushing back on backdooring encryption, rather than (as it has done for years) pushing for such a security disaster.
Filed Under: backdoors, calea, doj, encryption, fcc, ron wyden, salt typhoon, wiretaps
The FBI Has Apparently Spent A Year Trying To Crack NYC Mayor Eric Adams’ Personal Phone
from the MAYOR-BEATS-FEDS dept
The spectacular collapse of the Mayor Adams administration is still in progress. Pretty much everyone with ties to the ex-cop, current mayor has either been informed of an ongoing investigation or managed to infer as much following multiple raids by the FBI.
The mayor’s handpicked police commissioner, Edward Caban, resigned shortly after these raids occurred, most likely because he was on the receiving end of one of these raids. So were First Deputy Mayor Sheena Wright, Deputy Mayor for Public Safety Phil Banks, Phil Banks’ brother, David Banks, who is the schools chancellor, and Timothy Pearson, the mayor’s adviser.
Edward Caban issued a “get out of accountability free” missive to the NYPD as he left the building. He was replaced by former FBI Special Agent Michael Donlon… whose own house was also raided by the FBI.
In the middle of all this raiding and resigning, the Mayor’s PR people came forward to say the mayor was shocked, shocked! to discover there might be some sort of corruption-laden city government with himself at the center of all of it. The issued statement wasn’t quite the exoneration it was meant to be:
“As a former member of law enforcement, the mayor has repeatedly made clear that all members of the team need to follow the law.”
You know who doesn’t have to say that kind of thing repeatedly? Someone who oversees a bunch of people who have neither expressed interest in, nor engaged in, acts that might potentially violate the law. No honest politician/advisor/political appointee/police chief needs to be “repeatedly” reminded to “follow the law.” It just comes naturally to most people.
But Mayor Adams’ people are not most people. A lot of them are also former cops. Perhaps that explains all the corruption.
Mayor Adams himself isn’t immune to this ongoing investigation. In fact, he experienced his own personal raid a year before the onslaught of recent raids that have made headlines around the nation. Now that the mayor is under indictment, court filings are starting to expose a lot of details that were deliberately kept out of public view as the FBI engaged in its investigation.
One of those details is the fact that the FBI executed a search warrant targeting multiple phones used by Mayor Adams. However, his personal phone was not among those seized. A subpoena was issued ordering the mayor to turn over his personal phone (which is alleged to be the device the mayor used to “communicate about the conduct described in this indictment”). Mayor Adams complied. Sort of. He gave the FBI his phone. What he didn’t give the FBI was a way to see the phone’s contents, according to this report by Gaby Del Valle for The Verge.
When Adams turned in his personal cellphone the following day, charging documents say, he said he had changed the password a day prior — after learning about the investigation — and couldn’t remember it.
Sure looks like an attempt to withhold and/or destroy evidence. The fact that this happened the day after the FBI seized the mayor’s other phones isn’t going to work out well for him in court. His excuse — that he couldn’t remember it — is no more believable than his office’s assertion that everyone engaged in legal behavior because they were repeatedly told not to violate the law.
But both of those statements are far more believable than the mayor’s explanation of the post-FBI visit password changing:
Adams told investigators he changed the password “to prevent members of his staff from inadvertently or intentionally deleting the contents of his phone,” the indictment alleges.
LOL
Keep in mind, this was the mayor’s personal phone. Pretending staffers had routine and easy access to it or its contents beggars belief. And the simplest way to prevent staffers from “accidentally” deleting evidence of alleged criminal actions would be to maintain possession of the phone on your person or throw it in a safe or lock it in a desk drawer or do literally anything other than change a password and immediately “forget” what it was.
Again, none of this is going to reflect well on the mayor as he faces these charges in court. Any judge will see it the way the rest of us see it: a deliberate attempt to thwart a federal investigation.
Even so, let’s hope this doesn’t result in any stupid precedent motivated by the mayor’s apparently willful attempt to obstruct this investigation. There’s some potential here for rulings that might negatively affect Fifth Amendment rights and/or give the feds leverage to agitate for compelled assistance from phone manufacturers.
And there’s a chance it might do any of these things. The FBI has had the phone for a long time. And it still hasn’t managed to access its contents. The FBI insists (without supporting evidence, obviously) that this is a BIG DEAL that might BREAK THE CASE.
During a federal court hearing, prosecutor Hagan Scotten said the FBI’s inability to get into Adams’ phone is a “significant wild card,” according to a report from the New York Post.
I want to believe that might be true. But only because I want the feds to deliver a ton of incriminating evidence that takes down Mayor Adams and anyone else in his administration who engaged in corruption. On the other hand, the FBI always claims any phone it can’t get into must be loaded with incriminating evidence capable of producing slam-dunk prosecutions. The FBI’s anti-encryption agitation relies on its fervent belief that the best and most incriminating evidence is always found on encrypted devices, therefore courts should force companies (or accused persons) to decrypt the contents so special agents can open and close investigations without ever leaving their desks.
I’m definitely here for the fallout. I’m guessing these raids will lead to a string of resignations, a cooperating witness or two, and a few wrist slaps for ex-law enforcement officials. But if someone’s going to burn for this, it should be the person at the top of the city food chain. And as much as I’d like to see that happen, I’d much rather it was accomplished without collateral damage to constitutional rights or the security and privacy provided by strong encryption.
Filed Under: 5th amendment, doj, encryption, eric adams, fbi, nyc, phone searches
Chinese Access To AT&T/Verizon Wiretap System Shows Why We Cannot Backdoor Encryption
from the backdoors-can-be-opened-by-spies-too dept
Creating surveillance backdoors for law enforcement is just asking for trouble. They inevitably become targets for hackers and foreign adversaries. Case in point: the US just discovered its wiretapping system has been compromised for who knows how long. This should end the encryption backdoor debate once and for all.
The law enforcement world has been pushing for backdoors to encryption for quite some time now, using their preferred term for it: “lawful access.” Whenever experts point out that backdooring encryption breaks the encryption entirely and makes everyone less safe and less secure, you’ll often hear law enforcement say that it’s really no different than wiretapping phones, and note that that hasn’t been a problem.
Leaving aside the fact that it’s not even that much like wiretapping phones, this story should be thrown back in the faces of all the law enforcement folks who believe that backdooring “lawful access” into encryption is nothing to worry about. Chinese hackers have apparently had access to the major US wiretapping system “for months or longer.”
A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers, potentially accessing information from systems the federal government uses for court-authorized network wiretapping requests.
For months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data, according to people familiar with the matter, which amounts to a major national security risk. The attackers also had access to other tranches of more generic internet traffic, they said.
According to the reporting, the hackers, known as “Salt Typhoon,” a known Chinese state-sponsored hacking effort, were able to breach the networks of telco giants Verizon and AT&T.
The Wall Street Journal says that officials are freaking out about this, saying that the “widespread compromise is considered a potentially catastrophic security breach.”
Here’s the thing: whenever you set up a system that allows law enforcement to spy on private communications, it’s going to become a massive target for all sorts of sophisticated players, from organized crime to nation states. So, this shouldn’t be a huge surprise.
But it should also make it clear why backdoors to encryption should never, ever be considered a rational decision. Supporters say it’s necessary for law enforcement to get access to certain information, but as we keep seeing, law enforcement has more ways than ever to get access to all sorts of information useful for solving crimes.
Putting backdoors into encryption, though, makes us all less safe. It opens up so many private communications to the risk of hackers getting in and accessing them.
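The core objection can be made concrete with a toy sketch. The throwaway XOR stream construction below is for illustration only, not a real cryptosystem, but it shows the part that no mandate can legislate away: a “lawful access” copy of a key is mathematically indistinguishable from a stolen one.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Throwaway stream cipher for illustration only (SHA-256 in counter
    # mode). Never use this construction to protect real secrets.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# A user's message, encrypted under a key only they should hold.
user_key = os.urandom(32)
ciphertext = keystream_xor(user_key, b"meet at the usual place")

# A "lawful access" mandate means a copy of that key exists elsewhere.
escrow_copy = user_key

# Whoever obtains the escrowed copy -- via court order, a rogue insider,
# or a Salt Typhoon-style intruder -- decrypts exactly the same way the
# legitimate holder does. The math cannot tell them apart.
assert keystream_xor(escrow_copy, ciphertext) == b"meet at the usual place"
```

That last line is the whole debate in miniature: the cipher has no concept of who is holding the key or why, so the escrowed copy is simply one more thing to breach.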
And again, for all the times that law enforcement has argued for backdoors to encryption being just like wiretaps, it seems like this paragraph should destroy that argument forever.
The surveillance systems believed to be at issue are used to cooperate with requests for domestic information related to criminal and national security investigations. Under federal law, telecommunications and broadband companies must allow authorities to intercept electronic information pursuant to a court order. It couldn’t be determined if systems that support foreign intelligence surveillance were also vulnerable in the breach.
It’s also worth highlighting how this breach was only just discovered and has been in place for months “or longer” (meaning years, I assume). Can we not learn from this, and decide not to make encryption systems vulnerable to such an attack by effectively granting a backdoor that hackers will figure out a way to get into?
On an unrelated note, for all the talk of how TikTok is a “threat from China,” it seems like maybe we should have been more focused on stopping these kinds of actual hacks?
Filed Under: breach, china, encryption, lawful access, security, wiretaps
Companies: at&t, verizon
EU Commission’s Anti-Encryption Plans On The Ropes (Again) After Rejection By The Dutch Gov’t
from the maybe-don't-send-a-thug-out-to-do-diplomacy dept
The EU Commission is the definition of insanity. It has tried for years to convince all EU members the best way to fight crime is to undermine the security and privacy of millions of EU residents. And, for years, it has failed to make an argument capable of convincing a majority of the 27 European Union countries that this especially drastic, incredibly dangerous proposal is necessary.
Those pushing for encryption backdoors (that they dishonestly won’t call encryption backdoors) have leveraged all the usual hot buttons: terrorism, drug trafficking, national security, child sexual abuse material. But once anyone reads past the introductory hysteria, they tend to see it for what it is: a way to create massive government-mandated security flaws that would negatively affect their constituents and, ironically enough, their own national security.
The Commission keeps pushing, though. And it has no reason to stop. After all, it’s not playing with its own money and it rarely, if ever, seems to actually care what most Europeans think about this proposal. But to get it passed it does need a majority. So far, it hasn’t even managed to talk most members of the EU Parliament into giving broken-by-mandate encryption a thumbs up, much less at least 14 of the 27 governments that make up the EU Council.
The desperation of the would-be encryption banners is evident. If the EU Commission thought it had the upper hand in anti-encryption negotiations, it never would have sent out the EU’s Donald Trump to convince fence-sitters to side with the encryption breakers. This is from activist group EDRi’s (European Digital Rights) report on the latest failure of the EU Commission to secure some much-needed support for its “chat control” (a.k.a. client-side scanning) efforts.
In summer 2024, the government of Hungary became the fifth country to be given the unenviable task of attempting to broker a common position of the Council of the EU on this ill-fated law. The European Commission has long been trying to convince Member State governments that the proposed Regulation is legally sound (it isn’t), would protect encryption (it wouldn’t) and that reliable technologies already exist (they don’t).
[…]
According to Politico and to local reports, notorious Hungarian Prime Minister, Viktor Orbán, pulled out all the stops to try and convince the Netherlands to support the latest text. And in the last few days, he came worryingly close to succeeding.
Orban, last seen at Techdirt manipulating emergency powers rolled out during the pandemic to arrest people who called him things like “dear dictator” and “cruel tyrant” on social media, is one of an unfortunate number of European leaders to hold “conservative” views. (You know which ones.) He’s a nationalist, which is a polite way of calling him a bigot. And, of course, our own would-be “dear dictator” thinks he’s one of the greatest guys in Europe.
Here’s why Trump thinks he’s so great. It’s also why Orban might think forcing companies to break end-to-end encryption might be a good idea.
Orbán, who has turned into a hero of Trump’s followers and other conservative populists, is known for his restrictions on immigration and LGBTQ+ rights. He’s also cracked down on the press and judiciary in his country while maintaining a close relationship with Russia.
You can’t make human rights violation omelets without breaking a few encryptions, as they say. There are several self-serving reasons why Orban would support the notion of “chat control.” And very few of them have anything to do with fighting crime, combating terrorism, or stopping the spread of CSAM.
And that’s exactly why he should have been the last choice to soft-sell continent-wide undermining of encryption. But, as EDRi notes, it almost worked in the Netherlands. If the near-success of Orban’s sales tactics is surprising, it’s not nearly as surprising as the entities that showed up to push the Dutch government away from agreeing to the Commission’s “chat control” proposal.
On 1 October, following significant mobilisation from civil society, including EDRi member Bits of Freedom and national opposition politicians, the news broke that the Netherlands would officially abstain from the proposal. This is a welcome development, because it means that Hungary does not have a majority to move forward with their proposal, instead having to remove the CSA Regulation from an upcoming Council agenda.
One of the most interesting parts of the Netherlands’ will-they-won’t-they saga, however, is the fact that one decisive element seems to be an opinion of the national security service. Dutch spooks warned their government that the latest proposal would threaten the cybersecurity of the country, putting national security at risk. This is a warning that should resonate with other countries, too.
When the people who would have the most to gain from pervasive disruption of encrypted services tell you there’s also a downside, that means something. It’s one thing for rights groups to say it. It’s quite another when the spies say the negatives would outweigh the positives.
While one might think that the last ditch effort that briefly converted an aspiring autocrat into an EU salesperson might signal the end of the line for “chat control”/client-side scanning/encryption bans, hope seems to spring eternal at the Commission. A new Commission will be in place by the end of the year, and we can expect several of the new members will be just as desirous of breaking encryption as their predecessors, no matter how many times (and by how many countries) they’ve been told “no.”
Filed Under: chat control, client side scanning, encryption, encryption backdoors, encryption bans, eu commission, eu council
Australia’s Security Chief Says It’s Time To Start Forcing Companies To Break Chat Room Encryption
from the start-reviewing-your-exit-plans,-service-providers dept
More than a half-decade ago, the Australian government gave itself more powers. These new powers allowed the government to compel decryption — something far easier said than done, especially if existing encryption was expected to still protect everyone else but the government’s targets.
Shortly after the law was passed, Australia’s federal law enforcement and national security agencies started wielding it against service providers. The first wave was noticeable, but subsequent efforts have flown under the radar for the most part, whether due to extreme amounts of secrecy or the new powers not being quite as useful as the Australian government hoped.
Three years after the enactment of the law, the powers and their side effects were reviewed by federal overseers. The review came to a couple of unsurprising conclusions. First, the joint committee noted the program suffered from a lack of rigorous oversight, which is pretty ironic when the statement is being made by one of the program’s oversight bodies. Second, it said the law was great and had no downsides, a conclusion it reached by… simply stating there were no downsides.
“Agencies have made the case that these powers remain necessary to combat serious national security threats, and some of the worst fears held by industry at the time of passage have not been realised,” committee chair and Liberal Senator James Paterson said.
Really refreshing to see a government body declare an unprecedented expansion of powers to be a net benefit for all mankind. What’s hilarious is that there are actually downsides, but since not every outcome has been negative, the new powers are somehow an unmitigated success. Note that the committee chair did not say “none” of the “worst fears” stated by the industry in opposition to these powers have come to pass. Senator Paterson said only that “some” have “not been realised,” which suggests others have been realised.
Apparently, getting its way isn’t sitting right with the current head of the Australian Security Intelligence Organisation (ASIO). Companies must be made to comply more often and more quickly. As Sarah Ferguson reports for Australia’s ABC News, ASIO believes it’s time to fully flex powers that have apparently only been partially flexed previously.
ASIO head, Mike Burgess, says he may soon use powers to compel tech companies to cooperate with warrants and unlock encrypted chats to aid in national security investigations.
“If you actually break the law or you’re a threat to security, you lose your right to privacy, and what I’ve been asking for those companies that build messaging apps (is to) respond to the lawful requests. So when I have a warrant you give me access to that communication,” Mr Burgess told 7.30.
Mr Burgess said ASIO is seeking targeted access to chat rooms hosted on encrypted platforms – which are increasingly used by bad actors to hide their communications.
“We’re not asking for mass surveillance. We need their cooperation,” he said.
“If they don’t cooperate, then there’s a private conversation I need to have with government about what we accept or what I need to do my job more effectively.”
This goes beyond simply breaking encryption to give intelligence and law enforcement agencies access to communications at rest. This is ASIO amping things up to demand companies provide it access to ongoing communications in the form of message groups or chat rooms.
Obviously, this creates a much larger problem for non-targets of investigations. It’s one thing to give the government access to a single user’s communications. It’s quite another to break encryption on chat rooms or multi-person messaging groups, which means exposing everyone in these conversations to surveillance, even if they’re not actually targets of investigations.
On top of that, this means stripping encryption from entire communications platforms. It’s not like service providers can just bypass the encryption safeguarding one set of communications. To allow ASIO the access its boss is demanding, the entire platform must be deprived of its security.
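The scope problem is easy to see in a toy model of a shared "group key." This is a deliberate simplification (the throwaway cipher and the single-key-per-room design are inventions for this sketch; real group messaging protocols are far more elaborate), but the asymmetry it illustrates is the same: a disclosed key cannot be scoped to one participant.

```python
import hashlib
import os

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Throwaway cipher for illustration only; not a real cryptosystem.
    stream = b""
    ctr = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, stream))

group_key = os.urandom(32)  # simplification: one shared key per room

chat = {
    "alice": xor_cipher(group_key, b"running late"),
    "bob":   xor_cipher(group_key, b"picking up groceries"),
    "dave":  xor_cipher(group_key, b"the message under warrant"),
}
target = "dave"  # the only person actually under investigation

# Handing over the key to read the target's messages necessarily
# decrypts everyone else's too: the key has no idea who sent what.
plaintexts = {who: xor_cipher(group_key, ct) for who, ct in chat.items()}
assert plaintexts["alice"] == b"running late"
assert plaintexts["bob"] == b"picking up groceries"
```

There is no "give me only dave's plaintext" operation here, and no amount of careful warrant drafting creates one. Everyone in the room is swept in.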
And, once again, we have a supposed expert in the fields of law enforcement and surveillance completely misunderstanding what’s at stake and what he’s asking for. “Targeted access” is a meaningless term when doing so means depriving every user of these services of the protection encryption provides.
The more Mike Burgess says, the stupider he looks.
“I understand there are people who really need [encryption] in some countries, but in this country, we’re subject to the rule of law, and if you’re doing nothing wrong, you’ve got privacy because no one’s looking at it,” Mr Burgess said.
Nothing about this statement makes any sense. Encryption is acceptable for people in other countries? The rule of law concept is only present in Australia? Australians aren’t deserving of the security and privacy communication encryption provides?
And please do not give us another helping of this horseshit “nothing wrong/nothing to fear” platitude. If Burgess is given the access he wants, people who are “doing nothing wrong” can still have their privacy invaded if they happen to participate in chats/messages with people the government is targeting. Once the encryption is broken, it’s broken. Everyone’s communications can be seen, even if the government is only interested in a few chat room members. Worse, once the platform itself is compromised, people who aren’t even participating in chats/messages with government targets can be surveilled.
Then there’s this, in which Burgess insists unicorns not only exist, but that tech companies are perfectly capable of generating all the unicorns the Australian government demands.
Mr Burgess says tech companies could design apps in a way that allows law enforcement and security agencies access when they request it without compromising the integrity of encryption.
Wrong! It simply does not work like that. There’s no magic switch that can be built in that the government can flip on and off when it wants to intercept or view communications. Either the encryption is solid or it’s broken. At best, the encryption is compromised, which means anyone with the means or willingness to do so can eavesdrop on communications or intercept/exfiltrate sensitive data. At worst, it means no one is protected from anything because encryption is simply no longer an option.
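The "it's all or nothing" point can be made concrete with a toy sketch. This is deliberately not real cryptography (a repeating-key XOR stands in for a proper cipher), but the structural lesson holds: in an end-to-end scheme the relay only ever handles ciphertext, and any "lawful access" escrow copy of the key is just another key that decrypts *everything*, for whoever holds it:

```python
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only -- NOT real cryptography.
    # Encryption and decryption are the same operation.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Alice and Bob share a key the platform never sees (end-to-end).
shared_key = secrets.token_bytes(32)

plaintext = b"meet at noon"
ciphertext = keystream_xor(shared_key, plaintext)

# The relay (the platform) only ever handles ciphertext...
relay_sees = ciphertext
assert relay_sees != plaintext

# ...and Bob recovers the message with the shared key.
recovered = keystream_xor(shared_key, relay_sees)
assert recovered == plaintext

# A "lawful access" escrow copy of the key is just another key:
# whoever holds it -- agency, insider, or thief -- reads every message,
# not just the ones belonging to a surveillance target.
escrow_copy = shared_key
assert keystream_xor(escrow_copy, ciphertext) == plaintext
```

There is no way to write a version of this where `escrow_copy` works only for "the good guys" and only on request; once the key material exists outside the endpoints, the encryption is compromised for all parties to the conversation.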
These are dangerous people. They’re the worst combination of powerful and stupid. And it doesn’t even matter to them that they’re wrong. They’re on the side of the “rule of law” and any incremental gains in law enforcement effectiveness will always outweigh the critical collateral damage these mandates will generate. The theoretical security of the nation is more important than the quantifiable security encryption provides to millions of Australians. No sacrifice is too great… just so long as it’s not the government making the sacrifice.
Filed Under: asio, australia, compelled decryption, encryption, encryption backdoors, mike burgess, national security
Durov’s Arrest Details Released, Leaving More Questions Than Answers
from the still-concerning dept
Is the arrest of Pavel Durov, founder of Telegram, a justified move to combat illegal activities, or is it a case of dangerous overreach that threatens privacy and free speech online? We had hoped that when French law enforcement released the details of the charges we’d have a better picture of what happened. Instead, we’re actually just left with more questions and concerns.
Earlier today we wrote about the arrest and how it already raised a lot of questions that didn’t have easy answers. Soon after that post went out, the Tribunal Judiciaire de Paris released a press release with some more details about the investigation (in both French and English). All it does is leave most of the questions open, which might suggest they don’t have very good answers.
First, the report notes “the context of the judicial investigation” which may be different from what he is eventually charged with, though the issues are listed as “charges.”
I would bucket the list of charges into four categories, each of which raises concerns. If I had to put these in order of greatest concern to least, it would be as follows:
- Stuff about encryption. The last three charges are all variations on “providing a cryptology service/tool” without some sort of “prior declaration” or “certified declaration.” Apparently, France (like some other countries) has certain import/export controls on encryption. It appears they’re accusing Durov of violating those by not going through the official registration process. But, here, it’s hard not to see that as totally pretextual: an excuse to arrest Durov over other stuff they don’t like him doing.
- “Complicity” around a failure to moderate illegal materials. There are a number of charges around this. Complicity to “enable illegal transactions” for “possessing” and “distributing” CSAM, for selling illegal drugs, hacking tools, and organized fraud. But what is the standard for “complicity” here? This is where it gets worrisome. If it’s just a failure to proactively moderate, that seems very problematic. If it’s ignoring direct reports of illegal behavior, then it may be understandable. If it’s more directly and knowingly assisting criminal behavior, then things get more serious. But the lack of details here makes me worry it’s one of the former options.
- Refusal to cooperate with law enforcement demands for info: This follows on from my final point in number two. There’s a suggestion in the charges (the second one) that Telegram potentially ignored demands from law enforcement. It says there was a “refusal to communicate, at the request of competent authorities, information or documents necessary for carrying out and operating interceptions allowed by law.” This could be about encryption, and a refusal to provide info they didn’t have, or about not putting in a backdoor. If it’s either of those, that would be very concerning. However, if it’s just “they didn’t respond to lawful subpoenas/warrants/etc.” that… could be something that’s more legitimate.
- Finally, money laundering. Again, this one is a bit unclear, but it says “laundering of the proceeds derived from organized group’s offences and crimes.” It’s difficult to know how serious any of this is, as that could represent something legitimate, or it could be French law enforcement saying “and they profited off all of this!” We’ve seen charges in other contexts where the laundering claims are kind of thrown in. Details could really matter here.
In the end, though, a lot of this does seem potentially very problematic. So far, there’s been no revelation of anything that makes me say “oh, well, that seems obviously illegal.” A lot of the things listed in the charge sheet are things that lots of websites and communications providers could be said to have done themselves, though perhaps to a different degree.
So we still don’t really have enough details to know if this is a ridiculous arrest, but it does seem to be trending towards that so far. Yes, some will argue that Durov somehow “deserves” this for hosting bad content, but it’s way more complicated than that.
I know from the report that Stanford put out earlier this year that Telegram does not report CSAM to NCMEC at all. That is very stupid. I would imagine Telegram would argue that as a non-US company it doesn’t have to abide by such laws. These charges are in France rather than the US, but it still seems bad that the company does not report any CSAM to the generally agreed-upon organization that handles such reports, and to which companies operating in the US have a legal requirement to report.
But, again, there are serious questions about where you draw these lines. CSAM is content that is outright illegal. But some other stuff may just be material that some people dislike. If the investigation is focused just on the outright illegal content that’s one thing. If it’s not, then this starts to look worse.
On top of that, as always, are the intermediary liability questions, where the question should be how much responsibility a platform has for its users’ use of the system. The list of “complicity” in various bad things worries me because every platform has some element of that kind of content going on, in part because it’s impossible to stop entirely.
And, finally, as I mentioned earlier today, it still feels like many of these issues would normally be worthy of a civil procedure, perhaps by the EU, rather than a criminal procedure by a local court in France.
So in the end, while it’s useful to see the details of this investigation, and it makes me lean ever so slightly in the direction of thinking these potential charges go too far, we’re still really missing many of the details. Nothing released today has calmed the concerns that this is overreach, but nothing has made it clear that it definitely is overreach either.
Filed Under: complicity, content moderation, csam, encryption, france, law enforcement, pavel durov
Companies: telegram
FBI Back To Complaining About Encryption Making It Difficult To Scrape All Data From A Dead Person’s Phone
from the can-it,-chris dept
It’s 2016 all over again. The FBI can’t get everything it wants from a dead person’s phone, so it has decided to start revving up its anti-encryption engine. The DOJ took Apple to court in hopes of securing precedent compelling tech companies to crack encrypted devices for it after it recovered the San Bernardino shooter’s iPhone. That attempt failed. But that hasn’t stopped the complaining.
Before we get into the latest bout of whining to Congress, let’s take a look back at another date: May 29, 2018. That’s the date the FBI acknowledged it had seriously overstated the number of uncracked encrypted devices in its possession. That was the same day it promised to deliver an updated, far more accurate tally of these devices. It has been 2,246 days since that promise was made — 6 years, 1 month, and 23 days. That number still has not been updated.
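For what it's worth, the day count above checks out with stdlib date arithmetic. The July 22, 2024 reference date below is an assumption back-derived from the article's own "2,246 days" figure, not something the article states:

```python
from datetime import date

# The day the FBI promised a corrected tally of encrypted devices.
promise_made = date(2018, 5, 29)

# Assumed as-of date, inferred from the article's 2,246-day figure.
as_of = date(2024, 7, 22)

elapsed_days = (as_of - promise_made).days  # 2246
```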
However, those six years have been filled with FBI Director Chris Wray’s intermittent bad faith attacks on encryption. If nothing else, no one should allow the FBI to push anti-encryption arguments until it hands over the updated number of devices so everyone has the same facts available to gauge exactly how big the “problem” is.
But the latest round of complaints sounds like the ones made in 2016. Even though the FBI was able to break into the Trump rally shooter’s device thanks to unreleased software provided by Cellebrite, Chris Wray is telling Congress that being able to break into a phone simply isn’t enough. All encryption must go, not just that protecting the device itself.
Wray said the bureau is facing challenges with getting into “encrypted messaging applications” used by Thomas Matthew Crooks, who was killed by a Secret Service counter-sniper team after firing at least eight shots toward the stage at the July 13 rally in Butler, Pennsylvania. Reports said officials have identified at least three such accounts.
Speaking to the House Judiciary Committee, Wray said that in some cases, the FBI is waiting on “legal process returns” to get into the accounts. He did not specify what companies or services host them.
Wray is presenting the reality of all criminal investigations like it’s evidence that the criminals are constantly one step ahead of the feds, even when said criminal is dead and neither facing prosecution nor capable of committing more crime. It’s not a great test case for anti-encryption legal battles or legislation, much like the last time the FBI made a lot of noise about not being able to get into a dead person’s phone.
But that’s not all Wray said. This part is even worse and a whole lot stupider.
“This has unfortunately become very commonplace,” he said. “It’s a real challenge not just for the FBI but for state and local law enforcement all over the country.” Even with access to a user’s phone, the end-to-end encryption used in many apps would make messages and other data inaccessible even to the app developer.
“Some places we’ve been able to look, some places we will be able to look, some places we may never be able to see, no matter how good our legal process is,” Wray said.
First off, there’s no way of telling how “commonplace” this is because, as noted above, the FBI’s encrypted device numbers have been wrong for more than six years and have yet to be corrected. We can assume it’s more commonplace now that more services are offering end-to-end encryption, but we should not automatically assume it’s enough to be referred to casually as “commonplace” and a persistent threat to successful criminal investigations. If it were, one would expect to hear more about it from other law enforcement officials. Instead, most of what we hear about the supposed evil of encryption has come from the mouths of consecutive FBI directors.
As for the second paragraph, that’s something that’s always been true about criminal investigations, dating back to long before devices or device encryption existed. No investigation will ever uncover all existing evidence. It’s an impossibility. Some evidence will be destroyed. Some evidence simply won’t be where investigators are looking for it. And some evidence is ethereal, gone as soon as it’s uttered via untapped phone calls or in-person conversations.
Pretending that this reality of criminal investigations is somehow new is intellectual dishonesty. Claiming that it’s somehow more common due to encrypted devices and communication services is meaningless if the FBI’s not willing to give the public — or at least its congressional oversight — accurate information detailing just how often the FBI runs into this particular problem.
Until the FBI can be honest about the problem its directors claim is omnipresent, its anti-encryption agitation should be ignored. And it should certainly be ignored when the FBI is doing nothing more than complaining about a lack of access to a dead person’s phone contents and communications.
Filed Under: chris wray, encryption, fbi, lawful access
Companies: cellebrite
Cellebrite Sent The FBI Unreleased Software To Crack The Trump Shooter’s Phone
from the to-what-end-though dept
If nothing else, it appears the FBI has decided it’s not worth fighting the “compelled assistance” battle again. Several years ago, the DOJ went to court in hopes of forcing Apple to decrypt a phone belonging to the (dead) San Bernardino shooter.
It didn’t go well for the DOJ or the FBI, no matter how much then-FBI director James Comey bitched about it. The phone was eventually unlocked. And Comey has since been replaced, but his successor (Chris Wray) is just as dumb, dishonest, and histrionic about device encryption.
Fortunately, we haven’t heard anything from Chris Wray about the latest extremely minimal and temporary hiccup the FBI encountered while breaking into the phone owned by the person who tried to kill Donald Trump but killed an innocent person instead.
After a couple of days of failure, the FBI apparently reached out to one of its preferred vendors. And, as Bloomberg reports, that company — the Israel-based Cellebrite — apparently had a solution.
The agents called Cellebrite’s federal team, which liaises with law enforcement and government agencies, according to the people.
Within hours, Cellebrite transferred to the FBI in Quantico, Virginia, additional technical support and new software that was still being developed. The details about the unsuccessful initial attempt to access the phone, and the unreleased software, haven’t been previously reported.
Once the FBI had the Cellebrite software update, unlocking the phone took 40 minutes, according to reporting in the Washington Post, which first detailed the FBI’s use of Cellebrite.
So much for “going dark.” This reporting follows a report on leaked Cellebrite documents by Joseph Cox for 404 Media that detailed Cellebrite’s capabilities, at least as of April 2024. According to those documents, post-2020 iPhones running the latest version of iOS were beyond the cellphone-cracking powers of Cellebrite. It wasn’t quite as clear-cut for Android phones, although it did appear Google Pixels were less crackable than others.
According to the Bloomberg report, the shooter’s phone was a “newer Samsung model,” which doesn’t add much to the “what phones can be cracked” matrix. While I’m sure the FBI appreciated the assist from Cellebrite, it’s unclear what they hope to learn from cracking the dead shooter’s phone.
What they have learned isn’t doing much to assure the public that law enforcement is at the top of its game, especially when it comes to the Secret Service. What has been gleaned from the phone extraction are unsettling details like the shooter’s drone flight over the rally grounds prior to the shooting. It also hasn’t exactly given Trump fans the satisfaction they so sorely want: the shooter was a registered Republican, albeit one who recently donated an extremely small amount to a progressive cause.
What is clear is that law enforcement isn’t out of options when it comes to encrypted devices. And that has always been the case, no matter how many might proclaim criminals have the upper hand, despite not being in control of Nasdaq-listed companies (which Cellebrite is). Phones can be cracked, even when simply beating a password out of someone is no longer an option.
As for the rest of this sad state of affairs, I won’t say much more than this: the party encouraging the most violence was the recipient of it here. But the greater problem isn’t the rhetoric so much as it is the rhetorical options, so to speak. The Secret Service, working in conjunction with law enforcement, appears to have been looking past this game to the Republican National Convention, to use a sportsball analogy. But even if everyone had their shit locked down tight, there’s simply no way to completely prevent the act of violence witnessed during this Trump rally.
As usual, The Onion has summed it up best:
Investigation Finds Secret Service Failed To Account For Nation’s 393 Million Guns
And The Onion knows where we’re headed from here because it will always fail to see the forest for the 393 million trees:
WASHINGTON—In response to the attempted assassination of former President Donald Trump at a rally in Pennsylvania over the weekend, Congress moved quickly to pass legislation Monday that bans the civilian use of roofs. “As our country continues to reel from this horrific event, we in Congress have taken action by enacting a nationwide ban on all roofs, roof terraces, and balconies,” said House Speaker Mike Johnson, explaining that the would-be assassin, who shot at and nearly killed Trump from atop a building 430 feet away, highlighted just how lax U.S. laws had been in addressing the threat of widespread roof access.
In the end, the FBI got what it wanted. But what did it actually learn from this experience? So far, there are no answers. And no matter how much agents root around in the shooter’s phone, they’ll never find a satisfactory answer. All it got was the assurance that if it asks nicely (or desperately!), it will get the help it wants, even if it’s not anything it really needs.
Filed Under: cellphone cracking, donald trump, encryption, fbi
Companies: cellebrite, samsung
Leaked Docs Show Cellebrite Is Still Trailing Apple In The Device Security Arms Race
from the still-mostly-secure-on-the-home-front dept
Good news for phone owners. Perhaps a little less great for law enforcement, which presumably still doesn’t have the capability to crack the latest cell phones.
Not that it’s all bad news for law enforcement. Whether or not compelled password production is a constitutional violation is still an open question. Those whose phones are secured with biometrics are definitely less protected by the Constitution than those using passcodes. And, despite all the crying you might hear from officials (like, say, consecutive FBI directors), law enforcement still has plenty of options to obtain evidence that don’t involve cracking encrypted devices but rather serving warrants to service providers to obtain stuff stored in the cloud.
Cellebrite has been selling its phone-cracking tech for several years now. But it’s stuck in a one step forward, one step back loop as device makers patch exploitable flaws, including the ones these phone-cracking tools rely on.
Joseph Cox of 404 Media managed to obtain some very recent documents that apparently show the limitations of Cellebrite’s tech. The documents were leaked in April 2024, which doesn’t necessarily mean they document Cellebrite’s latest software version, but they do at least provide a fairly up-to-date snapshot of the tech’s capabilities.
For all locked iPhones able to run 17.4 or newer, the Cellebrite document says “In Research,” meaning they cannot necessarily be unlocked with Cellebrite’s tools. For previous iterations of iOS 17, stretching from 17.1 to 17.3.1, Cellebrite says it does support the iPhone XR and iPhone 11 series. Specifically, the document says Cellebrite recently added support to those models for its Supersonic BF [brute force] capability, which claims to gain access to phones quickly. But for the iPhone 12 and up running those operating systems, Cellebrite says support is “Coming soon.”
As Cox notes in his article, this means Cellebrite is capable of cracking iPhones released through the first part of 2020, but possibly only if they haven’t been updated to the latest iOS version. That’s still a significant number of phones, which means staying ahead of Cellebrite possibly means having to be an early adopter or, at the very least, ensuring the latest updates have been applied to your phone.
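The support matrix described in the leaked document can be sketched as a small lookup. This is a deliberately simplified encoding of the April 2024 snapshot quoted above; the function name and the 2019 model-year cutoff (iPhone XR/11 era vs. iPhone 12 and up) are illustrative assumptions, and real coverage varies by exact model and build:

```python
def cellebrite_status(model_year: int, ios_version: tuple) -> str:
    """Rough classification of a locked iPhone per the leaked April 2024 matrix.

    model_year: year the iPhone model was released (e.g. 2019 for iPhone 11).
    ios_version: version as a tuple, e.g. (17, 3, 1).
    """
    if ios_version >= (17, 4):
        # All phones able to run 17.4+ were listed as "In Research."
        return "In Research"
    if (17, 0) <= ios_version <= (17, 3, 1):
        # Supersonic BF (brute force) support covered the XR/11 era;
        # iPhone 12 and newer on these builds were "Coming soon."
        return "Supported" if model_year <= 2019 else "Coming soon"
    # Older iOS builds generally fell within existing support.
    return "Likely supported"
```

On this reading, the practical takeaway matches the article's: applying the latest iOS update moves a device back into the "In Research" bucket, which is why staying patched (or buying newer hardware) is what keeps a phone ahead of the cracking tools.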
The same can’t be said for Android, something pretty much everyone has already known. Not only are carriers hit-and-miss when it comes to regular Android updates, the wide variety of manufacturers and models means it’s often difficult to tell which Android model is more secure (or, more accurately, less compromised). The rule of thumb, though, is that newer is better, at least in terms of crack-thwarting.
The second document shows that Cellebrite does not have blanket coverage of locked Android devices either, although it covers most of those listed. Cellebrite cannot, for example, brute force a Google Pixel 6, 7, or 8 that has been turned off to get the users’ data, according to the document. The most recent version of Android at the time of the Cellebrite documents was Android 14, released October 2023. The Pixel 6 was released in 2021.
Cellebrite has confirmed the authenticity of the leaked documents but told 404 Media that they do not completely reflect its current line of products or their capabilities. That statement should be taken with at least as large a grain of salt as the documents themselves. If these documents accurately portray Cellebrite’s offerings, one would expect the company to claim they don’t in order to keep criminals (or journalists, activists, politicians, dissidents, etc.) guessing about the current state of cracking tech.
Then there’s the fact that Cellebrite is not the only player in this market, even if it appears to be the most well-known. Competitors are presumably engaged in the same race against patches and system updates in order to provide something worth paying for to government customers.
Finally, the Israel-based company appears to have been stung a bit by the steady deluge of negative press covering phone-hacking malware purveyors like NSO Group and Candiru, both of which have been blacklisted by the US government for selling their goods to known human rights violators.
“Cellebrite does not sell to countries sanctioned by the U.S., EU, UK or Israeli governments or those on the Financial Action Task Force (FATF) blacklist. We only work with and pursue customers who we believe will act lawfully and not in a manner incompatible with privacy rights or human rights,” the email added.
Well, great, I guess. That answers a question no one asked, but as long as you’re in the news, I suppose it’s smart to get out ahead of the criticism, even if it’s still unspoken at this point.
While some in law enforcement might view this reporting as a half-empty glass where the tech they use will always be a step or two behind the efforts of device manufacturers, everyone else should see this as more than half-full. More companies and developers are putting more time and effort into ensuring the devices they sell are as secure as humanly possible. That’s a net win for everyone, even if you halfway believe the often-hysterical proclamations of government officials who think device security is the enemy of public safety.
It may not necessarily discourage device theft, but it does limit the damage done by those who steal devices. And it helps protect journalists, dissidents, activists, and political opposition leaders from abusive tech deployments just as much as it “protects” criminals from having their seized devices cracked. Non-criminals will always outnumber criminals. And that fact shouldn’t be ignored by law enforcement officials just because it makes things a bit tougher when it comes to extracting data from seized devices.
Filed Under: cellphone cracking, encryption, fbi
Companies: android, apple, cellebrite