vulnerabilities – Techdirt
Stories filed under: "vulnerabilities"
Investigation Shows NSO Group Competitor QuaDream’s Spyware Was Used To Target Journalists And Activists
from the bad-people-selling-to-worse-people dept
Here we go again. Another NSO-alike, founded in Israel by former government snoops, is selling powerful phone exploits to bad people who, unsurprisingly, use them to do bad things.
And, as usual, it’s Citizen Lab doing the heavy lifting, sifting through code, identifying targets, and seeking information to find the source of these attacks. NSO’s careless handling of its malware and customer base saw it sanctioned by the US Commerce Department and investigated by its own government. The same thing is coming for its competitors as it appears none of them consider a moral compass to be an essential business accessory.
QuaDream sells a zero-click exploit that targets iPhones. Its customers use this to target the kind of people you’d prefer governments (who buy this tech under the pretense it will be used to target terrorists and criminals) didn’t target. This is from Citizen Lab’s extensive report on QuaDream:
Based on an analysis of samples shared with us by Microsoft Threat Intelligence, we developed indicators that enabled us to identify at least five civil society victims of QuaDream’s spyware and exploits in North America, Central Asia, Southeast Asia, Europe, and the Middle East. Victims include journalists, political opposition figures, and an NGO worker. We are not naming the victims at this time.
QuaDream’s exploit is called “Reign,” and it gives governments the ability to fully compromise iPhones without users performing any actions of their own, leaving them fully unaware their phones are now infected with powerful spyware.
QuaDream works with a Cyprus-based company called InReach. InReach handles all of QuaDream’s sales and promotion outside of Israel. Adding another company to the mix also makes it easier to sell to governments blacklisted by Israel’s recent efforts to rein in its homegrown malware problem. But this partnership has undergone some recent friction, with QuaDream suing InReach for refusing to transfer sales revenues back to QuaDream. Thanks to that legal battle, Citizen Lab has been able to discover a bit more about the people running both companies.
Citizen Lab has a list of suspected locations of QuaDream operators. And that list isn’t pretty. It includes the United Arab Emirates, Uzbekistan, Singapore, and Ghana — all countries known to engage in habitual human rights violations. It also includes Hungary and Mexico, both of which routinely target journalists and human rights defenders with malware and other surveillance.
QuaDream may be slightly restricted in who it can sell to. But those restrictions can be circumvented with front companies. And, as Israeli news organization Haaretz points out in its report on Citizen Lab’s findings, the government is already relaxing export restrictions on the many, many ethically dubious malware purveyors that call Israel home.
[A]ccording to industry sources, in recent months restrictions on the process of granting licenses to sell these tools have been eased, to the point where these technologies can be marketed even to countries to which sales are still prohibited. Following Prime Minister Benjamin Netanyahu’s return to office, the Defense Ministry is reportedly expected to renew the scope of these permits – mainly to allow sales to countries in South America and Central Asia, industry sources say.
This seems… unwise, if only from a PR standpoint. Information on unsavory customers and abusive targeting is still surfacing regularly, all of it powered by Israeli tech companies formed by former government employees. Sure, lifting restrictions will make those particular constituents happy, but allowing these companies to return to their former position as enablers of human rights abuse isn’t going to work out in the long run.
Filed Under: exploits, reign, spyware, surveillance, vulnerabilities
Companies: nso group, quadream
‘Smart’ Garage Door Company Nukes Key Feature After Ignoring Vulnerability For Months
from the that's-one-way-to-fix-it dept
It will never stop being humorous uncovering just how many smart products are run by dumb companies. If you’re going to roll out a product that connects to the internet, you would think that the very basics of IT/internet security in those products would be taken into account. You would also think that there would be intelligent contingency plans proactively thought out for when something inevitably goes wrong or the unexpected is uncovered.
Meet Nexx. Nexx makes smart garage door openers that allow you to control your garage door via an app either over an internet connection or, if you’re close by, over Bluetooth. A researcher named Sam Sabetan uncovered a series of vulnerabilities within the app itself, which allowed him to get information not just about his own Nexx device, but about a ton of others as well.
Sabetan made a video proof-of-concept of the hack. It shows him first opening his own garage door as expected with the Nexx app. He then logs into a tool to view messages sent by the Nexx device. Sabetan closes the door with the app, and captures the data the device sends to Nexx’s server during this action.
With that, Sabetan doesn’t just receive information about his own device, but messages from 558 other devices that aren’t his. He is now able to see the device ID, email address, and name linked to each, according to the video. Sabetan then replays a command back to the garage through the software—rather than the app—and his door opens once again. Sabetan only tested this on his own garage door, but he could have remotely opened other users’ garage doors with this technique.
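Sabetan's published advisory (and the CISA alert that followed it) reportedly traced the root cause to a hard-coded password shared by every Nexx device for the company's MQTT message broker. Assuming that architecture, the failure is easy to sketch. The broker host, credentials, and topics below are hypothetical stand-ins, not Nexx's actual values:

```python
# Minimal sketch of why one shared broker credential is catastrophic.
# Hypothetical values throughout; paho-mqtt 1.x-style API.
# Requires: pip install paho-mqtt
import paho.mqtt.client as mqtt

BROKER = "mqtt.example-vendor.com"               # hypothetical host
SHARED_USER, SHARED_PASS = "device", "hunter2"   # one credential for every unit

def on_message(client, userdata, msg):
    # Every device publishes to the same broker, so one login can observe
    # all of them: device IDs, email addresses, open/close commands.
    print(msg.topic, msg.payload)

client = mqtt.Client()
client.username_pw_set(SHARED_USER, SHARED_PASS)
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("#")   # wildcard subscription: all topics, all customers
client.loop_forever()   # a captured "open door" payload could then simply
                        # be republished (replayed) to trigger the opener
```

Per-device credentials plus broker-side access controls (each device restricted to its own topics) would have closed both the information leak and the replay path.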
Sabetan believes that this would allow him to open the garage doors for pretty much any Nexx customer. Additionally, it appears that Nexx makes an app allowing for control over a home’s power outlets, too, which he could also manipulate using this technique. This is all obviously a massive security threat, which is why Sabetan contacted Nexx about it.
Nexx ignored him. For months. Worried his messages weren’t getting through, Sabetan then opened a new support ticket for his device and was contacted back. In response, he again asked Nexx to take a look at the original ticket he’d opened for the vulnerability. Nexx again did not respond, which is when Sabetan took the story to Motherboard.
“Great to know your support is alive and well and that I’ve been ignored for two months. Please respond to ticket [ticket number],” Sabetan wrote, referring to his vulnerability report.
The response from Nexx finally came, but not to Sabetan. Instead, Nexx simply nuked the entire IoT function of the product, leaving Bluetooth as the only remaining method for opening a Nexx garage door. It then put this message out to its customers.
“It has come to our attention of a potential internet security vulnerability with the following products: Nexx Garage, Nexx Gate, and Nexx Plug,” an email sent by the company, called Nexx, to customers, reads according to a post on Hacker News. A member of a Facebook Page for Nexx customers wrote a post saying they received a similarly worded email. “As we examine the issue, we are taking proactive action by temporarily disabling internet access remote control” for the products, the message continues.
Nexx and I appear to have a serious deviation in terms of our definition of the word “proactive.” This is all very, very reactive, and it’s causing a bunch of confusion and anger with Nexx’s clients.
“I have two NXG100 units that both stopped working at the same time last night. I disconnected power and reconnected just to see if that would reset it…. that didn’t work,” one impacted customer wrote on the Nexx Community Facebook page. “If they don’t address their security vulnerabilities, it might be time to move onto another product,” the customer added in another post.
And now Nexx is ducking Motherboard’s repeated follow-ups seeking comment from the company. If you’re a Nexx customer, or a potential one, having a product’s key feature disabled and having security concerns like this addressed so poorly likely won’t set your mind at ease.
Filed Under: garage doors, iot, smart garage, vulnerabilities
Companies: nexx
Microsoft Says China Is Abusing Vulnerability Disclosure Requirements To Hoard Exploits
from the all-the-cool-surveillance-kids-are-doing-it dept
Plenty of countries have vulnerability disclosure requirements in place. This is supposed to increase the security of all users by requiring notification of affected platforms or software of exploits that may be used by malicious entities.
Define “malicious entity” tho.
The NSA has never abided by these requirements, despite being the free world’s leader in surveillance. It would rather delay notification than give up vulnerabilities that give it an upper hand on its surveillance targets. And if the NSA is doing it, then everyone is doing it. Say what you will about the NSA (lord knows I have), but it likely has more oversight than any other government surveillance entity in the world.
And if the NSA feels comfortable blowing off mandates to maintain its surveillance capabilities, it’s unlikely a government that deploys one of the most pervasive and invasive domestic surveillance programs in the world is going to care what Microsoft has to say about its actions.
The Chinese government has issued mandates requiring increased vulnerability reporting from hardware and software providers that do business in China. This would obviously include Microsoft. But this isn’t being done to make citizens safer. It’s being done to allow the Chinese government to make use of vulnerabilities reported to the government on its one-way disclosure street.
Somehow, the entity heading up US Homeland Security efforts sees nothing wrong with how the Chinese government handled vulnerability disclosures, as reported by Jonathan Greig for The Record.
Concerns that the Chinese military would exploit vulnerabilities before reporting them more broadly were an integral part of the investigation into the handling of the widespread Log4j vulnerability. Reports emerged earlier this year that the Chinese government had sanctioned Alibaba for reporting the vulnerability to Apache first, rather than to the government.
The Homeland Security Department’s Cyber Safety Review Board spoke with the Chinese government and “did not find evidence” that China used its advanced knowledge of the weakness to exploit networks.
Maybe the DHS just didn’t look hard enough. There’s evidence suggesting otherwise, as Microsoft stated in its latest security report [PDF].
[I]n a 114-page security report released on Friday, Microsoft openly accused the Chinese government of abusing the new rules and outlined how state-aligned groups have increasingly exploited vulnerabilities globally since they were implemented.
Here’s what the report says about the new reporting mandate and how the Chinese government is using the mandate to further its own aims:
China’s vulnerability reporting regulation went into effect September 2021, marking a first in the world for a government to require the reporting of vulnerabilities into a government authority for review prior to the vulnerability being shared with the product or service owner. This new regulation might enable elements in the Chinese government to stockpile reported vulnerabilities toward weaponizing them. The increased use of zero days over the last year from China-based actors likely reflects the first full year of China’s vulnerability disclosure requirements for the Chinese security community and a major step in the use of zero-day exploits as a state priority. The vulnerabilities described below were first developed and deployed by China-based nation state actors in attacks, before being discovered and spread among other actors in the larger threat ecosystem.
Unsurprising if true. This was always the goal of the new disclosure mandates. The Chinese government appears to have opened up a one-way portal that allows it to use reported exploits while keeping affected users in the dark. Unfortunately, it’s not all that unlike how the NSA has treated its own disclosure requirements, where the temptation to weaponize reported vulnerabilities often results in delayed disclosure to companies whose products and users are affected.
And this report won’t make anything better. China will continue to be China. And other nations might decide it’s in their best interest to start hoarding exploits, if for no other reason than to defend themselves against foreign governments and/or re-purpose exploits to go on the offensive. The internet is everyone’s playground. Unfortunately, it’s inhabited by far too many powerful bullies.
Filed Under: china, exploits, vulnerabilities, vulnerability disclosure
Companies: microsoft
$100 Bluetooth Hack Can Unlock All Kinds Of Devices, Including Teslas, From Miles Away
from the dumb-tech-is-smart-tech dept
Fri, May 20th 2022 02:18pm - Karl Bode
While they’re not impervious, at least you know where you stand with a good, old-fashioned dumb lock. That’s in stark contrast to so-called “smart” locks, which studies have repeatedly shown to be easily compromised with minimal effort. One report showed that 12 of the 16 smart locks tested could be hacked relatively easily thanks to flimsy security standards.
Now there’s a new vulnerability to worry about. Sultan Qasim Khan, a researcher at NCC Group, has discovered a new Bluetooth vulnerability that’s relatively trivial to exploit with around $100 in hardware, and impacts potentially thousands of Bluetooth devices, including Teslas.
The attack exploits a weakness in the Bluetooth Low Energy (BLE) standard adhered to by thousands of device makers, including “smart” door locks, cars, laptops, and various “internet of things” devices. It’s a form of “relay attack” that usually requires two attackers, one near the target, and one near the phone used to unlock the target.
But this class of attack doesn’t even require two people. A relaying device can be placed near where the target device is located or will be located (like by your driveway), and the other attacker can be targeting the device from hundreds of yards — or even miles — away:
“Hacking into a car from hundreds of miles away tangibly demonstrates how our connected world opens us up to threats from the other side of the country—and sometimes even the other side of the world,” Sultan Qasim Khan, a principal security consultant and researcher at security firm NCC Group, told Ars. “This research circumvents typical countermeasures against remote adversarial vehicle unlocking and changes the way we need to think about the security of Bluetooth Low Energy communications.”
Device makers have implemented a bunch of countermeasures to protect against BLE attacks like these, but Khan found a way around those countermeasures. Many other companies are smart enough to avoid using BLE for proximity authentication (since it was designed for data transfer, not authentication), but given that privacy and security are an afterthought for many companies, many still do.
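To be clear about what a relay attack does and does not do: the attacker never breaks BLE encryption, they just move the radio conversation somewhere the defenders didn't expect it to go. Here's a deliberately generic sketch using TCP sockets as stand-ins for the radios (NCC Group's actual tooling operates at the BLE link layer, which this does not attempt to reproduce):

```python
# Conceptual relay sketch. Bytes are forwarded verbatim, so the attacker
# never needs to decrypt anything; the lock simply concludes the phone
# is nearby because the (still-encrypted) handshake completes.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Forward everything from src to dst, unmodified."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay(listen_port: int, peer_host: str, peer_port: int) -> None:
    # One relay box sits near the victim's phone, the other near the car
    # or lock; they tunnel the exchange to each other over the internet.
    server = socket.socket()
    server.bind(("", listen_port))
    server.listen(1)
    near_side, _ = server.accept()
    far_side = socket.create_connection((peer_host, peer_port))
    threading.Thread(target=pipe, args=(near_side, far_side), daemon=True).start()
    pipe(far_side, near_side)
```

Because the traffic passes through unmodified, cryptographic authentication succeeds. Khan's tooling reportedly keeps added latency low enough to sit within normal BLE response timing, which is why latency-based countermeasures fail; the commonly suggested fix is proving proximity with something like ultra-wideband time-of-flight ranging rather than mere BLE signal presence.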
All told, it’s just another reminder that dumb tech is often… smarter.
Filed Under: bluetooth, hackers, internet of things, laptops, smart locks, vulnerabilities
Journalists In St. Louis Discover State Agency Is Revealing Teacher Social Security Numbers; Governor Vows To Prosecute Journalists As Hackers
from the wtf-missouri? dept
Last Friday, Missouri’s Chief Information Security Officer Stephen Meyer stepped down after 21 years working for the state to go into the private sector. His timing is noteworthy because it seems like Missouri really could use someone in their government who understands basic cybersecurity right now.
We’ve seen plenty of stupid stories over the years about people who alert authorities to security vulnerabilities then being threatened for hacking, but this story may be the most ridiculous one we’ve seen. Journalists for the St. Louis Post-Dispatch discovered a pretty embarrassing leak of private information for teachers and school administrators. The state’s Department of Elementary and Secondary Education (DESE) website included a flaw that allowed the journalists to find social security numbers of the teachers and administrators:
Though no private information was clearly visible nor searchable on any of the web pages, the newspaper found that teachers’ Social Security numbers were contained in the HTML source code of the pages involved.
The newspaper asked Shaji Khan, a cybersecurity professor at the University of Missouri-St. Louis, to confirm the findings. He called the vulnerability “a serious flaw.”
“We have known about this type of flaw for at least 10-12 years, if not more,” Khan wrote in an email. “The fact that this type of vulnerability is still present in the DESE web application is mind boggling!”
Having the data in the HTML source code means the state sent that information to the computers/browsers of anyone who knew which pages to visit. It also appears that the journalists used proper disclosure procedures, alerting the state and waiting until it had been patched before publishing their article:
The Post-Dispatch discovered the vulnerability in a web application that allowed the public to search teacher certifications and credentials. The department removed the affected pages from its website Tuesday after being notified of the problem by the Post-Dispatch.
Based on state pay records and other data, more than 100,000 Social Security numbers were vulnerable.
The newspaper delayed publishing this report to give the department time to take steps to protect teachers’ private information, and to allow the state to ensure no other agencies’ web applications contained similar vulnerabilities.
Also, it appears that the problems here go back a long ways, and the state should have been well aware that this problem existed:
The state auditor’s office has previously sounded warning bells about education-related data collection practices, with audits of DESE in 2015 and of school districts in 2016.
The 2015 audit found that DESE was unnecessarily storing students’ Social Security numbers and other personally identifiable information in its Missouri Student Information System. The audit urged the department to stop that practice and to create a comprehensive policy for responding to data breaches, among other recommendations. The department complied, but clearly at least one other system contained an undetected vulnerability.
This is where a competent and responsible government would thank the journalists for finding the vulnerability and disclosing it in an ethical manner designed to protect the info of the people the state failed to properly protect.
But that’s not what happened.
Instead, first the Education Commissioner tried to make viewing the HTML source code sound nefarious:
In the letter to teachers, Education Commissioner Margie Vandeven said “an individual took the records of at least three educators, unencrypted the source code from the webpage, and viewed the social security number (SSN) of those specific educators.”
It was never “encrypted,” Commissioner, if the journalists could simply look at the source code and get the info.
Then DESE took it up a notch and referred to the journalists as “hackers.”
But in the press release, DESE called the person who discovered the vulnerability a “hacker” and said that individual “took the records of at least three educators,” instead of acknowledging that more than 100,000 numbers had been at risk, and that they had been available to anyone through DESE’s own search engine.
And then, it got even worse. Missouri Governor Mike Parson called a press conference in which he again called the journalists hackers and said he had notified prosecutors and the Highway Patrol’s Digital Forensic Unit to investigate. Highway Patrol? He also claimed (again) that they had “decoded the HTML source code.” That’s… not difficult. It’s called “view source” and it’s built into every damn browser, Governor. It’s not hacking. It’s not unauthorized.
Through a multi-step process, an individual took the records of at least three educators, decoded the HTML source code, and viewed the SSN of those specific educators.
We notified the Cole County prosecutor and the Highway Patrol’s Digital Forensic Unit will investigate. pic.twitter.com/2hkZNI1wXE
— Governor Mike Parson (@GovParsonMO) October 14, 2021
It gets worse. Governor Parson claims that this “hack” could cost $50 million. I only wish I was joking.
This incident alone may cost Missouri taxpayers up to $50 million and divert workers and resources from other state agencies. This matter is serious.
The state is committing to bring to justice anyone who hacked our system and anyone who aided or encouraged them, in accordance with what Missouri law allows AND requires.
A hacker is someone who gains unauthorized access to information or content. This individual did not have permission to do what they did. They had no authorization to convert and decode the code. This was clearly a hack.
We must address any wrongdoing committed by bad actors.
If it costs $50 million to properly secure the data on your website that previous audits had already alerted you as a problem, then that’s on the incompetent government who failed to properly secure the data in the first place. Not on journalists ethically alerting you to fix the vulnerability. And, there’s no “unauthorized access.” Your system put that info into people’s browsers. There’s no “decoding” to view the source. That’s not how any of this works.
As people started loudly mocking Governor Parson, he decided to double down, insisting that it was more than a simple “right click” and repeating that journalists had to “convert and decode the data.”
We want to be clear, this DESE hack was more than a simple “right click.”
THE FACTS: An individual accessed source code and then went a step further to convert and decode that data in order to obtain Missouri teachers’ personal information. (1/3) pic.twitter.com/JKgtIpcibM
— Governor Mike Parson (@GovParsonMO) October 14, 2021
Again, even if it took a few steps, that’s still not hacking. It’s still a case where the state agency made that info available. That’s not on the journalists who responsibly disclosed it. It’s on the state for failing to protect the data properly (and for collecting and storing too much data in the first place).
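For anyone unclear on just how few steps that "multi-step process" involves, the whole thing can be approximated in about a dozen lines. The URL below is a hypothetical stand-in, and the Base64 step is an assumption inferred from the state's "convert and decode" language; either way, Base64 is an encoding anyone can reverse, not encryption:

```python
# Sketch of "viewing the HTML source code." The URL is hypothetical, and
# whether the SSNs were Base64-encoded is an assumption; Base64 is an
# encoding (reversible by design), not encryption.
import base64
import re
import urllib.request

URL = "https://apps.example.mo.gov/educator-search?id=12345"  # hypothetical

html = urllib.request.urlopen(URL).read().decode("utf-8")

# Anything matching an SSN sitting directly in the markup was, by
# definition, already transmitted to every visitor's browser.
for ssn in re.findall(r"\b\d{3}-\d{2}-\d{4}\b", html):
    print("served in plain HTML:", ssn)

# And if a field were Base64-encoded, "decoding" it is one library call:
print(base64.b64decode("NDg1LTEyLTY3ODk="))  # -> b'485-12-6789'
```

No credentials are bypassed and no protections are defeated; the server hands this data to every browser that asks for the page.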
Indeed, in doing this ridiculous show of calling them hackers and threatening prosecution, all the state of Missouri has done is make damn sure that the next responsible/ethical journalists and/or security researchers will not alert the state to their stupidly bad security. Why take the risk?
Filed Under: blame the messenger, dese, disclosure, ethical disclosure, hacking, mike parson, private information, schools, social security numbers, st. louis, teachers, vulnerabilities
Companies: st. louis post dispatch
FBI Sat On Ransomware Decryption Key For Weeks As Victims Lost Millions Of Dollars
from the is-this-one-of-those-'greater-good'-things-I-don't-understand-becaus dept
The vulnerability equities process meets the FBI’s natural tendency to find and hoard illegal things until it’s done using them. And no one walks away from it unscathed. Welcome to the cyberwar, collateral damage!
If an agency like the NSA comes across an exploit or unpatched security flaw, it’s supposed to notify affected tech companies so they can fix the problem to protect their customers and users. That’s the vulnerability equities process in theory. In practice, the NSA (and others) weigh the potential usefulness of the exploit versus the damage it might cause if it’s not fixed and make a disclosure decision. The NSA claims in public statements it’s very proactive about disclosing discovered exploits. The facts say something different.
Then there’s the FBI, which has engaged in criminal acts to further investigations. Perhaps most famously, the FBI took control of a dark web child porn server and ran it for a few weeks so it could deploy its malware (Network Investigative Technique, according to the FBI) to users of the site. Not only did it continue to distribute child porn during this time, but it reportedly optimized the system to maximize its malware distribution.
The trend continues. As Ellen Nakashima and Rachel Lerman report for the Washington Post (alternative link here), the FBI could have stopped a massive ransomware attack but decided it would be better if it just sat on what it knew and watched things develop.
The FBI refrained for almost three weeks from helping to unlock the computers of hundreds of businesses and institutions hobbled by a major ransomware attack this summer, even though the bureau had secretly obtained the digital key needed to do so, according to several current and former U.S. officials.
The key was obtained through access to the servers of the Russia-based criminal gang behind the July attack. Deploying it immediately could have helped the victims, including schools and hospitals, avoid what analysts estimate was millions of dollars in recovery costs.
The worse news is it wasn’t just the FBI, which is already known for running criminal enterprises while engaging in investigations. The report says this refusal to release the key was a joint agreement with “other agencies,” all of which apparently felt the nation (and the rest of the world) would be better served by the FBI keeping the key to itself while it tried to hunt down the criminals behind the ransomware attack.
And it turned out to be totally worth it!
The planned takedown never occurred because in mid-July REvil’s platform went offline — without U.S. government intervention — and the hackers disappeared before the FBI had a chance to execute its plan, according to the current and former officials.
FBI Director Chris Wray, testifying before Congress, said the tradeoff was necessary because it could help prevent future attacks (unproven) and time was needed to develop a tool that would help those hit by the ransomware.
“These are complex . . . decisions, designed to create maximum impact, and that takes time in going against adversaries where we have to marshal resources not just around the country but all over the world.”
He also suggested that “testing and validating” the decryption key contributed to the delay.
I, too, would testify before Congress that things were complex and time-consuming, especially when the end result was the bad guys getting away while victims remained victims. I would, however, perhaps consider not belaboring the “it will be long and hard” point when the private sector has demonstrated that it actually won’t be that long, and possibly not even all that hard.
Emsisoft, however, was able to act quickly. It extracted the key from what the FBI provided Kaseya, created a new decryptor and tested it — all within 10 minutes, according to Fabian Wosar, Emsisoft chief technology officer. The process was speedy because the firm was familiar with REvil’s ransomware. “If we had to go from scratch,” Wosar said, “it would have taken about four hours.”
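Wosar's ten-minute figure is plausible because "testing and validating" a key is mechanical: try it against a file the victim can't open and check whether recognizable plaintext comes out. A rough sketch of that loop follows; REvil reportedly used Salsa20 with per-victim keys wrapped via Curve25519, so the ChaCha20 stand-in below illustrates only the shape of the step, not the real file format:

```python
# Sketch of decryption-key validation; NOT REvil's actual format.
# Requires: pip install cryptography
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

MAGIC = b"%PDF"  # e.g., validate against a known encrypted PDF sample

def key_opens_sample(key: bytes, nonce: bytes, ciphertext: bytes) -> bool:
    # ChaCha20 as a stand-in stream cipher (32-byte key, 16-byte nonce).
    cipher = Cipher(algorithms.ChaCha20(key, nonce), mode=None)
    plaintext = cipher.decryptor().update(ciphertext)
    # Recognizable structure in the output means the key is good.
    return plaintext.startswith(MAGIC)
```

The slow part was never the validation; Emsisoft was fast because it already had decryptor scaffolding built for REvil's ransomware.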
The FBI took three weeks to turn over the key to the first of many victims. During that time, it apparently failed to accomplish what Emsisoft managed in 10 minutes, as well as failing to catch any of the perpetrators. Faced with this not-so-subtle undercutting of its “we really were just trying to save the world” narrative, the FBI — via its parent organization — has decided to shut the fuck up.
The Justice Department and White House declined to comment.
Sure, the FBI could still be pursuing some leads, but the timing of REvil’s disappearance and the FBI’s release of the key to one of the ransomware victims suggests the FBI only decided to release it because it was no longer of any use to the investigation. It may still possess some limited use to those whose data is still locked up, but pretty much every victim has moved on and attempted to recover from the incident. The cost — as is detailed in the Washington Post report — is in the hundreds of millions. Some victims are still trying to recover. Others are back in business, but only after losing millions to downtime.
Who pays for this? Well, the victims do. And taxpayers will too, if the government decides to compensate some of the companies victimized by ransomware and victimized again by the FBI. The FBI, however, will hardly feel a thing, since the going rate for temporary chagrin is a rounding error in the agency’s reputational damage column.
Filed Under: decryption, doj, fbi, ransomware, revil, vep, vulnerabilities, vulnerabilities equities process
Germany's Constitutional Court Ponders Whether Government Users Of Zero-Day Surveillance Malware Have A Duty To Tell Software Developers About The Flaws
from the resolving-conflicting-aims dept
As Techdirt has reported previously, the use of malware to spy on suspects — or even innocent citizens — has long been regarded as legitimate by the German authorities. The recent leak of thousands of telephone numbers that may or may not be victims of the Pegasus spyware has suddenly brought this surveillance technique out of the shadows and into the limelight. People are finally starting to ask questions about the legitimacy of this approach when used by governments, given how easily the software can be — and apparently has been — abused. An interesting decision from Germany’s constitutional court shows that even one of the biggest fans of legal malware is trying to work out how such programs based on zero-days can be deployed in a way that’s compatible with fundamental rights. The court’s press release explains:
The complainants [to the constitutional court] essentially assert that, by enacting the authorisation laid down in [the German Police Act], the Land Baden-Württemberg violated the guarantee of the confidentiality and integrity of information technology systems — a guarantee arising from fundamental rights — because under that provision, the authorities have no interest in notifying developers of any vulnerabilities that come to their attention since they can exploit these vulnerabilities to infiltrate IT systems for the purpose of source telecommunications surveillance, which is permitted under [the German Police Act]. Yet if the developers are not notified, these vulnerabilities and the associated dangers — in particular the danger of third-party attacks on IT systems — will continue to exist.
That is, the failure to notify developers about the vulnerabilities means the authorities are putting IT systems in Germany at risk, and should therefore be stopped. The complainants went on to argue that if the court nonetheless ruled that the use of such malware was not considered “inherently incompatible with the state’s duty of protection”, at the very least administrative procedures should be established for evaluating the seriousness of the threat that leaving them unpatched would represent, and then deciding on a case-by-case basis whether the relevant developers should be notified.
The German constitutional court dismissed the complaint, but on largely technical grounds. The judgment said that the complainants did not have standing, because they had failed to substantiate a breach of the government’s duty of protection. Moreover, the top court said the question should first be considered exhaustively by the lower courts, before finally moving to the constitutional court if necessary. However, the judges did recognize that there was a tension between a desire to use zero-days to carry out surveillance, and the German government’s duty to protect the country and its computer systems:
In the present case, the duty of protection encompasses the obligation for the legislator to set out how the police are to handle such IT security vulnerabilities. Under constitutional law, it is not inherently impermissible from the outset for source surveillance to be performed by exploiting unknown security vulnerabilities, although stricter requirements for the justification of such surveillance apply due to the dangers posed to the security of IT systems. Furthermore, fundamental rights do not give rise to a claim that authorities must notify developers about any IT security vulnerabilities immediately and in all circumstances. However, the duty of protection does necessitate a legal framework that governs how — in a manner compatible with fundamental rights — an authority is to resolve the conflicting aims of protecting IT systems against third-party attacks that exploit unknown IT security vulnerabilities on the one hand, and on the other hand keeping such vulnerabilities open so that source surveillance can be carried out for the purpose of maintaining public security.
It’s not clear whether that call for a legal framework to regulate how the authorities can deploy malware, and when they must alert developers to the flaw it exploits, will be heeded any time soon in Germany. But in the light of the Pegasus leak, it seems likely that other countries around the world will start to ponder this same issue. That’s particularly the case since such malware is arguably the only way that the authorities can reliably circumvent encrypted communications without mandating the flawed and unworkable backdoors they have been calling for. If more countries decide to follow Germany in deploying these programs, the need for a legal framework to regulate their use will become even more pressing.
Follow me @glynmoody on Twitter, Diaspora, or Mastodon.
Filed Under: germany, malware, surveillance, vulnerabilities, zero day
Signal Founder Cracks Cellebrite Phone Hacking Device, Finds It Full Of Vulns
from the distinct-lack-of-'what-if-this-feel-into-the-wrong-hands'-thinking-by-Ce dept
A pretty hilarious turn of events has led to Cellebrite’s phone hacking tech being hacked by Signal’s Moxie Marlinspike, revealing the tech law enforcement uses to pull data from seized phones is host to major security flaws.
According to Marlinspike, the Cellebrite device came into his possession thanks to some careless package handling.
By a truly unbelievable coincidence, I was recently out for a walk when I saw a small package fall off a truck ahead of me. As I got closer, the dull enterprise typeface slowly came into focus: Cellebrite. Inside, we found the latest versions of the Cellebrite software, a hardware dongle designed to prevent piracy (tells you something about their customers I guess!), and a bizarrely large number of cable adapters.
This must be what actually happened. I mean, there’s a photo of a Cellebrite device lying on the street. That should end any senseless law enforcement speculation about this device’s origin story.
The fun starts immediately, with Marlinspike finding all sorts of things wrong with Cellebrite’s own device security. This would seem to be a crucial aspect considering Cellebrite performs raw extractions of unvetted data from seized phones, which could result in the forced delivery of malware residing on the target device. But that doesn’t appear to concern Cellebrite, which seems to feel its products will remain unmolested because they’re only sold to government agencies.
Since almost all of Cellebrite’s code exists to parse untrusted input that could be formatted in an unexpected way to exploit memory corruption or other vulnerabilities in the parsing software, one might expect Cellebrite to have been extremely cautious. Looking at both UFED and Physical Analyzer, though, we were surprised to find that very little care seems to have been given to Cellebrite’s own software security. Industry-standard exploit mitigation defenses are missing, and many opportunities for exploitation are present.
Just one example of this carelessness is unpatched DLLs residing in the Cellebrite system software. One DLL used to handle extracted video content hasn’t been updated since 2012, ignoring more than 100 patches that have been made available since then.
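For the curious, checking whether a shipped DLL opted into those industry-standard mitigations takes only a few lines. Here's a minimal sketch using the pefile library to read the PE header flags for ASLR and DEP; the filename is a hypothetical stand-in for one of the bundled 2012-era DLLs:

```python
# Sketch: inspect a Windows DLL's PE header for common exploit mitigations.
# Requires: pip install pefile
import pefile

DYNAMIC_BASE = 0x0040  # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE -> ASLR
NX_COMPAT    = 0x0100  # IMAGE_DLLCHARACTERISTICS_NX_COMPAT    -> DEP

def check_mitigations(path: str) -> None:
    pe = pefile.PE(path, fast_load=True)
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    print(f"{path}: ASLR={bool(flags & DYNAMIC_BASE)} "
          f"DEP={bool(flags & NX_COMPAT)} "
          f"build_timestamp={pe.FILE_HEADER.TimeDateStamp:#x}")

check_mitigations("avcodec.dll")  # hypothetical bundled-DLL filename
```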
This means it wouldn’t be much of a hassle to target Cellebrite devices with code that could corrupt not only the current data extraction but also the results of every previous extraction performed by that device.
[B]y including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.
That’s a major problem because phone extractions are performed to secure evidence to use in criminal cases. If law enforcement agencies can’t trust the data they’ve extracted or rely on the reports generated by Cellebrite to perform searches, they’re going to find their evidence tossed or impossible to submit in the first place.
Further inspection of Cellebrite’s software also shows the company has ported over chunks of Apple’s proprietary code intact and is using it to assist in iPhone extractions. Presumably, Cellebrite hasn’t obtained a license from Apple to use this code in its devices (and redistribute the code with every device sold), so perhaps we’ll be hearing something from Apple’s lawyers in the near future.
This table-turning was likely provoked by Cellebrite’s incredibly questionable claim it had “cracked” Signal’s encryption. Instead, as more information came out — including its use in criminal cases — it became clear Cellebrite did nothing more than anyone could do with an unlocked phone: open up the Signal app and obtain the content of those messages.
Fortunately for everyone not currently working for Cellebrite, a delivery incident occurred and a phone-hacking device was impacted. Signal isn’t worried that Cellebrite can break its encryption. In fact, it doesn’t appear to be worried at all. This examination of Cellebrite hacking tools will only result in a small cosmetic refresh for Signal.
In completely unrelated news, upcoming versions of Signal will be periodically fetching files to place in app storage. These files are never used for anything inside Signal and never interact with Signal software or data, but they look nice, and aesthetics are important in software. […] We have a few different versions of files that we think are aesthetically pleasing, and will iterate through those slowly over time. There is no other significance to these files.
Maybe this will force Cellebrite to care a bit more deeply about its security and the security of its customers. Or maybe it will brute force its way past this, assuming its customers still have that “our word against yours” thing that tends to work pretty well in court. But it’s not the only player in the phone-cracking field. So it might want to step its security game up a bit. Or at least stop picking fights with encrypted services.
Filed Under: hacking, moxie marlinspike, signal, vulnerabilities
Companies: cellebrite
Still Not 'Going Dark:' Device Encryption Still Contains Plenty Of Exploitable Flaws
from the good-news-and-bad-news-(which-some-might-consider-good-news) dept
Law enforcement — especially at the federal level — has spent a great deal of time complaining about an oddity known only to the FBI and DOJ as “warrant-proof” encryption. Device users and customers just call this “encryption” and realize this protects them against criminals and malicious hackers. The federal government, however, sees device encryption as a Big Tech slap in the face. And so they complain. Endlessly. And disingenuously.
First off, law enforcement has access to a wide variety of tech solutions. It also has access to plenty of communications and other data stored in the cloud or by third parties that encryption can’t protect. And it has the users themselves, who can often be persuaded to allow officers to search their devices without a warrant.
Then there’s the protection being handed out to phone users. It’s got its own problems, as Matthew Green points out:
Authorities don’t need to break phone encryption in most cases, because modern phone encryption sort of sucks.
More specifically, even the gold standard (Apple’s) for encryption still leaves some stuff unencrypted. Once unlocked after a period of rest (say, first thing in the morning), the phone is placed into an “AFU” (after first unlock) state where crypto keys are stored in the phone. These stay in memory until erased. Most common use of phones won’t erase them. And they’re only erased one at a time, leaving several sets resident in memory where cops (and criminals!) using phone-cracking tech can still access them.
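A toy model makes the problem concrete. This illustrates the concept only, not Apple's actual Data Protection implementation: locking the phone evicts the strictest class key, while anything filed under AFU remains recoverable by whoever can read memory:

```python
# Toy model of iOS-style data protection classes; conceptual only.
import hashlib
import hmac

def derive(master: bytes, label: str) -> bytes:
    # Stand-in key derivation (real devices use hardware-bound KDFs).
    return hmac.new(master, label.encode(), hashlib.sha256).digest()

class Keystore:
    def __init__(self) -> None:
        self.keys: dict[str, bytes] = {}

    def first_unlock(self, passcode_key: bytes) -> None:
        # Both class keys become available once the passcode is entered.
        self.keys["complete"] = derive(passcode_key, "complete-protection")
        self.keys["afu"] = derive(passcode_key, "after-first-unlock")

    def lock(self) -> None:
        # Only the strict class is evicted; the AFU key stays resident,
        # which is what forensic tools exploit on a seized, locked phone.
        self.keys.pop("complete", None)

ks = Keystore()
ks.first_unlock(b"derived-from-passcode")
ks.lock()
assert "afu" in ks.keys            # still present after locking
assert "complete" not in ks.keys   # evicted on lock
```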
A report [PDF] put together by Matthew Green, Maximilian Zinkus, and Tushar Jois highlights the exploitable flaws of device encryption efforts by Apple, Google, and other device manufacturers. And there’s not a lot of darkness going on, despite law enforcement’s protestations.
This reaction is best exemplified by the FBI’s “Going Dark” initiative, which seeks to increase law enforcement’s access to encrypted data via legislative and policy initiatives. These concerns have also motivated law enforcement agencies, in collaboration with industry partners, to invest in developing and acquiring technical means for bypassing smartphone security features. This dynamic broke into the public consciousness during the 2016 “Apple v. FBI” controversy, in which Apple contested an FBI demand to bypass technical security measures. However, a vigorous debate over these issues continues to this day. Since 2015 and in the US alone, hundreds of thousands of forensic searches of mobile devices have been executed by over 2,000 law enforcement agencies, in all 50 states and the District of Columbia, which have purchased tools implementing such bypass measures.
The research here shows there’s no need for legislative mandates or court orders to access most of the contents of suspects’ iPhones. There’s plenty to be had just by exploiting the shortcomings of Apple’s built-in encryption.
[W]e observed that a surprising amount of sensitive data maintained by built-in applications is protected using a weak “available after first unlock” (AFU) protection class, which does not evict decryption keys from memory when the phone is locked. The impact is that the vast majority of sensitive user data from Apple’s built-in applications can be accessed from a phone that is captured and logically exploited while it is in a powered-on (but locked) state.
This isn’t theoretical. This has actually happened.
[W]e found circumstantial evidence in both the DHS procedures and investigative documents that law enforcement now routinely exploits the availability of decryption keys to capture large amounts of sensitive data from locked phones. Documents acquired by Upturn, a privacy advocate organization, support these conclusions, documenting law enforcement records of passcode recovery against both powered-off and simply locked iPhones of all generations.
Utilizing Apple’s iCloud storage greatly increases the risk that a device’s contents can be accessed. Using iCloud to sync messages results in the decryption key being uploaded to Apple’s servers, which means law enforcement, Apple, and malicious hackers all have potential access. Device-specific file encryption keys also make their way to Apple via other iCloud services.
Over on the Android side, it’s a bigger mess. Multiple providers and manufacturers all use their own update services. Both phase out older devices, leaving users confused as to which devices still receive software updates and the latest in encryption tech. Cheaper devices sometimes bypass these niceties entirely, leaving low-cost options the most vulnerable to exploitation. And, while the DOJ and FBI may spend the most time complaining about Apple, it only commands about 15% of the smartphone market. This means most devices law enforcement seizes aren’t secured by the supposedly “impenetrable” encryption provided by Apple.
Google’s cloud services offer almost no protection for Android users. App creators must opt in to certain security measures. In most cases, data backed up to Google’s cloud services is protected only by encryption keys Google holds, rather than keys held by the user uploading the data. Not only is encryption not much of a barrier, but neither is the legal system. A great deal of third-party data — like the comprehensive data sets maintained by Google — can be accessed with only a subpoena.
The rest of the report digs deep into the strengths and limitations of encryption offered to phone users. But the conclusion remains unaltered: law enforcement does have multiple ways to access the contents of encrypted devices. And some of these solutions scale pretty easily. While it’s not cheap, it’s definitely affordable. While there will always be those who “got away,” law enforcement isn’t being hindered much by encryption that provides security to all phone users, whether or not they’re suspected of criminal activity.
Filed Under: encryption, going dark, law enforcement, vulnerabilities
Tradeoffs: Facebook Helping The FBI Hack Tails To Track Down A Truly Awful Child Predator Raises Many Questions
from the icky-in-many-ways dept
Last week, Lorenzo Franceschi-Bicchierai at Vice had a bombshell of a story about Facebook helping the FBI track down a horrible, horrible person by paying a cybersecurity firm to build a zero-day attack on Tails, the secure operating system setup that is recommended by many, including Ed Snowden, for people who want to keep secrets away from the prying eyes of the government.
The story should make you uncomfortable on multiple levels — starting with the fact that the person at the center of the story, Buster Hernandez, is way up there on the list of truly terrible people, and there’s simply no reason to feel bad that this person is now locked up:
The crimes Buster Hernandez committed were heinous. The FBI’s indictment is a nauseating read. He messaged underage girls on Facebook and said something like “Hi, I have to ask you something. Kinda important. How many guys have you sent dirty pics to cause I have some of you?,” according to court records.
When a victim responded, he would then demand that she send sexually explicit videos and photos of herself, otherwise he would send the nude photos he already had to her friends and family (in reality, he didn’t have any nude photos). Then, and in some cases over the course of months or years, he would continue to terrorize his victims by threatening to make the photos and videos public. He would send victims long and graphic rape threats. He sent specific threats to attack and kill victims’ families, as well as shoot up or bomb their schools if they didn’t continue to send sexually explicit images and videos. In some cases, he told victims that if they killed themselves, he would post their nude photos on memorial pages for them.
And it gets worse from there. It’s good that the FBI tracked him down.
But, from there, you suddenly start to run into a bunch of other uncomfortable questions regarding Facebook’s involvement here. And each of those questions helps demonstrate the many tradeoffs that a company like Facebook (or lots of other internet companies) face in dealing with awful people online. And to be clear there is no “good” answer here. Every approach has some good elements (getting a horrible person away from continuing to terrorize young girls) and some not so great elements (helping the FBI hack Tails, which is used by journalists, whistleblowers, and dissidents around the globe).
The article notes that there was a vigorous debate within Facebook about this decision, but the folks in charge decided that tracking this person down outweighed the concerns on the other side:
“The only acceptable outcome to us was Buster Hernandez facing accountability for his abuse of young girls,” a Facebook spokesperson said. “This was a unique case, because he was using such sophisticated methods to hide his identity, that we took the extraordinary steps of working with security experts to help the FBI bring him to justice.”
Former employees at Facebook who are familiar with the situation told Motherboard that Hernandez’s actions were so extreme that the company believed it had been backed into a corner and had to act.
“In this case, there was absolutely no risk to users other than this one person for which there was much more than probable cause. We never would have made a change that affected anybody else, like an encryption backdoor,” said a former Facebook employee with knowledge of the case. “Since there were no other privacy risks, and the human impact was so large, I don’t feel like we had another choice.”
That does sound like a balancing of the risk/rewards here, but the idea that handing over a backdoor to the FBI puts no one else’s privacy at risk may raise some eyebrows. The description of the zero day certainly sounds like it could be used against others:
Facebook hired a cybersecurity consulting firm to develop a hacking tool, which cost six figures. Our sources described the tool as a zero-day exploit, which refers to a vulnerability in software that is unknown to the software developers. The firm worked with a Facebook engineer and wrote a program that would attach an exploit taking advantage of a flaw in Tails’ video player to reveal the real IP address of the person viewing the video. Finally, Facebook gave it to an intermediary who handed the tool to the feds, according to three current and former employees who have knowledge of the events.
And while the Facebook spokesperson tried to play down the idea that this was setting an expectation, it’s not really clear that’s true:
Facebook told Motherboard that it does not specialize in developing hacking exploits and did not want to set the expectation with law enforcement that this is something it would do regularly. Facebook says that it identified the approach that would be used but did not develop the specific exploit, and only pursued the hacking option after exhausting all other options.
But this may be hard to swallow, given that this is the very same FBI that has been pushing tech companies to develop backdoors to encryption for years, and in the famous San Bernardino case, tried to use the All Writs Act to force Apple to create a type of backdoor on iOS to break into a phone.
And obviously, cooperating one time doesn’t mean you need to cooperate every time, but it will at least raise questions. Especially at a time when Facebook is supposedly moving all of its messaging systems to fully encrypted. Can the setup there be fully trusted after this story?
As Bruce Schneier rightfully points out, it’s fine for the FBI to figure out how to use lawful hacking to track down Hernandez. That is its job. It’s much less clear, though, that Facebook should be handing that info over to the FBI, which could then use it elsewhere as well. It certainly does not appear that the FBI or Facebook revealed to the developers of Tails that their system had this vulnerability. Indeed, Tails only found out about it from the Vice story:
A spokesperson for Tails said in an email that the project’s developers “didn’t know about the story of Hernandez until now and we are not aware of which vulnerability was used to deanonymize him.” The spokesperson called this “new and possibly sensitive information,” and said that the exploit was never explained to the Tails development team.
So… that’s a problem. The FBI, under the Vulnerabilities Equities Program, is supposed to reveal these kinds of vulnerabilities — though it frequently does not (or hangs on to them for a long time before sharing). At the very least, this confirms lots of people’s suspicions that the Trump administration’s updating of the VEP process was little more than window dressing.
Senator Ron Wyden — who is often the only one in Congress paying attention to these things — also seemed quite concerned about how this all went down:
“Did the FBI re-use it in other cases? Did it share the vulnerability with other agencies? Did it submit the zero-day for review by the inter-agency Vulnerabilities Equity Process?” Wyden said in a statement, referring to the government process that is supposed to establish whether a zero-day vulnerability should be disclosed to the developers of the software where the vulnerability is found. “It’s clear there needs to be much more sunlight on how the government uses hacking tools, and whether the rules in place provide adequate guardrails.”
And thus, we’re all left in an uncomfortable place. It’s good that the FBI was able to track down Hernandez and stop him from preying on any more victims. But Facebook’s direct involvement raises tons of uncomfortable questions, as does the FBI’s decision to keep this vulnerability a secret (at the very least, it seems like Facebook maybe should have tipped off the Tails folks as well, once the FBI nabbed Hernandez). In an ideal world, the FBI would have figured out how to track down Hernandez without Facebook paying a firm to build the zero-day attack — and then the FBI would have notified Tails’ developers of the vulnerability. But, of course, that’s not what happened.
Filed Under: buster hernandez, fbi, hack, tails, tracking, vep, vulnerabilities, zero day
Companies: facebook