security – Techdirt

Apple Snuck In Code That Automatically Reboots Idle iPhones And Cops Are Not Happy About It

from the phone-cracking-now-has-a-countdown-timer dept

Detroit law enforcement officials got a bit of a shock last week when some seized iPhones rebooted themselves, despite being in airplane mode and, in one case, stored inside a Faraday bag. Panic — albeit highly localized — ensued. It was covered by Joseph Cox for 404 Media, who detailed not only the initial panic, but the subsequent responses to this unexpected development.

Law enforcement officers are warning other officials and forensic experts that iPhones which have been stored securely for forensic examination are somehow rebooting themselves, returning the devices to a state that makes them much harder to unlock, according to a law enforcement document obtained by 404 Media.

The exact reason for the reboots is unclear, but the document authors, who appear to be law enforcement officials in Detroit, Michigan, hypothesize that Apple may have introduced a new security feature in iOS 18 that tells nearby iPhones to reboot if they have been disconnected from a cellular network for some time. After being rebooted, iPhones are generally more secure against tools that aim to crack the password of and take data from the phone.

The problem (for the cops, not iPhone owners) is that the reboot takes the phone out of After First Unlock (AFU) state — a state where current phone-cracking tech can still be effective — and places it back into Before First Unlock (BFU) state, which pretty much renders phone-cracking tech entirely useless.

The speculation as to the source of these unexpected reboots was both logical and illogical. The logical assumption was that Apple had, at some point, quietly added new code to the latest iOS version without informing the public.

The other guesses were just kind of terrible and, frankly, a bit worrying, considering their source: law enforcement professionals tasked with finding technical solutions to technical problems.

The law enforcement officials’ hypothesis is that “the iPhone devices with iOS 18.0 brought into the lab, if conditions were available, communicated with the other iPhone devices that were powered on in the vault in AFU. That communication sent a signal to devices to reboot after so much time had transpired since device activity or being off network.” They believe this could apply to iOS 18.0 devices that are not just entered as evidence, but also personal devices belonging to forensic examiners.

These are phones, not Furbies. There needs to be some avenue for phone-to-phone communication, which can’t be achieved if the phones are not connected to any networks and/or stored in Faraday cages/bags. The advisory tells investigators to “take action to isolate” iOS 18 devices to keep them from infecting (I guess?) other seized phones currently awaiting cracking.

Fortunately, a day later, most of this advisory was rendered obsolete after actual experts took a look at iOS 18’s code. Some of those experts work for Magnet Forensics, which now owns Grayshift, the developer of the GrayKey phone cracker. This was also covered by Joseph Cox and 404 Media.

In a law enforcement and forensic expert only group chat, Christopher Vance, a forensic specialist at Magnet Forensics, said “We have identified code within iOS 18 and higher that is an inactivity timer. This timer will cause devices in an AFU state to reboot to a BFU state after a set period of time which we have also identified.”

[…]

“The reboot timer is not tied to any network or charging functions and only tied to inactivity of the device since last lock [sic],” he wrote.
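In code terms, what Vance describes amounts to something like the minimal sketch below. To be clear, this is an illustration, not Apple's implementation: the structure, the names, and the 72-hour timeout are all assumptions (the actual interval wasn't disclosed in the reporting).

```python
import time
from enum import Enum

class LockState(Enum):
    BFU = "Before First Unlock"  # encryption keys evicted; cracking tools mostly useless
    AFU = "After First Unlock"   # some keys cached in memory; cracking tools can still work

# Hypothetical timeout; the actual interval iOS 18 uses isn't stated in the reporting.
REBOOT_AFTER_SECONDS = 72 * 60 * 60

class Device:
    def __init__(self):
        self.state = LockState.BFU
        self.last_locked_at = None

    def unlock(self):
        self.state = LockState.AFU

    def lock(self):
        # Per Vance, the timer is tied only to inactivity since last lock,
        # not to network connectivity or charging.
        self.last_locked_at = time.monotonic()

    def tick(self):
        # Called periodically by the OS. If the device has sat locked and
        # idle past the threshold, reboot it back into the BFU state.
        if (self.state is LockState.AFU
                and self.last_locked_at is not None
                and time.monotonic() - self.last_locked_at >= REBOOT_AFTER_SECONDS):
            self.reboot()

    def reboot(self):
        # Rebooting discards the cached keys, which is what makes BFU
        # devices so much harder for forensic tools to crack.
        self.state = LockState.BFU
        self.last_locked_at = None
```

The design point worth noticing is in `lock()`: the countdown references nothing but the device's own clock and lock state, so airplane mode and Faraday bags can't interrupt it, which is exactly what the Detroit examiners ran into.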

It’s an undocumented feature in the latest version of iOS, apparently. And one that isn’t actually a bug dressed in “feature” clothing. This was intentional, as was, presumably, Apple’s decision to keep anyone from knowing about it until it was discovered. Apple has issued no statement confirming or denying the stealthy insertion of this feature.

Law enforcement officials and the tech contractors they work with aren’t saying much either. Everything published by 404 Media was based on screenshots taken from a law enforcement-only group chat or secured from a source in the phone forensics field. Magnet Forensics has only offered a “no comment,” along with the acknowledgement the company is aware this problem now exists.

This means iPhones running the latest iOS version will need to be treated like time bombs by investigators. And per Vance’s description, the clock starts running the moment a seized phone is locked and left idle, whether or not it’s still connected to a network.

This isn’t great news for cops, but it’s definitely great news for iPhone owners. And not just the small percentage who are accused criminals. Everyone benefits from this. And the feature will deter targeting of iPhones by criminals, who are even less likely to be able to beat the clock with their phone-cracking tech. Anything that makes electronic devices less attractive to criminals is generally going to cause additional problems for law enforcement because both entities — to one degree or another — know the true value of a seized/stolen phone isn’t so much the phone itself as it is the wealth of information those phones contain.

Filed Under: device cracking, device encryption, device security, encryption, law enforcement, security
Companies: apple, grayshift, magnet forensics

Criminals Are Still Using Bogus Law Enforcement Subpoenas To Obtain Users’ Info

from the abusing-the-same-tools-the-cops-abuse dept

Maybe if law enforcement didn’t abuse subpoenas so frequently, it might be a little bit more difficult for criminals to do the same thing. Subpoenas can be used to order companies and service providers to turn over user data and information. But they don’t require law enforcement to run this request past a court first, so subpoenas are the weapon of choice if investigators just don’t have the probable cause they need to actually obtain a warrant.

The FBI has a long history of abusing its subpoena power, crafting National Security Letters to obtain information it thinks it might not be able to acquire if it allowed a court to review the request. In fact, FBI investigators have been known to send out NSLs demanding the same info requested by their rejected warrant applications.

Most companies don’t have the time or personnel to vet every subpoena they receive to ensure it’s legitimate and only demanding info or data that can be legally obtained without a warrant. As long as it originates from a law enforcement email address or has some sort of cop shop logo on it, they’ll probably comply.

This has led to several successful exfiltrations of personal data by cybercriminals. The latest wave of bogus subpoenas has apparently been effective enough that the FBI (which is part of the problem) has decided it’s time to step in. Here’s Zack Whittaker with the details for TechCrunch:

The FBI’s public notice filed this week is a rare admission from the federal government about the threat from fraudulent emergency data requests, a legal process designed to help police and federal authorities obtain information from companies to respond to immediate threats affecting someone’s life or property. The abuse of emergency data requests is not new, and has been widely reported in recent years. Now, the FBI warns that it saw an “uptick” around August in criminal posts online advertising access to or conducting fraudulent emergency data requests, and that it was going public for awareness.

“Cyber-criminals are likely gaining access to compromised US and foreign government email addresses and using them to conduct fraudulent emergency data requests to US based companies, exposing the personal information of customers to further use for criminal purposes,” reads the FBI’s advisory.

The full notice [PDF] gives more detail on how this is being accomplished, which involves utilizing data and personal info obtained through previous hacks or data leaks. Once a criminal has enough information to impersonate a cop, all they need is some easy-to-find subpoena boilerplate and a little bit of info about their targets. It also helps to know what might motivate faster responses while limiting the number of questions asked by service providers.

In some cases, the requests cited false threats, like claims of human trafficking and, in one case, that an individual would “suffer greatly or die” unless the company in question returns the requested information.

To combat this, the FBI suggests recipients of law enforcement subpoenas start doing the sort of thing they should have been doing all along, which is also the sort of thing that law enforcement agencies seem to consider being a low-level form of obstruction. Investigators tend to be “We’ll be asking the questions here” people and seem to resent even the most minimal pushback when engaging in fishing expeditions via subpoena.

Private Sector Companies receiving Law Enforcement requests should apply critical thinking to any emergency data requests received. Cyber-criminals understand the need for exigency, and use it to their advantage to shortcut the necessary analysis of the emergency data request. FBI recommends reviewers pay close attention to doctored images such as signatures or logos applied to the document. In addition, FBI recommends looking at the legal codes referenced in the emergency data request, as they should match what would be expected from the originating authority.
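Translated into software terms, the FBI’s recommendations boil down to a short vetting checklist. Here’s a hypothetical sketch of what part of that might look like; the agency registry, the domain, and the statute citations are illustrative assumptions, not a real verification system.

```python
# Hypothetical registry of agencies, their known email domains, and the legal
# authorities they'd plausibly cite in an emergency data request (EDR).
KNOWN_AGENCIES = {
    "fbi.gov": {"18 U.S.C. 2702(b)(8)", "18 U.S.C. 2702(c)(4)"},
}

def vet_emergency_request(sender_email: str, cited_statute: str) -> list[str]:
    """Return a list of red flags for a purported EDR. An empty list does NOT
    mean the request is legitimate: the attacks described above come from
    compromised (i.e., genuine) government accounts."""
    flags = []
    domain = sender_email.rsplit("@", 1)[-1].lower()
    if domain not in KNOWN_AGENCIES:
        flags.append(f"unrecognized sender domain: {domain}")
    elif cited_statute not in KNOWN_AGENCIES[domain]:
        # Per the FBI notice, cited legal codes should match what the
        # originating authority would actually be expected to use.
        flags.append(f"statute {cited_statute!r} unexpected for {domain}")
    return flags
```

A request from an unrecognized domain or one citing an implausible statute trips a flag. The harder case, and the one the FBI is actually warning about, trips neither, which is why out-of-band callback verification through independently obtained contact details still matters.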

The rest of the notice tells law enforcement agencies to do all the basic security stuff they should have been doing all along to prevent exactly this sort of thing from happening.

But what’s not suggested as a fix is one of the more obvious solutions: move away from subpoenas and rely on warrants instead. This would prevent service providers from stepping into the role of magistrate judge, forced to determine whether each request they receive is legitimate and properly supported by existing law. It would also mean cybercriminals could no longer fraudulently obtain user information by doing little more than sending emails from compromised accounts. While it’s not impossible to forge court orders and warrants, it’s a bit more difficult than merely impersonating a single person or law enforcement entity when sending bogus paperwork to tech companies.

Of course, no law enforcement agency would be willing to make this switch even if it meant protecting thousands of innocent people from being victimized by cybercriminals. Whatever makes things easier for cops to get what they want also makes it easier for criminals to do the same thing. If nothing else, maybe a few law enforcement officials will realize the parallels this has to mandating weakened encryption or encryption backdoors: what works better for cops works better for criminals.

Filed Under: cybercrime, fbi, privacy, security, subpoenas

Supreme Court Helps AT&T, Verizon Avoid Accountability For Spying On Your Every Movement

from the civilization-was-nice-while-it-lasted dept

Thu, Nov 14th 2024 05:25am - Karl Bode

We’ve noted for years how wireless companies were at the forefront of over-collecting your sensitive daily movement data, then selling access to any nitwit with two nickels to rub together. That resulted in no limit of scandals, from stalkers using the data to spy on women, to law enforcement (and people pretending to be law enforcement) using it to track people’s movements.

Earlier this year, after four years of legal wrangling and delays (caused in part by the telecom industry’s sleazy attack on the Gigi Sohn FCC nomination), the FCC announced it had voted to finally formalize $192 million in fines against Verizon, AT&T, and T-Mobile.

The fines were likely a pittance compared to the money the three companies made off of location data sales, but at least it was something. Now those efforts are at risk thanks to, you guessed it, the Supreme Court and Trumpism. All three companies are arguing in court that recent Supreme Court rulings mean the FCC doesn’t actually have the authority to do, well, anything anymore:

“Verizon, AT&T, and T-Mobile are continuing their fight against fines for selling user location data, with two of the big three carriers submitting new court briefs arguing that the Federal Communications Commission can’t punish them.”

I’ve noted repeatedly how several recent Supreme Court rulings, most notably Loper Bright, will result in most U.S. corporations insisting that effectively all federal consumer protection efforts are now illegal. That’s going to result in untold legal chaos and widespread consumer protection, public safety, labor, and environmental harms across every industry that touches every corner of your life.

There’s a segment of folks who think that’s hyperbole or that the result won’t be quite that bad. But every time you turn around you find another instance of a company leveraging recent Supreme Court rulings to defang corporate oversight (which, despite a lot of pretense, was precisely what was intended).

In this case, the wireless companies are leveraging the Supreme Court’s June 2024 ruling in _Securities and Exchange Commission v. Jarkesy_, which confirmed a Trumplican Fifth Circuit order stating that “when the SEC seeks civil penalties against a defendant for securities fraud, the Seventh Amendment entitles the defendant to a jury trial.” That order wasn’t in place when the FCC first proposed its fines:

“The FCC disputed the 5th Circuit ruling, saying among other things that Supreme Court precedent made clear that ‘Congress can assign matters involving public rights to adjudication by an administrative agency even if the Seventh Amendment would have required a jury where the adjudication of those rights is assigned to a federal court of law instead.’”

There’s always a lot of bullshit logic Trumplicans use to prop up our corrupt Supreme Court’s dismantling of corporate oversight. Most notably, that this is all just some sort of good faith effort to rebalance institutional power and rein in regulators that had somehow been running amok (you’re to ignore that most U.S. regulators can barely find where they put their pants on a good day).

But as you can see here (and will be seeing repeatedly and often in vivid and painful detail), the goal really is to create a flimsy framework effectively arguing that all federal consumer protection efforts, however basic, are now illegal. It’s not going to be subtle, it absolutely is going to kill people at scale, and the majority of Trump supporters are utterly oblivious to what they’ve been supporting.

In this case, the FCC was trying to uphold the most minimal standards of accountability possible, and even that wasn’t allowable under this new regulatory paradigm.

All is not lost: most consumer protection battles will now shift to the states. But if you currently live in a state where consumer, labor, and environmental protections were already out of fashion, you’re going to be repeatedly and painfully finding yourself shit out of luck. It would have been nice if election season journalism had tried to make clear the kind of stakes we’re talking about here.

Filed Under: fcc, jarkesy, location, loper bright, privacy, security, stalkers, surveillance, wireless
Companies: at&t, t-mobile, verizon

24,000 Abandoned Redbox DVD Rental Kiosks Are Leaking Sensitive Customer Information

from the privacy-shmivacy dept

Thu, Oct 24th 2024 04:00pm - Karl Bode

You probably remember Redbox, the DVD-rental kiosk company that went bankrupt last June. The story behind the bankruptcy is interesting, in case you missed it. The company failed to pivot to streaming (you might recall the failed joint venture with Verizon), and the bankruptcy has been profoundly ugly in a scorched-earth kind of way.

Frustrated employees (who stopped receiving health insurance last May) have apparently been stripping the company for parts, including selling used DVDs all over eBay. The company’s kiosks have also been left abandoned everywhere. 404 Media had a good story about how some innovative tinkerers have been making interesting use of the abandoned machines (of course they’re capable of running Doom).

But Ars Technica notes another problem: many of the abandoned machines still have the sensitive data of customers left on the hard drives. That includes rental histories, email addresses, zip codes, and, in some cases, credit card numbers, all going back to at least 2015:

“[The Redbox] logged lots of information, including debugging information from the transaction terminal, and they left old records on the device. This probably saved them some time on QAing software bugs, but it exposed all their users to data being leaked.”

There are numerous mistakes here, including storing any of this data locally and logging way more data during transactions than was reasonably needed. These are flaws that transparent security research could have identified and prompted a fix for before they became a problem.

Redbox and its corporate parent, Chicken Soup for the Soul Entertainment, clearly not only sucked at business, but sucked at sucking at business. They were warned about potential privacy violations during bankruptcy proceedings. Pretending for a minute the U.S. isn’t too corrupt to pass modern privacy laws, there’s not much of a company left to hold accountable for the privacy-related “oversight.”

Now a lot of this data is old. And however bad this sounds, it can’t hold a candle to the data collected on you by a vast array of dodgy international data brokers, who routinely leak vast U.S. consumer datasets into the wild because the U.S. is literally too corrupt to pass a basic privacy law or regulate the industry.

Still, it’s a problem: as the Wall Street Journal notes, there are an estimated 24,000 of these abandoned rental kiosks scattered all over the U.S., and retail landlords are struggling like hell to just find somebody to come take them away.

Filed Under: bankruptcy, consumers, data, dvd rental, kiosks, privacy, red box kiosk, security
Companies: chicken soup for the soul, redbox

U.S. Finally Restricts Sale Of Location Data To Foreign Adversaries, But We’re Still Too Corrupt To Pass A Basic Internet-Era Privacy Law

from the very-late-to-the-party dept

Thu, Oct 24th 2024 05:32am - Karl Bode

Back in February, the Biden administration issued an executive order preventing the “large-scale transfer” of Americans’ personal data to “countries of concern.” The restrictions cover genomic data, biometric data, personal health data, geolocation data, and financial data, with the goal of preventing this data from being exploited by foreign intelligence agencies.

This week the administration fleshed out its planned restrictions in more detail. In a new fact sheet outlining plans for a new national-security program restricting the bulk transfer of consumer data, the government says it will focus primarily on sales to “countries of concern,” including China, Cuba, Iran, North Korea, Russia, and Venezuela.

The executive order and proposed rule define “bulk” as follows:

“The proposed rule would establish the following bulk thresholds: human genomic data on over 100 U.S. persons, biometric identifiers on over 1,000 U.S. persons, precise geolocation data on over 1,000 U.S. devices, personal health data on over 10,000 U.S. persons, personal financial data on over 10,000 U.S. persons, certain covered personal identifiers on over 100,000 U.S. persons, or any combination of these data types that meets the lowest threshold for any category in the dataset.”
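To make those thresholds easier to parse, here’s a rough sketch of the logic in code, based only on the fact sheet language quoted above; the combination clause is implemented under one plausible reading, not the final rule text.

```python
# Proposed bulk thresholds by data category, per the fact sheet.
BULK_THRESHOLDS = {
    "human_genomic_data": 100,                 # U.S. persons
    "biometric_identifiers": 1_000,            # U.S. persons
    "precise_geolocation_data": 1_000,         # U.S. devices
    "personal_health_data": 10_000,            # U.S. persons
    "personal_financial_data": 10_000,         # U.S. persons
    "covered_personal_identifiers": 100_000,   # U.S. persons
}

def is_bulk(dataset_counts: dict[str, int]) -> bool:
    """True if any category meets its own threshold, or if a combination of
    categories meets the lowest threshold among those present."""
    present = {k: v for k, v in dataset_counts.items()
               if k in BULK_THRESHOLDS and v > 0}
    if any(v >= BULK_THRESHOLDS[k] for k, v in present.items()):
        return True
    if len(present) > 1:
        lowest = min(BULK_THRESHOLDS[k] for k in present)
        return sum(present.values()) >= lowest
    return False

# Example: 600 devices' location data plus 500 people's biometrics stays under
# each individual threshold, but the combined 1,100 records clears the lowest
# applicable threshold (1,000), so the transfer would count as bulk.
assert is_bulk({"precise_geolocation_data": 600, "biometric_identifiers": 500})
```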

While it’s certainly smart to finally start tracking the sale of sensitive U.S. consumer data to foreign countries in more detail (and blocking direct sales to some of the more problematic adversaries), it’s kind of like building barn doors four years after all the animals have already escaped.

We’ve noted for most of the last two decades how a huge variety of apps, telecoms, hardware vendors, and other services and companies track pretty much your every click, physical movement, and behavior, then sell access to that data to a broad array of super dodgy and barely regulated data brokers.

These data brokers then turn around and sell access to this data to a wide assortment of random nitwits, quite often without any sort of privacy and security standards. That’s resulted in a flood of scandals from stalkers tracking women to anti-abortion zealots buying clinic visitor data in order to target vulnerable women with health care misinformation.

This continues to happen for two reasons: at every last step, U.S. leaders put making money above public safety and consumer protection. And the U.S. government has discovered that buying this data is a fantastic way to avoid having to get pesky warrants. This all occurs against the backdrop of a relentless effort to turn all U.S. consumer protection regulators into decorative cardboard cutouts.

So nothing has changed foundationally. We’re literally too corrupt to pass even a baseline privacy law for the internet era, and outside some scattered efforts we really don’t consistently regulate data brokers. Those data brokers in turn have been so fast and loose with broad consumer datasets, it’s been utterly trivial for foreign intelligence agencies around the world to gain access to that data.

It’s nice that it’s 2024 and the U.S. government only just realized this is all a problem, and some basic guard rails are better than nothing, but it’s still not good enough. The U.S. needs comprehensive internet-era privacy laws that hold companies and executives accountable for lax security and privacy standards, and anything short of that (like freaking out exclusively about TikTok) is performance.

Filed Under: behavioral, consumers, data brokers, executive order, genomic, intelligence, location data, privacy, security, wireless

FCC Fines T-Mobile $31.5 Million After Carrier Was Hacked 8 Times In 5 Years

from the surely-you've-learned-your-lesson-THIS-time dept

Wed, Oct 9th 2024 05:25am - Karl Bode

U.S. wireless giant T-Mobile gets hacked a lot. In fact, the company has been hacked eight times in the last five years, with several of the intrusions exposing the sensitive personal data of millions of T-Mobile customers. The last hack, revealed in a 2023 SEC filing, exposed the names, addresses, social security numbers, and other sensitive information of over 37 million T-Mobile subscribers.

It took half a decade, but the FCC has finally taken action, announcing last week that it struck a new settlement with T-Mobile related to the breaches. As part of the deal, T-Mobile has agreed to spend $15.75 million to ramp up its security standards and practices (money it should have already spent on the issue), and to pay another $15.75 million in civil penalties to the U.S. Treasury.

“Consumers’ data is too important and much too sensitive to receive anything less than the best cybersecurity protections,” FCC boss Jessica Rosenworcel said in a prepared statement. “We will continue to send a strong message to providers entrusted with this delicate information that they need to beef up their systems or there will be consequences.”

One could argue that a $15.75 million fine years after the fact isn’t quite the deterrent Rosenworcel insists, given T-Mobile’s made untold millions (or billions) of dollars over the last decade playing fast and loose with consumer privacy.

As with so many modern companies, T-Mobile over-collects data then doesn’t take the necessary steps to protect said data. It then lobbies state and federal lawmakers to ensure we don’t shore up U.S. privacy protections (as it did when Republicans gutted the FCC’s fairly modest broadband privacy rules, or when it lobbies to kill new federal privacy laws), and the cycle repeats itself in perpetuity.

T-Mobile has a bit of a history of being sloppy with the vast location data it collects on users, then fighting tooth and nail against whatever slapdash accountability U.S. regulators can feebly muster. T-Mobile recently dramatically expanded the company’s collection of user browsing and app usage data via a new program dubbed “app insights.”

In T-Mobile’s case, its federally-backed quest to erode sector competition and merge with Sprint not only resulted in untold layoffs and an immediate end to all wireless data price competition in the U.S., it also distracted the company from doing a better job on consumer privacy and data security.

So yes, it’s nice to see the FCC take belated action, but it shouldn’t be confused with more serious accountability for T-Mobile or its executives. Nor should anybody confuse occasional fines (which may be reduced, if they’re paid at all) with having a real federal privacy law, consistent privacy enforcement, or antitrust reform preventing companies from becoming impossibly unaccountable in the first place.

Filed Under: 5g, broadband, data breach, fcc, hackers, jessica rosenworcel, privacy, privacy law, security, telecom, wireless
Companies: t-mobile

Chinese Access To AT&T/Verizon Wiretap System Shows Why We Cannot Backdoor Encryption

from the backdoors-can-be-opened-by-spies-too dept

Creating surveillance backdoors for law enforcement is just asking for trouble. They inevitably become targets for hackers and foreign adversaries. Case in point: the US just discovered its wiretapping system has been compromised for who knows how long. This should end the encryption backdoor debate once and for all.

The law enforcement world has been pushing for backdoors to encryption for quite some time now, using their preferred term for it: “lawful access.” Whenever experts point out that backdooring encryption breaks the encryption entirely and makes everyone less safe and less secure, you’ll often hear law enforcement say that it’s really no different than wiretapping phones, and note that that hasn’t been a problem.

Leaving aside the fact that it’s not even that much like wiretapping phones, this story should be thrown back in the faces of all the law enforcement folks who believe that backdooring “lawful access” into encryption is nothing to worry about. Chinese hackers have apparently had access to the major US wiretapping system “for months or longer.”

A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers, potentially accessing information from systems the federal government uses for court-authorized network wiretapping requests.

For months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data, according to people familiar with the matter, which amounts to a major national security risk. The attackers also had access to other tranches of more generic internet traffic, they said.

According to the reporting, the hackers, known as “Salt Typhoon,” a Chinese state-sponsored hacking operation, were able to breach the networks of telco giants Verizon and AT&T.

The Wall Street Journal says that officials are freaking out about this, saying that the “widespread compromise is considered a potentially catastrophic security breach.”

Here’s the thing: whenever you set up a system that allows law enforcement to spy on private communications, it’s going to become a massive target for all sorts of sophisticated players, from organized crime to nation states. So, this shouldn’t be a huge surprise.

But it should also make it clear why backdoors to encryption should never, ever be considered a rational decision. Supporters say it’s necessary for law enforcement to get access to certain information, but as we keep seeing, law enforcement has more ways than ever to get access to all sorts of information useful for solving crimes.

Putting backdoors into encryption, though, makes us all less safe. It opens up so many private communications to the risk of hackers getting in and accessing them.

And again, for all the times that law enforcement has argued for backdoors to encryption being just like wiretaps, it seems like this paragraph should destroy that argument forever.

The surveillance systems believed to be at issue are used to cooperate with requests for domestic information related to criminal and national security investigations. Under federal law, telecommunications and broadband companies must allow authorities to intercept electronic information pursuant to a court order. It couldn’t be determined if systems that support foreign intelligence surveillance were also vulnerable in the breach.

It’s also worth highlighting how this breach was only just discovered and has been in place for months “or longer” (meaning years, I assume). Can we not learn from this, and decide not to make encryption systems vulnerable to such an attack by effectively granting a backdoor that hackers will figure out a way to get into?

On an unrelated note, for all the talk of how TikTok is a “threat from China,” it seems like maybe we should have been more focused on stopping these kinds of actual hacks?

Filed Under: breach, china, encryption, lawful access, security, wiretaps
Companies: at&t, verizon

Kaspersky Leaves U.S., Deletes Itself, Swaps Everybody’s Antivirus For Software Nobody Asked For

from the didn't-ask-for-this dept

Wed, Sep 25th 2024 05:26am - Karl Bode

Back in 2017, the Trump administration signed new rules banning Russian-based Kaspersky software on all government computers. Last June, the Biden administration took things further and banned distribution and sale of the software, stating that the company’s ties to the Russian government made its intimacy with U.S. consumer devices and data a national security threat.

While there are justifiable security concerns here, much like the ban of TikTok, the decision wasn’t free of lobbying influence from domestic companies looking to dismantle a competitor. It’s relatively easy to get Congress heated up about national security concerns, because that framing tends to mask anti-competitive lobbying in a way that can be brushed aside, non-transparently, for the greater good of the world [echoes].

Nor is a ban entirely consistent with broader U.S. policy, since U.S. government corruption prevents it from passing a meaningful privacy law, or regulating dodgy international data brokers that traffic in no limit of sensitive U.S. location and behavior data.

China and Russia don’t really need TikTok or AV software, they can simply buy access to your daily movement and browsing data from data brokers. Or, thanks to our lack of privacy laws or real accountability for lazy and bad actors, they can hack into any number of dodgy apps, software, or hardware with substandard security.

Regardless, this week Kaspersky Labs effectively left the U.S., but not before engaging in a practice that doesn’t exactly scream “high security standards.” The company deleted its products from U.S. user computers without anybody’s consent, then replaced them with UltraAV’s antivirus solution — also without informing users.

Many users understandably saw this nonconsensual transaction take place and assumed they’d been hacked or infected with a virus:

“I woke up and saw this new antivirus system on my desktop and I tried opening kaspersky but it was gone. So I had to look up what happened because I was literally having a mini heart attack that my desktop somehow had a virus which uninstalled kaspersky somehow,” one user said.

One problem is that Kaspersky had emailed customers just a few weeks ago, assuring them they would continue receiving “reliable cybersecurity protection.” They didn’t make any mention of the fact that this would involve deleting software and making installation choices consumers hadn’t approved of, suggesting that their exit from the security software industry won’t be all that big of a loss.

That said, it would be nice if U.S. consternation about consumer privacy were somewhat more… consistent.

The U.S. isn’t actually serious about U.S. consumer privacy because we make too much money off of the reckless collection and sale of said data to even pass baseline privacy laws. And the U.S. government has grown too comfortable being able to buy consumer data instead of getting a warrant. But we do like to put on a show that protecting consumer data is a top priority all the same.

Filed Under: antivirus, ban, consumers, national security, privacy, security, software
Companies: kaspersky

Aussie Gov’t: Age Verification Went From ‘Privacy Nightmare’ To Mandatory In A Year

from the topsy-turvy-down-under dept

Over the last few years, it’s felt like the age verification debate has gotten progressively stupider. People keep insisting that it must be necessary, and when others point out that there are serious privacy and security concerns that will likely make things worse, not better, we’re told that we have to do it anyway.

Let’s head down under for just one example. Almost exactly a year ago, the Australian government released a report on age verification, noting that the technology was simply a privacy and security nightmare. At the time, the government felt that mandating such a technology was too dangerous:

“It is clear from the roadmap at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness or implementation issues,” the government’s response to the roadmap said.

The technology must work effectively without circumvention, must be able to be applied to pornography hosted outside Australia, and not introduce the risk to personal information for adults who choose to access legal pornography, the government stated.

“The roadmap makes clear that a decision to mandate age assurance is not yet ready to be taken.”

That’s why we were a bit surprised earlier this year when the government announced a plan to run a pilot program for age verification. However, as we pointed out at the time, just hours after the announcement of that pilot program, it was revealed that a mandated verification database used for bars and clubs in Australia was breached, revealing sensitive data on over 1 million people.

You would think that might make the government pause and think more deeply about this. But apparently that’s not the way they work down under. The government is now exploring plans to officially age-gate social media.

The federal government could soon have the power to ban children from social media platforms, promising legislation to impose an age limit before the next election.

But the government will not reveal any age limit for social media until a trial of age-verification technology is complete.

The article is full of extremely silly quotes:

Prime Minister Anthony Albanese said social media was taking children away from real-life experiences with friends and family.

“Parents are worried sick about this,” he said.

“We know they’re working without a map. No generation has faced this challenge before.

“The safety and mental and physical health of our young people is paramount.

“Parents want their kids off their phones and on the footy field. So do I.”

This is ridiculous on all sorts of levels. Many families stay in touch via social media, so taking kids away from it may actually cut off their ability to connect with “friends and families.”

Yes, there are cases where some kids cannot put down phones and where obvious issues must be dealt with, as we’ve discussed before. But the idea that this is a universal, across-the-board problem is nonsense.

Hell, a recent study found that more people appeared to be going into the great outdoors because of seeing it glorified on social media. If anything, some are now worried people are too focused on the great outdoors because of that glorification.

Again, there’s a lot of nuance in the research that suggests this is not a simple issue of “if we cut kids off of social media, they’ll spend more time outside.” Some kids use social media to build up their social life which can lead to more outdoor activity, while some don’t. It’s not nearly as simple as saying that they’ll magically go outdoors and play sports if they don’t have social media.

Then you combine that with the fact that the Australian government knows that age verification is inherently unsafe, and this whole plan seems especially dangerous.

But, of course, politicians love to play into the latest moral panic.

South Australian Premier Peter Malinauskas said getting kids off social media required urgent leadership.

“The evidence shows early access to addictive social media is causing our kids harm,” he said.

“This is no different to cigarettes or alcohol. When a product or service hurts children, governments must act.”

Except, it’s extremely different than cigarettes and alcohol, both of which are actually consumed by the body and insert literal toxins into the bloodstream. Social media is speech. Speech can influence, but you can’t call it inherently a toxin or inherently good or bad.

The statement that “addictive social media is causing our kids harm” is literally false. The evidence is way more nuanced, and there remain no studies showing an actual causal relationship here. As we’ve discussed at length (backed up by multiple studies), if anything the relationship may go the other way, with kids who are already dealing with mental health problems resorting to spending more time on social media because of failures by the government to provide resources to help.

In other words, this rush to ban social media for kids is, in effect, an attempt by government officials to cover up their own failures.

The government could be doing all sorts of things to actually help kids. It could invest in better digital literacy, training kids how to use the technology more appropriately. It could provide better mental health resources for people of all ages. It could provide more space and opportunities for kids to freely spend time outdoors. These are all good uses of the government’s powers that tackle the issues they claim matter here.

Surveilling kids and collecting private data on them which everyone knows will eventually leak, and then banning them from spaces that many, many kids have said make their lives and mental health better, seems unlikely to help.

Of course, it’s only at the very end of the article linked above that the reporters include a few quotes from academics pointing out that age verification could create privacy and security problems, and that such laws could backfire. But the article never even mentions that the claims made by politicians are also full of shit.

Filed Under: age verification, anthony albanese, australia, kids, mental health, moral panic, peter malinauskas, privacy, security, social media

Cox Media Group Brags It Spies On Users With Device Microphones To Sell Targeted Ads, But It’s Not Clear They Actually Can

from the watching-you-watching-me dept

Thu, Aug 29th 2024 05:31am - Karl Bode

For years, the cable industry has dreamed of a future where they could use your cable box to actively track your every behavior using cameras and microphones and then monetize the data. At one point way back in 2009, Comcast made it clear they were even interested in using embedded microphones and cameras to monitor the number of people in living rooms and listen in on conversations.

Last December, internal documents obtained by 404 Media indicated that Cox Media Group claimed to have finally achieved this longstanding vision: it was now able to monitor consumers via microphones embedded in phones, smart TVs, and cable boxes, capture the audio data, then exploit it to target those users with tailored advertising.

At the time, the Cox Media Group (CMG) website openly bragged about the technology, crowing about how such surveillance was perfectly legal (though, even under our pathetic existing privacy and wiretap laws, it very likely isn’t). Shortly after the 404 Media story appeared, Cox deleted the website in question and issued a statement denying they were doing anything out of the ordinary:

CMG businesses do not listen to any conversations or have access to anything beyond a third-party aggregated, anonymized and fully encrypted data set that can be used for ad placement. We regret any confusion and we are committed to ensuring our marketing is clear and transparent.

Eight months later, 404 Media has obtained another pitch deck being used by Cox, crowing about its ability to listen in on consumers in order to sell them targeted ads under the company’s “Active Listening” program. The pitch deck advertises the company’s partnerships with Google, Amazon, and Microsoft. Google says it removed CMG from its Partners Program after an “investigation” prompted by 404 Media.

It’s not clear Cox is truly capable of doing what it claims or if it’s overstating its abilities just to woo ad partners. But the marketing deck is pretty clear:

“The power of voice (and our devices’ microphones),” the slide deck starts. “Smart devices capture real-time intent data by listening to our conversations. Advertisers can pair this voice-data with behavioral data to target in-market consumers. We use AI to collect this data from 470+ sources to improve campaign deployment, targeting and performance.”

If real, it likely includes the myriad “smart” television sets that increasingly have little to no real consumer privacy standards. It may also include everything from smart phones and cable boxes to the myriad other household “smart” devices with embedded mics, from home security hubs to your smart refrigerator.

Cox’s original, since-deleted website crowing about its “active listening” tech even went so far as to compare its own technology to a Black Mirror episode:

What would it mean for your business if you could target potential clients who are actively discussing their need for your services in their day-to-day conversations? No, it’s not a Black Mirror episode—it’s Voice Data, and CMG has the capabilities to use it to your business advantage.

Since the U.S. is too corrupt to pass a meaningful modern internet-era privacy law or regulate data brokers, it remains a sort of wild west when it comes to consumer surveillance and monetization. Companies routinely justify the behavior by insisting the data is “anonymized”; a meaningless, gibberish word used to pretend these kinds of ad surveillance systems are legal, private, and secure.

Because corporate lobbying has increasingly boxed in privacy regulation at the FCC and FTC, the folks supposedly tasked with investigating potential privacy abuses lack the staff, resources, and authority to police the problem at the massive scale it’s happening. And that’s before recent Supreme Court rulings further stripped away the independence and authority of U.S. regulators.

The U.S. government, keen to bypass warrants by buying consumer data from data brokers themselves, has repeatedly made it clear that making money is more important than consumer trust and public safety. As a result we have countless companies monitoring your every fart, and non-transparently selling it to any number of noxious individuals who can use it to cause active harm (see: Wyden’s revelations on abortion clinic visitor data).

Filed Under: behavior ads, consumers, microphones, privacy, security, smart tvs, surveillance, tracking
Companies: cox