
Supreme Court Helps AT&T, Verizon Avoid Accountability For Spying On Your Every Movement

from the civilization-was-nice-while-it-lasted dept

We’ve noted for years how wireless companies were at the forefront of over-collecting your sensitive daily movement data, then selling access to any nitwit with two nickels to rub together. That resulted in no shortage of scandals, from stalkers using the data to spy on women to law enforcement (and people pretending to be law enforcement) using it to track people’s movements.

Earlier this year, after four years of legal wrangling and delays (caused in part by the telecom industry’s sleazy attack on the Gigi Sohn FCC nomination), the FCC announced it had voted to finally formalize $192 million in fines against Verizon, AT&T, and T-Mobile.

The fines were likely a pittance compared to the money the three companies made off of location data sales, but at least it was something. Now those efforts are at risk thanks to, you guessed it, The Supreme Court and Trumpism. All three companies are arguing in court that recent Supreme Court rulings mean the FCC doesn’t actually have the authority to do, well, anything anymore:

“Verizon, AT&T, and T-Mobile are continuing their fight against fines for selling user location data, with two of the big three carriers submitting new court briefs arguing that the Federal Communications Commission can’t punish them.”

I’ve noted repeatedly how several recent Supreme Court rulings, most notably Loper Bright, will result in most U.S. corporations insisting that effectively all federal consumer protection efforts are now illegal. That’s going to result in untold legal chaos and widespread harm to consumer protection, public safety, labor, and the environment across every industry that touches every corner of your life.

There’s a segment of folks who think that’s hyperbole or that the result won’t be quite that bad. But every time you turn around you find another instance of a company leveraging recent Supreme Court rulings to defang corporate oversight (which despite a lot of pretense, was precisely what was intended).

In this case, the wireless companies are leveraging the Supreme Court’s June 2024 ruling in _Securities and Exchange Commission v. Jarkesy_, which confirmed a Trumplican Fifth Circuit order stating that “when the SEC seeks civil penalties against a defendant for securities fraud, the Seventh Amendment entitles the defendant to a jury trial.” That order wasn’t in place when the FCC first proposed its fines:

“The FCC disputed the 5th Circuit ruling, saying among other things that Supreme Court precedent made clear that “Congress can assign matters involving public rights to adjudication by an administrative agency ‘even if the Seventh Amendment would have required a jury where the adjudication of those rights is assigned to a federal court of law instead.’””

There’s always a lot of bullshit logic Trumplicans have used to prop up our corrupt Supreme Court’s dismantling of corporate oversight. Most notably that this is just all some sort of good faith effort to rebalance institutional power and rein in regulators that had somehow been running amok (you’re to ignore that most U.S. regulators can barely find where they put their pants on a good day).

But as you can see here (and will be seeing repeatedly and often in vivid and painful detail), the goal really is to create a flimsy framework effectively arguing that all federal consumer protection efforts, however basic, are now illegal. It’s not going to be subtle, it absolutely is going to kill people at scale, and the majority of Trump supporters are utterly oblivious to what they’ve been supporting.

In this case, the FCC was trying to uphold the most minimal standards of accountability possible, and even that wasn’t allowable under this new regulatory paradigm.

All is not lost: most consumer protection battles will now shift to the states. But if you currently live in a state where consumer, labor, and environmental protections were already out of fashion, you’re going to be repeatedly and painfully finding yourself shit out of luck. It would have been nice if election season journalism had made clear the kind of stakes we’re talking about here.

Filed Under: fcc, jarkesy, location, loper bright, privacy, security, stalkers, surveillance, wireless
Companies: at&t, t-mobile, verizon

Just Because CSLI Warrants Are New-Ish Doesn’t Mean You Can Skimp On The Probable Cause

As far back as I can remember, cell site location info (CSLI) was always covered by the Third Party Doctrine. That court-created doctrine said anything “voluntarily” handed over to third parties can be obtained by the government. Without a warrant.

That not only includes bank records, phone records, and other transactional records we possibly haven’t even considered to be of interest to the government, but for the longest time — long after nearly everyone carried a cell phone with them wherever they went — location data generated by connections to cell towers could also be had without a warrant.

“Voluntary?” Hardly. To use your cell phone, you need to connect to a tower. There’s no real choice involved: using a cell phone means creating a digital trail of your movements.

In 2018, the Supreme Court finally decided in Carpenter v. United States that this was not only not exactly a voluntary transaction, but that the government shouldn’t have the power to engage in long-term tracking of citizens without a warrant. Enter the warrant requirement, which meant obtaining weeks or months of CSLI now needed a bit more paperwork and a bit more respect for the Fourth Amendment.

In terms of law enforcement work, a half-decade ago is a minute ago. It takes years for cops to finally comprehend the meaning and scope of constitutional decisions that don’t play out in their favor. They can fuck up for months or years and still get a “good faith” pass because law is hard and stuff.

Sooner or later, courts get a little tired of giving cops passes for deliberately failing to stay abreast of legal developments that occurred years ago. The Supreme Court of Georgia recently took law enforcement to task for thinking it could search someone’s cell phone using nothing but copy-pasted boilerplate and conclusory statements in its sworn affidavit.

The same sort of thing is going on here. I’m not even going to try to summarize the case to this point. This suppression order is the 519th document on the docket in a case that involves multiple charges, multiple defendants, and a still-unresolved drug conspiracy prosecution.

In fact, the order [PDF] doesn’t even make it clear it’s a suppression order. Multiple things are being handled here, and the federal court placed this one on the docket with a summary that leads with “ORDER GRANTING 355 Motion to Participate in Voir Dire as to Angela Cable.” Whew.

But this order does include a pretty thorough rejection of the government’s CSLI warrant half-assery. A wiretap that captured conversations between the co-conspirators also captured a single phone call involving defendant Angela Cable that may have included references to drug trafficking. This lone call became the basis for further government intrusion, some of which the court doesn’t find acceptable.

From this single call, the affiant concluded Defendant Cable was using her cell phone “to facilitate transactions involving drugs” and GPS data from her phone provider would “assist agents in locating and identifying vehicles and the locations that [were] being used as ‘stash houses’ for illegal drugs and/or drug proceeds.” Based on that analysis and information regarding the affiant’s background and experience, a Magistrate Judge issued a search warrant for geolocation data regarding Defendant Cable’s cell phone.

Ah, the old “training and experience” hook, which was attached to a couple of conclusory statements, some boilerplate, and a lot of assumptions about what this location info would reveal to investigators.

The magistrate judge issued a report and recommendation denying Cable’s attempt to suppress the CSLI. The district court says the magistrate is wrong about a few things.

Probable cause is still the standard, even if the warrant requirement is, in the grand scheme of things, fairly new. While the affidavit did provide information that linked Cable to her co-defendants and their alleged drug running, it did not do much to link her personal movements — those that could be ascertained from the location data — to the criminal acts being investigated.

The Court thus agrees the affidavit established, first, Defendant Cable’s involvement with Ruiz in the distribution of methamphetamine and, second, her use of her cell phone to assist in this illegal activity. But the affidavit provided no link between her mere use of the cell phone and probable cause to conclude the phone’s geolocation data would provide evidence of a crime. The affidavit alleged GPS data from the phone “will assist agents in locating and identifying vehicles and the locations that [were] being used as ‘stash houses” for illegal drugs and/or drug proceeds.” The affidavit did not, however, explain any basis for that conclusion.

A warrant has to do more than establish probable cause that criminal acts have occurred. It needs to link the suspected activity to the place being searched. In this case, it was Cable’s cell provider. The only data sought was location data, which wasn’t evidence of anything… at least not according to what had already been observed during the investigation.

Absent some allegation linking the movement of the phone to evidence of illegal activity, the mere use of a phone to conduct illegal activity does not establish probable cause to believe the location information tracked by the phone will provide evidence of a crime. People can, of course, use cell phones to speak about a crime without their locations providing evidence of the crime. So the mere use of a phone does not establish probable cause that one’s movements will provide evidence of a crime.

And what law enforcement had at the point the warrant was sought was nothing that suggested her location data would be evidence of anything other than her possession and use of a cell phone.

The affidavit includes no factual allegations to suggest Defendant Cable visited stash houses, moved drugs or drug proceeds between locations, or made any other movements as part of the drug trafficking. Merely conferencing together two people on a phone (albeit for illegal purposes) does not make the location from which the call was made (let alone movements while not using the phone) relevant to the investigation.

The court reverses this part of the magistrate’s recommendation. Probable cause isn’t an extremely high bar to clear. But no effort was made to step over it here.

In a conclusory manner, the affiant characterizes Defendant Cable as a “courier” but provides no basis for that assertion, and the other facts alleged do not support it.

But even though this is (at least partly) a suppression order, there is no suppression here. Good faith trumps bad police work and the location data survives to be used against Cable during her jury trial.

Having reviewed the warrant, the Court concludes that, even if the warrant lacked probable cause, the government would nonetheless be entitled to the benefit of the good-faith exception.

Welp. That’s the way it goes sometimes. Good faith beats bad warrant in a court of law. However, not every decision goes this way. And cops who think an affidavit needs nothing more than boilerplate with some “training and experience” seasoning need to be called out by courts more often, even if the end result is judicial forgiveness. If they’ve been told once, they’re not nearly as likely to be considered to be acting in “good faith” the next time around.

Filed Under: 4th amendment, angela cable, csli, location, probable cause, warrants

from the never-use-what3words dept

A couple of years ago we wrote about What3Words, noting that it was a clever system that created an easy way for people to share exact locations in an easily communicated manner (every bit of the globe can be described with just 3 words — so something like best.tech.blog is a tiny plot near Hanover, Ontario). While part of this just feels like fun, a key part of the company’s marketing message is that the system is useful in emergency situations where someone needs to communicate a very exact location quickly and easily.

However, as we noted in our article, as neat and clever as the idea is, it’s very, very proprietary, and that could lead to serious concerns for anyone using it. In our article, we wrote about a bunch of reasons why What3Words and its closed nature could lead to problems — including the fact that the earth is not static and things move around all the time, such that these 3 word identifiers may not actually remain accurate. But there were other problems as well.
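To make the general idea concrete, here is a toy sketch of how a What3Words-style scheme can work: number every grid cell on the globe, then encode the cell number as a word triple using a fixed wordlist. This is emphatically not What3Words’ actual (proprietary) algorithm — the wordlist and encoding below are entirely hypothetical, just enough to illustrate the concept.

```python
# Toy illustration of a What3Words-style addressing scheme.
# NOT the company's proprietary algorithm; the wordlist and the
# base-N encoding here are invented for demonstration purposes.

WORDLIST = ["apple", "bridge", "cloud", "delta",
            "ember", "flint", "grove", "harbor"]
N = len(WORDLIST)  # a real system would use thousands of words

def cell_to_words(cell_index):
    """Encode a grid-cell index as three dot-separated words
    (treating the index as three base-N digits)."""
    assert 0 <= cell_index < N ** 3
    a, rest = divmod(cell_index, N * N)
    b, c = divmod(rest, N)
    return f"{WORDLIST[a]}.{WORDLIST[b]}.{WORDLIST[c]}"

def words_to_cell(address):
    """Decode a three-word address back to its cell index."""
    a, b, c = (WORDLIST.index(w) for w in address.split("."))
    return (a * N + b) * N + c

print(cell_to_words(42))                 # "apple.flint.cloud"
print(words_to_cell("apple.flint.cloud"))  # 42
```

The round trip is lossless by construction — which is also why a closed implementation is so frustrating: nothing about the core idea requires secrecy.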

And, apparently one of those problems is that they’re censorial legal bullies. Zack Whittaker has the unfortunate story of how What3Words unleashed its legal threat monkeys on a security researcher named Aaron Toponce. Toponce had been working with some other security researchers who had been highlighting some potentially dangerous flaws in the What3Words system beyond those we had mentioned a few years back. The key problem was that some very similar 3 word combos were very close to one another, such that someone relying on them in an emergency could risk sending people to the wrong location.

The company insists that this is rare, but the research (mainly done by researcher Andrew Tierney) indicates otherwise. He seemed to find a fairly large number of similar 3 word combos near each other. You can really see this when Tierney maps out some closely related word combos:

When this happens, you get cells with these offset areas *very* closely matched.

We can see that the row above the banding has a "q" (the value on "n" on the lower left) that is approximately 14,560,000 lower than the cell below. pic.twitter.com/pYumzdxyTh

— Cybergibbons (@cybergibbons) April 27, 2021

In a follow up article, Tierney detailed a bunch of examples where this confusion could be dangerous. Some of them are really striking. Here’s just one:

“I think I’m having a heart attack. I’m walking at North Mountain Park. Deep Pinks Start.” – 1053m.

(Try reading both out)

https://what3words.com/deep.pink.start

https://what3words.com/deep.pinks.start
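The failure mode here is easy to characterize: two valid addresses that differ only by a plural are nearly indistinguishable when spoken aloud. A rough sketch of the kind of check this research implies — a hypothetical helper, not part of any What3Words or Tierney tooling — might look like:

```python
# Sketch of the confusability problem: two 3-word addresses that
# differ only by a trailing "s" in one word position are easy to
# mishear over the phone. Hypothetical illustration only.

def confusable(addr_a, addr_b):
    """True if the two 3-word addresses differ only by a trailing
    's' in exactly one word position."""
    words_a, words_b = addr_a.split("."), addr_b.split(".")
    diffs = [(a, b) for a, b in zip(words_a, words_b) if a != b]
    if len(diffs) != 1:
        return False
    a, b = diffs[0]
    return a + "s" == b or b + "s" == a

print(confusable("deep.pink.start", "deep.pinks.start"))  # True
print(confusable("deep.pink.start", "wide.blue.end"))     # False
```

In an open system, researchers could run exactly this kind of audit across the full address space and measure how often confusable pairs land dangerously close together — which is precisely the analysis the company’s legal threats made harder.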

Anyway, Toponce had been tweeting about Tierney’s findings, and talked about WhatFreeWords, which had been “an open-source, compatible implementation of the What3Words geocoding algorithm.” It was a reverse engineered version of the proprietary What3Words system. That tool was created back in 2019, but a week after it went online, What3Words lawyers sent incredibly overbroad takedown letters about it to everyone who had anything even remotely connected to WhatFreeWords, and had it pulled offline basically everywhere.

First up: this is ridiculous. While reverse engineering is unfortunately fraught with legal risk, there are many areas in which it is perfectly legal. And it seems like the WhatFreeWords implementation should be legal. But it appeared to have been a fun side project, and not worth the legal headache.

Even though WhatFreeWords was disappeared from the world in late 2019, it appears that Toponce still had some of the code. So in tweeting about Tierney’s research, he offered up the tool to researchers to help investigate more problems with What3Words, similar to what Tierney had found.

And that’s when What3Words’ lawyers pounced. And, in pouncing, the mere chilling effects of the legal threat worked:

I've been served legal threats by @what3words. Both via email and post.

I am complying with all their demands. This is not a battle worth fighting.

Just let it be known however, they are evil.

— Aaron Toponce (@AaronToponce) April 30, 2021

Toponce also admits he couldn’t even sleep after receiving the threat letter. This is an underappreciated aspect of the insanely litigious nature of many censorial bullies these days. Even if you’re in the right, getting sued can be completely destructive. Toponce was trying to help security researchers better research an application that is promoted for being safe and security researchers should be allowed to make use of reverse engineering to do exactly that. But, What3Words and their bullying lawyers made sure that’s impossible.

To be fair to their bullying lawyers, the threat letter is not as aggressive as some others, and they even make it explicit that they are not demanding that Toponce stop criticizing the company:

In this connection, and to be clear, our client does not require the deletion of your criticism of and feedback in respect of its service.

But… it still makes pretty stringent demands.

i) delete all copies of “What Free Words” and any other works derivative of W3W’s software and wordlist presently in your possession or under your control; ii) confirm, to the best of your knowledge, the identities of all parties / individuals to whom you have provided copies or derivations of the software and/or wordlist; iii) agree that you will not in the future make further copies or derivations of and/or distribute copies or derivations of the software and/or wordlist; iv) delete any Tweets or other online references made to the copies / derivations of our client’s software and wordlist and that are connected with or emanate from the “What Free Words”, and agree not to make similar representations in the future.

Of course, there are some questions about what intellectual property is actually being infringed upon here as well. When the company’s lawyers got the original WhatFreeWords site taken down, they claimed copyright and trademark rights, though extraordinarily broadly. They claim their own software is covered by copyright, but WhatFreeWords isn’t using their software. They also claim that all the 3 word combos are covered by copyright and… eh… it might be in the UK where W3W is based, but in the US, it would be harder to claim that three random word combos are creative enough to get a copyright. Also, in the US there would be a strong fair use defense. Unfortunately, in the UK, there is a ridiculous concept known as “database rights” that let you claim a right over a mere collection of things, even if you have no claim to the underlying rights. But, even so, it seems that there should be a fair use defense here. The UK has a fair dealing exception for research and private study, which seems like it should apply as well.

As for the trademark claims, well, no one’s going to get confused about it, since it’s pretty clear that WhatFreeWords was designed explicitly not to be from What3Words, and in this particular case, it’s not being offered widely, just to knowledgeable security researchers. Even more insane: the original threat letter over WhatFreeWords claimed that there could even be criminal penalties for violating consumer protection laws.

Still, as Mike Dunford notes in his thread about this situation, W3W’s decision to focus on locking up and threatening everyone perhaps explains why so few people know about or use What3Words. Imagine if they had built this as an open tool that others could build on and incorporate into other offerings. Then they could have others experiment and innovate and get more people to adopt it. By making it proprietary, and locking it down with threats and asshole lawyers, there’s simply no reason to bother.

The only proper response to this is never, ever use What3Words for anything that matters. Beyond not giving in to censorial, abusive bullies, their legal reaction to a security researcher doing reverse engineering work to help find potentially dangerous problems with What3Words screams loudly to the world that What3Words has no confidence that its products are safe. They’re scared to death of security researchers being able to really test their work.

Both of these reasons mean that What3Words should be remembered as little more than a failed.dumpster.fire rather than the cool.mapping.idea it could have been.

Filed Under: 3 words, aaron toponce, andrew tierney, bullies, copyright, location, open source, security, threats, trademark, whatfreewords
Companies: what3words

If We're So Worried About TikTok, Why Aren't We Just As Worried About AdTech And Location Data Sales?

from the you're-not-being-consistent dept

Wed, Sep 9th 2020 01:34pm - Karl Bode

We’ve noted a few times how the TikTok ban is largely performative, xenophobic nonsense that operates in a bizarre, facts-optional vacuum.

The biggest pearl clutchers when it comes to the teen dancing app (Josh Hawley, Tom Cotton, etc.) have been utterly absent from (or downright detrimental to) countless other security and privacy reform efforts. Many have opposed even the most basic of privacy rules. They’ve opposed shoring up funding for election security reform. Most are utterly absent when we talk about things like our dodgy satellite network security, the SS7 cellular network flaw exposing wireless communications, or the total lack of any meaningful privacy and security standards for the internet of broken things.

As in, most of the “experts” and politicians who think banning TikTok is a good idea don’t seem to realize it’s not going to genuinely accomplish much in full context. Chinese intelligence can still glean this (and much more data) from a wide variety of sources thanks to our wholesale privacy and security failures on countless other fronts. It’s kind of like banning sugary soda to put out a forest fire, or spitting at a thunderstorm to slow its advance over the horizon.

Yet the latest case in point: Joseph Cox at Motherboard (who has been an absolute wrecking ball on this beat) discovered that private intel firms have been able to easily buy user location data gleaned from phone apps, allowing the tracking of users in immensely granular fashion:

“A threat intelligence firm called HYAS, a private company that tries to prevent or investigates hacks against its clients, is buying location data harvested from ordinary apps installed on peoples’ phones around the world, and using it to unmask hackers. The company is a business, not a law enforcement agency, and claims to be able to track people to their “doorstep.”

This, of course, comes on the heels of countless scandals of this type, where app makers, telecoms, or other companies collect and monetize your sensitive location data with zero meaningful oversight and little to no transparency, selling it to any nitwit with a nickel. The global adtech location surveillance market is such a complicated mess, even experts and journalists have a hard time tracking what data is being collected and who it’s being sold to:

“The news highlights the complex supply chain and sale of location data, traveling from apps whose users are in some cases unaware that the software is selling their location, through to data brokers, and finally to end clients who use the data itself. The news also shows that while some location firms repeatedly reassure the public that their data is focused on the high level, aggregated, pseudonymous tracking of groups of people, some companies do buy and use location data from a largely unregulated market explicitly for the purpose of identifying specific individuals.”

Do folks hyperventilating about TikTok not realize Chinese intelligence can also access this data? If so, why haven’t I seen equal histrionics in relation to location data from folks like Josh Hawley? This massive, international network of telecoms, adtech vendors, and data brokers are engaged in wholesale, largely unaccountable surveillance of vast swaths of human beings. And yet, outside of a few lawmakers like Ron Wyden, countless lawmakers and regulators who’ve risked embolism with their TikTok outrage have been utterly silent when it comes to the threats posed by companies like HYAS:

“HYAS differs in that it provides a concrete example of a company deliberately sourcing mobile phone location data with the intention of identifying and pinpointing particular people and providing that service to its own clients. Independently of Motherboard, the office of Senator Ron Wyden, which has been investigating the location data market, also discovered HYAS was using mobile location data. A Wyden aide said they had spoken with HYAS about the use of the data. HYAS said the mobile location data is used to unmask people who may be using a Virtual Private Network (VPN) to hide their identity, according to the Wyden aide.”

Either you care about U.S. data security and privacy or you don’t, and I’m beginning to suspect that most of the folks who think TikTok poses an existential threat to the republic aren’t engaging in a good faith understanding of the actual problem. With no privacy rules, transparency, or consistency we’re a sitting duck for malicious actors, be they state-sponsored hackers, sex offending jackasses, or U.S. law enforcement officers out over their skis.

Want to genuinely shore up U.S. security and privacy problems? Pass a simple but meaningful privacy law for the internet era. Fund election security reform. Shore up our communications network security. Stop hamstringing and defunding privacy regulators at the FTC. Mandate transparency in the adtech market. Create some unified standards for the privacy dumpster fire that is the internet of things. Hyperventilating over a single Chinese-owned teen dancing app, then acting as if you’ve cured cancer is dangerous, counterproductive, and aggressively stupid in full context.

Filed Under: adtech, data, data brokers, location, location data, privacy
Companies: tiktok

These Wireless Location Data Scandals Are Going To Be A Very Big Problem For Ajit Pai

from the get-your-popcorn-ready dept

Tue, Jan 29th 2019 06:41am - Karl Bode

It took the press the better part of a decade to finally realize that cellular carriers have been routinely hoovering up and selling your daily location data to every nitwit on the planet with zero meaningful ethical guidelines or oversight. And while this stuff is certainly nothing new, the recent Motherboard report showing how cavalierly your private data is bought and sold along a massive chain of shady operators seems to have finally woken everybody up on the subject.

Whether we actually do something about it is another issue entirely.

Pressure has started to mount on FCC boss Ajit Pai in particular. Why? While people rightfully obsessed on Pai’s attacks on net neutrality, the repeal itself effectively involved neutering most FCC oversight of ISPs and wireless carriers, then shoveling any remaining authority to an FTC that lacks the authority or resources to really police telecom. This neutering of already tepid oversight was always the telecom lobby’s plan, and unless you’ve got a severe case of denial, it’s obvious the Pai FCC acted as a mindless rubber stamp in helping the industry’s biggest players achieve this goal.

Of course the GOP helped as well, by quickly kowtowing to telecom sector lobbyists and, in March of 2017, voting to kill some fairly modest FCC privacy rules before they could take effect. Those rules, in addition to some other requirements, would have given consumers far more power over how their location data is shared and sold among what, in some instances, has been proven to be a chain that in at least one case was some 70 companies long.

The problem for Pai is he now has to go before Congress and explain how demolishing the FCC’s ability to actually police this problem serves the common good. And, as Gizmodo notes, how he worked very closely with industry to specifically ensure these companies can’t be seriously held accountable for a long, long history of really dubious behavior:

“To put it another way, the feckless ineptitude displayed by Pai since this phone-tracking scandal first broke nine months ago is not unintentional but reflects the precise level of power major telecoms wanted him to wield and no more. This circumstance, under which virtually anyone can pay money to physically locate the owner of a mobile phone, was engineered, as was the crippling of the agency that, under the former administration, would have had complete authority to pursue and punish those responsible.”

Pai likely realizes the bad optics of this perfect storm. Under Pai, the FCC has largely ignored media requests for comments from reporters over the last few years, unless they were coming from outlets unwilling to criticize the agency (The Daily Caller comes quickly to mind). That changed this week, however, when Pai was forced to answer the FCC press mailbox directly to assure everybody he’d be getting to the bottom of the location data scandal just as soon as the government re-opens:

Updated with comment from Ajit Pai: “The FCC already has been investigating this issue. Unfortunately, the investigation had to be suspended because of the partial government shutdown. It will resume once the shutdown has ended.” https://t.co/E5htowU8JS

— Motherboard (@motherboard) January 24, 2019

It’s going to take a thorough investigation to explore the scope of the problem and ensure carriers are living up to their promises to cease this data collection and sale. The problem: you’d be pretty hard pressed to find a lawyer that believes the FCC has enough remaining authority to actually do anything about this, thanks in large part to Pai’s efforts to neuter the agency at carrier lobbyist request. The FCC does have some remaining authority under Customer Proprietary Network Information (CPNI) rules (expanded in 2005 to include location data), but it’s far from clear that’s enough, or that Pai would act anyway.

While Pai is busy trying to tapdance around those questions, the lawsuit over net neutrality will also be heating up, showcasing how Pai’s FCC engaged in all manner of dubious behavior from concocting a DDOS attack to blocking inquiries into comment fraud to try and downplay massive backlash to his assault on net neutrality. On top of the fact his agency made up tons of data to justify the extremely unpopular decision. None of this is going to be a particularly enjoyable ride for Mr. “internet freedom,” whose post-FCC political ambitions couldn’t be more obvious.

Filed Under: ajit pai, fcc, location, location data, privacy

Ajit Pai Refuses To Brief Congress On What He Plans To Do About Wireless Location Data Scandals

from the thanks-but-no-thanks dept

Wed, Jan 16th 2019 06:22am - Karl Bode

So last week yet another location data scandal emerged for the wireless industry, highlighting once again how carriers are collecting your location data, then selling it to a universe of sometimes shady partners with little to no oversight or accountability. Like the Securus and LocationSmart scandals before it, last week’s Motherboard report highlighted how all manner of dubious dudebros (and law enforcement officers) have been abusing this data for years, and the Ajit Pai FCC has yet to so much as mention the problem, much less spend a single calorie addressing it in any meaningful way.

Shortly after the scandal broke last week, Frank Pallone, the Chair of the House Committee on Energy and Commerce, asked Pai (pdf) to brief Congress on the steps the agency was taking to address the wireless sector’s long-standing failure to adequately address location data abuse. Pai’s response? Yeah, no thanks.

In a statement issued by Pallone, he says Pai’s office claimed that since the location data scandal wasn’t putting lives at risk, Pai could not attend such a briefing during the government shutdown:

“Today, FCC Chairman Ajit Pai refused to brief Energy and Commerce Committee staff on the real-time tracking of cell phone location, as reported by Motherboard last week. In a phone conversation today, his staff asserted that these egregious actions are not a threat to the safety of human life or property that the FCC will address during the Trump shutdown.”

While the FCC is operating with a skeleton crew right now due to the shutdown, there’s nothing actually stopping Pai from wandering down the road to answer a few questions, something Pallone was quick to highlight in his statement:

“There’s nothing in the law that should stop the Chairman personally from meeting about this serious threat that could allow criminals to track the location of police officers on patrol, victims of domestic abuse, or foreign adversaries to track military personnel on American soil. The Committee will continue to press the FCC to prioritize public safety, national security, and protecting consumers.”

Granted, Pai wasn’t doing much about this problem when the government was open, either.

Academics and other privacy experts have told me this could easily be addressed using the FCC and FTC authority we already have (read: we don’t even need a new privacy law); we’ve simply chosen to kowtow to telecom lobbyists instead. In fact, the FCC’s privacy rules would have addressed the issue by giving consumers more control over how their location data is shared and sold, but sector lobbyists made quick work of those rules back in 2017. Even having Pai publicly state that this behavior is unacceptable might go a long way toward addressing the issue, though he has yet to do even that.

Pai has made it fairly clear by now that he sees government consumer protection oversight as largely unnecessary, and all criticism of his unpopular policies as entirely political in nature, therefore making it OK to ignore (the myopia of that belief system most obviously exemplified by his attacks on net neutrality). As a result, you should expect the FCC to continue to do little to nothing about location data scandals. At least until there are enough scandals of this type to push public outrage past the breaking point, finally making it clear that doing absolutely nothing is no longer an option. So, 2025 or so?

Filed Under: ajit pai, congress, e&c, fcc, frank pallone, house energy & commerce committee, location, location data, privacy, shutdown

How Bike-Sharing Services And Electric Vehicles Are Sending Personal Data To The Chinese Government

from the why-we-can't-have-nice-things dept

A year ago, Techdirt wrote about the interesting economics of bike-sharing services in China. As the post noted, competition is fierce, and the profit margins slim. The real money may be coming from gathering information about where people riding these bikes go, and what they may be doing, and selling it to companies and government departments. As we warned, this was something that customers in the West might like to bear in mind as these Chinese bike-sharing startups expand abroad. And now, the privacy expert Alexander Hanff has come across exactly this problem with the Berlin service of the world’s largest bike-sharing operator, Mobike:

data [from the associated Mobike smartphone app] is sent back to Mobike’s servers in China, it is shared with multiple third parties (the privacy policy limits this sharing in no way whatsoever) and they are using what is effectively a social credit system to decrease your “score” if you prop the bike against a lamp post to go and buy a loaf of bread.

Detailed location data of this kind is far from innocuous. It can be mined to provide a disconcertingly complete picture of your habits and life:

through the collection and analysis of this data the Chinese Government now likely have access to your name, address (yes it will track your address based on the location data it collects), where you work, what devices you use, who your friends are (yes it will track the places you regularly stop and if they are residential it is likely they will be friends and family). They also buy data from other sources to find out more information by combining this data with the data they collect directly. They know what your routines are such as when you are likely to be out of the house either at work, shopping or engaging in social activities; and for how long.

As Hanff points out, most of this is likely to be illegal under the EU’s GDPR. But Mobike’s services are available around the world, including in the US. Although Mobike’s practices can be challenged in the EU, elsewhere there may be little that can be done.

And if you think the surveillance made possible by bike sharing is bad, wait till you see what can be done with larger vehicles. As many people have noted, today’s complex devices no longer have computers built in: they are, essentially, computers with specialized capabilities. For example, electric cars are computers with an engine and wheels. That means they are constantly producing large quantities of highly-detailed data about every aspect of the vehicle’s activity. As such, the data from electric cars is a powerful tool for surveillance even deeper than that offered by bike sharing. According to a recent article from Associated Press, it is an opportunity that the authorities have been quick to seize in China:

More than 200 manufacturers, including Tesla, Volkswagen, BMW, Daimler, Ford, General Motors, Nissan, Mitsubishi and U.S.-listed electric vehicle start-up NIO, transmit position information and dozens of other data points to [Chinese] government-backed monitoring centers, The Associated Press has found. Generally, it happens without car owners’ knowledge.

What both these stories reveal is how the addition of digital capabilities to everyday objects — either indirectly through smartphone apps, as with Mobike, or directly in the case of computerized electric vehicles — brings with it the risk of pervasive monitoring by companies and the authorities. It’s part of a much larger problem of how to enjoy the benefits of amazing technology without paying an unacceptably high price in terms of sacrificing privacy.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: bike sharing, china, electric vehicles, gdpr, location, location info, privacy

Google's Location Info Failure Might Interest The FTC

from the do-better dept

Earlier this week, the Associated Press published a story revealing that, even for users (on both Android and iPhone) who had turned off location tracking, Google was still tracking their location in some cases:

Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.”

That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. (It’s possible, although laborious, to delete it.)

For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account.

If you squint, you can kind of see why this might have happened. Apps like Maps and weather more or less need your location info to work well (though, the search part is a bit more baffling). But, even so, this seems like a huge blunder by Google, a company that should absolutely know better. The latest, of course, is that Google has quietly moved to update the language that users see to “clarify” that some location data may still be recorded:

But its help page for the Location History setting now states: “This setting does not affect other location services on your device.” It also acknowledges that “some location data may be saved as part of your activity on other services, like Search and Maps.”

Previously, the page stated: “With Location History off, the places you go are no longer stored.”

It’s entirely possible, if not likely, that the location history feature is completely disconnected from the location-specific data within these other apps. But, still, the average consumer is not going to realize that. Indeed, even the tech savvy consumer is unlikely to understand that. And Google’s new “clarification” isn’t really going to do a very good job of actually clarifying this for people either. Google has certainly done a better job than a lot of other companies, both in providing transparency about what data it collects on you and in giving you controls to see that data and delete some of it. But this was still a boneheaded move, and it’s simply ridiculous that someone at the company didn’t spot this issue and do something about it sooner.

As I’ve been pointing out for a while, a big part of why so many people are concerned about privacy on digital services is because those services have done a piss poor job both of informing users about what’s happening and of giving them more control over the usage of their data. This kind of situation is even worse, in that under the guise of giving users control (a good thing), Google appears to have muddied the waters over what information it was actually collecting.

I also wonder if this will make the FTC’s ears perk up. There is still an FTC consent decree that binds the company with regard to certain privacy practices, including that the company “shall not misrepresent in any manner, expressly or by implication… the extent to which consumers may exercise control over the collection, use, or disclosure of covered information.” And “covered information” includes “physical location.”

Would these practices count as misrepresenting the extent to which consumers could stop Google from collecting location info? It certainly seems like a case could be made that it does. There are many areas where it feels like people attack the big internet companies just because they’re big and easy targets. Sometimes those attacks are made without understanding the underlying issues. But sometimes, I’m amazed at how these companies fail to take a thorough look at their own practices. And this is one of those cases.

Filed Under: ftc, google maps, location, location info, privacy, transparency
Companies: google

DHS Deploying Stingrays Hundreds Of Times A Year

from the not-so-much-natsec-as-it-is-basic-warrant-service dept

It’s no secret most law enforcement agencies own or have access to Stingray devices. But some deployment totals can still raise eyebrows. The Baltimore PD, for example, deployed Stingrays 4,300 times over an 8-year period — more than once per day. And it hid these behind pen register orders, so that judges, defendants, and defense lawyers had no idea exactly how the PD located suspects.

Thanks to Buzzfeed’s FOIA request, we now know another government agency has been firing up its Stingrays at least once a day. And it’s one of the nation’s largest.

A document obtained by BuzzFeed News shows the US Department of Homeland Security used secretive cell phone–tracking devices nationwide more than 1,800 times from 2013 to 2017.

The information, obtained through a Freedom of Information Act request, shows that Homeland Security Investigations, a major investigative arm of DHS, used what’s known as cell-site simulator over-the-air technology 1,885 times from Jan. 1, 2013, to Oct. 11, 2017 throughout the US.

There’s not a lot to be gleaned from the document [PDF], other than the total number of deployments and cities where they may have been deployed. Given the DHS’s purview, one would assume these are deployed only in serious criminal investigations. That assumption would be wrong, as DHS component ICE has already shown.

Sen. Ron Wyden recently asked US Immigration and Customs Enforcement for information on the agency’s use of the devices after it was determined ICE used a cell-site simulator to arrest an undocumented immigrant. Among the questions Wyden sought answers to was what steps the agency had taken to limit interference to the phones of people not being investigated.

ICE may be making the most use of DHS Stingray devices. In its answers to Wyden’s questions, the agency made it clear it uses Stingrays for all sorts of banal things, like tracking down pretty much anyone it’s looking for or simply sniffing out phone details for future subpoenas.

Of course, while it’s doing this hundreds of times a year, the phone service of everyone nearby that DHS agencies aren’t looking for is interrupted. But that’s OK with ICE, because the only phone service anyone really needs is emergency service, according to director Thomas Homan.

“In all circumstances, devices are always able to dial 911 without any disruption of service,” Homan said.

So, not really a problem, according to ICE — even if ICE is doing nothing more than readying a subpoena.

This is why the Supreme Court’s take on Carpenter will be important. A ruling that follows the current view on third party data might encourage the federal government to ditch its voluntary Stingray warrant requirement. It would also encourage other law enforcement agencies to continue hiding evidence of Stingray use behind pen register requests, leading defendants and presiding judges to believe that phones tracked in real time were actually located using nothing more than historical cell site location data.

Filed Under: dhs, imsi catcher, location, privacy, stingrays

Investigation Finds Google Collected Location Data Even With Location Services Turned Off

from the questionable-practice-raises-Fourth-Amendment-questions dept

What if you take every precaution you can possibly take to avoid leaving a digital trail of your movements… and it still doesn’t matter?

Many people realize that smartphones track their locations. But what if you actively turn off location services, haven’t used any apps, and haven’t even inserted a carrier SIM card?

Even if you take all of those precautions, phones running Android software gather data about your location and send it back to Google when they’re connected to the internet, a Quartz investigation has revealed.

Since the beginning of 2017, Android phones have been collecting the addresses of nearby cellular towers—even when location services are disabled—and sending that data back to Google.

So much for going off the grid. There are some caveats to Google’s permissionless collection of cell site location data, with the most significant being the fact that Google didn’t store the auto-collected cell tower info. That doesn’t excuse the practice, but it at least keeps it from becoming tracking data the government can access without a warrant.

Google’s collection of cell tower data occurred when notifications were pushed or phone users utilized the phone’s built-in messaging service. In both cases, it’s reasonable to assume users weren’t expecting Google to be collecting this data. (It wouldn’t necessarily be reasonable to assume cell providers weren’t, as that’s what’s needed to deliver messages and notifications if the user isn’t using a WiFi connection.) But no one would reasonably assume the operating system would still send cell tower info to Google with the SIM card pulled.

This is a troubling practice to be engaged in, no matter how temporary the storage of cell site data. It flies directly in the face of what phone users expect when they shut off location services or undertake other affirmative actions to minimize their digital footprint.

This does raise some interesting Fourth Amendment questions, even if the circumstances under which the collection occurred make it unlikely these factors will ever be the centerpiece of a motion to suppress evidence. US courts have made it clear on multiple occasions there’s no expectation of privacy in cell site location records. Judges have stated cell phone users should know cell companies collect tower location data to provide service to their phones. According to this line of thinking, the third party location records have no expectation of privacy because phone users are aware of the realities of cell phone usage: phones connect to towers and create records of the tower’s location.

The question in this case would be whether the expectation of privacy is still nonexistent when phone users undertake deliberate efforts to disable the collection of location records. It would seem these efforts would restore an expectation of privacy — at least if judges are going to be consistent and intellectually honest. As some judges have pointed out, defendants who don’t like being tracked by their cell phones can just not use them. (This is still a somewhat ridiculous assertion — roughly comparable to the TSA suggesting people who don’t like invasive searches/biometric data gathering can just choose to not fly. Both ignore the realities of the modern world.)

If a person makes efforts to prevent collection of location info and a company does it anyway, should law enforcement still have warrantless access to these records? This remains a hypothetical question, but given the amount of surreptitious tracking performed by a number of tech companies (providers, ad networks, etc.), it won’t remain hypothetical forever.

Phones generate a wealth of third party records just a subpoena away from being in the government’s possession. Users cannot possibly be aware of all the information gathered by multiple companies each time they use their smartphone, but they do “reasonably expect” shutting off location services means no one (outside of their service provider) will be gathering location data. Would someone, in performing these actions, be granted a higher expectation of privacy as a result of their actions? Or would a court treat savvier digital natives the way it treats the unwashed masses who make zero effort to limit collection of location info?

Filed Under: 4th amendment, android, location, privacy, tracking
Companies: google