apple – Techdirt

Apple Dumps Suit Against NSO Group After Israeli Government Walks Off With A Bunch Of The Company’s Files

from the friends-in-the-highest-places dept

Well, it worked. We’ll have to see how this plays out in the lawsuit WhatsApp brought against NSO Group, but it has managed to shed one litigant thanks to intervention from the home team: the Israeli government.

In July, documents obtained by Distributed Denial of Secrets (DDoS) revealed the desperate measures NSO Group deployed to avoid having to turn over internal information during discovery in multiple lawsuits, including one filed by Apple. Knowing that discovery was inevitable, NSO met with Israeli government officials and asked them to secure a blocking order from the nation’s courts to prevent having to comply with discovery requests.

The government secured these orders and went to work shortly after WhatsApp served NSO with its discovery requests. According to the paperwork, the government needed to seize a bunch of the company’s internal documents for “national security” reasons, speculating disingenuously and wildly that turning over any information about NSO’s Pegasus phone-hacking malware would make the nation itself less secure.

Shortly thereafter, the Israeli government engaged in a performative raid of NSO’s offices to seize anything NSO felt might be disadvantageous in these lawsuits. WhatsApp is still in the litigation game, hoping to obtain anything the Israeli government hasn’t already seized that might relate to its claims of unauthorized access by NSO customers deploying Pegasus malware via the company’s US servers.

Apple, however, has decided it’s not going to spend any more money or time trying to win a rigged game, as Joseph Menn reports for the Washington Post.

Apple asked a court Friday to dismiss its three-year-old hacking lawsuit against spyware pioneer NSO Group, arguing that it might never be able to get the most critical files about NSO’s Pegasus surveillance tool and that its own disclosures could aid NSO and its increasing number of rivals.

[…]

“While Apple takes no position on the truth or falsity of the Guardian Story described above, its existence presents cause for concern about the potential for Apple to obtain the discovery it needs,” the iPhone maker wrote in its filing Friday. Israeli officials have not disputed the authenticity of the documents but have denied interfering in the U.S. litigation.

As for that last sentence, that’s a dodge. Of course the Israeli government interfered with this litigation. That it didn’t insert itself directly into either of these cases doesn’t change the fact that the raid it performed at NSO Group’s request means the company no longer has the documents sought by US litigants in its possession.

The more surprising assertion is Apple’s: that part of its reason for dropping the lawsuit is to avoid having to turn over any of its own stuff in response to discovery requests. But the rationale is very much an Apple thing: the company feels giving more information to NSO — especially in open court — will just be used to facilitate the creation of new hacking tools for NSO (or its competitors) to use against Apple’s customers.

That’s more of a concern for Apple, which is seeking to protect an entire operating system. WhatsApp’s concerns are more limited. While it too would probably prefer any information it hands over in court not be used against it by malware merchants, it only has to worry about a single service, rather than the underlying infrastructure (so to speak) shared by dozens of Apple products.

Discovery is underway in the WhatsApp case, so hopefully we’ll be seeing some interesting developments there soon. But given what’s happened here, NSO and its Israel-based competitors have some really interesting (and disturbing) options when it comes to thwarting lawsuits over the constant abuse of their malware.

Filed Under: israel, lawsuit, malware, pegasus, spyware, surveillance
Companies: apple, nso group

Brazil Bans ExTwitter In Battle With Musk, Takes VPNs & Users Down With It

from the everything-about-this-story-is-terrible dept

In the battle between Elon Musk and Brazilian Supreme Court Justice Alexandre de Moraes, the biggest losers are Brazilians. They are now at risk of being stripped of VPNs while facing massive fines if they somehow get around a countrywide ban on ExTwitter.

Yesterday, I wrote about the standoff between Elon Musk and Brazil, and how neither side comes out of it looking very good. Where it was left was that Brazilian Supreme Court Justice Alexandre de Moraes was (1) freezing Starlink assets and (2) threatening to ban ExTwitter entirely from the country.

As we noted, there was nothing particularly new about the second point. Brazil has done this in the past with WhatsApp and Telegram. The freezing of Starlink’s assets already appeared to be an overreach and suggested how far Moraes would be willing to go in this posturing battle.

Apparently, he was willing to go even further, to the point of potentially blocking VPNs entirely.

On Friday, it was announced that ExTwitter would indeed be banned across Brazil. But what may be most interesting (or, rather, scary) is the method. First, ISPs and app stores have been ordered to block access to the app within five days. That’s not all that new, even if it is generally problematic. Countries simply should not be banning apps on the open internet.

But then it gets worse. The original ruling also told app stores that they need to ban VPNs. Here’s a translation of the order.

(2.1) APPLE and GOOGLE in Brazil to insert technological obstacles capable of making it impossible for users of the IOS (APPLE) and ANDROID (GOOGLE) systems to use the “X” application and remove the “X” application from the APPLE STORE and GOOGLE PLAY STORE stores and, similarly, in relation to applications that enable the use of VPN (‘virtual private network’), such as, for example: Proton VPN, Express VPN, NordVPN, Surfshark, TOTALVPN, Atlas VPN, Bitdefender VPN;

(2.2) That manage backbone access services in Brazil, so that they insert technological obstacles capable of making it impossible for users of the “X” application to use it;

(2.3) Internet service providers, represented by their Presidents, such as ALGAR TELECOM, OI, SKY, LIVE TIM, VIVO, CLARO, NET VIRTUA, GVT, etc…, to insert technological obstacles capable of making the use of the “X” application unfeasible; and

(2.4) Those who manage personal mobile service and switched fixed telephone service, to insert technological obstacles capable of making the use of the “X” application unfeasible;

I initially thought that first section couldn’t possibly mean that app stores also had to ban VPNs. But that’s what it pretty clearly says and what multiple Brazilian reports claim.

The end result is taking away VPNs from millions of Brazilians, which is an awful lot of collateral damage just because Elon Musk is a jackass. VPNs have many legitimate uses other than accessing ExTwitter after a ban in Brazil.

A few hours after the decision, Moraes seemed to walk back that section of the ruling, though perhaps only temporarily. In a second short ruling, he “suspended the execution” of that item “until there is a statement from the parties in the proceedings” in order to “avoid any unnecessary and reversible inconvenience to third-party companies.”

In other words, after Moraes hears from “the parties in the proceedings,” the VPN ban could come back. It’s unclear if that’s just ExTwitter, or if Apple/Google are included, given they were the ones directed to block VPNs.

Then, the original order (in a part that has not been rescinded) also threatens to fine anyone who manages to get around the block nearly $9,000 per day:

(3) THE APPLICATION OF A DAILY FINE of R$50,000.00 (fifty thousand reais) to individuals and legal entities that engage in conduct involving the use of technological subterfuges to continue communications carried out by “X”, such as the use of VPN (‘virtual private network’), without prejudice to other civil and criminal sanctions, as provided by law.

When Brazil tried banning Telegram recently, this element was in there too, with the fines twice as high, though apparently it was never actually used.

Either way, this got a lot of attention very quickly. The NY Times notes that civil society folks are spooked by the VPN demands:

“This is the first time they asked for VPN blocking. This is something unprecedented,” said Paula Bernardi, a Brazil-based policy adviser at the Internet Society, which pushes for an open internet. She said the Brazilian government could potentially now ask VPN providers to reveal who used their services to access X. “That’s going to be a very heated debate,” she said.

I was already concerned about the efforts by Moraes here, even if Elon is being terrible in response. But it’s fucking crazy that he ordered Google and Apple to ban VPNs entirely and then also threatened huge fines to users of VPNs (even if there’s a low likelihood of it being enforced).

One other oddity in all this is that Apple apparently started banning VPNs from its iOS App Store last week, perhaps knowing this order was coming. But that only raises even more questions. Did Apple know the details of this “unprecedented” order a week early? Why would it agree to ban VPNs?

As I said on today’s Ctrl-Alt-Speech, neither side looks good here. Now that it’s turned into a kind of schoolyard fight between Moraes and Musk where each one seems to be going further and further to piss off the other, it’s getting worse and worse for the public. Indeed, I’ve seen some speculation that Moraes doesn’t even have the authority to issue such a widespread ban on VPNs, but no one seems to be stopping him either way.

This is no longer about policy or law. It’s just become about egos. And because of that, everyone loses*.

* With the possible exception of Bluesky, which has had a flood of users from Brazil in the past two days. I will remind people that I am on the board of Bluesky, but this is not how I want Bluesky to gain new users. Bluesky itself has many advantages, including that it would be much more difficult to “ban” the service like this given its decentralized nature. But seeing pissing matches between Musk and Brazil seems like an unfortunate way for it to get more attention.

Filed Under: alexandre de moraes, app stores, ban, brazil, countrywide block, elon musk, vpns
Companies: apple, google, twitter, x

Streisand Effect: Apple Gets Concept Renderer To Take Down Concept Art For Being ‘Too Realistic’

from the out-of-the-bag dept

Let’s dial the clock all the way back to 2007 for just a moment. Beyonce was topping the charts with Irreplaceable. Nicolas Cage, the greatest actor of all time, helped make National Treasure: Book of Secrets a hit at the box office. And Apple finally settled a years-long lawsuit against a website called Think Secret, which published rumors it came across about what Apple was going to release next. That settlement included a requirement that Think Secret cease publication, despite it having never leaked anything about Apple directly. Instead, it merely published the leaked information it received. And if that sounds like journalism to you, well, you’re not alone.

But the point is that Apple has a long history of litigious behavior when it comes to leaks and those who report on them. On the flipside, Apple has long had a copacetic view of what are referred to as “concept creators.” These are the folks that, with presumably no insider information or leaks, create mockups of Apple products that don’t exist. Sort of like a near-science fiction wishlist for Apple products, if you will. Or even if you won’t; I don’t care, it’s still what they do.

It appears that those two different types of publication may have inadvertently converged when concept creator Antonio De Rosa produced the following image of what he called the “iPhone Air Flip” phone.

De Rosa created that image as a concept. It’s not a real phone… or is it?

See, Apple’s lawyers contacted him and asked him to alter “some of [his] concepts”, which included that one, based on the reporting I’ve seen. They did so, according to these lawyers, because they were concerned that there would be confusion in the public. Not that they wanted De Rosa to cease creating these sorts of concepts, mind you. They just wanted this one changed.

The official explanation is that because De Rosa’s designs are so widely circulated, they “may actually create consumer confusion”. Apple’s lawyers said that, and also said that they didn’t want De Rosa to stop posting; “we’d rather talk through the nuances of the issue and ideally find a reasonable solution that works for everyone.”

According to De Rosa directly, however, the explanation was a little different.

“Too realistic” is really important here, because it is leading that same public Apple was so afraid of confusing to draw a conclusion that sure seems to make sense to me.

But the conspiracy explanation is that De Rosa’s renders are too good not just because of his obvious talent, but because he’s gotten too close for comfort with some of them. The lawyers don’t appear to have singled out any renders in particular, but if I were a betting woman, my money would be on the iPhone Air.

If that’s indeed the one that’s got Apple worried, that raises another question: why? It’s possible that the concept is incredibly close to what Apple’s making. But it’s also possible – and funnier – that the concept is so pretty that the real folding iPhone won’t be as exciting.

So there are a lot of unknowns here, obviously. Still, for De Rosa to walk away with the idea that any of his images were “too realistic” for Apple’s tastes doesn’t really comport with its claims of customer confusion, unless the image includes something that the real Apple phone doesn’t.

But either way, look! That image in the post? The same one that is now being widely circulated across even more publications, in true Streisand Effect fashion? There are a whole lot more eyes on De Rosa and his renders than there were before Apple got its lawyers involved.

Filed Under: antonio de rosa, iphone, realistic, renders
Companies: apple

Suing Apple To Force It To Scan iCloud For CSAM Is A Catastrophically Bad Idea

from the this-would-make-it-harder-to-catch-criminals dept

There’s a new lawsuit in Northern California federal court that seeks to improve child safety online but could end up backfiring badly if it gets the remedy it seeks. While the plaintiff’s attorneys surely mean well, they don’t seem to understand that they’re playing with fire.

The complaint in the putative class action asserts that Apple has chosen not to invest in preventive measures to keep its iCloud service from being used to store child sex abuse material (CSAM) while cynically rationalizing the choice as pro-privacy. This decision allegedly harmed the Jane Doe plaintiff, a child whom two unknown users contacted on Snapchat to ask for her iCloud ID. They then sent her CSAM over iMessage and got her to create and send them back CSAM of herself. Those iMessage exchanges went undetected, the lawsuit says, because Apple elected not to employ available CSAM detection tools, thus knowingly letting iCloud become “a safe haven for CSAM offenders.” The complaint asserts claims for violations of federal sex trafficking law, two states’ consumer protection laws, and various torts including negligence and products liability.

Here are key passages from the complaint:

[Apple] opts not to adopt industry standards for CSAM detection… [T]his lawsuit … demands that Apple invest in and deploy means to comprehensively … guarantee the safety of children users. … [D]espite knowing that CSAM is proliferating on iCloud, Apple has “chosen not to know” that this is happening … [Apple] does not … scan for CSAM in iCloud. … Even when CSAM solutions … like PhotoDNA[] exist, Apple has chosen not to adopt them. … Apple does not proactively scan its products or services, including storages [sic] or communications, to assist law enforcement to stop child exploitation. …

According to [its] privacy policy, Apple had stated to users that it would screen and scan content to root out child sexual exploitation material. … Apple announced a CSAM scanning tool, dubbed NeuralHash, that would scan images stored on users’ iCloud accounts for CSAM … [but soon] Apple abandoned its CSAM scanning project … it chose to abandon the development of the iCloud CSAM scanning feature … Apple’s Choice Not to Employ CSAM Detection … Is a Business Choice that Apple Made. … Apple … can easily scan for illegal content like CSAM, but Apple chooses not to do so. … Upon information and belief, Apple … allows itself permission to screen or scan content for CSAM content, but has failed to take action to detect and report CSAM on iCloud. …

[Questions presented by this case] include: … whether Defendant has performed its duty to detect and report CSAM to NCMEC [the National Center for Missing and Exploited Children]. … Apple … knew or should have known that it did not have safeguards in place to protect children and minors from CSAM. … Due to Apple’s business and design choices with respect to iCloud, the service has become a go-to destination for … CSAM, resulting in harm for many minors and children [for which Apple should be held strictly liable] … Apple is also liable … for selling defectively designed services. … Apple owed a duty of care … to not violate laws prohibiting the distribution of CSAM and to exercise reasonable care to prevent foreseeable and known harms from CSAM distribution. Apple breached this duty by providing defective[ly] designed services … that render minimal protection from the known harms of CSAM distribution. …

Plaintiff [and the putative class] … pray for judgment against the Defendant as follows: … For [an order] granting declaratory and injunctive relief to Plaintiff as permitted by law or equity, including: Enjoining Defendant from continuing the unlawful practices as set forth herein, until Apple consents under this court’s order to … [a]dopt measures to protect children against the storage and distribution of CSAM on the iCloud … [and] [c]omply with quarterly third-party monitoring to ensure that the iCloud product has reasonably safe and easily accessible mechanisms to combat CSAM….”

What this boils down to: Apple could scan iCloud for CSAM, and has said in the past that it would and that it does, but in reality it chooses not to. The failure to scan is a wrongful act for which Apple should be held liable. Apple has a legal duty to scan iCloud for CSAM, and the court should make Apple start doing so.

This theory is perilously wrong.

The Doe plaintiff’s story is heartbreaking, and it’s true that Apple has long drawn criticism for its approach to balancing multiple values such as privacy, security, child safety, and usability. It is understandable to assume that the answer is for the government, in the form of a court order, to force Apple to strike that balance differently. After all, that is how American society frequently remedies alleged shortcomings in corporate practices.

But this isn’t a case about antitrust, or faulty smartphone audio, or virtual casino apps (as in other recent Apple class actions). Demanding that a court force Apple to change its practices is uniquely infeasible, indeed dangerous, when it comes to detecting illegal material its users store on its services. That’s because this demand presents constitutional issues that other consumer protection matters don’t. Thanks to the Fourth Amendment, the courts cannot force Apple to start scanning iCloud for CSAM; even pressuring it to do so is risky. Compelling the scans would, perversely, make it way harder to convict whoever the scans caught. That’s what makes this lawsuit a catastrophically bad idea.

(The unconstitutional remedy it requests isn’t all that’s wrong with this complaint, mind. Let’s not get into the Section 230 issues it waves away in two conclusory sentences. Or how it mistakes language in Apple’s privacy policy that it “may” use users’ personal information for purposes including CSAM scanning, for an enforceable promise that Apple would do that. Or its disingenuous claim that this isn’t an attack on end-to-end encryption. Or the factually incorrect allegation that “Apple does not proactively scan its products or services” for CSAM at all, when in fact it does for some products. Let’s set all of that aside. For now.)

The Fourth Amendment to the U.S. Constitution protects Americans from unreasonable searches and seizures of our stuff, including our digital devices and files. “Reasonable” generally means there’s a warrant for the search. If a search is unreasonable, the usual remedy is what’s called the exclusionary rule: any evidence turned up through the unconstitutional search can’t be used in court against the person whose rights were violated.

While the Fourth Amendment applies only to the government and not to private actors, the government can’t use a private actor to carry out a search it couldn’t constitutionally do itself. If the government compels or pressures a private actor to search, or the private actor searches primarily to serve the government’s interests rather than its own, then the private actor counts as a government agent for purposes of the search, which must then abide by the Fourth Amendment, otherwise the remedy is exclusion.

If the government – legislative, executive, or judiciary – forces a cloud storage provider to scan users’ files for CSAM, that makes the provider a government agent, meaning the scans require a warrant, which a cloud services company has no power to get, making those scans unconstitutional searches. Any CSAM they find (plus any other downstream evidence stemming from the initial unlawful scan) will probably get excluded, but it’s hard to convict people for CSAM without using the CSAM as evidence, making acquittals likelier. Which defeats the purpose of compelling the scans in the first place.

Congress knows this. That’s why, in the federal statute requiring providers to report CSAM to NCMEC when they find it on their services, there’s an express disclaimer that the law does not mean they must affirmatively search for CSAM. Providers of online services may choose to look for CSAM, and if they find it, they have to report it – but they cannot be forced to look.

Now do you see the problem with the Jane Doe lawsuit against Apple?

This isn’t a novel issue. Techdirt has covered it before. It’s all laid out in a terrific 2021 paper by Jeff Kosseff. I have also discussed this exact topic over and over and over and over and over and over again. As my latest publication (based on interviews with dozens of people) describes, all the stakeholders involved in combating online CSAM – tech companies, law enforcement, prosecutors, NCMEC, etc. – are excruciatingly aware of the “government agent” dilemma, and they all take great care to stay very far away from potentially crossing that constitutional line. Everyone scrupulously preserves the voluntary, independent nature of online platforms’ decisions about whether and how to search for CSAM.

And now here comes this lawsuit like the proverbial bull in a china shop, inviting a federal court to destroy that carefully maintained and exceedingly fragile dynamic. The complaint sneers at Apple’s “business choice” as a wrongful act to be judicially reversed rather than something absolutely crucial to respect.

Fourth Amendment government agency doctrine is well-established, and there are numerous cases applying it in the context of platforms’ CSAM detection practices. Yet Jane Doe’s counsel don’t appear to know the law. For one, their complaint claims that “Apple does not proactively scan its products or services … to assist law enforcement to stop child exploitation.” Scanning to serve law enforcement’s interests would make Apple a government agent. Similarly, the complaint claims Apple “has failed to take action to detect and report CSAM on iCloud,” and asks “whether Defendant has performed its duty to detect and report CSAM to NCMEC.” This conflates two critically distinct actions. Apple does not and cannot have any duty to detect CSAM, as expressly stated in the statute imposing a duty to report CSAM. It’s like these lawyers didn’t even read the entire statute, much less any of the Fourth Amendment jurisprudence that squarely applies to their case.

Any competent plaintiff’s counsel should have figured this out before filing a lawsuit asking a federal court to make Apple start scanning iCloud for CSAM, thereby making Apple a government agent, thereby turning the compelled iCloud scans into unconstitutional searches, thereby making it likelier for any iCloud user who gets caught to walk free, thereby shooting themselves in the foot, doing a disservice to their client, making the situation worse than the status quo, and causing a major setback in the fight for child safety online.

The reason nobody’s filed a lawsuit like this against Apple to date, despite years of complaints from left, right, and center about Apple’s ostensibly lackadaisical approach to CSAM detection in iCloud, isn’t because nobody’s thought of it before. It’s because they thought of it and they did their fucking legal research first. And then they backed away slowly from the computer, grateful to have narrowly avoided turning themselves into useful idiots for pedophiles. But now these lawyers have apparently decided to volunteer as tribute. If their gambit backfires, they’ll be the ones responsible for the consequences.

Riana Pfefferkorn is a policy fellow at Stanford HAI who has written extensively about the Fourth Amendment’s application to online child safety efforts.

Filed Under: 4th amendment, class action, csam, evidence, proactive scanning, scanning
Companies: apple

And Just Like That, PC Emulator Apps Are Allowed On Apple’s App Store After All

from the about-face! dept

It was just a few weeks ago that we were discussing how an update Apple made to its rules for its App Store allowed for some retro-console game emulator apps, but not retro-PC game emulator apps for some reason. When Apple made the policy change, developer Chaoji Li submitted his app, iDOS, for consideration, only to have it rejected. Adding to the frustration were reps for the App Store suggesting that Li “make changes” to the app and resubmit it, but they could not articulate what those changes should be. Elsewhere, Apple pointed out that the policy change specifically allowed for “console” emulators and not PC emulators, though why that particular distinction made it into the policy at all was never explained.

What this all creates is a platform that changes its own policies for its own reasons, all behind an opaque wall, with those attempting to submit to the App Store left in the dark. It all comes off as an organization making these changes haphazardly. That includes welcome changes, like the one that just occurred: Apple has revised the policy yet again such that iDOS is now allowed in the App Store after all.

Developer Chaoji Li’s announcement of iDOS 3’s availability didn’t have a tone of triumph to it, though—more like exhaustion, given the app’s struggles over the years:

“It has been a long wait for common sense to prevail within Apple. As much as I want to celebrate, I still can’t help being a little bit cautious about the future. Are we good from now on?

I hope iDOS can now enjoy its turn to stay and grow.

P.S. Even though words feel inadequate at times, I would like to say thank you to the supporters of iDOS. In many ways, you keep iDOS alive.”

And so now it’s live in the App Store. What caused Apple to change its stance on retro PC emulation? Who knows! Will Apple change its mind somewhere down the road and nix the app from its platform once again? Who knows! Can other PC emulator apps successfully be listed alongside iDOS? Definitely maybe!

But at least for now we can recognize that the policy change from Apple is a good one. There is zero reason why console emulators should be allowed but not an app like iDOS. But to foster a really healthy ecosystem for its App Store, it sure would help for Apple to be more transparent about its rules in the future.

Filed Under: app store, emulators, gatekeepers, idos
Companies: apple

Leaked Docs Show Cellebrite Is Still Trailing Apple In The Device Security Arms Race

from the still-mostly-secure-on-the-home-front dept

Good news for phone owners. Perhaps a little less great for law enforcement, which presumably still doesn’t have the capability to crack the latest cell phones.

Not that it’s all bad news for law enforcement. Whether or not compelled password production is a constitutional violation is still an open question. Those whose phones are secured with biometrics are definitely less protected by the Constitution than those using passcodes. And, despite all the crying you might hear from officials (like, say, consecutive FBI directors), law enforcement still has plenty of options to obtain evidence that don’t involve cracking encrypted devices but rather serving warrants to service providers to obtain stuff stored in the cloud.

Cellebrite has been selling its phone-cracking tech for several years now. But it’s stuck in a one step forward, one step back loop as device makers patch exploitable flaws, including the flaws exploited by purveyors of phone-cracking tools.

Joseph Cox of 404 Media managed to obtain some very recent documents that apparently show the limitations of Cellebrite’s tech. The documents were leaked in April 2024, which doesn’t necessarily mean they document Cellebrite’s latest software version, but they do at least provide a fairly up-to-date snapshot of the tech’s capabilities.

For all locked iPhones able to run 17.4 or newer, the Cellebrite document says “In Research,” meaning they cannot necessarily be unlocked with Cellebrite’s tools. For previous iterations of iOS 17, stretching from 17.1 to 17.3.1, Cellebrite says it does support the iPhone XR and iPhone 11 series. Specifically, the document says Cellebrite recently added support to those models for its Supersonic BF [brute force] capability, which claims to gain access to phones quickly. But for the iPhone 12 and up running those operating systems, Cellebrite says support is “Coming soon.”

As Cox notes in his article, this means Cellebrite is capable of cracking iPhones released through the first part of 2020, but possibly only if they haven’t been updated to the latest iOS version. That’s still a significant number of phones, which means staying ahead of Cellebrite possibly means having to be an early adopter or, at the very least, ensuring the latest updates have been applied to your phone.

The same can’t be said for Android, something pretty much everyone has already known. Not only are carriers hit-and-miss when it comes to regular Android updates, but the wide variety of manufacturers and models means it’s often difficult to tell which Android model is more secure (or, more accurately, less compromised). The rule of thumb, though, is that newer is better, at least in terms of crack-thwarting.

The second document shows that Cellebrite does not have blanket coverage of locked Android devices either, although it covers most of those listed. Cellebrite cannot, for example, brute force a Google Pixel 6, 7, or 8 that has been turned off to get the users’ data, according to the document. The most recent version of Android at the time of the Cellebrite documents was Android 14, released October 2023. The Pixel 6 was released in 2021.

Cellebrite has confirmed the authenticity of the leaked documents but told 404 Media that they do not completely reflect its current line of products or their capabilities. So, these documents should be taken with at least as large a grain of salt as Cellebrite’s statement. If they accurately portray Cellebrite’s offerings, one would expect the company to claim they don’t in order to keep criminals (or journalists, activists, politicians, dissidents, etc.) guessing about the current state of cracking tech.

Then there’s the fact that Cellebrite is not the only player in this market, even if it appears to be the best-known. Competitors are presumably engaged in the same race against patches and system updates in order to give government customers something worth paying for.

Finally, the Israel-based company appears to have been stung a bit by the steady deluge of negative press covering phone-hacking malware purveyors like NSO Group and Candiru, both of which have been blacklisted by the US government for selling their goods to known human rights violators.

“Cellebrite does not sell to countries sanctioned by the U.S., EU, UK or Israeli governments or those on the Financial Action Task Force (FATF) blacklist. We only work with and pursue customers who we believe will act lawfully and not in a manner incompatible with privacy rights or human rights,” the email added.

Well, great, I guess. That answers a question no one asked, but as long as you’re in the news, I suppose it’s smart to get out ahead of the criticism, even if it’s still unspoken at this point.

While some in law enforcement might view this reporting as a half-empty glass where the tech they use will always be a step or two behind the efforts of device manufacturers, everyone else should see this as more than half-full. More companies and developers are putting more time and effort into ensuring the devices they sell are as secure as humanly possible. That’s a net win for everyone, even if you halfway believe the often-hysterical proclamations of government officials who think device security is the enemy of public safety.

It may not necessarily discourage device theft, but it does limit the damage done by those who steal devices. And it helps protect journalists, dissidents, activists, and political opposition leaders from abusive tech deployments just as much as it “protects” criminals from having their seized devices cracked. Non-criminals will always outnumber criminals. And that fact shouldn’t be ignored by law enforcement officials just because it makes things a bit tougher when it comes to extracting data from seized devices.

Filed Under: cellphone cracking, encryption, fbi
Companies: android, apple, cellebrite

Apple Continues To Genuflect To Vladimir Putin In The Russian Apple App Store

from the cowards dept

Back when Vladimir Putin first launched his aggressive war of choice on Ukraine, much of the Western world mobilized into action in a way that was fairly impressive. All kinds of companies and brands voluntarily began pulling out of the market, sometimes at the request of Ukraine itself. Much was made of tech firms pulling out of the market or suspending service in Russia at that time, specifically. Apple was one of those companies, suspending hardware sales and some services in Russia, though it kept the Russian App Store live and available.

In the intervening couple of years, however, that voluntary embargo in Russia has softened. And, with the App Store still open, Apple has continued to bend to the will of Vladimir Putin when it comes to policing the App Store for anything the Kremlin decides it doesn’t like.

Apple has removed several apps offering virtual private network (VPN) services from the Russian AppStore, following a request from Roskomnadzor, Russia’s media regulator, independent news outlet Mediazona reported on Thursday.

The VPN services removed by Apple include leading services such as ProtonVPN, Red Shield VPN, NordVPN and Le VPN. Those living in Russia will no longer be able to download the services, while users who already have them on their phones can continue using them, but will be unable to update them.

So, what to think about all of this? Certainly some folks will point out that Apple has no choice but to comply with Russian law while operating the App Store in-country. And, sure, that’s true. But operating the store is in and of itself a choice that Apple is making. And Apple is a company that has been particularly vocal when it comes to protecting the privacy and rights of its users. It seems that moral stance includes some kind of carve-out for Russians, however.

Apple can do this, of course. But what it cannot do is accept the cheers for pulling out of Russia and for its customer-privacy focus while also accepting its role as digital policeman for the Kremlin. Pick a lane; you can’t have both. And it should be pointed out that the company is specifically doing the political bidding of the Russian Big Bad.

Despite suspending all sales of its own products in Russia in March 2022, Apple has continued to comply with Russian government regulations and has deleted at least 19 apps from the Russian AppStore since 2023.

At Roskomnadzor’s request, in March Apple removed an app developed by late Russian opposition politician Alexey Navalny’s team that was designed to help Russians choose who to vote for to maximise the impact of the anti-Putin vote, in a move that echoed the removal of another Navalny-designed app in 2021.

So the question is what Apple wants to be. A privacy advocate for its customers that is willing to stand up to government, as it has done in the United States? Or a cynical, money-focused corporation willing to take what is essentially political action in favor of government against both opposition forces and its own customers, as it has in Russia?

Pick one, Apple. It cannot be both.

Filed Under: app store, content moderation, russia, vpn
Companies: apple, nordvpn, protonvpn

Apple’s New Emulator Policy Is, For Some Reason, Only For Consoles And Not Retro PC Games

from the ¯\_(ツ)_/¯ dept

When it comes to policy decisions generally, and with technology platforms specifically, all we can really ask is that a policy be coherently stated and implemented in a uniform fashion. You may dislike said policy, but at a minimum it should be legible and enforced sensibly. Take, for instance, Apple’s updated policy on allowing emulation apps on its iOS platforms. While Apple traditionally disallowed such apps in its App Store, the recent change finally allows emulator apps designed to let people play “retro” or homebrew games. As a reminder, here is the actual policy language in question:

4.7 Mini apps, mini games, streaming games, chatbots, plug-ins, and game emulators

Apps may offer certain software that is not embedded in the binary, specifically HTML5 mini apps and mini games, streaming games, chatbots, and plug-ins. Additionally, retro game console emulator apps can offer to download games. You are responsible for all such software offered in your app, including ensuring that such software complies with these Guidelines and all applicable laws. Software that does not comply with one or more guidelines will lead to the rejection of your app. You must also ensure that the software adheres to the additional rules that follow in 4.7.1 and 4.7.5. These additional rules are important to preserve the experience that App Store customers expect, and to help ensure user safety.

Well, apparently one very specific word in that policy is something that Apple considers of paramount importance: “console.” We learn this via Chaoji Li, the developer of a DOSBox-style emulator designed to allow iOS users to play retro PC games on their devices. With the updated Apple policy, Li submitted his app for consideration in the App Store, only to have it rejected by Apple as being against the policy. And, because this is Apple we’re talking about, the company rejected it in the most annoying way possible.

They have decided that iDOS is not a retro game console, so the new rule is not applicable. They suggested I make changes and resubmit for review, but when I asked what changes I should make to be compliant, they had no idea, nor when I asked what a retro game console is. It’s still the same old unreasonable answer along the line of “we know it when we see it.”

Now, Apple continues to point out that the policy change refers to retro game “consoles,” not PC emulation applications. And, hey, that’s true! Also, from a practical standpoint, what the actual hell is the difference? The spirit of the policy change was to allow users to emulate games they presumably already own, only on their iOS devices rather than on their original platforms. From that standpoint, what is the difference between an NES emulator and DOSBox?

Why does Apple treat the idea of a DOSBox-style emulator running an ancient copy of Microsoft Excel differently than the idea of Delta running a copy of NES Tetris on the same device? Is loading the Windows 95 version of KidPix Studio Deluxe on your iPhone really all that different from playing an emulated copy of Mario Paint on that same iPhone?

A virtual machine or emulator running a modern PC operating system under iOS could theoretically offer some generalized competition for the apps Apple offers in its official App Store. But surely there’s a limit to how much that applies when we’re talking about emulating older computing environments and defunct operating systems. Just as Apple’s iOS game emulation rules only apply to “retro” game consoles, a rule for PC emulation could easily be limited to “retro” operating systems (say, those that are no longer officially supported by their original developers, as a rule of thumb).

It seems that either Apple’s App Store gatekeepers don’t understand the policy, or they do understand and are faithfully implementing it while being unable to actually articulate why this specific distinction is made.

Either way, it seems that retro PC game fans are being left off the party invite list for reasons nobody has thus far been able to articulate.

Filed Under: emulators, video games
Companies: apple

Hey Journalists: Not Every Elon Musk Brain Fart Warrants An Entire News Cycle

from the sound-and-fury,-signifying-nothing dept

Tue, Jun 11th 2024 11:59am - Karl Bode

So on Monday you probably saw that Apple announced it was more tightly integrating “AI” into its mobile operating system, both via a suite of AI-powered tools dubbed Apple Intelligence and tighter AI integration with its Siri voice assistant. It’s not that big of a deal and (hopefully) reflects Apple’s more cautious approach to AI after Google told millions of customers to eat rocks and glue.

Apple was quick to point out that the processing for these features would happen on device to (hopefully) protect privacy. If Apple’s own systems can’t handle user inquiries, some of them may be offloaded to OpenAI’s ChatGPT, attempting to put a little distance between Apple and potential error-prone fabulism:

“Apple struck a deal with OpenAI, the maker of ChatGPT, to support some of its A.I. capabilities. Requests that its system can’t field will be directed to ChatGPT. For example, a user could say that they have salmon, lemon and tomatoes and want help planning dinner with those ingredients. Users would have to choose to direct those requests to ChatGPT, ensuring that they know that the chatbot — not Apple — is responsible if the answers are unsatisfying.”

Enter Elon Musk, who threw a petulant hissy fit after he realized that Apple had decided to partner with OpenAI instead of his half-cooked and more racist Grok pseudo-intelligence system. He took to ExTwitter to (falsely) claim that Apple’s OS with ChatGPT integration posed such a dire privacy threat that iPhones would soon be banned from his companies and visitors would have to leave theirs in a copper-lined Faraday cage:

This is, of course, a bunch of meaningless gibberish not actually based on anything technical. Musk just made up some security concerns to malign a competitor. The ban on iPhones will likely never happen. And to Luddites, his reference to a Faraday cage certainly sounds smart.

Here’s the thing: nearly every app on your phone and every device in your home is tracking your every movement, choice, and behavior in granular detail, then selling that information to an international cabal of largely unregulated and extremely dodgy data brokers. Brokers that then turn around and sell that information to any nitwit with two nickels to rub together, including foreign intelligence.

So kind of like the TikTok hysteria, the idea that Apple’s new partnership with OpenAI poses some unique security and privacy threat above and beyond our existing total lack of any meaningful privacy whatsoever in a country too corrupt to pass an internet privacy law is pure performance.

Keep in mind that Musk’s companies have a well-established track record of playing extremely fast and loose with consumer privacy themselves. Automakers are generally some of the worst companies in tech when it comes to privacy and security, and according to Mozilla, Tesla is the worst of the worst. So the idea that Musk was engaging in any sort of good-faith contemplation of privacy is simply false.

Still, it didn’t take long before the click-hunting press turned Musk’s meaningless comments into an entire news cycle. Resources that could have been spent on any number of meaningful stories were instead focused on platforming a throwaway comment by a fabulist that literally didn’t mean anything:

I’m particularly impressed with the Forbes headline, which pushes two falsehoods in one headline: that the nonexistent ban hurt Apple stock (it didn’t), while implying the ban already happened.

I’m unfortunately contributing to the news cycle noise to make a different point: this happens with every single Musk brain fart now, regardless of whether the comment has any meaning or importance. And it needs to stop if we’re to preserve what’s left of our collective sanity.

Journalists are quick to insist that it’s their noble responsibility to cover the comments of important people. But journalism is about informing and educating the public, which isn’t accomplished by redirecting limited journalistic resources to cover platform bullshit that means nothing and will result in nothing meaningful. All you’ve done is made a little money wasting people’s time.

U.S. newsrooms are so broadly conditioned to chase superficial SEO clickbait ad engagement waves they’ve tricked themselves into thinking these kinds of hollow news cycles serve an actual function. But it might be beneficial for the industry to do some deep introspection into the harmful symbiosis it has forged with terrible people and bullshit (see: any of a million recent profiles of white supremacists).

There are a million amazing scientific developments or acts of fatal corporate malfeasance that every single day go uncovered or under-covered in this country because we’ve hollowed out journalism and replaced it with lazy engagement infotainment.

And despite Musk’s supposed disdain for the press, his circus sideshow has always heavily relied on this media dysfunction. As his stock-fluffing house of cards starts to unravel, he’s had to increasingly rely on gibberish and controversy to distract, and U.S. journalism continues to lend a willing hand.

First it spent fifteen years hyping up Musk’s super-genius engineering mythology, despite mounting evidence that Musk was more of a clever credit-absconding opportunist than any sort of revolutionary thinker. After 20 years of this, the press still treats every belch the man has as worthy of the deepest analysis under the pretense they’re engaging in some sort of heady public service.

The public interest is often served by not covering the fever dreams of obnoxious opportunists, but every part of the media ecosystem is financially incentivized to do the exact opposite. And instead of any sort of introspection into the symbiosis the media has formed with absolute bullshit, we’re using badly crafted automation to supercharge all of the sector’s worst impulses at unprecedented new scale.

Filed Under: ai, artificial intelligence, chatgpt, clickbait, elon musk, hype, language learning models, seo, siri
Companies: apple, openai, tesla, twitter, x

Apple Vision Pro Sales Slow To A Trickle Despite Months Of Gushing Tech Press Hype

from the another-hype-cycle-come-and-gone dept

Thu, Apr 25th 2024 05:30am - Karl Bode

When the Apple Vision Pro launched back in February, the press had a sustained, two-month straight orgasm over the product’s potential to transform VR and the world of spatial computing.

Downplayed were little sticking points like the lack of app support; the short battery life (despite a bulky external battery pack Steve Jobs would have never approved of); the need for expensive additional prescription lenses for glasses wearers (more complex issues like astigmatism weren’t supported); or the fact that VR makes about 40 to 70 percent of the target audience for these products want to puke.

Much like the Metaverse, the Apple Vision Pro roared into the tech press hype bubble like a freight train, then retreated like a bit of a simpering wimp. Reports now indicate that sales of the headsets are fairly pathetic:

“Some Apple stores are reportedly down to selling just a handful of Vision Pros in an entire week, according to Bloomberg. The hype around the Apple Vision Pro has fallen dramatically since the headset sold 180,000 units during its January pre-order weekend. Demand for demos of the technology is also reportedly “way down” since the product’s launch. Even worse, the report says, many people who book appointments to test Vision Pros simply don’t show up anymore.”

It was clear that this was basically a glorified prototype aimed at helping Apple figure out future iterations that people can actually afford and that can make it through an entire movie without having to charge the battery. But at the same time, Apple wanted to give the impression that this was utterly transformative technology, available today, that would change the way you work and play.

This has all been fairly representative of a tech press that, over the years, has become more of an extension of tech company marketing departments than any sort of journalistic endeavor. The vast majority of the Vision Pro coverage implied this was a product that would be utterly revolutionary; yet as Bloomberg notes, even avid early adopters seem to have forgotten the headset exists:

“I had initially used the Vision Pro whenever I watched a movie or YouTube, or when I wanted a more immersive screen for my Mac at home. These days, with the initial buzz wearing off, it seems clear that the Vision Pro is too cumbersome to use on a daily basis. Going through the process of attaching the battery, booting it up and navigating the interface often doesn’t feel worth it. And a killer app hasn’t emerged that would compel me to pick it up. It’s far easier to just use my laptop as a laptop and watch video on either my computer or big-screen TV.”

So again you’ve got a tech press hype cycle that professed we were witnessing something revolutionary, only for the product to wind up being… not that. Lost in a lot of the tech coverage for the Vision Pro was the fact that people just generally don’t like having a giant sweaty piece of plastic strapped to their fucking face.

The first thing Vision Pro fans will say to justify a lack of public interest is that this wasn’t supposed to sell well. That it was a water-testing prototype designed to benefit future iterations. And maybe that’s true; maybe it isn’t. Maybe Apple (a company that’s shifted from Jobs-era risk taking innovation to quality-focused iteration) will be the company that nails the perfect VR experience.

But I think it’s equally possible they won’t be. That they’re supplanted by a hungrier, smaller, less risk-averse and younger company. In which case all early adopters will be left with is hype-filled memories and a very expensive relic.

I’m interested in VR. I still think it holds promise. I’ve owned numerous headsets. I’ve tinkered with most modern VR apps and gaming titles. And I still generally don’t think this tech truly becomes interesting until cords are eliminated, self-contained battery life becomes wholly irrelevant, weight is a non-factor, and the interfaces become seamlessly intuitive to the point of near-magic.

We’re still quite a way from all of that, no matter how much wish-casting the technology press and tech industry marketing apparatus engage in.

Filed Under: AR, headset, tech press, virtual reality, vision pro, vr
Companies: apple