ios – Techdirt

First Approved Emulator App Appears In Apple’s App Store Under New Rules

from the good-start dept

Well, that was fast. It was just earlier this month that we talked about some interesting new rules Apple instituted for its App Store specifically when it comes to emulation apps. While emulators in and of themselves are not in any way illicit, Apple did its best to keep them off its platform, and off iPhones generally. It did so under the public theory that apps allowing in-app callouts to outside software represented a security risk; ROMs run on these emulators were the example used to preclude emulator apps from appearing in the store. The reality is that Apple has a history of valuing strict control over what goes on its devices, combined with the never-ending hatred console-makers have for emulators generally.

So it was with all of that historical context that I viewed the rule changes Apple announced in early April pessimistically. A lot of the commentary surrounding what Apple would actually allow centered on the expectation that it would primarily be console manufacturers or game publishers themselves releasing their own emulators for purchase. Well, it turns out that pessimism was somewhat misguided, as the App Store just saw its first approved third-party emulator released.

Apple’s decision earlier this month to open the iOS App Store to generic retro game emulators is already bearing fruit. Delta launched Wednesday as one of the first officially approved iOS apps to emulate Nintendo consoles from the NES through the N64 and the Game Boy through the Nintendo DS (though unofficial options have snuck through in the past).

All that history means Delta is far from a slapdash app quickly thrown together to take advantage of Apple’s new openness to emulation. The app is obviously built with iOS in mind and already integrates some useful features designed for the mobile ecosystem. While there are some updates we’d like to see in the future, this represents a good starting point for where Apple-approved game emulation can go on iOS.

Now, the rest of the Ars Technica post delves mostly into a review of the app itself, what it does well, and where it falls short. And that’s all fine, but the point here is that emulation outside of the strict control of Apple and/or console-makers appears to be officially back in the App Store. I can already hear the gnashing of teeth from the folks at Nintendo over all of this, and I fully expect we’ll hear from the company in some disapproving form about this and other emulators that might appear, but it really doesn’t have much of a leg to stand on.

As we’ve mentioned in the past, emulators typically are not, by themselves, infringing on copyright. They don’t typically ship with the BIOS files needed to make the emulator do its thing, nor do they ship with ROMs that might be infringing. Instead, an emulator is a tool with plenty of legitimate uses. Homebrew games for old consoles are out there. There are games whose makers and publishers no longer exist, in some cases with rights never having been bought or transferred to another entity. And there are ROMs individuals have ripped from games they own.

So the emulator embargo is gone. Somehow, I don’t believe that companies like Nintendo are going to see their livelihoods ruined as a result.

Filed Under: app store, emulator, ios
Companies: apple

Yet Another Study Shows Apple’s Hyped Privacy Standards Are Often Empty Theater

from the hollow-bravado dept

Thu, Nov 10th 2022 06:24am - Karl Bode

For the last few years Apple has worked overtime trying to market itself as a more privacy-focused company. 40-foot billboards of the iPhone with the slogan “Privacy. That’s iPhone” have been a key part of company marketing for years. The only problem: researchers keep highlighting how a lot of Apple’s well-hyped privacy changes are performative in nature.

The company’s “do not track” button, first included in iOS 14.5, has received the lion’s share of this attention, and has been frequently heralded as such an incredible revolution in consumer privacy it wound up making Mark Zuckerberg very, very sad.

But while Apple may be better on privacy than some companies (which isn’t hard), studies keep showing that numerous app makers have been able to simply tap dance around Apple’s heavily hyped do not track restrictions for some time, often without any penalty from Apple months after being contacted by reporters. In short, “do not track” doesn’t deliver on its promise.

Similarly, the iPhone Analytics setting makes an explicit promise to users that if they flip a button, they’ll be able to “disable the sharing of Device Analytics altogether.” But researchers have now shown that’s really not true either:

Tommy Mysk and Talal Haj Bakry, two app developers and security researchers at the software company Mysk, took a look at the data collected by a number of Apple iPhone apps—the App Store, Apple Music, Apple TV, Books, and Stocks. They found the analytics control and other privacy settings had no obvious effect on Apple’s data collection—the tracking remained the same whether iPhone Analytics was switched on or off.

Researchers found that even when they flipped off every privacy button on the iPhone, the App Store still tracked every behavior in real time, including what apps a user searched for, what ads were seen, and how long ads were looked at. The App Store app also tracked user ID numbers, phone model, screen resolution, keyboard languages, and details on the user’s internet connection.

Apple, as it often does, responded to this and previous allegations that its privacy protections are a bit hollow by simply… not commenting.

Again, that’s not to say that Apple isn’t better on privacy than many of its competitors, but in the free-for-all privacy landscape across telecom, adtech, apps, and hardware, that just doesn’t mean much. We’re a country that, for thirty years, has prioritized making money over consumer privacy, and we’re too collectively corrupt to pass even a baseline federal privacy law for the internet era.

What we get instead is theater. Buttons that do little. Promises and policies that mean less. Lots of empty lip service paid toward consumer choice. And when we do get privacy legislation it’s often either terribly crafted, left unenforced, or so filled with loopholes as to largely be just a performance validating bad behaviors companies already engage in. And there’s very little indication any of this will change soon.

Filed Under: advertising, consumer rights, data privacy, ios, iphone, privacy, snooping, surveillance, tracking
Companies: apple

Meta Sued For Tap Dancing Around Apple’s New App Privacy Rules

from the privacy-theater dept

Tue, Sep 27th 2022 06:28am - Karl Bode

Last year, Apple received ample coverage about how the company was making privacy easier for its customers by introducing a new, simple tracking opt-out button for users as part of an iOS 14.5 update.

Apple marketing and press reports heavily hyped the App Tracking Transparency system, which purportedly gave consumers control over which apps were able to collect and monetize user data or track user behavior across the internet. Advertisers (most notably Facebook) cried like a disappointed toddler at Christmas, given the obvious fact that giving users more control over data collection and monetization means less money for them.

But we also noted how Apple’s changes were a bit overstated. About a year ago, researchers began to notice that Apple’s opt-out system only really blocked app makers from accessing one bit of data: your phone’s Identifier for Advertisers, or IDFA. There were numerous ways for companies to ignore Apple’s changes and track users anyway, so they quickly got to work doing exactly that.

That includes Meta. Apple security researcher Felix Krause found that Facebook was getting around Apple’s system by directing any link a user clicks on in the Facebook app to a new in-app browser window, where Meta was able to inject code, alter external websites, and track user behavior online… without user consent or awareness.
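For readers curious about the mechanics: WebKit’s WKUserScript API lets any iOS app run arbitrary JavaScript inside its in-app browser on every page a user opens. Here’s a minimal sketch of that general technique — the function name and script body are hypothetical placeholders for illustration, not Meta’s actual code:

```swift
import WebKit

// A minimal sketch of the general technique: any iOS app can inject
// JavaScript into its in-app browser on every page the user opens.
// The script body here is a hypothetical placeholder, not Meta's code.
func makeInAppBrowser() -> WKWebView {
    let config = WKWebViewConfiguration()

    // Runs at the end of document load, in every frame of every page.
    let trackingScript = WKUserScript(
        source: "document.addEventListener('click', function (e) { /* report the tap somewhere */ });",
        injectionTime: .atDocumentEnd,
        forMainFrameOnly: false
    )
    config.userContentController.addUserScript(trackingScript)

    return WKWebView(frame: .zero, configuration: config)
}
```

Because the injected script runs inside the page itself, the host app can observe activity on third-party sites that would otherwise be invisible to it, which is exactly what made Krause’s findings so alarming.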

As a result, Meta is now facing two different class action lawsuits accusing it of violating state and federal privacy laws, including the Wiretap Act, the California Invasion of Privacy Act (CIPA), and California’s Unfair Competition Law. Both suits cite Krause’s research, noting it “revealed that Meta has been injecting code into third-party websites, a practice that allows Meta to track users and intercept data that would otherwise be unavailable to it.”

Granted, this is the same company busted for pitching users on a “privacy protecting VPN” that turned out to be little more than glorified spyware that tracked user behavior when they went to other platforms. Meta has whined endlessly about Apple’s opt-in changes, claiming that they’re already costing the company around $10 billion in revenues annually.

But they’re lucky Apple (or the market in general) hasn’t taken things further. Keep in mind Apple’s privacy changes, while important, are being dramatically overstated for branding purposes. Numerous app makers have been simply tap dancing around the restrictions for some time, often without any penalty from Apple months after being contacted by reporters.

Filed Under: adtech, app tracking transparency, ios, privacy, security, surveillance, tracking
Companies: apple, meta

NSO Group Hacking Prompts Apple To Add A ‘Lockdown Mode’ To Its Devices

from the can't-wait-to-hear-what-Chris-Wray-has-to-say-about-this dept

Israeli malware maker NSO Group’s frequent targeting of iPhones has led to multiple rounds of patches, a federal lawsuit, and Apple instituting a notification program to inform customers their devices have been compromised.

Apple’s next move in this particular arms race will help defend users against malware deployment by government agencies, many of which use exploits purchased from NSO Group and its competitors.

Apple said Wednesday that it will introduce an innovative security feature to give potential targets of government hacking an easy way to make their iPhones safer.

The company said it would be releasing the new “Lockdown Mode” in test versions of its operating systems shortly, with full distribution in the fall as part of iOS 16 for iPhones as well as the operating systems for iPads and Mac computers.

This addition won’t be difficult to deploy, making it much more user-friendly than other options. A single toggle in Settings is all it takes. The phone reboots in “Lockdown Mode,” blocking most attachments contained in messages, preventing the phone from previewing web links, and — somewhat surprisingly — disabling wired connections to other devices.

That last feature will prevent state-sponsored hackers or law enforcement from accessing the device’s contents or installing exploits on phones seized from detainees and arrestees. It won’t start rolling out until September, but one should expect to see law enforcement officials start complaining about this feature sooner than that.

Expect the FBI to take the lead on the complaining. It has spent years claiming encryption dead-ends investigations and allows criminals to hide evidence from investigators. It will likely make the same claim about this option, even as it publicly admits state-sponsored hacking is an omnipresent concern.

In just the last week, the FBI and Britain’s MI5 intelligence organization took the rare step of issuing a joint warning of the “immense” threat Chinese spies pose to “our economic and national security,” and that its hacking program is “bigger than that of every other major country combined.”

According to the FBI, it’s okay for the government and large businesses to protect themselves against malicious hackers by limiting attack surfaces and deploying encryption. But it’s not okay for the average iPhone user to do the same thing because a cop may possibly want to examine a device’s contents at some point.

And that is what’s being addressed with Apple’s “lockdown mode.” State-sponsored hackers and purchased exploits aren’t just being deployed against government agencies, large corporations, and political leaders. They’re also being used against journalists, dissidents, government critics, and religious minorities.

Apple’s move makes sense and shows the company actually cares about protecting its customers from malware, exploits, and other forms of device compromise — no matter who’s doing the dirty work. It’s bound to anger law enforcement. But, just like encryption itself, you can’t lock out the bad guys without locking out some of the good guys. It either provides protection or it’s a compromise that will only lead to compromised devices.

Filed Under: hacking, ios, iphones, lockdown mode, security
Companies: apple

Apple's 'Do Not Track' Button Is Privacy Theater

from the privacy-theater-incorporated dept

Fri, Dec 10th 2021 06:46am - Karl Bode

Earlier this year Apple received ample coverage about how the company was making privacy easier for its customers by introducing a new, simple tracking opt-out button for users as part of an iOS 14.5 update. Early press reports heavily hyped the concept, which purportedly gave consumers control over which apps were able to collect and monetize user data or track user behavior across the internet. Advertisers (most notably Facebook) cried like a disappointed toddler at Christmas, given the obvious fact that giving users more control over data collection and monetization means less money for them.

By September, researchers had begun to notice that Apple’s opt-out system was somewhat performative anyway. The underlying system only really blocked app makers from accessing one bit of data: your phone’s Identifier for Advertisers, or IDFA. There were numerous ways for app makers to track users anyway, so they quickly got to work doing exactly that, collecting information on everything from your IP address, battery charge, and volume levels to remaining device storage, metrics that can be helpful in building personalized profiles of each and every Apple user.
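To make the narrowness concrete, here’s a bare-bones sketch of the two real APIs the opt-out actually touches — the App Tracking Transparency prompt and the AdSupport identifier it gates. The function name is ours, for illustration:

```swift
import AppTrackingTransparency
import AdSupport

// A bare-bones sketch of what the opt-out actually gates. If the user
// declines the prompt, the IDFA comes back as all zeroes; nothing else
// about what an app can read from the device changes.
func checkTrackingStatus() {
    ATTrackingManager.requestTrackingAuthorization { status in
        let idfa = ASIdentifierManager.shared().advertisingIdentifier
        // Denied:     00000000-0000-0000-0000-000000000000
        // Authorized: the real per-device advertising identifier
        print("ATT status: \(status), IDFA: \(idfa.uuidString)")
    }
}
```

That zeroed-out UUID is the entire technical effect of the button, which is why fingerprinting via other signals sailed right past it.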

Privacy advocates and the press noted how this was all giving Apple users a false sense of security without really fixing much. Privacy experts and press outlets also repeatedly informed Apple this was happening, but nothing changed. In fact, the Financial Times notes that seven months after the feature was introduced, Apple has further softened its stance on the whole effort:

“But seven months later, companies including Snap and Facebook have been allowed to keep sharing user-level signals from iPhones, as long as that data is anonymised and aggregated rather than tied to specific user profiles.

Here’s the thing. There’s been just an absolute torrent of studies showing how “anonymizing” data is a gibberish term. It only takes a few additional snippets of data to identify “anonymized” users, yet the term is still thrown around by companies as a sort of “get out of jail free” card when it comes to not respecting user privacy. There’s an absolute ocean of data floating around the data broker space that comes from apps, OS makers, hardware vendors, and telecoms, and “anonymizing” data doesn’t really stop any of them from building detailed profiles on you.

Apple’s opt-out button is largely decorative, helping the company brand itself as hyper privacy conscious without actually doing the heavy lifting required of such a shift:

“Lockdown Privacy, an app that blocks ad trackers, has called Apple’s policy “functionally useless in stopping third-party tracking.” It performed a variety of tests on top apps and observed that personal data and device information is still “being sent to trackers in almost all cases.”

A lot of folks spend a lot of time trying to tap dance around a fundamental truth: any effort to give consumers more control and clear insight over what’s being collected or sold reduces revenues by billions of dollars annually, even if only a fraction of existing users take advantage. And nobody with their face buried deep in the data monetization trough wants that. It’s why it’s 2021 and the U.S. still doesn’t even have a basic, clear privacy law for the internet era. Not even a super clean one mandating basic transparency requirements and working opt-out tools.

So what we get instead is a lot of gibberish and privacy theater by a lot of folks who don’t want to take even a tiny hit in revenues in exchange for healthier markets and happier users.

We also get just an endless parade of semantics, like ISPs’ claims that they “don’t sell access to your data” (no, they just give massive “anonymized” datasets away for free as part of a nebulous, broader arrangement they do get paid for). We get tracking opt-out tools that don’t actually opt you out of tracking, or that opt you back in any time changes are made. And we get endless proclamations about how everybody supports codifying federal privacy laws from companies that immediately turn around and spend millions of dollars lobbying to ensure even a basic privacy law never sees the light of day.

At some point this combination of feckless oversight, rampant overcollection of data, minimal transparency, and repeated failure to adhere to the basics on data security will result in a privacy scandal that makes the last five years’ worth of scandals look like a grade school picnic. When that happens, we might finally see some traction on at least a basic law that mandates transparency, opt-out tools that actually work, and penalties for lax security. Until that momentum shift happens, the majority of “privacy reform” efforts are going to have a high ratio of meaningless song and dance.

Filed Under: apps, data, do not track, ios, opt-in, opt-out, privacy
Companies: apple, facebook, snap

Apple Gives Chinese Government What It Wants (Again); Pulls Quran App From Chinese App Store

from the consequences-of-heavy-buy-in dept

Apple has generally been pretty good about protecting users from government overreach, its recent voluntary (and misguided) foray into client-side scanning of users’ images notwithstanding. But that seemingly only applies here in the United States, which is going to continue to pose problems for Apple if it chooses to combat local overreach while giving foreign, far more censorial governments greater and greater control.

Like many other tech companies, Apple has no desire to lose access to one of the largest groups of potential customers in the world. Hence its deference to China, which has seen the company do things like pull the New York Times app in China following the government’s obviously bullshit claim that the paper was a purveyor of “fake news.”

Since then, Apple has developed an even closer relationship with the Chinese government, which culminated in the company opening data centers in China to comply with the government’s mandate that all foreign companies store Chinese citizens’ data locally where it’s much easier for the government to demand access.

On a smaller scale, Apple pulled another app — one that encrypted text messages on platforms that don’t provide their own encryption — in response to government demands. Once again, Apple left Chinese citizens at the mercy of their government, apparently in exchange for the privilege of selling them devices that promised them security and privacy while actually offering very little of either.

The latest acquiescence by Apple will help the Chinese government continue its oppression of the country’s Uighur minority — Muslim adherents who have been subjected to religious persecution for years. Whoever the government doesn’t cage, disappear, or genocide into nonexistence will see nothing but the bottom of a jackboot for years to come. Apple is aiding and abetting the jackboot, according to this report by the BBC.

Apple has taken down one of the world’s most popular Quran apps in China, following a request from officials.

Quran Majeed is available across the world on the App Store – and has nearly 150,000 reviews. It is used by millions of Muslims.

The BBC understands that the app was removed for hosting illegal religious texts.

The app is developed by Pakistan Data Management Services, a software company that dates back nearly 50 years. China pretty much owns Pakistan at this point, but this has nothing to do with Pakistan’s purchased allegiance to the Chinese government, and everything to do with punishing religious beliefs (and believers) the Chinese government doesn’t like.

Apple’s compliance cuts off access to nearly 1 million Chinese users of the app, as the BBC reports. This is happening despite the fact the Chinese government pays lip service to a limited form of religious freedom.

The Chinese Communist Party officially recognises Islam as a religion in the country.

And yet, it claims the primary religious text of the faith is “illegal.”

Apple is also at least partly owned by China, albeit not in any formal sense. It relies heavily on Chinese manufacturing to produce its devices. This means Apple faces both upline and downline issues if it refuses to comply with the Chinese government’s demands. The company is in a difficult position, what with shareholders in the US (and all over the world) expecting continued growth and profitability. But it’s not as though it’s an impossible situation. Sometimes you have to sacrifice profits for principle.

Apple — with its reliance on Chinese manufacturing — may be in too deep to make a principled stand. China has the upper hand for now. If Apple wants to continue to be seen as a world leader in device security and personal privacy protections, it needs to start figuring out how to end its abusive relationship with the Chinese government.

Filed Under: app store, china, content moderation, ios, quran, religion
Companies: apple

Research Shows Apple's New Do Not Track App Button Is Privacy Theater

from the privacy-theater dept

Tue, Sep 28th 2021 06:32am - Karl Bode

While Apple may be attempting to make being marginally competent at privacy a marketing advantage in recent years, that hasn’t always gone particularly smoothly. Case in point: the company’s new “ask app not to track” button included in iOS 14.5 is supposed to provide iOS users with some protection from apps that get a little too aggressive in hoovering up your usage, location, and other data. In short, the button functions as a more obvious opt-out mechanism that’s supposed to let you avoid the tangled web of privacy abuses that is the adtech behavioral ad ecosystem.

But of course it’s not working out all that well in practice, at least so far. A new study by the Washington Post and software maker Lockdown indicates that many app makers are just…ignoring the request entirely. In reality, Apple’s function doesn’t really do all that much, simply blocking app makers from accessing one bit of data: your phone’s ID for Advertisers, or IDFA. But most apps have continued to track a wide swath of other usage and location data, and the overall impact on user privacy has proven to be negligible:

“Among the apps Lockdown investigated, tapping the don’t track button made no difference at all to the total number of third-party trackers the apps reached out to. And the number of times the apps attempted to send out data to these companies declined just 13 percent.”

Researchers found the new system actually provided users with a false sense of security and privacy when very little had actually changed.

Even when consumers “opted out,” most of the apps were still collecting data points like volume level, IP address, battery level, browser, cellular carrier, and a long list of others, allowing companies to craft elaborate profiles of individual consumers. And little to none of this is being meaningfully disclosed to actual users. Perpetually, the adtech industry tries to argue that none of this is a big deal because much of this data is “anonymized,” but that’s long been nonsense. Studies repeatedly show that when there are enough data points floating about in the wild, nobody on the internet is truly anonymous.
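To illustrate why the opt-out changes so little, here’s a sketch of the kind of ordinary, public iOS APIs that remain readable regardless of the tracking setting — the raw material for exactly the profiles described above. The function name is ours, for illustration:

```swift
import UIKit
import AVFoundation

// A sketch of ordinary, public APIs that remain readable no matter how
// the tracking prompt was answered. The function name is ours; real
// fingerprinting SDKs collect far more than this.
func collectFingerprintSignals() -> [String: String] {
    UIDevice.current.isBatteryMonitoringEnabled = true
    return [
        "battery":   "\(UIDevice.current.batteryLevel)",
        "volume":    "\(AVAudioSession.sharedInstance().outputVolume)",
        "model":     UIDevice.current.model,
        "osVersion": UIDevice.current.systemVersion,
        "screen":    "\(UIScreen.main.nativeBounds.size)",
        "locale":    Locale.current.identifier,
        "timezone":  TimeZone.current.identifier,
    ]
}
```

None of these reads triggers a prompt or a permission check, which is the whole point: combine enough of them and you have a device fingerprint that works with or without an IDFA.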

The fact gets buried in conversations on this subject, but the entire adtech tracking ecosystem, from app makers and “big tech” to telecom and every data broker in between, is a largely unaccountable mess. All of it operates in a country with no meaningful internet-era privacy law, with privacy regulators that have intentionally been given a tiny fraction of the resources and funding of their overseas contemporaries. So when you see privacy scandal after privacy scandal emerge, it’s important to understand that this is a conscious policy choice driven by greed, not just some organic dysfunction that showed up one random Tuesday.

And unfortunately, so far, most of the big pronouncements by major tech giants about consumer privacy, whether it’s Apple’s shiny new app privacy button or Google’s FLoC technology, aren’t doing much to actually fix the problem. And, in some cases, they have the potential to make a bad problem worse. Actually fixing this problem would cost a whole lot of people a whole lot of money, so instead we get (waves hands around at a wide variety of privacy theater) whatever the hell this is supposed to be.

Filed Under: apps, do not track, idfa, ios, privacy, privacy theater
Companies: apple

Apple Patches Up Devices In Response To The Exposure Of Yet Another NSO Group Exploit

from the soon-they-will-make-a-board-with-a-nail-so-big-it-will-destroy-them-all dept

Israeli digital arms merchant NSO Group continues to sell its malware to a wide variety of governments. The governments it sells to, which includes a bunch of notorious human rights abusers, continue to use these exploits to target dissidents, activists, journalists, religious leaders, and political opponents. And the manufacturers of the devices exploited by governments to harm people these governments don’t like (NSO says “criminals and terrorists,” long-term customers say “eh, whoever”) continue to patch things up so these exploits no longer work.

The circle of life continues. No sooner had longtime critic/investigator of NSO Group’s exploits and activities — Citizen Lab — reported that the Bahrain government was using “zero click” exploits to intercept communications and take control of targeted devices than a patch arrived. Apple, whose devices were compromised using an exploit Citizen Lab has dubbed FORCEDENTRY, has responded to the somewhat surprising and altogether disturbing news that NSO has developed yet another exploit that requires no target interaction at all to deploy.

Apple released a patch Monday against two security vulnerabilities, one of which the Israeli surveillance company NSO Group has exploited, according to researchers.

The updated iOS software patches against a zero-click exploit that uses iMessage to launch malicious code, which in turn allows NSO Group clients to infiltrate targets — including the phone of a Saudi activist in March, researchers at Citizen Lab said.

The backdoor being closed involves a pretty clever trick of the trade. Since links require clicks and images don’t, the exploit utilizes a tainted GIF to crash Apple’s image rendering library, which is then used to launch a second exploit that gives NSO customers control of these devices, allowing them to browse internal storage and eavesdrop on communications.

It’s not the first time NSO has developed a zero-click exploit that affects iOS devices. It’s just the latest exposed by Citizen Lab’s incredible investigation efforts. Thanks to Citizen Lab, more Apple device users around the world are better protected against malicious hackers… working for a company that sells exploits to government agencies. And whatever can be nominally exploited for good (the terrorists and criminals NSO continues to claim its customers target, despite an ever-growing mountain of evidence that says otherwise) can be exploited by governments and malicious hackers who don’t even have sketchy “national security” justifications to raise in the defense of their actions.

The arms race continues. It appears marketers of exploits will continue to do what they’ve always done: maintain over-the-air superiority for as long as possible. And while it may seem this is just part of the counterterrorism game, NSO Group’s tacit approval of the targeting of dissidents, journalists, and others who have angered local governments (but have never committed any terrorist or criminal acts) shows it’s not willing to stop profiting from the misery of people being hunted and harmed by repressive regimes.

Filed Under: ios, iphone, malware, patches, surveillance
Companies: apple, nso group

'Press X To Apply Fourth Amendment:' Documents Show How GrayKey Brute Forces iOS Passwords

from the device-helpfully-backlit-to-combat-going-darkness dept

Consecutive FBI directors (James Comey, and Chris Wray) have declared a small scale war on encryption. Both of these directors relied on inflated numbers to make their case — an error chalked up to software rather than rhetorical convenience. (The FBI has refused to hand over a correct count of encrypted devices in its possession for more than three years at this point.)

The FBI’s narrative keeps getting interrupted by inconvenient facts. Proclamations that the criminal world is “going dark” are often followed by the announcement of new exploits that give law enforcement the ability to decrypt phones and access their contents.

Grayshift is one of the vendors selling phone-cracking tech to law enforcement agencies. The company has an ex-Apple security engineer on staff and has been duking it out with the device manufacturer for the past few years. It seems to be able to find exploits faster than Apple can patch them, leading to a tech arms race that law enforcement appears to be able to win from time to time.

Joseph Cox at Motherboard has obtained more documents about Grayshift’s phone-cracking device, GrayKey. Apple prides itself on providing secure devices. But it appears GrayKey is still capable of bypassing iOS security features, enabling investigators to brute force device passwords. And it can still do this even if the targeted device is on the verge of battery death.

The instructions describe the various conditions it claims allow a GrayKey connection: the device being turned off (known as Before First Unlock, or BFU); the phone is turned on (After First Unlock, or AFU); the device having a damaged display, and when the phone has low battery.

“GrayKey known to install agent with 2 to 3% battery life,” the document reads, referring to the “brute force agent” GrayKey installs on the phone in order to unlock the device.

This suggests the agent doesn’t demand too much from the processor when installing. It also suggests GrayKey’s devices are portable, allowing cops to attempt to access phone contents while away from the office with limited options for charging seized devices.

The device includes a 1.5-billion-word dictionary that can be utilized during brute force attacks to guess alphanumeric passwords. The instructions obtained by Motherboard also indicate the device has the power to extract metadata from “inaccessible” files — something it can apparently do even if the device is still in a locked state.
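Conceptually, dictionary-based brute forcing is as simple as it sounds: walk the wordlist, test each candidate, stop on a hit. Here’s a purely hypothetical sketch — `tryPasscode` is a stand-in for whatever GrayKey’s on-device agent actually does to test a guess, which is not public:

```swift
// A purely conceptual sketch of dictionary-based brute forcing. Nothing
// here reflects Grayshift's actual agent; `tryPasscode` is a hypothetical
// stand-in for however the on-device agent tests a single guess.
func bruteForce(dictionary: [String],
                tryPasscode: (String) -> Bool) -> String? {
    for candidate in dictionary {
        if tryPasscode(candidate) {
            return candidate   // correct guess; device unlocks
        }
    }
    return nil                 // dictionary exhausted without a match
}
```

The hard part was never the loop; it’s bypassing iOS’s attempt limits and escalating delays so the loop can run at all, which is the capability Grayshift actually sells.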

And Grayshift truly cares about your rights, Joe and Judy Criminal Suspect.

“Prior to connecting any Apple mobile device to GrayKey, determine if proper search authority has been established for the requested Apple mobile device,” the document reads.

Yeaaaaaahhhhh… that should do it. Grayshift has no way of enforcing this, so cops are on the honor system. And we’ve all seen how great cops are at keeping themselves honest. This little nod towards Supreme Court precedent and the Fourth Amendment doesn’t even ask for something like a supervisor’s passcode prior to operation to help ensure all the proper paperwork is in order. Left to their own devices, cops are bound to illegally access suspects’ devices.

And if brute forcing doesn’t work, there’s another built-in option — one covered here previously. GrayKey can surreptitiously install a very targeted keylogger that records the passcode when it’s entered by the phone’s owner. Cops can get their largesse on and give suspects back their devices so they can copy down phone numbers or let people know where they’re at. And when suspects unlock their devices to do this, cops are CC’ed by Grayshift’s malware.

The battle between government contractors and device makers continues. And as long as it remains a battle in which neither party has proven to be able to hold a lead, it’s disingenuous to claim — as Chris Wray and James Comey have — that encryption is a barrier impossible to overcome.

Filed Under: 4th amendment, cracking, doj, encryption, fbi, going dark, graykey, hacking, ios
Companies: apple, grayshift

Parler Was Allowed Back In The Apple App Store Because It Will Block 'Hate Speech,' But Only When Viewed Through Apple Devices

from the fracturing dept

Last month we noted that Apple told Congress that it was allowing Parler’s iOS app to return to its app store, after the company (apparently) implemented a content moderation system. This was despite Parler’s then-interim CEO (who has since been replaced by another CEO) insisting that Parler would not remove “content that attacks someone based on race, sex, sexual orientation or religion.” According to a deep dive by the Washington Post, the compromise solution is that such content will be blocked by default only on iOS devices, but will be available via the web or the sideloaded Google app, though it will be “labeled” as hate by Parler’s new content moderation partner, Hive.

Posts that are labeled “hate” by Parler’s new artificial intelligence moderation system won’t be visible on iPhones or iPads. There’s a different standard for people who look at Parler on other smartphones or on the Web: They will be able to see posts marked as “hate,” which includes racial slurs, by clicking through to see them.

Hive is well known in the content moderation space, as it is used by Chatroulette, Reddit, and some others. Hive mixes “AI” with a large team of what it refers to as “registered contributors” (think Mechanical Turk-style crowdsourced gig work). Of course, it was only last year that the company announced that its “hate model” AI was ready for prime time, and I do wonder how effective it is.

Either way, this is interesting for a variety of reasons. One thing we’ve talked about in the past with regards to content moderation is that one of the many problems is that different people have different tolerances for different kinds of speech, and having different moderation setups for different users (and really pushing more of the decision making to the end users themselves) seems like an idea that should get a lot more attention. Here, though, we have a third party — Apple — stepping in and doing that deciding for the users. It is Apple’s platform, so of course, they do get to make that decision, but it’s a trend worth watching.

I do wonder if we’ll start to see more pressure from such third parties to moderate in different ways to the point that our mobile app experiences and our browser experiences may be entirely different. I see how we end up in such a world, but it seems like a better solution might be just pushing more of that control to the end users themselves to make those decisions.

The specific setup here for Parler is still interesting:

Parler sets the guidelines on what Hive looks for. For example, all content that the algorithms flag as “incitement,” or illegal content threatening physical violence, is removed for all users, Peikoff and Guo said. That includes threats of violence against immigrants wanting to cross the border or politicians.

But Parler had to compromise on hate speech, Peikoff said. Those using iPhones won’t see anything deemed to be in that category. The default setting on Android devices and the website shows labels warning “trolling content detected,” with the option to “show content anyway.” Users have the option to change the setting and, like iOS users, never be exposed to posts flagged as hate.

Peikoff said the “hate” flag from the AI review will cue two different experiences for users, depending on the platform they use. Parler’s tech team is continuing to run tests on the dual paths to make sure each runs consistently as intended.
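The dual-path logic described there is straightforward to sketch. This is a hypothetical reconstruction for illustration only — the names and structure are invented, not Parler’s actual code:

```swift
// A hypothetical reconstruction of the dual-path logic, for illustration
// only. The names and structure are invented, not Parler's actual code.
enum Label { case incitement, hate, clean }
enum Platform { case iOS, android, web }
enum Visibility { case removed, hidden, labeledClickThrough, visible }

func visibility(of label: Label, on platform: Platform,
                userOptedOutOfHate: Bool) -> Visibility {
    switch label {
    case .incitement:
        return .removed              // removed for everyone, per Peikoff and Guo
    case .hate:
        if platform == .iOS || userOptedOutOfHate {
            return .hidden           // iOS users never see it; others may opt out
        }
        return .labeledClickThrough  // "trolling content detected" warning
    case .clean:
        return .visible
    }
}
```

The notable design choice is who controls the opt-out flag: on Android and the web it belongs to the user, while on iOS Apple has effectively flipped it for everyone.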

Of course, AI moderation is famously mistake-prone. And both Parler and Hive execs recognize this:

Peikoff said Hive recently flagged for nudity her favorite art piece, the “To Mennesker” naked figures sculpture by Danish artist Stephan Sinding, when she posted it. The image was immediately covered with a splash screen indicating it was unsafe.

“Even the best AI moderation has some error rate,” Guo said. He said the company’s models show that one to two posts out of every 10,000 viewed by the AI should have been caught on Parler but aren’t.

I do question those percentages, but either way it’s another interesting example of how content moderation continues to evolve — even if Parler’s users are angry that they won’t be able to spew bigotry quite as easily as previously.

Filed Under: content moderation, hate speech, ios, platforms
Companies: apple, hive, parler