face recognition – Techdirt
San Francisco Amends Facial Recognition Ban After Realizing City Employees Could No Longer Use Smartphones
from the just-as-long-as-the-only-exceptions-are-cameras-pointed-at-city-employees dept
In May, San Francisco became the first city in the United States to ban facial recognition tech by city agencies. Being on the cutting edge has its drawbacks, as the city has now found out several months later. Tom Simonite and Gregory Barber of Wired report the city’s legislation inadvertently nuked many of its employees’ devices.
After San Francisco in May placed new controls, including a ban on facial recognition, on municipal surveillance, city employees began taking stock of what technology agencies already owned. They quickly learned that the city owned a lot of facial recognition technology—much of it in workers’ pockets.
City-issued iPhones equipped with Apple’s signature unlock feature, Face ID, were now illegal—even if the feature was turned off, says Lee Hepner, an aide to Supervisor Aaron Peskin, the member of the local Board of Supervisors who spearheaded the ban.
The law forbids the use of facial recognition tech, even if all it’s doing is allowing employees to use their own faces to unlock their phones. An untold number of devices were rendered useless by the ban, but it’s probably safe to assume the law was broken repeatedly until its very quiet amendment last week. Municipal agencies are once again allowed to procure devices that utilize facial recognition tech as long as they’re “critically necessary” and there are no other alternatives.
This does not mean agencies can purchase facial recognition tech that does anything more than secure employees’ devices. And it also means the San Francisco Police Department had to give up one of its toys — one city leadership apparently knew nothing about.
Around the same time, police department staffers scurried to disable a facial recognition system for searching mug shots that was unknown to the public or Peskin’s office. The department called South Carolina’s DataWorks Plus and asked it to disable facial recognition software the city had acquired from the company, according to company vice president Todd Pastorini.
This surveillance tool went unacknowledged during the city’s institution of a facial recognition ban. As Wired reports, San Francisco claimed it had stopped testing facial recognition software in 2017. This denial sidestepped the untested (I guess) acquisition of DataWorks facial recognition tech with a contract that was originally due to run through 2020. According to documents obtained by Wired, the SFPD “dismantled” its DataWorks servers and allowed the contract to lapse after its 90-day trial period. That apparently ended in January before the law took effect.
Even so, it’s not exactly comforting that the SFPD was able to secure and test drive facial recognition tech with zero public notice. City legislators made no mention of this tech or the SFPD’s prior exploration of facial recognition when they began moving forward with the legislation earlier this year.
The other concern is a new one: the city’s Board of Supervisors has already amended its ban to allow city use of smartphones with biometric security features. While this may have been necessary to ensure employees could use city-issued devices, it also shows the city can be talked into punching holes in its brand new legislation. The city may hold firm in the future, but there’s a good chance it will carve out other loopholes if the arguments are persuasive enough. It all depends on the definition of “critically necessary” — a term that can become especially malleable following a mass tragedy or an uptick in violent crime, for example.
But for now, the ban holds, minus the inadvertent collateral damage. The city’s government should still be applauded for its willingness to put its citizens above its own interests with this legislation, but any further requests for exceptions should be greeted with an overabundance of caution.
Filed Under: face id, face recognition, facial recognition, iphones, san francisco, sfpd
Companies: dataworks
Ring Has A 'Head Of Face Recognition Tech,' Says It's Not Using Facial Recognition Tech. Yet.
from the currently-not-doing-this-thing-we're-considering-doing dept
Amazon has developed facial recognition tech it’s inordinately proud of. Known as “Rekognition,” it’s not nearly as accurate as its deliberately misspelled moniker suggests it is. It drew Congressional heat last year when it misidentified a number of Congress members as criminals.
There has been no interplay between Amazon’s Rekognition software and the Ring doorbell cameras its subsidiary is pushing to cops (who then push them to citizens). Yet. Maybe there never will be. But it’s pretty much an inevitability that Ring cameras will, at some point, employ facial recognition tech.
There’s probably no hurry at the moment. The doorbell camera company doesn’t seem all that concerned about optics — not after partnering with 400 law enforcement agencies en route to securing 97% of the doorbell camera market. When not writing press releases and social media posts for cop shops, Ring is waging a low-effort charm offensive with vapid blog posts meant to boost its reputation as a crime-fighting device while burying all the questionable aspects of its efforts — like encouraging “sharing” of footage with law enforcement so they don’t have to go through the hassle of obtaining a warrant.
Ring is toughening up a bit in the face of all this bad press. It’s engaging directly with critics on Twitter to rebut points they haven’t made and answer questions they didn’t actually ask. It responded to the ACLU’s post that theorized about Amazon’s forays into surveillance tech, positing that the company’s Rekognition software and Ring doorbell cameras make for a dynamic surveillance duo — one that faces outwards from millions of private homes around the nation.
Ring says it does not use facial recognition tech in its doorbells. It has made this statement multiple times in the past couple of weeks. That’s good news. But it’s not the end of the story. Nicole Nguyen and Ryan Mac of BuzzFeed are countering Ring’s PR push by pointing out that it’s a little weird for a company that says it does not use facial recognition tech to employ someone directly tasked with exploring facial recognition opportunities. (via Boing Boing)
While Ring devices don’t currently use facial recognition technology, the company’s Ukraine arm appears to be working on it. “We develop semi-automated crime prevention and monitoring systems which are based on, but not limited to, face recognition,” reads Ring Ukraine’s website. BuzzFeed News also found a 2018 presentation from Ring Ukraine’s “head of face recognition research” online and direct references to the technology on its website.
Maybe the stateside version isn’t ready to mix in the tech, but its Ukraine arm seems poised to explore this option. The presentation BuzzFeed located was created by Oleksandr Obiednikov, who listed himself as Ring’s “Head of Face Recognition Tech” in his presentation about “alignment-free face recognition.”
Ring’s US operations also indicate the company is looking into this, even if it hasn’t added the tech yet.
In November 2018, Ring filed two patent applications that describe technology with the ability to identify “suspicious people” and create a “database of suspicious persons.”
So, the company’s assertions about facial recognition tech appear to be true, but only because it has added the qualifier “currently” to its statements. The pairing of doorbell cameras to unproven, often-inaccurate facial recognition tech is all but assured. Ring’s denials would be a whole lot more palatable if it wasn’t exploring this option elsewhere in the world.
We may only be on the outskirts of a corporation-enabled dystopia at the moment, but a future full of unblinking eyes containing biometric scanning capabilities is swiftly approaching. And this surveillance state won’t be the product of a show of force by the government, but the result of private companies using law enforcement to expand their user base with a series of “would you kindly?” requests.
Filed Under: doorbells, face recognition, ring, surveillance
Companies: amazon, ring
Student Files $1 Billion Lawsuit Against Apple Over Supposedly Faulty Facial Recognition Tech That Falsely Accused Him Of Theft
from the probably-something-low-tech-going-on-here dept
An 18-year-old resident of New York City is suing Apple for $1 billion. His lawsuit alleges Apple uses facial recognition technology as part of its stores’ security systems and that this led directly to him being accused of multiple thefts across a handful of states… despite him bearing zero resemblance to the thief caught on tape.
Ousmane Bah’s lawsuit [PDF] alleges Apple failed in its duty of care by attributing all these thefts to him, despite him not being the thief, resulting in numerous harms and injuries.
As a result of this action, Defendant’s facial recognition technology associated the face of the perpetrator of multiple crimes with Mr. Bah’s name and address. This led to Mr. Bah being charged with multiple crimes of larceny across a number of states.
Mr. Bah has been undoubtedly harmed by Defendant’s wrongful actions. He has been forced to travel to multiple states, including Massachusetts, New Jersey, and Delaware. He has also been subject to a shocking and traumatic arrest made by the NYPD at his home at four o’clock in the morning.
Mr. Bah’s education has also been negatively affected due to Defendant’s actions. He has been forced to miss multiple days of school in order to travel in response to charges wrongfully made against him. Additionally, on the day that he was arrested in New York, he was supposed to take a midterm exam. His grade was negatively affected.
But while the lawsuit leans into the facial recognition theory, Apple has stated it does not use this tech in its store security systems — ones run by Security Industry Specialists, Inc., which is the other defendant named in Bah’s lawsuit.
While facial recognition tech is definitely in Apple’s wheelhouse, to date it’s only used as a biometric security feature. Phone owners can unlock their phones using their faces — something that’s definitely handy, if not of much help when Constitutional rights are at stake.
Bah’s lawsuit leans into this theory, peppering it with links to Face ID articles and footnotes expressing concern about facial recognition tech’s notorious unreliability. But Bah’s lawsuit narrative seems to undercut his claims about facial recognition tech and the damage done.
There’s a far more reasonable explanation for what happened here, and it’s all in the narrative and allegations. This seems to be an old school case of mistaken identity, rather than unproven tech fingering the wrong suspect.
Bah had recently applied for a New York driver’s license. He had a photoless paper permit to tide him over until his actual ID was mailed to him. He lost this permit, which then apparently made its way into the thief’s hands and, ultimately, into the hands of Apple’s security people.
This information comes courtesy of an NYPD detective’s educated guesses about the source of the theft claims against Bah.
Detective Reinhold of the NYPD soon realized that Mr. Bah was wrongfully arrested and that he was not the suspect of the crimes perpetrated against Defendant. Detective Reinhold stated that he had viewed the surveillance video from the Manhattan store and concluded that the suspect “looked nothing like” Mr. Bah.
At that point, Detective Reinhold also explained that Defendant’s security technology identifies suspects of theft using facial recognition technology. Further, he suspected that the person who had committed the crimes must have presented Mr. Bah’s interim permit as identification during one of his multiple offenses against Defendant which took place over many months and in multiple states.
Detective Reinhold’s speculation about facial recognition tech may be faulty, but the fact that the suspect presented a photoless ID identifying himself as Ousmane Bah likely explains why Apple reported the faux Bah to the police. When the suspect stole things from other stores, security personnel saw it was the same person they only knew as Bah from the ID the suspect had presented to them.
Since Apple didn’t have a photo to cross-reference with its camera footage, the Bah seen in the recordings was the only Bah it knew, even if it wasn’t actually Ousmane Bah, upstanding citizen and recent high school graduate. Fortunately for the real Bah, charges have been dropped in all but one case. New Jersey is the sole holdout but those charges will likely follow the rest into the prosecutor’s trash can.
Unfortunately for Bah, bogus charges result in real hardships, real reputational damage, and one very real 4 a.m. arrest. But is that $1 billion worth of damages? It’s highly unlikely Bah will walk away with anything approaching this amount. In fact, Apple’s good faith efforts to catch a thief known only as “Ousmane Bah” don’t really show the company acting negligently. It was working with the information it had. The information may have been bad, but Apple didn’t have any way of knowing that when it turned over information to law enforcement.
Simply saying something does not make it so — which is definitely going to get pointed out by a judge when they reach this part of Bah’s arguments:
The identification of Mr. Bah and subsequent charges filed against him, as well as the traumatic arrest which took place at his home, were completely preventable by Defendant. All of these events were caused by Defendant’s negligent acceptance of an interim permit, which did not contain a photo, did not properly describe the suspect presenting it, and clearly stated that it could not be used for identification purposes, as a valid form of identification.
Additionally, had Defendant taken action to correct their error after it had become aware that its facial recognition technology had continuously wrongfully implicated Mr. Bah, further injury could easily have been avoided.
It’s tough to “easily avoid” something Apple wasn’t aware of until after Bah’s arrest by the NYPD. Only after this arrest occurred did information begin to circulate that Bah wasn’t the thief in the security videos — information that first made its way to the Boston PD, not to Apple’s security team.
Bah claims Apple kept “prosecuting” these charges even after it might have been aware it had misidentified the suspect. But Apple doesn’t prosecute charges. Prosecutors do. There’s nothing on the record at this point indicating Apple security was approached by law enforcement with information showing the wrong person had been named as a suspect but chose to continue pressing charges against Bah.
In the end, this appears to be a very unfortunate situation in which everyone did what they could but still ended up harming a teen with a clean criminal record. Apple worked with the information it had, aided in part by Bah never reporting his interim ID card going missing. The officers who did handle Bah’s cases discovered he was the wrong person, passed that information on to others, and had prosecutors drop the charges. Facial recognition tech was dragged into this lawsuit via speculation from an NYPD detective, but Bah’s narrative is just as speculative, accusing Apple of using faulty tech when it most likely used tech — CCTV — everyone’s been using for years.
Filed Under: apple stores, face recognition, facial recognition, ousmane bah, security
Companies: apple
Microsoft Posts List Of Facial Recognition Tech Guidelines It Thinks The Government Should Make Mandatory
from the good-rules,-cloudy-motive dept
Earlier this year, Microsoft faced backlash for appearing to be working with ICE to provide it with facial recognition technology. A January blog post from its Azure Government wing stated it had acquired certification to set up and manage ICE cloud services. The key bit was this paragraph, which definitely made it seem Microsoft was joining ICE in the facial recognition business.
This ATO [Authority to Operate] is a critical next step in enabling ICE to deliver such services as cloud-based identity and access, serving both employees and citizens from applications hosted in the cloud. This can help employees make more informed decisions faster, with Azure Government enabling them to process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification.
Roughly five months later, this blog post was discovered, leading to Microsoft receiving a large dose of social media shaming. A number of its own employees signed a letter opposing any involvement at all with ICE. A July blog post from the president of Microsoft addressed the fallout from the company’s partnership with ICE. It clarified that Microsoft was not actually providing facial recognition tech to the agency and laid out a number of ground rules the company felt would best serve everyone going forward.
This starting point has now morphed into a full-fledged rule set Microsoft will apparently be applying to itself. Microsoft’s Brad Smith again addresses the positives and negatives of facial recognition tech, especially when it’s deployed by government agencies. The blog post is a call for government regulation, not just of tech companies offering this technology, but also of the agencies deploying it.
Smith’s post is long, thoughtful, and detailed. I encourage you to read it for yourself. But most of it falls under these headings — all issues Microsoft believes should be addressed via federal legislation.
First, especially in its current state of development, certain uses of facial recognition technology increase the risk of decisions and, more generally, outcomes that are biased and, in some cases, in violation of laws prohibiting discrimination.
Second, the widespread use of this technology can lead to new intrusions into people’s privacy.
And third, the use of facial recognition technology by a government for mass surveillance can encroach on democratic freedoms.
These three points affect everyone involved: the government, facial recognition tech developers, and private sector end users. The post asks the government to police itself, as well as any vendors it deals with. It’s a big ask, especially since the government has historically shown minimal restraint when exploiting new surveillance technology. It often falls on the nation’s courts to regulate the government’s tech use, rather than the government being proactively cautious when rolling out new tools and toys.
But it also demands a lot from the private sector and suggests those who can’t follow these rules Microsoft has laid out shouldn’t be allowed to offer their services to the government. Here’s what Smith proposes as a baseline for the tech side:
Legislation should require tech companies that offer facial recognition services to provide documentation that explains the capabilities and limitations of the technology in terms that customers and consumers can understand.
New laws should also require that providers of commercial facial recognition services enable third parties engaged in independent testing to conduct and publish reasonable tests of their facial recognition services for accuracy and unfair bias.
[…]
While human beings of course are not immune to errors or biases, we believe that in certain high-stakes scenarios, it’s critical for qualified people to review facial recognition results and make key decisions rather than simply turn them over to computers. New legislation should therefore require that entities that deploy facial recognition undertake meaningful human review of facial recognition results prior to making final decisions for what the law deems to be “consequential use cases” that affect consumers. […]
Finally, it’s important for the entities that deploy facial recognition services to recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers. This provides additional reason to ensure that humans undertake meaningful review, given their ongoing and ultimate accountability under the law for decisions that are based on the use of facial recognition.
This is the burden on the tech side. What the government needs to do is just not use it for mass surveillance or the continuous surveillance of certain people. Microsoft suggests warrants for continuous surveillance using facial recognition tech with the expected exceptions for emergencies and public safety risks.
When combined with ubiquitous cameras and massive computing power and storage in the cloud, a government could use facial recognition technology to enable continuous surveillance of specific individuals. It could follow anyone anywhere, or for that matter, everyone everywhere. It could do this at any time or even all the time. This use of facial recognition technology could unleash mass surveillance on an unprecedented scale.
[…]
We must ensure that the year 2024 doesn’t look like a page from the novel “1984.” An indispensable democratic principle has always been the tenet that no government is above the law. Today this requires that we ensure that governmental use of facial recognition technology remain subject to the rule of law. New legislation can put us on this path.
It’s all good stuff that would protect citizens and curb abusive tech deployment if implemented across the board by tech companies. But that would likely require a legislative mandate, according to Microsoft. The end result is Microsoft asking the same entity it feels may abuse the tech to lay down federal guidelines for development and deployment.
I don’t have any complaints about what Microsoft’s proposing. I only question why it’s proposing it. When a large corporation starts asking for government regulation, it’s usually because increased regulation would keep the market smaller and help weed out a few possible competitors. I wouldn’t say this is the only reason Microsoft is handing out a long wish list of government mandates, but there’s no way this isn’t a factor.
Microsoft’s management likely has genuine concerns about this tech and its future uses. Somewhat coincidentally, it’s also in the best position to make these arguments. Other than a supposed misunderstanding about selling facial recognition tech to ICE, the company hasn’t set its reputation on fire and/or been caught handing the government loads of tools that can be repurposed for oppression.
Other players in the facial recognition market have already ceded the high ground. Amazon has been handing out tech to law enforcement agencies even as Congress members are demanding answers from the company about its facial recognition software. Google may not be pushing facial recognition tech, but with it currently engaged in building an oppressor-friendly search engine for China’s government, it can’t really portray itself as a champion of civil liberties. Facebook has used facial recognition tech for years, but is currently so toxic no one really wants to hear what it has to say about privacy or government surveillance. Apple may have some guidance to offer, but the DOJ likely uses Tim Cook headshots for dartboards, making it less than receptive to the company’s thoughts on biometric scanning. As for the rest of the players in the field — the multiple contractors who sell surveillance equipment to governments all over the world — they have zero concerns about government abuse or respecting civil liberties, so Microsoft’s post may as well be written in Etruscan for all they’ll get out of it.
I’m in firm agreement with Brad Smith/Microsoft that facial recognition tech is a threat to privacy and civil liberties. I also believe the companies crafting/selling this tech should vet their products thoroughly and be prepared to shut them down if they can’t eliminate bias or if products are being used to conduct pervasive, unjustified surveillance. I don’t believe most tech companies will do this voluntarily and know for a fact the government will not actively police use of these systems. The status quo — zero accountability from governments and government contractors — cannot remain in place. The courts may right some wrongs eventually, but until then, suppliers of facial recognition technology are complicit in the resulting civil liberties violations.
I applaud Microsoft for calling for action. But I will hold that applause until it becomes apparent Microsoft will maintain these standards internally, with or without a legislative mandate. If other companies choose to sign on as… I don’t know… ethical surveillance tech dealers, that would be great. Asking the government to regulate tech development isn’t the preferred course of action, but a surveillance tech Wild West isn’t an ideal outcome either. Ideally, the government would set higher standards for adoption and deployment of tech along the lines Microsoft has proposed, policing itself by vetting its vendors better. But if the federal government was truly interested in limiting its abuse of tech developments, we would have seen some evidence of it already.
These suggestions should be voluntarily adopted by other tech companies, if for no other reason than it insulates them from elimination should the government decide it’s going to up its acquisition and deployment standards. Microsoft scores a PR win, if nothing else, simply by being first. I appreciate Microsoft staking out its stance on this issue, but remain cautiously pessimistic about the company’s ability to live up to its own standards.
Filed Under: face recognition, regulations
Companies: microsoft
Body Cam Company Files Patent For Built-In Facial Recognition Tech
from the papers,-please-v.-2.0 dept
Police body cameras are the savior that failed to materialize. Accountability was the end goal, but obstacles were immediately erected by internal policies, cop-friendly legislation, and existing public records carve-outs for anything “investigation”-related.
Making things worse are the officers themselves. When excessive force or other unconstitutional tactics are deployed, body cams seem to malfunction at an alarming rate. And that’s only if officers can be bothered to turn them on at all. Body cams have served up a bunch of exonerating footage and delivered evidence to prosecutors, but have done little to make law enforcement more accountable.
This trend isn’t in any danger of reversing. Body cam manufacturers are seeking to expand their offerings, but the focus appears to be on giving law enforcement the extras it wants, rather than what the public is actually seeking. A good summary of recent body cam developments by Sidney Fussell at The Atlantic contains a discussion of a new patent application by body cam manufacturer Digital Ally.
While the patent application contains some nice “triggering” effects that may result in more captured footage of questionable incidents, it also contains something that would turn passive recordings into active surveillance.
In mid-September, Digital Ally, a cloud-storage and video-imaging company, announced a series of patents for cameras that would automatically be triggered by various stimuli, not just an officer pressing record. Theoretically, these would end the problem of both re-creation and cameras inexplicably failing to record use-of-force scenarios.
Some of the “triggering events” in Digital Ally’s patents are for crisis response, in the event of a car crash or a gun being pulled from its holster. But some of the auto-triggers would cause the cameras to record simply as police move through public spaces. As described in one patent, an officer could set his body camera to actively search for anyone with an active warrant. Using face recognition, the camera would scan the faces of the public, then compare that against a database of wanted people. (The process could work similarly for a missing person.) If there’s a match, the camera would begin to record automatically.
As Fussell points out, the tech isn’t ready yet. Processing power is still an issue. Facial recognition software needs to perform a lot of complex calculations quickly to provide near-instant feedback, and it’s unlikely Digital Ally has found a way to cram that into a body cam yet. Moving the processing to the cloud solves space management problems, but would require far faster connections than are ordinarily available to portable devices.
But it won’t be this way forever. And that has a lot of privacy implications, even when the footage is gathered in public spaces. Law enforcement doesn’t have the right to demand you identify yourself in most situations, but inserting facial recognition tech in body cams strips away a small protection citizens currently possess against government power. The software will provide officers with personal information even when there’s no reason for officers to have it. All of this is problematic even before you get to the issue of false positives and the havoc they can wreak.
It’s almost inevitable facial recognition tech will be deployed by law enforcement. The DHS is already aiming cameras at passengers flying into the country to search for criminal suspects and others it wants to subject to additional screening. Any public place that routinely hosts large gatherings of people will be next in line for biometric scanning. And once the tech catches up with the dream, body cams will ID pedestrians by the hundreds, even when cops aren’t actively searching for subjects or suspects.
No matter how it’s pitched in the future, it’s important to remember law enforcement isn’t somehow “owed” every tech advance that comes its way. Supporters will say facial recognition tech makes both officers and the public safer. This may be true, but so do all sorts of privacy/Constitutional violations. Our rights protect us from the government. And that includes the right to go about your business without being asked by officers to identify yourself. Removing this right via technical wizardry and making it a passive experience for all involved doesn’t make it any less of a violation.
Filed Under: body cameras, face recognition, patents, police, privacy
Companies: digital ally
Congress Members Demand Answers From, Investigation Of Federal Facial Rec Tech Users
from the software-terrible-with-names,-worse-with-faces dept
The ACLU’s test of Amazon’s facial recognition software went off without a hitch. On default settings, the software declared 28 Congressional members to be criminals after being “matched” with publicly-available mugshots. This number seemed suspiciously low to cynics critical of all things government. The number was also alarmingly high, as in an incredible number of false positives for such a small data set (members of the House and Senate).
Amazon argued the test run by the ACLU using the company’s “Rekognition” software was unfair because it used the default settings — 80% “confidence.” The ACLU argued the test was fair because it used the default settings — 80% confidence. Amazon noted it recommended law enforcement bump that up to 95% before performing searches, but nothing in the software prompts users to select a higher setting for more accurate results.
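To see why the threshold dispute matters, here’s a minimal sketch of how confidence filtering works. The scores and names below are made up for illustration; this is not Rekognition’s actual API or output, just the generic pattern of reporting only candidate matches at or above a cutoff:

```python
# Hypothetical similarity scores a face-matching system might return
# for one probe photo. Illustrative numbers only, not real output.
candidate_matches = [
    {"name": "Rep. A", "confidence": 0.82},
    {"name": "Rep. B", "confidence": 0.96},
    {"name": "Rep. C", "confidence": 0.88},
    {"name": "Rep. D", "confidence": 0.79},
]

def matches_above(candidates, threshold):
    """Return only the candidates at or above the confidence threshold."""
    return [c["name"] for c in candidates if c["confidence"] >= threshold]

# At the 80% default, three of the four candidates are reported as "matches".
print(matches_above(candidate_matches, 0.80))  # ['Rep. A', 'Rep. B', 'Rep. C']

# At the recommended 95%, only the strongest candidate survives.
print(matches_above(candidate_matches, 0.95))  # ['Rep. B']
```

The same search, run at two different thresholds, produces very different numbers of “criminals” — which is why a default setting that nothing prompts users to change is the whole argument.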
This upset members of Congress who weren’t used to being called criminals… at least not by a piece of software. More disturbing than the false positives was the software’s tendency to falsely match African-American Congressional reps to criminal mugshots, suggesting the act of governing while black might be a criminal activity.
Congressional members sent a letter to Amazon the same day the ACLU released its report, demanding answers from the company for this abysmal performance. Ron Wyden has already stepped up to demand answers from the other beneficiaries of this tech: federal law enforcement agencies. His letter [PDF] reads like an expansive FOIA request, only one less likely to arrive with redactions and/or demands that the scope of the request be narrowed.
Wyden is asking lots of questions that need answers. Law enforcement has rushed to embrace this technology even as multiple pilot programs have generated thousands of bogus matches while returning a very small number of legitimate hits. Wyden wants to know which federal agencies are using the software, what they’re using it for, and what they hope to achieve by using it. He also wants to know who’s supplying the software, what policies are governing its use, and where it’s being deployed. Perhaps most importantly, Wyden asks if agencies using facial recognition tech are performing regular audits to quantify the software’s accuracy.
That isn’t the only facial recognition letter-writing Wyden has signed his name to. The Hill reports Congressional reps have also sent one to the Government Accountability Office, asking it to open an investigation into facial recognition software use by federal agencies.
“Given the recent advances in commercial facial recognition technology – and its expanded use by state, local, and federal law enforcement, particularly the FBI and Immigration and Customs Enforcement – we ask that you investigate and evaluate the facial recognition industry and its government use,” the lawmakers wrote.
The letter, signed by Rep. Jerrold Nadler and Sens. Ron Wyden, Cory Booker, Christopher Coons (D-Del.) and Ed Markey (D-Mass.), asks the GAO to examine “whether commercial entities selling facial recognition adequately audit use of their technology to ensure that use is not unlawful, inconsistent with terms of service, or otherwise raise privacy, civil rights, and civil liberties concerns.”
The public has a right to know what public surveillance methods are being deployed against it and how accurate or useful these tools are in achieving agencies’ stated goals. Privacy expectations all but vanish when the public goes out in public, but that doesn’t mean their daily movements can automatically be considered grist for a government surveillance mill. If recent surveillance tech history is any indication, whatever privacy implications exist likely have not been addressed pre-deployment. Before the government wholeheartedly embraces tech with an unproven track record, federal agencies need to spend some quality time with the people they serve and the overseers who act as a proxy for direct supervision.
Filed Under: congress, criminals, face recognition, matching, rekognition, ron wyden
Companies: amazon
Congress Members Want Answers After Amazon's Facial Recognition Software Says 28 Of Them Are Criminals
from the but-they're-all-crooks-amirite dept
Hey, American citizens! Several of your Congressional representatives are criminals! Unfortunately, this will come as completely expected news to many constituents. The cynic in all of us knows the only difference between a criminal and a Congressperson is a secured conviction.
We may not have the evidence we need to prove this, but we have something even better: facial recognition technology. This new way of separating the good and bad through the application of AI and algorithms is known for two things: being pushed towards ubiquity by government agencies and being really, really bad at making positive identifications.
At this point it’s unclear how much Prime members will save on legal fees and bail expenditures, but Amazon is making its facial recognition tech (“Rekognition”) available to law enforcement. It’s also making it available to the public for testing. The ACLU took Amazon up on its offer, spending $12.33 to obtain a couple dozen false hits using shots of Congressional mugs.
In a test the ACLU recently conducted of the facial recognition tool, called “Rekognition,” the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.
The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country.
The bad news gets worse.
The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.).
And here’s the chilling lineup of usual suspects according to Amazon’s Rekognition:
Using 25,000 publicly-available mugshots and Rekognition’s default settings, the ACLU picked up a bunch of false hits in very little time. This is only a small portion of what’s available to law enforcement using this system. Agencies have access to databases full of personal info and biometric data for hundreds of thousands of people, including people who’ve never been charged with a crime in their lives.
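Some back-of-envelope arithmetic puts the ACLU’s result in perspective: 28 false matches across the 535 sitting members of the House and Senate works out to roughly one misidentification in twenty people scanned.

```python
# Rough arithmetic on the ACLU's test: 28 false matches out of the
# 535 sitting members of Congress, scanned against 25,000 mugshots
# at Rekognition's default 80% confidence setting.

members = 535
false_matches = 28

false_match_rate = false_matches / members
print(f"{false_match_rate:.1%} of members falsely matched")  # 5.2%
```

Scaled up to a city’s worth of faces passing surveillance cameras, a rate anywhere near that magnitude would generate an enormous volume of bogus hits for officers to chase down.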
The obvious downside to a false hit is, at minimum, the unjustified distribution of identifying info to law enforcement officers to confirm/deny the search results. At most, it will be the loss of freedom for someone wrongly identified as someone else. Recourse takes the form of lawsuits with a high bar for entry and slim likelihood of success, thanks to several built-in protections for law enforcement officers.
Amazon continues to market this system to law enforcement agencies despite its apparent shortcomings. Very little has been written about the successes of facial recognition technology. There’s a good reason for this: there aren’t that many. There certainly haven’t been enough to justify the speedy rollout of this tech by a number of government agencies.
This little experiment has already provoked a response from Congressional members who are demanding answers from Amazon about the ACLU’s test results. Amazon, for its part, claims the ACLU’s test was “unfair” because it used the default 80% “confidence” setting, rather than the 95% recommended for law enforcement. The ACLU has responded, noting this is the default setting on Rekognition and nothing prompts the user — who could be a law enforcement officer — to change this setting to eliminate more false positives. In any event, at least Congress is talking about it, rather than nodding along appreciatively as federal agencies deploy the tech without public consultation or mandated privacy impact reports.
Filed Under: congress, face recognition, john lewis, rekognition
Companies: aclu, amazon
Facial Recognition Company Says It Won't Sell To Law Enforcement, Knowing It'll Be Abused
from the taking-a-stand dept
We just recently wrote about employees at Amazon speaking out inside the company against its selling of its face recognition tools (called “Rekognition”) to law enforcement. That prompted Brian Brackeen, CEO of facial recognition software maker Kairos, to publicly state that his company will not sell to law enforcement.
The full article is worth reading, but he notes that the technology will be abused and misused by law enforcement — and often in a way that will lead to false arrests and murder:
Having the privilege of a comprehensive understanding of how the software works gives me a unique perspective that has shaped my positions about its uses. As a result, I (and my company) have come to believe that the use of commercial facial recognition in law enforcement or in government surveillance of any kind is wrong — and that it opens the door for gross misconduct by the morally corrupt.
To be truly effective, the algorithms powering facial recognition software require a massive amount of information. The more images of people of color it sees, the more likely it is to properly identify them. The problem is, existing software has not been exposed to enough images of people of color to be confidently relied upon to identify them.
And misidentification could lead to wrongful conviction, or far worse.
As he states later in the piece:
There is no place in America for facial recognition that supports false arrests and murder.
It’s good to see this, and whether you support the police or not, we should appreciate this moment — just as we should appreciate the people at Amazon who stood up and complained about this. Too often lately, the tech industry gets slammed for not taking into account the impact of its technology in its rush to push innovation forward at any cost. I’ve always felt that narrative is a bit exaggerated. I talk to a lot of entrepreneurs who really do think quite a lot about how their technology may impact the world — both good and bad. But it’s good to see people in the industry speaking out publicly about how that might happen, and why they need to make sure not to oversell the technology in ways likely to cause real harm.
Filed Under: brian brackeen, face recognition, law enforcement
Companies: kairos
Judge OKs Class Action Status For Illinoisans Claiming Facebook Violated State Privacy Law
from the face-off dept
The last time we discussed Illinois’ Biometric Information Privacy Act, a 2008 law that gives citizens in the state rights governing how companies collect and protect their biometric data, it was when a brother/sister pair attempted to use the law to pull cash from Take-Two Interactive over its face-scanning app for the NBA2K series. In that case, the court ruled that the two could not claim to have suffered any actual harm as a result of using their avatars, with their real faces attached, in the game’s online play. One of the chief protections of the BIPA law is that users of a service must not find their biometric data being used in a way they had not intended. In that case, online play with these avatars was precisely the stated purpose of uploading their faces in the first place.
But now the law has found itself in the news again, with a federal court ruling that millions of Facebook users can proceed under a class action with claims that Facebook’s face-tagging database violates BIPA. Perhaps importantly, Facebook’s recent and very public privacy issues may make a difference compared with the Take-Two case.
A federal judge ruled Monday that millions of the social network’s users can proceed as a group with claims that its photo-scanning technology violated an Illinois law by gathering and storing biometric data without their consent. Damages could be steep — a fact that wasn’t lost on the judge, who was unsympathetic to Facebook’s arguments for limiting its legal exposure.
Facebook has for years encouraged users to tag people in photographs they upload in their personal posts and the social network stores the collected information. The company has used a program it calls DeepFace to match other photos of a person. Alphabet’s cloud-based Google Photos service uses similar technology and Google faces a lawsuit in Chicago like the one against Facebook in San Francisco federal court.
Both companies have argued that none of this violates BIPA, even when this face-data database is generated without users’ permission. That seems to contradict BIPA, under which fines between $1,000 and $5,000 can be assessed for each use of a person’s image without their permission. Again, recent news may come into play in this case, as noted by the lawyer for the Facebook users.
“As more people become aware of the scope of Facebook’s data collection and as consequences begin to attach to that data collection, whether economic or regulatory, Facebook will have to take a long look at its privacy practices and make changes consistent with user expectations and regulatory requirements,” he said.
Now, Facebook has argued in court against this moving forward as a class by pointing out that different users could make different claims of harm, impacting both the fines and outcomes of their claims. While there is some merit to that, the court looked at those arguments almost purely as a way for Facebook to try to get away from the enormous damages that could potentially be levied under a class action suit, and rejected them.
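To get a rough sense of the exposure Facebook is arguing against, BIPA’s statutory damages of $1,000 (negligent violation) to $5,000 (reckless or intentional violation) can be multiplied by a class size. The class size below is a hypothetical placeholder, not a figure from the case:

```python
# Back-of-envelope BIPA exposure under the statute's per-violation
# damages. The class size is a hypothetical illustration only.

class_size = 6_000_000  # hypothetical number of class members

low = class_size * 1_000    # $1,000 per negligent violation
high = class_size * 5_000   # $5,000 per reckless/intentional violation
print(f"${low:,} to ${high:,}")  # $6,000,000,000 to $30,000,000,000
```

Even at the statute’s floor, damages in the billions explain why Facebook would rather litigate these claims one user at a time.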
As in the Take-Two case, Facebook is doing everything it can to raise the bar for any judgment on whether these users suffered actual harm, which the company insists they did not.
The Illinois residents who sued argued the 2008 law gives them a “property interest” in the algorithms that constitute their digital identities. The judge agreed that this gives them grounds to accuse Facebook of real harm. Donato ruled that the Illinois law is clear: Facebook collected a “wealth of data on its users, including self-reported residency and IP addresses,” and Facebook has acknowledged that it can identify which users living in Illinois have face templates, he wrote.
We’ve had our problems with class action suits in the past, but it shouldn’t be pushed aside that this case carries the potential for huge damages assessed against Facebook. It’s also another reminder that federal privacy laws are in sore need of modernization, if for no other reason than to harmonize how companies can treat users throughout the United States.
Filed Under: bipa, class action, face recognition, face scanning, illinois
Companies: facebook
ACLU Obtains Documents Showing Amazon Is Handing Out Cheap Facial Recognition Tech To Law Enforcement
from the Prime-membership-not-required dept
More bad news on the privacy front, thanks to one of America’s largest corporations. Documents obtained by the ACLU show Amazon is arming law enforcement agencies with cheap facial recognition tech, allowing them to compare any footage obtained from a variety of sources to uploaded mugshot databases.
The company has developed a powerful and dangerous new facial recognition system and is actively helping governments deploy it. Amazon calls the service “Rekognition.”
Marketing materials and documents obtained by ACLU affiliates in three states reveal a product that can be readily used to violate civil liberties and civil rights. Powered by artificial intelligence, Rekognition can identify, track, and analyze people in real time and recognize up to 100 people in a single image. It can quickly scan information it collects against databases featuring tens of millions of faces, according to Amazon.
It’s already been deployed to several areas around the country, with Amazon acting as the government’s best friend a la AT&T’s historic proactive cooperation with NSA surveillance efforts. The documents [PDF] obtained by the ACLU show Amazon has been congratulated by local law enforcement officials for a “first-of-its-kind public-private partnership,” thanks to its deployment efforts. On top of providing deployment assistance, Amazon also offers troubleshooting and “best practices” for officers using the tech. It has even offered free consulting to agencies expressing an interest in Rekognition.
These efforts aren’t surprising in and of themselves, although Amazon’s complicity in erecting a law enforcement surveillance structure certainly is. Amazon is looking to capture an underserved market, and the more proactive it is, the more market it will secure before competitors arrive. To further cement its position in the marketplace, Amazon is limiting what law enforcement agencies can say about these public-private partnerships.
In the records, Amazon also solicits feedback and ideas for “potential enhancements” to Rekognition’s capabilities for governments. Washington County even signed a non-disclosure agreement created by Amazon to get “insight into the Rekognition roadmap” and provide additional feedback about the product. The county later cited this NDA to justify withholding documents in response to the ACLU’s public records request.
Documents also suggest Amazon is looking to partner with body camera manufacturers to add its facial recognition tech. This is something body camera manufacturers are already considering, and licensing an established product is far easier than building one from the ground up.
The system is powerful and can apparently pull faces from real-time footage to compare to databases. It also allows agencies to track individuals. It puts passive cameras on surveillance steroids, giving any person who strolls past a government camera a chance to be mistaken for a wanted suspect. To date, facial recognition software has managed to generate high numbers of false positives, while only producing a handful of valid arrests.
These efforts have been deployed with zero input from the largest stakeholder in any government operation: the general public.
Because of Rekognition’s capacity for abuse, we asked Washington County and Orlando for any records showing that their communities had been provided an opportunity to discuss the service before its acquisition. We also asked them about rules governing how the powerful surveillance system could be used and ensuring rights would be protected. Neither locality identified such records.
It may be the NDAs discourage public discussion, but more likely the agencies acquiring the tech knew the public wouldn’t be pleased with having their faces photographed, tracked, stored indefinitely, and compared to pictures stored in law enforcement databases. And if public agencies are unwilling to discuss these programs with the public, they’re far less likely to create internal policies governing use of the tech. Amazon’s push to secure a sizable portion of this market is only making things worse, and its use of NDAs is going to further distance these public agencies from being accountable to the people they serve.
Filed Under: face recognition, law enforcement, police, privacy, rekognition, surveillance
Companies: aclu, amazon