gao – Techdirt

GAO Tells TSA It Needs To Make Sure Its Screening Tech Still Works Well, Isn’t Racist

from the show-us-you-actually-care-about-security dept

The TSA was imposed on us following the terrorist attacks on September 11, 2001. Supposedly necessitated by this “new” terrorist threat, the TSA shrugged into action, becoming another layer of irritating bureaucracy standing between benign travelers and their freedom of movement.

Since then, it has gotten worse. The TSA has spent billions on tech, training, and polyester blends and delivered us little more than a new way to get hassled by The Man. What the TSA has been completely incapable of doing in its two decades of existence is prevent any terrorist attacks. Those threats have most often been handled by passengers who were subjected to intrusive screenings only to find themselves on board with would-be terrorists the so-called “experts” were unable to detect.

The TSA has taken every failure — and those have been several and spectacular — in stride. It has asked for more power, more intrusive screening methods and devices, and more facial recognition tech at more airports. And it has obtained all of this, despite being unable to show any of this has made it better at its job.

Facial recognition tech is on its way to becoming the only way passengers will be allowed to board planes. Sure, it’s technically opt-out, but TSA agents aren’t all that forthcoming about this option and tend to treat those who do bypass the biometric screeners as inherently suspicious.

And that’s on top of what the TSA already has. It has spent years obtaining all manner of body/substance screening tech that still hasn’t done much more than prove that letting machines do TSA agents’ jobs for them hasn’t necessarily made us any safer.

The latest report [PDF] from the Government Accountability Office doesn’t offer any signs of improvement. While it does note the TSA has “taken steps” to implement recommendations the GAO handed out on two previous occasions (2019 and 2022), this 2023 report says there’s still a long way to go before the TSA can be considered to be in compliance, much less competent.

Buying and deploying screening tech may sound like a solution to ever-shifting threats (threats that do not include hijacking/blowing up planes, as the TSA inadvertently admitted years ago). But all that money doesn’t mean a thing if no one’s ensuring the purchased devices are doing their job and/or aren’t due to be phased out for something more accurate.

TSA certifies technologies to ensure they meet requirements before deployment, and its officers are to regularly calibrate deployed technologies to demonstrate they are minimally operational. However, neither of these actions ensures that technologies continue to meet requirements after deployment.

“Works out of the box” is not the same thing as “still functioning well” years later. How bad could it be? Well, the report says the TSA was already failing to ensure continued functionality almost a decade ago.

In 2015 and 2016, the Department of Homeland Security (DHS) tested a sample of deployed explosives trace detection and bottled liquid scanner units and found that some no longer met detection requirements.

The TSA’s response was to start working on an update of its tech calibration policies. It started this process in 2020. As we get set to close out 2023, this project still hasn’t been completed. As the GAO points out, the processes currently in place only ensure screening tech is “minimally operational,” a phrase that means it powers up when the power button is pushed, but little else. Meeting the baseline is not the same thing as having equipment capable of detecting explosives years after installation. Machinery doesn’t tend to improve the longer it’s in operation. But the TSA apparently feels its job is done once the machines have been installed and powered up.

The TSA has its excuses, but the GAO says those excuses have zero worth.

TSA officials stated that there are challenges in designing a process to ensure that screening technologies continue to meet detection requirements after deployment. For example, TSA officials stated that it is not feasible to conduct live explosives testing in airports. Further, while it is relatively easy to temporarily transfer smaller screening technologies, such as explosives trace detection units, to a controlled setting for live explosives testing, it would not be feasible to transfer larger installed units, such as advanced imaging technology. However, as we have previously reported, independent test measures exist to test these technologies such as a national standard for measuring image quality in explosives detection systems.

Then there’s the combination of problems caused by the TSA’s tech and the officers who operate it. After meeting with twelve “discussion groups” composed of TSA officers, the GAO found (back in 2022) that the screening tech “alarm[ed] frequently on certain passengers.” Triggering the most “alarms” were transgender passengers, people wearing religious headwear, and “passengers with certain hair types and styles” (read: flying while black). Here’s what those groups had to say:

The officers stated that they push a blue or pink button on the advanced imaging technology machine to specify the gender passengers are scanned as, based on their visual assessment of the passengers’ gender presentation. The officers stated that passengers may undergo additional screening if the gender button selected on the machine does not match the gender of the passenger. In addition, officers noted that transgender passengers may trigger alarms depending on the nature of their transition, because the technology may register potential threats in the groin and chest areas.

The officers also stated that the advanced imaging technology cannot adequately screen certain hair types and styles (e.g., heavy braids), which can result in some passengers, including Black women, triggering alarms on the machines.

Furthermore, officers stated that passengers who have medical conditions, prostheses, or disabilities that prevent them from holding the required position for advanced imaging technology screening (i.e., stand with their arms positioned over their heads) may be required to undergo additional screening.

Not great. As the ACLU and National Center for Transgender Equality pointed out to the GAO and TSA, binary selection is not the way to go. In addition, the Sikh Coalition stated Sikhs were guaranteed to receive additional screening because their turbans would “automatically” trigger alarms from passenger screening tech.

In response, the TSA claimed it frequently met with advocacy groups to help determine what issues were being caused by its tech and its (apparently hands-off) approach to this tech deployment. What the TSA wasn’t able to demonstrate — at least not to the GAO’s satisfaction — was that all of this stated concern for these particular travelers had resulted in any meaningful changes in policies or procedures.

While the TSA did decide to eliminate the pink/blue options and replace them with a single, gender-neutral “scan” button, the rest of its promised changes have yet to materialize. The TSA claims it is collecting more data on “referrals” based on machine alarms and has briefed officers on these issues, but the GAO has yet to see any of this data firsthand.

To fully implement our recommendations, TSA will need to provide evidence that it has collected data on passenger referrals and used these data to assess the extent to which its screening practices align with its anti-discrimination policies to identify any needed actions to improve compliance.

In other words, show your work. Saying is not doing. Much of what’s discussed here has been known by the TSA since 2015. The rest it was surely aware of, even if it wasn’t made officially aware until the 2019 GAO report. It certainly knew its tech was disproportionately singling out certain passengers, but it chose to do nothing about it until its failures were made public by its oversight. Government agencies with this much power should always strive to do better. What they shouldn’t do is wait around until they’ve been instructed to do better several times over the past decade.

Filed Under: gao, racism, screening, security theater, tsa

Oversight Report Finds Several Federal Agencies Are Still Using Clearview’s Facial Recognition Tech

from the look,-we-honestly-thought-no-one-would-keep-asking-questions dept

Two years ago, the Government Accountability Office (GAO) released its initial review of federal use of facial recognition tech. That report found that at least half of the 20 agencies examined were using Clearview’s controversial facial recognition tech.

A follow-up released two months later found even more bad news. In addition to widespread use of Clearview’s still-unvetted tech, multiple DHS components were bypassing internal restrictions by asking state and local agencies to perform facial recognition searches for them.

On top of that, there was very little oversight of this use at any level. Some agencies, which first claimed they did not use the tech, updated their answer to “more than 1,000 searches” when asked again during the GAO’s follow-up.

While more guidelines have been put in place since this first review, it’s not clear those policies are being followed. What’s more, it appears some federal agencies aren’t ensuring investigators are properly trained before setting them loose on, say, Clearview’s 30+ billion image database.

That’s from the most recent report [PDF] by the GAO, which says there’s still a whole lot of work to be done before US residents can consider the government trustworthy as far as facial recognition tech is concerned.

For instance, here’s the FBI’s lack of responsibility, which gets highlighted on the opening page of the GAO report.

FBI officials told key internal stakeholders that certain staff must take training to use one facial recognition service. However, in practice, FBI has only recommended it as a best practice. GAO found that few of these staff completed the training, and across the FBI, only 10 staff completed facial recognition training of 196 staff that accessed the service.

The FBI told the GAO it “intends” to implement a training requirement. But that’s pretty much what it said it would do more than a year ago. Right now, it apparently has a training program. But that doesn’t mean much when hardly anyone is obligated to go through it.

This audit may not have found much in the way of policies or requirements, but it did find that the agencies it surveyed prefer to use the service offered by an industry pariah rather than spend taxpayers’ money on services less likely to make them throw up in their mouths.

Yep. Six out of seven federal agencies prefer Clearview. The only outlier is Customs and Border Protection, although that doesn’t necessarily mean this DHS component isn’t considering adding itself to a list that already includes (but is not limited to) the FBI, ATF, DEA, US Marshals Service, Homeland Security Investigations, and the US Secret Service.

We also don’t know how often this tech is used. And we don’t know this because these federal agencies don’t know this.

Six agencies with available data reported conducting approximately 63,000 searches using facial recognition services from October 2019 through March 2022 in aggregate—an average of 69 searches per day. We refer to the number of searches as approximately 63,000 because the aggregate number of searches that the six agencies reported is an undercount. Specifically, the FBI could not fully account for searches it conducted using two services, Marinus Analytics and Thorn. Additionally, the seventh agency (CBP) did not have available data on the number of searches it performed using either of two services staff used.
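The GAO’s per-day figure checks out against that reporting window. Here’s a quick back-of-the-envelope sketch; the dates and the aggregate total come from the report, while the calculation itself is just ours:

```python
from datetime import date

# Aggregate searches reported by six agencies (an undercount, per the GAO)
searches = 63_000

# Reporting window: October 2019 through March 2022, inclusive
period_days = (date(2022, 3, 31) - date(2019, 10, 1)).days + 1

print(period_days)                    # 913 days
print(round(searches / period_days))  # ~69 searches per day, matching the report
```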

In most cases, neither the agency nor the tech provider tabulated searches. Thorn only tracked the last time a source photo was searched against, not every time that photo had been searched. And, as the GAO notes, its 2021 report found some agencies couldn’t even be bothered to track which facial recognition tech services were being used by employees, much less how often they were accessed.

Most of the (undercounted) 63,000 searches ran through Clearview. Almost every one of these searches was performed without adequate training.

[W]e found that cumulatively, agencies with available data reported conducting about 60,000 searches—nearly all of the roughly 63,000 total searches—without requiring that staff take training on facial recognition technology to use these services.

All of the surveyed agencies have been using facial recognition tech since 2018. And here’s how they’re doing when it comes to handling things like mandated privacy impact assessments and other privacy-focused prerequisites that are supposed to be in place prior to the tech’s deployment. In this case, green means ok [“agency addressed requirement, but not fully”], baby blue means completed fully, and everything else means incomplete.

If there’s any good news to come out of this, it’s that the US Secret Service, DEA, and ATF have all halted use of Clearview. But just because Clearview is the most infamous and most ethically dubious provider of this tech doesn’t mean the other options are so pristine and trustworthy that these agencies should be allowed to continue blowing off their training and privacy impact mandates. These agencies have had two years to get better at this. But it appears they’ve spent most of that time treading water, rather than moving forward.

Filed Under: cbp, dhs, facial recognition, fbi, gao, us government
Companies: clearview, clearview ai

Oversight Agency Says DHS Needs To Stop Screwing Around And Accurately Track Use Of Force By Officers

from the if-you-can-hit-people,-you-can-hit-a-keyboard dept

There are no incentives in place to encourage accurate reporting of force deployment by law enforcement agencies. Tracking use of force means agencies are basically generating evidence for civil rights lawsuits. That’s why force reporting is, at best, inconsistent.

At its worst, it’s simply dishonest. The lack of solid deterrents means agencies simply won’t generate this data, lest it be used against them at some point in time. Policy changes rarely change anything, since they’re almost always unmoored from any substantial form of punishment.

Sure, a few outliers might make a genuine effort to accurately report these numbers, but there’s no concerted or consistent effort being made by the vast majority of agencies affected by reforms, directives, policy changes, etc. that supposedly mandate accurate reporting on force deployment.

So, this report [PDF] from the Government Accountability Office (GAO) reflects more of the same status quo. Directives and recommendations have been handed out for years, including more recent reform efforts meant to limit excessive force deployment. But no one’s actually making anyone comply. That’s how we end up with this:

On May 25, 2022, Executive Order 14,074 required the heads of federal law enforcement agencies, including DHS, to ensure their agencies’ use of force policies reflect principles of valuing and preserving human life and meet or exceed DOJ’s use of force policy.

[…]

While DHS requires the four agencies GAO reviewed to submit data on uses of force, the data submitted to DHS undercount the frequency that officers used force against subjects. For example, agencies sometimes submitted data to DHS that counted multiple reportable uses of force as a single “incident.”

To be sure, the cops (federal or not) brought this upon themselves. Two solid years of protests against police violence (provoked by the murder of George Floyd by Minneapolis police officer Derek Chauvin) forced the new presidential administration to roll back directives installed by its predecessor — someone who chose to believe it was the policed who were the actual problem.

Following high-profile deaths during law enforcement encounters and the subsequent public demonstrations in the summer of 2020, as well as events at the southern border in September 2021, the President signed an executive order on May 25, 2022, that addressed issues related to the use of force in federal law enforcement. The executive order noted the importance of strengthening trust between law enforcement officers and the communities they serve, as well as ensuring the criminal justice system serves and protects all people equally.

Well, you can’t rebuild trust if you’re unwilling to report force deployment accurately. And so, it appears DHS entities have a long way to go if they’re going to hold themselves up as examples worthy of being emulated.

The four agencies investigated by the GAO are no one’s idea of trustworthy. The CBP and ICE spent four years under Donald Trump erasing whatever goodwill they might have built up prior to his election. The Federal Protective Service flew under the radar until it was deployed to Portland, Oregon, where it promptly began brutalizing protesters, vanishing people off the street, and ignoring multiple court orders telling it to stop doing all of the above. And as for the US Secret Service, it’s never violated rights en masse, but it’s definitely home to multiple, still-unaddressed problems that range from moral turpitude to blatant obstruction.

The GAO’s previous examination led to a handful of recommendations. But despite having months (and all the money in taxpayers’ wallets) to do so, more than half of these DHS components had failed to do anything more than promise to try.

As of February 2023, DHS had addressed our recommendation to develop standards for its agencies about what types of use of force should be reported but had not fully addressed the others. For example, it established a working group to oversee data collection, but that group had not yet developed monitoring mechanisms to ensure that reporting information is consistent and complete.

We also recommended that ICE and Secret Service modify their policies to ensure officials document the determinations of whether officers’ uses of force were within policy. As of February 2023, Secret Service had addressed GAO’s recommendation by issuing a new policy to document determinations, but ICE had not yet done so.

From what’s included in the report, it appears most of the agencies believed that mandates for use-of-force tracking meant they should do things like engage in more firearms certification, improve proficiency in less-lethal force deployment, say something nice about de-escalation for four hours a year, and avoid any discussion about implicit bias. Very little of the post-Executive Order efforts appear to actually be aimed at addressing the problem the EO was trying to address, i.e. abusive acts by federal officers.

Use-of-force reporting mandates are all over the place. Some federal officers are required to at least verbally report force deployment by the end of their shifts. ICE officers are required to “verbally” report this information within an hour of its occurrence. As for the permanent record, written reports are required anywhere from “by the end of shift” to 72 hours after the incident.

Because standards are inconsistent across DHS components and agencies/officers are rarely interested in accurately reporting their possible rights violations, the reported totals can’t be trusted.

Here’s how the Federal Protective Service (FPS) serves itself by under-reporting force deployment:

We found that officers sometimes report multiple uses of force in one report. For example, during demonstrations in Portland, Oregon, in February 2021, some individual officers used force multiple times during the course of an evening, but reported these uses to FPS on a single reporting form. In one case, over the course of 30 minutes, one officer deployed his less-lethal weapon three separate times, each time hitting a different individual. The officer reported these three uses of force to FPS in one report.

CBP does the same thing:

CBP data show that more than 1,700 use of force incidents occurred across the 2021-2022 fiscal year period. Of these, 291 incidents involved multiple officers using force, and 216 involved use of force against multiple subjects. For instance, in one encounter with migrants at the U.S. border, four officers reported using force on a group of 62 subjects. CBP recorded these uses of force as one incident.
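To see how much the “incident” framing can hide, here’s a minimal sketch. The record shapes, and the assumption that every officer-subject pairing counts as a separate reportable use of force, are ours, loosely modeled on the GAO’s examples:

```python
# Hypothetical incident records (illustrative only), loosely modeled on the
# GAO's examples: one "incident" can bundle many officers and many subjects.
incidents = [
    {"officers": 4, "subjects": 62},  # border encounter: 4 officers, 62 subjects
    {"officers": 1, "subjects": 3},   # one officer, three separate deployments
    {"officers": 1, "subjects": 1},
]

incident_count = len(incidents)

# If each officer-subject pairing is a distinct reportable use of force,
# the total diverges dramatically from the "incident" count.
uses_of_force = sum(i["officers"] * i["subjects"] for i in incidents)

print(incident_count, uses_of_force)  # 3 incidents vs. 252 uses of force
```

Report the left number and you’ve buried nearly everything the right number would have revealed.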

Obviously, things need to change. The GAO (again) issues more recommendations, including additional reporting training for officers who are either unaware of the reporting requirements or simply choose to ignore them.

The problem is the GAO can’t actually make anyone in the government punish anyone else in the government for breaking the rules. So, it’s up to the DHS to do this. And if it won’t, it’s up to Congress. But if this has been a problem for years and recent social unrest has failed to move the dial, the obvious conclusion is that no one who can actually do anything about this wants to do anything about this.

Filed Under: dhs, gao, police brutality, police violence, use of force

GAO Would Like The FCC To Explain Why It Still Maintains A Pathetic, Dated Definition Of ‘Broadband’

from the setting-the-bar-at-ankle-height dept

Wed, May 3rd 2023 05:33am - Karl Bode

The US has always had a fairly pathetic definition of “broadband.”

Originally defined as anything over 200 kbps in either direction, the definition was updated in 2010 to a pathetic 4 Mbps down, 1 Mbps up. It was updated again in 2015 by the FCC to a better, but still arguably pathetic 25 Mbps downstream, 3 Mbps upstream. As we noted then, the broadband industry whined incessantly about having any higher standards, as it would only further highlight industry failure, the harm of monopolization, and a lack of competition.

Unfortunately for them, public pressure has only grown to push the US definition of broadband even higher. Especially as the government prepares to spend an historic $42 billion in broadband subsidies as part of the recently passed infrastructure bill.

In 2021, a small coalition of Senators wrote the Biden administration to recommend that 100 Mbps in both directions become the new baseline. After some lobbying by cable and wireless companies (whose upstream speeds couldn’t match that standard), FCC boss Jessica Rosenworcel conceded the agency should probably adopt a new standard: 100 Mbps downstream, 20 Mbps upstream.

Not much has happened since.

Enter the Government Accountability Office (GAO), which last week once again issued a report pointing out how the FCC has done a terrible job keeping the definition of broadband updated in the modern era. The report also (too politely) notes the FCC hasn’t really bothered to explain to the public why it clings so desperately to the dated 25/3 standard:

“Our analysis of notices of inquiry and deployment reports shows that FCC has not consistently communicated from year to year how it reviews the broadband speed benchmark and determines whether to update it.”

That reason, of course, is regulatory capture. During the Trump era, the FCC was little more than a rubber stamp to the nation’s biggest telecom monopolies, which have fought against higher standards knowing full well it will only highlight limited competition, patchy availability, and slow speeds.

During the Biden era, FCC boss Jessica Rosenworcel has talked a good game about “bridging the digital divide,” but has also been generally averse to meaningfully criticizing those same monopolies. Efforts to appoint a popular reformer to the agency (Gigi Sohn), were demolished by a bipartisan coalition of corrupt Senators, who (surprise!) also pandered to giants like AT&T, Comcast, Charter, and Verizon.

That’s left the FCC (quite intentionally) without the voting majority needed to overcome the agency’s Republican commissioners, who consistently vote in lockstep with the interests of the telecom lobby.

As a result, the FCC has become increasingly irrelevant in the consumer protection arena or when it comes to defining meaningful competitive and deployment standards. That’s shifted the onus to state and local regulators, who may also be incompetent and corrupt depending on where you live.

This is, as they say, why we can’t have nice things. And why the U.S. consistently ranks somewhere in the middle of the pack when it comes to next-generation broadband. We not only lack consistent competition in the U.S. telecom sector, we lack regulators with anything even vaguely resembling a backbone, even on the most rudimentary of issues deemed controversial by industry.

Filed Under: broadband, digital divide, fcc, fiber optic broadband, gao, goals, high speed internet, Mbps, telecom

from the government-is-not-always-your-mortal-enemy,-weirdo dept

Wed, Dec 7th 2022 03:43pm - Karl Bode

For years, scientific researchers have warned that Elon Musk’s Starlink low Earth orbit (LEO) satellite broadband constellation is harming scientific research. Simply put, the light pollution Musk claimed would never happen in the first place is making it far more difficult to study the night sky, a problem researchers say can be mitigated somewhat but never fully eliminated.

Musk and company claim they’re working on upgraded satellites that are less obtrusive to scientists, but it’s Musk, so who knows if those solutions actually materialize. Musk isn’t alone in his low-orbit satellite ambitions. Numerous other companies, including Jeff Bezos’ Amazon (via Project Kuiper), are planning to fling tens of thousands of these low-orbit satellites into the heavens as “megaconstellations.”

One 2020 paper argued that the approval of these low-orbit satellites by the FCC technically violated the environmental law embedded in the 1970 U.S. National Environmental Policy Act (NEPA). Scientific American notes how the FCC has thus far sidestepped NEPA’s oversight, thanks to a “categorical exclusion” the agency was granted in 1986 — long before LEO satellites were a threat.

Last week yet another study emerged from the U.S. Government Accountability Office (GAO, full study here), recommending that the FCC at least revisit the issue:

“We think they need to revisit [the categorical exclusion] because the situation is so different than it was in 1986,” says Andrew Von Ah, a director at the GAO and one of the report’s two lead authors. The White House Council on Environmental Quality (CEQ) recommends that agencies “revisit things like categorical exclusions once every seven years,” Von Ah says. But the FCC “hasn’t really done that since 1986.”

Despite the fact that low-earth orbit solutions like Starlink generally lack the capacity to be meaningfully disruptive to the country’s broadband monopolies, and are, so far, too expensive to address one of the biggest obstacles to adoption (high prices due to said monopolies), the FCC has generally adopted a “we’re too bedazzled by the innovation to bother” mindset until recently.

The FCC did recently decide to roll back nearly a billion dollars in Trump-era subsidies for Starlink (in part because the company misled regulators about coverage, but also because the FCC doubted it would be able to deliver promised speeds and coverage). And the FCC did recently enact rules tightening up requirements for discarding older, failed satellites to address “space junk.”

But taking a tougher stand here would require the FCC taking a bold stance on whether or not NEPA actually applies to the “environment” of outer space and low-Earth orbit, which remains in debate. This is an agency that can’t even be bothered to publicly declare with any confidence that telecom monopolies exist or are a problem, so it seems pretty unlikely they’d want to wade into such controversy.

Like a lot of Musk efforts (see the misrepresented, publicly dangerous “full self driving” technology), the issue has been simplistically framed as one of innovation versus mean old pointless government bureaucracy. This simplistic distortion has resulted in zero meaningful oversight as problems mount, something that impacts not just the U.S. (where most launches occur), but every nation on the planet:

“Our society needs space,” says Didier Queloz, an astronomer and Nobel laureate at the University of Cambridge. “I have no problem with space being used for commercial purposes. I just have a problem that it’s out of control. When we started to see this increase in satellites, I was shocked that there are no regulations. So I was extremely pleased to hear that there has been an awareness that it cannot continue like that.”

I’d expect this issue to get punted into the bowels of agency policy purgatory. Even if the agency does act, it will be years from now, and unlikely to apply to the satellite licenses already doled out to companies like Starlink and Amazon. And while there are several bills aimed at tightening up restrictions in the space, it seems unlikely any of them will survive a dysfunctional and corrupt Congress.

That means that the light pollution caused by LEO satellites will continue to harm scientific researchers, who’ve been forced to embrace expensive, temporary solutions to the problem that are very unlikely to scale effectively as even more LEO companies set their sights on the heavens.

Filed Under: astronomy, gao, high speed internet, leo, light pollution, low earth orbit satellites, mega constellations, starlink, telecom
Companies: spacex, starlink

1,000 Deaths In Custody Went Unreported Last Year Because US Justice System Doesn’t Care About The People It Jails

from the disposable-human-beings dept

Tossing people into prison is throwing them away. They’re no longer real human beings. They’re just items being processed, moved through the system at whatever pace the system feels is appropriate. And once you’ve begun dehumanizing the people in your care, you can easily stop caring about them.

A recent report [PDF] by the Government Accountability Office covering DOJ in-custody death data collection processes highlights just how little anyone cares what happens to the people jails and prisons claim to be rehabilitating. The title of the report sounds innocuous enough: Additional Action Needed to Help Ensure Data Collected by DOJ Are Utilized. But the details are horrific.

The Death in Custody Reporting Act (DCRA) was passed in 2013 and went into force the following year. But seven years down the road, the law has apparently changed very little about this reporting process. Nor has it acted as a deterrent against non-reporting or under-reporting deaths. The criminal justice system hums along, discarding human lives and replacing them with incomplete or missing data.

The DOJ is tasked with collecting this data and assuring compliance from state and local entities. It has not done so, despite having had several years to put this in motion.

While states across the U.S. and DOJ have undertaken multi-year efforts to gather death in custody data, the department has not yet studied the state data, for purposes of the report required by DCRA. DOJ officials told us in September 2022 that they had not studied the data to determine the means by which the information could be used to reduce deaths in custody, in part, because the data provided by states were incomplete or missing.

By law, the Attorney General may impose a penalty on states that fail to comply with DCRA reporting requirements (i.e., do not provide data on deaths in custody as required). However, DOJ’s efforts to determine states’ compliance with DCRA have been delayed and DOJ has not yet made such determinations. In addition, even if these data were of sufficient quality, DOJ officials indicated the department is not required to publish these data pursuant to DCRA and, as of September 2022, has no plans to do so.

It’s been all carrot (DOJ grants to those complying) and no stick (zero penalties for the uncooperative). That has led to the ongoing debacle the DOJ insists (as it has for years) it takes very seriously. It has managed to put together a pretty solid collection of data on federal prisons, in which nearly 2,700 people have died since 2014.

Unfortunately, the DOJ has decided it won’t publish state and local data simply because it’s not required to, which means it has no obligation to get these figures right, because nobody will be seeing them. And so, it hasn’t done anything to ensure better reporting from state and local entities, something that has resulted in a massive undercount of deaths in custody in the United States.

Most state submissions contained incomplete records. Of the 47 states that submitted data, we found that two states had provided 100 percent of records with all the required elements. In contrast, seven states did not report any records with all of the required elements

[…]

Some states did not accurately account for all deaths in custody that occurred in fiscal year 2021. By reviewing documentation available on state government web sites and public databases on arrest related deaths, we identified nearly 1,000 deaths that occurred during fiscal year 2021 that states did not report in response to DCRA.

And that thousand unreported deaths may actually be an undercount.

Not all states made data on deaths in correctional facilities available at the time we conducted our audit work and therefore, we were unable to test the completeness of all states’ submissions. As a result, the number of prison deaths we identified may be narrower than the universe of prison deaths not reported to DOJ for fiscal year 2021.

And some of these state agencies got paid without doing the homework to earn it.

[F]our states that accepted JAG awards did not report any deaths in custody in their state—even though reporting this information is a requirement of receiving the grant funding and deaths occurred in their state during this time period.

This could get straightened out if the DOJ had any apparent interest in obtaining reliable data from state agencies. But it doesn’t. It has promised Congress and its other overseers it will, any day now, put a plan in place to ensure better reporting. That’s what it’s been saying since 2016. But, as of July 2022, the DOJ admitted to GAO investigators that it still had yet to complete an assessment it had promised to deliver by October 2021.

Going forward, it looks to be more of the same, unless someone can finally talk the DOJ into doing its job properly.

DOJ has developed a framework for determining states’ compliance. However, it has not developed a detailed implementation plan that includes metrics and corresponding performance targets for determining state compliance, or roles and responsibilities for taking corrective action should these efforts not fully succeed. Specifically, DOJ documentation identifies criteria for determining compliance and actions it could take to increase compliance. However, DOJ does not have specific metrics and performance targets on, for example, the number of states it expects to achieve full compliance, or by when it expects this to occur. Further, DOJ has not identified roles and responsibilities for taking corrective actions.

Nothing will change. What the GAO has seen here, it will see again in the future. The 2014 law will continue to be shrugged off by the DOJ and the agencies reporting to it. Accurate and timely data could give the DOJ a heads up on problematic facilities and take steps towards reducing in-custody deaths. But since no one involved in counting up this human cost seems to care whether prisoners live or die, reporting will continue to be incomplete, inadequate, and consequence-free for those blowing off the law’s requirements.

Filed Under: doj, gao, prisoners

GAO’s Facial Recognition Testimony Doesn’t Explain Why Federal Agencies Aren’t Fixing Problems Reported A Year Ago

from the or-any-other-important-questions-really dept

The Government Accountability Office (GAO) recently submitted testimony [PDF] to the House Subcommittee on [takes deep breath] Investigations and Oversight and Committee on Science, Space, and Technology. Candace Wright, the GAO’s Director of Science, Technology Assessment, and Analytics, explained the findings of previous GAO reports on facial recognition use by federal agencies.

Two of those reports were published last year. The first appeared in June and it showed federal agencies were doing nearly nothing to track employees’ use of facial recognition tech.

Thirteen federal agencies do not have awareness of what non-federal systems with facial recognition technology are used by employees. These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy. Most federal agencies that reported using non-federal systems did not own systems. Thus, employees were relying on systems owned by other entities, including non-federal entities, to support their operations.

Thirteen of the fourteen agencies examined by the GAO (a list that includes ICE, ATF, CBP, DEA, FBI, and the IRS) did not have any processes in place to track use of non-federal facial recognition tech.

This lack of internal oversight led directly to the behavior observed in the GAO’s second report, delivered in August. Either due to a lack of tech on-site or a desire to avoid what little internal oversight exists, federal agencies were often asking state and local agencies to do their dirty face rec work for them.

Unfortunately, this testimony — delivered nearly a year after the GAO released its findings — doesn’t provide any answers about this lack of internal oversight. Nor does it suggest things are moving forward on the internal oversight front as a result of its earlier investigations.

The status remains quo, it appears. About the only thing this testimony adds to the facial recognition discussion is the unfortunate fact that federal agencies feel zero compunction to better control use of this tech. It also adds a bit of trivia to the FRT mix by discussing a few little-known uses of the tech by the government.

Four agencies—the Departments of Health and Human Services, Transportation, and Veterans Affairs, and NASA—reported using FRT as a tool to conduct other research. For example, Transportation reported that the Federal Railroad Administration used eye tracking to study alertness in train operators. Similarly, NASA also reported that it used eye tracking to conduct human factors research. In addition, the Department of Veterans Affairs reported it used eye tracking as part of a clinical research program that treats post-traumatic stress disorder in veterans.

Nor does the report explain why agencies surveyed under-reported their use of Clearview’s highly controversial facial recognition software. The information in the GAO’s June 2021 report is contradicted by public records obtained by Ryan Mac and Caroline Haskins of BuzzFeed, strongly suggesting five agencies flat out lied to the GAO.

In a 92-page report published by the Government Accountability Office on Tuesday, five agencies — the US Capitol Police, the US Probation Office, the Pentagon Force Protection Agency, Transportation Security Administration, and the Criminal Investigation Division at the Internal Revenue Service — said they didn’t use Clearview AI between April 2018 and March 2020. This, however, contradicts internal Clearview data previously reviewed by BuzzFeed News.

This misleading reporting — whether deliberate or not — goes unmentioned in the GAO’s testimony. And apparently no follow-up investigation was performed to see if agencies were doing anything to prevent the sort of thing seen here:

Officials from another agency initially told us that employees did not use non-federal systems; however, after conducting a poll, the agency learned that its employees had used a non-federal system to conduct more than 1,000 facial recognition searches.

A year down the road, and all the GAO can report is that three of the 13 agencies that had no internal tracking processes are now in the process of implementing “at least one” of the three recommendations the GAO handed out nearly 13 months ago following its first report.

Most of the testimony is given over to discussing much quicker movements by federal agencies, i.e., the expanded deployment of questionable tech far ahead of mandated Privacy Impact Assessments or any effort to track the reliability of the tech being deployed.

This testimony is incredibly underwhelming, to say the least. This is the Government Accountability Office doing the talking here. And it’s apparently unable to encourage more than a rounding error’s-worth of accountability gains. This leaves it to Congress, an entity that’s largely unconcerned with increasing government accountability because it might make things uncomfortable for its members as they seek to extend their terms into de facto lifetime appointments.

The government has a facial recognition tech problem. And it’s quickly going to get too big to handle if findings like those reported by the GAO a year ago continue to be ignored by federal agencies and the oversight bodies this testimony was delivered to. If the GAO can’t be bothered to ask tough questions of agencies that misled it months ago, it seems unlikely Congressional reps with multiple interests to serve (sometimes even those of their constituents!) are going to hold any agency accountable for playing fast and loose with questionable tech and citizens’ rights.

Filed Under: facial recognition, gao, oversight

Gov't Accountability Office Says FBI Should Probably Just Give Up The Use Of Force Reporting It Never Bothered Doing

from the waste-not,-left-still-wanting dept

In 1994, Congress passed a law (the Violent Crime Control and Law Enforcement Act) that ordered the Department of Justice to “acquire data about the use of excessive force by law enforcement officers” and publish an annual report. The DOJ immediately handed this responsibility off to the International Association of Chiefs of Police, which produced a single report in 2001 and has done nothing since.

The problem was the process was entirely voluntary. And it doesn’t appear that, outside of an act of Congress, it can be changed. The DOJ does not directly oversee any state or local law enforcement agencies. The involvement of the IACP might have encouraged more participation if the IACP had been interested in this data gathering, but the facts speak for themselves. There’s nothing in this for law enforcement. And since no one can force law enforcement to send the DOJ use-of-force data, participation by the nation’s 18,000 law enforcement agencies (as of 2015) was as low as 3%.

In 2015, as excessive force deployment and killings by police officers repeatedly gained national attention, the FBI declared itself the hero and rode to the rescue, promising a new, better, still-entirely-voluntary use-of-force database. This time it offered a carrot — federal funds — in exchange for information. It did better than the previous (lack of) effort, managing to gather data from nearly a third of US law enforcement agencies.

Better than nothing, I suppose. But that’s the thing: it’s still nothing. The FBI has gathered this hugely imperfect data set from a smallish group of self-reporters. And it has done nothing with the data. The annual reporting has never materialized. Thanks to this wealth of inactivity, the FBI and DOJ may soon be able to give up this responsibility — one they clearly never wanted. It won’t be because it can’t be made to work. It will be because no one wants to put in the work. And it may also be because the Government Accountability Office is suggesting the DOJ stop spending tax dollars on work it clearly isn’t doing.

[D]ue to insufficient participation by law enforcement agencies, the FBI has not met thresholds set by the Office of Management and Budget for publishing use of force data or continuing the effort past December 2022. Further, as of February 2021, the FBI had not assessed alternative data collection strategies.

That’s from the summary of the GAO report. The full report [PDF] goes into more detail, but the conclusion is still foregone: the DOJ never wanted to do this and has spent most of three decades not doing it. The failures found by the GAO aren’t the result of the DOJ and FBI struggling mightily and still coming up short. It’s a mandate the DOJ received in 1994 and immediately abandoned. The 2015 effort was mostly PR — an attempt to show the federal government cared enough about police violence to at least try to tally it up. And the FBI put in all the effort that empty promise required.

Here’s the half-assery the GAO observed. Both the BJS (Bureau of Justice Statistics) and the FBI were supposed to collect and publish data. Here’s what the BJS managed to do with its time, energy, and personnel over the past half-decade:

[O]ver the 5-year period from fiscal year 2016 through fiscal year 2020, BJS published results from this survey twice. Further, one of those publications was a retrospective report of previously published data that were collected from 2002 through 2011.

Confronted with this failure, the BJS asked the GAO if it had tried looking somewhere else for the data the BJS was supposed to be publishing. The GAO looked and said that’s not even the same thing.

BJS officials also stated that the Law Enforcement Management and Statistics Survey was another means through which DOJ published required data on excessive force. However, BJS publishes information on policies and procedures related to officers’ use of force collected through this survey, but does not publish any information specifically on excessive force by law enforcement officers.

A total lack of effort by everyone involved. The DOJ said the FBI collected the information (but did not publish it) and claimed that ended the DOJ’s involvement — a strange assertion for an agency directly overseeing the FBI to make. Stranger still, the FBI said it had not been informed this was its job, despite having made two public announcements (2015 and 2019) saying it would be doing these things.

To ensure Americans were deprived of any useful info about excessive force deployment, the FBI deliberately made a mess of the data given to it by a small percentage of law enforcement agencies.

According to FBI documentation, the National Use-of-Force Data Collection does not differentiate between incidents involving reasonable force and incidents involving excessive force. Specifically, the collection does not contain information on whether officers followed their department’s policy or acted lawfully in any given incident. Therefore, it is unclear how DOJ could use these data to publish a summary on excessive force by law enforcement officers.

And again, another failure by the FBI to do the things asked of it.

In addition, the FBI began collecting these data in 2019 and has not yet published any use of force incident data collected through the program…

As was noted above, the FBI and DOJ have no backup plan. If their original effort didn’t work, the solution appears to be to let it die. The GAO says no alternative efforts have been considered to ensure greater collection or more frequent publication. (Or, indeed, ANY publication of collected data.) The proposals the GAO heard from these entities suggest officials were just making stuff up on the spot.

When you’re concerned about which agencies might be engaging in more deadly/excessive force than others (as was partially the point of this database), a random sample and some extrapolation is going to provide cover for agencies with endemic problems and paint an unrealistic picture about law enforcement force deployment.

The FBI’s business plan for the collection states that using a sample of agencies may be a potential alternative data collection mechanism.

At this point, the FBI only has a “sample of agencies.” The collection has never approached 100% of the nation’s law enforcement agencies. At best, it has managed to collect police killing data from 55% of these agencies. When it comes to force deployment, the percentage is much lower.

Just to drive home the point once more: no one in the DOJ wanted to do this job. The Bureau of Justice Statistics has published 130 reports from 2016 to 2020. Total number of use-of-force reports during that same time period? One.

The good news for those people who spent years not doing what they were paid to do? They won’t have to not do it much longer.

[T]he collection itself may be discontinued as soon as the end of 2022.

That’s the way the FBI wants it. No news is good news. Or, at the very least, it’s news that can’t be disputed by data it barely collected and never published. All that’s left is the public perception of law enforcement force deployment — something that definitely hasn’t improved over the past five years. The nation’s law enforcement agencies — including those at the federal level — have managed to rack up nearly thirty years of non-participation trophies. Never investigating the problem means never having to confront the problem. And if you screw around long enough, people will stop asking you to do stuff you don’t want to do.

Filed Under: fbi, gao, police brutality, police shootings, use of force

GAO's Second Report On Facial Recognition Tech Provides More Details On Federal Use Of Clearview's Unvetted AI

from the still-greater-than-zero-agencies,-unfortunately dept

A couple of months ago, the Government Accountability Office completed the first pass of its review of federal use of facial recognition technology. It found a lot to be concerned about, including the fact that agencies were using unproven tech (like Clearview’s ethical nightmare of a product) and doing very little to ensure the tech was used responsibly.

Some agencies appeared to have no internal oversight of facial recognition tech use, leading to agencies first telling the GAO no one was using the tech, only to update that answer to “more than 1,000 searches” when they had finished doing their first pass at due diligence.

A more complete report [PDF] has been released by the GAO, which includes answers to several questions asked of federal agencies using the tech. Unfortunately, it confirms that many agencies are bypassing what little internal controls are in place by asking state and local agencies to run searches for them. DHS entities (CBP, ICE) did the most freelancing using downstream (governmentally-speaking) databases and tech.

For whatever reason, CBP and ICE (which have access to their own tech) are using agencies in Ohio, Nebraska, Michigan, Kansas, and Missouri (among others) to run searches for criminal suspects and to “support operations.” A whole lot of non-border states are allowing agencies to bypass internal restrictions on use of the tech.

And there’s a whole lot of Clearview use. Too much, in fact, considering the number of agencies using this highly questionable product exceeds zero.

The US Air Force says it engaged in an “operational pilot” beginning in June 2020, utilizing Clearview to run searches on biometric information gathered with “mobile biometric devices, including phones.”

The Inspector General for the Department of Health and Human Services also apparently used Clearview. The report says the HHS OIG “conducted an evaluation of the system in an attempt to identify unknown subjects of a criminal investigation.” Experimentation, but with the added bonus of possibly infringing on an innocent person’s life and liberty!

Also on the list are CBP, ICE, and US Secret Service. ICE appears to be the only agency actually purchasing Clearview licenses, spending a total of $214,000 in 2020. The CBP, however, is getting its Clearview for free, utilizing the New York State Intelligence Center’s access to run searches. The Secret Service gave Clearview a test drive in 2019 but decided it wasn’t worth buying.

The Department of the Interior says it has both stopped and started using Clearview. Under “Accessed commercial FRT [facial recognition technology] system,” the DOI claims:

Interior uses Clearview AI to verify the identity of an individual involved in a crime and research information on a person of interest. Interior may submit photos (e.g., surveillance photos) for matching against the Clearview AI’s repository of facial images from open sources. U.S. Park Police reported it stopped using Clearview AI as of June 2020.

But under “New access to commercial FRT system,” the DOI states:

Interior reported its U.S. Fish and Wildlife Service began using a trial version of Clearview AI in May 2020, and purchased an annual subscription in June 2020.

The DOI is both a current and former customer, depending on which component you speak to, apparently.

The DOJ is an apparent believer in the power of Clearview, providing access to the ATF, DEA, FBI, and US Marshals Service. But there must be a lot of sharing going on, because the DOJ only purchased $9,000-worth of licenses.

Interestingly, the DOJ also notes it received an upgrade from Axon, which provides body-worn cameras. Axon has apparently added a new feature to its product: “Facial Detection.” Unlike facial recognition, the product does not search for faces to run against a biometric database. Instead, the system “reviews footage” to detect faces, which can then be marked for redaction.

This FRT-related expenditure is also interesting, suggesting the DOJ may actually be trying to quantify the effectiveness of body cameras when it comes to deterring officer misconduct.

DOJ reported that it awarded an $836,000 grant to the Police Foundation for the development of techniques to automate analysis of body worn camera audio and video data of police and community interactions. In particular, these techniques could (1) allow an evaluation of officers’ adherence to principles of procedural justice and (2) validate the ratings generated by the automated process using a randomized control trial comparing software ratings of videos to evaluations performed by human raters under conditions of high and low procedural justice.

Finally, there’s this unnecessarily coy statement by the IRS about its use of commercial facial recognition systems.

A third-party vendor performed facial recognition searches on behalf of the IRS for domestic law enforcement purposes. Additional details on the search are sensitive.

Whatever. It’s probably Clearview. And if it isn’t, it probably will be at some point in the near future, given federal agencies’ apparent comfort with deploying unproven, unvetted tech during criminal investigations.

The report is probably the most comprehensive account of facial recognition tech use by the federal government we have to work with at the moment. It shows there’s a lot of it being used, but it hasn’t become completely pervasive. Yet. Most agencies use the tech to do nothing more than identify employees and prevent unauthorized access to sensitive areas. Some agencies are digging into the tech itself in hopes of improving it. But far too many are still using a product which has been marketed with false statements and has yet to have its accuracy tested by independent researchers. That’s a huge problem, and, while it’s not up to the GAO to fix it, the report should at least make legislators aware of an issue that needs to be addressed.

Filed Under: doi, doj, facial recognition, gao
Companies: clearview

GAO Tells US Government Its Speed Definition For Broadband Sucks

from the keeping-the-bar-at-ankle-height dept

Fri, Jul 16th 2021 05:58am - Karl Bode

The US has always had a fairly pathetic definition of “broadband.” Originally defined as anything over 200 kbps in either direction, the definition was updated in 2010 to a pathetic 4 Mbps down, 1 Mbps up. It was updated again in 2015 by the Wheeler FCC to a better, but still arguably pathetic 25 Mbps downstream, 3 Mbps upstream. As we noted then, the broadband industry whined incessantly about having any higher standards, as it would only further highlight industry failure and a lack of competition.

Unfortunately for them, pressure continues to grow to push the US definition of broadband even higher. Back in March, a coalition of Senators wrote the Biden administration to recommend that 100 Mbps in both directions become the new baseline. And last week, the Government Accountability Office (GAO) issued a new report noting that the current standard of 25 Mbps down, 3 Mbps up is simply too pathetic to be useful. The focus was on small businesses, but the GAO politely noted that the FCC should update its definition soon:

“Is broadband fast enough for small business owners? As they shift to more advanced uses of broadband, their speed needs are likely increasing. However, the FCC has not updated its speed benchmark for 6 years. We recommended that the FCC determine whether its current definition of broadband really meets the needs of small businesses.”

Granted, entrenched ISPs fight tooth and nail against upgrading the standard for several reasons. One, higher speed standards mean having to work harder for the billions in subsidies we throw at them for networks that routinely wind up half-deployed anyway. Two, a better broadband definition more clearly highlights the lack of broadband competition, especially at faster speeds. That, in turn, brings more public and policymaker attention to their regional monopolies, and to the state and federal corruption that protects and enables them. All bad things if you’re a largely unaccountable telecom monopoly.

Former FCC boss and industry BFF Ajit Pai refused to upgrade the FCC’s broadband definition during his term, something current interim FCC boss Jessica Rosenworcel said “confounds logic.” But if your overarching policy goal is to protect AT&T, Verizon, and Comcast revenues from any threat to the status quo, it’s perfectly logical. Rosenworcel will now need to update the definition on her watch, something she can’t do (thanks to partisan gridlock) until the Biden administration gets around to finally staffing the FCC (which it appears in no rush to actually do).

Filed Under: broadband, definitions, fcc, gao