
Oversight Report: Chicago PD Doesn’t Care Its Officers Are Involved With Right Wing Extremists

from the thin-blue-line-erased-yet-again dept

The city of Chicago’s Inspector General is back at it, pointing out things are very, very wrong with the Chicago Police Department. Not that anyone needed any reminders. A long history of indifference to disciplining misbehaving officers has led to everything from an off-the-books black site operation to more than 100 misconduct charges racked up by officers involved in a single wrong-house raid.

Like lots of other law enforcement agencies, the Chicago PD has officers who are members of far right extremist groups. A lot of this came to light during the FBI’s investigation of the January 6 insurrection, where it was discovered that law enforcement officers from all over the nation traveled to Washington DC — not to help secure the Capitol building or protect those inside, but to engage in criminal activity of their own.

The Chicago PD is no exception. This latest report details how many officers are involved with far right groups like the Proud Boys and the Three Percenters. It also details how little the PD has done to root out the potential insurrectionists in its midst. (via Chicago Fox affiliate FOX 32)

The report [PDF] leads off with the responses it received from the Chicago PD as well as the mayor’s office. There’s no good news/bad news thing going on here. It’s all bad news, and the lack of accountability apparently begins at the top:

In a written response attached at Appendix B, the Mayor’s Office reports that “the Johnson Administration and the Chicago Police Department remain fully committed to rooting out extremist, anti-government, and biased organizations in our law enforcement ranks. There is no place in the CPD for those who participate in such organizations.” The Mayor’s Office further says that it is “committed to working with CPD and across departments and agencies to ensure that there is a comprehensive and meaningful approach to preventing, identifying, and eliminating extremist, anti-government, and biased associations within CPD” and says that it will “work with” a variety of entities in this pursuit. OIG appreciates the Mayor’s Office’s response, but notes that the Mayor’s Office neither accepts OIG’s recommendation nor commits to any specific action at all.

“Fully committed” up to the point the response was sent to the OIG’s office. No further commitment has been stated or noted.

After detailing the history of and harmful acts committed by far right extremist groups (the three detailed are the Proud Boys, Oath Keepers, and Three Percenters), the OIG moves on to point out that the CPD has, in the past, rooted out cops with ties to bigoted extremist groups. You know, like the KKK, to name just one.

More than half a century ago, CPD initiated an investigation into the alleged memberships of multiple CPD members in the Ku Klux Klan (KKK). One such member was Officer Donald Heath, the admitted grand dragon of the KKK in Illinois at the time.

[…]

In the Police Board’s findings, they found Heath and two other CPD members violated Rule 2 by being associated with an extremist group, the KKK, and fired them.

That was 50 years ago. Apparently, being involved with white supremacists (or entities that embrace those views along with their own stated goals) was an offense worthy of termination. Five decades later, things have not improved. They’ve gotten worse.

As the OIG notes, allowing officers to join extremist groups — especially ones that consider lawbreaking an essential part of their “resistance” and consider themselves to be, if not actual white nationalists, closely aligned with their philosophies — further damages already tenuous relationships with the communities these officers serve. Looking the other way only encourages more officers to associate with extremists, which is the sort of thing that leads directly to officers committing federal crimes while attempting to overturn a lawful national election.

Here’s the sort of thing that’s far more common now, despite the rules on associating with extremist groups having gone unchanged over the past five decades.

BIA [Bureau of Internal Affairs] reached a finding of Not Sustained on the allegation that the accused [officer] was a member of a “far-right terror group,” determining that no evidence existed that the accused officer committed any misconduct on duty. However, BIA’s analysis failed to acknowledge that relevant CPD rules explicitly apply to both on- and off-duty conduct. Additionally, BIA’s analysis did not consider whether the officer’s membership in the Oath Keepers, by itself, constituted a violation of CPD policy.

OIG recommended that BIA reopen the investigation to conduct any necessary additional investigative activity including, but not limited to, re-interviewing the accused member to determine what, if any, rules, regulations, or policies of CPD he refused to obey because he believed them to be illegal or unconstitutional according to the precepts of the Oath Keepers.

OIG also recommended that BIA conduct and document an appropriate analysis of whether the accused member’s membership in the Oath Keepers violated any of the Department’s Rules and Regulations, including but not limited to Rules 2 and 3. BIA accepted OIG’s recommendation and reopened its investigation. After meeting with OIG to discuss the case, BIA reclosed the investigation leaving its original findings unchanged.

All three of the groups mentioned in this report have their own mission statements that assert members will choose to ignore or disobey laws they don’t agree with and, if need be, utilize violence to achieve those aims. No cop shop should desire to employ people who think only certain laws should be respected and consider all the laws they don’t personally like to be optional.

There’s also the citation of “Rule 2.” Rule 2 has been on the PD’s books for years. It’s the same rule that was used more than 50 years ago to fire three officers for being members of the KKK. Without rewriting the rule, the official stance at the CPD is that simply being a member of groups like this is not, in and of itself, a violation. CPD officials have made this declaration despite the rule expressly forbidding all kinds of things that might make the department look less trustworthy:

This Rule applies to both the professional and private conduct of all members. It prohibits any and all conduct which is contrary to the letter and spirit of Departmental policy or goals or which would reflect adversely upon the Department or its members. It includes not only all unlawful acts by members but also all acts, which although not unlawful in themselves, would degrade or bring disrespect upon the member or the Department, including public and open association with persons of known bad or criminal reputation in the community unless such association is in the performance of police duties. It also includes any action contrary to the stated policy, goals, rules, regulations, orders, or directives of the Department.

The report then notes it can only find one case where this rule was applied to an officer in recent years. Conveniently enough, it was used to discipline a recruit (the most expendable of law enforcement officers) for saying something that could be construed as gang-related.

CPD has recently applied Rule 2 to a member’s association with a group—specifically, a street gang—undermining any suggestion that it is unable to do so. In August 2023, a CPD Lieutenant recommended termination of a CPD Recruit for using “street gang terminology” in violation of Rule 2 and Rule 6- “Disobedience of an order or directive, whether written or oral.” “In less than two weeks after being hired by CPD, the request was granted and the Recruit was separated from the Police Department.”

It was alleged that the CPD Recruit, while standing in formation in a hallway at CPD’s Education and Training Division, stated, “on BD, y’all gonna make me bug up in this bitch. I’m trying to hold this hood shit in but y’all bringing it out on me on BD,” after allegedly being bumped into by another recruit and their duffle bag. The CPD Lieutenant in their termination request wrote that they were aware of the phrase “On BD” to be “common street gang terminology used by members of the Black Disciples street gang to swear upon their allegiance to said gang…”

That is a justifiable application of Rule 2. But it only seems to apply to (presumably) black recruits or those who use gang terminology used by black gang members. The CPD has told the OIG’s office Rule 2 just doesn’t apply to (presumably) white police officers who wear Three Percenter insignias while on patrol or spend their free time hanging out with bigots and white nationalists who have plainly stated they’ll break the laws they don’t like and physically harm those trying to enforce the disliked laws.

There’s a good chance CPD brass considers membership in the Proud Boys, et al to be a feature, not a bug. After all, plenty of police officials have openly stated they won’t enforce laws they don’t like (mainly things like gun control efforts or sanctuary city statutes). And there’s no law enforcement agency in the land that doesn’t generously deploy double standards to protect the worst officers they employ. The fact that these extremist groups direct most of their animosity against liberals, minorities, and LGBTQ+ persons is just icing on the cake. It aligns with the implicit biases that have plagued law enforcement agencies since their inception.

The refusal of the CPD to treat this issue seriously shows it’s unwilling to reach across the divide it’s created to earn the trust of the communities it serves. The mayor’s office is no better, offering nothing but vague statements about doing something while committing to nothing in the way of actual improvement. This report highlights a problem and serves the purpose of making the public more aware of endemic law enforcement issues. Unfortunately, Chicago residents are likely already well aware of how much they’re being underserved by the PD and city leaders. In the end, it’s just documentation of business as usual. And no one with the power to change things for the better seems to have any interest in actually making that happen.

Filed Under: chicago, chicago police department, extremists, oversight, police accountability, police misconduct

San Diego Mayor, Police Chief Claim City’s Surveillance Oversight Law Is Just ‘Obstruction’

from the stop-touching-our-stuff dept

Law enforcement agencies aren’t used to oversight or accountability. That’s something that has rarely been deemed essential to the act of policing. After all, if the Supreme Court can create “qualified immunity” out of thin air to protect (most) cops from the consequences of their unconstitutional actions, surely podunk locals shouldn’t assume they’re more qualified than the Supreme Court (or the cops themselves) to judge their actions.

But things haven’t been going cops’ way lately. This is entirely due to cops’ own actions, which have repeatedly alienated them from the public. And even if the voting bloc is generally considered too stupid to competently criticize cops, cops are finding fewer supporters among those in voting booths as well as among those being voted for.

But when cops have managed to set the nation on fire (figuratively and literally) on a nearly annual basis for the past thirty years, some legislators have decided it might be time to do something.

And “something” it almost always is. Sure, some legislators push dumbass “blue lives matter” laws but other legislators appear to believe the public might be better served by holding local law enforcement agencies to some sort of standard, rather than just allowing them to do what they want.

In San Diego, this belated recognition that allowing cops to go rogue on the regular might be a bad idea has manifested as a surveillance oversight ordinance that gives legislators and residents more say in what surveillance tech cops can obtain and how they can use it. It passed last September with the city council’s approval, prompted in part by the city’s mishandling of a “smart” streetlight program that provoked plenty of negative comments from city residents.

They weren’t just streetlights. They were streetlights with surveillance cameras that also acted as automated license plate readers. Combining the necessary (lights for streets) with something that only benefited law enforcement (the rest of it) wasn’t what residents wanted. Hence the new ordinance, which places more guidelines on surveillance tech and deployment.

Now that these guidelines are in force, the city’s mayor (Todd Gloria) and the San Diego police chief are claiming this minimal move towards more oversight and accountability is simply making it impossible for the PD to do its job. And the police chief has used loaded language that equates accountability with a criminal act.

At the briefing, San Diego police Chief David Nisleit said 17 police technologies used to investigate everything from traffic collisions to crimes against children can no longer be used or soon won’t be available for use since they have contracts that have either expired or will expire before the tools have the chance to go through the oversight process.

The department did not provide specific expiration dates for those technologies, which include powerful tools like Graykey, a device that can break into some locked cellphones.

“This ordinance is not oversight, it’s obstruction,” Nisleit said. “The flaws in this ordinance will hamper our ability to investigate serious crimes, protect victims and keep our community safe.”

Yep, that’s what he said: oversight is pretty much just obstruction of justice as far as he’s concerned. Forcing the PD to be a bit more selective when acquiring and deploying surveillance tech is apparently indiscernible from preventing a cop from arresting a suspect.

Mayor Gloria has already made it clear he’s going to have the law amended to remove a few layers of accountability. And he’s doing this despite the fact the PD and the city have yet to actually comply with the law. One of its most minimal requirements — the production of a list of all technologies affected by the law — has yet to be completed despite the law being enacted nearly a year ago.

And these things are being said by city leaders (mostly just the mayor) even though no statements have been made about what exactly makes the law unworkable, other than the PD’s insistence that it is.

Although the city has spoken numerous times about how imperative changes to the law are, officials have yet to be specific about what sections of the ordinance they would seek to amend or how.

Seth Hall, the co-founder of community group San Diego Privacy, points out in his op-ed for the San Diego Union-Tribune that if there’s any “obstruction” happening here, it’s being committed by the mayor and the San Diego PD:

Nisleit has correctly diagnosed obstruction, but it is his and the mayor’s own obstruction, and Nisleit of all people would know.

Mayor Gloria is in the driver’s seat of this process. We have been waiting for nearly two years for the mayor to create and submit an inventory of the city’s existing surveillance technology, as required by the new law. Earlier this month, a messy inventory spreadsheet riddled with duplicates and inaccuracies was finally published in the agenda for the Privacy Advisory Board’s Oct. 5 meeting. That same day, Gloria and Nisleit held a press conference to decry the “obstruction” of an oversight process they themselves have been deliberately standing in the way of.

There has not been even one day since this ordinance was passed that it hasn’t been obstructed at the very first step by Gloria withholding the city’s inventory of existing surveillance technology. The privacy board, the community and the City Council have stood ready. Chief Nisleit need look no further than his own boss when placing blame for obstruction.

Don’t blame everyone else because you refuse to comply with a law you don’t like. And you, Mayor Gloria, don’t go to bat for law-breaking city entities just because you believe they’re more worthy of your attention and undeserved forgiveness than the people you’re actually supposed to be serving.

The problem here is the lack of compliance, not the law itself. The city has yet to comply with the law. The same can be said for the PD. If you can’t be bothered to comply with the law, it’s not a great look to go around complaining about it. All that’s happening right now is a bunch of pointless whining deliberately designed to restore the minimal accountability the SDPD has become accustomed to. And, unfortunately for San Diego residents, the person who’s supposed to put their interests first has decided to align himself with a city agency that has the will and power to simply ignore laws it doesn’t like.

Filed Under: david nisleit, oversight, police oversight, san diego, san diego pd, todd gloria

Los Angeles Sheriff’s Department Goes Completely Rogue, Blocks Inspector General’s Access To Files, Facilities

from the law-enforcement-with-zero-respect-for-the-law dept

The Los Angeles Sheriff’s Department has been problematic pretty much ever since its inception. Its prior iteration — headed up by Sheriff Lee Baca — was an abhorrent mess. The LASD was (and still is!) home to gangs formed by deputies — cliques that encouraged members to violate rights and abuse those incarcerated in the county jail. Baca’s department became infamous for its internal corruption, something manifested by its obstruction of federal investigations and rogue jailhouse informant program.

Enter Sheriff Alex Villanueva. Elected after promising to clean up the troubled department, Villanueva soon showed he was more interested in shielding his officers from public scrutiny and ignoring the internal rot that had turned the agency into a menace to Los Angeles society.

The new sheriff created a handpicked “Public Integrity Unit,” an entity whose name seemed to indicate Villanueva would be cleaning up the department. Shortly thereafter it became apparent the unit was far more interested in targeting the department’s critics in the Los Angeles government.

Villanueva only amped things up from there. He threatened county leaders with defamation suits for continuing to (accurately) portray the department as infested with cliques of rogue deputies. He also sent his officers out to raid the homes of two prominent critics involved in civilian oversight of the department under the pretense the LASD was investigating fraudulent acquisition of county contracts.

With members of the county’s civilian oversight commission sufficiently cowed by legal threats, non-compliance, and seizure of their electronic devices, the sheriff has moved on to shutting down the internal remnants of LASD accountability, as Alene Tchekmedyian reports for the Los Angeles Times.

Los Angeles County Sheriff Alex Villanueva announced this week that he is banning Inspector General Max Huntsman from the department’s facilities and databases, effectively blocking the county watchdog from doing his job overseeing the Sheriff’s Department.

[…]

“Mr. Huntsman will be removed from all access to Department facilities, personnel, and databases effective immediately,” Villanueva wrote in a letter Wednesday to the Board of Supervisors. “This standard is applied to all Department personnel who are named as a suspect in a criminal case involving felony crimes.”

This astounding display of power follows Villanueva’s (unfounded) assertions that IG Huntsman is a “Holocaust denier” and his still-unproven claim that the Inspector General has committed crimes of his own, such as “stealing” confidential files on LASD officials from the department. The Inspector General has responded that his access was limited to his office’s investigations and was lawfully obtained.

This suggests the LASD is doing nothing more than concocting criminal charges to bypass internal and external oversight — an impression that isn’t helped by the LASD’s failure to move forward with charges against the Inspector General, despite making these claims for more than three years.

What’s left is the undeniable impression that the LASD considers itself to be above the law. This isn’t necessarily new. It gave that same impression while run by Sheriff Lee Baca. But it’s extremely troubling a self-proclaimed reformer would so easily become part of the problem, deceiving the public for the apparent purpose of further elevating the rogue department above the people it’s supposed to serve.

Filed Under: alex villanueva, gangs, inspector general, lasd, los angeles, los angeles sheriff's department, max huntsman, oversight, transparency

An (Im)perfect Way Forward On Infrastructure Moderation?

from the infrastructure-moderation-appeals dept

Within every conversation about technology lies the moral question: is a technology good or bad? Or is it neutral? In other words, are our values part of the technologies we create, or is technology valueless until someone decides what to do with it?

This is the kind of dilemma Cloudflare, the Internet infrastructure company, found itself in earlier this year. Following increasing pressure to drop KiwiFarms, a troll site targeting women and minorities, especially LGBTQ people, Cloudflare’s CEO, Matthew Prince, and Alissa Starzak, its VP for Public Policy, posted a note stating that “the power to terminate security services for the sites was not a power Cloudflare should hold”. Cloudflare was the provider of such security services to KiwiFarms.

Cloudflare’s position was impossible. On the one hand, Cloudflare, as an infrastructure provider, should not be making any content moderation decisions; on the other, KiwiFarms’ existence was putting people’s lives in danger. Although Cloudflare is not like “the fire department” as it claims (fire departments are essential for societies to function and feel safe; Cloudflare is not essential for the functioning of the internet, though it does make it more secure), moving content moderation down the internet stack can still have a chilling effect on speech and the internet. At the end of the day, it is services like Cloudflare’s that get to determine who is visible on the internet.

Cloudflare ended up terminating KiwiFarms as a customer even though it originally said it wouldn’t. In a way, Cloudflare’s decision to reverse its own intention placed content moderation at the infrastructure level front and center once again. Now, though, it feels like we are running out of time; I am not sure how much more of this unpredictability and inconsistency can be tolerated before regulators step in.

Personally, the idea of content moderation at the infrastructure level makes me uncomfortable, especially because content moderation would move somewhere that is invisible to most. Fundamentally, I still believe that moving content moderation down to the infrastructure level is dangerous in terms of scale and impact. The Internet should remain agnostic of the data that moves around it, and anyone who facilitates this movement should adhere to that principle. At the very least, this must be the rule. I don’t think it will be the priority in any potential regulation.

However, there is another reality that I’ve grown into: decisions like the one Cloudflare was asked to make have real consequences for real people. In cases like KiwiFarms, inaction feels like aiding and abetting. If there is something someone can do to prevent such reprehensible activity, shouldn’t they just go ahead and do it?

That something will be difficult to accept. If content moderation is messy and complex for Facebook and Twitter, imagine what it’s like for companies like Cloudflare and AWS. The same problems with speech, human rights, and transparency will exist at the infrastructure level; just multiply them by a million. To be fair, infrastructure providers already engage in the removal of websites and services on the internet. And they have policies for doing so. Cloudflare said so: “Thousands of times per day we receive calls that we terminate security services based on content that someone reports as offensive. Most of these don’t make news. Most of the time these decisions don’t conflict with our moral views.” Not all infrastructure providers have such policies, though, and, in general, decisions about content removal at the infrastructure level are opaque.

KiwiFarms will happen again. It might not be called that, but it’s a matter of time before a similarly disgusting case pops up. We need a way forward and fast.

So, here’s a thought: an “Oversight Board-type” body for infrastructure. This body – let’s call it the “Infrastructure Appeals Panel” – would be funded by as many infrastructure providers as possible, and its role would be to scrutinize decisions infrastructure providers make regarding content. The Panel would need to have a clear mandate and scope and be global, which is important as the decisions made by infrastructure providers affect both speech and the Internet itself. Its rules must be written by infrastructure providers and users, which is perhaps the single most difficult thing. As Evelyn Douek said, “writing speech rules is hard”; it becomes even harder if one considers the possible chilling effect. And this whole exercise becomes even more difficult if you need to add rules about the impact on the internet. Unlike the decisions social media companies make every day, decisions made at the infrastructure level of the internet can also create unintended consequences for the way it operates.

Building such an external body is not easy, and many things can go wrong. Finding the right answers to questions regarding board member selection, independence, process, and values becomes key to its success. And, although such systems can be arbitrary and abused, history shows they can also be effective. In the Middle Ages, for instance, at the time international trade was taking shape, itinerant merchants sought to establish a system of adjudication, detached from local sovereign law and able to govern the practices and norms that were emerging at the time. The system of lex mercatoria originated from the need to structure a system that would be efficient in addressing the needs of merchants and produce decisions that would carry value equivalent to the decisions reached through traditional means. Currently, content moderation at the infrastructure level is an unchecked system where players can exercise arbitrary power, which is further exacerbated by the lack of interest in or understanding of what is happening at that level.

Most likely, this idea will not be enough to address all the content moderation issues at the infrastructure level. Additionally, if it is going to have any real chance of being useful, the Panel’s design, structure, and implementation, as well as its legitimacy, must be considered a priority. An external panel that is not scoped appropriately or does not have any authority risks creating false accountability; the result is that policy makers get distracted while systemic issues persist. Lessons can be learned from the similar exercise of creating the Oversight Board.

The last immediate thing is for this Panel not to be seen as the answer to issues of speech or infrastructure. We should continue to discuss ways of addressing content moderation at the infrastructure level and try to institute the necessary safeguards and reforms around how best to moderate content. There is never going to be a way to create fully consistent policies or agree on a set of norms. But, through the transparency such a panel can provide, we can reach a state where the conversation becomes more focused and driven more by facts and less by emotions.

Konstantinos Komaitis is an internet policy expert and author. His website is at komaitis.org.

Filed Under: appeals, content moderation, infrastructure, oversight
Companies: cloudflare

Cops Complain After San Diego Residents Are Finally Allowed To Oversee City Surveillance Programs

from the public-service-entity-enraged-to-discover-it-needs-to-serve-the-public dept

There’s very little that seems to anger public servants more than mandates requiring them to serve the public. For years, the San Diego police department has expanded its surveillance programs. And for years, these expansions have gone unchallenged.

But now that the city has passed an ordinance requiring more direct oversight of police activity, cops are singing the thin blue line blues and claiming the public has no business overseeing the business of public agencies. The cop pushback against slightly increased accountability has begun, as David Hernandez reports for the San Diego Union-Tribune. (h/t Michael Vario)

After years of work to create oversight of surveillance technologies in San Diego, an ordinance that will govern how the city uses the technology received final approval from the City Council this week.

The work began after residents learned in 2019 that the city had installed a network of about 3,000 cameras on streetlights three years earlier, and police used the technology to investigate certain types of crimes. Some residents expressed concerns over potential civil liberty violations and over-policing, particularly in communities of color.

Under the ordinance, the City Council must approve the use of technology that can monitor and identify individuals. City staff members will need to issue reports that outline the intended use of such technology, and the public and a newly created privacy advisory board will be asked to weigh in.

This seems like the least the local government could do, especially when residents have made it clear they’re concerned about always-on surveillance and potential police abuse of the expanded surveillance network.

And this should be the bare minimum asked of police departments. These public agencies are supposed to weigh public safety efforts against the impact on constitutional rights and the public’s expectations that its movements won’t be constantly surveilled by their government.

But this minimal push towards accountability has been greeted by the San Diego Police Department (SDPD) as a declaration of war on the department. Cops may have guns, badges, and a shitload of power, but any time someone demands a little more accountability, police officials make it clear cops have the thinnest skin and the most extreme sense of entitlement.

San Diego police Capt. Jeffrey Jordon said the department uses a host of technological devices that will require approval, including body-worn cameras, polygraphs and forensic lab equipment.

“I’m not aware of any other cities in America that have to report out this many pieces of technology,” he said.

Hilarious. Cops like being ahead of the tech curve, but they truly hate being on the leading edge of accountability and transparency. Jeffrey Jordon should consider himself lucky to be an accountability pioneer. Instead, he acts like the city he serves should be part of the accountability long tail — so far removed from those acting boldly that no one will even notice the SDPD grabbing mandated coattails as it’s dragged into complying with expectations held by hundreds of law enforcement agencies around the country.

This SDPD official would rather be named “Least Likely to Win the Public’s Trust” than submit to cursory examinations of surveillance tech used by the department. And he makes this assertion despite being handed a sizable loophole. The ordinance exempts officers participating in federal task forces, which means all the SDPD has to do to avoid this minor increase in public scrutiny is ask federal officers for assistance.

The SDPD’s opposition doesn’t just deserve criticism. It deserves ridicule. Officers sporting blue line flags and misappropriated Punisher gear, who engage in routine rights violations and intimidation, are now crying about being asked to answer to the public. Rather than realize they have plenty of power that could be deployed for the public good, SDPD officials are complaining the new mandate will be, at best, slightly inconvenient. The blue in the “thin blue line” stands for bitchassness. When police leaders are asked to step up, they choose to complain about being expected to hold themselves and their officers to a higher standard.

Cry harder. Wipe your tears on your qualified immunity, multiple constitutional exemptions, and generous pension programs. An opportunity was presented that gave the SDPD a chance to repair its damaged relationship with residents. But rather than seize the opportunity, cop officials have chosen to pretend increased accountability is an insult to the business of law enforcement — something so far out of the norm it should be considered an aberration not worthy of public support.

Filed Under: oversight, san diego, san diego police department, sdpd, surveillance, transparency

GAO’s Facial Recognition Testimony Doesn’t Explain Why Federal Agencies Aren’t Fixing Problems Reported A Year Ago

from the or-any-other-important-questions-really dept

The Government Accountability Office (GAO) recently submitted testimony [PDF] to the House Subcommittee on [takes deep breath] Investigations and Oversight and Committee on Science, Space, and Technology. Candace Wright, the GAO’s Director of Science, Technology Assessment, and Analytics, explained the findings of previous GAO reports on facial recognition use by federal agencies.

Two of those reports were published last year. The first appeared in June and it showed federal agencies were doing nearly nothing to track employees’ use of facial recognition tech.

Thirteen federal agencies do not have awareness of what non-federal systems with facial recognition technology are used by employees. These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy. Most federal agencies that reported using non-federal systems did not own systems. Thus, employees were relying on systems owned by other entities, including non-federal entities, to support their operations.

Thirteen of the fourteen agencies examined by the GAO (a list that includes ICE, ATF, CBP, DEA, FBI, and the IRS) did not have any processes in place to track use of non-federal facial recognition tech.

This lack of internal oversight led directly to the behavior observed in the GAO’s second report, delivered in August. Either due to a lack of tech on-site or a desire to avoid what little internal oversight exists, federal agencies were often asking state and local agencies to do their dirty face rec work for them.

Unfortunately, this testimony — delivered nearly a year after the GAO released its findings — doesn’t provide any answers about this lack of internal oversight. Nor does it suggest things are moving forward on the internal oversight front as a result of its earlier investigations.

The status remains quo, it appears. About the only thing this testimony adds to the facial recognition discussion is the unfortunate fact that federal agencies feel zero compunction to better control use of this tech. It also adds a bit of trivia to the FRT mix by discussing a few little-known uses of the tech by the government.

Four agencies—the Departments of Health and Human Services, Transportation, and Veterans Affairs, and NASA—reported using FRT as a tool to conduct other research. For example, Transportation reported that the Federal Railroad Administration used eye tracking to study alertness in train operators. Similarly, NASA also reported that it used eye tracking to conduct human factors research. In addition, the Department of Veterans Affairs reported it used eye tracking as part of a clinical research program that treats post-traumatic stress disorder in veterans.

Nor does the testimony explain why agencies surveyed under-reported their use of Clearview’s highly controversial facial recognition software. The information in the GAO’s June 2021 report is contradicted by public records obtained by Ryan Mac and Caroline Haskins of BuzzFeed, strongly suggesting five agencies flat out lied to the GAO.

In a 92-page report published by the Government Accountability Office on Tuesday, five agencies — the US Capitol Police, the US Probation Office, the Pentagon Force Protection Agency, Transportation Security Administration, and the Criminal Investigation Division at the Internal Revenue Service — said they didn’t use Clearview AI between April 2018 and March 2020. This, however, contradicts internal Clearview data previously reviewed by BuzzFeed News.

This misreporting — whether deliberate or not — goes unmentioned in the GAO’s testimony. And apparently no follow-up investigation was performed to see if agencies were doing anything to prevent the sort of thing seen here:

Officials from another agency initially told us that employees did not use non-federal systems; however, after conducting a poll, the agency learned that its employees had used a non-federal system to conduct more than 1,000 facial recognition searches.

A year down the road, and all the GAO can report is that three of the 13 agencies that had no internal tracking processes are now in the process of implementing “at least one” of the three recommendations the GAO handed out nearly 13 months ago following its first report.

Most of the testimony is given over to discussing much quicker movements by federal agencies, i.e., the expanded deployment of questionable tech far ahead of mandated Privacy Impact Assessments or any assessment of the reliability of the tech being deployed.

This testimony is incredibly underwhelming, to say the least. This is the Government Accountability Office doing the talking here. And it’s apparently unable to encourage more than a rounding error’s worth of accountability gains. This leaves it to Congress, an entity that’s largely unconcerned with increasing government accountability because it might make things uncomfortable for its members as they seek to extend their terms into de facto lifetime appointments.

The government has a facial recognition tech problem. And it’s going to quickly become too big to handle if findings like those reported by the GAO a year ago continue to be ignored by federal agencies and the oversight committee this testimony was delivered to. If the GAO can’t be bothered to ask tough questions of agencies that misled it months ago, it seems unlikely Congressional reps with multiple interests to serve (sometimes even those of their constituents!) are going to hold any agency accountable for playing fast and loose with questionable tech and citizens’ rights.

Filed Under: facial recognition, gao, oversight

Google Deletes Abortion Location Data As Attack On Roe Completely Realigns The Privacy Debate

from the everything-changes-now dept

Wed, Jul 6th 2022 06:26am - Karl Bode

In the wake of the Supreme Court’s dismantling of Roe, U.S. tech companies didn’t much want to talk about their role in securing women’s data. And they didn’t want to talk about it because they know that the privacy standards and oversight of the entire US snoopvertising economy (from adtech and telecom to app makers, media giants, and the internet of things) are an unaccountable dumpster fire.

Concerns about things like the widespread abuse of location, clickstream, or behavioral data were routinely dismissed before hardline authoritarianism came to the United States. You couldn’t take a step in any direction without seeing a white male tech insider insisting that you don’t need to meaningfully reform privacy and privacy oversight, because consumers don’t actually care about privacy:

just thinking today about how the people who have spent the past decade-plus insisting "no one really cares about online privacy" were so often men pic.twitter.com/SDrl5CulFH

— Will Oremus (@WillOremus) June 24, 2022

With the potential that women’s browsing, behavior, app, location, and other datasets could be easily purchased and abused by authoritarian state governments and vigilantes (to potentially deadly and life-destroying effect) — thirty-plus years of activist warnings are coming home to roost, and the need for reform isn’t going to be quite so easy to flippantly dismiss (not that these same folks won’t still try).

A lot of companies that made untold fortunes on the back of minimal accountability will soon face a brand new reality. The smarter ones are already getting ahead of a privacy debate paradigm shift that’s going to be fed repeated accelerant in the form of scandals that will make the Equifax or T-Mobile location data scandals look like a beach-side picnic.

After initially being silent, Google last week penned a blog post announcing it would be deleting any user location data for those that have been near abortion clinics. It’s a welcome start for a tech sector that couldn’t be bothered to even issue rote promises immediately post-Roe:

Today, we’re announcing that if our systems identify that someone has visited one of these places, we will delete these entries from Location History soon after they visit. This change will take effect in the coming weeks.

There’s not much detail on how this will work, and Google executives weren’t willing to chat about the changes with the press. Will the deleted gaps create entirely new red flags for law enforcement? What about incognito user searches? What about other sensitive data collected by Android OS and Google hardware?

Google did make it clear that having better, more ethical privacy standards and enforcement isn’t something it can do on its own, however powerful it may be:

Given that these issues apply to healthcare providers, telecommunications companies, banks, tech platforms, and many more, we know privacy protections cannot be solely up to individual companies or states acting individually. That’s why we’ve long advocated for a comprehensive and nationwide U.S. privacy law that guarantees protections for everyone, and we’re pleased to see recent progress in Congress.

To be clear, large companies like Facebook and Google do want a federal privacy law. But they want a federal privacy law their lawyers help write.

Google was also just busted for providing access to all manner of sensitive user data to a sanctioned Russian adtech company owned by the country’s biggest bank, so again, there’s often a sizeable gap between words and actions. And that gap is going to be relentlessly exposed post Roe by authoritarians that are going to push their decade-plus Supreme Court advantage as far as they possibly can.

To be clear, this isn’t an easy issue to solve. There’s no switch to flip. Companies repeatedly make it clear they’re not even fully in tune with the scope of their own data collection. Many activists, in turn, were quick to argue that if Google really wants to help, it could just collect less data overall:

I'm also still super curious how this will protect people in practice. Like … are there just gonna be big gaps in people's location history that they will then be asked by law enforcement to explain if Google is like "we have their location at this time but uh… not this time)

— Evan Greer is on Mastodon and Bluesky (@evan_greer) July 2, 2022

The Washington Post sang a similar tune. But in a country with no federal privacy law, regulators that lack the funding, authority, or manpower to actually police privacy, a Congress that’s completely gridlocked due to multi-sector corruption, and a high court making it extremely clear that already dwindling privacy and corporate accountability are on the chopping block, that just doesn’t seem likely anytime soon.

Most of the policymaking coming out of Congress in relation to “big tech” (from Google to TikTok) is generally just performative noise designed to agitate the base or protect online political propaganda. There’s no real incentive to truly get a competent privacy law or real antitrust reform done anytime soon, and the Supreme Court seems intent on dismantling the entire regulatory state as Congress dithers.

Given government failure and the broad privacy dysfunction across telecom, adtech, apps, hardware, and media, consumers don’t have a whole lot of autonomy on privacy. That leaves the onus on corporations to do better. And corporations only act on privacy when there’s a scandal so obvious and grotesque they have no choice. The post-Roe landscape is poised to deliver those scandals in spades.

Filed Under: abortion, adtech, data collection, oversight, privacy, telecom, women's health care, women's reproductive rights
Companies: google

Everyone’s Got Terrible Regulations For Internet Companies: Senator Bennet Wants A ‘Digital Platform Commission’

from the bad-politicians,-bad dept

Everyone these days seems to want to regulate social media. I mean, the reality is that social media is such a powerful tool for expression that everyone wants to regulate that expression. Sure, they can couch it in whatever fancy justifications they want, but at the end of the day, they’re still trying to regulate speech. And this is across the political spectrum. While, generally speaking, the Republican proposals are more unhinged, it’s not like the Democratic proposals are actually any more reasonable or constitutional.

The latest on the Democratic side to throw his hat in the ring is Colorado Senator Michael Bennet, who has a bizarre proposal for a new Digital Platform Commission. He compares it to the FDA, the FCC, and the more recent Consumer Financial Protection Bureau (CFPB). Except this is different. Regulating social media means regulating speech. So the FDA and the CFPB examples are not relevant. The only one that actually involved regulating businesses engaged in the transmission of expression is the FCC, and that’s part of the reason why the FCC’s mandate has always been quite narrow, covering specific areas where the government can get involved: e.g., regulating scarce spectrum.

Defenders of the bill claim that it’s not there to regulate speech, but so much of what it tiptoes around is that Bennet is unhappy with what is clearly 1st Amendment protected activity by these websites. The bill kicks off by arguing that these websites have disseminated disinformation and hate speech — both of which are protected speech. It complains about them “abetting the collapse of trusted local journalism,” which is a weird way to say “local journalism outfits failed to adapt.” It blames the websites for “radicalizing individuals to violence.” You could just as easily say that about Fox News or OAN, but hopefully most people recognize how promulgating regulations in response to those organizations would be a serious 1st Amendment problem. It trots out a line claiming that social media has “enabled addiction,” a claim that is often made but without any real support or evidence.

It’s basically one big moral panic.

Anyway, what would this new Commission regulate? Well, a lot of stuff that touches on speech, even as it tries to pretend otherwise. Among other things, this Commission would be asked to issue rules on:

requirements for recommendation systems and other algorithmic processes of systemically important digital platforms to ensure that the algorithmic processes are fair, transparent, and without harmful, abusive, anticompetitive, or deceptive bias;

So, recommendations are opinions, and opinions are speech. That’s regulating speech. Also, given what we’ve seen with things like Texas’ social media law, which uses similar language, it’s not at all difficult to predict how a commission like this under a future Trump administration would push rules about “deceptive bias.”

I am perplexed at how a Democratic Senator could possibly write a law like this and not consider how a Trump administration would abuse it.

There’s a lot more in the bill, including other ideas that wouldn’t directly impact speech, but the whole thing is ridiculous. It’s setting up an entire new regulatory agency over social media. We know what happens in situations like this. You get regulatory capture (witness how often the FCC is controlled by telecom interests leading to a lack of competition).

You also get a death of innovation. Regulated industries are slow, lumbering, wasteful entities where it’s difficult, if not impossible, to generate new startups and competition. Effectively this bill would hand over most internet innovation to foreign companies.

It’s a ridiculously dangerous move.

I am perplexed. For the last few years, we’ve seen non-stop, unhinged moral panic from both parties about the internet. As we noted last year, both parties are playing into this, because both parties are trying to twist the internet to their own interests. That’s not what the internet is for. The internet is supposed to be an open network for the public. Not managed by captured bureaucrats.

Filed Under: 1st amendment, digital platform commission, dpc, fcc, michael bennet, oversight, regulation, speech

NYPD Continues To Screw Over Its Oversight By Denying Access To Bodycam Footage

from the long-extended-middle-finger-of-the-law dept

The NYPD’s war on its oversight continues. The secretive law enforcement agency has spent years fighting accountability and transparency, making up its own rules and engaging in openly hostile actions against public records requesters, city officials, internal oversight, and the somewhat-independent CCRB (Civilian Complaint Review Board). Journalists say the NYPD is worse than the CIA and FBI when it comes to records requests. The FBI and CIA say it’s worse than a rogue state when it comes to respecting rights.

The NYPD probably doesn’t wonder why it houses bad cops. In fact, it probably doesn’t even consider the worst of its ranks to actually be “bad” cops. Making things worse on the accountability side, the NYPD answers to two very powerful law enforcement unions, which makes it all but impossible for the department to punish bad cops, even if it wanted to. And while it’s subject to public oversight via the CCRB, it has the power to override the board’s decisions to ensure cops engaged in misconduct aren’t punished too harshly for violating rights and destroying the public’s trust.

NYPD officers began wearing body cameras in 2017 as part of comprehensive reforms put in place by consent decrees issued by federal courts presiding over civil rights lawsuits over the NYPD’s surveillance of Muslims and its minority-targeting “stop and frisk” program.

But body cameras continue to be mostly useful to prosecutors and of negligible value to the general public that was supposed to benefit from this new accountability tool. As ProPublica reports, even the civilian oversight board can’t get the NYPD to hand over footage crucial to investigations of misconduct.

In some instances, the NYPD has told CCRB investigators no footage of an incident exists, only for the CCRB to later learn that it does. For example, during one investigation of an incident for which the NYPD said there was no footage, an officer later told investigators that she had her camera on.

Other times, the NYPD has acknowledged footage exists but refused to turn it over, citing privacy issues. In one case, an officer slammed a young man into the pavement, sending him to the hospital with a brain bleed. Seven body cameras worn by officers captured parts of the incident. But the NYPD withheld almost all the footage from CCRB investigators, on the grounds that a minor’s face could be seen in some of it.

This stonewalling is detailed in the NYPD Inspector General’s latest report [PDF], which finds yet again that the NYPD has zero interest in holding its officers accountable. The NYPD is supposed to be subject to its oversight, but the reality is the oversight is subject to the NYPD.

The agency must rely on NYPD to produce BWC footage that is responsive to a request, making the progress of investigations dependent on NYPD’s capacity and discretion. As mentioned above, and discussed further below, capacity and manpower issues at NYPD have caused extensive delay in the past, and easily could again. But the problems are also substantive. For example, an NYPD searcher may consider certain tags to be not relevant or responsive even if the requesting CCRB investigator would have disagreed. This is problematic since only the investigator has complete knowledge of the investigation, and therefore is best suited to know what may be relevant. In such a case, not only might relevant BWC footage be withheld, but the CCRB investigator would never even know that such footage existed in the first place.

Even when the CCRB obtains footage, it can be redacted into abstraction by NYPD liaisons, who everyone has to assume are acting in good faith, even when it’s apparent they aren’t.

When only NYPD has seen the unredacted footage, the redaction analysis, legal or otherwise, is impaired because NYPD has sole discretion regarding handling of the footage, but incomplete information as to the facts of the investigation and its procedural posture. This is especially problematic when the grounds asserted by NYPD for redaction are disputed or otherwise in doubt. Given the information disparities, the requesting investigative agency may find it difficult to effectively challenge the redactions, which can produce inaccurate or unnecessary redactions, cause additional delay, or force the requesting agency to make disclosures that may infringe on the independence or confidentiality of its investigations.

And, instead of things improving as time goes on, the NYPD’s cooperation with its oversight appears to be getting worse.

CCRB reported that in the second quarter of 2019, 99 percent of BWC video requests remained open for 20 or more business days, or longer than one month. From the beginning of 2018 through the second Quarter of 2019, the percentage of footage returned redacted grew from six to 63 percent. CCRB also reports that there were occasions in which it was not notified that video had been redacted, nor was it provided an explanation for such redactions.

Since the NYPD retains control of all body worn camera footage, it can claim no relevant footage exists, even when it does. And there’s no way for the CCRB to challenge this claim unless it comes up with evidence elsewhere that points to the existence of footage the NYPD claims is nonexistent.

Moreover, current (pre-MOU-implementation) procedures have resulted in a number of “false negative” returns. False negatives are defined as instances in which NYPD reports there is no relevant footage for a particular search, yet CCRB later learns that a pertinent video does exist. CCRB reports learning about false negatives through other police documents, during interviews, or via footage provided to the media, and attributes these false negatives in part to potential incompleteness of NYPD search criteria, incomplete tagging of videos in the system, as well as lack of geotagging of footage. Due to the inability to conduct its own searches, CCRB cannot be certain how many negative search results are accurate.

The solution is — as the IG suggests — that CCRB be given direct access to body camera footage. Quite obviously, the NYPD will never agree to this. And, since it runs its footage through Evidence.com — Axon’s proprietary portal — the NYPD will be able to stonewall future requests by pointing to the fine print in Axon’s contract or simply refusing to provide CCRB with access licenses.

Unless the city is willing to give the CCRB some teeth, it will continue to be nothing more than the illusion of oversight. The NYPD has to answer to the public, and it has shown it’s unwilling to do that on any level. New York legislators need to be willing to stare down the blue-uniformed 800-lb gorilla in their midst. If they can’t, the NYPD will continue to be as awful as it has been for decades.

Filed Under: bodycams, nypd, oversight, transparency

Oversight Unable To Discover Which FBI Agents Leaked Clinton Investigation Info Because Goddamn Everyone Was Leaking Stuff

from the selective-transparency dept

Selective leaking has always been a part of the federal government’s day-to-day business. When there are narratives to massage, controlled leaking is tolerated. Leaks that make the government look bad tend to result in prosecutions, but leaks that act as highly unofficial PR or align with the motivations of the agencies they’re leaked from are largely ignored.

Every so often, though, oversight is asked to keep an eye on leaking, if only to make it appear that all leaks are considered equal. In the interest of perceived fairness, the DOJ’s Office of the Inspector General has taken a look at the incidents surrounding the selective release of information about the investigation of Hillary Clinton’s use of a personal email server during her stint at the State Department under Barack Obama.

There were questions about political motivations — ones not helped at all by selective leaks about the investigation. This was on top of supposedly official actions, like then-FBI director James Comey’s decision to hold a press conference to announce the outcome of the FBI’s investigation. In the agency’s determination, what Clinton did was unwise and gave the appearance of impropriety, but was not illegal.

That would have been it. But shortly before the 2016 election, James Comey decided it was time for him to act unwisely and give off an air of impropriety by announcing the FBI would be reopening its investigation of Clinton and her email server, thanks to developments in an unrelated case. Comey’s actions were also questionable, but apparently not actually illegal.

The OIG tried to dig into the FBI’s use of selective leaking during this time period. And it has arrived at the conclusion that it’s almost impossible to accurately point fingers, much less discourage powerful federal agencies from doing whatever the hell they want to, policies and laws notwithstanding. (h/t Brad Heath)

The report [PDF] leads off with a summary of the undoubtedly frustrating investigation. First, it points out how things should be handled…

Among the issues we reviewed in that report were allegations that FBI employees improperly disclosed non-public information regarding the FBI’s investigation into former Secretary of State Hillary Clinton’s use of a private email server. FBI policies strictly limit the employees who are authorized to speak to the media, and require all other employees to coordinate with or obtain approval from the FBI’s Office of Public Affairs (OPA) in connection with such communications.

And follows that up with a resigned statement about how things actually are…

Nonetheless, as described in our 2016 pre-election report, we found that these policies appeared to be widely ignored during the period we reviewed. Specifically, in our analysis of FBI telephone records, FBI email records, FBI text, and Microsoft Lync instant messages, we identified numerous FBI employees, at all levels of the organization and with no official reason to be in contact with the media, who were nevertheless in frequent contact with reporters.

The Federal Bureau of Sunk Ships (feat. Loose Lips). This would be a national embarrassment if it weren’t so hilarious. The FBI screws over public records requesters on a regular basis and issues tight-lipped “no comments” or Glomars whenever it’s being scrutinized by critics. But apparently everyone on staff is handing out secrets to half the Rolodex whenever they’re feeling particularly chatty about politically-tinged investigations.

And if this seems damning, imagine what the OIG would have found if it was actually engaged in an investigation determined to root out criminal activity.

Because this was a non-criminal administrative misconduct review, there was no legal basis to seek a court order to compel Internet service providers to produce to the OIG the content of any personal email communications for these FBI employees.

As much as I’d like to see some federal agents suddenly jobless, I appreciate the restraint of the OIG, which would have been treading into First Amendment waters by seeking content of communications with journalists. All the same, it appears the FBI doesn’t really need to keep paying people to staff its Public Relations department. It has all the unofficial help it needs to get the word out.

Those who were caught chatting it up with reporters tended to pass the buck, offering the Inspector General facts not in evidence to justify their casual disregard of FBI policy.

Employees interviewed by the OIG generally claimed that they believed their contacts were either authorized by OPA or a field office Special Agent in Charge (SAC) or Assistant Director in Charge (ADIC) to provide background about an FBI initiative or completed investigation, or were personal in nature. Given the absence, in most instances, of any documentary evidence reflecting the substance of these communications, the OIG was unable to determine whether these communications were consistent with the explanations provided by the FBI employees or instead involved the sharing of non-public information with reporters.

Behold the Federal Office of the Future! It’s not just telecommuting and flexible staffing. It’s also paper trail-less, which probably helps the environment in some way. God bless America.

In conclusion: holy shit.

Our ability to identify individuals who have improperly disclosed non-public information is often hampered by two significant factors. First, we frequently find that the universe of Department and FBI employees who had access to sensitive information that has been leaked is substantial, often involving dozens, and in some instances, more than 100 people.

Second, although FBI policy strictly limits the employees who are authorized to speak to the media, we found that this policy appeared to be widely ignored during the period we reviewed. We identified numerous FBI employees, at all levels of the organization and with no official reason to be in contact with the media, who were nevertheless in frequent contact with reporters. The large number of FBI employees who were in contact with journalists during this time period impacted our ability to identify the sources of leaks. For example, during the periods we reviewed, we identified dozens of FBI employees that had contact with members of the media.

Forget FOIA. Maybe all anyone needs is a barely targeted “request for comment.” And it would be ridiculous to believe the FBI is the only agency that behaves this way. The federal government — as multiple presidents have discovered — is staffed by the leakiest bunch of leakers who ever leaked. The problem isn’t the leaks. It’s the selective prosecution of leaks. If the leaks serve the government, they’re just part of the public service infrastructure. But if they embarrass the government or expose wrongdoing, they’re suddenly criminal acts. And that unequal treatment cannot be addressed by internal investigations. It’s a mindset problem — one the government has no intention of changing, no matter who’s occupying the Oval Office.

Filed Under: doj, fbi, hillary clinton, inspector general, james comey, journalism, leaks, oversight, reporting, transparency