autonomous killing – Techdirt

Ukraine Turns To Flying Machine Guns And Autonomous AI-Controlled Drone Swarms To Counter Russian Numbers

from the crossing-the-line dept

It’s no secret that Ukraine is having a hard time in its fight against Russia at the moment. That’s in part because Ukraine is being limited in how deep into Russia it can attack using Western-supplied weapons. But mostly it is a matter of numbers: Russia has more men that it is willing to sacrifice in assaults, and more weapons and ammunition that it can use to pound Ukrainian positions and cities. As Alex Bornyakov, Ukraine’s deputy minister of digital transformation, told the UK Times: “We don’t have as many human resources as Russia, they fight, they die, they send more people, they don’t care, but that’s not how we see war.” Since it can’t match Russia in raw manpower and firepower, Ukraine has turned to technology to help it fight back.

In particular, Ukraine has been using drones in a way that is redefining modern war. First, it is deploying them on an unprecedented scale. Back in December, Ukraine’s President Zelensky said that his country would produce one million drones in 2024. More recently, Hanna Hvozdyar, Ukraine’s Deputy Minister of Strategic Industries, stated that the country would in fact produce two million drones this year, though that claim is unverified. In addition to sheer numbers, Ukraine is also pushing the boundaries of drone design. In May, Euromaidan Press reported that Ukrainian forces are mounting machine guns on heavy octocopter drones “to strafe Russian infantry assaults and fire into the trenches from above”:

Right now, Ukrainians are using two types of drones against the Russian infantry: grenade-dropping drones, and kamikaze drones. Dropping grenades accurately is extremely difficult, especially if the infantry is moving, or if the infantry has electronic warfare kits that necessitate operating from a much higher altitude, further reducing the accuracy.

The kamikaze drones, in turn, can only be used once. The development of gun mounts, combined with thermal vision and machine aiming, will change the setting completely.

Another innovative approach involves the use of AI to create a “swarm” of up to seven drones that can work cooperatively to attack tanks and carry out reconnaissance. The Times spoke to a Ukrainian entrepreneur working on this technology in Kyiv, Serhii Krupiienko:

“It’s the equivalent of bringing the steam engine into the factory all those years ago,” says Krupiienko, a software engineer who studied at Stanford University, California. “Our core mission is to get robots to do the fighting, not humans.

“They can communicate with each other, making decisions on which one attacks, which gathers intelligence — and they’ll do it faster than any human.”
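
The reports don’t explain how the drones actually divide up the work between themselves, but the underlying idea (each drone scoring itself for a role, and the swarm assigning the best fit) can be sketched in a few lines. Everything below, from the drone attributes to the scoring weights to the greedy “auction,” is an illustrative assumption rather than a description of Krupiienko’s software or any fielded system:

```python
# Toy sketch of cooperative role allocation in a small drone swarm.
# Purely illustrative: the attributes, weights, and greedy auction are
# assumptions, not details of any real Ukrainian system.
from dataclasses import dataclass

@dataclass
class Drone:
    name: str
    battery: float             # 0.0 - 1.0 remaining charge
    payload: bool              # carries an attack payload?
    distance_to_target: float  # km

def bid(drone: Drone, role: str) -> float:
    """Score how well a drone fits a role; higher is better."""
    if role == "attack":
        if not drone.payload:
            return 0.0
        return drone.battery * 2.0 - drone.distance_to_target * 0.1
    # "recon" favors drones with charge to spare; payload is irrelevant
    return drone.battery - drone.distance_to_target * 0.05

def assign_roles(swarm: list[Drone]) -> dict[str, str]:
    """Greedy auction: the best-suited attacker attacks, everyone else scouts."""
    attacker = max(swarm, key=lambda d: bid(d, "attack"))
    return {d.name: ("attack" if d is attacker else "recon") for d in swarm}

swarm = [
    Drone("alpha", battery=0.9, payload=True, distance_to_target=2.0),
    Drone("bravo", battery=0.6, payload=True, distance_to_target=1.0),
    Drone("charlie", battery=0.8, payload=False, distance_to_target=0.5),
]
print(assign_roles(swarm))  # {'alpha': 'attack', 'bravo': 'recon', 'charlie': 'recon'}
```

A real swarm would also have to cope with jamming, lost links, and the human sign-off Bornyakov insists on, which is exactly where the hard questions start.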

Bornyakov told The Times that the country is testing another company’s intelligent swarm technology as well. The New York Times reports on a number of Ukrainian laboratories and factories working on other low-cost autonomous weapons. Bornyakov insisted that Ukraine will not allow any of these killing machines to go “completely autonomous,” without a human making the final decision. But when your soldiers are struggling against larger, better-equipped forces in a war that will determine whether your country continues to exist in any meaningful sense, that ethical position will be hard to maintain. And once that line is crossed, how wars are conducted will have changed forever.

Follow me @glynmoody on Mastodon and on Bluesky.

Filed Under: ai, autonomous killing, drone, kamikaze, kyiv, reconnaissance, robots, swarm, tanks, ukraine, zelensky

San Francisco Legislators Greenlight Killing Of Residents By Police Robots… And Then Kill It…

from the Robocop-is-not-something-to-aspire-to dept

Update: So we had this post about SF supervisors approving the killer robots in their initial vote, and had a note at the end that it still needed one more round of approvals by the Supervisors… and apparently widespread protests last night convinced the board to drop the proposal! The original (mostly obsolete) post is below.

For a while, the city of San Francisco appeared to be on the cutting edge of civil rights. It responded to the exponential growth of the facial recognition tech industry by banning use of the unproven, often-biased tech by government agencies, including the San Francisco Police Department.

This progressive take on policing was short-lived. The 2019 ban is no longer making headlines. Instead, a move towards a West Coast police state dominates reporting about the city and its legislators, who have apparently decided that because crime exists, freedoms and liberties need to be back-burnered for the time being.

The first indication that things were sliding extremely off the rails in San Francisco was the city’s decision to give the SFPD on-demand access to live feeds from privately owned security cameras. This intrusion on personal property was justified by a blog post from Mayor London Breed, who claimed it only made sense because crimes were still happening. Apparently, “exigent circumstances” were no longer enough. To “protect public safety responsibly,” San Francisco cops needed to be able to ride piggyback on private feeds whenever they deemed it necessary to do so.

Because that just wasn’t totalitarian enough, city legislators proposed another increase in police powers. Killer robots, they said, seemingly unaware of the public’s everlasting opposition to government-deployed automatons armed with deadly weapons. Literally every dystopian bit of popular culture says this is a bad idea.

Everyone else is wrong, said legislators. Let the processor chips fall where they may. And now the proposal has been approved, as the Associated Press reports.

Supervisors in San Francisco voted Tuesday to give city police the ability to use potentially lethal, remote-controlled robots in emergency situations — following an emotionally charged debate that reflected divisions on the politically liberal board over support for law enforcement.

The vote was 8-3, with the majority agreeing to grant police the option despite strong objections from civil liberties and other police oversight groups.

Those aligning themselves with Terminators 0-1000 had their excuses.

Supervisor Connie Chan, a member of the committee that forwarded the proposal to the full board, said she understood concerns over use of force but that “according to state law, we are required to approve the use of these equipments. So here we are, and it’s definitely not a easy discussion.”

Wait a minute. State law says city supervisors must approve non-human deployment of deadly force? That seems… well, incredibly unlikely. This sounds like someone trying to wash their hands of the whole issue, but with the blood of city residents rather than anything that would actually make their hands less dirty.

The SFPD also “understands” the concerns of citizens. And it promises residents will not be shot to death by its city-approved killer robots. They’ll only be blown the fuck up.

The San Francisco Police Department said it does not have pre-armed robots and has no plans to arm robots with guns. But the department could deploy robots equipped with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspect” when lives are at stake, SFPD spokesperson Allison Maxie said in a statement.

Huh. It looks like the SFPD misspelled “kill” at least three times in its statement. I’m not sure how you “contact” someone with an explosive, but when the Unabomber did it, it was a federal crime. “Incapacitate” is just another way to pronounce “kill.” And “disorient” only makes sense if it means the explosives will make someone incapable of orienting themselves… you know, like when they’re reduced to chunks of flesh that require a mop-up team using actual mops.

This is supposed to make people feel better about allowing armed killers with zero calculable feelings to roll up on crime scenes with a metal fistful of C-4.

Supervisors amended the proposal Tuesday to specify that officers could use robots only after using alternative force or de-escalation tactics, or concluding they would not be able to subdue the suspect through those alternative means. Only a limited number of high-ranking officers could authorize use of robots as a deadly force option.

Oh. OK. So the “amendment” shifts almost everything to the discretion of officers who will always claim they tried to de-escalate the hell out of the scene and got the shift commander on the horn before sending in a deadly blend of CPUs and explosives to “subdue” the suspect into a bloody paste incapable of alleging civil rights violations. If it’s found none of the things cops asserted prior to disintegrating a suspect are true, they’ll still be able to ask for immunity. At worst, they’ll be indemnified by the city — the same city that said killer robots are definitely something that’s needed as the city (despite some recent spikes in certain crime) enjoys historical lows in crime rates.

Here’s the thing: if you don’t want cops to get in trouble by deploying new deadly force methods without clear justification, the best thing you can do is NOT GIVE THEM THAT OPTION. Allowing cops to use remote-controlled bombs to, um, defuse situations will only result in a whole lot of post-facto forgiveness requests — pleas for mercy after they’ve already rendered someone incapable of being identified by their loved ones. There’s no way any police department in the nation can say it’s earned the trust to use something like this responsibly. Until officers can stop murdering people on the regular, the last thing they should be given access to is more ways to kill.

That said, this proposal isn’t the law just yet. The Supervisors need to vote on this again before it heads to Mayor Breed’s desk for signature.

Filed Under: autonomous killing, london breed, police robots, san francisco, sfpd

San Francisco Lawmakers Think It Might Be OK For Cops To Deploy Robots To Kill People

from the [extremely-Jim-Morrison-voice]-there's-a-killer-at-the-door dept

Lots of people like to pretend California is home to certifiable Communists — a socialist collective masquerading as a state. But California is not beholden to socialist ideals. It has its own dictatorial ideological bent, one that’s only slightly tamed by its election of liberal leaders.

Every move towards the left is greeted by an offset to the right. If anything, California is the Land of Compromise. Ideological shifts are short-lived. What really lasts are the things the California government does that give the government more power, even as it assures the electorate that their concerns have been heard.

Case in point: San Francisco. In early 2019, the city passed a ban on facial recognition tech use by government agencies. This move placed it on the “left,” at least in terms of policing the police. (The law was amended shortly thereafter when it became clear government employees were unable to validate their identity on city-issued devices.)

Communist paradise indeed. But no, not really. San Francisco’s lawmakers may have had some good ideas about trimming the government’s surveillance powers, but those good ideas were soon compromised by law enforcement. And those compromises have been greeted with silence.

In May of this year, cops were caught accessing autonomous vehicle data in the hopes of obtaining evidence in ongoing investigations. A truly autonomous vehicle creates nothing but third-party data, so there was little need to worry about Fourth Amendment implications. Still, you’d think a city concerned with government overreach would express a little more concern about this cop opportunism.

Nothing happened in response to this revelation. Instead, four months later, city lawmakers approved on-demand access to private security cameras, reasoning that cops deserved this access because crime was still a thing. Mayor London Breed justified the move towards increased authoritarianism in a [checks notes] Medium post:

We also need to make sure our police officers have the proper tools to protect public safety responsibly. The police right now are barred from accessing or monitoring live video unless there are “exigent circumstances”, which are defined as events that involve an imminent danger of serious physical injury or death. If this high standard is not met, the Police can’t use live video feed, leaving our neighborhoods and retailers vulnerable.

These are the reasons why I authored this legislation. It will authorize police to use non-City cameras and camera networks to temporarily live monitor activity during significant events with public safety concerns, investigations relating to active misdemeanor and felony violations, and investigations into officer misconduct.

When the going gets tough, the elected toughs get chickenshit. All it took to generate carte blanche access to private security cameras was some blips on the crime radar. Whatever gains were made with the facial recognition tech ban were undone by the city’s unwillingness to stand by its principles when isolated incidents (hyped into absurdity by news broadcasters) made certain residents feel ways about stuff.

The news cycle may have cycled, but the desire to subject San Francisco to extensive government intrusion remains. If the cops can’t have facial recognition tech, maybe they should be allowed to kill people by proxy. It’s a super-weird take on law enforcement, but one that has been embraced by apparently super-weird city legislators, as Will Jarrett reports for Mission Local.

A policy proposal heading for Board of Supervisors approval next week would explicitly authorize San Francisco police to kill suspects using robots.

The new policy, which defines how the SFPD is allowed to use its military-style weapons, was put together by the police department. Over the past several weeks, it has been scrutinized by supervisors Aaron Peskin, Rafael Mandelman and Connie Chan, who together comprise the Board of Supervisors Rules Committee.

Yikes. Turning residents into Sarah Connor isn’t a wise use of government power. Giving police additional deadly force powers is unlikely to heal the immense rift that has developed as cops continue to kill people with disturbing frequency, all while enjoying the sort of immunity that comes with the territory.

Attempts to mitigate the new threat authorized by this proposal were undermined by the San Francisco PD, which apparently thinks killing people with modified Johnny Fives is a good idea:

Peskin, chair of the committee, initially attempted to limit the SFPD’s authority over the department’s robots by inserting the sentence, “Robots shall not be used as a Use of Force against any person.”

The following week, the police struck out his suggestion with a thick red line.

It was replaced by language that codifies the department’s authority to use lethal force via robots: “Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to SFPD.”

The edit may seem all pointy-eared-Spock logical when taken at face value. But it isn’t. What cops believe poses an “imminent threat” to officers is so far outside the norm expected by “reasonable” citizens that it makes this edit meaningless. Cops are allowed to make highly subjective judgment calls — the sort of thing that often leads to unarmed people (especially minorities) being killed by law enforcement officers. Add this rights-optional autonomy to autonomous killing machines and you’re asking for the sort of trouble residents will be forced to subsidize as the city settles lawsuits triggered by cops who think a person’s mere existence is enough of a threat to justify deadly force.

Adding this to the arsenal of rights-optional weapons deployed by the SFPD ushers in a new era where cops can be judge, jury, and executioner. I mean, in many cases they already are. But this adds a level of Judge Dredd-adjacent dystopia where cops can try to claim it wasn’t them but rather the one-armed man robot. San Francisco legislators should kill this bill deader than the residents the SFPD kills. The “imminent threat” justification is too vague and too easily abused to allow officers to absolve their own guilt by allowing a robotic assistant to perform killings on their behalf.

Filed Under: autonomous killing, london breed, robot police, robots, san francisco, sfpd