Killer drones
South Korea's military drones fly in formation during a joint military drill with the US at Seungjin Fire Training Field in Pocheon on May 25, 2023.
  • The US is among countries arguing against new laws to regulate AI-controlled killer drones. 
  • The US, China, and others are developing so-called "killer robots." 
  • Critics are concerned about the development of machines that can decide to take human lives. 

The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.

Lethal autonomous weapons, which can select targets using AI, are being developed by countries including the US, China, and Israel.

The use of these so-called "killer robots" would mark a disturbing development, critics say, handing life-and-death battlefield decisions to machines with no human input.

Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the US is among a group of nations, also including Russia, Australia, and Israel, that is resisting any such move and favoring a non-binding resolution instead, The Times reported.

"This is really one of the most significant inflection points for humanity," Alexander Kmentt, Austria's chief negotiator on the issue, told The Times. "What's the role of human beings in the use of force — it's an absolutely fundamental security issue, a legal issue and an ethical issue."

The Pentagon is working toward deploying swarms of thousands of AI-enabled drones, according to a notice published earlier this year.

In a speech in August, US Deputy Secretary of Defense Kathleen Hicks said technology like AI-controlled drone swarms would enable the US to offset the numerical advantage of China's People's Liberation Army (PLA) in weapons and people.

"We'll counter the PLA's mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat," she said, reported Reuters.

Frank Kendall, the Air Force secretary, told The Times that AI drones will need the capability to make lethal decisions while under human supervision.

"Individual decisions versus not doing individual decisions is the difference between winning and losing — and you're not going to lose," he said.

"I don't think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves."

New Scientist reported in October that AI-controlled drones had already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it's unclear if any have taken action resulting in human casualties.

The Pentagon did not immediately respond to a request for comment.
