
US City’s Impending Sanction Of ‘Killer Robots’ Triggers Renewed Global Concern About Lethal Autonomous Weapons

  • On 6 December, San Francisco officials will take a final call on arming robots with lethal force to fight crime.
  • A robot carrying explosives was used in another US city to kill an armed suspect.
  • UN-led efforts to ban such lethal autonomous weapons have failed multiple times.
  • While not committing to an outright ban on such robotic weapons, India agrees that the ‘final attack decision should be made by humans, not AI’.

Anand Parthasarathy | Dec 05, 2022, 01:36 PM | Updated 01:32 PM IST
Robocop in the movies: The 2014 version (left) and the 1987 original.



A proposal by the Police Department of San Francisco, California, in the US, to deploy robots capable of using deadly force, including killing, in extraordinary circumstances was approved in an 8-3 vote by the city’s Board of Supervisors earlier this week.

This has set off a firestorm of views and counter-views, in the US and outside, about how far technology like Artificial Intelligence (AI) can be allowed to make life-threatening decisions.

The final decision on whether San Francisco will sanction the arming of robots with the ability to kill will be taken on 6 December. But the Mayor’s already-expressed approval suggests that, barring new pressures, the measure will go through, setting a precedent for the use of what are being called ‘killer robots’ in aid of civilian law enforcement.

“This has serious potential for misuse and abuse of this military-grade technology and a zero showing of necessity,” a city supervisor who voted against the proposal was quoted as saying in a CNN report.


In an editorial a day after the city supervisors took the first vote, The San Francisco Chronicle said: “There was no expert testimony. No discussion of accountability if an armed robot inadvertently kills a civilian. No assurances the robots wouldn’t be hacked.”

Fact Mirroring Fiction?

It added: “The very idea of allowing robots to use deadly force brings to mind the dystopian stories told in Robocop, Terminator and Battlestar Galactica.”

The paper was referring to a cult Hollywood science fiction movie of 1987, remade in 2014, in which a robotic policeman, or ‘Robocop’, a half-human, half-machine cyborg, launches a brutal campaign to clean up Detroit at the behest of a corporation.

Robotic killer in the 2003 movie ‘Terminator 3’

The third (2003) Arnold Schwarzenegger movie in the Terminator series also featured a giant, computer-driven mechanical killer.

Has cinematic fiction anticipated fact? Some scientists think so.

‘Schwarzenegger’s Law’

Prof Toby Walsh of the University of New South Wales in Australia, an AI expert, anticipated this week’s developments as early as 2015.

In an article entitled “The rise of the killer robots: why we need to stop them”, he wrote: “You might be thinking of “Terminator” – a robot which, if you believe the movie, will be available in 2029. But the reality is that killer robots will be much simpler to begin with and are, at best, only a few years away.”

He added: “Moore’s Law predicts that computer chips double in size every two years. We’re likely to see similar exponential growth with killer robots. I vote to call this Schwarzenegger’s Law.”


But every attempt at international forums to establish legally binding rules on machine-operated weapons has failed so far, including at the last such conference, held exactly a year ago in December 2021.

While some 68 nations called for some sort of global embargo, the US, Russia and, interestingly, India have refused to be party to a new treaty banning lethal autonomous weapon systems (LAWS).

Indeed, the Indian stand on robotic or autonomous weapons in international forums has been somewhat ambiguous, possibly because the country does not want to rule out the use of the latest class of such platforms: military drones.

National interests have dictated investment in the indigenous development or outright acquisition of “predator”-class drones, both to carry airborne weapons and to field anti-drone countermeasures.

Made-In-India Bomb Disposal Robot

Daksh, the remotely-operated bomb disposal robot, developed by DRDO. Photo Credit: Wikimedia.

India is also among a small number of nations that have developed their own technology for robotic bomb clearance.

As far back as a decade ago, the Defence Research and Development Organisation (DRDO) developed Daksh, a battery-operated, remote-controlled bomb disposal robot, which has been mass-manufactured by the public sector Bharat Electronics as well as two private firms, Dyna Log and Theta Controls.

DRDO has also developed a class of military drones, and indigenous sources, including some in the private sector, may eventually obviate the need to import drones capable of delivering lethal payloads.

However, these military developments are a far cry from the civilian law enforcement scenarios in which lethal robots are now sought to be deployed in the US.

India’s Position: Humans Before AI


‘Humans before AI’ is likely to be India’s considered position on the contentious issue engaging technologists and law enforcement agencies this week: how far machines can be allowed to make decisions for humans.

Delivering lethal force against civilians is "the exact opposite of what we should be using robots for," Paul Scharre, author of Army of None: Autonomous Weapons and the Future of War, told The New York Times earlier this week.

Rogue robots taking their own decisions, or malfunctioning while on a potentially lethal task, may therefore not be a scenario that needs to trouble us in India right now.

Not as long as our planners, civil and military, continue to keep human control at the centre of any new technology, no matter how attractive or effective the fully autonomous option seems.
