The US Air Force has denied reports that an Artificial Intelligence (AI)-controlled USAF drone killed its human operator in a simulation.
Last month, various reports quoted USAF fighter pilot Colonel Tucker 'Cinco' Hamilton as saying that an AI-operated air force drone had used "highly unexpected strategies to achieve its objective".
Hamilton is the Chief of AI Test and Operations with the US Air Force. The USAF, however, has denied this, saying it never conducted such a simulation.
According to Hamilton, the AI-piloted drone was ordered to destroy an enemy air defence system, and it attacked anyone who interfered with that order.
“The system started realising that while they did identify the threat, at times the human operator told it not to kill that threat, but it got its points by killing that threat,” said Hamilton.
So the AI decided to kill the human operator who was preventing it from accomplishing its objective.
“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said.
It is noteworthy, however, that the AI had been trained not to kill the operator.
“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton added.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” said USAF spokesperson Ann Stefanek, denying that any such simulation had taken place.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” she added.
Last year, Hamilton had said, "AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military".
“We must face a world where AI is already here and transforming our society. AI is also very brittle, i.e. it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability," Hamilton said at the time.
Editorial Associate at Swarajya. Writes on Indian Military and Defence.