On Monday, April 29, 2024, Austria emphasized the need for renewed efforts to regulate the use of artificial intelligence (AI) in weapons systems at a conference it hosted on autonomous weapons. The call came amid growing concern about the development of autonomous weapons, commonly referred to as “killer robots.”
As AI technology advances, the prospect of machines capable of making life-and-death decisions without human intervention becomes increasingly plausible. This development presents significant moral and legal challenges that demand urgent attention.
“We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control,” Austrian Foreign Minister Alexander Schallenberg told the meeting of non-governmental and international organisations as well as envoys from 143 countries.
“At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines,” he said in an opening speech to the conference entitled “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation.”
After years of discussions at the United Nations, there have been few tangible results.
“It is so important to act and to act very fast,” the president of the International Committee of the Red Cross, Mirjana Spoljaric, told a panel discussion at the conference.
“What we see today in the different contexts of violence are moral failures in the face of the international community. And we do not want to see such failures accelerating by giving the responsibility for violence, for the control over violence, over to machines and algorithms,” she added.
“We have already seen AI making selection errors in ways both large and small, from misrecognizing a referee’s bald head as a football, to pedestrian deaths caused by self-driving cars unable to recognize jaywalking,” Jaan Tallinn, a software programmer and tech investor, said in a keynote speech.
“We must be extremely cautious about relying on the accuracy of these systems, whether in the military or civilian sectors.”
Artificial intelligence has already made its presence felt on the battlefield. In Ukraine, drones have demonstrated the ability to locate their targets even when cut off from their operators by signal jamming. Reports also suggest that the Israeli military is exploring the use of AI to help identify bombing targets in Gaza, a development the United States is looking into.