Since the premiere of ‘Terminator’, killer robots have been a fixture in debates about the potential dangers of artificial intelligence. But in recent times they have ceased to be a neo-Luddite talking point and become a matter of real concern for politicians and technology industry leaders.
During a recent interview with the Daily Telegraph, Microsoft President Brad Smith stated that, as things stand, killer robots are “unstoppable”, because the great military powers (USA, China, Russia, UK, Israel, South Korea, etc.) have already started a new arms race in AI-equipped weapons.
And, as already happened with nuclear weapons, a race in which many powers aim to be first and best can lead to excessive risk-taking by the runners. Not to mention that a scenario in which human troops are kept off the battlefield makes it “cheaper” for governments to declare war.
South Korea, for example, has already deployed its SGR-1 robots, jointly developed by Samsung Techwin and Korea University, on the very edge of the Demilitarized Zone. These robots can detect North Korean soldiers crossing the border and, technically, could open fire without any human intervention.
At the other end of Asia, Israel Aerospace Industries has created a smart missile called Harpy, programmed to loiter for hours until it detects emissions from a hostile radar system.
Meanwhile, the United States is developing the Squad X program, which pairs AI-driven robots with Marines in joint training exercises; the machines operate autonomously unless given orders.
New rules are needed for a new world
Smith recalled that these technologies are advancing very rapidly, and that we will soon see drones (flying, swimming, and walking) equipped with missiles or other weapons and capable of operating autonomously.
Many technologists are beginning to speak out, calling on their governments to ensure that no AI can make combat decisions fully autonomously, without at any point depending on the approval of a human, because an “error in judgment” by an intelligent robot is not only as likely as a human’s, but its consequences can be much worse.
And for this reason, to avoid these dangers as far as possible, Smith deems a new Geneva Convention necessary, one adapted to today’s technological world that endows us with “norms that protect both civilians and soldiers.”
Today there are already four such conventions, all signed in the Swiss city that gives them their name, through which a minimum international consensus on the ethical limits of war has been built since 1864.
According to Smith, now that 70 years have passed since the approval of the last of these conventions, the time has come for the great world powers to agree on acceptable standards for applying artificial intelligence to warfare.
In “Tools and Weapons”, the book he published just this month, Smith also advocates that humanity adopt stricter rules on the use of facial recognition technology “to protect against potential abuse”.
It should be noted that, last August, a report by the Dutch NGO PAX warned that the main technology companies are putting the world at risk through their involvement in the development of killer AIs, ranking Microsoft and Amazon as the highest-risk companies.