BEACON TRANSCRIPT – In the age of AI-driven weapons, it is increasingly hard to tell ally from foe. Killer robots have rekindled ethical debates, and many countries have voted in favor of shelving projects that would deploy fully autonomous war machines on the battlefield.
To some degree, the movie "Terminator" now seems prophetic. In the film, intelligent machines of the not-so-distant future turn against their former masters and lay waste to the world. The scenario might feel far-fetched, but reality suggests otherwise.
At a recent elite meeting in Davos, in the Swiss Alps, many top scientists and billionaires voted in favor of autonomous weapons disarmament. It would seem this is the first time since the Cold War that the great minds of our world have gathered to discuss the possibility of total annihilation.
The meeting took place between the 19th and the 23rd of January at the Davos resort. Angela Kane, former UN High Representative for Disarmament Affairs, cautioned against the use of highly intelligent, self-reliant super weapons. The weapons specialists also warned that it may already be too late to reverse the situation.
Moreover, Kane added that the deployment of intelligent weapons will ultimately shift power toward the countries that possess the technology and expertise to build such weapons of destruction.
There is also an ethical aspect to this issue, and Stuart Russell framed the debate quite well. According to the computer specialist, there is a radical distinction between military drones, which require human operators to make the decisions and control the machines, and AI-driven weapons, which are fully autonomous and make decisions on their own, rendering a human operator redundant.
One of the first aspects to consider is that certain conditions on the battlefield cannot be expressed in machine terms. For example, how can an AI-driven war machine differentiate between freedom fighters, civilians, and enemies? Moreover, it is still uncertain whether a machine is even capable of operating in chaotic battlefield conditions without going haywire.
And what happens when these killer machines fall into the wrong hands? What if someone tampers with their programming? These are all legitimate questions when it comes to using intelligent machines as soldiers. As Russell put it, by fielding AI-driven robots on the battlefield, man becomes devoid of all ethical considerations.
Killer robots have rekindled ethical debates, and more countries have started to weigh the consequences of deploying fully autonomous war machines on the battlefield.