In war it makes sense to keep more soldiers out of harm's way, which means our future will likely be filled with autonomous weapons: tools that can select and attack specific humans, even without human oversight.
With armed drones already used by the American military to carry out targeted killings, the question arises: is it ethical to use AI robots in war?
The primary argument against the use of AI in war is that a robot cannot deliberate, or feel the weight of the decision to take a human life.
The second is that if a robot does something so terrible, there is no one to hold accountable or responsible. This lack of accountability disrespects both the enemy and the rules of war; it is like pledging beforehand that soldiers who break the law will go unpunished.
However, suppose a robot were developed that was perfect: it always killed the right person, with the minimum harm necessary to complete the task. Why not deploy it?
Should such machines be kept off the battlefield simply because they cannot feel, while we continue to rely on flawed humans who may shoot without much thought and are susceptible to errors, biases, and negative emotions?
They should not be deployed, because morality cannot be boiled down to a list of instructions. It requires experience, judgement, and a moral sense that cannot be expressed in words. So no matter how sophisticated a machine becomes, it cannot act for the right reasons.
Acting for the right reasons means acting from the right motivation: a soldier might fight to establish peace, or merely for reward, and the difference matters. It also matters enormously why a state goes to war, which is why there is still fierce debate over the US war in Iraq: was it a justified humanitarian intervention, or just a resource grab? Reasons and motivations must always matter deeply in war.
Criticizing autonomous weapons for lacking reasons, though, anthropomorphizes them; we do not hold other weapons to that standard. This leads to the strongest argument against AI in war:
Autonomous weapons are not like highly advanced missiles. With missiles, there is always a human behind every bullet and bomb, someone who conveys an intention by pulling the trigger. Autonomous robots are not like that: they can make lethal decisions on their own, taking on a role traditionally reserved for a human, namely deciding who should live or die.