
Killer Robots: Should AI be used in wars?


In war it makes sense to keep more soldiers out of harm's way, which means our future will likely be filled with autonomous weapons: tools that can select and engage specific human targets, even without human oversight.
With developments such as killer drones used by the American military to carry out assassinations, the question arises: is it ethical to use AI robots in war?
The primary argument against the use of AI in war is that a robot cannot deliberate over, or feel the weight of, the decision to take a human life.
The second is that if a robot does something terrible, there is no one to hold accountable or responsible. This lack of accountability disrespects the enemy and the rules of war; it is like pledging beforehand that soldiers who break the law will go unpunished.
However, suppose a perfect robot were developed, one that always kills the right person and minimizes the harm needed to complete its task. Why not deploy it?
Should such machines be withheld simply because they cannot feel, leaving the job instead to flawed humans who may shoot without thinking and who are susceptible to errors, biases, and negative emotions?
They shouldn't be deployed, because morality cannot be boiled down to a list of instructions. It requires experience, judgement, and a moral sense that cannot be expressed in words. So no matter how sophisticated a machine becomes, it cannot act for the right reasons.
