Should Killer Robots be Banned?
“You can’t say that civilization don’t advance, however, for in every war they kill you in a new way.” – Will Rogers
Humans have been using machines to kill each other for centuries, and these machines have become ever more advanced and lethal. In recent decades there has been considerable focus on developing autonomous weapons—that is, weapons that can locate and engage the enemy on their own without being directly controlled by human beings. The crude homing torpedoes of World War II are an early example of such a killer machine. Once fired, the torpedo would be guided by acoustic sensors to its target and then explode—it was a crude, suicidal mechanical shark. Of course, this weapon had very limited autonomy, since humans decided when to fire it and at what target.
Thanks to advances in technology, far greater autonomy is now possible. One peaceful example of this is the self-driving car. While some see self-driving cars as privacy-killing robots, they are not designed to harm people—quite the opposite, in fact. However, it is easy to see how the technology used to guide a car safely around people, animals and other vehicles could be used to guide an armed machine to its targets.
Not surprisingly, some people are rather concerned about the possibility of killer robots, or with less hyperbole, autonomous weapon systems. Recently there has been a push to ban such weapons by international treaty. While people are no doubt afraid of killer machines roaming about due to science fiction stories and movies, there are legitimate moral, legal and practical grounds for such a ban.
One concern is that while autonomous weapons might be capable of seeking out and engaging targets, they would lack the capability to make the legal and moral decisions needed to operate within the rules of war. As a specific example, there is the concern that a killer robot will not be able to distinguish between combatants and non-combatants as reliably as a human being. As such, autonomous weapon systems could be far more likely than human combatants to kill noncombatants due to improper classification.
One obvious reply is that while there are missions in which the ability to make such distinctions would be important, there are others in which the autonomous weapon would not require it. If a robot infantry unit were engaged in combat within a populated city, then it would certainly need to be able to make such a distinction. However, just as a human bomber crew sent on a mission to destroy a factory would not be required to make such distinctions, an autonomous bomber would not need to have this ability. As such, this concern only has merit in cases in which such distinctions must be made and could reasonably be made by a human in the same situation. Thus, a sweeping ban on autonomous weapons would not be warranted by this concern.
A second obvious reply is that this is a technical problem that could be solved to a degree that would make an autonomous weapon at least as reliable as an average human soldier in making the distinction between combatants and non-combatants. It seems likely that this could be done given that the objective is a human level of reliability. After all, humans in combat do make mistakes in this matter so the bar is not terribly high. As such, banning such weapons would seem to be premature—it would need to be shown that such weapons could not make this distinction as well as an average human in the same situation.
A second concern is based on the view that the decision to kill should be made by a human being and not by a machine. Such a view could be based on an abstract view about the moral right to make killing decisions or perhaps on the view that humans would be more merciful than machines.
One obvious reply is that autonomous weapons are still just weapons. Human leaders will, presumably, decide when they are deployed and give them their missions. This is analogous to a human firing a homing missile—the weapon tracks and destroys the intended target, but the decision that someone should die was made by a human. Presumably humans would be designing the decision-making software for the machines, and they could program in a form of digital mercy—if desired.
There is, of course, the science fiction concern that the killer machines will become completely autonomous and fight their own wars (as in Terminator and “Second Variety”). The concern about rogue systems is worth considering, but is certainly a tenuous basis for a ban on autonomous weapons.
Another obvious reply is that while machines would probably lack mercy, they would also lack anger and hate. As such, they might actually be less awful about killing than humans.
A third concern is based on the fact that autonomous machines are just machines without will or choice (which might also be true of humans). As such, wicked or irresponsible leaders could acquire autonomous weapons that will simply do what they are ordered to do, even if that involves slaughtering children.
The obvious, but depressing, reply to this is that such leaders never seem to want for people to do their bidding, however awful that bidding might be. Even a cursory look at the history of war and terrorism shows that this is a terrible truth. As such, autonomous weapons do not seem to pose a special danger in this regard: anyone who could get an army of killer robots would almost certainly be able to get an army of killer humans.
There is, of course, a legitimate concern that autonomous weapons could be hacked and used by terrorists or other bad people. However, this would be no different from such people gaining access to non-autonomous weapons and using them to hurt and kill people.
In general, the moral motivation of the people who oppose autonomous weapons is laudable. They presumably wish to cut down on death and suffering. However, this goal seems to be better served by the development of autonomous weapons. Some reasons for this are as follows.
First, since autonomous weapons are not crewed, their damage or destruction will not result in harm or death to people. If a manned fighter plane is destroyed, that is likely to result in harm or death to a person. However, if a robot fighter plane is shot down, no one dies. If both sides are using autonomous weapons, then the casualty count would presumably be lower than in a conflict where the weapons are all manned. To use an analogy, automating war could be analogous to automating dangerous factory work.
Second, autonomous weapons can advance the existing trend toward precision weapons. Just as “dumb” bombs dropped in massive raids gave way to laser-guided bombs, autonomous weapons could provide an even greater level of precision. This would be due, in part, to the fact that there is no human crew at risk, and hence the safety of the crew would no longer be a concern. For example, rather than having a manned aircraft launch a missile at a target while streaking past at high altitude, an autonomous craft could approach the target closely at a lower speed in order to ensure that the missile hits the right target.
Thus, while the proposal to ban such weapons is no doubt motivated by the best of intentions, the ban itself would not be morally justified.