A Philosopher's Blog

Should Killer Robots be Banned?

Posted in Ethics, Law, Philosophy, Politics, Science, Technology by Michael LaBossiere on October 23, 2013
The Terminator. (Photo credit: Wikipedia)

You can’t say that civilization don’t advance, however, for in every war they kill you in a new way.

-Will Rogers

 

Humans have been using machines to kill each other for centuries, and these machines have become ever more advanced and lethal. In recent decades there has been considerable focus on developing autonomous weapons, that is, weapons that can locate and engage the enemy on their own without being directly controlled by human beings. The crude seeking torpedoes of World War II are an early example of such a killer machine. Once fired, the torpedo was guided by acoustic sensors to its target, where it exploded: a crude, suicidal mechanical shark. Of course, this weapon had very limited autonomy, since humans decided when to fire it and at what target.

Thanks to advances in technology, far greater autonomy is now possible. One peaceful example of this is the famous self-driving car. While some see such cars as privacy-killing robots, they are not designed to harm people; quite the opposite, in fact. However, it is easy to see how the technology used to guide a car safely around people, animals and other vehicles could be used to guide an armed machine to its targets.

Not surprisingly, some people are rather concerned about the possibility of killer robots, or, with less hyperbole, autonomous weapon systems. Recently there has been a push to ban such weapons by international treaty. While fear of killer machines roaming about is no doubt fueled by science fiction stories and movies, there are legitimate moral, legal and practical grounds for such a ban.

One concern is that while autonomous weapons might be capable of seeking out and engaging targets, they would lack the capability to make the legal and moral decisions needed to operate within the rules of war. As a specific example, there is the concern that a killer robot will not be able to distinguish between combatants and non-combatants as reliably as a human being. As such, autonomous weapon systems could be far more likely than human combatants to kill non-combatants due to improper classification.

One obvious reply is that while there are missions in which the ability to make such distinctions would be important, there are others where it would not be required of the autonomous weapon. If a robot infantry unit were engaged in combat within a populated city, then it would certainly need to be able to make such a distinction. However, just as a human bomber crew sent on a mission to destroy a factory would not be required to make such distinctions, an autonomous bomber would not need this ability. As such, this concern only has merit in cases in which such distinctions must be made and could reasonably be made by a human in the same situation. Thus, a sweeping ban on autonomous weapons would not be warranted by this concern.

A second obvious reply is that this is a technical problem, one that could be solved to the point where an autonomous weapon is at least as reliable as an average human soldier in distinguishing combatants from non-combatants. It seems likely that this could be done, given that the objective is merely a human level of reliability. After all, humans in combat do make mistakes in this matter, so the bar is not terribly high. As such, banning such weapons would seem to be premature: it would need to be shown that such weapons could not make this distinction as well as an average human in the same situation.

A second concern is based on the view that the decision to kill should be made by a human being and not by a machine. Such a view could rest on an abstract claim about the moral right to make killing decisions, or perhaps on the belief that humans would be more merciful than machines.

One obvious reply is that autonomous weapons are still just weapons. Human leaders will, presumably, decide when they are deployed and give them their missions. This is analogous to a human firing a seeking missile: the weapon tracks and destroys the intended target, but the decision that someone should die was made by a human. Presumably humans would be designing the decision-making software for the machines and could program in a form of digital mercy, if desired.

There is, of course, the science fiction concern that the killer machines will become completely autonomous and fight their own wars (as in Terminator and “Second Variety”). The concern about rogue systems is worth considering, but is certainly a tenuous basis for a ban on autonomous weapons.

Another obvious reply is that while machines would probably lack mercy, they would also lack anger and hate. As such, they might actually be less awful about killing than humans.

A third concern is based on the fact that autonomous machines are just machines without will or choice (which might also be true of humans). As such, wicked or irresponsible leaders could acquire autonomous weapons that will simply do what they are ordered to do, even if that involves slaughtering children.

The obvious, but depressing, reply to this is that such leaders never seem to want for people to do their bidding, however awful that bidding might be. Even a cursory look at the history of war and terrorism shows that this is a terrible truth. As such, autonomous weapons do not seem to pose a special danger in this regard: anyone who could get an army of killer robots would almost certainly be able to get an army of killer humans.

There is, of course, a legitimate concern that autonomous weapons could be hacked and used by terrorists or other bad people. However, this would be the same as such people getting access to non-autonomous weapons and using them to hurt and kill people.

In general, the moral motivation of the people who oppose autonomous weapons is laudable. They presumably wish to cut down on death and suffering. However, this goal seems to be better served by the development of autonomous weapons, for the following reasons.

First, since autonomous weapons are not crewed, their damage or destruction does not result in harm or death to people. If a manned fighter plane is destroyed, that is likely to result in harm or death to a person; if a robot fighter plane is shot down, no one dies. If both sides are using autonomous weapons, then the casualty count would presumably be lower than in a conflict where the weapons are all manned. To use an analogy, automating war could be like automating dangerous factory work.

Second, autonomous weapons can advance the existing trend toward precision weapons. Just as “dumb” bombs dropped in massive raids gave way to laser-guided bombs, autonomous weapons could provide an even greater level of precision. This would be, in part, because there is no human crew at risk, so crew safety would no longer constrain the mission. For example, rather than having a manned aircraft launch a missile at a target while passing by at high altitude, an autonomous craft could approach the target closely and at a lower speed to ensure that the missile hits the right target.

Thus, while the proposal to ban such weapons is no doubt motivated by the best of intentions, the ban itself would not be morally justified.

 


3 Responses

  1. ajmacdonaldjr said, on October 23, 2013 at 1:45 pm

    In the technological society, efficiency drives everything; and morality is passé…

    “Modern technology has become a total phenomenon for civilization, the defining force of a new social order in which efficiency is no longer an option but a necessity imposed on all human activity.” ~ Jacques Ellul

    “In the midst of increasing mechanization and technological organization, propaganda is simply the means used to prevent these things from being felt as too oppressive and to persuade man to submit with good grace. When man will be fully adapted to this technological society, when he will end by obeying with enthusiasm, convinced of the excellence of what he is forced to do, the constraint of the organization will no longer be felt by him; the truth is, it will no longer be a constraint, and the police will have nothing to do. The civic and technological good will and the enthusiasm for the right social myths — both created by propaganda — will finally have solved the problem of man.” ~ Jacques Ellul

  2. robotman said, on October 24, 2013 at 9:36 am

    Why focus only on human failings? There are so many circumstances in warfare where humans have done what humans do best – act humanely and with kindness and empathy. Missions can be terminated when human judgement prevails. It is an ever-increasing problem to believe that just because humans sometimes act badly, technology can do it better. Machines are certainly much better at arithmetic than me, but are they better at being humane, sympathetic and moral?

    This is a well-argued paper, but one of the problems with modern philosophy is that if you plug in the wrong premise all that follows is total shit.

    • Michael LaBossiere said, on October 24, 2013 at 3:22 pm

      A fair concern: it can be argued that human troops are preferable to robotic troops because humans have the quality of mercy that robots would presumably lack.

      However, a human commander would be just as likely to have the quality of mercy as any other human and could thus exercise this quality in the commands given to the robotic troops. But it is fair to note that human troops do provide another level of possible mercy: they might refuse an order on moral grounds, while robots that lack a moral capacity (or equivalent) would not.

      My main reason for focusing on human failings is that the concerns about killer robots generally also arise for killer humans. For example, the worry that killer robots pose a special threat because they will follow even wicked orders is legitimate, but there is the obvious fact that humans are also often quite willing to follow even wicked orders.

      Now, if most human combatants were ethical and consistently acted on the quality of mercy, then the possibility of killer robots would be a major worry. However, bad folks such as Hitler, Stalin, Pol Pot and so on have had little trouble finding people willing to do terrible things for them. This is not to completely dismiss the quality of mercy, but it is to say that the idea that killer robots will pose a new or special danger because they just do as they are commanded is offset by the fact that humans often do just that.

