Peaceful protest is an integral part of the American political system. Sadly, murder is also an integral part of our society. The two collided in Dallas, Texas: after a peaceful protest, five police officers were murdered. While some might see it as ironic that the police rushed to protect the people protesting police violence, this actually serves as a reminder of how the police are supposed to function in a democratic society. This stands in stark contrast with the unnecessary deaths inflicted on citizens by bad officers—deaths that have given rise to many protests.
While violence and protests are both subjects worthy of in-depth discussion, my focus will be on the ethical questions raised by the use of a robot to deliver the explosive device used to kill one of the attackers. While this matter has been addressed by philosophers far more famous than I am, I thought it worthwhile to say a bit about it.
While the police robot is called a robot, it is more accurate to describe it as a remotely operated vehicle. After all, the term “robot” is often taken to imply autonomy on the part of the machine. The police robot is remote controlled, like a sophisticated version of a remote-controlled toy. In fact, a similar result could have been achieved by putting an explosive charge on a sufficiently robust RC toy and driving it within range of the target.
Since a human operator directly controls the machine, it would seem that the ethics of the matter are the same as if more conventional machines of death (such as rifles or handguns) had been used to kill the shooter. On the face of it, the only difference lies in how the situation is perceived: a killer robot delivering a bomb sounds more ominous and controversial than an officer using a firearm. The use of remote-controlled vehicles to kill targets is obviously nothing new; the basic technology has been around since at least WWII, and the United States has killed many people with its drones.
If this had been the first case of an autonomous police robot sent to kill (like an ED-209), the issue would be rather different. However, it is reasonable enough to regard this as the same old ethics of killing, only with a slight twist in the delivery system. That said, it can be argued that the use of a remote-controlled machine does add a new moral twist.
Keith Abney has raised a very reasonable point: if a robot could be sent to kill a target, it could also be sent to subdue the target with non-lethal force. In the case of human officers, the usual moral justification for lethal force is that it is the best option for protecting themselves and others from a threat. If the threat presented by a suspect can be effectively addressed in a non-lethal manner, then that is the option that should be used. The moral foundation for this is set by the role of police in society: they are to protect the public and are expected to make every legitimate effort to deliver suspects for trial in the criminal justice system. They are not supposed to function as soldiers engaging an enemy to be defeated; they are supposed to function as agents of the criminal justice system. There are, of course, cases in which suspects cannot be safely captured; these are the situations in which the use of deadly force is justified, usually by an imminent threat to officers or citizens. A robot (or, more accurately, a remote-controlled machine) can radically change that equation.
While a police robot is an expensive piece of hardware, it is not a human being (or even an artificial being). As such, it has only the moral status of property. In contrast, even the worst human criminal is a human being and thus has a moral status above that of a mere object. So if a robot is sent to engage a human suspect, then in many circumstances there would be no moral justification for using lethal force: the officer operating the machine is in no danger as she steers the robot toward the target. This should change the ethics of the use of force to match other cases in which a suspect needs to be subdued but presents no danger to the officer attempting the arrest. In such cases, the machine should be outfitted with less-than-lethal options.

While television and movies make safely disabling a human seem easy enough, it is actually rather challenging. For example, a rifle butt to the head is often portrayed as safely knocking a person out, when in reality it would cause serious injury or even death. Tasers, gas weapons, and rubber bullets can also cause injury or death. However, less-than-lethal options are less likely to kill a suspect and thus allow her to be captured for trial, which is the point of law enforcement. Robots could, as they often are in science fiction, be designed to withstand gunfire and physically restrain a suspect. While this would likely cause injury (such as broken bones) and could even kill, it would be far less likely to kill than a bomb. An excellent example of a situation in which a robot would be ideal is capturing an armed suspect barricaded in his house or apartment.
It must be noted that there will be cases in which the use of lethal force via a robot is justified. These would include cases in which the suspect presents a clear and present danger to officers or civilians and the best chance of ending the threat is the use of such force. An example of this might be a hostage situation in which the hostage taker is likely to kill hostages while the robot is trying to subdue him with less-than-lethal force.
While police robots have long been the stuff of science fiction, they present a potential technological solution to the moral and practical problem of keeping both officers and suspects alive. While an officer might be legitimately reluctant to stake her life on less-than-lethal options when directly engaged with a suspect, an officer operating a robot faces no such risk. As such, if the deployment of less-than-lethal options via a robot would not put the public at unnecessary risk, then it would be morally right to use such means.