In 2016 the Dallas police used a remotely operated robot to kill a suspect with a bomb. While this marked a new use for robots in the realm of domestic policing, the decision-making process was entirely conventional: humans decided to use the machine, and a human operator controlled it during the attack. As such, a true policebot is still a thing of science fiction. That said, considering policebots provides an interesting way to discuss police profiling in a speculative setting. While it might be objected that the discussion should focus on real police profiling, there are advantages to discussing controversial matters within a speculative context. One important advantage is that such a setting can help dampen emotional responses and enable a more rational discussion. The speculative context makes the discussion less threatening to some who might react with greater hostility to discussions focused on the actual world. Star Trek's treatment of issues of race in the 1960s through science fiction is an excellent example of this sort of approach. Now, to the matter of policebots.
The policebots under consideration are those that would be capable of a high degree of autonomous operation. At the low end of autonomy, they could be deployed to handle traffic laws on their own. On the higher end, they could operate autonomously to conduct arrests of suspects who might resist arrest violently. Near the highest end would be robotic police at least as capable as human beings.
While there are legitimate worries that policebots could be used as unquestioning servants of the state to oppress and control elements of the population (something we will certainly see), there are also good reasons for using suitably advanced policebots. One obvious advantage is that policebots would be more resilient and easier to repair than human officers. Policebots that are not people would also be far more expendable and thus could save human lives by taking on the dangerous tasks of policing (such as engaging armed suspects). Another advantage is that robots will probably not get tired or bored, thus allowing them to patrol around the clock with maximum efficiency. Robots are also unlikely to be subject to the corrupting factors that influence humans or suffer from personal issues. There is also the possibility that policebots could be far more objective than human officers—this is, in fact, the main concern of this essay.
Like a human officer, policebots would need to identify criminal behavior. In some cases this would be fairly easy. For example, an autonomous police drone could easily spot and ticket most traffic violations. In other cases, this would be incredibly complicated. For example, a policebot patrolling a neighborhood would need to discern between children playing cops and robbers and people engaged in actual violence. As another example, a policebot on patrol would need to be able to sort out the difference between a couple having a public argument and an assault in progress.
In addition to sorting out criminal behavior from non-criminal behavior, policebots would also need to decide how to focus their attention. For example, a policebot would need to determine who gets special attention in a neighborhood because they are acting suspicious or seem to be out of place. Assuming that policebots would be programmed, the decision-making process would be explicitly laid out in the code. Such focusing decisions would seem to be, by definition, based on profiling, and this gives rise to important moral concerns.
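To make the point concrete, here is a minimal sketch of what such explicitly coded focusing logic might look like. Everything here is hypothetical: the feature names, weights, and threshold are invented for illustration, and a real system would be vastly more complex. The point is only that, in code, the profiling criteria are laid out in the open and can be inspected.

```python
# Illustrative sketch (not a real system): a policebot's attention-focusing
# logic made explicit in code. All features and weights are hypothetical.

def suspicion_score(observation):
    """Score an observed person using only behavioral cues.

    `observation` maps boolean behavioral features to True/False.
    Demographic attributes are deliberately absent, reflecting
    behavior-only profiling.
    """
    weights = {
        "sticking_to_shadows": 0.4,
        "watching_unoccupied_houses": 0.4,
        "trying_door_handles": 0.6,
    }
    return sum(w for feature, w in weights.items() if observation.get(feature))


def should_focus(observation, threshold=0.5):
    """Focus attention only when behavioral evidence crosses a threshold."""
    return suspicion_score(observation) >= threshold


# A shy nighttime walker triggers one cue but stays below the threshold;
# someone also casing unoccupied houses crosses it.
walker = {"sticking_to_shadows": True}
caser = {"sticking_to_shadows": True, "watching_unoccupied_houses": True}
```

Because the criteria are explicit, anyone auditing the code can see exactly which factors the policebot weighs, which is precisely why the choice of inputs becomes a moral question.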
Profiling that is based on behavior would seem to be generally acceptable, provided that such behavior is clearly linked to criminal activities and not to, for example, ethnicity. It would seem perfectly reasonable to focus attention on a person who makes an effort to stick to the shadows around houses while paying undue attention to houses that seem to be unoccupied at the time. While such a person might be a shy fellow who likes staring at unlit houses as a pastime, there is a reasonable chance he is casing the area for a robbery. As such, the policebot would be warranted in focusing on him.
The most obviously controversial area would be using certain demographic data for profiles. Young men tend to commit more crimes than middle-aged women. On the one hand, this would seem to be relevant data for programming a policebot. On the other hand, it could be argued that this would give the policebot a gender and age bias that would be morally wrong despite being factually accurate. It becomes vastly more controversial when data about such things as ethnicity, economic class and religion are considered. If accurate and objective data links such factors to a person being more likely to engage in crime, then a rather important moral concern arises. Obviously enough, if such data were not accurate, then it should not be included.
Sorting out the accuracy of such data can be problematic, and the appeals are sometimes circular. For example, someone might defend the higher arrest rate of blacks by claiming that blacks commit more crimes than whites. When it is objected that the higher arrest rate could be partially due to bias in policing, the reply is often that blacks commit more crimes and the proof is that blacks are arrested more than whites. That is, the justification runs in a circle.
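The circularity has a mechanical analogue: if patrol attention is allocated in proportion to past arrests, arrest data can diverge even when true offense rates are identical. The following is a deliberately simplified, hypothetical simulation (invented numbers, toy model) showing that feedback loop: both groups offend at the same rate, but the group that starts with more recorded arrests attracts more patrols and so accumulates more arrests, which the arrest data alone would then "confirm."

```python
# Hypothetical toy simulation of the circular justification: two groups
# offend at the SAME true rate, but patrols follow past arrest counts.
import random

random.seed(0)
TRUE_OFFENSE_RATE = 0.05          # identical for both groups
PATROLS_PER_DAY = 100
arrests = {"A": 10, "B": 20}      # group B starts with more recorded arrests

for day in range(365):
    total = arrests["A"] + arrests["B"]
    for group in ("A", "B"):
        # Patrols are allocated by each group's share of past arrests
        # (this is the biased step that closes the loop).
        patrols = round(PATROLS_PER_DAY * arrests[group] / total)
        # An arrest requires both an offense and a patrol present to see it.
        arrests[group] += sum(
            1 for _ in range(patrols) if random.random() < TRUE_OFFENSE_RATE
        )

# After a year, group B has far more arrests than group A despite
# identical offending, so the raw arrest counts "prove" the bias correct.
```

Nothing here shows that any real arrest disparity works this way; it only shows that arrest counts by themselves cannot settle the question, which is why the circular defense fails.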
But suppose that objective and accurate data showed links between the controversial demographic categories and crime. In that case, leaving it out of the programming could make policebots less effective. This could have the consequence of allowing more crimes to occur. This harm would need to be weighed against the harm of having the policebots programmed to profile based on such factors. One area of concern is public perception of the policebots and their use of profiling. This could have negative consequences that could outweigh the harm of having less efficient policebots.
Another area of potential harm is that even if the policebots operated on accurate data, they would still end up arresting people disproportionately, thus potentially causing harm that would exceed the harm done by the loss of effectiveness. This also ties into higher-level moral concerns about the reasons why specific groups might commit more crimes than others, and these reasons often include social injustice and economic inequality. As such, even "properly" programmed policebots could actually be arresting the victims of social and economic crimes. This suggests an interesting idea for a science fiction story: policebots that decide to reduce crime by going after the social and economic causes of crime rather than arresting people to enforce an unjust social order.