Autonomous Weapons II: Autonomy Can Be Good
As the Future of Life Institute’s open letter shows, there are many people concerned about the development of autonomous weapons. This concern is reasonable, if only because any weapon can be misused to advance evil goals. However, a strong case can be made in favor of autonomous weapons.
As the open letter indicated, a stock argument for autonomous weapons is that their deployment could result in decreased human deaths. If, for example, an autonomous ship is destroyed in battle, then no humans will die. It is worth noting that the ship’s AI might qualify as a person, in which case there could be one death. In contrast, the destruction of a crewed warship could result in hundreds of deaths. On utilitarian grounds, the use of autonomous weapons would seem morally fine—at least as long as their deployment reduced the number of deaths and injuries.
The open letter expresses, rightly, concerns that warlords and dictators will use autonomous weapons. But, this might be an improvement over the current situation. These warlords and dictators often conscript their troops and some, infamously, enslave children to serve as their soldiers. While it would be better for a warlord or dictator to have no army, it certainly seems morally preferable for them to use autonomous weapons rather than employing conscripts and children.
It can be replied that the warlords and dictators would just use autonomous weapons in addition to their human forces, thus there would be no saving of lives. This is certainly worth considering. But, if the warlords and dictators would just use humans anyway, the autonomous weapons would not seem to make much of a difference, except in terms of giving them more firepower—something they could also accomplish by using the money spent on autonomous weapons to better train and equip their human troops.
At this point, it is only possible to estimate (guess) the impact of autonomous weapons on the number of human casualties and injuries. However, it seems somewhat more likely they would reduce human casualties, assuming that there are no other major changes in warfare.
A second appealing argument in favor of autonomous weapons is based on the fact that smart weapons are smart. While an autonomous weapon could be designed to be imprecise, the general trend in smart weapons has been towards ever-increasing precision. Consider, for example, aircraft bombs and missiles. In the First World War, these bombs were very primitive and quite inaccurate (they were sometimes thrown from planes by hand). WWII saw some improvements in bomb fusing and bomb sights, and unguided rockets were used. In the wars that followed, bomb and missile technology improved, leading to the smart bombs and missiles of today, which have impressive precision. So, instead of squadrons of bombers dropping tons of dumb bombs on cities, a small number of aircraft can engage in relatively precise strikes against specific targets. While innocents still perish in these attacks, the precision of the weapons has made it possible to greatly reduce the number of needless deaths. Autonomous weapons would presumably be even more precise, thus reducing casualties even more. This seems to be desirable.
In addition to precision, autonomous weapons could (and should) have better target identification capacities than humans. Assuming that recognition software continues to improve, it is easy to imagine automated weapons that can rapidly distinguish between friends, foes, and civilians. This would reduce deaths from friendly fire and unintentional killings of civilians. Naturally, target identification would not be perfect, but autonomous weapons could be far better than humans since they do not suffer from fatigue, emotional factors, and other things that interfere with human judgment. Autonomous weapons would presumably also not get angry or panic, thus making it far more likely they would maintain target discipline (only engaging what they should engage).
To make what should be an obvious argument obvious, if autonomous vehicles and similar technologies are supposed to make the world safer, then it would seem to follow that autonomous weapons could do something similar for warfare.
It can be objected that autonomous weapons could be designed to lack precision and to kill without discrimination. For example, a dictator might have massacrebots to deploy in cases of civil unrest—these robots would just slaughter everyone in the area regardless of age or behavior. Human forces, one might contend, would show at least some discrimination or mercy.
The easy and obvious reply to this is that the problem lies not in the autonomy of the weapons but in the way they are being used. The dictator could achieve the same results (mass death) by deploying a fleet of autonomous cars loaded with demolition explosives, but this would presumably not be a reason to ban autonomous cars or demolition explosives. There is also the fact that dictators, warlords, and terrorists are able to easily find people to carry out their orders, no matter how awful those orders might be. That said, it could still be argued that autonomous weapons would result in more such murders than would the use of human soldiers, police, or terrorists.
A third argument in favor of autonomous weapons rests on the claim advanced in the open letter that autonomous weapons will become cheap to produce—analogous to Kalashnikov rifles. On the downside, as the authors argue, this would result in the proliferation of these weapons. On the plus side, if these highly effective weapons are so cheap to produce, this could enable existing militaries to phase out their incredibly expensive human-operated weapons in favor of cheap autonomous weapons. By replacing humans, these weapons would also create considerable savings in terms of the cost of recruitment, training, food, medical treatment, and retirement. This would allow countries to shift that money to more positive areas, such as education, infrastructure, social programs, health care, and research. So, if autonomous weapons are as cheap and effective as the letter claims, then it would actually seem to be a great idea to use them to replace existing weapons.
A fourth argument in favor of autonomous weapons is that they could be deployed, at low political cost, on peacekeeping operations. Currently, the UN has to send human troops to dangerous areas. These troops are often outnumbered and ill-equipped relative to the challenges they face. However, if autonomous weapons will be as cheap and effective as the letter claims, then they would be ideal for these missions. Assuming they are cheap, the UN could deploy a much larger autonomous weapon force for the same cost as deploying a human force. There would also be far less political cost—people who might balk at sending their fellow citizens to keep the peace in some war zone will probably be fine with sending robots.
An extension of this argument is that autonomous weapons could allow the nations of the world to engage groups like ISIS without having to pay the high political cost of sending in human forces. It seems likely that ISIS will persist for some time and other groups will surely appear that are rather clearly the enemies of the rest of humanity, yet which would be too expensive politically to engage with human forces. The cheap and effective weapons predicted by the letter would seem ideal for this task.
In light of the above arguments, it seems that autonomous weapons should be developed and deployed. However, the concerns of the letter do need to be addressed. As with existing weapons, there should be rules governing the use of autonomous weapons (although much of their use would fall under existing rules and laws of war), and efforts should be made to keep them from proliferating to warlords, terrorists, and dictators. As with most weapons, the problem lies with the misuse of the weapons and not with the weapons themselves.