A Philosopher's Blog

Autonomous Weapons II: Autonomy Can Be Good

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on August 28, 2015

As the Future of Life Institute’s open letter shows, there are many people concerned about the development of autonomous weapons. This concern is reasonable, if only because any weapon can be misused to advance evil goals. However, a strong case can be made in favor of autonomous weapons.

As the open letter indicated, a stock argument for autonomous weapons is that their deployment could result in decreased human deaths. If, for example, an autonomous ship is destroyed in battle, then no humans will die. It is worth noting that the ship’s AI might qualify as a person; if so, there could be one death. In contrast, the destruction of a crewed warship could result in hundreds of deaths. On utilitarian grounds, the use of autonomous weapons would seem morally fine—at least as long as their deployment reduced the number of deaths and injuries.

The open letter expresses, rightly, concerns that warlords and dictators will use autonomous weapons. But, this might be an improvement over the current situation. These warlords and dictators often conscript their troops and some, infamously, enslave children to serve as their soldiers. While it would be better for a warlord or dictator to have no army, it certainly seems morally preferable for them to use autonomous weapons rather than employing conscripts and children.

It can be replied that the warlords and dictators would just use autonomous weapons in addition to their human forces, thus there would be no saving of lives. This is certainly worth considering. But, if the warlords and dictators would just use humans anyway, the autonomous weapons would not seem to make much of a difference, except in terms of giving them more firepower—something they could also accomplish by using the money spent on autonomous weapons to better train and equip their human troops.

At this point, it is only possible to estimate (guess) the impact of autonomous weapons on the number of human casualties and injuries. However, it seems somewhat more likely that they would reduce human casualties, assuming that there are no other major changes in warfare.

A second appealing argument in favor of autonomous weapons is based on the fact that smart weapons are smart. While an autonomous weapon could be designed to be imprecise, the general trend in smart weapons has been towards ever-increasing precision. Consider, for example, aircraft bombs and missiles. In the First World War, these bombs were very primitive and quite inaccurate (they were sometimes thrown from planes by hand). WWII saw some improvements in bomb fusing and bomb sights, and unguided rockets were used. In following wars, bomb and missile technology improved, leading to the smart bombs and missiles of today that have impressive precision. So, instead of squadrons of bombers dropping tons of dumb bombs on cities, a small number of aircraft can engage in relatively precise strikes against specific targets. While innocents still perish in these attacks, the precision of the weapons has made it possible to greatly reduce the number of needless deaths. Autonomous weapons would presumably be even more precise, thus reducing casualties even more. This seems to be desirable.

In addition to precision, autonomous weapons could (and should) have better target identification capacities than humans. Assuming that recognition software continues to be improved, it is easy to imagine automated weapons that can rapidly distinguish between friends, foes, and civilians. This would reduce deaths from friendly fire and unintentional killings of civilians. Naturally, target identification would not be perfect, but autonomous weapons could be far better than humans since they do not suffer from fatigue, emotional factors, and other things that interfere with human judgement. Autonomous weapons would presumably also not get angry or panic, thus making it far more likely they would maintain target discipline (only engaging what they should engage).

To make what should be an obvious argument explicit: if autonomous vehicles and similar technologies are supposed to make the world safer, then it would seem to follow that autonomous weapons could do something similar for warfare.

It can be objected that autonomous weapons could be designed to lack precision and to kill without discrimination. For example, a dictator might have massacrebots to deploy in cases of civil unrest—these robots would just slaughter everyone in the area regardless of age or behavior. Human forces, one might contend, would show at least some discrimination or mercy.

The easy and obvious reply to this is that the problem lies not in the autonomy of the weapons but in the way they are being used. The dictator could achieve the same results (mass death) by deploying a fleet of autonomous cars loaded with demolition explosives, but this would presumably not be a reason to ban autonomous cars or demolition explosives. There is also the fact that dictators, warlords and terrorists are able to easily find people to carry out their orders, no matter how awful those orders might be. That said, it could still be argued that autonomous weapons would result in more such murders than would the use of human forces, police or terrorists.

A third argument in favor of autonomous weapons rests on the claim advanced in the open letter that autonomous weapons will become cheap to produce—analogous to Kalashnikov rifles. On the downside, as the authors argue, this would result in the proliferation of these weapons. On the plus side, if these highly effective weapons are so cheap to produce, this could enable existing militaries to phase out their incredibly expensive human-operated weapons in favor of cheap autonomous weapons. By replacing humans, these weapons would also create considerable savings in terms of the cost of recruitment, training, food, medical treatment, and retirement. This would allow countries to switch that money to more positive areas, such as education, infrastructure, social programs, health care and research. So, if the autonomous weapons are as cheap and effective as the letter claims, then it would actually seem to be a great idea to use them to replace existing weapons.

A fourth argument in favor of autonomous weapons is that they could be deployed, with low political cost, on peacekeeping operations. Currently, the UN has to send human troops to dangerous areas. These troops are often outnumbered and ill-equipped relative to the challenges they are facing. However, if autonomous weapons will be as cheap and effective as the letter claims, then they would be ideal for these missions. Assuming they are cheap, the UN could deploy a much larger autonomous weapon force for the same cost as deploying a human force. There would also be far less political cost—people who might balk at sending their fellow citizens to keep peace in some war zone will probably be fine with sending robots.

An extension of this argument is that autonomous weapons could allow the nations of the world to engage groups like ISIS without having to pay the high political cost of sending in human forces. It seems likely that ISIS will persist for some time and other groups will surely appear that are rather clearly the enemies of the rest of humanity, yet which would be too expensive politically to engage with human forces. The cheap and effective weapons predicted by the letter would seem ideal for this task.

In light of the above arguments, it seems that autonomous weapons should be developed and deployed. However, the concerns of the letter do need to be addressed. As with existing weapons, there should be rules governing the use of autonomous weapons (although much of their use would fall under existing rules and laws of war) and efforts should be made to keep them from proliferating to warlords, terrorists and dictators. As with most weapons, the problem lies with the misuse of the weapons and not with the weapons.



Nuclear Policy

Posted in Politics by Michael LaBossiere on April 12, 2010
WMD world map
Image via Wikipedia

While some folks have expressed fear, anger and dismay towards the new nuclear policy (or at least their straw man versions), I am not worried.

While the policy does mark what appears to be a significant change, it seems to have little practical impact on how we would actually wage war.

The policy is that we will not use nuclear weapons on non-nuclear countries. Unless, of course, they are in violation of the Nuclear Non-Proliferation Treaty (that means, for example, we can still nuke Iran) or they use biological weapons against us. This is, of course, the approach taken by the United States in the post-WWII world. After all, we did not use nuclear weapons in Korea, Vietnam, Afghanistan or Iraq. Even more importantly, all the major potential threats to the United States (Russia, China, North Korea, and Iran) are still legitimate targets for nuclear weapons. The countries that are excluded by this policy are hardly major threats to the United States.

Obviously, this policy does not actually change the nuclear weapons so that they can only be used in such situations. Should the United States face a truly dire situation that could only be resolved by nuclear weapons being employed in a way that violates this policy, then the weapons would certainly be used. While Obama is cast as a weakling socialist, he would not allow the United States to be destroyed just so he could stick with this policy.

Of course, it might be argued that this is a meaningful political change. After all, it seems to have outraged many folks on the right. While much of their alleged outrage is probably mere political posturing, they certainly do seem to think that it is worth attacking. While this does not prove that this is really a meaningful policy change, it does suggest that this might be the case.

Also, it does seem to reflect a change in language and creates the appearance that we are further leashing our nuclear beast. And, as is often said, appearance is (seen as) reality in politics.

As I see it, the change is primarily rhetorical. This is, I think, a smart move. Obama can use the policy to improve how America is seen by the world and score political points without actually reducing America’s security. However, he does run the risk that the Republicans will also use this to score political points, even if they have to attack a straw man version of the policy.


Ah, Republicans.

Posted in Politics by Michael LaBossiere on April 9, 2010
Gingrich and Lott

Image via Wikipedia

Obama recently changed the United States’ nuclear policy and also signed a weapons treaty with Russia.

The gist of the policy change is that the US will not use nuclear weapons against non-nuclear powers (with some exceptions). Interestingly, Newt Gingrich and Sean Hannity claimed that under the new policy, the United States cannot respond with nuclear weapons to a massive biological weapon attack. However, this claim shows that these two men are either ignorant of the real policy or simply lying. This is because the policy makes an explicit statement that the United States retains the option of using nuclear weapons in such cases.

If Newt and Hannity are ignorant, then they were acting irresponsibly. After all, they have an obligation to determine the facts before making such claims. This is true of anyone, but as influential public figures (and being on a news program) they have an even greater obligation to get their facts right before making such claims. Naturally, people can miss facts even when acting in good conscience. However, finding out the facts about this policy is a rather easy matter and hence it is reasonable to expect these men to have taken the minuscule effort it would have taken to learn the truth.

If Newt and Hannity knew the truth, but simply lied in order to take shots at Obama and perhaps scare Americans, then they acted in an immoral manner. This, of course, assumes that lying is wrong. If one takes the view that lying for political gain is acceptable, then their behavior would be just fine. But this would mean that the Democrats would be entitled to operate by the same principle, as would the “liberal” media.

On a related note, I also happened to catch a clip of Sarah Palin criticizing this policy. She used an analogy to kids fighting on a school yard and there being one kid who says he will not hit back if attacked. Once again, Obama is not saying that we will not hit back. To make a more appropriate analogy, it is like kids fighting on the schoolyard and the biggest, toughest kid says that he will not use his baseball bat on kids who don’t have them. But, if someone hurls a rock at him, he will use the bat. Or if some kids have boards they want to make into bats, he can use the bat on them.

While I am all in favor of hitting people back, this means that I would be a rather bad Christian. After all, Jesus says:

You have heard that it was said, ‘Eye for eye, and tooth for tooth.’ But I tell you, Do not resist an evil person. If someone strikes you on the right cheek, turn to him the other also.

Palin and many Republicans like to claim that they are Christians, so it was interesting to hear her blatantly rejecting what Jesus said.

Naturally, it can be argued that the bible is rather inconsistent and that a Christian does not have to follow that “turn the other cheek” thing. After all, the bible is full of passages justifying and allowing killing. Of course, this same sort of “pick and choose” approach should be extended to others as well, on pain of inconsistency. So, for example, folks who want to ignore what the bible allegedly says about same sex marriage should feel as free to ignore that as Palin and other Republicans feel free to ignore other parts of the bible.
