A Philosopher's Blog

Voice-Directed Humans

Posted in Technology by Michael LaBossiere on March 6, 2017

In utopian science fiction, robots free humans from the toil and labor of the body so that they can live lives of enlightenment and enjoyment. In dystopian science fiction, robots become the masters or exterminators of humanity. As should be expected, reality is heading towards the usual mean between dystopia and utopia, the realm of middletopia. This is a mix of the awful and the not-so-bad that has characterized most of human history.

In some cases, robots have replaced humans in jobs that are repetitious, unfulfilling and dangerous. This has allowed the displaced humans to move on to other jobs that are repetitious, unfulfilling and dangerous, there to await their next displacement. Robots have also replaced humans in jobs that are more desirable to humans, such as in the fields of law and journalism. This leads to questions about which jobs will be left to humans and which will be taken over by robots (broadly construed).

The intuitive view is that robots will not be able to replace humans in “creative” jobs but that they will be able to replace humans in nearly all physical labor. As such, people tend to think that robots will replace warehouse pickers, construction workers and janitors. Artists, philosophers and teachers are supposed to be safe from the robot revolution. In some cases, the intuitive view has proven correct—robots are routinely used for physical labor such as constructing cars, and no robot Socrates has shown up. However, the intuitive view is also in error in many cases. As noted above, some journalism and legal tasks are already automated. There are also seemingly easy-to-automate tasks, such as cleaning toilets or doing construction, that are very hard for robots but easy for humans.

One example of a task that would seem ideal for automation is warehouse picking, especially of the sort done by Amazon. Amazon and other companies have automated some of the process, making use of robots for various tasks. But, while a robot might bring shelves to human workers, the humans are the ones picking the products for shipping. Since humans tend to have poor memories and get bored with picking, the human pickers have themselves been partially automated—they wear headsets connected to computers that tell them what to do, and they then tell the computers what they have done. For example, a human might be directed to pick five boxes of acne medicine, then five more boxes of acne medicine, then a copy of Fifty Shades of Grey and finally an Android phone. Humans are very good at the actual picking, perhaps due to our hunter-gatherer ancestry.
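To make the workflow concrete, here is a minimal sketch (in Python, with invented task data, names and confirmation protocol; it is not based on any actual warehouse system) of the sort of loop by which a computer might direct a human picker and record confirmations:

    # A hypothetical sketch of a voice-directed picking loop. Everything
    # here is invented for illustration; a real system would use
    # text-to-speech and voice recognition over a headset, not print/input.
    from dataclasses import dataclass

    @dataclass
    class PickTask:
        location: str   # shelf or bin identifier
        item: str       # product description read to the worker
        quantity: int   # units to pick

    def direct_picker(tasks):
        """Announce each task, then wait for the worker's confirmation."""
        for task in tasks:
            print(f"Go to {task.location}. Pick {task.quantity} x {task.item}.")
            confirmed = input("Confirm quantity picked: ")
            if confirmed.strip() != str(task.quantity):
                print("Mismatch; please re-check the bin.")

    direct_picker([
        PickTask("A-12", "acne medicine", 5),
        PickTask("A-12", "acne medicine", 5),
        PickTask("C-07", "Fifty Shades of Grey", 1),
        PickTask("D-03", "Android phone", 1),
    ])

Note that all the memory and bookkeeping lives in the machine; the human contributes only eyes and hands.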

In this sort of voice-directed warehouse, the humans are being controlled by the machines. The machines take care of the higher-level activities of organizing orders and managing the work, while the human brain handles the task of selecting the right items. While selecting seems simple, this is because it is simple for us humans but not for existing robots. We are good at recognizing, grouping and distinguishing things and have the manual dexterity to perform the picking tasks, thanks to our opposable thumbs. Unfortunately for the human worker, these picking tasks are probably not very rewarding, creative or interesting, and this is exactly the sort of drudge job that robots are supposed to free us from.

While voice-directed warehousing is one example of humans being directed by robots, it is easy enough to imagine the same sort of approach being applied to similar sorts of tasks; namely those that require manual dexterity and what might be called “animal skills” such as object recognition. It is also easy to imagine this approach extended far beyond these jobs to cut costs.

The main way that this approach would cut costs would be by allowing employers to buy skilled robots and use them to direct unskilled human labor. For simple jobs, the “robot” could be a simple headset attached to a computer. For more complex jobs, a human might wear a VR-style “robot” helmet, with the machine directing the work via augmented reality.

The humans, as noted above, provide the manual dexterity and all those highly evolved capacities. The robots provide the direction. Since any normal human body would suffice to serve the controlling robot, the value of human labor would be extremely low and wages would, of course, match this value. Workers would be easy to replace—if a worker is fired or quits, then a new worker can simply don the robot controller and set about the task with little training. This would also save on education costs—such a robot-directed laborer would not need an education in job skills (the job skills are provided by the robots), just the basics needed to be directed properly by the robot. This does point towards a dystopia in which human bodies are driven around through the workday by robots, then released and sent home in driverless cars.

The employment of humans in these roles would, of course, only continue for as long as humans are the cheapest form of available labor. If advances allow robots to do these tasks more cheaply, then the humans would be replaced. Alternatively, biological engineering might lead to the production of engineered organics that can replace humans; perhaps a pliable ape-like creature that is just smart enough to be directed by the robots, but not human enough to be considered a slave. This would presumably continue until no jobs remained for humans, other than making profits, of course.

 


Autonomous Weapons II: Autonomy Can Be Good

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on August 28, 2015

As the Future of Life Institute’s open letter shows, there are many people concerned about the development of autonomous weapons. This concern is reasonable, if only because any weapon can be misused to advance evil goals. However, a strong case can be made in favor of autonomous weapons.

As the open letter indicated, a stock argument for autonomous weapons is that their deployment could result in decreased human deaths. If, for example, an autonomous ship is destroyed in battle, then no humans will die. It is worth noting that the ship’s AI might qualify as a person, in which case there would be one death. In contrast, the destruction of a crewed warship could result in hundreds of deaths. On utilitarian grounds, the use of autonomous weapons would seem morally fine—at least as long as their deployment reduced the number of deaths and injuries.

The open letter expresses, rightly, concerns that warlords and dictators will use autonomous weapons. But, this might be an improvement over the current situation. These warlords and dictators often conscript their troops and some, infamously, enslave children to serve as their soldiers. While it would be better for a warlord or dictator to have no army, it certainly seems morally preferable for them to use autonomous weapons rather than employing conscripts and children.

It can be replied that the warlords and dictators would just use autonomous weapons in addition to their human forces, thus there would be no saving of lives. This is certainly worth considering. But, if the warlords and dictators would just use humans anyway, the autonomous weapons would not seem to make much of a difference, except in terms of giving them more firepower—something they could also accomplish by using the money spent on autonomous weapons to better train and equip their human troops.

At this point, it is only possible to estimate (guess) the impact of autonomous weapons on the number of human casualties and injuries. However, it seems somewhat more likely that they would reduce human casualties, assuming that there are no other major changes in warfare.

A second appealing argument in favor of autonomous weapons is based on the fact that smart weapons are smart. While an autonomous weapon could be designed to be imprecise, the general trend in smart weapons has been towards ever-increasing precision. Consider, for example, aircraft bombs and missiles. In the First World War, these bombs were very primitive and quite inaccurate (they were sometimes thrown from planes by hand). WWII saw some improvements in bomb fusing and bombsights, and unguided rockets were used. In the following wars, bomb and missile technology improved, leading to the smart bombs and missiles of today, which have impressive precision. So, instead of squadrons of bombers dropping tons of dumb bombs on cities, a small number of aircraft can engage in relatively precise strikes against specific targets. While innocents still perish in these attacks, the precision of the weapons has made it possible to greatly reduce the number of needless deaths. Autonomous weapons would presumably be even more precise, thus reducing casualties even more. This seems to be desirable.

In addition to precision, autonomous weapons could (and should) have better target identification capacities than humans. Assuming that recognition software continues to be improved, it is easy to imagine automated weapons that can rapidly distinguish between friends, foes, and civilians. This would reduce deaths from friendly fire and unintentional killings of civilians. Naturally, target identification would not be perfect, but autonomous weapons could be far better than humans since they do not suffer from fatigue, emotional factors, and other things that interfere with human judgement. Autonomous weapons would presumably also not get angry or panic, thus making it far more likely they would maintain target discipline (only engaging what they should engage).
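As a purely illustrative sketch (the labels, scores and threshold are all invented, and this describes no real targeting system), such target discipline can be thought of as a conservative decision rule: engage only when the recognizer is highly confident the contact is a legitimate foe, and hold fire otherwise:

    # A purely hypothetical sketch of a conservative engagement rule.
    # The labels, scores and threshold are invented for illustration;
    # this does not describe any real weapon system.
    def should_engage(scores, threshold=0.99):
        """Engage only if 'foe' is the top label and confidence is high."""
        label = max(scores, key=scores.get)
        return label == "foe" and scores[label] >= threshold

    # An ambiguous contact is not engaged; a near-certain foe is.
    print(should_engage({"foe": 0.60, "civilian": 0.35, "friend": 0.05}))    # False
    print(should_engage({"foe": 0.995, "civilian": 0.004, "friend": 0.001}))  # True

The point of the threshold is exactly the target discipline described above: unlike a frightened or angry human, the rule defaults to not firing.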

To make what should be an obvious argument obvious: if autonomous vehicles and similar technologies are supposed to make the world safer, then it would seem to follow that autonomous weapons could do something similar for warfare.

It can be objected that autonomous weapons could be designed to lack precision and to kill without discrimination. For example, a dictator might have massacrebots to deploy in cases of civil unrest—these robots would just slaughter everyone in the area regardless of age or behavior. Human forces, one might contend, would show at least some discrimination or mercy.

The easy and obvious reply to this is that the problem lies not in the autonomy of the weapons but in the way they are being used. The dictator could achieve the same results (mass death) by deploying a fleet of autonomous cars loaded with demolition explosives, but this would presumably not be a reason to ban autonomous cars or demolition explosives. There is also the fact that dictators, warlords and terrorists are easily able to find people to carry out their orders, no matter how awful those orders might be. That said, it could still be argued that autonomous weapons would result in more such murders than would the use of human forces, police or terrorists.

A third argument in favor of autonomous weapons rests on the claim advanced in the open letter that autonomous weapons will become cheap to produce—analogous to Kalashnikov rifles. On the downside, as the authors argue, this would result in the proliferation of these weapons. On the plus side, if these highly effective weapons are so cheap to produce, this could enable existing militaries to phase out their incredibly expensive human-operated weapons in favor of cheap autonomous weapons. By replacing humans, these weapons would also create considerable savings in terms of the costs of recruitment, training, food, medical treatment and retirement. This would allow countries to shift that money to more positive areas, such as education, infrastructure, social programs, health care and research. So, if autonomous weapons are as cheap and effective as the letter claims, then it would actually seem to be a great idea to use them to replace existing weapons.

A fourth argument in favor of autonomous weapons is that they could be deployed, with low political cost, on peacekeeping operations. Currently, the UN has to send human troops to dangerous areas. These troops are often outnumbered and ill-equipped relative to the challenges they are facing. However, if autonomous weapons will be as cheap and effective as the letter claims, then they would be ideal for these missions. Assuming they are cheap, the UN could deploy a much larger autonomous weapon force for the same cost as deploying a human force. There would also be far less political cost—people who might balk at sending their fellow citizens to keep peace in some war zone will probably be fine with sending robots.

An extension of this argument is that autonomous weapons could allow the nations of the world to engage groups like ISIS without having to pay the high political cost of sending in human forces. It seems likely that ISIS will persist for some time and other groups will surely appear that are rather clearly the enemies of the rest of humanity, yet which would be too expensive politically to engage with human forces. The cheap and effective weapons predicted by the letter would seem ideal for this task.

In light of the above arguments, it seems that autonomous weapons should be developed and deployed. However, the concerns of the letter do need to be addressed. As with existing weapons, there should be rules governing the use of autonomous weapons (although much of their use would fall under existing rules and laws of war) and efforts should be made to keep them from proliferating to warlords, terrorists and dictators. As with most weapons, the problem lies with the misuse of the weapons and not with the weapons themselves.

 


Autonomous Weapons I: The Letter

Posted in Ethics, Philosophy, Politics, Technology by Michael LaBossiere on August 26, 2015

On July 28, 2015 the Future of Life Institute released an open letter expressing opposition to the development of autonomous weapons. Although the name of the organization sounds like one I would use as a cover for an evil, world-ending cult in a Call of Cthulhu campaign, I am willing to accept that this group is sincere in its professed values. While I do respect their position on the issue, I believe that they are mistaken. I will assess and reply to the arguments in the letter.

As the letter notes, an autonomous weapon is capable of selecting and engaging targets without human intervention. An excellent science fiction example of such a weapon is the claw of Philip K. Dick’s classic “Second Variety” (a must-read for anyone interested in the robopocalypse). A real-world example of such a weapon, albeit a stupid one, is the land mine—it is placed and then engages automatically.

The first main argument presented in the letter is essentially a proliferation argument. If a major power pushes AI development, the other powers will also do so, creating an arms race. This will lead to the development of cheap, easy to mass-produce AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators. These evil people will use these weapons for assassinations, destabilization, oppression and ethnic cleansing. That is, for what these evil people already use existing weapons to do quite effectively. This raises the obvious concern about whether or not autonomous weapons would actually have a significant impact in these areas.

The authors of the letter do have a reasonable point: as science fiction stories have long pointed out, killer robots tend to simply obey orders and they can (at least in fiction) be extremely effective. However, history has shown that terrorists, warlords and dictators rarely have trouble finding humans who are willing to commit acts of incredible evil. Humans are also quite good at this sort of thing, and although killer robots are awesomely competent in fiction, it remains to be seen whether they will be better than humans in the real world, especially the cheap, mass-produced weapons in question.

That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future—although small groups and individuals can already do considerable damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars—so even if robotic weapons are not manufactured, enterprising terrorists and warlords will build their own. Think, for example, of a self-driving car equipped with machine guns or just loaded with explosives.

A reasonable reply is that warlords, terrorists and dictators would have a harder time of it without cheap, off-the-shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.

The authors then claim that just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most researchers in AI do not want to design AI weapons. They do argue that the creation of AI weapons could create a backlash against AI in general, which has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).

The authors do have a reasonable point here—members of the public do often panic over technology in ways that can impede the public good. One example is the backlash against vaccines from the anti-vaccination movement. Another is the panic over GMOs, which is having some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so an AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology seems to rarely create backlash from the public—people do not refuse to fly in planes because the military uses them to kill people. Most people also love GPS, which was developed for military use.

The authors note that chemists, biologists and physicists have supported bans on weapons in their fields. This might be aimed at attempting to establish an analogy between AI researchers and other researchers, perhaps to try to show these researchers that it is a common practice to be in favor of bans against weapons in one’s area of study. Or, as some have suggested, the letter might be making an analogy between autonomous weapons and weapons of mass destruction (biological, chemical and nuclear weapons).

One clear problem with the analogy is that biological, chemical and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without any discrimination. Nerve gas, for example, injures or kills everyone exposed to it. A nuclear bomb likewise kills or wounds everyone in the area of effect. While AI weapons could carry nuclear, biological or chemical payloads, and they could be set to simply kill everyone, this indiscriminate, WMD-like nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be very precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical and nuclear weapons. Terrorists, warlords and dictators often have no problem using WMDs already, and AI weapons would not seem to significantly increase their capabilities.

In my next essay on this subject, I will argue in favor of AI weapons.

 


Sexbots, Killbots & Virtual Dogs

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on March 30, 2014

My most recent book, Sexbots, Killbots & Virtual Dogs, is now available as a Kindle book on Amazon. It will soon be available as a print book as well (the Kindle version is free with the print book on Amazon).

There is also a free promo for the Kindle book from April 1, 2014 to April 5, 2014. At free, it is worth every penny!

Book Description

While the story of Cain and Abel does not specify the murder weapon used by Cain, traditional illustrations often show Cain wielding the jawbone of an animal (perhaps an ass—which is what Samson is said to have employed as a weapon). Assuming the traditional illustrations and the story are right, this would be one of the first uses of technology by a human—and, like our subsequent use of technology, one of considerable ethical significance.

Whether the tale of Cain is true or not, humans have been employing technology since our beginning. As such, technology is nothing new. However, we are now at a point at which technology is advancing and changing faster than ever before—and this shows no signs of changing. Since technology so often has moral implications, it seems worthwhile to consider the ethics of new and possible future technology. This short book provides essays aimed at doing just that on subjects ranging from sexbots to virtual dogs to asteroid mining.

While written by a professional philosopher, these essays are aimed at a general audience and do not assume that the reader is an expert in philosophy or technology.

The essays are also fairly short—they are designed to be the sort of things you can read at your convenience, perhaps while commuting to work or waiting in the checkout line.