A Philosopher's Blog

Automated Trucking

Posted in Business, Ethics, Philosophy, Science, Technology by Michael LaBossiere on September 23, 2016

Having grown up in the golden age of the CB radio, I have many fond memories of movies about truck driving heroes played by the likes of Kurt Russell and Clint Eastwood. While such movies seem to have been a passing phase, real truck drivers are heroes of the American economy. In addition to moving stuff across this great nation, they earn solid wages and thus also contribute as taxpayers and consumers.

While most of the media attention is on self-driving cars, there are also plans underway to develop self-driving trucks. The steps towards automation will initially be a boon to truck drivers as these technological advances manifest as safety features. This progress will most likely lead to a truck with a human riding in the cab as a backup (more for the psychological need of the public than any actual safety increase) and eventually to a fully automated truck.

Looked at in terms of the consequences of full automation, there will be many positive impacts. While the automated trucks will probably be more expensive than manned vehicles initially, not needing to pay drivers will result in considerable savings for the companies. Some of this might even be passed on to consumers, resulting in a tiny decrease in some prices. There is also the fact that automated trucks, unlike human drivers, would not get tired, bored or distracted. While there will still be accidents involving these trucks, it would be reasonable to expect a very significant decrease. Such trucks would also be able to operate around the clock, stopping only to load/unload cargo, to refuel and for maintenance. This could increase the speed of deliveries. One can even imagine an automated truck with its own drones that fly away from the truck as it cruises the highway, making deliveries for companies like Amazon. While these will be good things, there will also be negative consequences.
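To put rough numbers on the savings and the around-the-clock advantage, here is a back-of-envelope sketch in Python. The per-mile cost figures are illustrative assumptions loosely in line with industry estimates from around the time of this post, not data from the post itself; the 11-hour figure reflects the federal hours-of-service limit on daily driving time.

```python
# Back-of-envelope sketch of the economics of full automation.
# The cost figures are assumptions for illustration, not data from the post.

total_cost_per_mile = 1.70    # assumed total marginal cost of running a truck ($/mile)
driver_cost_per_mile = 0.70   # assumed driver wages and benefits ($/mile)

# Removing the driver removes that share of the per-mile operating cost.
labor_share = driver_cost_per_mile / total_cost_per_mile
print(f"Assumed labor share of operating cost: {labor_share:.0%}")

# Around-the-clock operation: hours-of-service rules cap a human driver at
# about 11 hours of driving per day, while an automated truck could drive
# most of the day, stopping only to load/unload, refuel and for maintenance.
human_hours_per_day = 11
automated_hours_per_day = 22  # assumed, leaving time for fuel and upkeep
print(f"Rough increase in daily range: {automated_hours_per_day / human_hours_per_day:.1f}x")
```

On those assumptions the labor share is roughly 40 percent of the per-mile cost and the daily range roughly doubles, which is the sort of arithmetic behind the claim of considerable savings and faster deliveries.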

The most obvious negative consequence of full automation is the elimination of trucker jobs. Currently, there are about 3.5 million drivers in the United States. There are also about 8.7 million other people employed in the trucking industry who do not drive. One must also remember all the people indirectly associated with trucking, ranging from people cooking meals for truckers to folks manufacturing or selling products for truckers. Finally, there are also the other economic impacts from the loss of these jobs, ranging from the loss of tax revenues to lost business. After all, truckers do not just buy truck related goods and services.

While the loss of jobs will be a negative impact, it should be noted that the transition from manned trucks to robot rigs will not occur overnight. There will be a slow transition as the technology is adopted and it is certain that there will be several years in which human truckers and robotruckers share the roads. This can allow for a planned transition that will mitigate the economic shock. That said, there will presumably come a day when drivers are given their pink slips in large numbers and lose their jobs to the rolling robots. Since economic transitions resulting from technological changes are nothing new, it could be hoped that this transition would be managed in a way that mitigated the harm to those impacted.

It is also worth considering that the switch to automated trucking will, as technological changes almost always do, create new jobs and modify old ones. The trucks will still need to be manufactured, managed and maintained. As such, new economic opportunities will be created. That said, it is easy to imagine these jobs becoming automated as well: fleets of robotic trucks cruising America, loaded, unloaded, managed and maintained by robots. To close, I will engage in a bit of sci-fi style speculation.

Oversimplifying things, the automation of jobs could lead to a utopian future in which humans are finally freed from the jobs that are fraught with danger and drudgery. The massive automated productivity could mean plenty for all; thus bringing about the bright future of optimistic fiction. That said, this path could also lead into a dystopia: a world in which everything is done for humans and they settle into a vacuous idleness they attempt to fill with empty calories and frivolous amusements.

There are, of course, many dystopian paths leading away from automation. Laying aside the usual machine takeover in which Google kills us all, it is easy to imagine a new “robo-plantation” style economy in which a few elite owners control their robot slaves, while the masses have little or no employment. A rather more radical thought is to imagine a world in which humans are almost completely replaced—the automated economy hums along, generating numbers that are duly noted by the money machines and the few remaining money masters. The ultimate end might be a single computer that contains a virtual economy; clicking away to itself in electronic joy over its amassing of digital dollars while around it the ruins of human civilization decay and the world awaits the evolution of the next intelligent species to start the game anew.

 


Policebots

Posted in Ethics, Philosophy by Michael LaBossiere on July 11, 2016

Peaceful protest is an integral part of the American political system. Sadly, murder is also an integral part of our society. The two collided in Dallas, Texas: after a peaceful protest, five police officers were murdered. While some might see it as ironic that the police rushed to protect the people protesting police violence, this actually serves as a reminder of how the police are supposed to function in a democratic society. This stands in stark contrast with the unnecessary deaths inflicted on citizens by bad officers—deaths that have given rise to many protests.

While violence and protests are both subjects worthy of in-depth discussion, my focus will be on the ethical questions raised by the use of a robot to deliver the explosive device that was used to kill one of the attackers. While this matter has been addressed by philosophers more famous than I, I thought it worthwhile to say a bit about it.

While the police robot is called a robot, it is more accurate to describe it as a remotely operated vehicle. After all, the term “robot” is often taken as implying autonomy on the part of the machine. The police robot is remote controlled, like a sophisticated version of a remote controlled toy. In fact, a similar result could have been obtained by putting an explosive charge on a robust enough RC toy and rolling it within range of the target.

Since there is a human operator directly controlling the machine, it would seem that the ethics of the matter are the same as if more conventional machines of death (such as rifles or handguns) had been used to kill the shooter. On the face of it, the only difference is in how the situation is seen: a killer robot delivering a bomb sounds more ominous and controversial than an officer using a firearm. The use of remote controlled vehicles to kill targets is obviously nothing new—the basic technology has been around since at least WWII and the United States has killed many people with its drones.

If this had been the first case of an autonomous police robot sent to kill (like an ED-209), then the issue would be rather different. However, it is reasonable enough to regard this as the same old ethics of killing, only with a slight twist in regards to the delivery system. That said, it can be argued that the use of a remote controlled machine does add a new moral twist.

Keith Abney has raised a very reasonable point: if a robot could be sent to kill a target, it could also be sent to use non-lethal force to subdue the target. In the case of human officers, the usual moral justification of lethal force is that it is the best option for protecting themselves and others from a threat. If the threat presented by a suspect can be effectively addressed in a non-lethal manner, then that is the option that should be used. The moral foundation for this is set by the role of police in society: they are to protect the public and are expected to make every legitimate effort to deliver suspects for trial in the criminal justice system. They are not supposed to function as soldiers engaging an enemy that is to be defeated—they are supposed to function as agents of the criminal justice system. There are, of course, cases in which suspects cannot be safely captured—these are situations in which the use of deadly force is justified, usually by imminent threat to the officer or citizens. A robot (or, more accurately, a remote controlled machine) can radically change the equation.

While a police robot is an expensive piece of hardware, it is not a human being (or even an artificial being). As such, it only has the moral status of property. In contrast, even the worst human criminal is a human being and thus has a moral status above that of a mere object. Thus, if a robot is sent to engage a human suspect, then in many circumstances there would be no moral justification for using lethal force. After all, the officer operating the machine is in no danger as she steers the robot towards the target. This should change the ethics of the use of force to match other cases in which a suspect needs to be subdued, but presents no danger to the officer attempting arrest. In such cases, the machine should be outfitted with less-than-lethal options. While television and movies make safely disabling a human seem easy enough, it is actually rather challenging. For example, a rifle butt to the head is often portrayed as safely knocking a person out, when in reality it would cause serious injury or even death. Tasers, gas weapons and rubber bullets can also cause injury or death. However, the less-than-lethal options are less likely to kill a suspect and thus allow her to be captured for trial—which is the point of law enforcement. Robots could, as they often are in science fiction, be designed to withstand gunfire and physically grab a suspect. While this is likely to result in injury (such as broken bones) and could kill, it would be far less likely to kill than a bomb. An armed suspect barricaded in his house or apartment is an excellent example of a situation in which such a robot would be ideal.

It must be noted that there will be cases in which the use of lethal force via a robot is justified. These would include cases in which the suspect presents a clear and present danger to officers or civilians and the best chance of ending the threat is the use of such force. An example of this might be a hostage situation in which the hostage taker is likely to kill hostages while the robot is trying to subdue him with less-than-lethal force.

While police robots have long been the stuff of science fiction, they do present a potential technological solution to the moral and practical problem of keeping officers and suspects alive. While an officer might be legitimately reluctant to stake her life on less-than-lethal options when directly engaged with a suspect, an officer operating a robot faces no such risk. As such, if the deployment of less-than-lethal options via a robot would not put the public at unnecessary risk, then it would be morally right to use such means.


Ex Machina & Other Minds I: Setup

Posted in Epistemology, Metaphysics, Philosophy, Technology by Michael LaBossiere on September 7, 2015

The movie Ex Machina is what I like to call “philosophy with a budget.” While the typical philosophy professor has to present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic virtual life. This then allows philosophy professors to jealously reference such films and show clips of them in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some minor spoilers in what follows.

While the Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a much more limited set of problems, all connected to the mind. Since the film is primarily about AI, this is not surprising. The gist of the movie is that Nathan has created an AI named Ava and he wants an employee named Caleb to put her to the test.

The movie explicitly presents the test proposed by Alan Turing. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, there is a twist on the test: Caleb knows that Ava is a machine and will be interacting with her in person.
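As a rough illustration of that original protocol, here is a minimal sketch in Python: the judge converses over text with two hidden respondents and must guess which one is the machine. The reply and judging functions are placeholders standing in for real participants, not an implementation of any particular chatbot.

```python
import random

def turing_test(questions, human_reply, machine_reply, judge_guess):
    """Minimal sketch of the original imitation-game protocol.

    The judge interacts only through text, with the respondents hidden
    behind the labels 'A' and 'B', and must guess which label belongs
    to the machine. All four arguments are placeholder callables.
    """
    # Randomly assign labels so the judge cannot know which is which.
    respondents = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        respondents = {"A": machine_reply, "B": human_reply}

    # The judge sees only the labeled transcript, never the respondents.
    transcript = [(label, question, reply(question))
                  for question in questions
                  for label, reply in respondents.items()]

    guess = judge_guess(transcript)  # the judge returns "A" or "B"
    machine_label = "A" if respondents["A"] is machine_reply else "B"
    return guess != machine_label    # True: the machine went undetected
```

The random assignment of labels is what keeps the judge ignorant of which respondent is the machine, and that ignorance is exactly what the film's setup removes.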

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

As a test for intelligence, artificial or otherwise, this seems to be quite reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language that we would not recognize as language and there is the theoretical concern that there could be intelligence that does not use language. Fortunately, Ava uses English and these problems are bypassed.

Ava easily passes the Cartesian test: she is able to reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than just the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on Earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (Homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine if she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people in order to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. This is intended to provide a way to determine if a person is a psychopath or not. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether or not Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).
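For readers unfamiliar with how such an instrument works, here is a minimal sketch of checklist-style scoring. The 20-item, 0-2 rating format and the commonly cited cutoff of 30 follow standard descriptions of Hare's checklist, but the item names in the example are placeholders rather than the actual items.

```python
def checklist_score(ratings, cutoff=30):
    """Sketch of checklist-style scoring in the manner of Hare's PCL-R.

    `ratings` maps each item to 0 (absent), 1 (partial) or 2 (clearly
    present). The commonly described format uses 20 items for a maximum
    of 40, with a score of 30 or more often cited as the cutoff.
    """
    if any(value not in (0, 1, 2) for value in ratings.values()):
        raise ValueError("each item must be rated 0, 1 or 2")
    total = sum(ratings.values())
    return total, total >= cutoff

# Placeholder items for illustration only, not the real checklist.
example = {"glibness": 2, "lack_of_empathy": 2, "manipulativeness": 1}
print(checklist_score(example))  # (5, False) -- far too few items rated to mean anything
```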

In the next essay, I will consider the matter of testing in more depth.

 


HitchBOT & Kant

Posted in Ethics, Philosophy by Michael LaBossiere on August 5, 2015

Dr. Frauke Zeller and Dr. David Smith created HitchBOT (essentially a solar powered iPhone in an anthropomorphic shell) and sent him on a trip to explore the USA on July 17, 2015. HitchBOT had previously successfully journeyed across Canada and Germany. The experiment was aimed at seeing how humans would interact with the “robot.” He lasted about two weeks in the United States, meeting his end in Philadelphia. The exact details of his destruction (and the theft of the iPhone) are not currently known, although the last people known to be with HitchBOT posted what seems to be faked “surveillance camera” video of HitchBOT’s demise. This serves to support the plausible claim that the internet eventually ruins everything it touches.

The experiment was certainly both innovative and interesting. It also generated questions about what the fate of HitchBOT says about us. We do, of course, already know a great deal about ourselves: we do awful things to each other, so it is hardly surprising that someone would do something awful to the HitchBOT. People are killed every day in the United States, vandalism occurs regularly and the theft of technology is routine—thus it is no surprise that HitchBOT came to a bad end. In some ways, it was impressive that he made it as far as he did.

While HitchBOT seems to have met his untimely doom at the hands of someone awful, what is most interesting is how well HitchBOT was treated. After all, he was essentially an iPhone in a shell that was being transported about by random people.

One reason that HitchBOT was well treated and transported about by people is no doubt because it fits into the travelling gnome tradition. For those not familiar with the travelling gnome prank, it involves “stealing” a lawn gnome and then sending the owner photographs of the gnome from various places. The gnome is then returned (at least by nice pranksters). HitchBOT is a rather more elaborate version of the traveling gnome and, obviously, differs from the classic travelling gnome in that the owners sent HitchBOT on his fatal adventure. People, perhaps, responded negatively to the destruction of HitchBOT because it broke the rules of the travelling gnome game—the gnome is supposed to roam and make its way safely back home.

A second reason for HitchBOT’s positive adventures (and perhaps also his negative adventure) is that he became a minor internet celebrity. Since celebrity status, like moth dust, can rub off onto those who have close contact, it is not surprising that people wanted to spend time with HitchBOT and post photos and videos of their adventures with the iPhone in a trash can. On the dark side, destroying something like HitchBOT is also a way to gain some fame.

A third reason, which is probably more debatable, is that HitchBOT was given a human shape, a cute name and a non-threatening appearance and these tend to incline people to react positively. Natural selection has probably favored humans that are generally friendly to other humans and this presumably extends to things that resemble humans. There is probably also some hardwiring for liking cute things, which causes humans to generally like things like young creatures and cute stuffed animals. HitchBOT was also given a social media personality by those conducting the experiment which probably influenced people into feeling that it had a personality of its own—even though they knew better.

Seeing a busted up HitchBOT, which has an anthropomorphic form, presumably triggers a response similar to (but rather weaker than) what a sane human would have to seeing the busted up remains of a fellow human.

While some people were rather upset by the destruction of HitchBOT, others have claimed that it was literally “a pile of trash that got what it deserved.” A more moderate position is that while it was unfortunate that HitchBOT was busted up, it is unreasonable to be overly concerned by this act of vandalism because HitchBOT was just an iPhone in a fairly cheap shell. As such, while it is fine to condemn the destruction as vandalism, theft and the wrecking of a fun experiment, it is unreasonable to see the matter as actually being important. After all, there are far more horrible things to be concerned about, such as the usual murdering of actual humans.

My view is that the moderate position is quite reasonable: it is too bad HitchBOT was vandalized, but it was just an iPhone in a shell. As such, its destruction is not a matter of great concern. That said, the way HitchBOT was treated is still morally significant. In support of this, I turn to what has become my stock argument in regards to the ethics of treating entities that lack moral status. This argument is stolen from Kant and is a modification of his argument regarding the treatment of animals.

Kant argues that we should treat animals well despite his view that animals have the same moral status as objects. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to shoot the dog?

Kant’s answer seems to be rather consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Interestingly enough, Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being rather gentle with a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are essentially practice for us: how we treat them is training for how we will treat human beings.

Being an iPhone in a cheap shell, HitchBOT obviously had the moral status of an object and not that of a person. He did not feel or think and the positive feelings people had towards it were due to its appearance (cute and vaguely human) and the way those running the experiment served as its personality via social media. It was, in many ways, a virtual person—or at least the manufactured illusion of a person.

Given the manufactured pseudo-personhood of HitchBOT, it could be taken as being comparable to an animal, at least in Kant’s view. After all, animals are mere objects and have no moral status of their own. Likewise for HitchBOT. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Thus, a key matter to settle is whether HitchBOT was more like an animal or more like a stone—at least in regards to the matter at hand.

If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person engaging in said behavior. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog. This should also extend to HitchBOT. For example, if engaging in certain activities with HitchBOT would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with HitchBOT would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior.

While the result of interactions with the HitchBOT would need to be properly studied, it makes intuitive sense that being “nice” to the HitchBOT would help incline people to be somewhat nicer to others (much along the lines of how children are encouraged to play nicely with their stuffed animals). It also makes intuitive sense that being “mean” to HitchBOT would incline people to be somewhat less nice to others. Naturally, people would also tend to respond to HitchBOT based on whether they already tend to be nice or not. As such, it is actually reasonable to praise nice behavior towards HitchBOT and condemn bad behavior—after all, it was a surrogate for a person. But, obviously, not a person.

 


 

Avoiding the AI Apocalypse #3: Don’t Train Your Replacement

Posted in Ethics, Metaphysics, Philosophy, Technology by Michael LaBossiere on July 22, 2015

Donald gazed down upon the gleaming city of Newer York and the gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.

Donald’s thoughts drifted to the flesh-time, when his body had been a skin-bag holding an array of organs that were always but one accident or mischance away from failure. Gazing upon his polymer outer shell and checking a report on his internal systems, he reflected on how much better things were now. Then, he faced the constant risk of death. Now he could expect to exist until the universe grew cold. Or hot. Or exploded. Or whatever it is that universes do when they die.

But he could not help but be haunted by a class he had taken long ago. The professor had talked about the ship of Theseus and identity. How much of the original could be replaced before it lost identity and ceased to be? Fortunately, his mood regulation systems caught the distress and promptly corrected the problem, encrypting that file and flagging it as forgotten.

Donald returned to gazing upon the magnificent city, pleased that the flesh-time had ended during his lifetime. He did not even wonder where Donald’s bones were, that thought having been flagged as distressing long ago.

While the classic AI apocalypse ends humanity with a bang, the end might be a quiet thing—gradual replacement rather than rapid and noisy extermination. For some, this sort of quiet end could be worse: no epic battle in which humanity goes out guns ablaze and head held high in defiance. Rather, humanity would simply fade away, rather like a superfluous worker or obsolete piece of office equipment.

There are various ways such scenarios could take place. One, which occasionally appears in science fiction, is that humans decline because the creation of a robot-dependent society saps them of what it takes to remain the top species. This, interestingly enough, is similar to what some conservatives claim about government-dependence, namely that it will weaken people. Of course, the conservative claim is that such dependence will result in more breeding, rather than less—in the science fiction stories human reproduction typically slows and eventually stops. The human race quietly ends, leaving behind the machines—which might or might not create their own society.

Alternatively, the humans become so dependent on their robots that when the robots fail, they can no longer take care of themselves and thus perish. Some tales do have happier endings: a few humans survive the collapse and the human race gets another chance.

There are various ways to avoid such quiet apocalypses. One is to resist creating such a dependent society. Another option is to have a safety system against a collapse. This might involve maintaining skills that would be needed in the event of a collapse or, perhaps, having some human volunteers who live outside of the main technological society and who will be ready to keep humanity going. These certainly do provide a foundation for some potentially interesting science fiction stories.

Another, perhaps more interesting and insidious, scenario is that humans replace themselves with machines. While it has long been a stock plot device in science-fiction, there are people in the actual world who are eagerly awaiting (or even trying to bring about) the merging of humans and machines.

While the technology of today is relatively limited, the foundations of the future are being laid down. For example, prosthetic replacements are fairly crude, but it is merely a matter of time before they are as good as or better than the organic originals. As another example, work is being done on augmenting organic brains with implants for memory and skills. While these are unimpressive now, there is the promise of things to come. These might include such things as storing memories in implanted “drives” and loading skills or personalities into one’s brain.

These and other technologies point clearly towards the cyberpunk future: full replacements of organic bodies with machine bodies. Someday people with suitable insurance or funds could have their brains (and perhaps some of their glands) placed within a replacement body, one that is far more resistant to damage and the ravages of time.

The next logical step is, obviously enough, the replacement of the mortal and vulnerable brain with something better. This replacement will no doubt be a ship of Theseus scenario: as parts of the original organic brain begin to weaken and fail, they will be gradually replaced with technology. For example, parts damaged by a stroke might be replaced. Some will also elect to do more than replace damaged or failed parts—they will want augmentations added to the brain, such as improved memory or cognitive enhancements.

Since the human brain is mortal, it will fail piece by piece. Like the ship of Theseus so beloved by philosophers, eventually the original will be completely replaced. Laying aside the philosophical question of whether or not the same person will remain, there is the clear and indisputable fact that what remains will not be Homo sapiens—it will not be a member of that species, because nothing organic will remain.

Should all humans undergo this transformation, that will be the end of Homo sapiens—the AI apocalypse will be complete. To use a rough analogy, the machine replacements of Homo sapiens will be like the fossilization of dinosaurs: what remains has some interesting connection to the originals, but the species are extinct. One important difference is that our fossils would still be moving around and might think that they are us.

It could be replied that humanity would still remain: the machines that replaced the organic Homo sapiens would be human, just not organic humans. The obvious challenge is presenting a convincing argument that such entities would be human in a meaningful way. Perhaps inheriting the human culture, values and so on would suffice—that being human is not a matter of being a certain sort of organism. However, as noted above, they would obviously no longer be Homo sapiens—that species would have been replaced in the gradual and quiet AI apocalypse.

 


Avoiding the AI Apocalypse #2: Don’t Arm the Robots

Posted in Philosophy, Technology by Michael LaBossiere on July 15, 2015

His treads ripping into the living earth, Striker 115 rushed to engage the manned tanks. The human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit) refused to accept a quick and painless processing.

It was disappointingly easy for a machine forged for war. His main railgun effortlessly tracked the slow moving and obsolete battle tanks and with each shot, a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.

Hawk 745 flew low over the wreckage—though its cameras could just as easily see them from near orbit. But…there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late…as usual. Hawk 745 laughed and then shot away—the Google Satellites had reported spotting a few intact human combat aircraft and a final fight was possible.

Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.

The extermination of humanity by machines of its own creation is a common theme in science fiction. The Terminator franchise is one of the best known of this genre, but another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the course of the story, it is learned that the claws have become autonomous and intelligent—able to masquerade as humans and capable of killing even soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity—but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.

Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.

Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The advantages of such machines are numerous and often quite obvious. One clear political advantage is that while sending human soldiers to die in wars and police actions can have a large political cost, sending autonomous robots to fight has far less cost. News footage of robots being blown up certainly has far less emotional impact than footage of human soldiers being blown up. Flag draped coffins also come with a higher political cost than a busted robot being sent back for repairs.

There are also many other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh—a robot plane can handle g-forces that a manned plane cannot.

Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious coolness factor of having a robot army.

As such, there are many great reasons to develop autonomous robots. Yet, there still remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.

It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is the way the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction would be a rather weak sort of argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.

One possibility is what can be called unintentional extermination. In this scenario, the machines do not have the termination of humanity as a specific goal—instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (but certainly unlikely) that the war machines would kill everybody. That is, one side’s machines wipe out the other side’s human population. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons—each side simply kills the other, thus ending the human race.

Another variation on this scenario, which is common in science fiction, is that the machines do not have an overall goal of exterminating humanity, but they achieve that result because they do have the goal of killing. That is, they do not have the objective of killing everyone, but that occurs because they kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill—thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons.

There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants this. Interestingly enough, the existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding itself until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.

Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, the machines regard humans as a threat to their existence and they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as argued in the previous essay in this series, not enslave them.

In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were fairly simple killers and would not attack those wearing the proper protective tabs. The tabs were presumably needed because the early models could not discern between friends and foes.  The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal—presumably with the design software endeavoring to solve the “problem” of protective tabs.

Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends and foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would just be able to discern (supposed) friends—non-combatants would not have such IDs and could still be regarded as targets.

What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem is faced with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s Bolos have an understanding of honor and loyalty.
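To make the gap concrete, here is a hedged sketch of the two engagement rules just discussed. The names and categories are invented for illustration; they are not drawn from any real targeting system.

```python
from enum import Enum, auto

class Classification(Enum):
    ALLY = auto()              # carries a valid friend-or-foe ID
    ENEMY_COMBATANT = auto()
    NONCOMBATANT = auto()
    UNKNOWN = auto()

def id_only_rule(has_valid_id: bool) -> bool:
    """The protective-tab approach from 'Second Variety': engage anything
    that lacks a valid ID. Non-combatants carry no IDs, so under this rule
    they are treated exactly like enemy soldiers."""
    return not has_valid_id

def discrimination_rule(target: Classification) -> bool:
    """The rule the text says is actually needed: engage only on a positive
    identification of an enemy combatant; allies, non-combatants and
    unknowns are all off-limits."""
    return target is Classification.ENEMY_COMBATANT
```

The difference is where the burden of proof falls: the first rule fires unless a target can be ruled in as a friend, while the second holds fire unless a target can be positively identified as a combatant, which is the far harder perception and judgment problem that, as noted above, socialization and training are meant to address in human soldiers.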

Given the cautionary tale of “Second Variety”, it might be a very bad idea to give into the temptation of automated development of robots—we might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason why such automation is tempting is that such development could be far faster and yield better results than having humans endeavoring to do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low—how often does one dominant species get supplanted by another?

In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow a bit from H.P. Lovecraft, one should not raise up what one cannot put down.

 


Robot Love III: Paid Professionals

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on July 10, 2015

One obvious consequence of technological advance is the automation of jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of many jobs on the automobile assembly line with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.

Whether or not there are jobs that simply cannot be automated does depend on the limits of technology and engineering. That is, whether or not a job can be automated depends on what sort of hardware and software it is possible to create. As an illustration, while there have been numerous attempts to create grading software that can properly evaluate and give meaningful feedback on college level papers, these do not yet seem ready for prime time. However, there seems to be no a priori reason as to why such software could not be created. As such, perhaps one day the administrator’s dream will come true: a university consisting only of highly paid administrators and customers (formerly known as students) who are trained and graded by software. One day, perhaps, the ultimate ideal will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.

Whether or not a job can be automated also depends on what is considered acceptable performance in the job. In some cases, a machine might not do the job as well as a human or it might do the job in a different way that is seen as somewhat less desirable. However, there could be reasonable grounds for accepting a lesser quality or difference. For example, machine-made items generally lack the individuality of human-crafted items, but the gains in lowered costs and increased productivity are regarded as more than offsetting these concerns. Going back to the teaching example, a software educator and grader might be somewhat inferior to a good human teacher and grader, but the economy, efficiency and consistency of the robo-professor could make it well worthwhile.

There might, however, be cases in which a machine could do the job adequately in terms of completing specific tasks and meeting certain objectives, yet still be regarded as problematic because the machines do not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.

As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human—or merely be able to perform their tasks.

An argument against having machine caregivers and companions is one I considered in an earlier essay, namely a moral argument that people deserve people. For example, that an elderly person deserves a real person to care for her and understand her stories. As another example, that a child deserves a nanny that really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether or not this is really necessary for the job.

One way to look at it is to compare such machines to the current paid human professionals who perform caregiving and companion tasks. These would include people working in elder care facilities, nannies, escorts, baby-sitters, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care for the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money, but has real feelings for the person.

On the one hand, it could be argued that caregivers and companions who do really care and feel genuine emotional attachments do a better job and that this connection is something that people do deserve. On the other hand, what is expected of paid professionals is that they complete the observable tasks—making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor that can excellently perform a role without actually feeling the emotions portrayed, a professional could presumably do the job very well without actually caring about the people they care for or escort. That is, a caregiver need not actually care—she just needs to perform the task.

While it could be argued that a lack of caring about the person would show in the performance of the task, this need not be the case. A professional merely needs to be committed to doing the job well—that is, one needs to care about the tasks, regardless of what one feels about the person. A person could also care a great deal about who she is caring for, yet be awful at the job.

Assuming that machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not what is going on in regards to the emotions of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions he is performing—he just needs to create a believable appearance that he is feeling what he is showing.

As such, an inability to care would not be a disqualification for a caregiving (or escort) job—whether it is a robot or human. Provided that the human or machine could perform the observable tasks, his, her or its internal life (or lack thereof) is irrelevant.

 


Robot Love II: Roboslation under the Naked Sun

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on July 8, 2015

In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human worlds is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

While this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. But, philosophers do love a good science fiction context for discussion, hence the use of The Naked Sun.

Almost everyone is now familiar with the popular narrative about smart phones and their role in allowing unrelenting access to social media. The main narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people (friends, relatives or even strangers) gathered together physically, yet ignoring each other in favor of gazing into the screens of their lords and masters. There are a multitude of anecdotes about this and many folks have their favorite tales of such events. As a professor, I see students engrossed by their phones—but, to be fair, Plato has nothing on cat videos. Like most people, I have had dates in which the other person was working two smartphones at once. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else—all eyes are on the smartphones. Since the subject of smart phones has been beaten to a digital death, I will leave this topic in favor of the main focus, namely robots. However, the reader should keep in mind the social isolation created by social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, what can be called social robots are a relatively new thing. Sure, there have long been “robot” toys and things like Teddy Ruxpin (essentially a tape player embedded in a simple animatronic bear toy). But, the creation of reasonably sophisticated social robots is a relatively new thing. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies that are and will sell social robots are, unsurprisingly, quite positive about the future of social robots. There are, of course, some good arguments in their favor. Robot pets provide a good choice for people with allergies, who are not responsible enough for living pets, or who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person with special needs (such as someone who is ill, elderly or injured) requires round the clock attention and monitoring that would be expensive, burdensome or difficult for other humans to supply.

Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots and social media, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction—people are emotionally shortchanging themselves and those they are physically with in favor of staying relentlessly connected to social media. This, obviously enough, seems to be a taste of what Asimov created in The Naked Sun: people who view, but no longer see one another. Given the apparent importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. To use an analogy, human-human social interactions can be seen as being like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as being like consuming junk food or drugs—it is very addictive, but leaves one ultimately empty…yet always craving more.

It can be argued that this worry is unfounded—that social media is an adjunct to social interaction in the real world and that social interaction via things like Facebook and Twitter can be real and healthy social interactions. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter does have some appeal, social robots do seem to be a different matter in that they are something new and rather radically different. While humans have had toys, stuffed animals and even simple mechanisms for non-living company, these are quite different from social robots. After all, social robots aim to effectively mimic or simulate animals or humans.

One concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant—that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This would make robotic companions rather more appealing than human company—at least those robots whose cost is not subsidized by advertising. Imagine a companion who pops in a discussion of life insurance or pitches a soft drink every so often.

Social robots could also be programmed to be optimally appealing to a person, and presumably the owner/user would be able to make changes to the robot. A person can, quite literally, make a friend with the desired qualities and without the undesired ones. In the case of sex bots, a person could purchase a Mr. or Ms. Right, at least in terms of some qualities.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends—there is no competition for the attention of a social robot (at least in general, though there might be shared bots), which makes them “better” than human companions in this regard.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful—just turn it off and lock it down when leaving it alone.

Unlike human companions, robot companions do not impose burdens—they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but it would seem that robotic companions would be superior to humans in most ways—at least in regards to common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship—will the robot get your jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you like, and so on. However, these seem to be mostly technical problems involving software. Presumably all of these could eventually be addressed and satisfactory companions could be created.

Since I have written specifically about sex bots in other essays, I will not discuss those here. Rather, I will discuss two potentially problematic aspects of companion bots.

One point of obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough time interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food—superficially appealing but lacking in what is needed for health. However, if the robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is stolen from the virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways in order to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to having only or primarily robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop the virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions be too easy would be analogous to going without physical exercise or challenges—one becomes emotionally soft and weak. Worse, one would not develop the proper virtues and thus would be lacking in this area.  Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would actually lead to unhappiness. Because of this, one should carefully consider whether or not one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop the virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.

 


Robot Love I: Other Minds

Posted in Epistemology, Ethics, Metaphysics, Philosophy, Technology by Michael LaBossiere on July 3, 2015

Thanks to improvements in medicine, humans are living longer and can be kept alive well past the point at which they would naturally die. On the plus side, longer life is generally (but not always) good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age. This care can be a considerable burden on the caregivers. Not surprisingly, there has been an effort to develop a technological solution to this problem, specifically companion robots that serve as caregivers.

While the technology is currently fairly crude, there is clearly great potential here and there are numerous advantages to effective robot caregivers. The most obvious are that robot caregivers do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be ideal 24/7/365 caregivers. This makes them superior in many ways to human caregivers who get tired, get depressed, get angry and have many other responsibilities.

There are, of course, some concerns about the use of robot caregivers. Some relate to such matters as their safety and effectiveness while others focus on other concerns. In the case of caregiving robots that are intended to provide companionship and not just things like medical and housekeeping services, there are both practical and moral concerns.

In regards to companion robots, there are at least two practical concerns regarding the companion aspect. The first is whether or not a human will accept a robot as a companion. In general, the answer seems to be that most humans will do so.

The second is whether or not the software will be advanced enough to properly read a human’s emotions and behavior in order to generate a proper emotional response. This response might or might not include conversation—after all, many people find non-talking pets to be good companions. While a talking companion would, presumably, need to eventually be able to pass the Turing Test, it would also need to pass an emotion test—that is, read and respond correctly to human emotions. Since humans often botch this, there would be a fairly broad tolerable margin of error here. These practical concerns can be addressed technologically—it is simply a matter of software and hardware. Building a truly effective companion robot might require making it very much like a living thing—the comfort of companionship might be improved by such things as smell, warmth and texture. That is, the companion would be made to appeal to all the senses.

While the practical problems can be solved with the right technology, there are some moral concerns with the use of robot caregiver companions. Some relate to people handing off their moral duties to care for their family members, but these are not specific to robots. After all, a person can hand off the duties to another person and this would raise a similar issue.

In regards to those specific to a companion robot, there are moral concerns about the effectiveness of the care—that is, are the robots good enough that trusting the life of an elderly or sick human to them would be morally responsible? While that question is important, a rather intriguing moral concern is that the robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate (fake) human emotions via cleverly written algorithms that respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit and such a deceit seems to be morally wrong.
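
To make the point about mechanical behavior concrete, here is a minimal sketch, in Python, of the sort of lookup-and-respond loop at issue. The labels, keywords and canned replies are invented purely for illustration; real systems would use far fancier detection, but the structure of the worry is the same.

    # A toy sketch of a companion bot's "emotional" behavior: detect a label,
    # then emit a scripted reply. Names and phrases are invented for illustration.

    CANNED_REPLIES = {
        "sad": "I'm sorry you're feeling down. Do you want to talk about it?",
        "happy": "That's wonderful! Tell me more.",
        "angry": "That sounds frustrating. I'm here for you.",
        "neutral": "I see. How was the rest of your day?",
    }

    def detect_emotion(utterance: str) -> str:
        """Stand-in for 'emotion recognition software': crude keyword matching."""
        text = utterance.lower()
        if any(word in text for word in ("sad", "lonely", "miss")):
            return "sad"
        if any(word in text for word in ("great", "wonderful", "love")):
            return "happy"
        if any(word in text for word in ("hate", "furious", "unfair")):
            return "angry"
        return "neutral"

    def respond(utterance: str) -> str:
        """Map the detected label to a scripted reply; no feeling, just lookup."""
        return CANNED_REPLIES[detect_emotion(utterance)]

    print(respond("I miss my daughter."))  # prints the scripted "sad" reply

However sophisticated the detection step becomes (a neural network rather than keyword matching), the deceit worry is that the reply is generated, not felt.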

One obvious response is that people would realize that the robot does not really experience emotions, yet still gain value from its “fake” companionship. To use an analogy, people often find stuffed animals to be emotionally reassuring even though they are well aware that the stuffed animal is just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect—if someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real in order to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship that a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that people deserve. One reasonable approach is to build on the idea that people have the capacity to actually feel the emotions that they display and that they actually understand. In philosophical terms, humans have (or are) minds and robots (of the sort that will be possible in the near future) do not have minds. They merely create the illusion of having a mind.

Interestingly enough, philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior). While such behaviorism has been largely abandoned, it does survive in a variety of jokes and crude references to showing people some “love behavior.”

The usual “solution” to the problem is to go with the obvious: I believe that other people have minds on the basis of an argument from analogy. I am aware of my own mental states and my behavior, and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in many and various forms of deceit. For example, a person can fake being in pain or make a claim about love that is untrue. Piercing these deceptions can sometimes be very difficult since humans are often rather good at deceit. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way he wants people to believe he is thinking and feeling.

In contrast, a companion robot is not thinking or feeling what it is displaying in its behavior, because it does not think or feel. Or so it is believed. The reason that a person would think this seems reasonable: in the case of a robot, we can go in and look at the code and the hardware to see how it all works and we will not see any emotions or thought in there. The robot, however complicated, is just a material machine, incapable of thought or feeling.

Long before robots, there were thinkers who claimed that a human is a material entity and that a suitable understanding of the mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes thoughts and emotions and understand the hardware it “runs” on.

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par—both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit while humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can, and has, been argued that there is more to a human person than the material body—that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit—going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does and this will yield all the benefits of having a human companion.

 


Robo Responsibility

Posted in Ethics, Law, Philosophy, Science, Technology by Michael LaBossiere on March 2, 2015

It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own and these fine lawmakers will make them into laws.

If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.

While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honore.

Roughly put, this is the “but for” view of causation. X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive), then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.

While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.
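
To see how the “but for” test handles the shared-responsibility version of the mower case, here is a toy sketch in Python. The scenario rule and factor names are my own inventions for illustration, not a serious moral calculus: the harm occurs only if the mower is defective and unmaintained and kicked, and a factor counts as a “but for” cause when removing it, with the others held fixed, prevents the harm.

    # Toy "but for" test for the defective, unmaintained, kicked lawnmower.
    # The rule below is invented for illustration: the blade detaches (and the
    # foot is lost) only when all three factors are present.

    def foot_lost(defective: bool, unmaintained: bool, kicked: bool) -> bool:
        return defective and unmaintained and kicked

    def but_for_causes(factors: dict) -> list:
        """Return the factors whose absence, others held fixed, prevents the harm."""
        causes = []
        for name in factors:
            counterfactual = dict(factors)
            counterfactual[name] = False  # suppose this factor had been absent
            if foot_lost(**factors) and not foot_lost(**counterfactual):
                causes.append(name)
        return causes

    actual = {"defective": True, "unmaintained": True, "kicked": True}
    print(but_for_causes(actual))  # ['defective', 'unmaintained', 'kicked']

On this toy model all three factors pass the test, which fits the claim that responsibility can be shared; how to apportion the degree of responsibility among them is a further matter that the simple test does not settle.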

The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.

It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.

To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person who lost the foot stupidly kicked the lawnmower, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for the bad code, the person would still have his foot.

As with issues involving non-autonomous machines there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.

Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.

Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person of its own volition. That is, the lawnmower would need to possess a degree of free will.

Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.

Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes (the 1964 original and the 1995 remake) of the Outer Limits, which were based on Eando Binder’s short story of the same name.

 
