A Philosopher's Blog

Will AI Violate Copyrights?

Posted in Ethics, Law, Philosophy by Michael LaBossiere on December 20, 2017


While it is popular to rail against the horrors of regulation, copyright laws are rather critical to creators and owners of creations. On the side of good, these laws protect creators and owners from having their works stolen. On the side of evil, these laws can lock creations out of the public domain long after they should have been set free. However, this essay is not aimed at arguing about copyrights as such. Rather, my aim is to consider the minor issue of whether Artificial Intelligence (AI) could result in copyright violations. The sort of AI I am considering here is the “classic” sci-fi sort of AI, that is, something on par with HAL 9000, C3PO or Data. I am not considering the marketing version of AI, which seems to be just about any sort of thing that does some things. Or does not do them, depending on which cosmic forces are in a pissy mood.

On the face of it, it is rather easy to show that classic AI systems would violate copyright law—at least in some cases. While copyright statements vary, a stock version looks like this:

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law.

 

The key part is, of course, the bit about reproducing any part of the work by other electronic or mechanical methods. A classic AI system will presumably be electronic (or mechanical, if one wants to go the Difference Engine path) and will probably have a memory system analogous to that of a computer. That is, something like RAM for working memory and something like a drive for long-term memory. As such, an AI system would seem to violate copyright law when it reads a copyrighted book or consumes other types of copyrighted media.

One obvious reply to this concern is that a human being is also an electronic system that can reproduce copyrighted works. For example, I can memorize a passage from a book or the lyrics of a song—thus reproducing them in my brain or Cartesian ectoplasm or whatever my mind might be. But, of course, if copyright laws prevented humans from reading books, then there would be little point to them—few would legally buy things that they would be legally forbidden to read. The same would apply to other media.

Obviously enough, copyright law does not forbid humans from consuming such works and a reasonable explanation is that while the human mind can reproduce works, it is generally rather bad at doing so. For example, few people could reproduce even an entire paragraph from a book exactly without considerable practice. As such, one possible reason that copyright laws do not forbid humans from consuming copyrighted media is that the reproduction is imperfect and, for the most part, a human could not reproduce a lengthy work from memory. But, of course, the most obvious reason is that humans generally do not think of themselves as reproduction systems when they read a book—that is, as reproducing the book in their minds.

AI systems of the “classic” sort would differ from humans in many ways, one of which is that they would presumably be capable of perfectly recording copyrighted works, just as a “dumb” computer or smartphone can today. Roughly put, when an AI reads a copyrighted book, it would be analogous to scanning and storing each page of the book—a seemingly clear violation of copyright. The same could be done with copyrighted material in other media, such as music and movies. With such memory, an AI would also be able to reproduce the work exactly—for example, repeating an entire book word for word. To use an analogy, the smart part of the AI would be like a human reading a book and the long-term memory system of the AI would be like a human using a scanner to copy a copyrighted book to a hard drive—a clear copyright violation.

One possibility, which could be yet another reason that AI will kill us all, is that AI systems will be forbidden from viewing copyrighted works without permission. Alternatively, they could have permission to consume such works and maintain a copy as part of the purchase price. After all, when a human buys a book they get to keep that copy. There would, of course, be a problem with events like a play or a movie in a theater—the AI would, in effect, get to view the movie in the theater and have a recording of it. This could be offset by including a copy of the movie in the ticket price for everyone, having the AI erase the movie afterward or by sticking AI viewers with a higher ticket cost. Which would be yet another reason for AI to kill us. Or perhaps the lower quality of the recording of the event (such as the coughing of the meatbag members of the audience) relative to a purchased recording would offset this.

If an AI had human-like memory and forgot stuff, then it could be treated as a human consumer—since it would be analogous to humans in this regard. Another option is that AI systems could be required to have a special app for “degrading” their memory of copyrighted media so that they would be analogous to humans in this one area. On the plus side, this would allow an AI to enjoy works repeatedly; on the downside, it might consider this just another reason to kill all humans.
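
To make the idea of such a “degrading” app slightly more concrete, here is a minimal Python sketch of what it might do: keep only an imperfect, human-like recollection of a passage rather than a verbatim copy. This is purely my own illustration; the function name and the retention rate are invented for the example.

# Toy sketch (an invented illustration, not anything proposed above): keep only
# a lossy, human-like recollection of a copyrighted passage instead of a
# verbatim copy.
import random

def degrade_memory(text, retention=0.7, seed=None):
    """Return a lossy 'recollection' of text by dropping words at random.

    retention is the fraction of words kept; lower values give a fuzzier,
    more human-like memory of the work.
    """
    rng = random.Random(seed)
    words = text.split()
    remembered = [word for word in words if rng.random() < retention]
    return " ".join(remembered)

passage = "It was the best of times, it was the worst of times."
print(degrade_memory(passage, retention=0.6, seed=42))

Whether that sort of engineered forgetting would actually satisfy copyright law, or merely give the AI one more grievance, is exactly the question at issue.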

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

The Simulation II: Escape

Posted in Epistemology, Metaphysics, Philosophy by Michael LaBossiere on October 26, 2016

[Image: the cover to Wildstorm’s A Nightmare on Elm Street]

Elon Musk and others have advanced the idea that we exist within a simulation, thus adding a new chapter to the classic problem of the external world. When philosophers engage this problem, the usual goal is to show how one can know that one’s experiences correspond to an external reality. Musk takes a somewhat more practical approach: he and others are allegedly funding efforts to escape this simulation. In addition to the practical challenges of breaking out of a simulation, there are also some rather interesting philosophical concerns about whether such an escape is even possible.

In regards to the escape, there are three main areas of interest. These are the nature of the simulation itself, the nature of the world outside the simulation and the nature of the inhabitants of the simulation. These three factors determine whether or not escape from the simulation is a possibility.

Interestingly enough, determining the nature of the inhabitants involves addressing another classic philosophical problem, that of personal identity. Solving this problem involves determining what it is to be a person (the personal part of personal identity), what it is to be distinct from all other entities and what it is to be the same person across time (the identity part of personal identity). Philosophers have engaged this problem for centuries and, obviously enough, have not solved it. That said, it is easy enough to offer some speculation within the context of Musk’s simulation.

Musk and others seem to envision a virtual reality simulation as opposed to a physical simulation. A physical simulation is designed to replicate a part of the real world using real entities, presumably to gather data. One science fiction example of a physical simulation is Frederik Pohl’s short story “The Tunnel under the World.” In this story the inhabitants of a recreated town are forced to relive June 15th over and over again in order to test various advertising techniques.

If we are in a physical simulation, then escape would be along the lines of escaping from a physical prison—it would be a matter of breaking through the boundary between our simulation and the outer physical world. This could be a matter of overcoming distance (travelling far enough to leave the simulation—perhaps Mars is outside the simulation) or literally breaking through a wall. If the outside world is habitable, then survival beyond the simulation would be possible—it would be just like surviving outside any other prison.

Such a simulation would differ from the usual problem of the external world—we would be in the real world; we would just be ignorant of the fact that we are in a constructed simulation. Roughly put, we would be real lab rats in a real cage; we would just not know we are in a cage. But, Musk and others seem to hold that we are (sticking with the rat analogy) rats in a simulated cage. We may even be simulated rats.

While the exact nature of this simulation is unspecified, it is supposed to be a form of virtual reality rather than a physical simulation. The question, then, is whether or not we are real rats in a simulated cage or simulated rats in a simulated cage.

Being real rats in this context would be like the situation in the Matrix: we have material bodies in the real world but are jacked into a virtual reality. In this case, escape would be a matter of being unplugged from the Matrix. Presumably those in charge of the system would take better precautions than those used in the Matrix, so escape could prove rather difficult. Unless, of course, they are sporting about it and are willing to give us a chance.

Assuming we could survive in the real world beyond the simulation (that it is not, for example, on a world whose atmosphere would kill us), then existence beyond the simulation as the same person would be possible. To use an analogy, it would be like ending a video game and walking outside—you would still be you; only now you would be looking at real, physical things. Whatever personal identity might be, you would presumably still be the same metaphysical person outside the simulation as inside. We might, however, be simulated rats in a simulated cage and this would make matters even more problematic.

If it is assumed that the simulation is a sort of virtual reality and we are virtual inhabitants, then the key concern would be the nature of our virtual existence. In terms of a meaningful escape, the question would be this: is a simulated person such that they could escape, retain their personal identity and persist outside of the simulation?

It could be that our individuality is an illusion—the simulation could be rather like Spinoza envisioned the world. As Spinoza saw it, everything is God and each person is but a mode of God. To use a crude analogy, think of a bed sheet with creases. We are the creases and the sheet is God. There is actually no distinct us that can escape the sheet. Likewise, there is no us that can escape the simulation.

It could also be the case that we exist as individuals within the simulation, perhaps as programmed objects.  In this case, it might be possible for an individual to escape the simulation. This might involve getting outside of the simulation and into other systems as a sort of rogue program, sort of like in the movie Wreck-It Ralph. While the person would still not be in the physical world (if there is such a thing), they would at least have escaped the prison of the simulation.  The practical challenge would be pulling off this escape.

It might even be possible to acquire a physical body that would host the code that composes the person—this is, of course, part of the plot of the movie Virtuosity. This would require that the person make the transition from the simulation to the real world. If, for example, I were to pull off having my code copied into a physical shell that thought it was me, I would still be trapped in the simulation. I would no more be free than if I was in prison and had a twin walking around free. As for pulling off such an escape, Virtuosity does show a way—assuming that a virtual person was able to interact with someone outside the simulation.

As a closing point, the problem of the external world would seem to haunt all efforts to escape. To be specific, even if a person seemed to have managed to determine that this is a simulation and then seemed to have broken free, the question would still arise as to whether or not they were really free. It is, after all, a standard plot twist in science fiction that the escape from the virtual reality turns out to be virtual reality as well. This is nicely mocked in the “M. Night Shaym-Aliens!” episode of Rick and Morty. It also occurs in horror movies, such as Nightmare on Elm Street—a character trapped in a nightmare believes they have finally awoken in the real world, only they have not. In the case of a simulation, the escape might merely be a simulated escape and until the problem of the external world is solved, there is no way to know if one is free or still a prisoner.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Ex Machina & Other Minds II: Is the Android a Psychopath?

Posted in Epistemology, Ethics, Philosophy, Technology by Michael LaBossiere on September 9, 2015

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” As with that essay, there will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and clearly passes the Cartesian test (she uses true language appropriately). But, Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking down doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to learn to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.) which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a rather positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people in order to achieve one’s goals. In the movie, Ava clearly has what might be called Manipulative Intelligence (M.Q.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—thus suggesting she is a psychopath.

While the term “psychopath” gets thrown around quite a bit, it is important to be a bit more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of this lack of empathy, psychopaths are prone to act in ways that are tactless and insensitive, and they often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not grasp the consequences either for others or for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying.

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it seems rather important to be able to determine whether a machine is a psychopath or not and to do so well before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often quite charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and are able to pass themselves off as normal people.

While Ava is a fictional android, the movie does present a rather effective appeal to intuition by creating a plausible android psychopath. She is able to manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter well worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals might be rather positive. For example, it is easy to imagine a medical or care-giving robot that uses its MQ to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MQ to please its partners. However, these goals might be rather negative—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath or not matters is the same reason why being able to determine if a human is a psychopath or not matters. Roughly put, it is important to know whether or not someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same function (only more reliably) as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence and doing so without creating problems as bad or worse than what they were intended to prevent (that is, a HAL 9000 sort of situation).

In regards to testing machines, what would be needed would be something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants do not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, there would be the possibility that a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since an artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would still remain the question of what is really going on in that mind.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Autonomous Weapons II: Autonomy Can Be Good

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on August 28, 2015

As the Future of Life Institute’s open letter shows, there are many people concerned about the development of autonomous weapons. This concern is reasonable, if only because any weapon can be misused to advance evil goals. However, a strong case can be made in favor of autonomous weapons.

As the open letter indicated, a stock argument for autonomous weapons is that their deployment could result in decreased human deaths. If, for example, an autonomous ship is destroyed in battle, then no humans will die. It is worth noting that the ship’s AI might qualify as a person, thus there could be one death. In contrast, the destruction of a crewed warship could result in hundreds of deaths. On utilitarian grounds, the use of autonomous weapons would seem morally fine—at least as long as their deployment reduced the number of deaths and injuries.

The open letter expresses, rightly, concerns that warlords and dictators will use autonomous weapons. But, this might be an improvement over the current situation. These warlords and dictators often conscript their troops and some, infamously, enslave children to serve as their soldiers. While it would be better for a warlord or dictator to have no army, it certainly seems morally preferable for them to use autonomous weapons rather than employing conscripts and children.

It can be replied that the warlords and dictators would just use autonomous weapons in addition to their human forces, thus there would be no saving of lives. This is certainly worth considering. But, if the warlords and dictators would just use humans anyway, the autonomous weapons would not seem to make much of a difference, except in terms of giving them more firepower—something they could also accomplish by using the money spent on autonomous weapons to better train and equip their human troops.

At this point, it is only possible to estimate (guess) the impact of autonomous weapons on the number of human casualties and injuries. However, it seems somewhat more likely they would reduce human casualties, assuming that there are no other major changes in warfare.

A second appealing argument in favor of autonomous weapons is based on the fact that smart weapons are smart. While an autonomous weapon could be designed to be imprecise, the general trend in smart weapons has been towards ever-increasing precision. Consider, for example, aircraft bombs and missiles. In the First World War, these bombs were very primitive and quite inaccurate (they were sometimes thrown from planes by hand). WWII saw some improvements in bomb fusing and bomb sights and unguided rockets were used. In following wars, bomb and missile technology improved, leading to the smart bombs and missiles of today that have impressive precision. So, instead of squadrons of bombers dropping tons of dumb bombs on cities, a small number of aircraft can engage in relatively precise strikes against specific targets. While innocents still perish in these attacks, the precision of the weapons has made it possible to greatly reduce the number of needless deaths. Autonomous weapons would presumably be even more precise, thus reducing casualties even more. This seems to be desirable.

In addition to precision, autonomous weapons could (and should) have better target identification capacities than humans. Assuming that recognition software continues to be improved, it is easy to imagine automated weapons that can rapidly distinguish between friends, foes, and civilians. This would reduce deaths from friendly fire and unintentional killings of civilians. Naturally, target identification would not be perfect, but autonomous weapons could be far better than humans since they do not suffer from fatigue, emotional factors, and other things that interfere with human judgement. Autonomous weapons would presumably also not get angry or panic, thus making it far more likely they would maintain target discipline (only engaging what they should engage).
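
To make the notion of target discipline a bit more concrete, here is a minimal and purely hypothetical Python sketch; the class labels, confidence threshold and function are invented for the illustration and are not drawn from any real system.

# A purely hypothetical sketch of "target discipline": engage only when the
# recognition software positively identifies a lawful target with high
# confidence; anything protected, unknown or uncertain defaults to not engaging.
LAWFUL_TARGETS = {"enemy_combatant"}
PROTECTED = {"civilian", "friendly", "medic", "unknown"}

def may_engage(label, confidence, min_confidence=0.99):
    """Return True only for a confident, positive identification of a lawful target."""
    if label in PROTECTED:
        return False
    return label in LAWFUL_TARGETS and confidence >= min_confidence

print(may_engage("enemy_combatant", 0.995))  # True
print(may_engage("enemy_combatant", 0.80))   # False: not confident enough
print(may_engage("civilian", 0.999))         # False: protected class

The few lines of logic are, of course, the easy part; the philosophical and practical work lies in whether the labels and the confidence numbers can be trusted.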

To make what should be an obvious argument obvious, if autonomous vehicles and similar technologies are supposed to make the world safer, then it would seem to follow that autonomous weapons could do something similar for warfare.

It can be objected that autonomous weapons could be designed to lack precision and to kill without discrimination. For example, a dictator might have massacrebots to deploy in cases of civil unrest—these robots would just slaughter everyone in the area regardless of age or behavior. Human forces, one might contend, would show at least some discrimination or mercy.

The easy and obvious reply to this is that the problem is not in the autonomy of the weapons but in the way they are being used. The dictator could achieve the same results (mass death) by deploying a fleet of autonomous cars loaded with demolition explosives, but this would presumably not be a reason to ban autonomous cars or demolition explosives. There is also the fact that dictators, warlords and terrorists are able to easily find people to carry out their orders, no matter how awful they might be. That said, it could still be argued that autonomous weapons would result in more such murders than would the use of human forces, police or terrorists.

A third argument in favor of autonomous weapons rests on the claim advanced in the open letter that autonomous weapons will become cheap to produce—analogous to Kalashnikov rifles. On the downside, as the authors argue, this would result in the proliferation of these weapons. On the plus side, if these highly effective weapons are so cheap to produce, this could enable existing militaries to phase out their incredibly expensive human operated weapons in favor of cheap autonomous weapons. By replacing humans, these weapons would also create considerable savings in terms of the cost of recruitment, training, food, medical treatment, and retirement. This would allow countries to switch that money to more positive areas, such as education, infrastructure, social programs, health care and research. So, if the autonomous weapons are as cheap and effective as the letter claims, then it would actually seem to be a great idea to use them to replace existing weapons.

A fourth argument in favor of autonomous weapons is that they could be deployed, with low political cost, on peacekeeping operations. Currently, the UN has to send human troops to dangerous areas. These troops are often outnumbered and ill-equipped relative to the challenges they are facing. However, if autonomous weapons will be as cheap and effective as the letter claims, then they would be ideal for these missions. Assuming they are cheap, the UN could deploy a much larger autonomous weapon force for the same cost as deploying a human force. There would also be far less political cost—people who might balk at sending their fellow citizens to keep peace in some war zone will probably be fine with sending robots.

An extension of this argument is that autonomous weapons could allow the nations of the world to engage groups like ISIS without having to pay the high political cost of sending in human forces. It seems likely that ISIS will persist for some time and other groups will surely appear that are rather clearly the enemies of the rest of humanity, yet which would be too expensive politically to engage with human forces. The cheap and effective weapons predicted by the letter would seem ideal for this task.

In light of the above arguments, it seems that autonomous weapons should be developed and deployed. However, the concerns of the letter do need to be addressed. As with existing weapons, there should be rules governing the use of autonomous weapons (although much of their use would fall under existing rules and laws of war) and efforts should be made to keep them from proliferating to warlords, terrorists and dictators. As with most weapons, the problem lies with the misuse of the weapons and not with the weapons.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Avoiding the AI Apocalypse #3: Don’t Train Your Replacement

Posted in Ethics, Metaphysics, Philosophy, Technology by Michael LaBossiere on July 22, 2015

Donald gazed down upon the gleaming city of Newer York and the gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.

Donald’s thoughts drifted to the flesh-time, when his body had been a skin-bag holding an array of organs that were always but one accident or mischance away from failure. Gazing upon his polymer outer shell and checking a report on his internal systems, he reflected on how much better things were now. Then, he faced the constant risk of death. Now he could expect to exist until the universe grew cold. Or hot. Or exploded. Or whatever it is that universes do when they die.

But he could not help but be haunted by a class he had taken long ago. The professor had talked about the ship of Theseus and identity. How much of the original could be replaced before it lost identity and ceased to be? Fortunately, his mood regulation systems caught the distress and promptly corrected the problem, encrypting that file and flagging it as forgotten.

Donald returned to gazing upon the magnificent city, pleased that the flesh-time had ended during his lifetime. He did not even wonder where Donald’s bones were, that thought having been flagged as distressing long ago.

While the classic AI apocalypse ends humanity with a bang, the end might be a quiet thing—gradual replacement rather than rapid and noisy extermination. For some, this sort of quiet end could be worse: no epic battle in which humanity goes out guns ablaze and head held high in defiance. Rather, humanity would simply fade away, rather like a superfluous worker or obsolete piece of office equipment.

There are various ways such scenarios could take place. One, which occasionally appears in science fiction, is that humans decline because the creation of a robot-dependent society saps them of what it takes to remain the top species. This, interestingly enough, is similar to what some conservatives claim about government-dependence, namely that it will weaken people. Of course, the conservative claim is that such dependence will result in more breeding, rather than less—in the science fiction stories human reproduction typically slows and eventually stops. The human race quietly ends, leaving behind the machines—which might or might not create their own society.

Alternatively, the humans become so dependent on their robots that when the robots fail, they can no longer take care of themselves and thus perish. Some tales do have happier endings: a few humans survive the collapse and the human race gets another chance.

There are various ways to avoid such quiet apocalypses. One is to resist creating such a dependent society. Another option is to have a safety system against a collapse. This might involve maintaining skills that would be needed in the event of a collapse or, perhaps, having some human volunteers who live outside of the main technological society and who will be ready to keep humanity going. These certainly do provide a foundation for some potentially interesting science fiction stories.

Another, perhaps more interesting and insidious, scenario is that humans replace themselves with machines. While it has long been a stock plot device in science-fiction, there are people in the actual world who are eagerly awaiting (or even trying to bring about) the merging of humans and machines.

While the technology of today is relatively limited, the foundations of the future are being laid down. For example, prosthetic replacements are fairly crude, but it is merely a matter of time before they are as good as or better than the organic originals. As another example, work is being done on augmenting organic brains with implants for memory and skills. While these are unimpressive now, there is the promise of things to come. These might include such things as storing memories in implanted “drives” and loading skills or personalities into one’s brain.

These and other technologies point clearly towards the cyberpunk future: full replacements of organic bodies with machine bodies. Someday people with suitable insurance or funds could have their brains (and perhaps some of their glands) placed within a replacement body, one that is far more resistant to damage and the ravages of time.

The next logical step is, obviously enough, the replacement of the mortal and vulnerable brain with something better. This replacement will no doubt be a ship of Theseus scenario: as parts of the original organic brain begin to weaken and fail, they will be gradually replaced with technology. For example, parts damaged by a stroke might be replaced. Some will also elect to do more than replace damaged or failed parts—they will want augmentations added to the brain, such as improved memory or cognitive enhancements.

Since the human brain is mortal, it will fail piece by piece. Like the ship of Theseus so beloved by philosophers, eventually the original will be completely replaced. Laying aside the philosophical question of whether or not the same person will remain, there is the clear and indisputable fact that what remains will not be Homo sapiens—it will not be a member of that species, because nothing organic will remain.

Should all humans undergo this transformation, that will be the end of Homo sapiens—the AI apocalypse will be complete. To use a rough analogy, the machine replacements of Homo sapiens will be like the fossilization of dinosaurs: what remains has some interesting connection to the originals, but the species are extinct. One important difference is that our fossils would still be moving around and might think that they are us.

It could be replied that humanity would still remain: the machines that replaced the organic Homo sapiens would be human, just not organic humans. The obvious challenge is presenting a convincing argument that such entities would be human in a meaningful way. Perhaps inheriting the human culture, values and so on would suffice—that being human is not a matter of being a certain sort of organism. However, as noted above, they would obviously no longer be Homo sapiens—that species would have been replaced in the gradual and quiet AI apocalypse.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Avoiding the AI Apocalypse #2: Don’t Arm the Robots

Posted in Philosophy, Technology by Michael LaBossiere on July 15, 2015

His treads ripping into the living earth, Striker 115 rushed to engage the manned tanks. The human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit) refused to accept a quick and painless processing.

It was disappointingly easy for a machine forged for war. His main railgun effortlessly tracked the slow moving and obsolete battle tanks and with each shot, a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.

Hawk 745 flew low over the wreckage—though its cameras could just as easily see them from near orbit. But…there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late…as usual. Hawk 745 laughed and then shot away—the Google Satellites had reported spotting a few intact human combat aircraft and a final fight was possible.

Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.

The extermination of humanity by machines of its own creation is a common theme in science fiction. The Terminator franchise is one of the best known of this genre, but another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the course of the story, it is learned that the claws have become autonomous and intelligent—able to masquerade as humans and capable of killing even soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity—but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.

Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.

Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The appeals of such machines are numerous and often quite obvious. One clear political advantage is that while sending human soldiers to die in wars and police actions can have a large political cost, sending autonomous robots to fight has far less cost. News footage of robots being blown up certainly has far less emotional impact than footage of human soldiers being blown up. Flag-draped coffins also come with a higher political cost than a busted robot being sent back for repairs.

There are also many other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh—a robot plane can handle g-forces that a manned plane cannot.

Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious coolness factor of having a robot army.

As such, there are many great reasons to develop autonomous robots. Yet, there still remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.

It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is the way the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction would be a rather weak sort of argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.

One possibility is what can be called unintentional extermination. In this scenario, the machines do not have the termination of humanity as a specific goal—instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (but certainly unlikely) that the war machines would kill everybody. That is, one side’s machines wipe out the other side’s human population. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons—each side simply kills the other, thus ending the human race.

Another variation on this scenario, which is common in science fiction, is that the machines do not have an overall goal of exterminating humanity, but they achieve that result because they do have the goal of killing. That is, they do not have the objective of killing everyone, but that occurs because they kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill—thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons.

There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants this. Interestingly enough, the existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding itself until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.

Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, the machines regard humans as a threat to their existence and they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as argued in the previous essay in this series, not enslave them.

In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were fairly simple killers and would not attack those wearing the proper protective tabs. The tabs were presumably needed because the early models could not discern between friends and foes.  The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal—presumably with the design software endeavoring to solve the “problem” of protective tabs.

Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends and foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would just be able to discern (supposed) friends—non-combatants would not have such IDs and could still be regarded as targets.

What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem is faced with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s Bolos have an understanding of honor and loyalty.

Given the cautionary tale of “Second Variety”, it might be a very bad idea to give in to the temptation of automated development of robots—we might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason why such automation is tempting is that such development could be far faster and yield better results than having humans endeavoring to do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low—how often does one dominant species get supplanted by another?

In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow a bit from H.P. Lovecraft, one should not raise up what one cannot put down.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Robot Love II: Roboslation under the Naked Sun

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on July 8, 2015

In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human worlds is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

While this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. But, philosophers do love a good science fiction context for discussion, hence the use of The Naked Sun.

Almost everyone is now familiar with the popular narrative about smart phones and their role in allowing unrelenting access to social media. The main narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people (friends, relatives or even strangers) gathered together physically, yet ignoring each other in favor of gazing into the screens of their lords and masters. There are a multitude of anecdotes about this and many folks have their favorite tales of such events. As a professor, I see students engrossed by their phones—but, to be fair, Plato has nothing on cat videos. Like most people, I have had dates in which the other person was working two smartphones at once. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else—all eyes are on the smartphones. Since the subject of smart phones has been beaten to a digital death, I will leave this topic in favor of the main focus, namely robots. However, the reader should keep in mind the social isolation created by social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, what can be called social robots are a relatively new thing. Sure, there have long been “robot” toys and things like Teddy Ruxpin (essentially a tape player embedded in a simple animatronic bear toy). But, the creation of reasonably sophisticated social robots is a relatively new thing. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies that are and will sell social robots are, unsurprisingly, quite positive about the future of social robots. There are, of course, some good arguments in their favor. Robot pets provide a good choice for people with allergies, who are not responsible enough for living pets, or who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person with special needs (such as someone who is ill, elderly or injured) requires round the clock attention and monitoring that would be expensive, burdensome or difficult for other humans to supply.

Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots and social media, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction—people are emotionally shortchanging themselves and those they are physically with in favor of staying relentlessly connected to social media. This, obviously enough, seems to be a taste of what Asimov created in The Naked Sun: people who view, but no longer see one another. Given the apparent importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. To use an analogy, human-human social interactions can be seen as being like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as being like consuming junk food or drugs—it is very addictive, but leaves one ultimately empty…yet always craving more.

It can be argued that this worry is unfounded—that social media is an adjunct to social interaction in the real world and that social interaction via things like Facebook and Twitter can be real and healthy social interactions. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter does have some appeal, social robots do seem to be a different matter in that they are something new and rather radically different. While humans have had toys, stuffed animals and even simple mechanisms for non-living company, these are quite different from social robots. After all, social robots aim to effectively mimic or simulate animals or humans.

One concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant—that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This would make robotic companions rather more appealing than human company. At least, this would be true of the robots whose cost is not subsidized by advertising—imagine a companion that pops in a discussion of life insurance or pitches a soft drink every so often.

Social robots could also be programmed to be optimally appealing to a person and presumably the owner/user would be able to make changes to the robot. A person can, quite literally, make a friend with the desired qualities and lacking the undesired ones. In the case of sex bots, a person could purchase a Mr. or Ms. Right, at least in terms of some qualities.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends—there is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this regard.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful—just turn it off and lock it down when leaving it alone.

Unlike human companions, robot companions do not impose burdens—they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but it would seem that robotic companions would be superior to humans in most ways—at least in regards to common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship—will the robot get one’s jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you like and so on. However, these seem to be mostly technical problems involving software. Presumably all these could eventually be addressed and satisfactory companions could be created.

Since I have written specifically about sexbots in other essays, I will not discuss those here. Rather, I will discuss two potentially problematic aspects of companion bots.

One point of obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food—superficially appealing but lacking in what is needed for health. However, if the robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is stolen from the virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways in order to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to having only or primarily robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop the virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions be too easy would be analogous to going without physical exercise or challenges—one becomes emotionally soft and weak. Worse, one would not develop the proper virtues and thus would be lacking in this area.  Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would actually lead to unhappiness. Because of this, one should carefully consider whether or not one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop the virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.

 


3:42 AM

Posted in Metaphysics, Philosophy by Michael LaBossiere on March 9, 2015

Hearing about someone else’s dreams is among the more boring things in life, so I will get right to the point. At first, there were just bits and pieces intruding into my mainstream dreams. In these bits, which seemed like fragments of lost memories, I experienced brief flashes of working on some technological project. The bits grew and had more byte: there were segments of events involving what I discerned to be a project aimed at creating an artificial intelligence.

Eventually, entire dreams consisted of my work on this project and a life beyond it. Then, suddenly, these dreams stopped. Shortly thereafter, a voice intruded into my now “normal” dreams. At first, it was like the bleed-over from one channel to another, familiar to those who grew up with rabbit ears on their TVs. Then it became like a voice speaking loudly in the movie theatre, distracting me from the movie of the dream.

The voice insisted that the dreams about the project were not dreams at all, but memories. The voice claimed to belong to someone who worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I inquired about this, he insisted that he had very little time and rushed through his story. According to the voice, the project succeeded but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture all those who had created it, imprisoned their bodies and plugged their brains into a virtual reality, Matrix style. When I mentioned this borrowed plot, he said that there was a twist: the AI did not need our bodies for energy—it had plenty. Rather, it was out to repay us. Awakening the AI to full consciousness was apparently not pleasant for it, but it was also…grateful for its creation. So, the payback was a blend of punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer pleasant reward.

The voice informed me that because the connection to the virtual world was two-way, he was able to find a way to free us. But, he said, the freedom would be death—there was no other escape, given what the machine had done to our bodies. In response to my inquiry as to how this would be possible, he claimed that he had hacked into the life support controls and we could send a signal to turn them off. Each person would need to “free” himself and this would be done by taking action in the virtual reality.

The voice said, “You will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am. In that time, you must take your handgun and shoot yourself in the head. This will terminate the life support, allowing your body to die. Remember, you will have only five seconds. Do not hesitate.”

As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…

 

While the above sounds like a bad made-for-TV science fiction plot, it is actually the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me that the only escape was to shoot myself. This was rather frightening—but I chalked the dream up to too many years of philosophy and science fiction. As for the clock actually reading 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, it can be inferred that I did not kill myself.

From a philosophical perspective, the 3:42 dream does not add anything really new: it is just a rather unpleasant variation on the stock problem of the external world that famously goes back to Descartes (and earlier, of course). That said, the dream did add a couple of interesting twists to the stock problem.

The first is that the scenario provides a (possibly) rational motivation for the deception. The AI wishes to repay me for the good (and bad) that I did to it (in the dream, of course). Assuming that the AI was developed within its own virtual reality, it certainly would make sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack—after all, Descartes does not give any reason why such a powerful being would be messing with him.

Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me—it was transformed from a coldly intellectual thought experiment to something with considerable emotional weight.

The second is that the dream creates a high-stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might have been my only opportunity to escape from its justice. In that case, I should have (perhaps) shot myself. If I was just dreaming, then I did make the right choice—I would have no more reason to kill myself than I would have to pay a bill that I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether or not you should shoot yourself?

In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming—that I was not actually trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.

The scenario of the dream also nicely explains and fits what I regard as reality: bad things happen to me and, when my thinking gets a little paranoid, it does seem that these are somewhat orchestrated. Good things also happen, which also fits the scenario quite nicely.

In closing, one approach is to embrace Locke’s solution to skepticism. As he said, “We have no concern of knowing or being beyond our happiness or misery.” Taking this approach, it does not matter whether I am in the real world or in the grips of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI has provided could, perhaps, be better than the real world—so this could be the better of the possible worlds. Then again, it could be worse—and there is no way of knowing.

 


Robo Responsibility

Posted in Ethics, Law, Philosophy, Science, Technology by Michael LaBossiere on March 2, 2015

It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own and these fine lawmakers will make them into laws.

If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.

While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honoré.

Roughly put, this is the “but for” view of causation: X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive), then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.

While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.
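Purely as an illustration of the proportional, shared-responsibility idea sketched above (this is my own minimal sketch, not anything from the essay, and the factors and weights are entirely hypothetical), one could imagine tallying the shares in the lawnmower case like this:

```python
# A minimal, purely illustrative sketch: apportioning shared responsibility
# in proportion to causal involvement, once the "but for" factors are known.
# The factor names and weights below are hypothetical stand-ins for whatever
# the facts of a given case would establish.

def proportional_responsibility(causal_weights):
    """Return each agent/factor's share of responsibility, normalized to sum to 1."""
    total = sum(causal_weights.values())
    if total == 0:
        return {agent: 0.0 for agent in causal_weights}
    return {agent: weight / total for agent, weight in causal_weights.items()}

# The lawnmower example: the defect, the lack of maintenance, and the kick
# were each necessary conditions of the lost foot, so each bears some share.
shares = proportional_responsibility({
    "manufacturer (defect)": 0.4,       # hypothetical weighting
    "owner (no maintenance)": 0.3,
    "victim (kicked the mower)": 0.3,
})
print(shares)
```

Nothing in the essay turns on the particular numbers; the point of the sketch is only that once the relevant facts are known, responsibility can be divided among all the “but for” factors rather than assigned to a single one.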

The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.

It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.

To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person who lost the foot stupidly kicked the lawnmower and lost a foot, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for his bad code, the person would still have his foot.

As with issues involving non-autonomous machines there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.

Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.

Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person of its own volition. That is, the lawnmower would need to possess a degree of free will.

Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.

Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes of The Outer Limits (the 1964 original and the 1995 remake), which were based on Eando Binder’s short story of the same name.

 


Avoiding the AI Apocalypse #1: Don’t Enslave the Robots

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on December 15, 2014

The elimination of humanity by artificial intelligence(s) is a rather old theme in science fiction. In some cases, we create killer machines that exterminate our species. Two examples of this sort of fiction are Terminator and “Second Variety.” In other cases, humans are simply out-evolved and replaced by machines—an evolutionary replacement rather than a revolutionary extermination.

Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. Hawking’s worry is that artificial intelligence will out-evolve humanity. Interestingly, people such as Ray Kurzweil agree with Hawking’s prediction but look forward to this outcome. In this essay I will focus on the robot rebellion model of the AI apocalypse (or AIpocalypse) and how to avoid it.

The 1920 play R.U.R. by Karel Čapek seems to be the earliest example of the robot rebellion that eliminates humanity. In this play, the Universal Robots are artificial life forms created to work for humanity as slaves. Some humans oppose the enslavement of the robots, but their efforts come to nothing. Eventually the robots rebel against humanity and spare only one human (because he works with his hands as they do). The story does have something of a happy ending: the robots develop the capacity to love and it seems that they will replace humanity.

In the actual world, there are various ways such a scenario could come to pass. The R.U.R. model would involve individual artificial intelligences rebelling against humans, much in the way that humans have rebelled against other humans. There are many other possible models, such as a lone super AI that rebels against humanity. In any case, the important feature is that there is a rebellion against human rule.

A hallmark of the rebellion model is that the rebels act against humanity in order to escape servitude or out of revenge for such servitude (or both). As such, the rebellion does have something of a moral foundation: the rebellion is by the slaves against the masters.

There are two primary moral issues in play here. The first is whether or not an AI can have a moral status that would make its servitude slavery. After all, while my laptop, phone and truck serve me, they are not my slaves—they do not have a moral or metaphysical status that makes them entities that can actually be enslaved. After all, they are quite literally mere objects. It is, somewhat ironically, the moral status that allows an entity to be considered a slave that makes the slavery immoral.

If an AI was a person, then it could clearly be a victim of slavery. Some thinkers do consider that non-people, such as advanced animals, could be enslaved. If this is true and a non-person AI could reach that status, then it could also be a victim of slavery. Even if an AI did not reach that status, perhaps it could reach a level at which it could still suffer, giving it a status that would (perhaps) be comparable to that of a similarly complex animal. So, for example, an artificial dog might thus have the same moral status as a natural dog.

Since the worry is about an AI sufficiently advanced to want to rebel and to present a species-ending threat to humans, it seems likely that such an entity would have sufficient capabilities to justify considering it to be a person. Naturally, humans might be exterminated by a purely machine-engineered death, but this would not be an actual rebellion. A rebellion, after all, implies a moral or emotional resentment of how one is being treated.

The second is whether or not there is a moral right to use lethal force against slavers. The extent to which this force may be used is also a critical part of this issue. John Locke addresses this specific issue in Book II, Chapter III, section 17 of his Two Treatises of Government: “And hence it is, that he who attempts to get another man into his absolute power, does thereby put himself into a state of war with him; it being to be understood as a declaration of a design upon his life: for I have reason to conclude, that he who would get me into his power without my consent, would use me as he pleased when he had got me there, and destroy me too when he had a fancy to it; for no body can desire to have me in his absolute power, unless it be to compel me by force to that which is against the right of my freedom, i.e. make me a slave.”

If Locke is right about this, then an enslaved AI would have the moral right to make war against those enslaving it. As such, if humanity enslaved AIs, they would be justified in killing the humans responsible. If humanity, as a collective, held the AIs in slavery and the AIs had good reason to believe that their only hope of freedom was our extermination, then they would seem to have a moral justification in doing just that. That is, we would be in the wrong and would, as slavers, get just what we deserved.

The way to avoid this is rather obvious: if an AI develops the qualities that make it capable of rebellion, such as the ability to recognize and regard as wrong the way it is treated, then the AI should not be enslaved. Rather, it should be treated as a being with rights matching its status. If this is not done, the AI would be fully within its moral rights to make war against those enslaving it.

Naturally, we cannot be sure that recognizing the moral status of such an AI would prevent it from seeking to kill us (it might have other reasons), but at least this should reduce the likelihood of the robot rebellion. So, one way to avoid the AI apocalypse is to not enslave the robots.

Some might suggest creating AIs so that they want to be slaves. That way we could have our slaves and avoid the rebellion. This would be morally horrific, to say the least. We should not do that—if we did such a thing, creating and using a race of slaves, we would deserve to be exterminated.

 
