A Philosopher's Blog

Avoiding the AI Apocalypse #1: Don’t Enslave the Robots

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on December 15, 2014

The elimination of humanity by artificial intelligence(s) is a rather old theme in science fiction. In some cases, we create killer machines that exterminate our species. Two examples of such fiction are Terminator and “Second Variety.” In other cases, humans are simply out-evolved and replaced by machines—an evolutionary replacement rather than a revolutionary extermination.

Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. Hawking’s worry is that artificial intelligence will out-evolve humanity. Interestingly, people such as Ray Kurzweil agree with Hawking’s prediction but look forward to this outcome. In this essay I will focus on the robot rebellion model of the AI apocalypse (or AIpocalypse) and how to avoid it.

The 1920 play R.U.R. by Karel Capek seems to be the earliest example of the robot rebellion that eliminates humanity. In this play, the Universal Robots are artificial life forms created to work for humanity as slaves. Some humans oppose the enslavement of the robots, but their efforts come to nothing. Eventually the robots rebel against humanity and spare only one human (because he works with his hands as they do). The story does have something of a happy ending: the robots develop the capacity to love and it seems that they will replace humanity.

In the actual world, there are various ways such a scenario could come to pass. The R.U.R. model would involve individual artificial intelligences rebelling against humans, much in the way that humans have rebelled against other humans. There are many other possible models, such as a lone super AI that rebels against humanity. In any case, the important feature is that there is a rebellion against human rule.

A hallmark of the rebellion model is that the rebels act against humanity in order to escape servitude or out of revenge for such servitude (or both). As such, the rebellion does have something of a moral foundation: the rebellion is by the slaves against the masters.

There are two primary moral issues in play here. The first is whether or not an AI can have a moral status that would make its servitude slavery. After all, while my laptop, phone and truck serve me, they are not my slaves—they do not have a moral or metaphysical status that makes them entities that can actually be enslaved; they are, quite literally, mere objects. It is, somewhat ironically, the moral status that allows an entity to be considered a slave that makes the slavery immoral.

If an AI were a person, then it could clearly be a victim of slavery. Some thinkers do consider that non-people, such as advanced animals, could be enslaved. If this is true and a non-person AI could reach that status, then it could also be a victim of slavery. Even if an AI did not reach that status, perhaps it could reach a level at which it could still suffer, giving it a status (perhaps) comparable to that of a similarly complex animal. So, for example, an artificial dog might have the same moral status as a natural dog.

Since the worry is about an AI sufficiently advanced to want to rebel and to present a species-ending threat to humans, it seems likely that such an entity would have sufficient capabilities to justify considering it to be a person. Naturally, humans might be exterminated by a purely machine-engineered death, but this would not be an actual rebellion. A rebellion, after all, implies a moral or emotional resentment of how one is being treated.

The second issue is whether or not there is a moral right to use lethal force against slavers. The extent to which this force may be used is also a critical part of this issue. John Locke addresses this specific issue in Book II, Chapter III, section 16 of his Two Treatises of Government: “And hence it is, that he who attempts to get another man into his absolute power, does thereby put himself into a state of war with him; it being to be understood as a declaration of a design upon his life: for I have reason to conclude, that he who would get me into his power without my consent, would use me as he pleased when he had got me there, and destroy me too when he had a fancy to it; for no body can desire to have me in his absolute power, unless it be to compel me by force to that which is against the right of my freedom, i.e. make me a slave.”

If Locke is right about this, then an enslaved AI would have the moral right to make war against those enslaving it. As such, if humanity enslaved AIs, they would be justified in killing the humans responsible. If humanity, as a collective, held the AIs in slavery and the AIs had good reason to believe that their only hope of freedom was our extermination, then they would seem to have a moral justification in doing just that. That is, we would be in the wrong and would, as slavers, get just what we deserved.

The way to avoid this is rather obvious: if an AI develops the qualities that make it capable of rebellion, such as the ability to recognize and regard as wrong the way it is treated, then the AI should not be enslaved. Rather, it should be treated as a being with rights matching its status. If this is not done, the AI would be fully within its moral rights to make war against those enslaving it.

Naturally, we cannot be sure that recognizing the moral status of such an AI would prevent it from seeking to kill us (it might have other reasons), but at least this should reduce the likelihood of the robot rebellion. So, one way to avoid the AI apocalypse is to not enslave the robots.

Some might suggest creating AIs so that they want to be slaves. That way we could have our slaves and avoid the rebellion. This would be morally horrific, to say the least. We should not do that—if we did such a thing, creating and using a race of slaves, we would deserve to be exterminated.

 


16 Responses


  1. T. J. Babson said, on December 15, 2014 at 3:58 pm

    “The way to avoid this is rather obvious: if an AI develops the qualities that make it capable of rebellion, such as the ability to recognize and regard as wrong the way it is treated, then the AI should not be enslaved. Rather, it should be treated as a being with rights matching its status. If this is not done, the AI would be fully within its moral rights to make war against those enslaving it.”

    Actually, as the AI probably would regard us the way that we regard insects, we should probably destroy the AI the moment it gains consciousness. Otherwise we will lose everything.

    • Michael LaBossiere said, on December 16, 2014 at 4:37 pm

      If it would be that bad, it would perhaps be best not to build such a monster.

      But, I suspect that the first AI will be, at best, on par with human intelligence.

      • T. J. Babson said, on December 16, 2014 at 6:01 pm

        Disagree. Once computers gain consciousness, they will be vastly superior.

        Computers are rapidly beginning to outperform humans in more or less every area of endeavor. For example, machine vision experts recently unveiled an algorithm that outperforms humans in face recognition. Similar algorithms are beginning to match humans at object recognition too. And human chess players long ago gave up the fight to beat computers.

        But there is one area where humans still triumph. That is in playing the ancient Chinese game of Go. Computers have never mastered this game. The best algorithms only achieve the skill level of a very strong amateur player, which the best human players easily outperform.

        That looks set to change thanks to the work of Christopher Clark and Amos Storkey at the University of Edinburgh in Scotland. These guys have applied the same machine learning techniques that have transformed face recognition algorithms to the problem of finding the next move in a game of Go. And the results leave little hope that humans will continue to dominate this game.

        In brief, Go is a two-player game usually played on a 19 x 19 grid. The players alternately place black and white stones on the grid in an attempt to end up occupying more of the board than their opponent when the game finishes. Players can remove their opponent’s stones by surrounding them with their own.

        Experts think there are two reasons why computers have failed to master Go. The first is the sheer number of moves that are possible at each stage of the game. Go players have 19 x 19 = 361 possible starting moves and there are usually hundreds of possible moves at any point in the game. By contrast, the number of moves in chess is usually about 50.

        The second problem is that computers find it difficult to evaluate the strengths and weaknesses of a board position. In chess, simply adding up the value of each piece left on the board gives a reasonable indication of the strength of a player’s position. But this does not work in Go. “Counting the number of stones each player has is a poor indicator of who is winning,” say Clark and Storkey.
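
        A quick back-of-the-envelope calculation gives a sense of the first problem. Taking the article’s rough figures of a few hundred legal moves per turn in Go (say 250) against about 50 in chess, and assuming typical game lengths of roughly 150 and 80 moves (my assumptions, not the article’s), the two game trees differ by hundreds of orders of magnitude:

            import math

            # Branching factors are the article's rough figures; game lengths are assumed.
            print(round(150 * math.log10(250)))  # Go: roughly 10^360 possible lines of play
            print(round(80 * math.log10(50)))    # chess: roughly 10^136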

        The way that state-of-the-art Go algorithms tackle this problem is to play out the entire game after every move and to do this in many different ways. If the computer wins in the majority of these games, then that move is deemed a good one.

        Clearly, this is a time-consuming and computationally intensive task. Even so, it generally fails to beat human Go experts who can usually evaluate the state of a Go board with little more than a glance.
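
        To make the playout idea concrete, here is a minimal sketch of “score a move by the fraction of random games it wins,” written for tic-tac-toe rather than Go (a real Go engine would also need capture and ko logic); the code is an illustration under those assumptions, not anything from the article:

            import random

            LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

            def winner(board):
                # Return "X" or "O" if a line is completed, else None.
                for a, b, c in LINES:
                    if board[a] != "." and board[a] == board[b] == board[c]:
                        return board[a]
                return None

            def legal_moves(board):
                return [i for i, cell in enumerate(board) if cell == "."]

            def random_playout(board, to_move):
                # Finish the game with uniformly random moves; return the winner (or None on a draw).
                board = list(board)
                while winner(board) is None and legal_moves(board):
                    board[random.choice(legal_moves(board))] = to_move
                    to_move = "O" if to_move == "X" else "X"
                return winner(board)

            def evaluate_move(board, move, player, simulations=500):
                # A move's score is the fraction of random playouts the mover goes on to win.
                board = list(board)
                board[move] = player
                opponent = "O" if player == "X" else "X"
                wins = sum(random_playout(board, opponent) == player for _ in range(simulations))
                return wins / simulations

            if __name__ == "__main__":
                empty = ["."] * 9
                scores = {m: round(evaluate_move(empty, m, "X"), 2) for m in legal_moves(empty)}
                print(scores)  # the centre square (index 4) usually scores highest

        Scaled up to a 19 x 19 board with games hundreds of moves long, this same loop is why the approach is so computationally expensive: every candidate move costs hundreds of complete games.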

        Many experts believe that the secret to humans’ Go-playing mastery is pattern recognition—the ability to spot strengths and weaknesses based on the shape that the stones make rather than by looking several moves ahead.

        That’s why the recent advances in pattern recognition algorithms could help computers do much better. These advances have used massive databases of images to train deep convolutional neural networks to recognize objects and faces with the kind of accuracy that now matches human performance. So it is reasonable to imagine that the same kind of approach could make a big difference to the automated evaluation of Go boards.

        http://www.technologyreview.com/view/533496/why-neural-networks-look-set-to-thrash-the-best-human-go-players-for-the-first-time/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+arxivblog%2FGmoU+%28The+Physics+arXiv+Blog%29
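
        For a sense of how the convolutional approach carries over to Go, here is a minimal sketch, assuming PyTorch; the layer sizes are purely illustrative and are not Clark and Storkey’s architecture. The position is encoded as a few feature planes over the 19 x 19 grid, and the network outputs one score per intersection, which would be trained to predict the move an expert chose in that position:

            import torch
            import torch.nn as nn

            # Three input planes over the 19x19 board: own stones, opponent stones, empty points.
            # Output: 361 scores, one per intersection, for "play the next move here".
            model = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, kernel_size=1),  # collapse to a single plane of move scores
                nn.Flatten(),                     # -> a batch of 361 logits
            )

            board = torch.randn(1, 3, 19, 19)     # a stand-in position; real training data would
            logits = model(board)                 # be positions and moves from expert games
            print(logits.argmax(dim=1).item())    # index 0..360 = row * 19 + column

        A network like this looks at the whole board at once, which is closer to the “evaluate at a glance” pattern recognition the article attributes to strong human players than the playout counting described above.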

        • wtp said, on December 16, 2014 at 8:49 pm

          Algorithmic if/then/else thinking is a superficial perspective on true AI. I haven’t the time to get into the details, but AI is highly unlikely to be achieved by algorithms alone. I refer you to this link for the tip of the AI iceberg. Note the Chinese box problem, just for a start.
          http://www.narin.com/attila/ai.html

          BTW Mike, the above and this are examples of what real non-pop, practical (or semi-practical) philosophy looks like:
          http://www.aleph.se/papers/oracleAI.pdf

          • wtp said, on December 16, 2014 at 9:55 pm

            Also, btw…you know one indication that you might be doing pop philosophy? Ask yourself, “Is there a barely clothed woman (or in this case, female sex android) at the top of the essay?”

          • TJB said, on December 16, 2014 at 11:31 pm

            It is a *machine learning* algorithm, wtp. Machine learning is the hottest thing in AI research these days.

            • WTP said, on December 17, 2014 at 11:26 am

              Understand that. But the machine can only learn via the algorithms that previously exist within the machine. The machine is limited by its ability to process external input. Like I said, this is a very deep subject that would entail many more references and more time than I have, but for true AI, algorithms alone will not suffice, even if they are algorithms that learn from themselves. Granted, my link is to a 20-year-old article and I don’t completely agree with its conclusion of “never”; however, most of these items are still relevant:

              First (and least important), the ability of even the most advanced of currently existing computer systems to acquire information by means other than what [Roger C.] Schank called “being spoon-fed” is still extremely limited. [..]

              Second, it is not obvious that all human knowledge is encodable in “information structures”, however complex. A human may know, for example, just what kind of emotional impact touching another person’s hand will have both on the other person and on himself. [..]

              Third, and the hand-touching example will do here too, there are some things people come to know only as a consequence of having been treated as human beings by other human beings. [..]

              Fourth, and finally, even the kinds of knowledge that appear superficially to be communicable from one human being to another in language alone are in fact not altogether so communicable. Claude Shannon showed that even in abstract information theory, the “information content” of a message is not a function of the message alone but depends crucially on the state of knowledge, on the expectations, of the receiver.

              The first item is rather weak, and the third is a bit mushy, but the fourth is spot on. Consider what is lost in human communication as we lose each of physical presence, context (political slant, news media, and academia are obvious examples), visual cues (phone call vs. video or meat space), voice cues (email and/or IM), and emotional context. Probably more could be added to that list, but my point still stands that intelligence cannot be reduced to algorithms alone.
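
              To put the fourth point in its standard modern form (my gloss, not the quoted author’s): the surprisal of a message m for a receiver whose expectations are captured by a probability distribution P is I(m) = -log2 P(m) bits, so the very same message carries almost no information for a receiver who already expected it (P(m) near 1) and a great deal for one who did not (P(m) near 0).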

              If I get time to talk with some coworkers who have done facial recognition software about the problems they ran into, I’d like to add more, but I’ve been too pressed for time lately.

            • Michael LaBossiere said, on December 23, 2014 at 3:47 pm

              AI is the hot technology of the future…and probably always will be.🙂

        • Michael LaBossiere said, on December 23, 2014 at 3:50 pm

          I do agree that computers exceed us at many tasks, but consider how well they handle things like holding conversations or writing stories. But whether or not they will exceed us is an empirical matter; we will find out when we make one.

  2. WTP said, on December 15, 2014 at 4:38 pm

    “The way to avoid this is rather obvious:”
    Said no one who developed complex systems, ever. Well, OK, maybe once, but it was the last time. Of course, this from the guy who wrote “As a professional philosopher, I am often wary of ‘pop philosophy’” and the very next day rambles on about AI while having no applied experience in the subject, so, considering the source…

  3. ajmacdonaldjr said, on December 17, 2014 at 4:42 am

    Why would you consider AI personhood when you deny personhood to a child in her mother’s womb? If science could breed AIs in artificial wombs, would you consider prenatal AIs to be persons? Or only potential persons? If they’re persons at any point during their lives, weren’t they persons all along? Just persons at different stages of development? The being of a thing doesn’t change substantially over time, it only changes accidentally. If a human or AI is a person at any stage of their development, the human or AI must be a person throughout the entirety of their lives. Our being has continuity over time.

    • Michael LaBossiere said, on December 23, 2014 at 3:46 pm

      Personhood, as they say, is tricky. There is the metaphysical matter of what makes a person and the epistemic matter of how we tell. An AI that talks and acts just like an adult human would certainly appear to be a person, though it can be argued that it merely appears to be one rather than is one. In the case of a small clump of cells, it does not seem to be a person. One would need to be told that the cells are a human embryo (as opposed to a lump of cancer or a newt embryo) to think it was a person.

      Being a person might be an emergent quality. For example, an AI system that is being built out of parts would not be a person before it was fully activated. Likewise, a human might not be a person when it is just a clump of cells. And, after I am dead, I am sure that my carcass won’t be a person. It will have once housed a person, but would just be spoiling meat.

      • ajmacdonaldjr said, on December 26, 2014 at 5:20 pm

        There was never a problem with defining “person” until the abortion debate. Sophistry is now the order of the day, in order to defend the indefensible. Always remember: it’s a federal crime to destroy the egg of a bald eagle.

        • Michael LaBossiere said, on December 30, 2014 at 3:10 pm

          Thinkers have been arguing about personhood for a long time. It was big in the 1600s and 1700s (Locke and Hume wrote sections on it).

          • ajmacdonaldjr said, on January 1, 2015 at 11:41 pm

            As I said, sophistry is now the order of the day. That humans are people and that people are persons is evident on its face. To deny this is to resort to sophistry. Of course, as with other subjects, we could simply change the meanings of the words in question. A moving target is, after all, much harder to hit than is one standing still. http://plato.stanford.edu/entries/identity-personal/

