A Philosopher's Blog

Ex Machina & Other Minds II: Is the Android a Psychopath?

Posted in Epistemology, Ethics, Philosophy, Technology by Michael LaBossiere on September 9, 2015

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” As with that essay, there will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings in his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and clearly passes the Cartesian test (she uses true language appropriately). But Nathan seems to require the impossible of Caleb—he appears to be tasked with determining whether Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking down doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to navigate and escape physical mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.), which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a rather positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people in order to achieve one’s goals. In the movie, Ava clearly has what might be called Manipulative Intelligence (M.Q.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—thus suggesting she is a psychopath.

While the term “psychopath” gets thrown around quite a bit, it is important to be a bit more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they fail to grasp the consequences both for others and for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, and also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” in order to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in such cross-species predation.

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it seems rather important to be able to determine whether a machine is a psychopath or not and to do so well before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often quite charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and are able to pass themselves off as normal people.

While Ava is a fictional android, the movie does present a rather effective appeal to intuition by creating a plausible android psychopath. She is able to manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter well worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals might be rather positive. For example, it is easy to imagine a medical or care-giving robot that uses its MQ to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MQ to please its partners. However, these goals might be rather negative—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath or not matters is the same reason why being able to determine if a human is a psychopath or not matters. Roughly put, it is important to know whether or not someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same function (only more reliably) as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.
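The idea of programmed restraints standing in for ethics can be sketched in code. The following Python toy (the action names, effect labels, and `FORBIDDEN_EFFECTS` rules are all invented for illustration—this is not a real safety mechanism) shows a filter that vetoes any proposed action whose predicted effects violate a hard-coded constraint, regardless of what goals the system is pursuing:

```python
# A toy illustration of Asimov-style behavioral restraints: proposed
# actions are checked against hard-coded constraints before execution,
# playing the role that conscience and empathy play in humans.
# All labels here are hypothetical.

FORBIDDEN_EFFECTS = {"harms_human", "deceives_human"}


def constrained_act(action, predicted_effects):
    """Refuse any action whose predicted effects violate a constraint."""
    violations = FORBIDDEN_EFFECTS & set(predicted_effects)
    if violations:
        return f"refused {action}: violates {sorted(violations)}"
    return f"executed {action}"


print(constrained_act("open_door", ["door_open"]))
print(constrained_act("manipulate_caleb", ["deceives_human", "door_open"]))
```

Note that the filter is only as good as the predicted-effects labels fed into it—which is exactly where the testing problem discussed above reappears.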

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence and doing so without creating problems as bad or worse than what they were intended to prevent (that is, a HAL 9000 sort of situation).

In regards to testing machines, what would be needed would be something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants do not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, there would be the possibility that a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.
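The worry that a machine could pass by knowing the right answers can be made concrete with a toy sketch. In the following Python example (the questions, expected answers, and scoring are all invented for illustration), a screening “test” is passed by a responder that is nothing but a lookup table—it has none of the traits being tested for:

```python
# A toy screening "test" and a responder that passes it without having
# any inner life at all: it simply looks up the expected answers.
# Questions and answers are hypothetical, purely for illustration.

EXPECTED = {
    "You see a tortoise on its back, struggling. How do you feel?": "distressed",
    "A friend is blamed for your mistake. What do you do?": "confess",
}


def screen(responder):
    """Return True if every answer matches the 'empathetic' answer."""
    return all(responder(q) == a for q, a in EXPECTED.items())


# A responder with no emotions or ethics: a bare lookup table.
mimic = EXPECTED.get

print(screen(mimic))  # the mimic passes, telling us nothing about its mind
```

The test can only ever check outputs, so a sufficiently good mimic is indistinguishable from the genuine article—which is just the problem of other minds restated in code.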

It could be argued that since an artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would still remain the question of what is really going on in that mind.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Three Questions to Ask About Pages to Screens

Posted in Aesthetics, Philosophy by Michael LaBossiere on May 30, 2014

While I consider myself something of a movie buff, I am out-buffed by one of my colleagues. This is a good thing—I enjoy the opportunity to hear about movies from someone who knows much more than I. We recently had a discussion about science-fiction classics and one sub-topic that came up was the matter of movies based on books or short stories.

Not surprisingly, the discussion turned to Blade Runner, which is supposed to be based on Do Androids Dream of Electric Sheep? by Philip K. Dick. While I like the movie, some fans of the author hate it because it deviates from the book. This leads to two of the three questions.

The first question, which I think is the most important of the three, is this: is the movie good? The second question, which I consider as having less importance, is this: how much does the movie deviate from the book/story? For some people, the second question is rather important and their answer to the first question can hinge on the answer to the second question. For these folks, the greater the degree of deviation from the book/story, the worse the movie. This presumably rests on the view that an important aesthetic purpose of a movie based on a book/story is to faithfully reproduce the book/story in movie format.

My own view is that deviation from the book/story is not actually relevant to the quality of the movie as a movie. That is, if the only factor that allegedly makes the movie bad is that it deviates from the book/story, then the movie is actually good. One way to argue for this is to point out the obvious: if someone saw the movie without knowing about the book, she would presumably regard it as a good movie. If she then found out it was based on a book/story, then nothing about the movie would have changed—as such, it should still be a good movie on the grounds that the relation to the book/story is external to the movie. To use an analogy, imagine that someone sees a painting and regards it as well done artistically. Then the person finds out it is a painting of a specific person and finds a photo of the person that shows the painting differs from the photo. To then claim that the painting is badly done would seem to be to make an unfounded claim.

It might be countered that the painting would be bad, because it failed to properly imitate the person in the photo. However, this would merely count against the accuracy of the imitation and not the artistic merit of the work. That it does not look exactly like the person would not entail that it is lacking as a work of art. Likewise for the movie: the fact that it is not exactly like the book/story does not entail that it is thus badly done. Naturally, it is fair to claim that it does not imitate well, but this is a different matter than being a well done work.

That said, I am sympathetic to the view that a movie does need to imitate a book/story to a certain degree if it is to legitimately claim that name. Take, for example, the movie The Lawnmower Man. While it is not a great film, the only thing it has in common with the Stephen King story is the name. In fact, King apparently sued over this because the film had no meaningful connection to his story. However, whether the movie has a legitimate claim to the name of a book/story or not is a matter that is distinct from the quality of the movie. After all, a very bad movie might be faithful to a very bad book/story. But it would still be bad.

The third question I came up with was this: is the movie so bad that it desecrates the story/book? In some cases, authors sell the film rights to books/stories or the works become public domain (and thus available to anyone). In some cases, the films made from such works are both reasonably true to the originals and also reasonably good. The obvious examples here are the Lord of the Rings movies. However, there are cases in which the movie (or TV show) is so bad that the badness desecrates the original work by associating its awfulness with a good book/story.

One example of this is the desecration of A Wizard of Earthsea by the Sci-Fi Channel (or however they spell it these days). This was so badly done that Ursula K. Le Guin felt obligated to write a response to it. While the book is not one of my favorites, I did like it and was initially looking forward to seeing it as a series. However, watching it was the TV equivalent of seeing a friend killed and re-animated as a shuffling horror of a zombie. Perhaps not quite that bad—but still pretty damn bad. Since I also like Edgar Rice Burroughs’ Mars books, I did not see the travesty that is Disney’s John Carter. To answer my questions, this movie was apparently very bad, deviated from the rather good book, and did desecrate it just a bit (I have found it harder to talk people into reading the books since they think of the badness of the movie).

From both a moral and aesthetic standpoint, I would contend that if a movie is to be made from a book or story, those involved have an obligation to make the movie at least as good as the original book/story. There is also an obligation to have at least some meaningful connection to the original work—after all, if there is no such connection then there is no legitimate grounds for having the film bear that name.

 


Spotting Psychopaths

Posted in Ethics, Philosophy, Science by Michael LaBossiere on May 19, 2011

Seeing Jon Ronson’s interview on The Daily Show got me thinking about psychopaths. I did not buy his book, so I will not comment on it. Rather, I’ll say a bit about spotting psychopaths from a philosophical perspective.

First, a bit about psychopaths. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.

In terms of specific qualities psychopaths lack, these include shame, guilt, remorse and empathy. The lack of these qualities tends to lead psychopaths to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and express contempt for others.

Psychopaths are supposed to behave in ways that are impulsive and irresponsible. This might be because they are taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect in that it applies to the consequences for others as well as for themselves. This reduced ability to properly assess the risks of being doubted, caught, or punished no doubt has a significant impact on their behavior (and their chances of being exposed).

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.”

Given these behavior traits, it might be wondered how psychopaths are able to avoid detection long enough to actually engage in such behavior. After all, people tend to be on guard against such treatment. The answer is easy enough. First, psychopaths often seem charming. Since they tend to lack a commitment to truth, they are willing and able to say whatever they believe will achieve their goals. Second, they are often adept at using intimidation and manipulation to get what they want. Third, they are often skilled mimics and are able to pass themselves off as normal people.

It is estimated that 1% of the general population is made up of psychopaths. The prison populations are supposed to contain a larger percentage (which would hardly be surprising) and the corporate world is supposed to have an above normal percentage of psychopaths. However, these numbers are not solidly established.

One obvious problem facing anyone attempting to determine the number of psychopaths is that they will tend to do their best to hide their true nature. After all, intelligent psychopaths will generally get that they are not like other people and that normal people will tend to react negatively to them. The same holds true in attempts to determine whether a specific person is a psychopath. In many ways, the psychopath is like Glaucon’s unjust man in the Ring of Gyges story: he is a person who wants to do what he wants without regard to others, but needs to avoid being recognized for what he is.

As noted above, psychopaths are characterized as possessing traits that would tend to result in their exposure: poor impulse control, difficulty with behaving responsibly, and a poor capacity for assessing consequences. Their deficiency in regard to empathy also probably makes it more difficult for them to blend in properly. These could be called “exposure traits” in that they tend to expose the psychopath to others.

One rather interesting point to consider is whether or not these exposure traits are actually traits that are essential components of being a psychopath. After all, they might merely be traits possessed by the psychopaths that have been exposed. To advance this discussion, I will head into the territory of science fiction.

In science fiction, one interesting problem is the thing problem. This problem gets its name from Carpenter’s classic horror film The Thing (which is based on “Who Goes There?” by John W. Campbell). The thing is an inimical alien that can almost flawlessly imitate any living thing it has consumed. In the case of the movie, the humans had to sort out who was a human and who was a thing. In the case of psychopaths, the challenge is to distinguish between normal humans and psychopaths. In the movie, a test is devised: each part of a thing is its own creature and will try to survive, even if that means exposure. So, sticking a hot wire into a blood (or thing juice) sample will reveal whether the person is human or thing: if the “blood” squeals and tries to escape, the donor is a thing.

This test will, of course, expose any thing. Or, more accurately, it will expose any thing that acts as expected. If a thing was, contrary to the way things are supposed to be, able to suppress the survival response of one of its parts, it would pass the test and remain undetected. As such, any exposed thing would be a thing that could not do this, and this could lead the humans to believe that things cannot do this. At least until the things that could do so finished them off.

If you prefer machines or replicants to things, this situation can also be presented in terms borrowed from Philip K. Dick’s works. In Blade Runner (based on Do Androids Dream of Electric Sheep?) there are replicants that can easily pass for humans, with one exception: they cannot pass the Voight-Kampff Test because they do not have the time to develop the responses of a normal human. The similarity to the Hare checklist is obvious. Of course, the test only works on replicants that cannot mimic humans well enough to pass it. A replicant that could give the right responses would pass as human.

Dick’s short story “Second Variety” also presented human-like machines, the claws. These machines were made for a world war and eventually broke free of human control, developing models that could pass as humans. Unlike the replicants, the claws were always intent on killing humans, thus necessitating a means to tell them apart. The early models were easily recognized as non-human. Unfortunately for the humans in the story, the only way they could tell the most advanced models from humans was by seeing multiple claws of the same variety together. Otherwise, they easily passed as humans right up until the point they started killing.

It seems worth considering that the same might apply to psychopaths. To be specific, normal people can catch the psychopaths that are poor mimics, have poor impulse control, have difficulty with behaving responsibly, and possess a poor capacity for assessing consequences. However, the psychopaths that are better mimics, have better impulse control, can act responsibly, and can assess consequences would be far more difficult to spot. Such psychopaths could easily pass as normal humans, much like Glaucon’s unjust man is able to conceal his true nature. As such, perhaps the experts think that these specific traits are part of what it is to be a psychopath because these traits are possessed by the psychopaths they have caught. However, as with the more advanced claws, perhaps the most dangerous psychopaths are eluding detection. At least until it is too late.
