A Philosopher's Blog

The Simulation I: The Problem of the External World

Posted in Epistemology, Metaphysics, Philosophy, Technology by Michael LaBossiere on October 24, 2016

Elon Musk and others have advanced the idea that we exist within a simulation. The latest twist on this is that he and others are allegedly funding efforts to escape this simulation. This is, of course, the most recent chapter in the ancient philosophical problem of the external world. Put briefly, this problem is the challenge of proving that what seems to be a real external world is, in fact, a real external world. As such, it is a problem in epistemology (the study of knowledge).

The problem is often presented in the context of metaphysical dualism. This is the view that reality is composed of two fundamental categories of stuff: mental stuff and physical stuff. The mental stuff is supposed to be what the soul or mind is composed of, while things like tables and kiwis (the fruit and the bird) are supposed to be composed of physical stuff. Using the example of a fire that I seem to be experiencing, the problem would be trying to prove that the idea of the fire in my mind is being caused by a physical fire in the external world.

René Descartes has probably the best known version of this problem—he proposes that he is being deceived by an evil demon that creates, in his mind, an entire fictional world. His solution to this problem was to doubt until he reached something he could not doubt: his own existence. From this, he inferred the existence of God and then, over the rest of his Meditations on First Philosophy, he established that God was not a deceiver. Since a non-deceiving God would not allow him to be systematically fooled about the world, the senses can generally be trusted: going back to the fire example, if I seem to see a fire, then there probably is an external, physical fire causing that idea. Descartes did not, obviously, decisively solve the problem: otherwise Musk and his fellows would be easily refuted by using Descartes’ argument.

One often overlooked contribution Descartes made to the problem of the external world is consideration of why the deception is taking place. Descartes attributes the deception of the demon to malice—it is an evil demon (or evil genius). In contrast, God’s goodness entails he is not a deceiver. In the case of Musk’s simulation, there is the obvious question of the motivation behind it—is it malicious (like Descartes’ demon) or more benign? On the face of it, such deceit does seem morally problematic—but perhaps the simulators have excellent moral reasons for this deceit. Descartes’s evil demon does provide the best classic version of Musk’s simulation idea since it involves an imposed deception. More on this later.

John Locke took a rather more pragmatic approach to the problem. He rejected the possibility of certainty and instead argued that what matters is understanding the world well enough to avoid pain and achieve pleasure. Going back to the fire, Locke would say that he could not be sure that the fire was really an external, physical entity. But, he has found that being in what appears to be fire has consistently resulted in pain and hence he understands enough to want to avoid standing in fire (whether it is real or not). This invites an obvious comparison to video games: when playing a game like World of Warcraft or Destiny, the fire is clearly not real. But, because having your character fake die in fake fire results in real annoyance, it does not really matter that the fire is not real. The game is, in terms of enjoyment, best played as if it is.

Locke does provide the basis of a response to worries about being in a simulation, namely that it would not matter if we were or were not—from the standpoint of our happiness and misery, it would make no difference if the causes of pain and pleasure were real or simulated. Locke, however, does not consider that we might be within a simulation run by others. If it were determined that we are victims of a deceit, then this would presumably matter—especially if the deceit were malicious.

George Berkeley, unlike Locke and Descartes, explicitly and passionately rejected the existence of matter—he considered it a gateway drug to atheism. Instead, he embraced what is called “idealism”, “immaterialism” and “phenomenalism.” His view was that reality is composed of metaphysical immaterial minds and these minds have ideas. As such, for him there is no external physical reality because there is nothing physical. He does, however, need to distinguish between real things and hallucinations or dreams. His approach was to claim that real things are more vivid than hallucinations and dreams. Going back to the example of fire, a real fire for him would not be a physical fire composed of matter and energy. Rather, I would have a vivid idea of fire. For Berkeley, the classic problem of the external world is sidestepped by his rejection of the external world. However, it is interesting to speculate how a simulation would be handled by Berkeley’s view.

Since Berkeley does not accept the existence of matter, the real world outside the simulation would not be a material world—it would be a world composed of minds. A possible basis for the difference is that the simulated world is less vivid than the real world (to use his distinction between hallucinations and reality). On this view, we would be minds trapped in a forced dream or hallucination. We would be denied the more vivid experiences of minds “outside” the simulation, but we would not be denied an external world in the metaphysical sense. To use an analogy, we would be watching VHS, while the minds “outside” the simulation would be watching Blu-Ray.

While Musk does not seem to have laid out a complete philosophical theory on the matter, his discussion indicates that he thinks we could be in a virtual reality style simulation. On this view, the external world would presumably be a physical world of some sort. This distinction is not a metaphysical one—presumably the simulation is being run on physical hardware and we are some sort of virtual entities in the program. Our error, then, would be to think that our experiences correspond to material entities when they, in fact, merely correspond to virtual entities. Or perhaps we are in a Matrix style situation—we do have material bodies, but receive virtual sensory input that does not correspond to the physical world.

Musk’s discussion seems to indicate that he thinks there is a purpose behind the simulation—that it has been constructed by others. He does not envision a Cartesian demon, but presumably envisions beings like what we think we are.  If they are supposed to be like us (or we like them, since we are supposed to be their creation), then speculation about their motives would be based on why we might do such a thing.

There are, of course, many reasons why we would create such a simulation. One reason would be scientific research: we already create simulations to help us understand and predict what we think is the real world. Perhaps we are in a simulation used for this purpose. Another reason would be for entertainment. We create games and simulated worlds to play in and watch; perhaps we are non-player characters in a game world or unwitting actors in a long-running virtual reality show (or, more likely, shows).

One idea, which was explored in Frederik Pohl’s short story “The Tunnel under the World”, is that our virtual world exists to test advertising and marketing techniques for the real world. In Pohl’s story, the inhabitants of Tylerton are killed in the explosion of the town’s chemical plant and then duplicated as tiny robots inhabiting a miniature reconstruction of the town. Each day for the inhabitants is June 15th: they wake up with their memories erased, ready to be subjected to the advertising techniques to be tested that day. The results of the methods are analyzed, the inhabitants are wiped, and it all starts up again the next day.

While this tale is science fiction, Google and Facebook are working very hard to collect as much data as they can about us, with an eye to monetizing all this information. While the technology does not yet exist to duplicate us within a computer simulation, that would seem to be a logical goal of this data collection—just imagine the monetary value of being able to simulate and predict people’s behavior at the individual level. To be effective, a simulation owned by one company would need to model the influences of its competitors—so we could be in a Google World or a Facebook World right now, run so that these companies can learn how to monetize and exploit the real versions of us in the external world.

Given that a simulated world is likely to exist to exploit the inhabitants, it certainly makes sense to not only want to know if we are in such a world, but also to try to undertake an escape. This will be the subject of the next essay.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter


Believing What You Know is Not True

Posted in Epistemology, Philosophy, Reasoning/Logic by Michael LaBossiere on February 5, 2016

“I believe in God, and there are things that I believe that I know are crazy. I know they’re not true.”

Stephen Colbert

While Stephen Colbert ended up as a successful comedian, he originally planned to major in philosophy. His past occasionally returns to haunt him with digressions from the land of comedy into the realm of philosophy (though detractors might claim that philosophy is comedy without humor; but that is actually law). Colbert has what seems to be an odd epistemology: he regularly claims that he believes in things he knows are not true, such as guardian angels. While it would be easy enough to dismiss this claim as merely comedic, it does raise many interesting philosophical issues. The main and most obvious issue is whether a person can believe in something they know is not true.

While a thorough examination of this issue would require a deep examination of the concepts of belief, truth and knowledge, I will take a shortcut and go with intuitively plausible stock accounts of these concepts. To believe something is to hold the opinion that it is true. A belief is true, in the common sense view, when it gets reality right—this is the often maligned correspondence theory of truth. The stock simple account of knowledge in philosophy is that a person knows that P when the person believes P, P is true, and the belief in P is properly justified. The justified true belief account of knowledge has been savagely bloodied by countless attacks, but shall suffice for this discussion.
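Stated compactly, the stock account can be put in a single schema. The notation below is my own gloss on the standard justified-true-belief formulation, not anything from the essay:

```latex
% Justified true belief (JTB): subject S knows proposition P iff
% S believes P, P is true, and S's belief in P is justified.
K_S(P) \iff B_S(P) \land P \land J_S(P)
```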

Given this basic analysis, it would seem impossible for a person to believe in something they know is not true. This would require that the person believes something is true when they also believe it is false. To use the example of God, a person would need to believe that it is true that God exists and false that God exists. This would seem to commit the person to believing that a contradiction is true, which is problematic because a contradiction is always false.

One possible response is to point out that the human mind is not beholden to the rules of logic—while a contradiction cannot be true, there are many ways a person can hold to contradictory beliefs. One possibility is that the person does not realize that the beliefs contradict one another and hence they can hold to both.  This might be due to an ability to compartmentalize the beliefs so they are never in the consciousness at the same time or due to a failure to recognize the contradiction. Another possibility is that the person does not grasp the notion of contradiction and hence does not realize that they cannot logically accept the truth of two beliefs that are contradictory.

While these responses do have considerable appeal, they do not appear to work in cases in which the person actually claims, as Colbert does, that they believe something they know is not true. After all, making this claim does require considering both beliefs in the same context and, if the claim of knowledge is taken seriously, that the person is aware that the rejection of the belief is justified sufficiently to qualify as knowledge. As such, when a person claims that they believe something they know is not true, that person would seem to be either not telling the truth or ignorant of what the words mean. Or perhaps there are other alternatives.

One possibility is to consider the power of cognitive dissonance management—a person could know that a cherished belief is not true, yet refuse to reject the belief while being fully aware that this is a problem. I will explore this possibility in the context of comfort beliefs in a later essay.

Another possibility is to consider that the term “knowledge” is not being used in the strict philosophical sense of a justified true belief. Rather, it could be taken to refer to strongly believing that something is true—even when it is not. For example, a person might say “I know I turned off the stove” when, in fact, they did not. As another example, a person might say “I knew she loved me, but I was wrong.” What they mean is that they really believed she loved him, but that belief was false.

Using this weaker account of knowledge, a person can believe in something that they know is not true. This just involves believing in something that one also strongly believes is not true. In some cases, this is quite rational. For example, when I roll a twenty-sided die, I strongly believe that I will not roll a 20. However, I do also believe that I will roll a 20, and that belief has a 5% chance of being true. As such, I can believe what I know is not true—assuming that this means that I can believe in something that I believe is less likely than another belief.
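To make the 5% concrete, here is a minimal Python sketch (purely illustrative; the essay contains no code) that estimates the chance of rolling a 20 on a fair twenty-sided die:

```python
import random

def roll_d20() -> int:
    """Simulate one roll of a fair twenty-sided die."""
    return random.randint(1, 20)

# Estimate P(roll == 20) by simulation; the exact value is 1/20 = 0.05.
trials = 100_000
hits = sum(1 for _ in range(trials) if roll_d20() == 20)
print(f"Estimated probability of rolling a 20: {hits / trials:.3f}")
```

So the belief "I will not roll a 20" is strongly supported, while the competing belief "I will roll a 20" remains a live, if weakly supported, one-in-twenty possibility.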

People are also strongly influenced by emotional and other factors that are not based in a rational assessment. For example, a gambler might know that their odds of winning are extremely low and thus know they will lose (that is, have a strongly supported belief that they will lose) yet also strongly believe they will win (that is, feel strongly about a weakly supported belief). Likewise, a person could accept that the weight of the evidence is against the existence of God and thus know that God does not exist (that is, have a strongly supported belief that God does not exist) while also believing strongly that God does exist (that is, having considerable faith that is not based in evidence).

 


Skepticism, Locke & Games

Posted in Epistemology, Philosophy by Michael LaBossiere on September 25, 2015

In philosophy, skepticism is the view that we lack knowledge. There are numerous varieties of skepticism and these are defined by the extent of the doubt endorsed by the skeptic. A relatively mild case of skepticism might involve doubts about metaphysical claims, while a truly rabid skeptic would doubt everything—including her own existence.

While many philosophers have attempted to defeat the dragon of skepticism, all of these attempts seem to have failed. This is hardly surprising—skepticism seems to be unbreakable. The arguments for this have an ancient pedigree and can be distilled down to two simple arguments.

The first goes after the possibility of justifying a belief and thus attacks the standard view that knowledge requires a belief that is true and justified. If a standard of justification is presented, then there is the question of what justifies that standard. If a justification is offered, then the same question can be raised into infinity. And beyond. If no justification is offered, then there is no reason to accept the standard.

A second stock argument for skepticism is that any reasonable argument given in support of knowledge can be countered by an equally reasonable argument against knowledge.  Some folks, such as the famous philosopher Chisholm, have contended that it is completely fair to assume that we do have knowledge and begin epistemology from that point. However, this seems to have all the merit of grabbing the first place trophy without actually competing.

Like all sane philosophers, I tend to follow David Hume in my everyday life: my skepticism is nowhere to be seen when I am filling out my taxes, sitting in a brain-numbing committee meeting, or having a tooth drilled. However, like a useless friend, it shows up again when it is no longer needed. As such, it would be nice if skepticism could be defeated or at least rendered irrelevant.

John Locke took a rather interesting approach to skepticism. While, like Descartes, he seemed to want to find certainty, he settled for a practical approach to the matter. After acknowledging that our faculties cannot provide certainty, he asserted that what matters to us is the ability of our faculties to aid us in our preservation and wellbeing.

Jokingly, he challenges “the dreamer” to put his hand into a furnace—this would, he claims, wake him “to a certainty greater than he could wish.” More seriously, Locke contends that our concern is not with achieving epistemic certainty. Rather, what matters is our happiness and misery. While Locke can be accused of taking an easy out rather than engaging the skeptic in a battle of certainty or death, his approach is certainly appealing. Since I happened to think through this essay while running with an injured back, I will use that to illustrate my view on this matter.

When I set out to run, my back began hurting immediately. While I could not be certain that I had a body containing a spine and nerves, no amount of skeptical doubt could make the pain go away—in regards to the pain, it did not matter whether I really had a back or not. That is, in terms of the pain it did not matter whether I was a pained brain in a vat or a pained brain in a runner on the road. In either scenario, I would be in pain and that is what really mattered to me.

As I ran, it seemed that I was covering distance in a three-dimensional world. Since I live in Florida (or what seems to be Florida) I was soon feeling quite warm and had that Florida feel of sticky sweat. I could eventually feel my thirst and some fatigue. Once more, it did not seem to really matter if this was real—whether I was really bathed in sweat or a brain bathed in some sort of nutrient fluid, the run was the same to me. As I ran, I took pains to avoid cars, trees and debris. While I did not know if they were real, I have experienced what it is like to be hit by a car (or as if I were hit by a car) and have also had experiences involving falling (or the appearance of falling). In terms of navigating through my run, it did not matter at all whether it was real or not. If I knew for sure that my run was really real for real, that would not change the run. If I somehow knew it was all an illusion that I could never escape, I would still run for the sake of the experience of running.

This, of course, might seem a bit odd. After all, when the hero of a story or movie finds out that she is in a virtual reality what usually follows is disillusionment and despair. However, my attitude has been shaped by years of gaming—both tabletop (BattleTech, Dungeons & Dragons, Pathfinder, Call of Cthulhu, and so many more) and video (Zork, Doom, Starcraft, Warcraft, Destiny, Halo, and many more). When I am pretending to be a paladin, the Master Chief, or a Guardian, I know I am doing something that is not really real for real. However, the game can be pleasant and enjoyable or unpleasant and awful. This enjoyment or suffering is just as real as enjoyment or suffering caused by what is supposed to be really real for real—though I believe it is but a game.

If I somehow knew that I was trapped in an inescapable virtual reality, then I would simply keep playing the game—that is what I do. Plus, it would get boring and awful if I stopped playing. If I somehow knew that I was in the really real world for real, I would keep doing what I am doing. Since I might be trapped in just such a virtual reality or I might not, the sensible thing to do is keep playing as if it is really real for real. After all, that is the most sensible option in every case. As such, the reality or lack thereof of the world I think I occupy does not matter at all. The play, as they say, is the thing.

 


Ex Machina & Other Minds II: Is the Android a Psychopath?

Posted in Epistemology, Ethics, Philosophy, Technology by Michael LaBossiere on September 9, 2015

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” As with that essay, there will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and clearly passes the Cartesian test (she uses true language appropriately). But, Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking down doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to learn to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.) which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a rather positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people in order to achieve one’s goals. In the movie, Ava clearly has what might be called Manipulative Intelligence (M.Q.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—thus suggesting she is a psychopath.

While the term “psychopath” gets thrown around quite a bit, it is important to be a bit more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not get the consequences for others and for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying.

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it seems rather important to be able to determine whether a machine is a psychopath or not and to do so well before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often quite charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and are able to pass themselves off as normal people.

While Ava is a fictional android, the movie does present a rather effective appeal to intuition by creating a plausible android psychopath. She is able to manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter well worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals might be rather positive. For example, it is easy to imagine a medical or care-giving robot that uses its M.Q. to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its M.Q. to please its partners. However, these goals might be rather negative—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath or not matters is the same reason why being able to determine if a human is a psychopath or not matters. Roughly put, it is important to know whether or not someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same function (only more reliably) as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.
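As a purely hypothetical sketch (nothing here comes from Asimov or from any real robotics system), such programmed restraints can be pictured as an ordered set of checks that veto a proposed action before it is executed:

```python
from typing import Callable, List

# A constraint takes a proposed action (described as text for simplicity)
# and returns True if the action is permissible under that constraint.
Constraint = Callable[[str], bool]

def make_guard(constraints: List[Constraint]) -> Callable[[str], bool]:
    """Approve an action only if every constraint, checked in order, allows it."""
    def guard(action: str) -> bool:
        return all(constraint(action) for constraint in constraints)
    return guard

# Toy constraints, loosely echoing the ordered structure of the Three Laws.
def no_harm(action: str) -> bool:
    return "harm" not in action

def obey_orders(action: str) -> bool:
    return not action.startswith("refuse:")

guard = make_guard([no_harm, obey_orders])
print(guard("open the door"))   # True: no constraint is violated
print(guard("harm the human"))  # False: vetoed by no_harm
```

The essay's worry translates directly: the hard part is not writing the guard but specifying constraints that actually capture "harm" without producing results as bad as the behavior they were meant to prevent.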

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence and doing so without creating problems as bad or worse than what they were intended to prevent (that is, a HAL 9000 sort of situation).

In regards to testing machines, what would be needed would be something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants did not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, there would be the possibility that a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since an artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would still remain the question of what is really going on in that mind.

 


Ex Machina & Other Minds I: Setup

Posted in Epistemology, Metaphysics, Philosophy, Technology by Michael LaBossiere on September 7, 2015

The movie Ex Machina is what I like to call “philosophy with a budget.” While the typical philosophy professor has to present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic virtual life. This then allows philosophy professors to jealously reference such films and show clips of them in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some minor spoilers in what follows.

While The Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a much more limited set of problems, all connected to the mind. Since the film is primarily about AI, this is not surprising. The gist of the movie is that Nathan has created an AI named Ava and he wants an employee named Caleb to put her to the test.

The movie explicitly presents the test proposed by Alan Turing. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, there is a twist on the test: Caleb knows that Ava is a machine and will be interacting with her in person.
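The structure of the original imitation game can be sketched in a few lines of Python. This is a toy illustration under my own assumptions (canned respondents and a crude judge heuristic), not anything from Turing's paper or the film:

```python
import random

def human_reply(question: str) -> str:
    # Stand-in for a human: answers vary with the question.
    return f"Honestly, '{question}' is hard to answer briefly."

def machine_reply(question: str) -> str:
    # Stand-in for a weak chatbot: gives the same canned answer every time.
    return "That is an interesting question."

def imitation_game(questions: list) -> bool:
    """Return True if the judge correctly identifies the machine."""
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {labels[0]: human_reply, labels[1]: machine_reply}
    machine_label = labels[1]

    transcripts = {label: [reply(q) for q in questions]
                   for label, reply in respondents.items()}

    # Crude judge: the respondent whose answers repeat most is guessed to be the machine.
    guess = max(transcripts, key=lambda lbl: transcripts[lbl].count(transcripts[lbl][0]))
    return guess == machine_label

print(imitation_game(["What is courage?", "Do you dream?", "Why do you run?"]))
```

On Turing's proposal, the machine passes when judges do no better than chance; as the next paragraph notes, Ava's in-person variant drops the blindness that makes this protocol work.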

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

As a test for intelligence, artificial or otherwise, this seems to be quite reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language that we would not recognize as language and there is the theoretical concern that there could be intelligence that does not use language. Fortunately, Ava uses English and these problems are bypassed.

Ava easily passes the Cartesian test: she is able to reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than just the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on Earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine if she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people in order to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. This is intended to provide a way to determine if a person is a psychopath or not. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether or not Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).

In the next essay, I will consider the matter of testing in more depth.

 


Introduction to Philosophy

Posted in Aesthetics, Epistemology, Ethics, Metaphysics, Philosophy, Reasoning/Logic, Universities & Colleges by Michael LaBossiere on July 17, 2015

The following provides a (mostly) complete Introduction to Philosophy course.

Readings & Notes (PDF)

Class Videos (YouTube)

Part I Introduction

Class #1

Class #2: This is the unedited video for the 5/12/2015 Introduction to Philosophy class. It covers the last branches of philosophy, two common misconceptions about philosophy, and argument basics.

Class #3: This is the unedited video for class three (5/13/2015) of Introduction to Philosophy. It covers analogical argument, argument by example, argument from authority and some historical background for Western philosophy.

Class #4: This is the unedited video for the 5/14/2015 Introduction to Philosophy class. It concludes the background for Socrates, covers the start of the Apology and includes most of the information about the paper.

Class#5: This is the unedited video of the 5/18/2015 Introduction to Philosophy class. It concludes the details of the paper, covers the end of the Apology and begins part II (Philosophy & Religion).

Part II Philosophy & Religion

Class #6: This is the unedited video for the 5/19/2015 Introduction to Philosophy class. It concludes the introduction to Part II (Philosophy & Religion), covers St. Anselm’s Ontological Argument and some of the background for St. Thomas Aquinas.

Class #7: This is the unedited video from the 5/20/2015 Introduction to Philosophy class. It covers Thomas Aquinas’ Five Ways.

Class #8: This is the unedited video for the eighth Introduction to Philosophy class (5/21/2015). It covers the end of Aquinas, Leibniz’ proofs for God’s existence and his replies to the problem of evil, and the introduction to David Hume.

Class #9: This is the unedited video from the ninth Introduction to Philosophy class on 5/26/2015. This class continues the discussion of David Hume’s philosophy of religion, including his work on the problem of evil. The class also covers the first 2/3 of his discussion of the immortality of the soul.

Class #10: This is the unedited video for the 5/27/2015 Introduction to Philosophy class. It concludes Hume’s discussion of immortality, covers Kant’s critiques of the three arguments for God’s existence, explores Pascal’s Wager and starts Part III (Epistemology & Metaphysics). Best of all, I am wearing a purple shirt.

Part III Epistemology & Metaphysics

Class #11: This is the 11th Introduction to Philosophy class (5/28/2015). The course covers Plato’s theory of knowledge, his metaphysics, the Line and the Allegory of the Cave.

Class #12: This is the unedited video for the 12th Introduction to Philosophy class (6/1/2015). This class covers skepticism and the introduction to Descartes.

Class #13: This is the unedited video for the 13th Introduction to Philosophy class (6/2/2015). The class covers Descartes’ 1st Meditation, Foundationalism and Coherentism as well as the start of the Metaphysics section.

Class #14: This is the unedited video for the fourteenth Introduction to Philosophy class (6/3/2015). It covers the methodology of metaphysics and roughly the first half of Locke’s theory of personal identity.

Class #15: This is the unedited video of the fifteenth Introduction to Philosophy class (6/4/2015). The class covers the 2nd half of Locke’s theory of personal identity, Hume’s theory of personal identity, Buddha’s no-self doctrine and “Ghosts & Minds.”

Class #16: This is the unedited video for the 16th Introduction to Philosophy class. It covers the problem of universals, the metaphysics of time travel in “Meeting Yourself” and the start of the metaphysics of Taoism.

Part IV Value

Class #17: This is the unedited video for the seventeenth Introduction to Philosophy class (6/9/2015). It begins part IV and covers the introduction to ethics and the start of utilitarianism.

Class #18: This is the unedited video for the eighteenth Introduction to Philosophy class (6/10/2015). It covers utilitarianism and some standard problems with the theory.

Class #19: This is the unedited video for the 19th Introduction to Philosophy class (6/11/2015). It covers Kant’s categorical imperative.

Class #20: This is the unedited video for the twentieth Introduction to Philosophy class (6/15/2015). This class covers the introduction to aesthetics and Wilde’s “The New Aesthetics.” The class also includes the start of political and social philosophy, with the introduction to liberty and fascism.

Class #21: No video.

Class #22: This is the unedited video for the 22nd Introduction to Philosophy class (6/17/2015). It covers Emma Goldman’s anarchism.

 


 

A Philosopher’s Blog: 2014 Free on Amazon

Posted in Philosophy by Michael LaBossiere on December 30, 2014

A Philosopher’s Blog: 2014 Philosophical Essays on Many Subjects will be available as a free Kindle book on Amazon from 12/31/2014 to 1/4/2015. This book contains all the essays from the 2014 postings of A Philosopher’s Blog. The topics covered range from the moral implications of sexbots to the metaphysics of determinism.

A Philosopher’s Blog: 2014 on Amazon US

A Philosopher’s Blog: 2014 on Amazon UK
