In Philip K. Dick’s “We Can Remember It for You Wholesale,” Rekal, Incorporated offers its clients a form of virtual vacation: for a modest fee, memories of an amazing vacation are implanted. The company also provides relevant mementos and “evidence” of the trip. In the story (and the movie, Total Recall, based on it) things go terribly wrong.
While the technology of the story does not yet exist, a very limited form of virtual reality has finally become something of a reality. Because of this, it is worth considering the matter of virtual vacations. Interestingly, philosophers have long envisioned a form of virtual reality; but they have usually presented it as a problem in epistemology (the study or theory of knowledge). This is the problem of the external world: how do I know that what I think is real is actually real? In the case of the virtual vacation, there is no such problem: the vacation is virtual and not real. Perhaps some philosopher will be inspired to try to solve the problem of the virtual vacation: how does one know that it is not real?
Philosophers have also considered virtual reality in the context of ethics. One of the best-known cases is Robert Nozick’s experience machine. Nozick envisioned a machine that would allow the user to have any experience they desired. Some philosophers have made use of this sort of machine as a high-tech version of the “pig objection.” This objection, which was used by Aristotle and others, is against taking pleasure to be the highest good. The objection is often presented as a choice: you must pick between continuing your current life or living as an animal—but with the greatest pleasures of that beast guaranteed. The objector, of course, expects that people will choose to remain people, thus showing that mere pleasure is not the highest good. In the case of the experience machine variant, the choice is between living a real life with all its troubles and a life of ultimate pleasure in the experience machine. The objector hopes, of course, that our intuitions will still favor valuing the real over the virtual.
Since the objection is generally presented as a choice of life (you either live life entirely outside the machine or entirely inside of it), it is worth considering whether there might be a meaningful difference if people take virtual vacations rather than living virtual lives.
On the face of it, there would seem to be no real problem with virtual vacations in which a person either spends their vacation time in a virtual world or has the memories implanted. The reason for this is that people already take virtual vacations of a sort—they play immersive video games and watch movies. Before this, people took “virtual vacations” in books, plays and in their own imagination. That said, a true virtual vacation might be sufficiently different to require arguments in its favor. I now turn to these arguments.
The first reason in favor of virtual vacations is their potential affordability. If virtual vacations eventually become budget alternatives to real vacations (as in the story), they would allow people to have the experience of a high-priced vacation for a modest cost. For example, a person might take a virtual luxury cruise in a stateroom that, if real, might cost $100,000.
The second reason in support of virtual vacations is that they could be used to virtually visit places where access is limited (such as public parks that can only handle so many people), where access would be difficult (such as very remote locations), or where access would be damaging (such as environmentally sensitive areas).
A third reason is that virtual vacations could allow people to have vacations they could not really have, such as visiting Mars, adventuring in Middle Earth, or spending a weekend as a dolphin.
A fourth reason is that virtual vacations could be much safer than real vacations—no travel accidents, no terrorist attacks, no disease, and so on for the various dangers that can be encountered in the real world. Those familiar with science fiction might point to the dangers of virtual worlds, using Sword Art Online and the very lethal holodecks of Star Trek as examples. However, it would seem easy enough to make the technology so that it cannot actually kill people. It was always a bit unclear why the holodecks had the option of turning off the safety systems—that is rather like having an option for your Xbox One or PS4 to explode and kill you when you lose a game.
A fifth reason is convenience—going on a virtual vacation would generally be far easier than going on a real vacation. There are other reasons that could be considered, but I now turn to an objection and some concerns.
The most obvious objection against virtual vacations is that they are, by definition, not real.
The idea is that the pig objection would apply not just to an entire life in a virtual world, but also to a vacation in one. Since the virtual vacation is not real, it lacks value, and hence it would be wrong for people to take virtual vacations in place of real ones. Fortunately, there seems to be an easy reply to this objection.
The pig objection does seem to have bite in cases in which a person is supposed to be doing significant things. For example, a person who spends a weekend in virtual reality treating virtual patients with virtual Ebola would certainly not merit praise and would not be acting in a virtuous way. However, the point of a vacation is amusement and restoration rather than engaging in significant actions. If virtual vacations are to be criticized because they merely entertain, then the same would apply to real vacations. After all, their purpose is also to entertain. This is not to say that people cannot do significant things while on vacation, but to focus on the point of a vacation as vacation. As such, the pig objection does not seem to have much bite here.
It could be objected that virtual vacations would fail to be as satisfying as actual vacations because they are not real. This is certainly an objection worth considering—if a virtual vacation fails as a vacation, then there would be a very practical reason not to take one. However, this is something that remains to be seen. Now, to the concerns.
One concern, which has been developed in science fiction, is that virtual vacations might prove addictive. Video games have already proven addictive to some people; there are even a few cases of people literally gaming to death. While this is a legitimate concern and there will no doubt be a Virtual Reality Addicts Anonymous in the future, this is not a special objection against virtual reality—unless, of course, it proves to be destructively addictive on a significant scale. Even if it were addictive, it would presumably do far less damage than drug or alcohol addiction. In fact, this could be another point in its favor—if people who would otherwise be addicted to drugs or alcohol self-medicated with virtual reality instead, there could be a reduction in social woes and costs arising from addiction.
A second concern is that virtual vacations would have a negative impact on real tourist economies. My home state of Maine and adopted state of Florida both have tourism-based economies, and if people stopped taking real vacations in favor of virtual vacations, their economies would suffer greatly. One stock reply is that when technology kills one industry, it creates a new one. In this case, the economic loss to real tourism would be offset to some degree by the economic gain in virtual tourism. States and countries could even create or license their own virtual vacation experiences. Another reply is that there will presumably still be plenty of people who prefer real vacations to virtual ones. Even now people could spend their vacations playing video games; but most who have the money and time still choose to go on a real vacation.
A third concern is that having wondrous virtual vacations will increase people’s dissatisfaction with the tedious grind that is life for most under the economic lash of capitalism. An obvious reply is that most are already dissatisfied. Another reply is that this is more of an objection against the emptiness of capitalism for the many than an objection against virtual vacations. In any case, amusements eventually wear thin and most people actually want to return to work.
In light of the above, virtual vacations seem like a good idea. That said, many disasters are later explained by saying “it seemed like a good idea at the time.”
“I believe in God, and there are things that I believe that I know are crazy. I know they’re not true.”
While Stephen Colbert ended up as a successful comedian, he originally planned to major in philosophy. His past occasionally returns to haunt him with digressions from the land of comedy into the realm of philosophy (though detractors might claim that philosophy is comedy without humor; but that is actually law). Colbert has what seems to be an odd epistemology: he regularly claims that he believes in things he knows are not true, such as guardian angels. While it would be easy enough to dismiss this claim as merely comedic, it does raise many interesting philosophical issues. The main and most obvious issue is whether a person can believe in something they know is not true.
While a thorough examination of this issue would require a deep examination of the concepts of belief, truth and knowledge, I will take a shortcut and go with intuitively plausible stock accounts of these concepts. To believe something is to hold the opinion that it is true. A belief is true, in the common sense view, when it gets reality right—this is the often maligned correspondence theory of truth. The stock simple account of knowledge in philosophy is that a person knows that P when the person believes P, P is true, and the belief in P is properly justified. The justified true belief account of knowledge has been savagely bloodied by countless attacks, but it shall suffice for this discussion.
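Put schematically (a minimal formalization of this stock account, with K, B, and J as my own shorthand for “knows,” “believes,” and “is justified in believing”):

```latex
% Justified true belief (JTB), schematically:
% S knows that P iff S believes P, P is true, and S is justified in believing P.
K(S, P) \iff B(S, P) \land P \land J(S, P)
```

On this schema, believing something one knows is not true would amount to holding B(S, P) alongside K(S, \neg P).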
Given this basic analysis, it would seem impossible for a person to believe in something they know is not true. This would require that the person believes something is true when they also believe it is false. To use the example of God, a person would need to believe that it is true that God exists and false that God exists. This would seem to commit the person to believing that a contradiction is true, which is problematic because a contradiction is always false.
One possible response is to point out that the human mind is not beholden to the rules of logic—while a contradiction cannot be true, there are many ways a person can hold to contradictory beliefs. One possibility is that the person does not realize that the beliefs contradict one another and hence they can hold to both. This might be due to an ability to compartmentalize the beliefs so they are never in the consciousness at the same time or due to a failure to recognize the contradiction. Another possibility is that the person does not grasp the notion of contradiction and hence does not realize that they cannot logically accept the truth of two beliefs that are contradictory.
While these responses do have considerable appeal, they do not appear to work in cases in which the person actually claims, as Colbert does, that they believe something they know is not true. After all, making this claim does require considering both beliefs in the same context and, if the claim of knowledge is taken seriously, that the person is aware that the rejection of the belief is justified sufficiently to qualify as knowledge. As such, when a person claims that they believe something they know is not true, that person would seem to be either not telling the truth or ignorant of what the words mean. Or perhaps there are other alternatives.
One possibility is to consider the power of cognitive dissonance management—a person could know that a cherished belief is not true, yet refuse to reject the belief while being fully aware that this is a problem. I will explore this possibility in the context of comfort beliefs in a later essay.
Another possibility is to consider that the term “knowledge” is not being used in the strict philosophical sense of a justified true belief. Rather, it could be taken to refer to strongly believing that something is true—even when it is not. For example, a person might say “I know I turned off the stove” when, in fact, they did not. As another example, a person might say “I knew she loved me, but I was wrong.” What the speaker means is that he really believed she loved him, but that belief was false.
Using this weaker account of knowledge, a person can believe in something that they know is not true. This just involves believing in something that one also strongly believes is not true. In some cases, this is quite rational. For example, when I roll a twenty-sided die, I strongly believe that I will not roll a 20. However, I do also believe that I will roll a 20, and that belief has a 5% chance of being true. As such, I can believe what I know is not true—assuming that this means that I can believe in something that I believe is less likely than another belief.
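As a quick check of the arithmetic (a minimal simulation sketch of my own, not part of the original example): a fair twenty-sided die shows any given face with probability 1/20, or 5%, and a simple simulation bears this out.

```python
import random

def roll_d20() -> int:
    """Simulate one roll of a fair twenty-sided die."""
    return random.randint(1, 20)

# Estimate how often a 20 comes up over many rolls; expect about 5%.
trials = 100_000
hits = sum(1 for _ in range(trials) if roll_d20() == 20)
print(f"Rolled a 20 in {hits / trials:.1%} of {trials:,} rolls")
```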
People are also strongly influenced by emotional and other factors that are not based in a rational assessment. For example, a gambler might know that their odds of winning are extremely low and thus know they will lose (that is, have a strongly supported belief that they will lose) yet also strongly believe they will win (that is, feel strongly about a weakly supported belief). Likewise, a person could accept that the weight of the evidence is against the existence of God and thus know that God does not exist (that is, have a strongly supported belief that God does not exist) while also believing strongly that God does exist (that is, having considerable faith that is not based in evidence).
In philosophy, skepticism is the view that we lack knowledge. There are numerous varieties of skepticism, and these are defined by the extent of the doubt endorsed by the skeptic. A relatively mild case of skepticism might involve doubts about metaphysical claims, while a truly rabid skeptic would doubt everything—including her own existence.
While many philosophers have attempted to defeat the dragon of skepticism, all of these attempts seem to have failed. This is hardly surprising—skepticism seems to be unbreakable. The arguments for this have an ancient pedigree and can be distilled down to two simple arguments.
The first goes after the possibility of justifying a belief and thus attacks the standard view that knowledge requires a belief that is true and justified. If a standard of justification is presented, then there is the question of what justifies that standard. If a justification is offered, then the same question can be raised into infinity. And beyond. If no justification is offered, then there is no reason to accept the standard.
A second stock argument for skepticism is that any reasonable argument given in support of knowledge can be countered by an equally reasonable argument against knowledge. Some folks, such as the famous philosopher Chisholm, have contended that it is completely fair to assume that we do have knowledge and begin epistemology from that point. However, this seems to have all the merit of grabbing the first place trophy without actually competing.
Like all sane philosophers, I tend to follow David Hume in my everyday life: my skepticism is nowhere to be seen when I am filling out my taxes, sitting in a brain-numbing committee meeting, or having a tooth drilled. However, like a useless friend, it shows up again when it is no longer needed. As such, it would be nice if skepticism could be defeated or at least rendered irrelevant.
John Locke took a rather interesting approach to skepticism. While, like Descartes, he seemed to want to find certainty, he settled for a practical approach to the matter. After acknowledging that our faculties cannot provide certainty, he asserted that what matters to us is the ability of our faculties to aid us in our preservation and wellbeing.
Jokingly, he challenges “the dreamer” to put his hand into a furnace—this would, he claims, wake him “to a certainty greater than he could wish.” More seriously, Locke contends that our concern is not with achieving epistemic certainty. Rather, what matters is our happiness and misery. While Locke can be accused of taking an easy out rather than engaging the skeptic in a battle of certainty or death, his approach is certainly appealing. Since I happened to think through this essay while running with an injured back, I will use that to illustrate my view on this matter.
When I set out to run, my back began hurting immediately. While I could not be certain that I had a body containing a spine and nerves, no amount of skeptical doubt could make the pain go away—in regards to the pain, it did not matter whether I really had a back or not. That is, in terms of the pain it did not matter whether I was a pained brain in a vat or a pained brain in a runner on the road. In either scenario, I would be in pain and that is what really mattered to me.
As I ran, it seemed that I was covering distance in a three-dimensional world. Since I live in Florida (or what seems to be Florida) I was soon feeling quite warm and had that Florida feel of sticky sweat. I could eventually feel my thirst and some fatigue. Once more, it did not seem to really matter if this was real—whether I was really bathed in sweat or a brain bathed in some sort of nutrient fluid, the run was the same to me. As I ran, I took pains to avoid cars, trees and debris. While I did not know if they were real, I have experienced what it is like to be hit by a car (or as if I was hit by a car) and also experiences involving falling (or the appearance of falling). In terms of navigating through my run, it did not matter at all whether it was real or not. If I knew for sure that my run was really real for real, that would not change the run. If I somehow knew it was all an illusion that I could never escape, I would still run for the sake of the experience of running.
This, of course, might seem a bit odd. After all, when the hero of a story or movie finds out that she is in a virtual reality what usually follows is disillusionment and despair. However, my attitude has been shaped by years of gaming—both tabletop (BattleTech, Dungeons & Dragons, Pathfinder, Call of Cthulhu, and so many more) and video (Zork, Doom, Starcraft, Warcraft, Destiny, Halo, and many more). When I am pretending to be a paladin, the Master Chief, or a Guardian, I know I am doing something that is not really real for real. However, the game can be pleasant and enjoyable or unpleasant and awful. This enjoyment or suffering is just as real as enjoyment or suffering caused by what is supposed to be really real for real—though I believe it is but a game.
If I somehow knew that I was trapped in an inescapable virtual reality, then I would simply keep playing the game—that is what I do. Plus, it would get boring and awful if I stopped playing. If I somehow knew that I was in the really real world for real, I would keep doing what I am doing. Since I might be trapped in just such a virtual reality or I might not, the sensible thing to do is keep playing as if it is really real for real. After all, that is the most sensible option in every case. As such, the reality or lack thereof of the world I think I occupy does not matter at all. The play, as they say, is the thing.
This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” As in that essay, there will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.
In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and clearly passes the Cartesian test (she uses true language appropriately). But, Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.
Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking down doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.), which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a rather positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side of understanding what motivates people, namely the ability to manipulate people in order to achieve one’s goals. In the movie, Ava clearly has what might be called Manipulative Intelligence (M.Q.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—thus suggesting she is a psychopath.
While the term “psychopath” gets thrown around quite a bit, it is important to be a bit more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.
Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of this lack of empathy, psychopaths are prone to act in ways that are tactless and insensitive, and they often express contempt for others.
Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not get the consequences for others and for themselves.
Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying.
While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it seems rather important to be able to determine whether a machine is a psychopath or not and to do so well before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.
One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often quite charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and are able to pass themselves off as normal people.
While Ava is a fictional android, the movie does present a rather effective appeal to intuition by creating a plausible android psychopath. She is able to manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.
One matter well worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals might be rather positive. For example, it is easy to imagine a medical or care-giving robot that uses its MQ to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MQ to please its partners. However, these goals might be rather negative—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.
The reason why determining if a machine is a psychopath or not matters is the same reason why being able to determine if a human is a psychopath or not matters. Roughly put, it is important to know whether or not someone is merely using you without any moral or emotional constraints.
It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same function (only more reliably) as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.
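To make the idea of programmed restraints slightly more concrete, here is a minimal sketch of a pre-action veto filter. This is entirely hypothetical: the names, the boolean flags, and the crude harm check are my own illustration, not Asimov’s laws as stated or any real robotics API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # predicted to injure a human (the hard part in practice)
    disobeys_order: bool   # conflicts with a standing human order
    endangers_self: bool   # risks the machine's own existence

def permitted(action: Action) -> bool:
    """Crude Asimov-flavored veto filter: any flagged condition blocks the
    action before it runs. A fuller version would resolve conflicts between
    the rules by priority; the real difficulty (reliably predicting harm)
    is simply assumed away here."""
    return not (action.harms_human or action.disobeys_order or action.endangers_self)

# The filter constrains behavior regardless of the machine's goals or "feelings".
print(permitted(Action("fetch coffee", False, False, False)))    # True
print(permitted(Action("shove bystander", True, False, False)))  # False
```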
While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence and doing so without creating problems as bad as or worse than what they were intended to prevent (that is, a HAL 9000 sort of situation).
In regards to testing machines, what would be needed is something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants did not have the time to develop the emotional (and apparently ethical) responses of a normal human.
A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, there would be the possibility that a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.
It could be argued that since an artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would still remain the question of what is really going on in that mind.