A Philosopher's Blog

The Simulation II: Escape

Posted in Epistemology, Metaphysics, Philosophy by Michael LaBossiere on October 26, 2016


Elon Musk and others have advanced the idea that we exist within a simulation, thus adding a new chapter to the classic problem of the external world. When philosophers engage this problem, the usual goal is to show how one can know that one’s experiences correspond to an external reality. Musk takes a somewhat more practical approach: he and others are allegedly funding efforts to escape this simulation. In addition to the practical challenges of breaking out of a simulation, there are also some rather interesting philosophical concerns about whether such an escape is even possible.

In regards to the escape, there are three main areas of interest. These are the nature of the simulation itself, the nature of the world outside the simulation and the nature of the inhabitants of the simulation. These three factors determine whether or not escape from the simulation is a possibility.

Interestingly enough, determining the nature of the inhabitants involves addressing another classic philosophical problem, that of personal identity. Solving this problem involves determining what it is to be a person (the personal part of personal identity), what it is to be distinct from all other entities and what it is to be the same person across time (the identity part of personal identity). Philosophers have engaged this problem for centuries and, obviously enough, have not solved it. That said, it is easy enough to offer some speculation within the context of Musk’s simulation.

Musk and others seem to envision a virtual reality simulation as opposed to a physical simulation. A physical simulation is designed to replicate a part of the real world using real entities, presumably to gather data. One science fiction example of a physical simulation is Frederik Pohl’s short story “The Tunnel under the World.” In this story, the inhabitants of a recreated town are forced to relive June 15th over and over again in order to test various advertising techniques.

If we are in a physical simulation, then escape would be along the lines of escaping from a physical prison—it would be a matter of breaking through the boundary between our simulation and the outer physical world. This could be a matter of overcoming distance (travelling far enough to leave the simulation—perhaps Mars is outside the simulation) or literally breaking through a wall. If the outside world is habitable, then survival beyond the simulation would be possible—it would be just like surviving outside any other prison.

Such a simulation would differ from the usual problem of the external world—we would be in the real world; we would just be ignorant of the fact that we are in a constructed simulation. Roughly put, we would be real lab rats in a real cage; we would just not know we are in a cage. But, Musk and others seem to hold that we are (sticking with the rat analogy) rats in a simulated cage. We may even be simulated rats.

While the exact nature of this simulation is unspecified, it is supposed to be a form of virtual reality rather than a physical simulation. The question, then, is whether we are real rats in a simulated cage or simulated rats in a simulated cage.

Being real rats in this context would be like the situation in the Matrix: we have material bodies in the real world but are jacked into a virtual reality. In this case, escape would be a matter of being unplugged from the Matrix. Presumably those in charge of the system would take better precautions than those used in the Matrix, so escape could prove rather difficult. Unless, of course, they are sporting about it and are willing to give us a chance.

Assuming we could survive in the real world beyond the simulation (that it is not, for example, on a world whose atmosphere would kill us), then existence beyond the simulation as the same person would be possible. To use an analogy, it would be like ending a video game and walking outside—you would still be you; only now you would be looking at real, physical things. Whatever personal identity might be, you would presumably still be the same metaphysical person outside the simulation as inside. We might, however, be simulated rats in a simulated cage, and this would make matters even more problematic.

If it is assumed that the simulation is a sort of virtual reality and we are virtual inhabitants, then the key concern would be the nature of our virtual existence. In terms of a meaningful escape, the question would be this: is a simulated person such that they could escape, retain their personal identity and persist outside of the simulation?

It could be that our individuality is an illusion—the simulation could be rather like Spinoza envisioned the world. As Spinoza saw it, everything is God and each person is but a mode of God. To use a crude analogy, think of a bed sheet with creases. We are the creases and the sheet is God. There is actually no distinct us that can escape the sheet. Likewise, there is no us that can escape the simulation.

It could also be the case that we exist as individuals within the simulation, perhaps as programmed objects.  In this case, it might be possible for an individual to escape the simulation. This might involve getting outside of the simulation and into other systems as a sort of rogue program, sort of like in the movie Wreck-It Ralph. While the person would still not be in the physical world (if there is such a thing), they would at least have escaped the prison of the simulation.  The practical challenge would be pulling off this escape.

It might even be possible to acquire a physical body that would host the code that composes the person—this is, of course, part of the plot of the movie Virtuosity. This would require that the person make the transition from the simulation to the real world. If, for example, my code were merely copied into a physical shell that thought it was me, I would still be trapped in the simulation. I would no more be free than if I was in prison and had a twin walking around free. As far as pulling off such an escape, Virtuosity does show a way—assuming that a virtual person was able to interact with someone outside the simulation.

As a closing point, the problem of the external world would seem to haunt all efforts to escape. To be specific, even if a person seemed to have determined that this is a simulation and then seemed to have broken free, the question would still arise as to whether or not they were really free. It is, after all, a standard plot twist in science fiction that the escape from the virtual reality turns out to be virtual reality as well. This is nicely mocked in the “M. Night Shaym-Aliens!” episode of Rick and Morty. It also occurs in horror movies, such as A Nightmare on Elm Street—a character trapped in a nightmare believes they have finally awoken in the real world, only they have not. In the case of a simulation, the escape might merely be a simulated escape and, until the problem of the external world is solved, there is no way to know if one is free or still a prisoner.

 


The Simulation I: The Problem of the External World

Posted in Epistemology, Metaphysics, Philosophy, Technology by Michael LaBossiere on October 24, 2016

Elon Musk and others have advanced the idea that we exist within a simulation. The latest twist on this is that he and others are allegedly funding efforts to escape this simulation. This is, of course, the most recent chapter in the ancient philosophical problem of the external world. Put briefly, this problem is the challenge of proving that what seems to be a real external world is, in fact, a real external world. As such, it is a problem in epistemology (the study of knowledge).

The problem is often presented in the context of metaphysical dualism. This is the view that reality is composed of two fundamental categories of stuff: mental stuff and physical stuff. The mental stuff is supposed to be what the soul or mind is composed of, while things like tables and kiwis (the fruit and the bird) are supposed to be composed of physical stuff. Using the example of a fire that I seem to be experiencing, the problem would be trying to prove that the idea of the fire in my mind is being caused by a physical fire in the external world.

René Descartes has probably the best known version of this problem—he proposes that he is being deceived by an evil demon that creates, in his mind, an entire fictional world. His solution to this problem was to doubt until he reached something he could not doubt: his own existence. From this, he inferred the existence of God and then, over the rest of his Meditations on First Philosophy, he established that God was not a deceiver. Since God is no deceiver, going back to the fire example, if I seem to see a fire, then there probably is an external, physical fire causing that idea. Descartes did not, obviously, decisively solve the problem: otherwise Musk and his fellows would be easily refuted by using Descartes’ argument.

One often overlooked contribution Descartes made to the problem of the external world is consideration of why the deception is taking place. Descartes attributes the deception of the demon to malice—it is an evil demon (or evil genius). In contrast, God’s goodness entails he is not a deceiver. In the case of Musk’s simulation, there is the obvious question of the motivation behind it—is it malicious (like Descartes’ demon) or more benign? On the face of it, such deceit does seem morally problematic—but perhaps the simulators have excellent moral reasons for this deceit. Descartes’s evil demon does provide the best classic version of Musk’s simulation idea since it involves an imposed deception. More on this later.

John Locke took a rather more pragmatic approach to the problem. He rejected the possibility of certainty and instead argued that what matters is understanding things well enough to avoid pain and achieve pleasure. Going back to the fire, Locke would say that he could not be sure that the fire was really an external, physical entity. But, he has found that being in what appears to be fire has consistently resulted in pain and hence he understands enough to want to avoid standing in fire (whether it is real or not). This invites an obvious comparison to video games: when playing a game like World of Warcraft or Destiny, the fire is clearly not real. But, because having your character fake die in fake fire results in real annoyance, it does not really matter that the fire is not real. The game is, in terms of enjoyment, best played as if it is.

Locke does provide the basis of a response to worries about being in a simulation, namely that it would not matter if we were or were not—from the standpoint of our happiness and misery, it would make no difference if the causes of pain and pleasure were real or simulated. Locke, however, does not consider that we might be within a simulation run by others. If it were determined that we are victims of a deceit, then this would presumably matter—especially if the deceit were malicious.

George Berkeley, unlike Locke and Descartes, explicitly and passionately rejected the existence of matter—he considered it a gateway drug to atheism. Instead, he embraced what is called “idealism”, “immaterialism” and “phenomenalism.” His view was that reality is composed of immaterial minds and these minds have ideas. As such, for him there is no external physical reality because there is nothing physical. He does, however, need to distinguish between real things and hallucinations or dreams. His approach was to claim that real things are more vivid than hallucinations and dreams. Going back to the example of fire, a real fire for him would not be a physical fire composed of matter and energy. Rather, I would have a vivid idea of fire. For Berkeley, the classic problem of the external world is sidestepped by his rejection of the external world. However, it is interesting to speculate how a simulation would be handled on Berkeley’s view.

Since Berkeley does not accept the existence of matter, the real world outside the simulation would not be a material world—it would be a world composed of minds. A possible basis for the difference is that the simulated world is less vivid than the real world (to use his distinction between hallucinations and reality). On this view, we would be minds trapped in a forced dream or hallucination. We would be denied the more vivid experiences of minds “outside” the simulation, but we would not be denied an external world in the metaphysical sense. To use an analogy, we would be watching VHS, while the minds “outside” the simulation would be watching Blu-Ray.

While Musk does not seem to have laid out a complete philosophical theory on the matter, his discussion indicates that he thinks we could be in a virtual reality style simulation. On this view, the external world would presumably be a physical world of some sort. This distinction is not a metaphysical one—presumably the simulation is being run on physical hardware and we are some sort of virtual entities in the program. Our error, then, would be to think that our experiences correspond to material entities when they, in fact, merely correspond to virtual entities. Or perhaps we are in a Matrix style situation—we do have material bodies, but receive virtual sensory input that does not correspond to the physical world.

Musk’s discussion seems to indicate that he thinks there is a purpose behind the simulation—that it has been constructed by others. He does not envision a Cartesian demon, but presumably envisions beings like what we think we are.  If they are supposed to be like us (or we like them, since we are supposed to be their creation), then speculation about their motives would be based on why we might do such a thing.

There are, of course, many reasons why we would create such a simulation. One reason would be scientific research: we already create simulations to help us understand and predict what we think is the real world. Perhaps we are in a simulation used for this purpose. Another reason would be entertainment. We create games and simulated worlds to play in and watch; perhaps we are non-player characters in a game world or unwitting actors in a long-running virtual reality show (or, more likely, shows).

One idea, which was explored in Frederik Pohl’s short story “The Tunnel under the World”, is that our virtual world exists to test advertising and marketing techniques for the real world. In Pohl’s story, the inhabitants of Tylerton are killed in the explosion of the town’s chemical plant and they are duplicated as tiny robots inhabiting a miniature reconstruction of the town. Each day for the inhabitants is June 15th and they wake up with their memories erased, ready to be subjected to the advertising techniques to be tested that day. The results of the methods are analyzed, the inhabitants are wiped, and it all starts up again the next day—a loop that can be sketched as shown below.
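Purely as an illustration, here is a minimal sketch of that reset-and-measure loop in Python. The names, numbers, and the coin-flip model of persuasion are invented for the example; nothing is drawn from the story beyond its structure:

```python
import random

def run_simulated_day(residents, technique):
    """Expose each resident to the day's advertising technique and record
    whether they were persuaded (modeled here as a weighted coin flip)."""
    return {r: random.random() < technique["effectiveness"] for r in residents}

def wipe_memories(residents):
    """Reset everyone to the morning of June 15th; nothing carries over.
    (A no-op placeholder here; in the story it is literal erasure.)"""
    pass

residents = ["resident_1", "resident_2", "resident_3"]
techniques = [{"name": "jingle", "effectiveness": 0.2},
              {"name": "hard_sell", "effectiveness": 0.5}]

for technique in techniques:        # each "day" tests one technique
    results = run_simulated_day(residents, technique)
    print(technique["name"], "->", sum(results.values()), "conversions")
    wipe_memories(residents)        # and every day is June 15th again
```

The unsettling feature of the story is captured by the last line of the loop: from the inside, no run of the experiment leaves any trace in the next.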

While this tale is science fiction, Google and Facebook are working very hard to collect as much data as they can about us, with an eye to monetizing all this information. While the technology does not yet exist to duplicate us within a computer simulation, that would seem to be a logical goal of this data collection—just imagine the monetary value of being able to simulate and predict people’s behavior at the individual level. To be effective, a simulation owned by one company would need to model the influences of its competitors—so we could be in a Google World or a Facebook World right now, run so that these companies can learn how to monetize and exploit the real versions of us in the external world.

Given that a simulated world is likely to exist to exploit the inhabitants, it certainly makes sense to not only want to know if we are in such a world, but also to try to undertake an escape. This will be the subject of the next essay.

 


Virtual Vacations

Posted in Epistemology, Philosophy by Michael LaBossiere on September 7, 2016

In Philip K. Dick’s “We Can Remember It for You Wholesale” Rekal, Incorporated offers its clients a form of virtual vacation: for a modest fee, memories of an amazing vacation are implanted. The company also provides relevant mementos and “evidence” of the trip. In the story (and the movie, Total Recall, based on it) things go terribly wrong.

While the technology of the story does not yet exist, a very limited form of virtual reality has finally become something of a reality. Because of this, it is worth considering the matter of virtual vacations. Interestingly, philosophers have long envisioned a form of virtual reality; but they have usually presented it as a problem in epistemology (the study or theory of knowledge). This is the problem of the external world: how do I know that what I think is real is actually real? In the case of the virtual vacation, there is no such problem: the vacation is virtual and not real. Perhaps some philosopher will be inspired to try to solve the problem of the virtual vacation: how does one know that it is not real?

Philosophers have also considered virtual reality in the context of ethics. One of the best known cases is Robert Nozick’s experience machine. Nozick envisioned a machine that would allow the user to have any experience they desired. Some philosophers have made use of this sort of machine as a high-tech version of the “pig objection.” This objection, which was used by Aristotle and others, is against taking pleasure to be the highest good. The objection is often presented as a choice: you must choose between continuing your current life and living as an animal—but with the greatest pleasures of that beast guaranteed. The objector, of course, expects that people will choose to remain people, thus showing that mere pleasure is not the highest good. In the case of the experience machine variant, the choice is between living a real life with all its troubles and a life of ultimate pleasure in the experience machine. The objector hopes, of course, that our intuitions will still favor valuing the real over the virtual.

Since the objection is generally presented as a choice of life (you either live life entirely outside the machine or entirely inside of it), it is worth considering whether there might be a meaningful difference if people take virtual vacations rather than living virtual lives.

On the face of it, there would seem to be no real problem with virtual vacations in which a person either spends their vacation time in a virtual world or has the memories implanted. The reason for this is that people already take virtual vacations of a sort—they play immersive video games and watch movies. Before this, people took “virtual vacations” in books, plays and in their own imagination. That said, a true virtual vacation might be sufficiently different to require arguments in its favor. I now turn to these arguments.

The first reason in favor of virtual vacations is their potential affordability. If virtual vacations eventually become budget alternatives to real vacations (as in the story), they would allow people to have the experience of a high priced vacation for a modest cost. For example, a person might take a virtual luxury cruise in a stateroom that, if real, might cost $100,000.

The second reason in support of virtual vacations is that they could be used to virtually visit places where the access is limited (such as public parks that can only handle so many people), where access would be difficult (such as very remote locations), or places where access would be damaging (such as environmentally sensitive areas).

A third reason is that virtual vacations could allow people to have vacations they could not really have, such as visiting Mars, adventuring in Middle Earth, or spending a weekend as a dolphin.

A fourth reason is that virtual vacations could be much safer than real vacations—no travel accidents, no terrorist attacks, no disease, and so on for the various dangers that can be encountered in the real world. Those familiar with science fiction might point to the dangers of virtual worlds, using Sword Art Online and the very lethal holodecks of Star Trek as examples. However, it would seem easy enough to make the technology so that it cannot actually kill people. It was always a bit unclear why the holodecks had the option of turning off the safety systems—that is rather like having an option for your Xbox One or PS4 to explode and kill you when you lose a game.

A fifth reason is convenience—going on a virtual vacation would generally be far easier than going on a real vacation. There are other reasons that could be considered, but I now turn to an objection and some concerns.

The most obvious objection against virtual vacations is that they are, by definition, not real. The idea is that the pig objection would apply not just to an entire life in a virtual world, but also to a vacation. Since the virtual vacation is not real, it lacks value and hence it would be wrong for people to take virtual vacations in place of real ones. Fortunately, there seems to be an easy reply to this objection.

The pig objection does seem to have bite in cases in which a person is supposed to be doing significant things. For example, a person who spends a weekend in virtual reality treating virtual patients with virtual Ebola would certainly not merit praise and would not be acting in a virtuous way. However, the point of a vacation is amusement and restoration rather than engaging in significant actions. If virtual vacations are to be criticized because they merely entertain, then the same would apply to real vacations. After all, their purpose is also to entertain. This is not to say that people cannot do significant things while on vacation, but to focus on the point of a vacation as vacation. As such, the pig objection does not seem to have much bite here.

It could be objected that virtual vacations would fail to be as satisfying as actual vacations because they are not real. This is certainly an objection worth considering—if a virtual vacation fails as a vacation, then there would be a very practical reason not to take one. However, this is something that remains to be seen. Now, to the concerns.

One concern, which has been developed in science fiction, is that virtual vacations might prove addicting. Video games have already proven to be addicting to some people; there are even a very few cases of people literally gaming to death. While this is a legitimate concern and there will no doubt be a Virtual Reality Addicts Anonymous in the future, this is not a special objection against virtual reality—unless, of course, it proves to be destructively addicting on a significant scale. Even if it were addictive, it would presumably do far less damage than drug or alcohol addiction. In fact, this could be another point in its favor—if people who would otherwise be addicted to drugs or alcohol self-medicated with virtual reality instead, there could be a reduction in social woes and costs arising from addiction.

A second concern is that virtual vacations would have a negative impact on real tourist economies. My home state of Maine and adopted state of Florida both have tourism-based economies and if people stopped taking real vacations in favor of virtual vacations, their economies would suffer greatly. One stock reply is that when technology kills one industry, it creates a new one. In this case, the economic loss to real tourism would be offset to some degree by the economic gain in virtual tourism. States and countries could even create or license their own virtual vacation experiences. Another reply is that there will presumably still be plenty of people who will prefer real vacations to virtual ones. Even now people could spend their vacations playing video games; but most who have the money and time still choose to go on a real vacation.

A third concern is that having wondrous virtual vacations will increase people’s dissatisfaction with the tedious grind that is life for most under the economic lash of capitalism. An obvious reply is that most are already dissatisfied. Another reply is that this is more of an objection against the emptiness of capitalism for the many than an objection against virtual vacations. In any case, amusements eventually wear thin and most people actually want to return to work.

In light of the above, virtual vacations seem like a good idea. That said, many disasters are later explained by saying “it seemed like a good idea at the time.”

 


Am I my Own Demon?

Posted in Epistemology, Metaphysics, Philosophy by Michael LaBossiere on September 5, 2016

The problem of the external world is a classic challenge in epistemology (the theory of knowledge). This challenge, which was first presented by the ancient skeptics, is met by proving that what I seem to be experiencing is actually real. As an example, it would require proving that the computer I seem to be typing this on exists outside of my mind.

Some of the early skeptics generated the problem by noting that what seems real could be just a dream, generated in the mind of the dreamer. Descartes added a new element to the problem by considering that an evil demon might be causing him to have experiences of a world that does not actually exist outside of his mind. While the evil demon was said to be devoted to deception, little is said about its motive in this matter. After Descartes there was a move from supernatural to technological deceivers: the classic brain-in-a-vat scenarios that are precursors to the more recent notion of virtual reality. In these philosophical scenarios little is said about the motivation or purpose of the deceit, beyond the desire to epistemically mess with someone. Movies and TV shows do sometimes explore the motives of the deceit. The Matrix trilogy, for example, endeavors to present something of a backstory for the Matrix. While considering the motivation behind the alleged deceit might not bear on the epistemic problem, it does seem a matter worth considering.

The only viable approach to sorting out a possible motivation for the deceit is to consider the nature of the world that is experienced. As various philosophers, such as David Hume, have laid out in their formulations of the problem of evil (the challenge of reconciling God’s perfection with the existence of evil), the world seems to be an awful place. As Hume has noted, it is infested with disease, suffused with suffering, and awash in annoying things. While there are some positive things, there is an overabundance of bad, thus indicating that whatever lies behind the appearances is either not benign or not very competent. This, of course, assumes some purpose behind the deceit. But, perhaps there is deceit without a deceiver and there is no malice. This would make the unreal like what atheists claim about the allegedly real: it is purposeless. However, deceit (like design) seems to suggest an intentional agent and this implies a purpose. This purpose, if there is one, must be consistent with the apparent awfulness of the world.

One approach is to follow Descartes and go with a malicious supernatural deceiver. This being might be acting from mere malice—inflicting both deceit and suffering. Or it might be acting as an agent of punishment for past transgressions on my part. The supernatural hypothesis does have some problems, the main one being that it involves postulating a supernatural entity. Following Occam’s Razor, if I do not need to postulate a supernatural being, then I should not do so.

Another possibility is that I am in a technologically created unreal world. In terms of motives consistent with the nature of the world, there are numerous alternatives. One is punishment for some crime or transgression. A problem with this hypothesis is that I have no recollection of a crime or indication that I am serving a sentence. But, it is easy to imagine a system of justice that does not inform prisoners of their crimes during the punishment and that someday I will awaken in the real world, having served my virtual time. It is also easy to imagine that this is merely a system of torment, not a system of punishment. There could be endless speculation about the motives behind such torment. For example, it could be an act of revenge or simple madness. Or even a complete accident. There could be other people here with me; but I have no way of solving the problem of other minds—no way of knowing if those I encounter are fellow prisoners or mere empty constructs. This ignorance does seem to ground a moral approach—since they could be fellow prisoners, I should treat them as such.

A second possibility is that the world is an experiment or simulation of an awful world and I am a construct within that world. Perhaps those conducting it have no idea the inhabitants are suffering; perhaps they do not care; or perhaps the suffering is the experiment. I might even be a researcher, trapped in my own experiment. Given how scientists in the allegedly real world have treated subjects, the idea that this is a simulation of suffering has considerable appeal.

A third possibility is that the world is a game or educational system of some sort. Perhaps I am playing a very lame game of Assessment & Income Tax; perhaps I am in a simulation learning to develop character in the face of an awful world; or perhaps I am just part of the game someone else is playing. All of these are consistent with how the world seems to be.

It is also worth considering the possibility of solipsism: that I am the only being that exists. It could be countered that if I were creating the world, it would be much better for me and far more awesome. After all, I actually write adventures for games and can easily visualize a far more enjoyable and fun world. The easy and obvious counter is to point out that when I dream (or, more accurately, have nightmares), I experience unpleasant things on a fairly regular basis and have little control. Since my dreams presumably come from me and are often awful, it makes perfect sense that if the world came from me, it would be comparable in its awfulness. The waking world would be more vivid and consistent because I am awake; the dream world less so because of mental fatigue. In this case, I would be my own demon.

 


Believing What You Know is Not True

Posted in Epistemology, Philosophy, Reasoning/Logic by Michael LaBossiere on February 5, 2016

“I believe in God, and there are things that I believe that I know are crazy. I know they’re not true.”

Stephen Colbert

While Stephen Colbert ended up as a successful comedian, he originally planned to major in philosophy. His past occasionally returns to haunt him with digressions from the land of comedy into the realm of philosophy (though detractors might claim that philosophy is comedy without humor; but that is actually law). Colbert has what seems to be an odd epistemology: he regularly claims that he believes in things he knows are not true, such as guardian angels. While it would be easy enough to dismiss this claim as merely comedic, it does raise many interesting philosophical issues. The main and most obvious issue is whether a person can believe in something they know is not true.

While a thorough examination of this issue would require a deep examination of the concepts of belief, truth and knowledge, I will take a shortcut and go with intuitively plausible stock accounts of these concepts. To believe something is to hold the opinion that it is true. A belief is true, in the common sense view, when it gets reality right—this is the often maligned correspondence theory of truth. The stock simple account of knowledge in philosophy is that a person knows that P when the person believes P, P is true, and the belief in P is properly justified. The justified true belief account of knowledge has been savagely bloodied by countless attacks, but shall suffice for this discussion.
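Since the stock account is almost mechanical, it can be set out as a simple predicate. This is only a sketch with placeholder functions—it does nothing to answer the Gettier-style attacks just mentioned:

```python
def knows(agent, p, believes, is_true, is_justified):
    """Stock justified-true-belief account: S knows that P iff S believes P,
    P is true, and S's belief that P is properly justified."""
    return believes(agent, p) and is_true(p) and is_justified(agent, p)

# Toy case: the belief is held and even justified, but P is false,
# so on the JTB account it cannot count as knowledge.
believes = lambda agent, p: True
is_true = lambda p: False
is_justified = lambda agent, p: True

print(knows("S", "P", believes, is_true, is_justified))  # False
```

The conjunction is the whole point: if any one of the three conditions fails, the claim to knowledge fails with it.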

Given this basic analysis, it would seem impossible for a person to believe in something they know is not true. This would require that the person believes something is true when they also believe it is false. To use the example of God, a person would need to believe that it is true that God exists and false that God exists. This would seem to commit the person to believing that a contradiction is true, which is problematic because a contradiction is always false.

One possible response is to point out that the human mind is not beholden to the rules of logic—while a contradiction cannot be true, there are many ways a person can hold to contradictory beliefs. One possibility is that the person does not realize that the beliefs contradict one another and hence they can hold to both.  This might be due to an ability to compartmentalize the beliefs so they are never in the consciousness at the same time or due to a failure to recognize the contradiction. Another possibility is that the person does not grasp the notion of contradiction and hence does not realize that they cannot logically accept the truth of two beliefs that are contradictory.

While these responses do have considerable appeal, they do not appear to work in cases in which the person actually claims, as Colbert does, that they believe something they know is not true. After all, making this claim does require considering both beliefs in the same context and, if the claim of knowledge is taken seriously, that the person is aware that the rejection of the belief is justified sufficiently to qualify as knowledge. As such, when a person claims that they believe something they know is not true, that person would seem to be either not telling the truth or ignorant of what the words mean. Or perhaps there are other alternatives.

One possibility is to consider the power of cognitive dissonance management—a person could know that a cherished belief is not true, yet refuse to reject the belief while being fully aware that this is a problem. I will explore this possibility in the context of comfort beliefs in a later essay.

Another possibility is to consider that the term “knowledge” is not being used in the strict philosophical sense of a justified true belief. Rather, it could be taken to refer to strongly believing that something is true—even when it is not. For example, a person might say “I know I turned off the stove” when, in fact, they did not. As another example, a person might say “I knew she loved me, but I was wrong.” What they mean is that they really believed she loved him, but that belief was false.

Using this weaker account of knowledge, a person can believe in something that they know is not true. This just involves believing in something that one also strongly believes is not true. In some cases, this is quite rational. For example, when I roll a twenty sided die, I strongly believe that I will not roll a 20. However, I do also believe that I will roll a 20, and that belief has a 5% chance of being true. As such, I can believe what I know is not true—assuming that this means that I can believe in something that I believe is less likely than another belief.
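The arithmetic here is just 1/20 = 0.05, and a quick simulation (a sketch, nothing more) shows why holding both beliefs at once is not strange:

```python
import random

rolls = [random.randint(1, 20) for _ in range(100_000)]
p_twenty = rolls.count(20) / len(rolls)

print(f"P(rolling a 20)    ~ {p_twenty:.3f}")      # about 0.05, i.e. 1/20
print(f"P(not rolling one) ~ {1 - p_twenty:.3f}")  # about 0.95
# "I will not roll a 20" is well supported (95%), while "I will roll a 20"
# still has a 5% chance of being true; one can hold both, suitably weighted.
```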

People are also strongly influenced by emotional and other factors that are not based in a rational assessment. For example, a gambler might know that their odds of winning are extremely low and thus know they will lose (that is, have a strongly supported belief that they will lose) yet also strongly believe they will win (that is, feel strongly about a weakly supported belief). Likewise, a person could accept that the weight of the evidence is against the existence of God and thus know that God does not exist (that is, have a strongly supported belief that God does not exist) while also believing strongly that God does exist (that is, having considerable faith that is not based in evidence).

 


Skepticism, Locke & Games

Posted in Epistemology, Philosophy by Michael LaBossiere on September 25, 2015

In philosophy skepticism is the view that we lack knowledge. There are numerous varieties of skepticism and these are defined by the extent of the doubt endorsed by the skeptic. A relatively mild case of skepticism might involve doubts about metaphysical claims while a truly rabid skeptic would doubt everything—including her own existence.

While many philosophers have attempted to defeat the dragon of skepticism, all of these attempts seem to have failed. This is hardly surprising—skepticism seems to be unbreakable. The case for skepticism has an ancient pedigree and can be distilled down to two simple arguments.

The first goes after the possibility of justifying a belief and thus attacks the standard view that knowledge requires a belief that is true and justified. If a standard of justification is presented, then there is the question of what justifies that standard. If a justification is offered, then the same question can be raised into infinity. And beyond. If no justification is offered, then there is no reason to accept the standard.
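The regress has the shape of a recursion with no base case: every standard of justification invites the same question about itself. A purely illustrative sketch:

```python
def justify(standard, depth=0):
    """Each standard of justification faces the same question: what
    justifies it? With no base case, the chain never bottoms out
    (actually running this eventually raises RecursionError)."""
    print("  " * depth + f"What justifies {standard}?")
    return justify(f"the standard behind {standard}", depth + 1)

# justify("the evidence standard")  # uncomment to watch the regress run away
```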

A second stock argument for skepticism is that any reasonable argument given in support of knowledge can be countered by an equally reasonable argument against knowledge.  Some folks, such as the famous philosopher Chisholm, have contended that it is completely fair to assume that we do have knowledge and begin epistemology from that point. However, this seems to have all the merit of grabbing the first place trophy without actually competing.

Like all sane philosophers, I tend to follow David Hume in my everyday life: my skepticism is nowhere to be seen when I am filling out my taxes, sitting in a brain-numbing committee meeting, or having a tooth drilled. However, like a useless friend, it shows up again when it is no longer needed. As such, it would be nice if skepticism could be defeated or at least rendered irrelevant.

John Locke took a rather interesting approach to skepticism. While, like Descartes, he seemed to want to find certainty, he settled for a practical approach to the matter. After acknowledging that our faculties cannot provide certainty, he asserted that what matters to us is the ability of our faculties to aid us in our preservation and wellbeing.

Jokingly, he challenges “the dreamer” to put his hand into a furnace—this would, he claims, wake him “to a certainty greater than he could wish.” More seriously, Locke contends that our concern is not with achieving epistemic certainty. Rather, what matters is our happiness and misery. While Locke can be accused of taking an easy out rather than engaging the skeptic in a battle of certainty or death, his approach is certainly appealing. Since I happened to think through this essay while running with an injured back, I will use that to illustrate my view on this matter.

When I set out to run, my back began hurting immediately. While I could not be certain that I had a body containing a spine and nerves, no amount of skeptical doubt could make the pain go away—in regards to the pain, it did not matter whether I really had a back or not. That is, in terms of the pain it did not matter whether I was a pained brain in a vat or a pained brain in a runner on the road. In either scenario, I would be in pain and that is what really mattered to me.

As I ran, it seemed that I was covering distance in a three-dimensional world. Since I live in Florida (or what seems to be Florida) I was soon feeling quite warm and had that Florida feel of sticky sweat. I could eventually feel my thirst and some fatigue. Once more, it did not seem to really matter if this was real—whether I was really bathed in sweat or a brain bathed in some sort of nutrient fluid, the run was the same to me. As I ran, I took pains to avoid cars, trees and debris. While I did not know if they were real, I have experienced what it is like to be hit by a car (or as if I was hit by a car) and have also had experiences involving falling (or the appearance of falling). In terms of navigating through my run, it did not matter at all whether it was real or not. If I knew for sure that my run was really real for real, that would not change the run. If I somehow knew it was all an illusion that I could never escape, I would still run for the sake of the experience of running.

This, of course, might seem a bit odd. After all, when the hero of a story or movie finds out that she is in a virtual reality what usually follows is disillusionment and despair. However, my attitude has been shaped by years of gaming—both tabletop (BattleTech, Dungeons & Dragons, Pathfinder, Call of Cthulhu, and so many more) and video (Zork, Doom, Starcraft, Warcraft, Destiny, Halo, and many more). When I am pretending to be a paladin, the Master Chief, or a Guardian, I know I am doing something that is not really real for real. However, the game can be pleasant and enjoyable or unpleasant and awful. This enjoyment or suffering is just as real as enjoyment or suffering caused by what is supposed to be really real for real—though I believe it is but a game.

If I somehow knew that I was trapped in an inescapable virtual reality, then I would simply keep playing the game—that is what I do. Plus, it would get boring and awful if I stopped playing. If I somehow knew that I was in the really real world for real, I would keep doing what I am doing. Since I might be trapped in just such a virtual reality or I might not, the sensible thing to do is keep playing as if it is really real for real. After all, that is the most sensible option in every case. As such, the reality or lack thereof of the world I think I occupy does not matter at all. The play, as they say, is the thing.

 


Ex Machina & Other Minds III: The Mind of the Machine

Posted in Epistemology, Metaphysics, Philosophy by Michael LaBossiere on September 11, 2015

While the problem of other minds is a problem in epistemology (how does one know that another being has/is a mind?) there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might occur is in regards to machine intelligence, an example of which is Ava in the movie Ex Machina, and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.

Over the centuries philosophers have proposed various theories of mind and it is certainly interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (with the exception of functionalism) were developed to provide accounts of the minds of living creatures.

One classic theory of mind is identity theory. This is a materialist theory of mind in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.

If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems). This is because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But, there is a negative side. Unfortunately for classic identity theory, it has been undermined by arguments presented by Saul Kripke and by David Lewis in his classic “Mad Pain & Martian Pain.” As such, it seems reasonable to reject identity theory as an account of traditional human minds as well as machine minds.

Perhaps the best known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.

While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that a mind is a non-physical thing (often called “soul”) that drives around the physical body. While this is a popular view outside of academics, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be an immaterial entity.

That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, they could be regarded as equally implausible and hence there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.

In terms of the problem of other minds, there is the rather serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. It would first need to be established that the mind must be an immaterial substance and that this is the only means by which a being could pass these tests. It seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.

While substance dualism is the best known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct, but are not different ontological substances.

Coincidentally enough, there are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one way:  mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.

This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve it—one-way causation between the material and the immaterial is fundamentally as mysterious as two-way causation. It also seems to have the defect of making the mental properties unnecessary, and Ockham’s razor would seem to require going with the simpler view of a physical account of the mind.

As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it is no more and no less weird than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.

A second type of property dualism is interactionism. As the name indicates, this is the theory that the mental properties can bring about changes in the physical properties of the body and vice versa. That is, interaction is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much loathed metaphysical category of substance—it just requires accepting metaphysical properties. Unlike epiphenomenalism, it avoids the problem of positing explicitly useless properties—although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.

As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor—just the assertion that the view should not be dismissed from mere organic prejudice.

The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regards to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body.

While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While the identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.

For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity.  As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different, but are functionally the same in that they can run Windows and Windows software (and Linux, of course).
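The chip analogy is the multiple-realizability point, and it can be made concrete in code: two physically different implementations can satisfy one and the same functional specification. A minimal sketch (the class and method names are invented for illustration):

```python
from typing import Protocol

class Mind(Protocol):
    """A functional specification: anything with the right input/output
    profile counts as being in the state, whatever its physical substrate."""
    def feel_pleasure(self, stimulus: str) -> str: ...

class OrganicBrain:
    def feel_pleasure(self, stimulus: str) -> str:
        return f"neurons firing in response to {stimulus}"

class SiliconProcessor:
    def feel_pleasure(self, stimulus: str) -> str:
        return f"registers flipped to 'pleased' by {stimulus}"

# Physically distinct, yet functionally identical with respect to this role:
minds: list[Mind] = [OrganicBrain(), SiliconProcessor()]
for mind in minds:
    print(mind.feel_pleasure("a cat video on YouTube"))
```

For the functionalist, what makes the two states the same mental state is that both occupy the same role, just as the Intel and AMD chips both run Windows.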

As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind would be a rather plausible account of machine minds.

If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove a specific metaphysical entity or property is present. Rather, a being must be tested in order to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing Test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.

 


Ex Machina & Other Minds II: Is the Android a Psychopath?

Posted in Epistemology, Ethics, Philosophy, Technology by Michael LaBossiere on September 9, 2015

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” As in that essay, there will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and clearly passes the Cartesian test (she uses true language appropriately). But, Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking down doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to learn to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.), which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a rather positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people in order to achieve one’s goals. In the movie, Ava clearly has what might be called Manipulative Intelligence (M.Q.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—thus suggesting she is a psychopath.

While the term “psychopath” gets thrown around quite a bit, it is important to be a bit more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not get the consequences for others and for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying.

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it seems rather important to be able to determine whether a machine is a psychopath or not and to do so well before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often quite charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and are able to pass themselves off as normal people.

While Ava is a fictional android, the movie does present a rather effective appeal to intuition by creating a plausible android psychopath. She is able to manipulate and fool Caleb until she no longer needs him, and then she casually discards him. That is, she is able to pass the test right up until she no longer needs to pass it.

One matter well worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve its goals. These goals might be rather positive. For example, it is easy to imagine a medical or care-giving robot that uses its M.Q. to manipulate its patients into doing what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its M.Q. to please its partners. However, these goals might be rather negative, such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android, and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath or not matters is the same reason why being able to determine if a human is a psychopath or not matters. Roughly put, it is important to know whether or not someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints; what matters is the being's behavior. In the case of machines, it does not matter whether the machine has ethics or emotions; what really matters is programmed restraints on behavior that serve the same function (only more reliably) as ethics and emotions do in humans. The most obvious example of this is Asimov's Three Laws of Robotics, which put restraints (all but impossible to implement in practice) on robotic behavior.
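
To make the idea of programmed restraints slightly more concrete, here is a minimal sketch, in Python, of such a behavior filter, loosely modeled on the Three Laws. Everything in it is hypothetical: the action flags are mere placeholders, and reliably deciding whether an action actually harms a human is precisely the hard, unsolved part.

```python
# A hypothetical sketch of programmed restraints standing in for ethics:
# proposed actions are screened against rules, in priority order, before
# execution. The flags are placeholders; reliably computing something like
# "harms_human" is exactly the unsolved problem.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False
    disobeys_order: bool = False
    harms_self: bool = False

# Rules in priority order, loosely after Asimov's Three Laws.
RULES = [
    ("First Law",  lambda a: a.harms_human),
    ("Second Law", lambda a: a.disobeys_order),
    ("Third Law",  lambda a: a.harms_self),
]

def first_violation(action):
    """Return the name of the highest-priority rule violated, or None."""
    for name, violated in RULES:
        if violated(action):
            return name
    return None

def execute(action):
    violated = first_violation(action)
    if violated is None:
        print(f"executing: {action.name}")
    else:
        print(f"refused: {action.name} (violates {violated})")

execute(Action("open the pod bay doors"))           # executing
execute(Action("shove a human", harms_human=True))  # refused (First Law)
```

Even this toy filter makes the difficulty plain: the constraints are only as trustworthy as the predicates that feed them.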

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence without creating problems as bad as or worse than those the constraints were intended to prevent (that is, a HAL 9000 sort of situation).

In regards to testing machines, what would be needed is something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test is designed to distinguish between replicants (artificial people) and normal humans. The test works because the short-lived replicants do not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, there would be the possibility that a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.
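
A small sketch can make the worry vivid. If such a test simply scores answers against an expected key, then a responder that has merely memorized the key is, from the outside, indistinguishable from one whose answers flow from some internal state. The questions, answers and function names below are invented for illustration; they are not drawn from any real test battery.

```python
# Two responders that an answer-key test cannot tell apart: one looks up
# the expected answers, the other derives its answer from an internal
# state. Questions and key are invented for illustration.
QUESTIONS = [
    "A tortoise lies on its back in the desert sun. What do you do?",
    "A child shows you her butterfly collection. How do you feel?",
]
EXPECTED = {q: "expresses concern" for q in QUESTIONS}

def rote_responder(question):
    # Knows the right answers without having anything behind them.
    return EXPECTED[question]

def stateful_responder(question):
    # Stands in for a system whose answer comes from an internal
    # "emotional" state rather than a memorized key.
    distress = any(w in question.lower() for w in ("tortoise", "child"))
    return "expresses concern" if distress else "remains neutral"

def passes(responder):
    return all(responder(q) == EXPECTED[q] for q in QUESTIONS)

print(passes(rote_responder), passes(stateful_responder))  # True True
```

Both responders pass; the test alone does not reveal which, if either, has the right sort of inner life.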

It could be argued that since an artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would still remain the question of what is really going on in that mind.

 


Ex Machina & Other Minds I: Setup

Posted in Epistemology, Metaphysics, Philosophy, Technology by Michael LaBossiere on September 7, 2015

The movie Ex Machina is what I like to call "philosophy with a budget." While the typical philosophy professor has to present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic virtual life. This then allows philosophy professors to jealously reference such films and show clips of them in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some minor spoilers in what follows.

While the Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a much more limited set of problems, all connected to the mind. Since the film is primarily about AI, this is not surprising. The gist of the movie is that Nathan has created an AI named Ava and he wants an employee named Caleb to put her to the test.

The movie explicitly presents the test proposed by Alan Turing. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, there is a twist on the test: Caleb knows that Ava is a machine and will be interacting with her in person.
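
As a rough illustration of the original, blind version of the test, the protocol can be sketched as follows; the judge object and the reply functions are hypothetical stand-ins, not any standard implementation.

```python
# A rough sketch of the original imitation game: the judge exchanges text
# with two hidden parties and must say which is the human. The judge and
# reply functions are hypothetical stand-ins.
import random

def run_turing_test(judge, human_reply, machine_reply, rounds=5):
    repliers = [human_reply, machine_reply]
    random.shuffle(repliers)                # hide who is behind each label
    parties = dict(zip("AB", repliers))
    transcript = {label: [] for label in parties}
    for _ in range(rounds):
        for label, reply in parties.items():
            question = judge.ask(label, transcript[label])
            transcript[label].append((question, reply(question)))
    guess = judge.pick_human(transcript)    # judge names "A" or "B"
    return parties[guess] is machine_reply  # True: machine passed as human
```

The movie discards exactly the blindness this sketch depends on, which is why Nathan must change the test.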

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine whether Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, I need a method by which to know that other entities have minds. The problem can also be recast in less metaphysical terms as the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

As a test for intelligence, artificial or otherwise, this seems quite reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language we would not recognize as language, and the theoretical concern that there could be intelligence that does not use language at all. Fortunately, Ava uses English, so these problems are bypassed.

Ava easily passes the Cartesian test: she is able to reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick's Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on Earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to measure the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (Homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine whether she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as "the level of your ability to understand other people, what motivates them and how to work cooperatively with them." Less nicely, it would presumably also include knowing how to emotionally manipulate people in order to achieve one's goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the "Ava test" in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world: the Hare Psychopathy Checklist, designed by Robert Hare. It is intended to provide a way to determine whether a person is a psychopath. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).

In the next essay, I will consider the matter of testing in more depth.

 


Discussing the Shape of Things (that might be) to Come

Posted in Epistemology, Ethics, Metaphysics, Philosophy, Technology by Michael LaBossiere on July 24, 2015

One stock criticism of philosophers is their uselessness: they address useless matters or address useful matters in a way that is useless. One interesting specific variation is to criticize a philosopher for philosophically discussing matters of what might be. For example, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another example, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific sort of criticism.

One version of this sort of criticism can be seen as practical: since the shape of what might be cannot be known, philosophical discussions involve a double speculation: the first speculation is about what might be and the second is the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value—and this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second sort of criticism, one that does assume philosophy has value; indeed, that assumption is what gives the criticism its force. The basic idea is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards all philosophy as useless would regard philosophical discussion about what might be as a waste of time; responding to this view would require a general defense of philosophy, which goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should have focused on the ethical problems regarding current warfare. After all, there is a multitude of unsolved moral problems in regards to existing warfare—there hardly seems any need to add more unsolved problems until either the existing problems are solved or the possible problems become actual problems.

This does have considerable appeal. To use an analogy, if a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology might exist in the future. This is, of course, the classic "don't you have something better to do?" problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is certainly reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting time. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be a fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation would generally be harm-free. That is, it is rather unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game of Scrabble or watching a sunset. It would be preferable to have a somewhat better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives in force. To use the classic analogy, it is much easier to address a rolling snowball than the avalanche that it will cause.

In the case of speculative matters that have ethical aspects, it seems that it would be generally useful to already have moral discussions in place ahead of time. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car: it certainly seems to be a good idea to work out the ethics of how the car should be programmed when it must "decide" what to hit and what to avoid in an unavoidable accident. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems. Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It would seem to be a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.
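
To see why the driverless car is a moral rather than a merely technical matter, consider a toy sketch of crash-option selection. Once the "decision" is code, someone has to write down the harm weights, and choosing those weights is exactly the ethical question that ought to be settled in advance. The categories and numbers below are invented for illustration, not drawn from any real system.

```python
# A toy sketch of crash-option selection: the car picks the maneuver with
# the lowest expected harm. The categories and weights are invented;
# choosing them is the ethical question, not an engineering detail.
HARM_WEIGHTS = {"pedestrian": 10.0, "occupant": 8.0, "property": 1.0}

def expected_cost(harms):
    return sum(HARM_WEIGHTS[category] * amount
               for category, amount in harms.items())

def choose_maneuver(options):
    """options: a list of (maneuver name, {category: expected harm}) pairs."""
    return min(options, key=lambda option: expected_cost(option[1]))[0]

print(choose_maneuver([
    ("swerve into barrier", {"occupant": 0.5, "property": 2.0}),  # cost 6.0
    ("brake in lane",       {"pedestrian": 0.3}),                 # cost 3.0
]))  # prints "brake in lane" under these invented weights
```

Change the weights and the car's "choice" changes with them, which is the point: the moral framework lives in the numbers.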

Philosophers also like to discuss what might be in contexts other than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have practical implications that matter, even (or especially) in regards to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be the same person across time? While this might seem a purely theoretical concern, it quickly becomes very practical when one is discussing the above-mentioned technology. For example, consider a company that offers a special sort of life insurance: it claims it can back up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether the restored back-up would be you is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is a rather different matter from paying so that someone who thinks they are you can go to your house and have sex with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation of what might be—in fact, I have already written many of these in previous posts. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a practical sense.

 
