One of the oldest problems in philosophy is that of the external world. It presents an epistemic challenge forged by the skeptics: how do I know that what I seem to be experiencing as the external world is really real for real? Early skeptics often claimed that what seems real might be just a dream. Descartes upgraded the problem through his evil genius/demon, which used either psionic or supernatural powers to befuddle its victim. As technology progressed, philosophers presented the brain-in-a-vat scenarios and then moved on to more impressive virtual reality scenarios. One recent variation on this problem has been made famous by Elon Musk: the idea that we are characters within a video game and merely think we are in a real world. This is, of course, a variation on the idea that this apparent reality is just a simulation. There is, interestingly enough, a logically strong inductive argument for the claim that this is a virtual world.
One stock argument for the simulated world is built in the form of an inductive argument generally known as the statistical syllogism. It is statistical because it reasons from a statistical generalization. It is a syllogism by definition: it has two premises and one conclusion. Generically, a statistical syllogism looks like this:
Premise 1: X% of As are Bs.
Premise 2: This is an A.
Conclusion: This is a B.
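Read as a claim about random sampling, this pattern can be sketched with a quick Monte Carlo simulation (a hypothetical illustration; the percentages and trial count are my own choices, not part of the argument): the higher the percentage in Premise 1, the more often the conclusion comes out true for a randomly chosen A.

```python
import random

def conclusion_true_rate(percent_as_that_are_bs, trials=100_000):
    """Estimate how often 'this A is a B' comes out true when an A
    is picked at random and the given percentage of As are Bs."""
    hits = sum(random.random() < percent_as_that_are_bs / 100
               for _ in range(trials))
    return hits / trials

# Strength tracks the percentage in Premise 1.
for pct in (55, 75, 95):
    rate = conclusion_true_rate(pct)
    print(f"{pct}% of As are Bs -> conclusion true about {rate:.0%} of the time")
```

Nothing here is deep: the sketch just makes vivid that the argument form is reliable exactly to the degree that Premise 1's percentage is high.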
The quality (or strength, to use the proper term) of this argument depends on the percentage of As that are Bs. The higher the percentage, the stronger the argument. This makes good sense: the more As that are Bs, the more reasonable it is to conclude that a specific A is a B. Now, to the simulation argument.
Premise 1: Most worlds are simulated worlds.
Premise 2: This is a world.
Conclusion: This is a simulated world.
While “most” is a vague term, the argument is more strong than weak: if its premises are true, then the conclusion is more likely to be true than false. Before embracing your virtuality, it is worth considering a rather similar argument:
Premise 1: Most organisms are bacteria.
Premise 2: You are an organism.
Conclusion: You are a bacterium.
Like the previous argument, the truth of the premises makes the conclusion more likely to be true than false. However, you are almost certainly not a bacterium. This does not show that the argument itself is flawed. After all, the reasoning is quite good and any organism selected truly at random would most likely be a bacterium. Rather, it indicates that when considering the truth of a conclusion, one must consider the total evidence. That is, information about the specific A must be considered when deciding whether or not it is actually a B. In the bacteria example, there are obviously facts about you that would count against the claim that you are a bacterium—such as the fact that you are a multicellular organism.
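The total-evidence point can be restated in Bayesian terms: the base rate (most organisms are bacteria) is only a prior, and specific evidence about the individual can overwhelm it. A small sketch with made-up numbers (all the probabilities below are assumptions for illustration, not real biological statistics):

```python
def posterior_bacterium(prior, p_evidence_if_bacterium, p_evidence_if_not):
    """Bayes' rule: P(bacterium | evidence)."""
    numerator = p_evidence_if_bacterium * prior
    denominator = numerator + p_evidence_if_not * (1 - prior)
    return numerator / denominator

# Prior: say 99% of organisms are bacteria (illustrative figure).
# Evidence: the organism is multicellular; bacteria essentially never are.
print(posterior_bacterium(prior=0.99,
                          p_evidence_if_bacterium=0.0,
                          p_evidence_if_not=1.0))  # -> 0.0
```

With no specific evidence, the posterior just equals the base rate; once the multicellular evidence is factored in, the bacterium hypothesis collapses—matching the point that you are almost certainly not a bacterium despite the strong statistical syllogism.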
Turning back to the simulation argument, the same consideration is in play. If it is true that most worlds are simulations, then any random world is more likely to be a simulation than not. However, the claim that this specific world is a simulation would require due consideration of the total evidence: what evidence is there that this specific world is a simulation rather than real? This reverses the usual challenge of proving that the world is real to trying to prove it is not real. At this point, there seems to be little in the way of evidence that this is a simulation. Using the usual fiction examples, we do not seem to find glitches that would be best explained as programming bugs, we do not seem to encounter outsiders from reality, and we do not run into some sort of exit system (like the Star Trek holodeck). Naturally, this is all consistent with this being a simulation—it might be well programmed, the outsider might never be spotted (or never go into the system) and there might be no way out. At this point, the most reasonable position is that the simulation claim is at best on par with the claim that the world is real—all the evidence is consistent with both accounts. There is, however, still the matter of the truth of the premises in the simulation argument.
The second premise seems true—whatever this is, it seems to be a world. It seems fine to simply grant this premise. As such, the first premise is the key—while the logic of the argument is good, if the premise is not plausible then it is not a good argument overall.
The first premise is usually supported by its own stock argument. The reasoning includes the points that the real universe contains large numbers of civilizations, that many of these civilizations are advanced and that enough of these advanced civilizations create incredibly complex simulations of worlds. Alternatively, it could be claimed that there are only a few (or just one) advanced civilizations but that they create vast numbers of complex simulated worlds.
The easy and obvious problem with this sort of reasoning is that it requires making claims about an external real world in order to try to prove that this world is not real. If this world is taken to not be real, there is no reason to think that what seems true of this world (that we are developing simulations) would be true of the real world (that they developed super simulations, one of which is our world). Drawing inferences from what we think is a simulation to a greater reality would be like the intelligent inhabitants of a Pac Man world trying to draw inferences from their game to our world. This would be rather problematic.
There is also the fact that it seems simpler to accept that this world is real rather than making claims about a real world beyond this one. After all, the simulation hypothesis requires accepting a real world on top of our simulated world—why not just have this be the real world?
On an episode of the Late Show, host Stephen Colbert and Jane Lynch had an interesting discussion of guardian angels. Lynch, who currently stars as a guardian angel in “Angel from Hell”, related a story of how her guardian angel held her in a protective embrace during a low point of her life. Colbert, ever the rational Catholic, noted that he believed in guardian angels despite knowing that they do not exist. The question of the existence of guardian angels is certainly an interesting one and provides yet another way to consider the classic problem of evil.
In general terms, a guardian angel is a supernatural, benevolent being who serves as the personal protector of someone. The nature of their alleged guarding varies considerably. For some, the guardian angel is supposed to serve in the classic “angel on the shoulder” role and provide good advice. For others, the angel provides a comforting presence. Some even claim that guardian angels take a very active role, such as reducing a potentially fatal fall to one that merely inflicts massive bodily injury. My interest is, however, not with the specific functions of guardian angels, but with the question of their existence.
In the context of monotheism, a guardian angel is an agent of God. As such, this ties them into the problem of evil. The general problem of evil is the challenge of reconciling the alleged existence of God with the existence of evil. Some take this problem to decisively show that God does not exist. Others contend that it shows that God is not how philosophers envision Him in the problem—that is, He is not omniscient, omnibenevolent or omnipotent. In the case of guardian angels, the challenge is to reconcile their alleged existence with evil.
One merely has to look through the news of the day to see a multitude of cases in which a guardian angel could have saved the day with fairly little effort. For example, a guardian angel could inform the police about the location of a kidnapped child. As another example, a guardian angel could exert a bit of effort to keep a ladder from slipping. They could also do more difficult things, like preventing cancer from killing children or deflecting bullets away from school children. Since none of this ever seems to happen, one obvious conclusion is that there are no guardian angels.
However, as with the main problem of evil, there are some ways to try to address this specific problem. One option, which is not available in the case of God, is to argue that guardian angels have very limited capabilities—that is, they are incredibly weak supernatural beings. Alternatively, they might operate under very restrictive rules in terms of what they are allowed to do. One problem with this reply is that such weak angels seem indistinguishable in their effects from non-existent angels. Another problem ties this into the broader problem of evil: why wouldn’t God deploy a better sort of guardian or give them broader rules to operate under? This, of course, just brings up the usual problem of evil.
Another option is that not everyone gets an angel. Jane Lynch, for example, might get an angel that hugged her. Alan Kurdi, the young boy who drowned trying to flee Syria, did not get a guardian angel. While this would be an explanation of sorts, it still just pushes the problem back: why would God not provide everyone in need with a guardian? Mere humans are, of course, limited in their resources and abilities, so everyone cannot be protected all the time. However, God would not seem to suffer from such a limitation.
It is also possible to make use of a stock reply to the problem of evil and bring in the Devil. Perhaps Lucifer deploys his demonic agents to counter the guardian angels. So, when something bad happens to a good person, it is because her guardian angel was outdone by a demon. While this has a certain appeal, it would require a world in which God and the Devil are closely matched so that the Devil can defy God and His angels. This, of course, just brings in the general problem of evil: unless one postulates two roughly equal deities, God is on the hook for the Devil and his demons. Or rather, God’s demons.
As should be expected, guardian angels seem to fare no better than God in regards to the problem of evil. That said, the notion of benevolent, supernatural personal guardians predates monotheism. Socrates, for example, claimed to have a guardian who would warn him of bad choices (which Stephen Colbert also claims to have).
These sorts of guardians were not claimed to be agents of a perfect being, and as such they avoid the problem of evil. Supernatural beings that are freelancers or who serve a limited deity can reasonably be expected to be limited in their abilities, and it would certainly make sense that not everyone would have a guardian. Conflict between opposing supernatural agencies also makes sense, since there is no postulation of a single supreme being.
While these supernatural guardians do avoid the problem of evil, they run up against the problem of evidence: there does not appear to be adequate evidence for the existence of such supernatural beings. In fact, the alleged evidence for them is better explained by alternatives. For example, a little voice in one’s head is better explained in terms of the psychological rather than the supernatural (a benign mental condition rather than a supernatural guardian). As another example, a fall that merely badly injures a person rather than killing them is better explained in terms of the vagaries of chance than in terms of a conscious, supernatural intervention.
Given the above discussion, there seems to be little reason to believe in the existence of guardian angels. The world would be rather different if they did exist, so clearly they do not. Or they do so little as to make no meaningful difference—which is rather hard to distinguish from not existing.
I certainly do not begrudge people their belief in guardian angels—if that belief leads them to make better choices and feel safer in a dangerous world, then it is a benign belief. I certainly have comfort beliefs as well—as we all do. Perhaps these are our guardian angels. This, obviously, points to another discussion about such beliefs.
Thanks to movies and TV shows such as The Time Machine, Doctor Who and Back to the Future, it is easy to picture what time travel might look like: a person or people get into a machine, some cool stuff happens (coolness is proportional to the special effects budget) and the machine vanishes. It then reappears in the past or the future (without all that tedious mucking about in the time between now and then).
Thanks to philosophers, science fiction writers and scientists, there are enough problems and paradoxes regarding time travel to keep thinkers pontificating until after the end of time. I will not endeavor to solve any of these problems or paradoxes here. Rather, I will add yet another time travel scenario to the stack.
Imagine that a human research team has found a time gate on a desolate alien world. The scientists have figured out how to use the gate, at least well enough to send people back and forth through time. They also learned that the gate compensates for motion of the planet in space, thus preventing potentially fatal displacements.
As is always the case, there are nefarious beings who wish to seize the gate for their own diabolical purposes. Perhaps they want to change the timeline so that instead of one really good Terminator movie and a decent second one, there are many very bad Terminator movies in the new timeline. Or perhaps they want to do even worse deeds.
Unfortunately for the good guys, the small expedition has only one trained soldier, Sergeant Vasquez, and she has only a limited supply of combat gear. While the other team members will fight bravely, they know they would be no match for the nefarious beings. What they need is an army, but all they have is a time gate and one soldier.
The scientists consider using the gate to go far back in time in the hopes of recruiting aid from the original inhabitants of the world. Obvious objections are raised against this proposal, such as the concern that the original inhabitants might be worse than the nefarious beings or the possibility that the time travelers might simply be arrested and locked up.
Just as all seemed lost, the team historian recalled an ancient marketing slogan, “Army of One.” He realized that this silly marketing tool could be made into a useful reality. The time gate could be used to multiply the soldier into a true army of one. The team philosopher raised the objection that this sort of thing should not be possible, since it would require that a particular being, namely Vasquez, be multiply located. That is, she would be in different places at the same time. That sort of madness, the philosopher pointed out, was something only metaphysical universals could pull off. One of the scientists pointed out that they had used the gate to send things back and forth in time, which resulted in just that sort of multiple location. After all, a can of soda sent back in time twenty days would be a certain distance from that same soda of twenty days ago. So, multiple location was obviously something that particulars could do—otherwise time travel would be impossible. Which it clearly was not.
The team philosopher, fuming a bit, raised the objection that this was all well and good with cans of soda, because they were not people. Having the same person multiply located would presumably do irreversible damage to most theories of personal identity. The team HR expert cleared her throat and brought up the practical matter of paychecks, benefits, insurance and other such concerns. Vasquez’s husband was caught smiling a mysterious smile, which he quickly wiped off his face when he noticed other team members noticing. The philosopher then played a final card: if we had sent Vasquez back repeatedly in time, we’d have our army of one right now. I don’t see that army. So it can’t work.
Vasquez, a practical soldier, settled the matter—she told the head scientist to set the gate to take her well back before the expedition arrived. She would then use the gate to “meet herself” repeatedly until she had a big enough army to wipe out the invaders.
As she headed towards the gate with her gear, she said “I’ll go hide someplace so you won’t see me. Then I’ll ambush the nefarious invaders. We can sort things out afterwards.” The philosopher muttered, but secretly thought it was a pretty good idea.
The team members were very worried when the nefarious invaders arrived but were very glad to see an army of Vasquez rush from hiding to shoot the hell out of them. After cleaning up the mess, one of the Vasquezes asked, “So what do I do now? There is an army of me and a couple of me got killed in the fight. Do I try to sort it out by going back through the gate one me at a time or what?”
The HR expert looked very worried—it had been great when the army of one showed up, but the budget would not cover paying the army. But, thought the expert, Vasquez is still technically and legally one person. She could make it work…unless Vasquez got mad enough to shoot her.
And this, my friend, may be conceived to be that heavy, weighty, earthy element of sight by which such a soul is depressed and dragged down again into the visible world, because she is afraid of the invisible and of the world below—prowling about tombs and sepulchers, in the neighborhood of which, as they tell us, are seen certain ghostly apparitions of souls which have not departed pure, but are cloyed with sight and therefore visible.
While ghosts have long haunted the minds of humans, philosophers have said relatively little about them. Plato, in the Phaedo, did briefly discuss ghosts in the context of the soul. Centuries later, my “Ghosts & Minds” manifested in the Philosophers’ Magazine and then re-appeared in my What Don’t You Know? In the grand tradition of horror movie remakes, I have decided to re-visit the ghosts of philosophy and write about them once more.
The first step in these ghostly adventures is laying out a working definition of “ghost.” In the classic tales of horror and in role-playing games such as Call of Cthulhu and Pathfinder, ghosts are undead manifestations of souls that once inhabited living bodies. These ghosts are incorporeal or, in philosophical terms, they are immaterial minds. In the realm of fiction and games, there is a variety of incorporeal undead: ghosts, shadows, wraiths, specters, poltergeists, and many others. I will, however, stick with a basic sort of ghost and not get bogged down in the various subspecies of spirits.
A basic ghost has to possess certain qualities. The first is that a ghost must have lost its original body due to death. The second is that a ghost must retain the core metaphysical identity it possessed in life. That is, the ghost of a dead person must still be that person and the ghost of a dead animal must still be that animal. This is to distinguish a proper ghost from a mere phantasm or residue. A ghost can, of course, have changes in its mental features. For example, some fictional ghosts become single-mindedly focused on revenge and suffer a degradation of their more human qualities. The third requirement is that the ghost must not have a new “permanent” body (this would be reincarnation), although temporary possession would not count against this. The final requirement is that the ghost must be capable of interacting with the physical world in some manner. This might involve being able to manifest to the sight of the living, to change temperatures, to cause static on a TV, or to inflict a bizarre death. This condition can be used to distinguish a ghost from a spirit that is in a better (or worse) place. After all, it would be odd to say that Heaven is haunted. Or perhaps not.
While the stock ghost of fiction and games is an incorporeal entity (an immaterial mind), it should not be assumed that this is a necessary condition for being a ghost. This is to avoid begging the question against non-dualist accounts of ghosts. Now that the groundwork has been put in place, it is time to move on to the ghosts.
The easy and obvious approach to the ghosts of philosophy is to simply stick with the stock ghost. This ghost, as noted above, fits in nicely with classic dualism. This is the philosophical view that there are two basic metaphysical kinds: the material stuff (which might be a substance or properties) and the immaterial stuff. Put in everyday terms, these are the body and the mind/spirit/soul.
On this view, a ghost would arise upon the death of a body that was inhabited by a mind. Since the mind is metaphysically distinct from the body, it would be possible for it to survive the death of the body. Since the mind is the person, the ghost would presumably remain a person—though being dead might have some psychological impact.
One of the main problems for dualism is the mind-body problem, which rather vexed the dualist Descartes. This is the mystery of how the immaterial mind interacts with the material body. While this is rather mysterious, the interaction of the disembodied mind with the material world is really not any more mysterious. After all, if the mind can work the levers of the brain, it could presumably interact with other material objects. Naturally, it could be objected that the mind needs a certain sort of matter to work with—but the principle of ghosts interacting with the world is no more mysterious than that of the immaterial mind interacting with the material body.
Non-dualist metaphysical views would seem to have some clear problems with ghosts. One such view is philosophical materialism (also known as physicalism). Unlike everyday materialism, this is not a love of fancy cars, big houses and shiny bling. Rather, it is the philosophical view that all that exists is material. This view explicitly denies the existence of immaterial entities such as spirits and souls. There can still be minds—but they must be physical in nature.
On the face of it, materialism would seem to preclude the existence of ghosts. After all, if the person is her body, then when the body dies, that is the end of the person. As such, while materialism is consistent with corporeal undead such as zombies, ghouls and vampires, ghosts would seem to be out. Or are they?
One approach is to accept the existence of material ghosts—the original body dies and the mind persists as some sort of material object. This might be the ectoplasm of fiction or perhaps a fine cloud. It might even be a form of energy that is properly material. These would be material ghosts in a material world. Such material ghosts would presumably be able to interact with the other material objects—though this might be rather limited.
Another approach is to accept the existence of functional ghosts. One popular theory of mind is functionalism, which seems to be the result of thinking that the mind is rather like a computer. For a functionalist a mental state, such as being afraid of ghosts, is defined in terms of causal relations it holds to external influences, other mental states, and bodily behavior. Rather crudely put, a person is a set of functions and if those functions survived the death of the body and were able to interact in some manner with the physical world, then there could be functional ghosts. Such functional ghosts might be regarded as breaking one of the ghost rules in that they might require some sort of new body, such as a computer, a house, or a mechanical shell. In such cases, the survival of the function set of the dead person would be a case of reincarnation—although there is certainly a precedent in fiction for calling such entities “ghosts” even when they are in shells.
Another option, which would still avoid dualism, is for the functions to be instantiated in a non-physical manner (using the term “physical” in the popular sense). For example, the functional ghost might exist in a field of energy or a signal being broadcast across space. While still in the material world, such entities would be bodiless in the everyday meaning of the term and this might suffice to make them ghosts.
A second and far less common form of monism (the view that there is but one type of metaphysical stuff) is known as idealism or phenomenalism. This is not because the people who believe it are idealistic or really phenomenal. Rather, this is the view that all that exists is mental in nature. George Berkeley (best known as the “if a tree falls in the forest…” guy) held to this view. As he saw it, reality is composed of minds (with God being the supreme mind) and what we think of as bodies are just ideas in the mind.
Phenomenalism would seem to preclude the existence of ghosts—minds never have bodies and hence can never become ghosts. However, the idealists usually provide some account for the intuitive belief that there are bodies. Berkeley, for example, claims that the body is a set of ideas. As such, the death of the body would be a matter of having death ideas about the ideas of the body (or however that would work). Since the mind normally exists without a material body, it could easily keep on doing so. And since the “material objects” are ideas, they could be interacted with by idea ghosts. So, it all works out.
While the truly classic werewolf is a human with the ability to shift into the shape of a wolf, the movie versions typically feature a transformation to a wolf-human hybrid. The standard werewolf has a taste for human flesh, a vulnerability to silver and a serious shedding problem. Some werewolves have impressive basketball skills, but that is not a stock werewolf ability.
There have been various and sundry efforts to explain the werewolf myths and legends. Some of the scientific (or at least pseudo-scientific) explanations involve specific forms of mental illness or disease. On these accounts, the werewolf does not actually transform into a wolf-like creature. The werewolf is merely a very unfortunate person. These non-magical werewolves are certainly possible, but are far more tragic than horrific.
There are also many supernatural accounts for werewolves—many involve vague references to curses. In many tales, the condition can be transmitted—perhaps by a bite or, in modern times, even by texting. These magical beasts are certainly not possible—unless, of course, this is a magical world.
There has even been some speculation about future technology based shifters—perhaps by some sort of nanotechnology that can rapidly re-structure a living creature without killing it. But, these would be werewolves of science fiction.
Interestingly enough, there could also be philosophical werewolves (which, to steal from Adventure Time, could be called “whywolves”) that have a solid metaphysical foundation. Well, as solid as metaphysics gets.
Our good dead friend Plato (who was probably not a werewolf) laid out a theory of Forms. According to Plato, the Forms are supposed to be eternal, perfect entities that exist outside of space and time. As such, they are even weirder than werewolves. However, they neither shed nor consume the flesh of humans, so they do have some positive points relative to werewolves.
For Plato, all the particular entities in this imperfect realm are what they are in virtue of their instantiation of various Forms. This is sometimes called “participation”, perhaps to make the particulars sound like they have civic virtue. To illustrate this with an example, my husky Isis is a husky because she participates in the form of Husky. This is, no doubt, among the noblest and best of the dog forms. Likewise, Isis is furry because she instantiates the form of Fur (and shares this instantiation with all things she contacts—such is the vastness of her generosity).
While there is some pretty nice stuff here in the world, it is sadly evident that all the particulars lack perfection. For example, while Donald Trump’s buildings are clearly quality structures, they are not perfect buildings. Likewise, while he does have a somewhat orange color, he does not possess perfect Orange (John Boehner is closer to the Form of Orange, yet still lacks perfection).
Plato’s account of the imperfection of particulars, like Donald Trump, involves the claim that particulars instantiate or participate in the Forms in varying degrees. When explaining this to my students, I usually use the example of photocopies of varying quality—perhaps arising because of issues with the toner. The original that is copied is analogous to the Form while the copies of varying quality are analogous to the various particulars. Another example could be selfies taken of a person using cameras of various qualities. I find that the cool kids relate more to selfies than to photocopies.
Plato also asserts that particulars can instantiate or participate in “contrasting” Forms. He uses the example of how things here in the earthly realm have both Beauty and Ugliness, thus they lack perfect Beauty. To use a more specific example, even the most attractive supermodel still has flaws. As such, a person’s beauty (or ugliness) is a blend of Beauty and Ugliness. Since people can look more or less beautiful over time (time can be very mean as can gravity), this mix can shift—the degree of participation or instantiation can change. This mixing and shifting of instantiation can be used to provide a Platonic account of werewolves (which is not the same as having a Platonic relation with a werewolf).
If the huge assumptions are made that a particular is what it is because it instantiates various Forms and that the instantiations of Forms can be mixed or blended in a particular, then werewolves can easily be given a metaphysical explanation in the context of Forms.
For Plato, a werewolf would be a particular that instantiated the Form of Man but also the Form of Wolf. As such, the being would be part man and part wolf. When the person is participating most in the Form of Man, they would appear (and act) human. However, when the Form of Wolf became dominant, their form and behavior would shift towards that of the wolf.
Plato mentions the Sun in the Allegory of the Cave as well as the light of the moon. So it seems appropriate that the moon (which reflects the light of the sun) is credited in many tales with triggering the transformation from human to wolf. Perhaps since, as Aristotle claimed, humans are rational animals, the direct light of the sun means that the human Form is dominant. The reflected light of the full moon would, at least in accord with something I just made up, result in a distortion of reason and thus allow the animal Form of Wolf to dominate. There can also be a nice connection here to Plato’s account of the three-part soul: when the Wolf is in charge, reason is mostly asleep.
While it is the wolf that usually takes the blame for the evil of the werewolf, it seems more plausible that this evil comes from the Form of Man. After all, research on wolves has shown that they have been given a bad rap. So, whatever evil is in the werewolf comes from the human part. The howling, though, is all wolf.
As a gamer and horror fan I have an undecaying fondness for zombies. Some years back, I was intrigued to learn about philosophical zombies—I had a momentary hope that my fellow philosophers were doing something…well…interesting. But, as so often has been the case, professional philosophers managed to suck the life out of even the already lifeless. Unlike proper flesh devouring products of necromancy or mad science, philosophical zombies lack all coolness.
To bore the reader a bit, philosophical zombies are beings who look and act just like normal humans, but lack consciousness. They are no more inclined to seek the brains of humans than standard humans, although discussions of them can numb the brain. Rather than causing the horror proper to zombies (or the joy of easy XP), philosophical zombies merely bring about a feeling of vague disappointment. This is the same sort of disappointment that you might recall from childhood trick or treating when someone gave you pennies or an apple rather than real candy.
Rather than serving as creepy cannon fodder for vile necromancers or metaphors for vacuous and excessive American consumerism, philosophical zombies serve as victims in philosophical discussions about the mind and consciousness.
The dullness of current philosophical zombies does raise an important question—is it possible to have a philosophical discussion about proper zombies? There is also a second and equally important question—is it possible to have an interesting philosophical discussion about zombies? As I will show, the answers are “yes” and “obviously not.”
Since there is, at least in this world, no Bureau of Zombie Standards and Certification, there are many varieties of zombies. In my games and fiction, I generally define zombies in terms of beings that are biologically dead yet animated (or re-animated, to be more accurate). Traditionally, zombies are “mindless” or at least possess extremely basic awareness (enough to move about and seek victims).
In fiction, many beings called “zombies” do not have these qualities. The zombies in 28 Days Later are “mindless”, but are still alive. As such, they are not really zombies at all—just infected people. The zombies in Return of the Living Dead are dead and re-animated, but retain their human intelligence. Zombie lords and juju zombies in D&D and Pathfinder are dead and re-animated, but are intelligent. In the real world, there are also what some call zombies—these are organisms taken over and controlled by another organism, such as an ant controlled by a rather nasty fungus. To keep the discussion focused and narrow, I will stick with what I consider proper zombies: biologically dead, yet animated. While I generally consider zombies to be unintelligent, I do not consider that a definitive trait. For folks concerned about how zombies differ from other animate dead, such as vampires and ghouls, the main difference is that stock zombies lack the special powers of more luxurious undead—they have the same basic capabilities as the living creature (mostly moving around, grabbing and biting).
One key issue regarding zombies is whether or not they are possible. There are, of course, various ways to “cheat” in creating zombies—for example, a mechanized skeleton could be embedded in dead flesh to move the flesh about. This would make a rather impressive horror weapon—so look for it in a war coming soon. Another option is to have a corpse driven about by another organism—wearing the body as a “meat suit.” However, these would not be proper zombies since they are not self-propelling—just being moved about by something else.
In terms of “scientific” zombies, the usual approaches include strange chemicals, viruses, fungi or other such means of animation. Since it is well-established that electrical shocks can cause dead organisms to move, getting a proper zombie would seem to be an engineering challenge—although making one work properly could require substantial “cheating” (for example, having computerized control nodes in the body that coordinate the manipulation of the dead flesh).
A much more traditional means of animating corpses is via supernatural means. In games like Pathfinder, D&D and Call of Cthulhu, zombies are animated by spells (the classic being animate dead) or by an evil spirit occupying the flesh. In the D&D tradition, zombies (and all undead) are powered by negative energy (while living creatures are powered by positive energy). It is this energy that enables the dead flesh to move about (and violate the usual laws of biology).
While the idea of negative energy is mostly a matter of fantasy games, the notion of unintelligent animating forces is not unprecedented in the history of science and philosophy. For example, Aristotle seems to have considered that the soul (or perhaps a “part” of it) served to animate the body. Past thinkers also considered forces that would animate non-living bodies. As such, it is easy enough to imagine a similar sort of force that could animate a dead body (rather than returning it to life).
The magic “explanation” is the easiest approach, in that it is not really an explanation. It seems safe to hold that magic zombies are not possible in the actual world—though all the zombie stories and movies show it is rather easy to imagine possible worlds inhabited by them.
The idea of a truly dead body moving around in the real world the way fictional zombies do in their fictional worlds does seem somewhat hard to accept. After all, it seems essential to biological creatures that they be alive (to some degree) in order for them to move about under their own power. What would be needed is some sort of force or energy that could move truly dead tissue. While this is clearly conceivable (in the sense that it is easy to imagine), it certainly does not seem possible—at least in this world. Dualists might, of course, be tempted to consider that the immaterial mind could drive the dead shell—after all, this would only be marginally more mysterious than the ghost driving around a living machine. Physicalists, of course, would almost certainly balk at proper zombies—at least until the zombie apocalypse. Then they would be running.
While the problem of other minds is a problem in epistemology (how does one know that another being has/is a mind?) there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might occur is in regards to machine intelligence, an example of which is Ava in the movie Ex Machina, and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.
Over the centuries philosophers have proposed various theories of mind and it is certainly interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (with the exception of functionalism) were developed to provide accounts of the minds of living creatures.
One classic theory of mind is identity theory. This is a materialist theory of mind in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.
If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems). This is because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But, there is a negative side. Unfortunately for classic identity theory, it has been undermined by arguments presented by Saul Kripke and by David Lewis in his classic “Mad Pain and Martian Pain.” As such, it seems reasonable to reject identity theory as an account of traditional human minds as well as machine minds.
Perhaps the best known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.
While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that a mind is a non-physical thing (often called “soul”) that drives around the physical body. While this is a popular view outside of academia, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be an immaterial entity.
That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, they could be regarded as equally implausible and hence there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.
In terms of the problem of other minds, there is the rather serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. It would first need to be established that the mind must be an immaterial substance and that this is the only means by which a being could pass these tests. It seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.
While substance dualism is the best known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct, but are not different ontological substances.
As it happens, there are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one way: mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.
This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve the problem—one way causation between the material and the immaterial is fundamentally as mysterious as two way causation. It also seems to have the defect of making the mental properties unnecessary and Ockham’s razor would seem to require going with the simpler view of a physical account of the mind.
As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it seems no more or less weird than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.
A second type of property dualism is interactionism. As the name indicates, this is the theory that the mental properties can bring about changes in the physical properties of the body and vice versa. That is, interaction is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much loathed metaphysical category of substance—it just requires accepting metaphysical properties. Unlike epiphenomenalism it avoids the problem of positing explicitly useless properties—although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.
As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor—just the assertion that the view should not be dismissed from mere organic prejudice.
The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regards to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body.
While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While the identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.
For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity. As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different, but are functionally the same in that they can run Windows and Windows software (and Linux, of course).
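The chip analogy can be put in code form. The sketch below is a toy illustration of multiple realizability under my own hypothetical names (NeuronCluster, SiliconCircuit, and so on are made up for the example, not drawn from any real system): two physically different “realizers” have the same input/output profile, so a functionalist would count them as being in the same state.

```python
# Toy illustration of multiple realizability: two physically different
# "realizers" play the same functional role. All names are hypothetical.

class NeuronCluster:
    """An 'organic' realizer of the pleasure role."""
    def respond(self, stimulus: str) -> str:
        # Physically: neurons firing. Functionally: "smile" at cat videos.
        return "smile" if stimulus == "cat video" else "neutral"

class SiliconCircuit:
    """A 'machine' realizer of the same role."""
    def respond(self, stimulus: str) -> str:
        # Physically quite different, but the input/output profile is identical.
        return "smile" if stimulus == "cat video" else "neutral"

def functionally_same(a, b, inputs) -> bool:
    """Functional identity: same outputs for the same inputs."""
    return all(a.respond(i) == b.respond(i) for i in inputs)

print(functionally_same(NeuronCluster(), SiliconCircuit(),
                        ["cat video", "tax form"]))  # True
```

As with the Intel and AMD chips, nothing in the comparison looks at what the realizers are made of—only at what they do.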
As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind would be a rather plausible account of machine minds.
If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove a specific metaphysical entity or property is present. Rather, a being must be tested in order to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing Test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.
The movie Ex Machina is what I like to call “philosophy with a budget.” While the typical philosophy professor has to present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic virtual life. This then allows philosophy professors to jealously reference such films and show clips of them in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some minor spoilers in what follows.
While The Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a much more limited set of problems, all connected to the mind. Since the film is primarily about AI, this is not surprising. The gist of the movie is that Nathan has created an AI named Ava and he wants an employee named Caleb to put her to the test.
The movie explicitly presents the test proposed by Alan Turing. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, there is a twist on the test: Caleb knows that Ava is a machine and will be interacting with her in person.
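The blind protocol of the original test can be sketched as a small simulation (a toy of my own devising; the `run_trials` setup and all the names are hypothetical, not Turing's own formulation): in each trial the judge sees two anonymized transcripts and must pick the human, and a machine “passes” when judges do no better than chance.

```python
import random

def run_trials(judge, human, machine, questions, n=1000):
    """Blind protocol sketch: each trial shuffles the two respondents; the
    judge sees only the transcripts and picks which one it thinks is human.
    Chance-level accuracy (about 50%) means the machine passes."""
    correct = 0
    for _ in range(n):
        pair = [("human", human), ("machine", machine)]
        random.shuffle(pair)  # judge must not know which transcript is which
        transcripts = [[responder(q) for q in questions] for _, responder in pair]
        guess = judge(transcripts)  # judge returns index 0 or 1
        if pair[guess][0] == "human":
            correct += 1
    return correct / n

# If the machine's answers are indistinguishable from the human's,
# no judge can beat chance.
human = lambda q: "a thoughtful reply"
machine = lambda q: "a thoughtful reply"
judge = lambda transcripts: 0  # this judge can only guess
rate = run_trials(judge, human, machine, ["How do you feel today?"])
```

The shuffling is the whole point of the protocol: it is what makes the test a test of behavior rather than of appearance, which is exactly the feature the movie discards.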
In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.
What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.
Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:
How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.
As a test for intelligence, artificial or otherwise, this seems to be quite reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language that we would not recognize as language and there is the theoretical concern that there could be intelligence that does not use language. Fortunately, Ava uses English and these problems are bypassed.
Ava easily passes the Cartesian test: she is able to reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than just the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.
Since Caleb knows that Ava is not a human (homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine if she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people in order to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.
While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. This is intended to provide a way to determine if a person is a psychopath or not. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether or not Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).
In the next essay, I will consider the matter of testing in more depth.
One stock criticism of philosophers is their uselessness: they address useless matters or address useful matters in a way that is useless. One interesting specific variation is to criticize a philosopher for philosophically discussing matters of what might be. For example, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another example, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific sort of criticism.
One version of this sort of criticism can be seen as practical: since the shape of what might be cannot be known, philosophical discussions involve a double speculation: the first speculation is about what might be and the second is the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value—and this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).
This sort of criticism is often used as the foundation for a second sort of criticism, one that does assume philosophy has value; it is this assumption that gives the criticism its force. The basic idea is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards all philosophy as useless would regard philosophical discussion about what might be as being a waste of time—responding to this view would require a general defense of philosophy, which goes beyond the scope of this short essay. Now, to return to the matter at hand.
As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should have focused on the ethical problems regarding current warfare. After all, there is a multitude of unsolved moral problems in regards to existing warfare—there hardly seems any need to add more unsolved problems until either the existing problems are solved or the possible problems become actual problems.
This does have considerable appeal. To use an analogy, if a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology she might have in the future. This is, of course, the classic “don’t you have something better to do?” problem.
As might be suspected, this criticism rests on the principle that resources should be spent effectively and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is certainly reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting time. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.
As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be a fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.
In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation would generally be harm free. That is, it is rather unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game of Scrabble or watching a sunset. It would be preferable to have a somewhat better defense of such philosophical discussions of the shape of things (that might be) to come.
A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives in force. To use the classic analogy, it is much easier to address a rolling snowball than the avalanche that it will cause.
In the case of speculative matters that have ethical aspects, it seems that it would be generally useful to already have moral discussions in place ahead of time. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car—it certainly seems to be a good idea to work out the ethics of how the car should be programmed to “decide” what to hit and what to avoid when an accident occurs. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems. Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It would seem to be a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.
Philosophers also like to discuss what might be in contexts other than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications that matter—even (or even especially) in regards to speculation about what might be.
To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be that person across time? While this might seem to be a purely theoretical concern, it quickly becomes a very practical concern when one is discussing the above mentioned technology. For example, consider a company that offers a special sort of life insurance: they claim they can back-up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether that restored backup would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is a rather different matter from paying so that someone who thinks they are you can go to your house and have sex with your spouse after you are dead.
There are, of course, numerous other examples that can be used to illustrate the value of such speculation of what might be—in fact, I have already written many of these in previous posts. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a practical sense.
Donald gazed down upon the gleaming city of Newer York and the gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.
Donald’s thoughts drifted to the flesh-time, when his body had been a skin-bag holding an array of organs that were always but one accident or mischance away from failure. Gazing upon his polymer outer shell and checking a report on his internal systems, he reflected on how much better things were now. Then, he faced the constant risk of death. Now he could expect to exist until the universe grew cold. Or hot. Or exploded. Or whatever it is that universes do when they die.
But he could not help but be haunted by a class he had taken long ago. The professor had talked about the ship of Theseus and identity. How much of the original could be replaced before it lost identity and ceased to be? Fortunately, his mood regulation systems caught the distress and promptly corrected the problem, encrypting that file and flagging it as forgotten.
Donald returned to gazing upon the magnificent city, pleased that the flesh-time had ended during his lifetime. He did not even wonder where Donald’s bones were, that thought having been flagged as distressing long ago.
While the classic AI apocalypse ends humanity with a bang, the end might be a quiet thing—gradual replacement rather than rapid and noisy extermination. For some, this sort of quiet end could be worse: no epic battle in which humanity goes out guns blazing and head held high in defiance. Rather, humanity would simply fade away, rather like a superfluous worker or obsolete piece of office equipment.
There are various ways such scenarios could take place. One, which occasionally appears in science fiction, is that humans decline because the creation of a robot-dependent society saps them of what it takes to remain the top species. This, interestingly enough, is similar to what some conservatives claim about government-dependence, namely that it will weaken people. Of course, the conservative claim is that such dependence will result in more breeding, rather than less—in the science fiction stories human reproduction typically slows and eventually stops. The human race quietly ends, leaving behind the machines—which might or might not create their own society.
Alternatively, the humans become so dependent on their robots that when the robots fail, they can no longer take care of themselves and thus perish. Some tales do have happier endings: a few humans survive the collapse and the human race gets another chance.
There are various ways to avoid such quiet apocalypses. One is to resist creating such a dependent society. Another option is to have a safety system against a collapse. This might involve maintaining skills that would be needed in the event of a collapse or, perhaps, having some human volunteers who live outside of the main technological society and who will be ready to keep humanity going. These certainly do provide a foundation for some potentially interesting science fiction stories.
Another, perhaps more interesting and insidious, scenario is that humans replace themselves with machines. While this has long been a stock plot device in science fiction, there are people in the actual world who are eagerly awaiting (or even trying to bring about) the merging of humans and machines.
While the technology of today is relatively limited, the foundations of the future are being laid. For example, prosthetic replacements are fairly crude, but it is merely a matter of time before they are as good as or better than the organic originals. As another example, work is being done on augmenting organic brains with implants for memory and skills. While these are unimpressive now, there is the promise of things to come. These might include storing memories in implanted “drives” and loading skills or personalities into one’s brain.
These and other technologies point clearly towards the cyberpunk future: full replacements of organic bodies with machine bodies. Someday people with suitable insurance or funds could have their brains (and perhaps some of their glands) placed within a replacement body, one that is far more resistant to damage and the ravages of time.
The next logical step is, obviously enough, the replacement of the mortal and vulnerable brain with something better. This replacement will no doubt be a ship of Theseus scenario: as parts of the original organic brain begin to weaken and fail, they will be gradually replaced with technology. For example, parts damaged by a stroke might be replaced. Some will also elect to do more than replace damaged or failed parts—they will want augmentations added to the brain, such as improved memory or cognitive enhancements.
Since the human brain is mortal, it will fail piece by piece. Like the ship of Theseus so beloved by philosophers, eventually the original will be completely replaced. Leaving aside the philosophical question of whether the same person will remain, there is the clear and indisputable fact that what remains will not be Homo sapiens: it will not be a member of that species, because nothing organic will remain.
Should all humans undergo this transformation, that will be the end of Homo sapiens: the AI apocalypse will be complete. To use a rough analogy, the machine replacements of Homo sapiens will be like the fossils of dinosaurs: what remains has some interesting connection to the originals, but the species itself is extinct. One important difference is that our fossils would still be moving around and might think that they are us.
It could be replied that humanity would still remain: the machines that replaced the organic Homo sapiens would be human, just not organic humans. The obvious challenge is presenting a convincing argument that such entities would be human in a meaningful way. Perhaps inheriting human culture, values, and so on would suffice: being human would not be a matter of being a certain sort of organism. However, as noted above, they would obviously no longer be Homo sapiens; that species would have been replaced in the gradual and quiet AI apocalypse.