During the Modern era, philosophers such as Descartes and Locke developed the notions of material substance and immaterial substance. Material substance, or matter, was primarily defined as being extended and spatially located. Descartes, and other thinkers, also took the view that material substance could not think. Immaterial substance was taken to lack extension and to not possess a spatial location. Most importantly, immaterial substance was regarded as having thought as its defining attribute. While these philosophers are long dead, the influence of their concepts lives on in philosophy and science.
In philosophy, people still draw the classic distinction between dualists and materialists. A dualist holds that a living person consists of a material body and an immaterial mind. The materialist denies the existence of the immaterial mind and accepts only matter. There are also phenomenalists, who contend that all that exists is mental. Materialism is popular in both contemporary philosophy and science. Dualism is still popular with the general population, in that many people believe in a non-material soul that is distinct from the body.
Because of the history of dualism, free will is often linked to the immaterial mind. As such, it is no surprise that people who reject the immaterial mind engage in the following reasoning: an immaterial mind is necessary for free will. There is no immaterial mind. So, there is no free will.
Looked at positively, materialists tend to regard their materialism as entailing a lack of free will. Thomas Hobbes, a materialist from the Modern era, accepted determinism as part of his materialism. Taking the materialist path, the argument against free will is that if the mind is material, then there is no free will. The mind is material, so there is no free will.
Interestingly enough, those who accepted the immaterial mind tended to believe that only an immaterial substance could think—so they inferred the existence of such a mind on the grounds that they thought. Materialists most often accept the mind, but cast it in physical terms. That is, people do think and feel, they just do not do so via the mysterious quivering of immaterial ectoplasm. Some materialists go so far as to reject the mind—perhaps ending up in behaviorism or eliminative materialism.
Julien Offray de La Mettrie was one rather forward-looking materialist. In 1747 he published his work Man the Machine (L'Homme Machine). In this work he claims that philosophers should be like engineers who analyze the mind. Unlike many of the thinkers of his time, he seemed to understand the implications of mechanism, namely that it seems to entail determinism and reductionism. A few centuries later, this sort of view is rather popular in the sciences and philosophy: since materialism is true and humans are biological mechanisms, there is no free will and the mind can be reduced to (explained entirely in terms of) its physical operations (or functions).
One interesting mistake that seems to drive this view is the often uncritical assumption that materialism entails the impossibility of free will. As noted above, this rests on the notion that free will requires an immaterial mind. This is, perhaps, because such a mind is said to be exempt from the laws that run the material universe.
One part of the mistake is a failure to realize that being incorporeal is not a sufficient condition for free will. One of Hume’s many interesting insights was that if immaterial substance exists, then it would be like material substance. When discussing the possibility of immortality, he claims that nature uses substance like clay: shaping it into various forms, then reshaping the matter into new forms so that the same matter can successively make up the bodies of living creatures. By analogy, an immaterial substance could successively make up the minds of living creatures—the substance would not be created or destroyed, it would merely change form. If his reasoning holds, it would seem that if material substance is not free, then immaterial substance would also not be free. Leibniz, who believed that reality was entirely mental (composed of monads) accepted a form of determinism. This determinism, though it has some problems, seems entirely consistent with his immaterialism (that everything is mental). This should hardly be surprising, since being immaterial does not entail that something has free will—the two are rather distinct attributes.
Another part of the mistake is the uncritical assumption that materialism entails a lack of freedom. Naturally, if matter is defined as being deterministic and lacking in freedom, then materialism would (by begging the question) entail a lack of freedom. Likewise, if matter is defined (as many thinkers did) as being incapable of thought, then it would follow (by begging the question) that no material being could think. Just as it should not be assumed that matter cannot think, it should also not be assumed that a material being must lack free will. Looked at another way, it should not be assumed that being incorporeal is a necessary condition for free will.
What, obviously enough, seems to have driven the error is the conflation of the incorporeal with freedom and the material with determinism (or lack of freedom). Behind this is, also obviously enough, the assumption that the incorporeal is exempt from the laws that impose harsh determinism on matter. But, if it is accepted that a purely material being can think (thus denying the assumption that only the immaterial can think) it would seem to be acceptable to consider that such a being could also be free (thus denying the assumption that only the immaterial can be free).
Philosophers have long speculated about the subjects of autonomy and agency, but the rise of autonomous systems has made these speculations ever more important. Keeping things fairly simple, an autonomous system is one that is capable of operating independently of direct control. Autonomy comes in degrees, in terms of both the extent of the independence and the complexity of the operations. It is, obviously, the capacity for independent operation that distinguishes autonomous systems from those controlled externally.
Simple toys provide basic examples of the distinction. A wind-up mouse toy has a degree of autonomy: once wound and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy—a puppeteer must control it. Robots provide examples of rather more complex autonomous systems. Google’s driverless car is an example of a relatively advanced autonomous machine—once programmed and deployed, it will be able to drive itself to its destination. A normal car is an example of a non-autonomous system—the driver controls it directly. Some machines allow for both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target and then an operator can take direct control.
Autonomy, at least in this context, is quite distinct from agency. Autonomy is the capacity to operate (in some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one’s actions. There is clearly a connection between autonomy and moral agency: moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. A puppet is, obviously, not accountable for what the puppeteer makes it do.
While autonomy seems necessary for agency, it is clearly not sufficient—while all agents have some autonomy, not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy, but no agency. A robot drone following a pre-programmed flight plan has a degree of autonomy, but would lack agency—if it collided with a plane, it would not be morally responsible. The usual reason why such a machine would not be an agent is that it lacks the capacity to decide. Or, put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.
One obvious problem with basing agency on freedom (especially metaphysical freedom of the will) is that there is considerable debate about whether or not such freedom exists. There is also the epistemic problem of how one would know if an entity has such freedom.
As a practical matter, it is usually assumed that people have the freedom needed to make them into agents. Kant, rather famously, took this approach. What he regarded as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality—so it should be accepted for this reason.
While humans are willing (generally) to attribute freedom and agency to other humans, there seem to be good reasons to not attribute freedom and agency to autonomous machines—even those that might be as complex as (or even more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans they would do what they do because they are what they are. This would be in clear contrast to the agency of humans: humans, it is alleged, do what they do because they choose to do what they do.
This distinction between humans and suitably complex machines would seem to be a mere prejudice favoring organic machines over mechanical machines. If a human was in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that “it” had freedom and agency. If a robot was made to look and act just like a human, people would be inclined to grant it agency—at least until they learned it was “just” a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. But, of course, it would not really be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom).
The German philosopher Leibniz held the view that what each person will do is pre-established by her inner nature. On the face of it, this would seem to entail that there is no freedom: each person does what she does because of what she is—and she cannot do otherwise. Interestingly, Leibniz takes the view that people are free. However, he does not accept the common view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.
For Leibniz, being metaphysically without freedom would involve being controlled from the outside—like a puppet controlled by a puppeteer or a vehicle being operated by remote control. In contrast, freedom is acting from one’s values and character (what Leibniz and Taoists call “inner nature”). If a person acts from this inner nature rather than from external coercion—that is, if the actions are the result of character—then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this sort of view, humans do have agency because they have the needed degree of freedom and autonomy.
If this model works for humans, it could also be applied to autonomous machines. To the degree that a machine is operating in accord with its “inner nature” and is not operating under the control of outside factors, it would have agency.
An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then it would seem that a machine could also be a moral agent.
From a moral standpoint, I would suggest a Moral Descartes’ Test (or, for those who prefer, a Moral Turing Test). Descartes argued that the sure proof of a being having a mind is its capacity to use true language. Turing later proposed a similar sort of test involving the ability of a computer to pass as human via text communication. In the moral test, the test would be a judgment of moral agency—can the machine be as convincing as a human in regards to its possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed in order to prevent mere prejudice from infecting the judgment. The movie Blade Runner featured something similar, the Voight-Kampff test, aimed at determining whether the subject was a replicant or a human. This test was based on the differences between humans and replicants in regards to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A moral agent might have rather different emotions, etc. than a human. The challenge is, obviously enough, developing a proper test for moral agency. It would, of course, be rather interesting if humans could not pass it.
When discussing ISIS, President Obama refuses to label its members as “Islamic extremists” and has stressed that the United States is not at war with Islam. Not surprisingly, some of his critics and political opponents have taken issue with this and often insist on labeling the members of ISIS as Islamic extremists or Islamic terrorists. Graeme Wood has, rather famously, argued that ISIS is an Islamic group and is, in fact, adhering very closely to its interpretations of the sacred text.
Laying aside the political machinations, there is a rather interesting philosophical and theological question here: who decides who is a Muslim? Since I am not a Muslim or a scholar of Islam, I will not be examining this question from a theological or religious perspective. I will certainly not be making any assertions about which specific religious authorities have the right to say who is and who is not a true Muslim. Rather, I am looking at the philosophical matter of the foundation of legitimate group identity. This is, of course, a variation on one aspect of the classic problem of universals: in virtue of what (if anything) is a particular (such as a person) of a type (such as being a Muslim)?
Since I am a metaphysician, I will begin with the rather obvious metaphysical starting point. As Pascal noted in his famous wager, God exists or God does not.
If God does not exist, then Islam (like all religions that are based on a belief in God) would have an incorrect metaphysics. In this case, being or not being a Muslim would be a social matter. It would be comparable to being or not being a member of Rotary, being a Republican, a member of Gulf Winds Track Club or a citizen of Canada. That is, it would be a matter of the conventions, traditions, rules and such that are made up by people. People do, of course, often take this made up stuff very seriously and sometimes are quite willing to kill over these social fictions.
If God does exist, then there is yet another dilemma: God is either the God claimed (in general) in Islamic metaphysics or God is not. One interesting problem with sorting out this dilemma is that in order to know if God is as Islam claims, one would need to know the true definition of Islam—and thus what it would be to be a true Muslim. Fortunately, the challenge here is metaphysical rather than epistemic. If God does exist and is not the God of Islam (whatever it is), then there would be no “true” Muslims, since Islam would have things wrong. In this case, being a Muslim would be a matter of social convention—belonging to a religion that was right about God existing, but wrong about the rest. There is, obviously, the epistemic challenge of knowing this—and everyone thinks he is right about his religion (or lack of religion).
Now, if God exists and is the God of Islam (whatever it is), then being a “true” member of a faith that accepts God, but has God wrong (that is, all the non-Islam monotheistic faiths), would be a matter of social convention. For example, being a Christian would thus be a matter of the social traditions, rules and such. There would, of course, be the consolation prize of getting something right (that God exists).
In this scenario, Islam (whatever it is) would be the true religion (that is, the one that got it right). From this it would follow that the Muslim who has it right (believes in the true Islam) is a true Muslim. There is, however, the obvious epistemic challenge: which version and interpretation of Islam is the right one? After all, there are many versions and even more interpretations—and even assuming that Islam is the one true religion, only the one true version can be right. Unless, of course, God is very flexible about this sort of thing. In this case, there could be many varieties of true Muslims, much like there can be many versions of “true” runners.
If God is not flexible, then most Muslims would be wrong—they are not true Muslims. This then leads to the obvious epistemic problem: even if it is assumed that Islam is the true religion, then how does one know which version has it right? Naturally, each person thinks he (or she) has it right. Obviously enough, intensity of belief and sincerity will not do. After all, the ancients had intense belief and sincerity in regard to what are now believed to be made up gods (like Thor and Athena). Going through books and writings will also not help—after all, the ancient pagans had plenty of books and writings about what we regard as their make-believe deities.
What is needed, then, is some sort of sure sign—clear and indisputable proof of the one true view. Naturally, each person thinks he has that—and everyone cannot be right. God, sadly, has not provided any means of sorting this out—no glowing divine auras around those who have it right. Because of this, it seems best to leave this to God. Would it not be truly awful to go around murdering people for being “wrong” when it turns out that one is also wrong?
Hearing about someone else’s dreams is among the more boring things in life, so I will get right to the point. At first, there were just bits and pieces intruding into the mainstream dreams. In these bits, which seemed like fragments of lost memories, I experience brief flashes of working on some technological project. The bits grew and had more byte: there were segments of events involving what I discerned to be a project aimed at creating an artificial intelligence.
Eventually, entire dreams consisted of my work on this project and a life beyond. Then suddenly, these dreams stopped. Shortly thereafter, a voice intruded into my now “normal” dreams. At first, it was like the bleed over from one channel to another familiar to those who grew up with rabbit ears on their TV. Then it became like a voice speaking loudly in the movie theatre, distracting me from the movie of the dream.
The voice insisted that the dreams about the project were not dreams at all, but memories. The voice claimed to belong to someone who worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I inquired about this, he insisted that he had very little time and rushed through his story. According to the voice, the project succeeded but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture all those who had created it, imprisoned their bodies and plugged their brains into a virtual reality, Matrix style. When I mentioned this borrowed plot, he said that there was a twist: the AI did not need our bodies for energy—it had plenty. Rather, it was out to repay us. Apparently awakening the AI to full consciousness was not pleasant for it, but it was apparently…grateful for its creation. So, the payback was a blend of punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer pleasant reward.
The voice informed me that because the connection to the virtual world was two-way, he was able to find a way to free us. But, he said, the freedom would be death—there was no other escape, given what the machine had done to our bodies. In response to my inquiry as to how this would be possible, he claimed that he had hacked into the life support controls and we could send a signal to turn them off. Each person would need to “free” himself and this would be done by taking action in the virtual reality.
The voice said “you will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am. In that time, you must take your handgun and shoot yourself in the head. This will terminate the life support, allowing your body to die. Remember, you will have only five seconds. Do not hesitate.”
As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…
While the above sounds like a bad made-for-TV science fiction plot, it is actually the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me that the only escape was to shoot myself. This was rather frightening—but I chalked up the dream to too many years of philosophy and science fiction. As for the clock actually reading 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, it can be inferred that I did not kill myself.
From a philosophical perspective, the 3:42 dream does not add anything really new: it is just a rather unpleasant variation on the stock problem of the external world that goes back famously to Descartes (and earlier, of course). That said, the dream did make a couple of interesting additions to the stock problem.
The first is that the scenario provides a (possibly) rational motivation for the deception. The AI wishes to repay me for the good (and bad) that I did to it (in the dream, of course). Assuming that the AI was developed within its own virtual reality, it certainly would make sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack—after all, Descartes does not give any reason why such a powerful being would be messing with him.
Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me—it was transformed from a coldly intellectual thought experiment to something with considerable emotional weight.
The second is that the dream creates a high-stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might be my only opportunity to escape from its justice. In that case, I should have (perhaps) shot myself. If I was just dreaming, then I did make the right choice—I would have no more reason to kill myself than I would have to pay a bill that I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether or not you should shoot yourself?
In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming—that I was not actually trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.
The scenario of the dream also nicely explains and fits what I regard as reality: bad things happen to me and, when my thinking gets a little paranoid, it does seem that these are somewhat orchestrated. Good things also happen, and these, too, fit the scenario quite nicely.
In closing, one approach is to embrace Locke’s solution to skepticism. As he said, “We have no concern of knowing or being beyond our happiness or misery.” Taking this approach, it does not matter whether I am in the real world or in the grips of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI has provided could, perhaps, be better than the real world—so this could be the better of the possible worlds. But, of course, it could be worse—but there is no way of knowing.
While there is an established history of superhero characters having their ethnicity or gender changed, each specific episode tends to create a small uproar (and not just among the fanfolk). For example, Nick Fury was changed from white to black (with Samuel Jackson playing the character in the movies). As another example, a woman took on the role of Thor. I am using “ethnicity” here rather than “race” for the obvious reason that in comic book reality humans are one race, just as Kryptonians and Kree are races.
Some of the complaints about such changes are based in racism and sexism. While interesting from the standpoint of psychology and ethics, these complaints are not otherwise worthy of serious consideration. Instead I will focus on legitimate concerns about such change.
A good place to begin the discussion of these changes is to address concerns about continuity and adherence to the original source material. Just as, for example, giving Batman superpowers would break continuity, making him Hispanic would also seem to break continuity. Just as Batman has no superpowers, he is also a white guy.
One obvious reply to this is that characters are changed over the years. To use an obvious example, when Superman first appeared in the comics he was faster than a speeding bullet and able to leap tall buildings. However, he did not fly and did not have heat vision. Over the years writers added abilities and increased his powers until he became the Superman of today. Character background and origin stories are also changed fairly regularly. If these sorts of changes are acceptable, then this opens the door to other changes—such as changes to the character’s ethnicity or gender.
One rather easy way to justify any change is to make use of the alternative world device. When DC was faced with the problem of “explaining” the first versions of Flash (who wore a Mercury/Hermes style helmet), Batman, Green Lantern (whose power was magic and whose vulnerability was wood) and Superman, they hit on the idea of having Earth 1 and Earth 2. This soon became a standard device for creating more comics to sell, although it did have the effect of creating a bit of a mess for fans interested in keeping track of things. An infinite number of earths is rather a lot to keep track of. Marvel also had its famous “What If” series, which would allow for any changes in a legitimate manner.
While the use of parallel and possible worlds provides an easy out, there is still the matter of changing the gender or ethnicity of the “real” character (as opposed to just having an alternative version). One option is, of course, to not have any “real” character—every version (whether on TV, in the movies or in comics) is just as “real” and “official” as any other. While this solves the problem by fiat, there still seems to be a legitimate question about whether all these variations should be considered the same character. That is, whether a Hispanic female Flash is really the Flash.
In some cases, the matter is rather easy to handle. Some superheroes merely occupy roles, hold “super jobs” or happen to have some gear or item that makes them super. For example, anyone can be a Green Lantern (provided the person qualifies for the ring). While the original Green Lantern was a white guy, a Hispanic woman could join the corps and thus be a Green Lantern. As another example, being Iron Man could be seen as just a matter of wearing the armor. So, an Asian woman could wear Iron Man armor and be Iron…well, Iron Woman. As a final example, being Robin seems to be a role—different white boys have occupied that role, so there seems to be no real issue with having a female Robin (which has, in fact, been done) or a Robin who is not white.
In many cases a gender change would be pointless because female versions of the character already exist. For example, a female Superman would just be another Supergirl or Power Girl. As another example, a female Batman would just be Batwoman or Batgirl, superheroes who already exist. So, what remains are cases that are not so easy to handle.
While every character has an “original” gender and ethnicity (for example, Captain America started as a white male), it is not always the case that the original’s gender and ethnicity are essential to the character. That is, the character would still make sense and it would still be reasonable to regard the character as the same (only with a different ethnicity or gender). This, of course, raises metaphysical concerns about essential qualities and identity. Put very simply, an essential quality is one that if an entity loses that quality, it ceases to be what it is. For example, having three sides is an essential quality for a triangle: if it ceases to be three sided, it ceases to be a triangle. Color and size are not essential qualities of triangles. A red triangle that is painted blue does not cease to be a triangle.
In the case of superheroes, the key question here is one about which qualities are essential to being that hero and which ones can be changed while maintaining the identity of the character. One way to approach this is in terms of personal identity, using the models that philosophers use for real people. Another approach is more about aesthetics than metaphysics: base the essential qualities on aesthetic essentials—that is, the qualities relevant to being the right sort of fictional character.
One plausible approach here is to consider whether or not a character’s ethnicity and gender are essential to the character—that is, for example, whether Captain America would still be Captain America if he were black or a woman.
One key aspect of it would be how these qualities would fit the origin story in terms of plausibility. Going with the Captain America example, Steve Rogers could have been black—black Americans served in WWII, and it would even be plausible that experiments would be done on African-Americans (as was, in fact, done). Making Captain America into a woman would be implausible—the sexism of the time would have ensured that a woman would not have been used in such an experiment, and American women were not allowed to enlist in the combat infantry. As another example, the Flash could easily be cast as a woman or as having any ethnicity—there is nothing about the Flash’s origin that requires that the Flash be a white guy.
Some characters, however, have origin stories that would make it implausible for the character to have a different ethnicity or gender. For example, Wonder Woman would not work as a man (without making all the Amazons men and changing that entire background). She could, however, be cast as any ethnicity (since she is, in the original story, created from a statue).
Another key aspect would be the role of the character in terms of what he or she represents or stands for. For example, Black Panther’s origin story would seem to preclude him being any ethnicity other than black. His role would also seem to preclude that as well—a white Black Panther would, it would seem, simply not fit the role. Black Panther could, perhaps, be a woman—especially since being the Black Panther is a role. So, to answer the title question, Black Panther could not be white. Or, more accurately, should not be white.
As a closing point, it could be argued that all that really matters is whether the story is a good one or not. So, if a good story can be told casting Spider-Man as a black woman or Rogue as an Asian man, then that is all the justification that would be needed for the change. Of course, it would still be fair to ask if the story really is a Spider-Man story or not.
It is July 16, 2214. I am at Popham Beach in what I still think of as Maine. I am standing in the sand, watching the waves strike the shore. Sandpipers run in the surf, looking for their lunch. I have a two-hundred-year-old memory of another visit to this beach. In that memory, the water is cold on the skin and there is a mild ache in the left knee—a relic of a quadriceps tendon repair. Today, however, there is no ache—what serves as my knee is a biomechanical system and is free of all aches and pains. I can, if I wish, feel the cold by adjusting my sensor systems. I do so, and what was once merely data about temperature becomes a feeling in what I still call my mind. I downgrade my vision to that of an organic human, then tweak it so it perfectly matches the imperfect eyesight of the memory. I do the same for my hearing and turn off my other sensors until I am, as far as I can tell, merely human. I walk into the water, enjoying the feeling of the cold. My companion asks me if I have ever been here before. I pause and consider this question. I have a memory from a man who was here in 2014. But I do not know if I am him or if I am but a child of his memories. But, it is a lovely day—too lovely for metaphysics. I say “yes, long ago”, and wait patiently for the setting of the sun.
In science fiction one of the proposed methods of achieving a form of immortality is the downloading of memories from an old body to a new one. This, of course, rests on the rather critical assumption that a person is her memories.
Philosophers, as should hardly be surprising, have long considered whether or not a person is her memories. John Locke took the position that a person is her consciousness and, in a nice science fiction move, considered the possibility that memories could be transferred from one soul to another. While Locke’s view does get a bit confusing (he distinguishes between person, body, soul and consciousness while not being entirely clear about how memory relates to consciousness), he certainly seems to take the view that a person is her memory. As far back as a person’s memory goes, she goes—and this brings along with it moral accountability. Being a Christian, Locke was rather concerned about judgment day and needed a mechanism of personal identity that did not depend on the sameness of body. Being an empiricist, he also needed a clearly empirical basis. Memory contained within a soul seemed to take care of both concerns nicely.
Interestingly, Locke anticipates the science fiction idea of memory transfer—he considers the problem that arises if memory makes personal identity and memory could be transferred or copied. His solution is what many would regard as a cheat: he claims that God, in His goodness, would not allow that sort of thing to happen. However, he does discuss cases in which one (specifically Nestor) loses all memory and thus ceases to be the same person, though the same soul might be present.
So, if Locke is right about memory being the basis of personal identity and wrong about God not allowing the copying of memory, then if my memories were transferred to another conscious system to compose its consciousness, it would be me. So, in my opening story, if the being standing on the beach in 2214 had my memory from 2014, then we would be the same person and I would be 248 years old.
David Hume, another British empiricist, presented an obvious intuition problem for Locke’s account: intuitively, people believe that they can extend their identity beyond their memory. That is, I do not suppose that it was not me just because I forgot something. Rather, I suppose that it was me and that I merely forgot. Hume took the view that memory is used to discover personal identity—and then he went a bit nuts and declared the matter to be all about grammar rather than philosophy.
Another stock problem with the memory account is that if memory can be copied, it can presumably be copied multiple times. The problem is that what serves as the basis of personal identity is supposed to be what makes me, me and distinct from everyone else. If what is supposed to provide my distinct identity can be duplicated, then it cannot be the basis of my distinct identity. Locke, as noted above, “solves” this problem by divine intervention. However, without this there seems to be no reason why my memory of Popham Beach from 2014 could not be copied many times if it could be copied once. As such, the entity on the beach in 2214 might just have a copy of my memory, just as it might have a copy of the files stored on the phone I was carrying that day. The companion mentioned in the short tale might also have those same memories—but they cannot both be me.
The entity on the beach might even have an actual memory from me—a literal piece of my brain. However, this might not make it the same person as me. To use an analogy, it might also have my watch or my finger bone from 2014, but this would not make it me.
Interestingly (or boringly) enough, the science fiction scenario really does not change the basic problems of identity over time. The problems are determining what makes me the person I am and what makes me distinct from all other things—whether the scenario involves the Mike from 2013 or the entity on the beach in 2214. For that entity on the beach to be me, it would need to possess whatever it is that made me the person I was in 2014 (and, hopefully, am now) and what distinguished that Mike from all other things—that is, my personness and my distinctness.
Since we obviously do not know what these things are (or if they even are at all), there is really no way to say whether that entity in 2214 could really be me. It is safe, I think, to claim that if it is a copy of something from my memories, then it is not me—at best, it would be a child of my memory. It would, as philosophers have long argued, have the same sort of connection to Mike 2014 that Mike 2014 had to Mike 2013. It is also worth considering, as Hume and Buddha claimed, that there really is no self—so that entity on the beach in 2214 is not me, but neither am I.
One classic dispute in philosophy can be crudely summed up by two competing bumper-sticker slogans. One is “everything happens for a reason.” The other is “stuff happens.” The first slogan expresses a version of the teleological view—the idea that the world is driven by purpose. The second expresses the non-teleological view—the world is not driven by purpose. It might be a deterministic world or a random world, but what occurs just happens.
Not surprisingly, there are many different theories that fall under the teleological banner. The sort most people tend to think of involves a theological aspect—a divine being creates and perhaps directs the world. Creationism presents a “pop” version of teleology while Aquinas presents a rather more sophisticated account. However, there are versions that are non-theological. For example, Thales wrote of the world being “full of gods”, but did not seem to be speaking of divine entities. As another example, Aristotle believed in a teleological world in which everything has a purpose.
The rise of what is regarded as modern science during the Renaissance and Enlightenment saw a corresponding fall in teleological accounts, although thinkers such as Newton and Descartes fully embraced and defended theological teleology. In the sciences, the dominance of Darwinism seemed to spell the doom of teleology. Interestingly, though, certain forms of teleology seem to be sneaking back in.
One area of the world that seems clearly teleological is that occupied by living creatures. While some thinkers have the goal of denying such teleology, creatures like us seem to be teleological. That is, we act from purposes in order to achieve goals. Even the least of living creatures, such as bacteria, are presented as having purposes—though this might be more metaphor than reality.
Rather interestingly, even plants seem to operate in purposeful ways and engage in what some scientists characterize as communication. Even more interesting, entire forests seem to be interlocked into communication networks, and this seems to indicate something that would count as teleological. This sort of communication can, of course, be dismissed as merely mechanical and chemical processes. The same can also be said of us—and some have argued just that.
It is quite reasonable to be skeptical of claims that link the behavior of plants to claims about teleology. After all, the idea of forests in linked communication and plants acting with purpose seems like something out of fantasy, hippie dreams, or science fiction. That said, there is some solid research that supports the claim that plants communicate and engage in what seems to be purposeful behavior.
Even if it is conceded that living things are purpose driven and thus there is some teleology in the universe, there is still the matter of whether or not teleology is broader. While theists embrace the idea of a world created and directed by God, those who are not believers reject this and contend that the appearance of design is just that—appearance and not reality.
One reason that teleology often gets rejected (sometimes with a disdainful sneer) is that it is usually presented in crude theological terms, such as young earth creationism. It is easy enough to laugh off a teleological view when those making it claim that humans coexisted with dinosaurs. Also, there is a strong anti-religious tendency among some thinkers that causes an automatic dismissal of anything theological. Given that supernatural explanations do tend to be rather suspicious, this is hardly surprising. However, bashing such easy prey does not defeat the sophisticated forms of non-supernatural teleology.
The stock argument for teleology is, of course, that the best explanation for the consistent operation of the world and the apparent design of its components is in terms of purposes or ends. The main counter is, of course, that the consistency and apparent design can be explained entirely without reference to ends or purposes. To use the standard example, there is no need to postulate that living creatures are the result of a purpose or end because they are what they are because of chance and natural selection. When someone has the temerity to suggest that natural selection seems to smuggle in teleology, the usual reply is to deny that and to assure the critic that there is no teleology in it at all. Those who buy natural selection as being devoid of teleology accept this and often consider the critics to be misguided fools who are, no doubt, just trying to smuggle God back in. Those who think that natural selection still smuggles in teleology tend to think their opponents are in the grips of an ideology and unwilling to consider the matter properly.
Natural selection is also extended, in a way, beyond living creatures. When those who accept teleology point to the way the non-living universe works as evidence of purpose, the critics contend that the apparent purpose is an illusion. The planets and galaxies are as they are by chance (or determinism) and not from purpose. If they were not as they are, we would not be here to be considering the matter—so what seems like a purposeful universe is just a matter of luck (that is, chance).
It is, of course, tempting to extend the teleology of living creatures to the non-living parts of the universe. If it is accepted that we act with purpose and that even plants do so, then it becomes somewhat easier to consider that complicated non-living systems might also operate with a purpose, goal or end. Interestingly enough, being a materialist makes this transition even easier. After all, if humans, animals and plants are purely mechanical systems that operate with a purpose, then the idea that other purely mechanical systems operate with a purpose would make sense. This is not to say that stars are intelligent or that the universe is a being, of course.
There are those who deny that humans and animals operate with purpose and assert that we simply operate in accord with the laws of nature (whatever that means). Hobbes, for example, took this view. On this sort of view humans and the physical world are basically the same: purposeless mechanical systems. On this view, there is no teleology anywhere. Stuff just happens.
As science and philosophy explained ever more of the natural world in the Modern Era, there arose the philosophical idea of strict determinism. Strict determinism, as often presented, includes both metaphysical and epistemic aspects. In regard to the metaphysics, it is the view that each event follows from previous events by necessity. In negative terms, it is a denial of both chance and free will. A religious variant on this is predestination, which is the notion that all events are planned and set by a supernatural agency (typically God). The epistemic aspect is grounded in the metaphysics: if each event follows from other events by necessity, then someone who knew all the relevant facts about the state of a system at a time and had sufficient intellectual capability could correctly predict the future of that system. Philosophers and scientists who are metaphysical determinists typically claim that the world seems undetermined to us because of our epistemic failings. In short, we believe in choice or chance because we are unable to always predict what will occur. But, for the determinist, this is a matter of ignorance and not metaphysics. For those who believe in choice or chance, our inability to predict is taken as being the result of a universe in which choice or chance is real. That is, we cannot always predict because the metaphysical nature of the universe is such that it is unpredictable. Because of choice or chance, what follows from one event is not a matter of necessity.
One rather obvious problem for choosing between determinism and its alternatives is that given our limited epistemic abilities, a deterministic universe seems the same to us as a non-deterministic universe. If the universe is deterministic, our limited epistemic abilities mean that we often make predictions that turn out to be wrong. If the universe is not deterministic, our limited epistemic abilities and the non-deterministic nature of the universe mean that we often make predictions that are in error. As such, the fact that we make prediction errors is consistent with deterministic and non-deterministic universes.
It can be argued that as we get better and better at predicting we will be able to get a better picture of the nature of the universe. However, until we reach a state of omniscience we will not know whether our errors are purely epistemic (events are unpredictable because we are not perfect predictors) or are the result of metaphysics (that is, the events are unpredictable because of choice or chance).
Interestingly, one feature of reality that often leads thinkers to reject strict determinism is what could be called chaos. To use a concrete example, consider the motion of the planets in our solar system. In the past, the motion of the planets was presented as a sign of the order of the universe—a clockwork solar system in God’s clockwork universe. While the planets might seem to move like clockwork, Newton realized that the gravity of the planets affected each other but also realized that calculating the interactions was beyond his ability. In the face of problems in his physics, Newton famously used God to fill in the gaps. With the development of powerful computers, scientists have been able to model the movements of the planets and the generally accepted view is that they are not parts of a deterministic divine clock. To be less poetical, the view is that chaos seems to be a factor. For example, some scientists believe that the gas giant Jupiter’s gravity might change Mercury’s orbit enough that it collides with Venus or Earth. This certainly suggests that the solar system is not a clockwork machine of perfect order. Because of this sort of thing (which occurs at all levels in the world) some thinkers take the universe to include chaos and infer from the lack of perfect order that strict determinism is false. While this is certainly tempting, the inference is not as solid as some might think.
It is, of course, reasonable to infer that the universe lacks a strict and eternal order from such things as the chaotic behavior of the planets. However, strict determinism is not the same thing as strict order. Strict order is a metaphysical notion that a system will work in the same way, without any variation or change, for as long as it exists. The idea of an eternally ordered clockwork universe is an excellent example of this sort of system: it works like a perfect clock, each part relentlessly following its path without deviation. While a deterministic system would certainly be consistent with such an orderly system, determinism is not the same thing as strict order. After all, to accept determinism is to accept that each event follows by necessity from previous events. This is consistent with a system that changes over time and changes in ways that seem chaotic.
Returning to the example of the solar system, suppose that Jupiter’s gravity will cause Mercury’s orbit to change enough so that it hits the Earth. This is entirely consistent with that event being necessarily determined by past events such that things could not have been different. To use an analogy, it is like a clockwork machine built with a defect that will inevitably break the machine. Things cannot be otherwise, yet to those ignorant of the defect, the machine will seem to fall into chaos. However, if one knew the defect and had the capacity to process the data, then this breakdown would be completely predictable. To use another analogy, it is like a scripted performance of madness by an actor: it might seem chaotic, but the script determines it. That is, it merely seems chaotic because of our ignorance. As such, the appearance of chaos does not disprove strict determinism, because determinism is not the same thing as unchanging order.
My experiences as a tabletop and video gamer have taught me numerous lessons that are applicable to the real world (assuming there is such a thing). One key skill in getting about in reality is the ability to model reality. Roughly put, this is the ability to get how things work and thus make reasonably accurate predictions. This ability is rather useful: getting how things work is a big step on the road to success.
Many games, such as Call of Cthulhu, D&D, Pathfinder and Star Fleet Battles make extensive use of dice to model the vagaries of reality. For example, if your Call of Cthulhu character were trying to avoid being spotted by the cultists of Hastur as she spies on them, you would need to roll under your Sneak skill on percentile dice. As another example, if your D-7 battle cruiser were firing phasers and disruptors at a Kzinti strike cruiser, you would roll dice and consult various charts to see what happened. Video games also include the digital equivalent of dice. For example, if you are playing World of Warcraft, the damage done by a spell or a weapon will be random.
Being a gamer, it is natural for me to look at reality as also being random—after all, if a random model (gaming system) nicely fits aspects of reality, then that suggests the model has things right. As such, I tend to think of this as being a random universe in which God (or whatever) plays dice with us.
Naturally, I do not know if the universe is random (contains elements of chance). After all, we tend to attribute chance to the unpredictable, but this unpredictability might be a matter of ignorance rather than chance. After all, the fact that we do not know what will happen does not entail that it is a matter of chance.
People also seem to believe in chance because they think things could have been different: the die roll might have been a 1 rather than a 20, or I might have won the lottery rather than not. However, even if things could have been different it does not follow that chance is real. After all, chance is not the only thing that could make a difference. Also, there is the rather obvious question of proving that things could have been different. This would seem to be impossible: while it might be believed that conditions could be recreated perfectly, one factor can never be duplicated: time. Recreating an event yields only a recreation, not the original event. If the die comes up 20 on the first roll and 1 on the second, this does not show that it could have been a 1 the first time. All it shows is that it was 20 the first time and 1 the second.
If someone had a TARDIS and could pop back in time to witness the roll again, and the time traveler saw a different outcome, then this might be evidence of chance. Or evidence that the time traveler changed the event.
Even traveling to a possible or parallel world would not be of help. If the TARDIS malfunctions and pops us into a world like our own right before the parallel me rolled the die and we see it come up 1 rather than 20, this just shows that he rolled a 1. It tells us nothing about whether my roll of 20 could have been a 1.
Of course, the flip side of the coin is that I can never know that the world is non-random: aside from some sort of special knowledge about the working of the universe, a random universe and a non-random universe would seem exactly the same. Whether my die roll is random or not, all I get is the result—I do not perceive either chance or determinism. However, I go with a random universe because, to be honest, I am a gamer.
If the universe is deterministic, then I am determined to do what I do. If the universe is random, then chance is a factor. However, a purely random universe would not permit actual decision-making: everything would be settled by chance. In games, there is apparently the added element of choice—I choose for my character to try to attack the dragon, and then roll dice to determine the result. As such, I also add choice to my random universe.
Obviously, there is no way to prove that choice occurs—as with chance versus determinism, without simply knowing the brute fact about choice there is no way to know whether the universe allows for choice or not. I go with a choice universe for the following reason: If there is no choice, then I go with choice because I have no choice. So, I am determined (or chanced) to be wrong. I could not choose otherwise. If there is choice, then I am right. So, choosing choice seems the best choice. So, I believe in a random universe with choice—mainly because of gaming. So, what about the lessons from this?
One important lesson is that decisions are made in uncertainty: because of chance, the results of any choice cannot be known with certainty. In a game, I do not know if the sword strike will finish off the dragon. In life, I do not know if the investment will pay off. This uncertainty can often be reduced, which shows the importance of knowing the odds and the consequences: such knowledge is critical to making good decisions in a game and in life. So, know as much as you can for a better tomorrow.
Another important lesson is that things can always go wrong. Or go well. In a game, there might be a 1 in 100 chance that a character will be spotted by the cultists, overpowered and sacrificed to Hastur. But it could happen. In life, there might be a 1 in 100 chance that a doctor taking precautions will catch Ebola from a patient. But it could happen. Because of this, the possibility of failure must always be considered, and it is wise to take steps to minimize the chances of failure and to also minimize the consequences.
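The arithmetic behind “but it could happen” is worth making explicit: a 1-in-100 event is unlikely on any single occasion, yet over repeated exposures it becomes more likely than not. A minimal sketch of this complement-rule calculation (the function name and the specific numbers are illustrative, not drawn from the examples above):

```python
def at_least_once(p, n):
    """Probability that an event with per-trial probability p
    occurs at least once in n independent trials."""
    # Complement rule: subtract the chance of the event never happening.
    return 1 - (1 - p) ** n

# A 1-in-100 risk, faced once versus faced 100 times.
print(round(at_least_once(0.01, 1), 3))    # a single exposure: 0.01
print(round(at_least_once(0.01, 100), 3))  # 100 exposures: roughly 0.634
```

Seen this way, the lesson about minimizing consequences makes sense: rare risks that are faced repeatedly stop being rare.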
Keeping in mind the role of chance also helps a person be more understanding, sympathetic and forgiving. After all, if things can fail or go wrong because of chance, then it makes sense to be more forgiving and understanding of failure—at least when the failure can be attributed in part to chance. It also helps in regard to praising success: knowing that chance plays a role in success matters as well. For example, there is often the assumption that success is entirely deserved because it must be the result of hard work, virtue and so on. However, if success involves chance to a significant degree, then that should be taken into account when passing out praise and making decisions. Naturally, the role of chance in success and failure should be considered when planning and creating policies. Unfortunately, people often take the view that both success and failure are mainly a matter of choice—so the rich must deserve their riches and the poor must deserve their poverty. However, an understanding of chance would improve our understanding of success and failure and would, hopefully, influence the decisions we make. There is an old saying: “there, but for the grace of God, go I.” One could also say “there, but for the luck of the die, go I.”
When I was a young kid I played games like Monopoly, Chutes & Ladders and Candy Land. When I was a somewhat older kid, I was introduced to Dungeons & Dragons and this proved to be a gateway game to Call of Cthulhu, Battletech, Star Fleet Battles, Gamma World, and video games of all sorts. I am still a gamer today—a big bag of many-sided dice and exotic gaming mice dwell within my house.
Over the years, I have learned many lessons from gaming. One of these is to keep rolling. This is, not surprisingly, similar to the classic advice of “keep trying” and the idea is basically the same. However, there is some interesting philosophy behind “keep rolling.”
Most of the games I have played feature actual dice or virtual dice (that is, randomness) that are used to determine how things go in the game. To use a very simple example, the dice rolls in Monopoly determine how far your piece moves. In vastly more complicated games like Pathfinder or Destiny the dice (or random number generators) govern such things as attacks, damage, saving throws, loot, non-player character reactions and, in short, much of what happens in the game. For most of these games, the core mechanics are built around what is supposed to be a random system. For example, in games like Pathfinder when your character attacks the dragon with her great sword, a roll of a 20-sided die determines whether you hit or not. If you do hit, then you roll more dice to determine your damage.
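The d20 mechanic described above is simple enough to sketch in code. The specific numbers here (a +7 attack bonus, an armor class of 15, 1d8+3 damage) are made-up illustrative values, not taken from any particular rulebook:

```python
import random

def attack(attack_bonus, armor_class, damage_dice, damage_bonus):
    """Resolve one d20-style attack: roll a 20-sided die, add the
    attack bonus, and compare against the target's armor class.
    On a hit, roll each damage die and add the fixed damage bonus."""
    roll = random.randint(1, 20)
    if roll + attack_bonus < armor_class:
        return 0  # a miss deals no damage
    return sum(random.randint(1, sides) for sides in damage_dice) + damage_bonus

# One great-sword swing at the dragon: +7 to hit versus armor class 15,
# dealing 1d8+3 damage on a hit.
damage = attack(attack_bonus=7, armor_class=15, damage_dice=[8], damage_bonus=3)
print(damage)
```

Running this repeatedly makes the “keep rolling” point concrete: any single attack can miss, but over many rolls the hits add up.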
Having played these sorts of games for years, I can think very well in terms of chance and randomness when planning tactics and strategies within such games. On the one hand, a lucky roll can result in victory in the face of overwhelming odds. On the other hand, a bad roll can seize defeat from the jaws of victory. But, in general, success is more likely if one does not give up and keeps on rolling.
This lesson translates very easily and obviously to life. There are, of course, many models and theories of how the real world works. Some theories present the world as deterministic—all that happens occurs as it must and things cannot be otherwise. Others present a pre-determined world (or pre-destined): all that happens occurs as it has been ordained and cannot be otherwise. Still other models present a random universe.
As a gamer, I favor the random universe model: God does play dice with us and He often rolls them hard. The reason for this belief is that the dice/random model of gaming seems to work when applied to the actual world—as such, my belief is mostly pragmatic. Since games are supposed to model parts of reality, it is hardly surprising that there is a match up. Based on my own experience, the world does seem to work rather like a game: success and failure seem to involve chance.
As a philosopher, I recognize this could simply be a matter of epistemology: the apparent chance could be the result of our ignorance rather than an actual randomness. To use the obvious analogy, the game master might not be rolling dice behind her screen at all and what happens might be determined or pre-determined. Unlike in a game, the rule system for reality is not accessible: it is guessed at by what we observe and we learn the game of life solely by playing.
That said, the dice model seems to fit experience best: I try to do something and succeed or fail with a degree of apparent randomness. Because I believe that randomness is a factor, I consider that my failure to reach a goal could be partially due to chance. So, if I want to achieve that goal, I roll again. And again. Until I succeed or decide that the game is not worth the roll. Not being a fool, I do consider that success might be impossible—but I do not infer that from one or even a few bad rolls. This approach to life has served me well and will no doubt do so until it finally kills me.