While there is an established history of superhero characters having their ethnicity or gender changed, each specific episode tends to create a small uproar (and not just among the fanfolk). For example, Nick Fury was changed from white to black (with Samuel L. Jackson playing the character in the movies). As another example, a woman took on the role of Thor. I am using “ethnicity” here rather than “race” for the obvious reason that in comic book reality humans are one race, just as Kryptonians and Kree are races.
Some of the complaints about such changes are based in racism and sexism. While interesting from the standpoint of psychology and ethics, these complaints are not otherwise worthy of serious consideration. Instead I will focus on legitimate concerns about such change.
A good place to begin the discussion of these changes is to address concerns about continuity and adherence to the original source material. Just as, for example, giving Batman superpowers would break continuity, making him Hispanic would also seem to break continuity. Just as Batman has no superpowers, he is also a white guy.
One obvious reply to this is that characters are changed over the years. To use an obvious example, when Superman first appeared in the comics he was faster than a speeding bullet and able to leap tall buildings. However, he did not fly and did not have heat vision. Over the years writers added abilities and increased his powers until he became the Superman of today. Character background and origin stories are also changed fairly regularly. If these sorts of changes are acceptable, then this opens the door to other changes—such as changes to the character’s ethnicity or gender.
One rather easy way to justify any change is to make use of the alternative world device. When DC was faced with the problem of “explaining” the first versions of Flash (who wore a Mercury/Hermes style helmet), Batman, Green Lantern (whose power was magic and whose vulnerability was wood) and Superman, they hit on the idea of having Earth 1 and Earth 2. This soon became a standard device for creating more comics to sell, although it did have the effect of creating a bit of a mess for fans interested in keeping track of things. An infinite number of earths is rather a lot to keep track of. Marvel also had its famous “What If” series, which allowed for any changes in a legitimate manner.
While the use of parallel and possible worlds provides an easy out, there is still the matter of changing the gender or ethnicity of the “real” character (as opposed to just having an alternative version). One option is, of course, to not have any “real” character—every version (whether on TV, in the movies or in comics) is just as “real” and “official” as any other. While this solves the problem by fiat, there still seems to be a legitimate question about whether all these variations should be considered the same character. That is, whether a Hispanic female Flash is really the Flash.
In some cases, the matter is rather easy to handle. Some superheroes merely occupy roles, hold “super jobs” or happen to have some gear or item that makes them super. For example, anyone can be a Green Lantern (provided the person qualifies for the ring). While the original Green Lantern was a white guy, a Hispanic woman could join the corps and thus be a Green Lantern. As another example, being Iron Man could be seen as just a matter of wearing the armor. So, an Asian woman could wear Iron Man armor and be Iron…well, Iron Woman. As a final example, being Robin seems to be a role—different white boys have occupied that role, so there seems to be no real issue with having a female Robin (which has, in fact, been done) or a Robin who is not white.
In many cases a gender change would be pointless because female versions of the character already exist. For example, a female Superman would just be another Supergirl or Power Girl. As another example, a female Batman would just be Batwoman or Batgirl, superheroes who already exist. So, what remains are cases that are not so easy to handle.
While every character has an “original” gender and ethnicity (for example, Captain America started as a white male), it is not always the case that the original’s gender and ethnicity are essential to the character. That is, the character would still make sense and it would still be reasonable to regard the character as the same (only with a different ethnicity or gender). This, of course, raises metaphysical concerns about essential qualities and identity. Put very simply, an essential quality is one that if an entity loses that quality, it ceases to be what it is. For example, having three sides is an essential quality for a triangle: if it ceases to be three sided, it ceases to be a triangle. Color and size are not essential qualities of triangles. A red triangle that is painted blue does not cease to be a triangle.
In the case of superheroes, the key question here is one about which qualities are essential to being that hero and which ones can be changed while maintaining the identity of the character. One way to approach this is in terms of personal identity, using the models that philosophers use for real people. Another approach is one that is more about aesthetics than metaphysics: basing the essential qualities on aesthetic essentials, that is, qualities relevant to being the right sort of fictional character.
One plausible approach here is to consider whether or not a character’s ethnicity and gender are essential to the character—that is, for example, whether Captain America would still be Captain America if he were black or a woman.
One key aspect of it would be how these qualities would fit the origin story in terms of plausibility. Going with the Captain America example, Steve Rogers could have been black—black Americans served in WWII and it would even be plausible that experiments would be done on African-Americans (because such experiments were done for real). Making Captain America into a woman would be implausible—the sexism of the time would have ensured that a woman would not have been used in such an experiment and American women were not allowed to enlist in the combat infantry. As another example, the Flash could easily be cast as a woman or as having any ethnicity—there is nothing about the Flash’s origin that requires that the Flash be a white guy.
Some characters, however, have origin stories that would make it implausible for the character to have a different ethnicity or gender. For example, Wonder Woman would not work as a man (without making all the Amazons men and changing that entire background). She could, however, be cast as any ethnicity (since she is, in the original story, created from a statue).
Another key aspect would be the role of the character in terms of what he or she represents or stands for. For example, Black Panther’s origin story would seem to preclude him being any ethnicity other than black. His role would also seem to preclude that as well—a white Black Panther would, it would seem, simply not fit the role. Black Panther could, perhaps, be a woman—especially since being the Black Panther is a role. So, to answer the title question, Black Panther could not be white. Or, more accurately, should not be white.
As a closing point, it could be argued that all that really matters is whether the story is a good one or not. So, if a good story can be told casting Spider-Man as a black woman or Rogue as an Asian man, then that is all the justification that would be needed for the change. Of course, it would still be fair to ask if the story really is a Spider-Man story or not.
It is July 16, 2214. I am at Popham Beach in what I still think of as Maine. I am standing in the sand, watching the waves strike the shore. Sandpipers run in the surf, looking for their lunch. I have a two-hundred-year-old memory of another visit to this beach. In that memory, the water is cold on the skin and there is a mild ache in the left knee—a relic of a quadriceps tendon repair. Today, however, there is no ache—what serves as my knee is a biomechanical system and is free of all aches and pains. I can, if I wish, feel the cold by adjusting my sensor systems. I do so, and what was once merely data about temperature becomes a feeling in what I still call my mind. I downgrade my vision to that of an organic human, then tweak it so it perfectly matches the imperfect eyesight of the memory. I do the same for my hearing and turn off my other sensors until I am, as far as I can tell, merely human. I walk into the water, enjoying the feeling of the cold. My companion asks me if I have ever been here before. I pause and consider this question. I have a memory from a man who was here in 2014. But I do not know if I am him or if I am but a child of his memories. But, it is a lovely day—too lovely for metaphysics. I say “yes, long ago”, and wait patiently for the setting of the sun.
In science fiction one of the proposed methods of achieving a form of immortality is the downloading of memories from an old body to a new one. This, of course, rests on the rather critical assumption that a person is her memories.
Philosophers, as should hardly be surprising, have long considered whether or not a person is her memories. John Locke took the position that a person is her consciousness and, in a nice science fiction move, considered the possibility that memories could be transferred from one soul to another. While Locke’s view does get a bit confusing (he distinguishes between person, body, soul and consciousness while not being entirely clear about how memory relates to consciousness), he certainly seems to take the view that a person is her memory. As far back as a person’s memory goes, she goes—and this brings along with it moral accountability. Being a Christian, Locke was rather concerned about judgment day and needed a mechanism of personal identity that did not depend on the sameness of body. Being an empiricist, he also needed a clearly empirical basis. Memory contained within a soul seemed to take care of both concerns nicely.
Interestingly, Locke anticipates the science fiction idea of memory transfer—he considers the problem that arises if memory makes personal identity and memory could be transferred or copied. His solution is what many would regard as a cheat: he claims that God, in His goodness, would not allow that sort of thing to happen. However, he does discuss cases in which one (specifically Nestor) loses all memory and thus ceases to be the same person, though the same soul might be present.
So, if Locke is right about memory being the basis of personal identity and wrong about God not allowing the copying of memory, then if my memories were transferred to another conscious system to compose its consciousness, it would be me. So, in my opening story, if the being standing on the beach in 2214 had my memory from 2014, then we would be the same person and I would be 248 years old.
David Hume, another British empiricist, presented an obvious intuition problem for Locke’s account: intuitively, people believe that they can extend their identity beyond their memory. That is, I do not suppose that it was not me just because I forgot something. Rather, I suppose that it was me and that I merely forgot. Hume took the view that memory is used to discover personal identity—and then he went a bit nuts and declared the matter to be all about grammar rather than philosophy.
Another stock problem with the memory account is that if memory can be copied, it can presumably be copied multiple times. The problem is that what serves as the basis of personal identity is supposed to be what makes me, me and distinct from everyone else. If what is supposed to provide my distinct identity can be duplicated, then it cannot be the basis of my distinct identity. Locke, as noted above, “solves” this problem by divine intervention. However, without this there seems to be no reason why my memory of Popham Beach from 2014 could not be copied many times if it could be copied once. As such, the entity on the beach in 2214 might just have a copy of my memory, just as it might have a copy of the files stored on the phone I was carrying that day. The companion mentioned in the short tale might also have those same memories—but they cannot both be me.
The entity on the beach might even have an actual memory from me—a literal piece of my brain. However, this might not make it the same person as me. To use an analogy, it might also have my watch or my finger bone from 2014, but this would not make it me.
Interestingly (or boringly) enough, the science fiction scenario really does not change the basic problems of identity over time. The problems are determining what makes me the person I am and what makes me distinct from all other things—be that a scenario involving the Mike from 2013 or the entity on the beach in 2214. For that entity on the beach to be me, it would need to possess whatever it is that made me the person I was in 2014 (and, hopefully, am now) and what distinguished that Mike from all other things—that is, my personness and my distinctness.
Since we obviously do not know what these things are (or if they even are at all), there is really no way to say whether that entity in 2214 could really be me. It is safe, I think, to claim that if it is a copy of something from my memories, then it is not me—at best, it would be a child of my memory. It would, as philosophers have long argued, have the same sort of connection to Mike 2014 that Mike 2014 had to Mike 2013. It is also worth considering, as Hume and Buddha have claimed, that there really is no self—so that entity on the beach in 2214 is not me, but neither am I.
One classic dispute in philosophy can be crudely summed up by two competing bumper-sticker slogans. One is “everything happens for a reason.” The other is “stuff happens.” The first slogan expresses a version of the teleological view—the idea that the world is driven by purpose. The second expresses the non-teleological view—the world is not driven by purpose. It might be a deterministic world or a random world, but what occurs just happens.
Not surprisingly, there are many different theories that fall under the teleological banner. The sort most people tend to think of involves a theological aspect—a divine being creates and perhaps directs the world. Creationism presents a “pop” version of teleology while Aquinas presents a rather more sophisticated account. However, there are versions that are non-theological. For example, Thales wrote of the world being “full of gods”, but did not seem to be speaking of divine entities. As another example, Aristotle believed in a teleological world in which everything has a purpose.
The rise of what is regarded as modern science during the Renaissance and Enlightenment saw a corresponding fall in teleological accounts, although thinkers such as Newton and Descartes fully embraced and defended theological teleology. In the sciences, the dominance of Darwinism seemed to spell the doom of teleology. Interestingly, though, certain forms of teleology seem to be sneaking back in.
One area of the world that seems clearly teleological is that occupied by living creatures. While some thinkers have the goal of denying such teleology, creatures like us seem to be teleological. That is, we act from purposes in order to achieve goals. Even the least of living creatures, such as bacteria, are presented as having purposes—though this might be more metaphor than reality.
Rather interestingly, even plants seem to operate in purposeful ways and engage in what some scientists characterize as communication. Even more interesting, entire forests seem to be interlocked into communication networks and this seems to indicate something that would count as teleological. This sort of communication can, of course, be dismissed as mere mechanical and chemical processes. The same can also be said of us—and some have argued just that.
It is quite reasonable to be skeptical of claims that link the behavior of plants to claims about teleology. After all, the idea of forests in linked communication and plants acting with purpose seems like something out of fantasy, hippie dreams, or science fiction. That said, there is some solid research that supports the claim that plants communicate and engage in what seems to be purposeful behavior.
Even if it is conceded that living things are purpose driven and thus there is some teleology in the universe, there is still the matter of whether or not teleology is broader. While theists embrace the idea of a God-created and God-directed world, those who are not believers reject this and contend that the appearance of design is just that—appearance and not reality.
One reason that teleology often gets rejected (sometimes with a disdainful sneer) is that it is usually presented in crude theological terms, such as young earth creationism. It is easy enough to laugh off a teleological view when those making it claim that humans coexisted with dinosaurs. Also, there is a strong anti-religious tendency among some thinkers that causes an automatic dismissal of anything theological. Given that supernatural explanations do tend to be rather suspicious, this is hardly surprising. However, bashing such easy prey does not defeat the sophisticated forms of non-supernatural teleology.
The stock argument for teleology is, of course, that the best explanation for the consistent operation of the world and the apparent design of its components is in terms of purposes or ends. The main counter is, of course, that the consistency and apparent design can be explained entirely without reference to ends or purposes. To use the standard example, there is no need to postulate that living creatures are the result of a purpose or end because they are what they are because of chance and natural selection. When someone has the temerity to suggest that natural selection seems to smuggle in teleology, the usual reply is to deny that and to assure the critic that there is no teleology in it at all. Those who buy natural selection as being devoid of teleology accept this and often consider the critics to be misguided fools who are, no doubt, just trying to smuggle God back in. Those who think that natural selection still smuggles in teleology tend to think their opponents are in the grips of an ideology and unwilling to consider the matter properly.
Natural selection is also extended, in a way, beyond living creatures. When those who accept teleology point to the way the non-living universe works as evidence of purpose, the critics contend that the apparent purpose is an illusion. The planets and galaxies are as they are by chance (or determinism) and not from purpose. If they were not as they are, we would not be here to be considering the matter—so what seems like a purposeful universe is just a matter of luck (that is, chance).
It is, of course, tempting to extend the teleology of living creatures to the non-living parts of the universe. If it is accepted that we act with purpose and that even plants do so, then it becomes somewhat easier to consider that complicated non-living systems might also operate with a purpose, goal or end. Interestingly enough, being a materialist makes this transition even easier. After all, if humans, animals and plants are purely mechanical systems that operate with a purpose, then the idea that other purely mechanical systems operate with a purpose would make sense. This is not to say that stars are intelligent or that the universe is a being, of course.
There are those who deny that humans and animals operate with purpose and assert that we simply operate in accord with the laws of nature (whatever that means). Hobbes, for example, took this view. On this sort of view humans and the physical world are basically the same: purposeless mechanical systems. On this view, there is no teleology anywhere. Stuff just happens.
As science and philosophy explained ever more of the natural world in the Modern Era, there arose the philosophical idea of strict determinism. Strict determinism, as often presented, includes both metaphysical and epistemic aspects. In regards to the metaphysics, it is the view that each event follows from previous events by necessity. In negative terms, it is a denial of both chance and free will. A religious variant on this is predestination, which is the notion that all events are planned and set by a supernatural agency (typically God). The epistemic aspect is grounded in the metaphysics: if each event follows from other events by necessity, then someone who knew all the relevant facts about the state of a system at a time and had enough intellectual capability could correctly predict the future of that system. Philosophers and scientists who are metaphysical determinists typically claim that the world seems undetermined to us because of our epistemic failings. In short, we believe in choice or chance because we are unable to always predict what will occur. But, for the determinist, this is a matter of ignorance and not metaphysics. For those who believe in choice or chance, our inability to predict is taken as being the result of a universe in which choice or chance is real. That is, we cannot always predict because the metaphysical nature of the universe is such that it is unpredictable. Because of choice or chance, what follows from one event is not a matter of necessity.
One rather obvious problem for choosing between determinism and its alternatives is that given our limited epistemic abilities, a deterministic universe seems the same to us as a non-deterministic universe. If the universe is deterministic, our limited epistemic abilities mean that we often make predictions that turn out to be wrong. If the universe is not deterministic, our limited epistemic abilities and the non-deterministic nature of the universe mean that we often make predictions that are in error. As such, the fact that we make prediction errors is consistent with deterministic and non-deterministic universes.
It can be argued that as we get better and better at predicting we will be able to get a better picture of the nature of the universe. However, until we reach a state of omniscience we will not know whether our errors are purely epistemic (events are unpredictable because we are not perfect predictors) or are the result of metaphysics (that is, the events are unpredictable because of choice or chance).
Interestingly, one feature of reality that often leads thinkers to reject strict determinism is what could be called chaos. To use a concrete example, consider the motion of the planets in our solar system. In the past, the motion of the planets was presented as a sign of the order of the universe—a clockwork solar system in God’s clockwork universe. While the planets might seem to move like clockwork, Newton realized that the gravity of the planets affected each other but also realized that calculating the interactions was beyond his ability. In the face of problems in his physics, Newton famously used God to fill in the gaps. With the development of powerful computers, scientists have been able to model the movements of the planets and the generally accepted view is that they are not parts of a deterministic divine clock. To be less poetical, the view is that chaos seems to be a factor. For example, some scientists believe that the gas giant Jupiter’s gravity might change Mercury’s orbit enough that it collides with Venus or Earth. This certainly suggests that the solar system is not an orderly clockwork machine of perfect order. Because of this sort of thing (which occurs at all levels in the world) some thinkers take the universe to include chaos and infer from the lack of perfect order that strict determinism is false. While this is certainly tempting, the inference is not as solid as some might think.
It is, of course, reasonable to infer that the universe lacks a strict and eternal order from such things as the chaotic behavior of the planets. However, strict determinism is not the same thing as strict order. Strict order is a metaphysical notion that a system will work in the same way, without any variation or change, for as long as it exists. The idea of an eternally ordered clockwork universe is an excellent example of this sort of system: it works like a perfect clock, each part relentlessly following its path without deviation. While a deterministic system would certainly be consistent with such an orderly system, determinism is not the same thing as strict order. After all, to accept determinism is to accept that each event follows by necessity from previous events. This is consistent with a system that changes over time and changes in ways that seem chaotic.
Returning to the example of the solar system, suppose that Jupiter’s gravity will cause Mercury’s orbit to change enough so that it hits the earth. This is entirely consistent with that event being necessarily determined by past events such that things could not have been different. To use an analogy, it is like a clockwork machine built with a defect that will inevitably break the machine. Things cannot be otherwise, yet to those ignorant of the defect, the machine will seem to fall into chaos. However, if one knew the defect and had the capacity to process the data, then this breakdown would be completely predictable. To use another analogy, it is like a scripted performance of madness by an actor: it might seem chaotic, but the script determines it. That is, it merely seems chaotic because of our ignorance. As such, the appearance of chaos does not disprove strict determinism because determinism is not the same thing as strict order.
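To make this concrete, here is a minimal sketch (in Python, with illustrative numbers of my own choosing) of a system that is strictly deterministic yet looks chaotic: the logistic map. Each state follows from the previous state by necessity, but a tiny error in measuring the starting state grows until prediction fails. The unpredictability is a matter of ignorance, not metaphysics.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), a fully deterministic rule.
def trajectory(x0, r=4.0, steps=20):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000)  # one starting state
b = trajectory(0.200001)  # the "same" state, mismeasured by a millionth

# Re-running from the same state always yields the same path (determinism)...
assert trajectory(0.200000) == a

# ...yet the tiny measurement error grows until prediction is worthless.
for n in (0, 5, 10, 15, 20):
    print(n, abs(a[n] - b[n]))
```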
My experiences as a tabletop and video gamer have taught me numerous lessons that are applicable to the real world (assuming there is such a thing). One key skill in getting about in reality is the ability to model reality. Roughly put, this is the ability to get how things work and thus make reasonably accurate predictions. This ability is rather useful: getting how things work is a big step on the road to success.
Many games, such as Call of Cthulhu, D&D, Pathfinder and Star Fleet Battles make extensive use of dice to model the vagaries of reality. For example, if your Call of Cthulhu character were trying to avoid being spotted by the cultists of Hastur as she spies on them, you would need to roll under your Sneak skill on percentile dice. As another example, if your D-7 battle cruiser were firing phasers and disruptors at a Kzinti strike cruiser, you would roll dice and consult various charts to see what happened. Video games also include the digital equivalent of dice. For example, if you are playing World of Warcraft, the damage done by a spell or a weapon will be random.
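For those who have not played such games, here is a minimal sketch of the roll-under percentile mechanic (in Python; the Sneak skill of 65 is a made-up value for illustration):

```python
import random

def sneak_check(skill):
    """Roll-under percentile check: roll d100 and succeed on skill or less."""
    return random.randint(1, 100) <= skill

# A character with Sneak 65 should evade the cultists about 65% of the time.
trials = 100_000
successes = sum(sneak_check(65) for _ in range(trials))
print(f"observed success rate: {successes / trials:.3f}")  # roughly 0.650
```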
Being a gamer, it is natural for me to look at reality as also being random—after all, if a random model (gaming system) nicely fits aspects of reality, then that suggests the model has things right. As such, I tend to think of this as being a random universe in which God (or whatever) plays dice with us.
Naturally, I do not know if the universe is random (contains elements of chance). After all, we tend to attribute chance to the unpredictable, but this unpredictability might be a matter of ignorance rather than chance. After all, the fact that we do not know what will happen does not entail that it is a matter of chance.
People also seem to believe in chance because they think things could have been different: the die roll might have been a 1 rather than a 20 or I might have won the lottery rather than not. However, even if things could have been different it does not follow that chance is real. After all, chance is not the only thing that could make a difference. Also, there is the rather obvious question of proving that things could have been different. This would seem to be impossible: while it might be believed that conditions could be recreated perfectly, there is one factor that can never be duplicated – time. Recreating an event will be a recreation. If the die comes up 20 on the first roll and 1 on the second, this does not show that it could have been a 1 the first time. All it shows is that it was 20 the first time and 1 the second.
If someone had a TARDIS and could pop back in time to witness the roll again and if the time traveler saw a different outcome this time, then this might be evidence of chance. Or evidence that the time traveler changed the event.
Even traveling to a possible or parallel world would not be of help. If the TARDIS malfunctions and pops us into a world like our own right before the parallel me rolled the die and we see it come up 1 rather than 20, this just shows that he rolled a 1. It tells us nothing about whether my roll of 20 could have been a 1.
Of course, the flip side of the coin is that I can never know that the world is non-random: aside from some sort of special knowledge about the working of the universe, a random universe and a non-random universe would seem exactly the same. Whether my die roll is random or not, all I get is the result—I do not perceive either chance or determinism. However, I go with a random universe because, to be honest, I am a gamer.
If the universe is deterministic, then I am determined to do what I do. If the universe is random, then chance is a factor. However, a purely random universe would not permit actual decision-making: it would be determined by chance. In games, there is apparently the added element of choice—I choose to have my character attack the dragon, and then roll dice to determine the result. As such, I also add choice to my random universe.
Obviously, there is no way to prove that choice occurs—as with chance versus determinism, without simply knowing the brute fact about choice there is no way to know whether the universe allows for choice or not. I go with a choice universe for the following reason: If there is no choice, then I go with choice because I have no choice. So, I am determined (or chanced) to be wrong. I could not choose otherwise. If there is choice, then I am right. So, choosing choice seems the best choice. So, I believe in a random universe with choice—mainly because of gaming. So, what about the lessons from this?
One important lesson is that decisions are made in uncertainty: because of chance, the results of any choice cannot be known with certainty. In a game, I do not know if the sword strike will finish off the dragon. In life, I do not know if the investment will pay off. In general, this uncertainty can be reduced and this shows the importance of knowing the odds and the consequences: such knowledge is critical to making good decisions in a game and in life. So, know as much as you can for a better tomorrow.
Another important lesson is that things can always go wrong. Or well. In a game, there might be a 1 in 100 chance that a character will be spotted by the cultists, overpowered and sacrificed to Hastur. But it could happen. In life, there might be a 1 in 100 chance that a doctor taking precautions will catch Ebola from a patient. But it could happen. Because of this, the possibility of failure must always be considered and it is wise to take steps to minimize the chances of failure and to also minimize the consequences.
Keeping in mind the role of chance also helps a person be more understanding, sympathetic and forgiving. After all, if things can fail or go wrong because of chance, then it makes sense to be more forgiving and understanding of failure—at least when the failure can be attributed in part to chance. It also helps in regards to praising success: knowing that chance plays a role in success is also important. For example, there is often the assumption that success is entirely deserved because it must be the result of hard work, virtue and so on. However, if success involves chance to a significant degree, then that should be taken into account when passing out praise and making decisions. Naturally, the role of chance in success and failure should be considered when planning and creating policies. Unfortunately, people often take the view that both success and failure are mainly a matter of choice—so the rich must deserve their riches and the poor must deserve their poverty. However, an understanding of chance would help our understanding of success and failure and would, hopefully, influence the decisions we make. There is an old saying “there, but for the grace of God, go I.” One could also say “there, but for the luck of the die, go I.”
When I was a young kid I played games like Monopoly, Chutes & Ladders and Candy Land. When I was a somewhat older kid, I was introduced to Dungeons & Dragons and this proved to be a gateway game to Call of Cthulhu, Battletech, Star Fleet Battles, Gamma World, and video games of all sorts. I am still a gamer today—a big bag of many-sided dice and exotic gaming mice dwell within my house.
Over the years, I have learned many lessons from gaming. One of these is “keep rolling.” This is, not surprisingly, similar to the classic advice of “keep trying” and the idea is basically the same. However, there is some interesting philosophy behind “keep rolling.”
Most of the games I have played feature actual dice or virtual dice (that is, randomness) that are used to determine how things go in the game. To use a very simple example, the dice rolls in Monopoly determine how far your piece moves. In vastly more complicated games like Pathfinder or Destiny the dice (or random number generators) govern such things as attacks, damage, saving throws, loot, non-player character reactions and, in short, much of what happens in the game. For most of these games, the core mechanics are built around what is supposed to be a random system. For example, in games like Pathfinder when your character attacks the dragon with her great sword, a roll of a 20-sided die determines whether you hit or not. If you do hit, then you roll more dice to determine your damage.
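A rough sketch of that core mechanic (the attack bonus, armor class and 2d6+4 damage line below are invented for illustration rather than taken from the Pathfinder rules):

```python
import random

def d(sides):
    """Roll one die with the given number of sides."""
    return random.randint(1, sides)

def greatsword_attack(attack_bonus, armor_class):
    """d20-style attack: hit if d20 + bonus meets or beats the target's AC;
    on a hit, roll 2d6 + 4 for damage."""
    if d(20) + attack_bonus >= armor_class:
        return d(6) + d(6) + 4
    return 0  # a miss deals no damage

print(greatsword_attack(attack_bonus=8, armor_class=18))
```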
Having played these sorts of games for years, I can think very well in terms of chance and randomness when planning tactics and strategies within such games. On the one hand, a lucky roll can result in victory in the face of overwhelming odds. On the other hand, a bad roll can snatch defeat from the jaws of victory. But, in general, success is more likely if one does not give up and keeps on rolling.
This lesson translates very easily and obviously to life. There are, of course, many models and theories of how the real world works. Some theories present the world as deterministic—all that happens occurs as it must and things cannot be otherwise. Others present a pre-determined world (or pre-destined): all that happens occurs as it has been ordained and cannot be otherwise. Still other models present a random universe.
As a gamer, I favor the random universe model: God does play dice with us and He often rolls them hard. The reason for this belief is that the dice/random model of gaming seems to work when applied to the actual world—as such, my belief is mostly pragmatic. Since games are supposed to model parts of reality, it is hardly surprising that there is a match up. Based on my own experience, the world does seem to work rather like a game: success and failure seem to involve chance.
As a philosopher, I recognize this could simply be a matter of epistemology: the apparent chance could be the result of our ignorance rather than an actual randomness. To use the obvious analogy, the game master might not be rolling dice behind her screen at all and what happens might be determined or pre-determined. Unlike in a game, the rule system for reality is not accessible: it is guessed at by what we observe and we learn the game of life solely by playing.
That said, the dice model seems to fit experience best: I try to do something and succeed or fail with a degree of apparent randomness. Because I believe that randomness is a factor, I consider that my failure to reach a goal could be partially due to chance. So, if I want to achieve that goal, I roll again. And again. Until I succeed or decide that the game is not worth the roll. Not being a fool, I do consider that success might be impossible—but I do not infer that from one or even a few bad rolls. This approach to life has served me well and will no doubt do so until it finally kills me.
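The math behind “keep rolling” is worth making explicit. If a single attempt succeeds with probability p, the chance of at least one success in n attempts is 1 - (1 - p)^n, which climbs quickly (a quick sketch, assuming the attempts are independent):

```python
def chance_of_success(p, attempts):
    """Probability of at least one success in n independent attempts."""
    return 1 - (1 - p) ** attempts

# Even a long-shot 10% roll becomes better than even money if you keep at it.
for n in (1, 5, 10, 20):
    print(n, round(chance_of_success(0.10, n), 3))
# prints: 1 0.1, 5 0.41, 10 0.651, 20 0.878
```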
Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.
In this essay I will present the game that creates the paradox and then discuss a specific aspect of Nozick’s version, namely his stipulation regarding the effect of how the player of the game actually decides.
The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.
Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.
This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrong).
The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000. If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to Nozick’s condition that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.
This stipulation provides some insight into how the Predictor’s prediction ability is supposed to work. This is important because the workings of the Predictor’s ability to predict are, as I argued in my previous essay, rather significant in sorting out how one should decide.
The stipulation mainly serves to indicate how the Predictor’s ability does not work. First, it would seem to indicate that the Predictor does not rely on time travel—that is, it does not go forward in time to observe the decision and then travel back to place (or not place) the money in the box. After all, the prediction in this case would be explained in terms of what the player decided to do. This still leaves it open for the Predictor to visit (or observe) a possible future (or, more accurately, a possible world that is running ahead of the actual world in its time) since the possible future does not reveal what the player actually decides, just what she decides in that possible future. Second, this would seem to indicate that the Predictor is not able to “see” the actual future (perhaps by being able to perceive all of time “at once” rather than linearly as humans do). After all, in this case it would be predicting based on what the player actually decided. Third, this would also rule out any form of backwards causation in which the actual choice was the cause of the prediction. While there are, perhaps, other specific possibilities that are also eliminated, the gist is that the Predictor has to, by Nozick’s stipulation, be limited to information available at the time of the prediction and not information from the future. There are a multitude of possibilities here.
One possibility is that the Predictor is telepathic and can predict based on what it reads regarding the player’s intentions at the time of the prediction. In this case, the best approach would be for the player to think that she will take one box, and then after the prediction is made, take both. Or, alternatively, use some sort of drugs or technology to “trick” the Predictor. The success of this strategy would depend on how well the player can fool the Predictor. If the Predictor cannot be fooled or is unlikely to be fooled then the smart strategy would be to intend to take box B and then just take box B. After all, if the Predictor cannot be fooled, then box B will be empty if the player intends on taking both.
Another possibility is that the Predictor is a researcher—it gathers as much information as it can about the player and makes a shrewd guess based on that information (which might include what the player has written about the paradox). Since Nozick stipulates that the Predictor is “almost certainly” right, the Predictor would need to be an amazing researcher. In this case, the player’s only way to mislead the Predictor is to determine its research methods and try to “game” it so the Predictor will predict that she will just take B, then actually decide to take both. But, once again, the Predictor is stipulated to be “almost certainly” right—so it would seem that the player should just take B. If B is empty, then the Predictor got it wrong, which would “almost certainly” not happen. Of course, it could be contended that since the player does not know how the Predictor will predict based on its research (the player might not know what she will do), then the player should take both. This, of course, assumes that the Predictor has a reasonable chance of being wrong—contrary to the stipulation.
A third possibility is that the Predictor predicts in virtue of its understanding of what it takes to be a deterministic system. Alternatively, the system might be a random system, but one that has probabilities. In either case, the Predictor uses the data available to it at the time and then “does the math” to predict what the player will decide.
If the world really is deterministic, then the Predictor could be wrong if it is determined to make an error in its “math.” So, the player would need to predict how likely this is and then act accordingly. But, of course, the player will simply act as she is determined to act. If the world is probabilistic, then the player would need to estimate the probability that the Predictor will get it right. But, it is stipulated that the Predictor is “almost certainly” right so any strategy used by the player to get one over on the Predictor will “almost certainly” fail, so the player should take box B. Of course, the player will do what “the dice say” and the choice is not a “true” choice.
If the world is one with some sort of metaphysical free will that is in principle unpredictable, then the player’s actual choice would, in principle, be unpredictable. But, of course, this directly violates the stipulation that the Predictor is “almost certainly” right. If the player’s choice is truly unpredictable, then the Predictor might make a shrewd/educated guess, but it would not be “almost certainly” right. In that case, the player could make a rational case for taking both—based on the estimate of how likely it is that the Predictor got it wrong. But this would be a different game, one in which the Predictor is not “almost certainly” right.
This discussion seems to nicely show that the stipulation that “what you actually decide to do is not part of the explanation of why he made the prediction he made” is a red herring. Given the stipulation that the Predictor is “almost certainly” right, it does not really matter how its predictions are explained. The stipulation that what the player actually decides is not part of the explanation simply serves to mislead by creating the false impression that there is a way to “beat” the Predictor by actually deciding to take both boxes and gambling that it has predicted the player will just take B. As such, the paradox seems to be dissolved—it is the result of some people being misled by one stipulation and not realizing that the stipulation that the Predictor is “almost certainly” right makes the other irrelevant.
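A quick expected-value sketch supports this reading (in Python; the payoffs are those of the standard version of the game and p stands for the Predictor’s accuracy). However the predictions are explained, once the Predictor is “almost certainly” right, taking just box B dominates:

```python
def ev_one_box(p):
    """Expected payoff of taking only box B, given Predictor accuracy p."""
    return p * 1_000_000  # a correct prediction fills box B; a miss leaves $0

def ev_two_boxes(p):
    """Expected payoff of taking both boxes, given Predictor accuracy p."""
    return p * 1_000 + (1 - p) * 1_001_000

# Two-boxing only pays when the Predictor is barely better than a coin flip.
for p in (0.5, 0.9, 0.99, 0.999):
    print(p, ev_one_box(p), ev_two_boxes(p))
# The crossover is at p = 0.5005; "almost certainly" right is far past it.
```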
One classic philosophical dispute is the battle over innate ideas. An innate idea, as the name suggests, is an idea that is not acquired by experience but is “built into” the mind. As might be imagined, the specific nature and content of such ideas vary considerably among the philosophers who accept them. Leibniz, for example, takes God to be the author of the innate ideas that exist within the monads. Other thinkers, for example, accept that humans have an innate concept of beauty that is the product of evolution.
Over the centuries, philosophers have advanced various arguments for (and against) innate ideas. For example, some take Plato’s Meno as a rather early argument for innate ideas. In the Meno, Socrates claims to show that Meno’s servant knows geometry, despite the (alleged) fact that the servant never learned geometry. Other philosophers have argued that there must be innate ideas in order for the mind to “process” information coming in from the senses. To use a modern analogy, just as a smart phone needs software to make the camera function, the brain would need to have built in ideas in order to process the sensory data coming in via the optic nerve.
Other philosophers, such as John Locke, have been rather critical of the idea of innate ideas in general. Others have been critical of specific forms of innate ideas—the idea that God is the cause of innate ideas is, as might be suspected, not very popular among philosophers today.
Interestingly enough, there is some contemporary evidence for innate ideas. In his August 2014 Scientific American article “Accidental Genius”, Darold A. Treffert advances what can be seen as a 21st century version of the Meno. Investigating the matter of “accidental geniuses” (people who become savants as the result of an accident, such as a brain injury), researchers found that they could create “instant savants” by using brain stimulation. These instant savants were able to solve a mathematical puzzle that they could not solve without the stimulation. Treffert asserts that this ability to solve the puzzle was due to the fact that they “’know things’ innately they were never taught.” To provide additional support for his claim, Treffert gave the example of a savant sculptor, Clemons, who “had no formal training in art but knew instinctively how to produce an armature, the frame for the sculpture, to enable his pieces to show horse in motion.” Treffert goes on to explicitly reject the “blank slate” notion (which was made famous by John Locke) in favor of the notion that the “brain might come loaded with a set of innate predispositions for processing what it sees or for understanding the ‘rules’ of music art or mathematics.” While this explanation is certainly appealing, it is well worth considering alternative explanations.
One stock objection to this sort of argument is the same sort of argument used against claims about past life experiences. When it is claimed that a person had a past life on the basis that the person knows about things she would not normally know, the easy and obvious reply is that the person learned about these things through perfectly mundane means. In the case of alleged innate ideas, the easy and obvious reply is that the person gained the knowledge through experience. This is not to claim that the person in question is engaged in deception—she might not recall the experience that provided the knowledge. For example, the instant savants who solved the puzzle probably had previous puzzle experience and the sculptor might have seen armatures in the past.
Another objection is that an idea might appear to be innate but might actually be a new idea that did not originate directly from a specific experience. To use a concrete example, consider a person who developed a genius for sculpture after a head injury. The person might have an innate idea that allowed him to produce the armature. An alternative explanation is that the person faced the problem regarding his sculpture and developed a solution. The solution turned out to be an armature, because that is what would solve the problem. To use an analogy, someone faced with the problem of driving in a nail might make a hammer, but this does not entail that the idea of a hammer is innate. Rather, a hammer-like device is what would work in that situation and hence it is what a person would tend to make.
As has always been the case in the debate over innate ideas, the key question is whether the phenomena in question can be explained best by innate ideas or without them.
It waits somewhere in the dark infinity of time. Perhaps the past. Perhaps the future. Perhaps now. The worst thing.
Whenever something bad happens to me, such as a full quadriceps tendon tear, people always helpfully remark that “it could have been worse.” Some years ago, after that tendon tear, I wrote an essay about this matter which focused on possibility and necessity. That is, whether it could be worse or not. While the tendon tear was perhaps the worst thing to happen to me (as of this writing), I did have some bad things happen this summer and got to hear how things could have been worse. Since it seemed like a fun game, I decided to play along: when lightning took out the pine tree in front of my house I said “why, it could have been worse” and then was hit with inspiration: what would be the worst thing? The thing than which nothing worse can be conceived.
I can say with complete confidence that there must be such a thing. After all, just as there must be a tallest building, there must be the worst thing. But, of course, this would not be much of an essay if I failed to argue for this claim.
Interestingly enough, arguing for the worst thing is rather similar to arguing for the existence of a perfect thing (that is, God). Thomas Aquinas famously made use of his Five Ways to argue for the existence of God and most of these arguments relied on a combination of an infinite regress and a reduction to absurdity. For example, Aquinas argued from the fact that things move to the need for an unmoved mover on the grounds that an infinite regress would arise if everything had to be moved by something else. A regress argument with a reduction to absurdity will serve quite nicely in arguing for the worst thing.
Take any thing. To avoid the usual boring philosophical approach of calling this thing X, I’ll call this thing Troy. If Troy is the worst thing, then the worst thing exists. If Troy is not the worst thing, then there must be another thing that is worse than Troy. That thing, which I will call Sally, is either the worst thing or not. If Sally is the worst thing, then the worst thing exists and is Sally. If it is not Sally, there must be something worse than Sally. This cannot go on to infinity so there must be a thing that is worse than all other things—the worst thing. I’ll call it Dave.
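For the programmatically inclined, the regress can be run as a toy procedure (a sketch with made-up badness scores; the crucial assumption, as the next objection shows, is that there are only finitely many things):

```python
badness = {"Troy": 3, "Sally": 7, "Dave": 9}  # invented scores

def something_worse(current):
    """Return a thing worse than the current one, or None if there is none."""
    worse = [t for t, b in badness.items() if b > badness[current]]
    return worse[0] if worse else None

# Follow the "worse than" relation from any starting thing; with finitely
# many things the walk cannot go on forever, so it bottoms out at the worst.
thing = "Troy"
while something_worse(thing) is not None:
    thing = something_worse(thing)
print(thing)  # Dave
```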
The obvious counter is to throw down the infinity gauntlet: if there is an infinite number of things, there will not be a worst thing. After all, for any thing, there will be an infinite number of other things. As Leibniz claimed, the infinite number cannot be said to be even or odd; therefore, in an infinite universe a thing could not be said to be worst.
One might be inclined to reject the infinity gauntlet—after all, even if there is an infinite number of things, each thing would stand in a relation to all other things and there would thus still be a worst thing.
Another obvious counter is to assert that there could be two or more things that are equally bad—that is, identical in their badness. As such, there would not be a worst thing. A counter to this is to follow Leibniz once again and argue that there could not be two identical things—they would need to differ in some way that would make one worse than the other. This could be countered by asserting that the two might be different, yet equally bad. In this case, the response would be to follow the model used in arguing for the best thing (God) and assert that the worst thing would be worst in every possible respect and hence anything equally as bad would be identical and thus there would be one worst thing, not two. I suppose that this would have some consolation value—it would certainly be a scary universe that had multiple worst things.
Of course, this just shows that there is something that is worse than all other things that happen to be—which leaves open the possibility that it is not the worst thing in another sense of the term. So now I will turn to arguing for the truly worst thing.
Another way to argue for the worst thing is to use the model of St. Anselm’s ontological argument. Very crudely put, the ontological argument works like this: God is that than which nothing greater can be conceived. If God only existed as an idea in the mind, a greater being can be conceived, namely God existing for real. Thus, God must exist.
In the case of the worst thing, it would be that than which nothing worse can be conceived. If it only existed as an idea in the mind, a worse thing can be conceived, namely the worst thing existing for real. Thus, the worst thing must exist.
Another variant on the ontological argument can also be used here. A stock variant is that since God is perfect, He must exist. This is because if He did not exist, He would not be perfect. But He is, so He must. In the case of the worst thing, the worst thing must exist because it is the worst. This is because if it did not exist, it would not be the worst. But it is, so it does. This worst thing would be the truly worst thing (just as God is supposed to be the best thing).
This approach does, of course, inherit the usual difficulties of an ontological argument as pointed out by Gaunilo and Kant (that existence is not a quality). It would certainly be better for the universe and the folks in it for the critics to be right so that there is no worst thing.
Azim Shariff and Kathleen Vohs recently had their article, “What Happens to a Society That Does Not Believe in Free Will”, published in Scientific American. This article considers the causal impact of a disbelief in free will with a specific focus on law and ethics.
Philosophers have long addressed the general problem of free will as well as the specific connection between free will and ethics. Not surprisingly, studies conducted to determine the impact of disbelief in free will have the results that philosophers have long predicted.
One impact is that when people have doubts about free will they tend to have less support for retributive punishment. Retributive punishment, as the name indicates, is punishment aimed at making a person suffer for her misdeeds. Doubt in free will did not negatively impact a person’s support for punishment aimed at deterrence or rehabilitation.
While the authors do consider one reason for this, namely that those who doubt free will would regard wrongdoers as analogous to harmful natural phenomena that need to be dealt with rather than subjected to vengeance, this view also matches a common view about moral accountability. To be specific, moral (and legal) accountability is generally proportional to the control a person has over events. To use a concrete example, consider the difference between these two cases. In the first case, Sally is driving well above the speed limit and is busy texting and sipping her latte. She doesn’t see the crossing guard frantically waving his sign and runs over the children in the crosswalk. In case two, Jane is driving the speed limit and children suddenly run directly in front of her car. She brakes and swerves immediately, but she hits the children. Intuitively, Sally has acted in a way that was morally wrong—she should have been going the speed limit and she should have been paying attention. Jane, though she hit the children, did not act wrongly—she could not have avoided the children and hence is not morally responsible.
For those who doubt free will, every case is like Jane’s case: for the determinist, every action is determined and a person could not have chosen to do other than she did. On this view, while Jane’s accident seems unavoidable, so was Sally’s accident: Sally could not have done other than she did. As such, Sally is no more morally accountable than Jane. For someone who believes this, inflicting retributive punishment on Sally would be no more reasonable than seeking vengeance against Jane.
However, it would seem to make sense to punish Sally to deter others and to rehabilitate Sally so she will drive the speed limit and pay attention in the future. Of course, if there is no free will, then we would not choose to punish Sally, she would not choose to behave better and people would not decide to learn from her lesson. Events would happen as determined—she would be punished or not. She would do it again or not. Other people would do the same thing or not. Naturally enough, to speak of what we should decide to do in regards to punishments would seem to assume that we can choose—that is, that we have some degree of free will.
A second impact that Shariff and Vohs noted was that a person who doubts free will tends to behave worse than a person who does not have such a skeptical view. One specific area in which behavior worsens is that such skepticism seems to incline people to be more willing to harm others. Another specific area is that such skepticism also inclines others to lie or cheat. In general, the impact seems to be that the skepticism reduces a person’s willingness (or capacity) to resist impulsive reactions in favor of greater restraint and better behavior.
Once again, this certainly makes sense. Going back to the examples of Sally and Jane, Sally (unless she is a moral monster) would most likely feel remorse and guilt for hurting the children. Jane, though she would surely feel bad, would not feel moral guilt. This would certainly be reasonable: a person who hurts others should feel guilt if she could have done otherwise but should not feel moral guilt if she could not have done otherwise (although she certainly should feel sympathy). If someone doubts free will, then she will regard her own actions as being out of her control: she is not choosing to lie, or cheat or hurt others—these events are just happening. People might be hurt, but this is like a tree falling on them—it just happens. Interestingly, these studies show that people are consistent in applying the implications of their skepticism in regards to moral (and legal) accountability.
One rather important point is to consider what view we should have regarding free will. I take a practical view of this matter and believe in free will. As I see it, if I am right, then I am…right. If I am wrong, then I could not believe otherwise. So, choosing to believe I can choose is the rational choice: I am right or I am not at fault for being wrong.
I do agree with Kant that we cannot prove that we have free will. He believed that the best science of his day was deterministic and that the matter of free will was beyond our epistemic abilities. While science has marched on since Kant, free will is still unprovable. After all, deterministic, random and free-will universes would all seem the same to the people in them. Crudely put, there are no observations that would establish or disprove metaphysical free will. There are, of course, observations that can indicate that we are not free in certain respects—but completely disproving (or proving) free will would seem to be beyond our abilities—as Kant contended.
Kant had a fairly practical solution: he argued that although free will cannot be proven, it is necessary for ethics. So, crudely put, if we want to have ethics (which we do), then we need to accept the existence of free will on moral grounds. The experiments described by Shariff and Vohs seem to support Kant: when people doubt free will, this has an impact on their ethics.
One aspect of this can be seen as positive—determining the extent to which people are in control of their actions is an important part of determining what is and is not a just punishment. After all, we do not want to inflict retribution on people who could not have done otherwise or, at the very least, we would want relevant circumstances to temper retribution with proper justice. It also makes more sense to focus on deterrence and rehabilitation more than retribution. However just, retribution merely adds more suffering to the world while deterrence and rehabilitation reduce it.
The second aspect of this is negative—skepticism about free will seems to cause people to think that they have a license to do ill, thus leading to worse behavior. That is clearly undesirable. This, then, provides an interesting and important challenge: balancing our view of determinism and freedom in order to avoid both unjust punishment and becoming unjust. This, of course, assumes that we have a choice. If we do not, we will just do what we do and giving advice is pointless. As I jokingly tell my students, a determinist giving advice about what we should do is like someone yelling advice to a person falling to certain death—he can yell all he wants about what to do, but it won’t matter.