A Philosopher's Blog

How You Should Vote

Posted in Politics by Michael LaBossiere on October 13, 2014

As I write this in early October, Election Day in the United States is about a month away. While most Americans do not vote, there is still the question of how a voter should vote.

While I do have definite opinions about the candidates and issues on the current ballot in my part of Florida, this essay is not aimed at convincing you to vote as I did (via my mail-in ballot). Rather, my goal is to discuss how you should vote in general.

The answer to the question of how you should vote is easy: if you are rational, then you should vote in your self-interest. In the case of a specific candidate, you should vote for the candidate you believe will act in your self-interest. In the case of such things as ballot measures, you should vote for or against based on how you believe it will impact your self-interest. So, roughly put, you should vote for what is best for you.

While this is rather obvious advice, it does bring up two often overlooked concerns. The first is the matter of determining what is actually in your self-interest. The second is determining whether or not your voting decision is in your self-interest. In the case of a candidate, the concern is whether or not the candidate will act in your self-interest. In the case of things like ballot measures, the question is whether or not the measure will be advantageous to your interests.

It might be thought that a person just knows what is in her self-interest. Unfortunately, people can be wrong about this. In most cases people just assume that if they want or like something, then it is in their self-interest. But, what a person likes or wants need not be what is best for her. For example, a person might like the idea of cutting school funding without considering how it will impact her family. In contrast, what people do not want or dislike is assumed to be against their self-interest. But, obviously, what a person dislikes or does not want might not be bad for her. For example, a person might dislike the idea of an increased minimum wage and vote against it without considering whether it would actually be in her self-interest. The take-away is that a person needs to look beyond what she likes or dislikes, wants or does not want, in order to determine her actual self-interest.

It is natural to think of what is in a person’s self-interest in rather selfish terms. That is, in terms of what seems to benefit just the person without considering the interests of others. While this is one way to look at self-interest, it is worth considering that what might seem to be in a person’s selfish interest could actually be against her self-interest. For example, a business owner might see paying taxes to fund public education as being against her self-interest because it seems to have no direct, selfish benefit to her. However, having educated fellow citizens would seem to be in her self-interest and even in her selfish interest. For example, having the state pay for the education of her workers is advantageous to her—even if she has to contribute a little. As another example, a person might see paying taxes for public health programs and medical aid to foreign countries as against her self-interest because she has her own medical coverage and does not travel to those countries. However, as has been shown with Ebola, public and even world health is in her interest—unless she lives in total isolation. As such, even the selfish should consider whether or not their selfishness in a matter is actually in their self-interest.

It is also worth considering a view of self-interest that is more altruistic. That is, that a person’s interest is not just in her individual advantages but also in the general good. For this sort of person, providing for the common defense and securing the general welfare would be in her self-interest because her self-interest goes beyond just her self.

So, a person should sort out her self-interest and consider that it might not just be a matter of what she likes, wants or sees as in her selfish advantage. The next step is to determine which candidate is most likely to act in her self-interest and which vote on a ballot measure is most likely to serve her self-interest.

Political candidates, obviously enough, try very hard to convince their target voters that they will act in their interest. Those backing ballot measures also do their best to convince voters that voting a certain way is in their self-interest.

However, the evidence is that politicians do not act in the interest of the majority of those who voted for them. Researchers at Princeton and Northwestern conducted a study, “Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens”, to determine whether or not politicians acted based on the preferences of the majority. The researchers examined about 1,800 policies and matched them against the preferences expressed by three classes: the average American (50th income percentile), the affluent American (90th income percentile) and large special interest groups.

The results are hardly surprising: “The central point that emerges from our research is that economic elites and organized groups representing business interests have substantial independent impacts on US government policy, while mass-based interest groups and average citizens have little or no independent influence.” This suggests that voters are rather poor at selecting candidates who will act in their interest (or perhaps that there are no candidates who will do so).

It can be countered that the study just shows that politicians generally act contrary to the preferences of the majority but not that they act contrary to the voters’ self-interest. After all, I made the point that what people want (prefer) might not be what is in their self-interest. But, on the face of it, unless what is in the interest of the majority is that the affluent get their way, then it seems that the politicians voters choose generally do not act in the best interest of the voters. This would indicate that voters should pick different candidates.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Lessons from Gaming #1: Keep Rolling

Posted in Epistemology, Metaphysics, Philosophy by Michael LaBossiere on October 10, 2014
Six dice of various colours (Photo credit: Wikipedia)

When I was a young kid I played games like Monopoly, Chutes & Ladders and Candy Land. When I was a somewhat older kid, I was introduced to Dungeons & Dragons and this proved to be a gateway game to Call of Cthulhu, Battletech, Star Fleet Battles, Gamma World, and video games of all sorts. I am still a gamer today—a big bag of many-sided dice and exotic gaming mice dwell within my house.

Over the years, I have learned many lessons from gaming. One of these is “keep rolling.” This is, not surprisingly, similar to the classic advice of “keep trying” and the idea is basically the same. However, there is some interesting philosophy behind “keep rolling.”

Most of the games I have played feature actual dice or virtual dice (that is, randomness) that are used to determine how things go in the game. To use a very simple example, the dice rolls in Monopoly determine how far your piece moves. In vastly more complicated games like Pathfinder or Destiny, the dice (or random number generators) govern such things as attacks, damage, saving throws, loot, non-player character reactions and, in short, much of what happens in the game. For most of these games, the core mechanics are built around what is supposed to be a random system. For example, in games like Pathfinder, when your character attacks the dragon with her greatsword, a roll of a 20-sided die determines whether you hit or not. If you do hit, then you roll more dice to determine your damage.
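To make that mechanic concrete, here is a minimal sketch of such an attack roll in Python. The numbers (the +7 attack bonus, the armor class of 18, the 2d6 greatsword damage) are illustrative assumptions rather than any official rules, and the sketch ignores refinements like critical hits:

import random

def attack_roll(attack_bonus, armor_class, damage_dice, damage_sides):
    """Resolve one simplified d20 attack; return the damage dealt (0 on a miss)."""
    d20 = random.randint(1, 20)  # the attack roll
    if d20 + attack_bonus >= armor_class:  # a hit if the total meets or beats AC
        # On a hit, roll the damage dice (e.g., 2d6 for a greatsword)
        return sum(random.randint(1, damage_sides) for _ in range(damage_dice))
    return 0  # a miss deals no damage

# Illustrative numbers: +7 to hit against AC 18, with 2d6 greatsword damage
print(attack_roll(attack_bonus=7, armor_class=18, damage_dice=2, damage_sides=6))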

Having played these sorts of games for years, I can think very well in terms of chance and randomness when planning tactics and strategies within such games. On the one hand, a lucky roll can result in victory in the face of overwhelming odds. On the other hand, a bad roll can seize defeat from the jaws of victory. But, in general, success is more likely if one does not give up and keeps on rolling.

This lesson translates very easily and obviously to life. There are, of course, many models and theories of how the real world works. Some theories present the world as deterministic—all that happens occurs as it must and things cannot be otherwise. Others present a pre-determined world (or pre-destined): all that happens occurs as it has been ordained and cannot be otherwise. Still other models present a random universe.

As a gamer, I favor the random universe model: God does play dice with us and He often rolls them hard. The reason for this belief is that the dice/random model of gaming seems to work when applied to the actual world—as such, my belief is mostly pragmatic. Since games are supposed to model parts of reality, it is hardly surprising that there is a match up. Based on my own experience, the world does seem to work rather like a game: success and failure seem to involve chance.

As a philosopher, I recognize this could simply be a matter of epistemology: the apparent chance could be the result of our ignorance rather than an actual randomness. To use the obvious analogy, the game master might not be rolling dice behind her screen at all and what happens might be determined or pre-determined. Unlike in a game, the rule system for reality is not accessible: it is guessed at by what we observe and we learn the game of life solely by playing.

That said, the dice model seems to fit experience best: I try to do something and succeed or fail with a degree of apparent randomness. Because I believe that randomness is a factor, I consider that my failure to reach a goal could be partially due to chance. So, if I want to achieve that goal, I roll again. And again. Until I succeed or decide that the game is not worth the roll. Not being a fool, I do consider that success might be impossible—but I do not infer that from one or even a few bad rolls. This approach to life has served me well and will no doubt do so until it finally kills me.
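The logic of rolling again can be put in simple probability terms: if each attempt has an independent chance p of success, then the chance of at least one success in n attempts is 1 - (1 - p)^n, which climbs toward certainty as n grows. A quick sketch, with the 20% per-attempt chance being just an assumed figure:

def at_least_one_success(p, attempts):
    """Probability of at least one success in `attempts` independent tries."""
    return 1 - (1 - p) ** attempts

# Assume a modest 20% chance of success per attempt
for n in (1, 5, 10, 20):
    print(f"{n:2d} attempts: {at_least_one_success(0.2, n):.1%}")
# Prints 20.0%, 67.2%, 89.3%, 98.8%: persistence pays if success is possible at all.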

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

A Philosopher’s Blog: 2012-2013

Posted in Philosophy by Michael LaBossiere on October 8, 2014

My latest book, A Philosopher’s Blog 2012-2013, will be free on Amazon from October 8, 2014 to October 12, 2014.

Description: “This book contains select essays from the 2012-2013 postings of A Philosopher’s Blog. The topics covered range from economic justice to defending the humanities, plus some side trips into pain pills and the will.”

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Lawful Good

Posted in Ethics, Philosophy by Michael LaBossiere on October 6, 2014
Paladin II (Photo credit: Wikipedia)

As I have written in other posts on alignments, it is often useful to look at the actual world in terms of the D&D alignment system. In this essay, I will look at the alignment that many players find the most annoying: lawful good (or, as some call it, “awful good”).

Pathfinder, which is a version of the d20 D&D system, presents the alignment as follows:

 A lawful good character believes in honor. A code or faith that she has unshakable belief in likely guides her. She would rather die than betray that faith, and the most extreme followers of this alignment are willing (sometimes even happy) to become martyrs.

A lawful good character at the extreme end of the lawful-chaotic spectrum can seem pitiless. She may become obsessive about delivering justice, thinking nothing of dedicating herself to chasing a wicked dragon across the world or pursuing a devil into Hell. She can come across as a taskmaster, bent upon her aims without swerving, and may see others who are less committed as weak. Though she may seem austere, even harsh, she is always consistent, working from her doctrine or faith. Hers is a world of order, and she obeys superiors and finds it almost impossible to believe there’s any bad in them. She may be more easily duped by such impostors, but in the end she will see justice is done—by her own hand if necessary.

In the fantasy worlds of role-playing games, the exemplar of the lawful good alignment is the paladin. Played properly, a paladin character is a paragon of virtue, a sword of righteousness, a defender of the innocent and a pain in the party’s collective ass. This is because the paladin and, to a somewhat lesser extent, all lawful good characters are very strict about being good. They are usually quite willing to impose their goodness on the party, even when doing so means that the party must take more risks, do things the hard way, or give up some gain. For example, lawful good characters always insist on destroying unholy magical items, even when they could be cashed in for stacks of gold.

In terms of actual world moral theories, lawful good tends to closely match virtue theory: the objective is to be a paragon of virtue and all that entails. In actual game play, players tend to (knowingly or unknowingly) embrace the sort of deontology (rules-based ethics) made famous by our good dead friend Immanuel Kant. On this sort of view, morality is about duty and obligations, the innate worth of people, and the need to take action because it is right (rather than expedient or prudent). Like Kant, lawful good types tend to be absolutists—there is one and only one correct solution to any moral problem and there are no exceptions. The lawful good types also tend to reject consequentialism—while the consequences of actions are not ignored (except by the most fanatical of the lawful good), what ultimately matters is whether the act is good in and of itself or not.

In the actual world, a significant number of people purport to be lawful good—that is, they claim to be devoted to honor, goodness, and order. Politicians, not surprisingly, often try to cast themselves, their causes and their countries in these terms. As might be suspected, most of those who purport to be good are endeavoring to deceive others or themselves—they mistake their prejudices for goodness and their love of power for a devotion to a just order. While those skilled at deceiving others are dangerous, those who have convinced themselves of their own goodness can be far more dangerous: they are willing to destroy all who oppose them for they believe that those people must be evil.

Fortunately, there are actually some lawful good types in the world. These are the people who sincerely work for just, fair and honorable systems of order, be they nations, legal systems, faiths or organizations. While they can seem a bit fanatical at times, they do not cross over into the evil that serves as a key component of true fanaticism.

Neutral good types tend to see the lawful good types as being too worried about order and obedience. The chaotic good types respect the goodness of the lawful good types, but find their obsession with hierarchy, order and rules oppressive. However, good creatures do not willingly and knowingly do serious harm to other good creatures. So, while a chaotic good person might be critical of a lawful good organization, she would not try to destroy it.

Chaotic evil types are the antithesis of the lawful good types and they are devoted enemies. The chaotic evil folks hate the order and goodness of the lawful good, and they certainly delight in destroying both.

Neutral evil types are opposed to the goodness of the lawful good, but can be adept at exploiting both the lawful and good aspects of the lawful good. Of course, the selfishly evil need to avoid exposure, since the good will not willingly suffer their presence.

Lawful evil types can often get along with the lawful good types in regards to the cause of order. Both types respect tradition, authority and order—although they do so for very different reasons. Lawful evil types often have compunctions that can make them seem to have some goodness and the lawful good are sometimes willing to see such compunctions as signs of the possibility of redemption. In general, the lawful good and lawful evil are most likely to be willing to work together at the societal level. For example, they might form an alliance against a chaotic evil threat to their nation. Inevitably, though, the lawful good and lawful evil must end up in conflict. Which is as it should be.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Gaming Newcomb’s Paradox III: What You Actually Decide

Posted in Metaphysics, Philosophy, Reasoning/Logic by Michael LaBossiere on October 3, 2014

Robert Nozick (Photo credit: Wikipedia)

Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

In this essay I will present the game that creates the paradox and then discuss a specific aspect of Nozick’s version, namely his stipulation regarding the effect of how the player of the game actually decides.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrong).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, if the player decides to take both boxes, she will get $1,000. If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to Nozick’s condition that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

This stipulation provides some insight into how the Predictor’s prediction ability is supposed to work. This is important because the workings of the Predictor’s ability to predict are, as I argued in my previous essay, rather significant in sorting out how one should decide.

The stipulation mainly serves to indicate how the Predictor’s ability does not work. First, it would seem to indicate that the Predictor does not rely on time travel—that is, it does not go forward in time to observe the decision and then travel back to place (or not place) the money in the box. After all, the prediction in this case would be explained in terms of what the player decided to do. This still leaves it open for the Predictor to visit (or observe) a possible future (or, more accurately, a possible world that is running ahead of the actual world in its time) since the possible future does not reveal what the player actually decides, just what she decides in that possible future. Second, this would seem to indicate that the Predictor is not able to “see” the actual future (perhaps by being able to perceive all of time “at once” rather than linearly as humans do). After all, in this case it would be predicting based on what the player actually decided. Third, this would also rule out any form of backwards causation in which the actual choice was the cause of the prediction. While there are, perhaps, other specific possibilities that are also eliminated, the gist is that the Predictor has to, by Nozick’s stipulation, be limited to information available at the time of the prediction and not information from the future. There are a multitude of possibilities here.

One possibility is that the Predictor is telepathic and can predict based on what it reads regarding the player’s intentions at the time of the prediction. In this case, the best approach would be for the player to think that she will take one box, and then after the prediction is made, take both. Or, alternatively, use some sort of drugs or technology to “trick” the Predictor. The success of this strategy would depend on how well the player can fool the Predictor. If the Predictor cannot be fooled or is unlikely to be fooled, then the smart strategy would be to intend to take box B and then just take box B. After all, if the Predictor cannot be fooled, then box B will be empty if the player intends to take both.

Another possibility is that the Predictor is a researcher—it gathers as much information as it can about the player and makes a shrewd guess based on that information (which might include what the player has written about the paradox). Since Nozick stipulates that the Predictor is “almost certainly” right, the Predictor would need to be an amazing researcher. In this case, the player’s only way to mislead the Predictor is to determine its research methods and try to “game” it so the Predictor will predict that she will just take B, then actually decide to take both. But, once again, the Predictor is stipulated to be “almost certainly” right—so it would seem that the player should just take B. If B is empty, then the Predictor got it wrong, which would “almost certainly” not happen. Of course, it could be contended that since the player does not know how the Predictor will predict based on its research (the player might not know what she will do), then the player should take both. This, of course, assumes that the Predictor has a reasonable chance of being wrong—contrary to the stipulation.

A third possibility is that the Predictor predicts in virtue of its understanding of what it takes to be a deterministic system. Alternatively, the system might be random, but one governed by probabilities. In either case, the Predictor uses the data available to it at the time and then “does the math” to predict what the player will decide.

If the world really is deterministic, then the Predictor could be wrong if it is determined to make an error in its “math.” So, the player would need to predict how likely this is and then act accordingly. But, of course, the player will simply act as she is determined to act. If the world is probabilistic, then the player would need to estimate the probability that the Predictor will get it right. But, it is stipulated that the Predictor is “almost certainly” right so any strategy used by the player to get one over on the Predictor will “almost certainly” fail, so the player should take box B. Of course, the player will do what “the dice say” and the choice is not a “true” choice.
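For the probabilistic case, the estimate has a clean break-even point. Using the payoffs of the game, one-boxing has the higher expected value whenever the Predictor’s accuracy exceeds 50.05%. The arithmetic below is my own sketch in Python, not anything from Nozick’s paper:

# Expected payouts as a function of the Predictor's accuracy p:
#   take both: p * 1,000 + (1 - p) * 1,001,000  (a right prediction leaves B empty)
#   take B:    p * 1,000,000                    (a right prediction fills B)
# One-boxing wins when p * 1,000,000 > p * 1,000 + (1 - p) * 1,001,000,
# which solves to p > 1,001,000 / 2,000,000 = 0.5005.

def ev_both(p):
    return p * 1_000 + (1 - p) * 1_001_000

def ev_b_only(p):
    return p * 1_000_000

for p in (0.40, 0.5005, 0.99):
    print(f"p={p}: both=${ev_both(p):,.0f}, B only=${ev_b_only(p):,.0f}")
# At p=0.99 taking both is worth about $11,000 against $990,000 for B alone,
# so "almost certainly right" makes taking just B the clear expected-value choice.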

If the world is one with some sort of metaphysical free will that is in principle unpredictable, then the player’s actual choice would, in principle, be unpredictable. But, of course, this directly violates the stipulation that the Predictor is “almost certainly” right. If the player’s choice is truly unpredictable, then the Predictor might make a shrewd/educated guess, but it would not be “almost certainly” right. In that case, the player could make a rational case for taking both—based on the estimate of how likely it is that the Predictor got it wrong. But this would be a different game, one in which the Predictor is not “almost certainly” right.

This discussion seems to nicely show that the stipulation that “what you actually decide to do is not part of the explanation of why he made the prediction he made” is a red herring. Given the stipulation that the Predictor is “almost certainly” right, it does not really matter how its predictions are explained. The stipulation that what the player actually decides is not part of the explanation simply serves to mislead by creating the false impression that there is a way to “beat” the Predictor by actually deciding to take both boxes and gambling that it has predicted the player will just take B.  As such, the paradox seems to be dissolved—it is the result of some people being misled by one stipulation and not realizing that the stipulation that the Predictor is “almost certainly” right makes the other irrelevant.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Gaming Newcomb’s Paradox II: Mechanics

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on October 1, 2014
(Photo credit: Wikipedia)

Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

As a philosopher, a game master (a person who runs a tabletop role playing game) and an author of game adventures, I am rather fond of puzzles and paradoxes. As a philosopher, I can (like other philosophers) engage in the practice known as “just making stuff up.” As an adventure author, I can do the same—but I need to present the actual mechanics of each problem, puzzle and paradox. For example, a trap description has to specify exactly how the trap works, how it may be overcome and what happens if it is set off. I thought it would be interesting to look at Newcomb’s Paradox from a game master perspective and lay out the possible mechanics for it. But first, I will present the paradox and two stock attempts to solve it.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrong).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, if the player decides to take both boxes, she will get $1,000. If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to laying out some possible mechanics in gamer terms.

One obvious advantage of crafting the mechanics for a game is that the author and the game master know exactly how the mechanic works. That is, she knows the truth of the matter. While the players in role-playing games know the basic rules, they often do not know the full mechanics of a specific challenge, trap or puzzle. Instead, they need to figure out how it works—which often involves falling into spiked pits or being ground up into wizard burger. Fortunately, Newcomb’s Paradox has very simple game mechanics, but many variants.

In game mechanics, the infallible Predictor is easy to model. The game master’s description would be as follows: “have the player character (PC) playing the Predictor’s game make her choice. The Predictor is infallible, so if the player takes box B, she gets the million. If the player takes both, she gets $1,000.” In this case, the right decision is to take B. After all, the Predictor is infallible. So, the solution is easy.

Predicted choice     Actual choice     Payout
A and B              A and B           $1,000
A and B              B only            $0
B only               A and B           $1,001,000
B only               B only            $1,000,000
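As a quick sanity check, the table can be coded directly. With an infallible Predictor the prediction always matches the actual choice, so only the two matching rows can ever occur. A minimal sketch:

# The table above, coded directly and keyed by (predicted, actual) choice.
PAYOUT = {
    ("A and B", "A and B"): 1_000,
    ("A and B", "B only"): 0,
    ("B only", "A and B"): 1_001_000,
    ("B only", "B only"): 1_000_000,
}

def infallible_payout(actual_choice):
    """With an infallible Predictor, the prediction always matches the choice."""
    return PAYOUT[(actual_choice, actual_choice)]

print(infallible_payout("B only"))   # 1000000: one-boxing wins outright
print(infallible_payout("A and B"))  # 1000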

A less-than-infallible Predictor is also easy to model with dice. The description of the Predictor simply specifies the accuracy of its predictions. So, for example: “The Predictor is correct 99% of the time. After the player character makes her choice, roll D100 (generating a number from 1-100). If you roll 100, the Predictor was wrong. If the PC picked just box B, it is empty and she gets nothing because the Predictor predicted she would take both. If she picked both, B is full and she gets $1,001,000 because the Predictor predicted she would just take one. If you roll 1-99, the Predictor was right. If the PC picked box B, she gets $1,000,000. If she takes both, she gets $1,000 since box B is empty.”

In this case, the decision is a gambling matter and the right choice can be calculated by considering the chance the Predictor is right and the relative payoffs. Assuming the Predictor is “almost always right” would make choosing only B the rational choice (unless the player absolutely and desperately needs only $1,000), since the player who picks just B will “almost always” get the $1,000,000 rather than nothing, while the player who picks both will “almost always” get just $1,000. But, if the Predictor is “almost always wrong” (or even just usually wrong), then taking both would be the better choice. And so on for all the fine nuances of probability. The solution is relatively easy—it just requires doing some math based on the chance the Predictor is correct in its predictions. As such, if the mechanism of the Predictor is specified, there is no paradox and no problem at all. But, of course, in a role-playing game puzzle, the players should not know the mechanism.
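For those who like their mechanics executable, here is a minimal sketch of that D100 procedure in Python, run many times; the 99% figure is just the example accuracy from the description above:

import random

def play_once(choice, accuracy=0.99):
    """One run of the game: in effect, the D100 roll decides whether the
    Predictor's prediction was right."""
    prediction_correct = random.random() < accuracy
    if choice == "A and B":
        # Right prediction: B is empty, so $1,000. Wrong: B is full, $1,001,000.
        return 1_000 if prediction_correct else 1_001_000
    # choice == "B only": a right prediction means B holds the $1,000,000.
    return 1_000_000 if prediction_correct else 0

trials = 100_000
for choice in ("A and B", "B only"):
    average = sum(play_once(choice) for _ in range(trials)) / trials
    print(f"{choice}: average payout is about ${average:,.0f}")
# Expect roughly $11,000 for "A and B" and $990,000 for "B only" at 99% accuracy.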

If the game master is doing her job, when the players are confronted by the Predictor, they will not know the Predictor’s predictive powers (and clever players will suspect some sort of trick or trap). The game master will say something like “after explaining the rules, the strange being says ‘my predictions are nearly always right/always right’ and sets two boxes down in front of you.” Really clever players will, of course, make use of spells, items, psionics or technology (depending on the game) to try to determine what is in the box and the capabilities of the Predictor. Most players will also consider just attacking the Predictor and seeing what sort of loot it has. So, for the game to be played in accord with the original version, the game master will need to provide plausible ways to counter all these efforts so that the players have no idea about the abilities of the Predictor or what is in box B.

In some ways, this sort of choice would be similar to Pascal’s famous Wager: one knows that the Predictor will get it right or it won’t. But, in this case, the player has no idea about the odds of the Predictor being right. From the perspective of the player who is acting in ignorance, taking both boxes yields a 100% chance of getting $1,000 and somewhere between a 0% and 100% chance of getting the extra $1,000,000. Taking box B alone yields a 100% chance of not getting the $1,000 and some chance between 0% and 100% of getting $1,000,000. When acting in ignorance, the safe bet is to take both: the player walks away with at least $1,000. Taking just B is a gamble that might or might not pay off: the player might walk away with nothing or $1,000,000.

But, which choice is rational can depend on many possible factors. For example, if the players need $1,000 to buy a weapon to defeat the big boss monster in the dungeon, then the safe choice of taking both boxes is the smart one: they get the weapon for sure. If they need $1,001,000 to buy the weapon, then taking both is also the smart choice, since that is the only way to get that sum in this game. If they need $1,000,000 to buy the weapon, then there is no rational way to pick between taking one or both, since they have no idea which option gives them the best chance of getting at least $1,000,000. Taking both gets them $1,000 but only gets them the $1,000,000 if the Predictor predicted wrongly, and they have no idea whether it did. Taking just B gets them the $1,000,000 only if the Predictor predicted correctly, and they have no idea whether it did.

In the actual world, a person playing the game with the Predictor would be in the position of the players in the role-playing game: she does not know how likely it is that the Predictor will get it right. If she believes that the Predictor will probably get it wrong, then she would take both. If she thinks it will get it right, she would take just B. Since she cannot pick randomly (in Nozick’s scenario B is empty if the player decides by chance), that option is not available. As such, Newcomb’s Paradox is an epistemic problem: the player does not know the accuracy of the predictions but if she did, she would know how to pick. But, if it is known (or just assumed) that the Predictor is infallible or almost always right, then taking B is the smart choice (in general, unless the person absolutely must have $1,000). To the degree that the Predictor can be wrong, taking both becomes the smarter choice (if the Predictor is always wrong, taking both is the best choice). So, there seems to be no paradox here. Unless I have it wrong, which I certainly do.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page