A Philosopher's Blog

Gaming Newcomb’s Paradox III: What You Actually Decide

Posted in Metaphysics, Philosophy, Reasoning/Logic by Michael LaBossiere on October 3, 2014

Robert Nozick (Photo credit: Wikipedia)

Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

In this essay I will present the game that creates the paradox and then discuss a specific aspect of Nozick’s version, namely his stipulation regarding the effect of how the player of the game actually decides.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrong).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000.  If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to Nozick’s condition that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

This stipulation provides some insight into how the Predictor’s prediction ability is supposed to work. This is important because the workings of the Predictor’s ability to predict are, as I argued in my previous essay, rather significant in sorting out how one should decide.

The stipulation mainly serves to indicate how the Predictor’s ability does not work. First, it would seem to indicate that the Predictor does not rely on time travel—that is, it does not go forward in time to observe the decision and then travel back to place (or not place) the money in the box. After all, the prediction in this case would be explained in terms of what the player decided to do. This still leaves it open for the Predictor to visit (or observe) a possible future (or, more accurately, a possible world that is running ahead of the actual world in its time) since the possible future does not reveal what the player actually decides, just what she decides in that possible future. Second, this would seem to indicate that the Predictor is not able to “see” the actual future (perhaps by being able to perceive all of time “at once” rather than linearly as humans do). After all, in this case it would be predicting based on what the player actually decided. Third, this would also rule out any form of backwards causation in which the actual choice was the cause of the prediction. While there are, perhaps, other specific possibilities that are also eliminated, the gist is that the Predictor has to, by Nozick’s stipulation, be limited to information available at the time of the prediction and not information from the future. There are a multitude of possibilities here.

One possibility is that the Predictor is telepathic and can predict based on what it reads regarding the player’s intentions at the time of the prediction. In this case, the best approach would be for the player to think that she will take one box, and then after the prediction is made, take both. Or, alternatively, use some sort of drugs or technology to “trick” the Predictor. The success of this strategy would depend on how well the player can fool the Predictor. If the Predictor cannot be fooled or is unlikely to be fooled then the smart strategy would be to intend to take box B and then just take box B. After all, if the Predictor cannot be fooled, then box B will be empty if the player intends on taking both.

Another possibility is that the Predictor is a researcher—it gathers as much information as it can about the player and makes a shrewd guess based on that information (which might include what the player has written about the paradox). Since Nozick stipulates that the Predictor is “almost certainly” right, the Predictor would need to be an amazing researcher. In this case, the player’s only way to mislead the Predictor is to determine its research methods and try to “game” it so the Predictor will predict that she will just take B, then actually decide to take both. But, once again, the Predictor is stipulated to be “almost certainly” right—so it would seem that the player should just take B. If B is empty, then the Predictor got it wrong, which would “almost certainly” not happen. Of course, it could be contended that since the player does not know how the Predictor will predict based on its research (the player might not know what she will do), then the player should take both. This, of course, assumes that the Predictor has a reasonable chance of being wrong—contrary to the stipulation.

A third possibility is that the Predictor predicts in virtue of its understanding of what it takes to be a determinist system. Alternatively, the system might be a random system, but one that has probabilities. In either case, the Predictor uses the data available to it at the time and then “does the math” to predict what the player will decide.

If the world really is deterministic, then the Predictor could be wrong if it is determined to make an error in its “math.” So, the player would need to predict how likely this is and then act accordingly. But, of course, the player will simply act as she is determined to act. If the world is probabilistic, then the player would need to estimate the probability that the Predictor will get it right. But, it is stipulated that the Predictor is “almost certainly” right so any strategy used by the player to get one over on the Predictor will “almost certainly” fail, so the player should take box B. Of course, the player will do what “the dice say” and the choice is not a “true” choice.

If the world is one with some sort of metaphysical free will that is in principle unpredictable, then the player’s actual choice would, in principle, be unpredictable. But, of course, this directly violates the stipulation that the Predictor is “almost certainly” right. If the player’s choice is truly unpredictable, then the Predictor might make a shrewd/educated guess, but it would not be “almost certainly” right. In that case, the player could make a rational case for taking both—based on the estimate of how likely it is that the Predictor got it wrong. But this would be a different game, one in which the Predictor is not “almost certainly” right.
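
For what it is worth, that estimate would not need to be very generous before taking just B wins out. As a rough sketch (this treats the choice purely as an expected-value gamble using the dollar amounts above—my framing, not anything Nozick stipulates), the break-even accuracy works out as follows:

# If p is the player's estimate of the chance the Predictor is right, then:
#   expected value of taking only B : p * 1,000,000
#   expected value of taking both   : p * 1,000 + (1 - p) * 1,001,000
# Setting the two equal and solving for p gives the break-even estimate.
break_even = 1_001_000 / 2_000_000
print(break_even)   # 0.5005

So a player who thinks the Predictor is right even slightly more often than a coin flip does better, in expectation, by taking only B; taking both is the better gamble only if she thinks the Predictor is roughly as likely to be wrong as right—which is precisely what the “almost certainly” stipulation is supposed to rule out.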

This discussion seems to nicely show that the stipulation that “what you actually decide to do is not part of the explanation of why he made the prediction he made” is a red herring. Given the stipulation that the Predictor is “almost certainly” right, it does not really matter how its predictions are explained. The stipulation that what the player actually decides is not part of the explanation simply serves to mislead by creating the false impression that there is a way to “beat” the Predictor by actually deciding to take both boxes and gambling that it has predicted the player will just take B.  As such, the paradox seems to be dissolved—it is the result of some people being misled by one stipulation and not realizing that the stipulation that the Predictor is “almost certainly” right makes the other irrelevant.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Gaming Newcomb’s Paradox II: Mechanics

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on October 1, 2014
(Photo credit: Wikipedia)

Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

As a philosopher, a game master (a person who runs a tabletop role playing game) and an author of game adventures, I am rather fond of puzzles and paradoxes. As a philosopher, I can (like other philosophers) engage in the practice known as “just making stuff up.” As an adventure author, I can do the same—but I need to present the actual mechanics of each problem, puzzle and paradox. For example, a trap description has to specify exactly how the trap works, how it may be overcome and what happens if it is set off. I thought it would be interesting to look at Newcomb’s Paradox from a game master perspective and lay out the possible mechanics for it. But first, I will present the paradox and two stock attempts to solve it.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrong).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000.  If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to laying out some possible mechanics in gamer terms.

One obvious advantage of crafting the mechanics for a game is that the author and the game master know exactly how the mechanic works. That is, they know the truth of the matter. While the players in role-playing games know the basic rules, they often do not know the full mechanics of a specific challenge, trap or puzzle. Instead, they need to figure out how it works—which often involves falling into spiked pits or being ground up into wizard burger. Fortunately, Newcomb’s Paradox has very simple game mechanics, but many variants.

In game mechanics, the infallible Predictor is easy to model. The game master’s description would be as follows: “have the player character (PC) playing the Predictor’s game make her choice. The Predictor is infallible, so if the player takes box B, she gets the million. If the player takes both, she gets $1,000.” In this case, the right decision is to take B. After all, the Predictor is infallible. So, the solution is easy.

Predicted choice | Actual choice | Payout
A and B          | A and B       | $1,000
A and B          | B only        | $0
B only           | A and B       | $1,001,000
B only           | B only        | $1,000,000

A less-than-infallible Predictor is also easy to model with dice. The description of the Predictor simply specifies the accuracy of its predictions. So, for example: “The Predictor is correct 99% of the time. After the player character makes her choice, roll D100 (generating a number from 1-100). If you roll 100, the Predictor was wrong. If the PC picked just box B, it is empty and she gets nothing because the Predictor predicted she would take both. If she picked both, B is full and she gets $1,001,000 because the Predictor predicted she would just take one. If you roll 1-99, the Predictor was right. If the PC picked box B, she gets $1,000,000. If she takes both, she gets $1,000 since box B is empty.” In this case, the decision is a gambling matter and the right choice can be calculated by considering the chance the Predictor is right and the relative payoffs. Assuming the Predictor is “almost always right” would make choosing only B the rational choice (unless the player absolutely and desperately needs only $1,000), since the player who picks just B will “almost always” get the $1,000,000 rather than nothing while the player who picks both will “almost always” get just $1,000. But, if the Predictor is “almost always wrong” (or even just usually wrong), then taking both would be the better choice. And so on for all the fine nuances of probability. The solution is relatively easy—it just requires doing some math based on the chance the Predictor is correct in its predictions. As such, if the mechanism of the Predictor is specified, there is no paradox and no problem at all. But, of course, in a role-playing game puzzle, the players should not know the mechanism.
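
To make the “doing some math” part concrete, here is a minimal sketch (in Python, purely as an illustration; the accuracy values are just examples and the dollar amounts are the ones used above) of how a game master or player could compare the expected payout of the two choices for a given Predictor accuracy:

# Expected payout of each choice in the dice-based version of the game.
# 'accuracy' is the probability (0.0 to 1.0) that the Predictor's prediction is correct.
def expected_payouts(accuracy):
    one_box = accuracy * 1_000_000                            # box B is full only if the Predictor was right
    two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000   # box B is full only if the Predictor was wrong
    return one_box, two_box

for acc in (0.99, 0.75, 0.50, 0.25):
    b_only, both = expected_payouts(acc)
    pick = "take only B" if b_only > both else "take both"
    print(f"accuracy {acc:.2f}: B only ${b_only:,.0f} vs. both ${both:,.0f} -> {pick}")

On these numbers, a Predictor that is right 99% of the time makes taking only B worth an expected $990,000 against about $11,000 for taking both, while a Predictor that is no better than a coin flip tips the balance (barely) toward taking both—which is just the gambling math described above.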

If the game master is doing her job, when the players are confronted by the Predictor, they will not know the predictor’s predictive powers (and clever players will suspect some sort of trick or trap). The game master will say something like “after explaining the rules, the strange being says ‘my predictions are nearly always right/always right’ and sets two boxes down in front of you.” Really clever players will, of course, make use of spells, items, psionics or technology (depending on the game) to try to determine what is in the box and the capabilities of the Predictor. Most players will also consider just attacking the Predictor and seeing what sort of loot it has. So, for the game to be played in accord with the original version, the game master will need to provide plausible ways to counter all these efforts so that the players have no idea about the abilities of the Predictor or what is in box B. In some ways, this sort of choice would be similar to Pascal’s famous Wager: one knows that the Predictor will get it right or it won’t. But, in this case, the player has no idea about the odds of the Predictor being right. In this case, from the perspective of the player who is acting in ignorance, taking both boxes yields a 100% chance of getting $1,000 and somewhere between 0 and 100% chance of getting the extra $1,000,000. Taking the B box alone yields a 100% chance of not getting the $1,000 and some chance between 0% and 100% of getting $1,000,000. When acting in ignorance, the safe bet is to take both: the player walks away with at least $1,000. Taking just B is a gamble that might or might not pay off. The player might walk away with nothing or $1,000,000.

But, which choice is rational can depend on many possible factors. For example, if the players need $1,000 to buy the weapon required to defeat the big boss monster in the dungeon, then picking the safe choice would be the smart choice: they can get the weapon for sure. If they need $1,001,000 to buy the weapon, then picking both would also be a smart choice, since that is the only way to get that sum in this game. If they need $1,000,000 to buy the weapon, then there is no rational way to pick between taking one or both, since they have no idea what gives them the best chance of getting at least $1,000,000. Picking both will get them $1,000 but only gets them the $1,000,000 if the Predictor predicted wrong. And they have no idea if it will get it wrong. Picking just B only gets them $1,000,000 if the Predictor predicted correctly. And they have no idea if it will get it right.

In the actual world, a person playing the game with the Predictor would be in the position of the players in the role-playing game: she does not know how likely it is that the Predictor will get it right. If she believes that the Predictor will probably get it wrong, then she would take both. If she thinks it will get it right, she would take just B. Since she cannot pick randomly (in Nozick’s scenario B is empty if the player decides by chance), that option is not available. As such, Newcomb’s Paradox is an epistemic problem: the player does not know the accuracy of the predictions but if she did, she would know how to pick. But, if it is known (or just assumed) the Predictor is infallible or almost always right, then taking B is the smart choice (in general, unless the person absolutely must have $1,000). To the degree that the Predictor can be wrong, taking both becomes the smarter choice (if the Predictor is always wrong, taking both is the best choice). So, there seems to be no paradox here. Unless I have it wrong, which I certainly do.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Gaming Newcomb’s Paradox I: Problem Solved

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on September 29, 2014
Billy Jack (Photo credit: Wikipedia)

One of the many annoying decision theory puzzles is Newcomb’s Paradox. The paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

The following standard chart shows the possible results:

 

Predicted choice | Actual choice | Payout
A and B          | A and B       | $1,000
A and B          | B only        | $0
B only           | A and B       | $1,001,000
B only           | B only        | $1,000,000

 

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrong).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000. If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B.

Gamers of the sort who play Pathfinder, D&D and other such role playing games know how to properly solve this paradox. The Predictor has at least $1,001,000 on hand (probably more, since it will apparently play the game with anyone) and is worth experience points (everything is worth XP). The description just specifies its predictive abilities for the game and no combat abilities are mentioned. So, the solution is to beat down the Predictor, loot it and divide up the money and experience points. It is kind of a jerk when it comes to this game, so there is not really much of a moral concern here.

It might be claimed that the Predictor could not be defeated because of its predictive powers. However, knowing what someone is going to do and being able to do something about it are two very different matters. This is nicely illustrated by the film Billy Jack:

 

[Billy Jack is surrounded by Posner's thugs]

Mr. Posner: You really think those Green Beret Karate tricks are gonna help you against all these boys?

Billy Jack: Well, it doesn’t look to me like I really have any choice now, does it?

Mr. Posner: [laughing] That’s right, you don’t.

Billy Jack: You know what I think I’m gonna do then? Just for the hell of it?

Mr. Posner: Tell me.

Billy Jack: I’m gonna take this right foot, and I’m gonna whop you on that side of your face…

[points to Posner's right cheek]

Billy Jack: …and you wanna know something? There’s not a damn thing you’re gonna be able to do about it.

Mr. Posner: Really?

Billy Jack: Really.

[kicks Posner's right cheek, sending him to the ground]

 

So, unless the Predictor also has exceptional combat abilities, the rational solution is the classic “shoot and loot” or “stab and grab.” Problem solved.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Buffer Zones & Consistency

Posted in Ethics, Law, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on August 8, 2014
United States Supreme Court building (Photo credit: Wikipedia)

In the summer of 2014, the United States Supreme Court struck down the Massachusetts law that forbade protesters from approaching within 35 feet of abortion clinics. The buffer zone law was established in response to episodes of violence. Not surprisingly, the court based its ruling on the First Amendment—such a buffer zone violates the right of free expression of those wishing to protest against abortion or who desire to provide unsought counseling to those seeking abortions.

Though I am a staunch supporter of the freedom of expression, I do recognize that there can be legitimate limits on this freedom—especially when such limits provide protection to the life, liberty and property of others. To use the stock examples, freedom of expression does not permit people to engage in death threats, slander, or panicking people by screaming “fire” in a crowded, non-burning theater.

While I recognize that the buffer zone serves a legitimate purpose in enhancing safety, I agree with the court. The grounds for this agreement are that the harm done to freedom of expression by banning protest in public spaces exceeds the risk of harm caused by allowing such protests. Naturally enough, I agree that people who engage in threatening behavior can be justly removed—but this is handled by existing laws. That said, I do regard the arguments in favor of the buffer zone as having merit—weighing the freedom of expression against safety concerns is challenging and people of good conscience can disagree in this matter.

One rather interesting fact is that the Supreme Court has its own buffer zone—there is a federal law that bans protesters from the plaza of the court. Since the plaza is a public space, it would seem analogous to the public space of the sidewalks covered by the Massachusetts law. Given the Supreme Court’s ruling, the principle seems to be that the First Amendment ensures a right to protest in public spaces—even when there is a history of violence and legitimate safety concerns exist. While the law is whatever those with the biggest guns say it is, there is still the ethics of the matter, and ethics is governed by consistent application.

A principle is consistently applied when it is applied in the same way to similar beings in similar circumstances. Inconsistent application is a problem because it violates three commonly accepted moral assumptions: equality, impartiality and relevant difference.

Equality is the assumption that people are initially morally equal and hence must be treated as such. This requires that moral principles be applied consistently. Naturally, a person’s actions can affect the initial equality. For example, a person who commits horrible evil deeds would not be morally equal to someone who does predominantly good deeds.

Impartiality is the assumption that moral principles must not be applied with partiality. Inconsistent application would involve non-impartial application.

Relevant difference is a common moral assumption. It is the view that different treatment must be justified by relevant differences. What counts as a relevant difference in particular cases can be a matter of great controversy. For example, while many people do not think that gender is a relevant difference in terms of how people should be treated other people think it is very important. This assumption requires that principles be applied consistently.

Given that the plaza of the court is a public space analogous to a sidewalk, then if the First Amendment guarantees the right to protest in public spaces of this sort, then the law forbidding protests in the plaza is unconstitutional and must be struck down. To grant protesters access to the sidewalks outside clinics while forbidding them from the public plaza of the court would be an inconsistent application of the principle. But, of course, there is always a way to counter this.

One principled way to counter this is to show that an alleged inconsistency is merely apparent. One way to do this is by showing that there is a relevant difference in the situation. If the Supreme Court wishes to morally justify their buffer while denying others their buffers, they would need to show a relevant difference that warrants the difference in application. They could, for example, contend that a plaza is relevantly different from a sidewalk. One might point to a size difference and how this impacts protesting. They could also contend that government property is exempt from the law (much like certain state legislatures ban the public from bringing guns into the legislature building even while passing laws allowing people to bring guns into places where other people work)—but they would need to ground the exemption.

My own view, obviously enough, is that there is no relevant difference between the scenarios: if the First Amendment applies to the public spaces around private property, it also applies to the public spaces around state property (which is the most public of public property).

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

The Speed of Rage

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on July 9, 2014
A raging face (Photo credit: Wikipedia)

The rise of social media has created an entirely new world for social researchers. One focus of the research has been on determining how quickly and broadly emotions spread online. The April 2014 issue of the Smithsonian featured an article on this subject by Matthew Shaer.

Not surprisingly, researchers at Beijing University found that the emotion of rage spread the fastest and farthest online. Researchers in the United States found that anger was a speed leader, but not the fastest in the study: awe was even faster than rage. But rage was quite fast. As might be expected, sadness was a slow spreader and had a limited expansion.

This research certainly makes sense—rage tends to be a strong motivator and sadness tends to be a de-motivator. The power of awe was an interesting finding, but some reflection does indicate that this would make sense—the emotion tends to move people to want to share (in the real world, think of people eagerly drawing the attention of strangers to things like beautiful sunsets, impressive feats or majestic animals).

In general, awe is a positive emotion and hence it seems to be a good thing that it travels far and wide on the internet. Rage is, however, something of a mixed bag.

When people share their rage via social media, they are sharing with an intent to express (“I am angry!”) and to infect others with this rage (“you should be angry, too!”). Rage, like many infectious agents, also has the effect of weakening the host’s “immune system.” In the case of anger, the immune system is reason and emotional control. As such, rage tends to suppress reason and lower emotional control. This serves to make people even more vulnerable to rage and quite susceptible to the classic fallacy of appeal to anger—this is the fallacy in which a person accepts her anger as proof that a claim is true. Roughly put, the person “reasons” like this: “this makes me angry, so it is true.” This infection also renders people susceptible to related emotions (and fallacies), such as fear (and appeal to force).

Because of these qualities of anger, it is easy for untrue claims to be accepted far and wide via the internet. This is, obviously enough, the negative side of anger.  Anger can also be positive—to use an analogy, it can be like a cleansing fire that sweeps away brambles and refuse.

For anger to be a positive factor, it would need to be a virtuous anger (to follow Aristotle). Put a bit simply, it would need to be the right degree of anger, felt for the right reasons and directed at the right target. This sort of anger can mobilize people to do good. For example, people might learn of a specific corruption rotting away their society and be moved to act against it. As another example, people might learn of an injustice and be mobilized to fight against it.

The challenge is, of course, to distinguish between warranted and unwarranted anger. This is a rather serious challenge—as noted above, people tend to feel that they are right because they are angry rather than inquiring as to whether their rage is justified or not.

So, when you see a post or Tweet that moves you to anger, think before adding fuel to the fire.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Defining Rape I: Definitions

Posted in Law, Politics, Reasoning/Logic, Universities & Colleges by Michael LaBossiere on June 25, 2014
A picture of a dictionary viewed with a lens on top of it, at the word “Internet” (Photo credit: Wikipedia)

One of the basic lessons of philosophy dating back to at least Socrates is that terms need to be properly defined. Oversimplifying things a bit, a good definition needs to avoid being too narrow and also avoid being too broad. A definition that is too narrow leaves out things that the term should include. One that is too broad allows in too much. A handy analogy for this is the firewall that your computer should have: if it is doing its job properly, it lets in what should be allowed into your computer while keeping attacks out. An example of a definition that is too narrow would be to define “art” as “any product of the visual arts, such as painting and sculpture.” This is too narrow because it leaves out what is manifestly art, such as movies and literature. As an example of a definition that is too broad, defining “art” as “that which creates an emotional effect” would be defective since it would consider such things as being punched in the face or winning the lottery as art. A perfect definition would thus be like perfect security: all that belongs is allowed in and all that does not is excluded.

While people have a general understanding of the meaning of “rape”, the usual view covers what my colleague Jean Kazez calls “classic” rape—an attack that involves the clear use of force, threat or coercion. As she notes, another sort of rape is what is called “date” rape—a form of assault that, on college campuses, often involves intoxication rather than overt violence.

In many cases the victims of sexual assault do not classify the assault as rape. According to Cathy Young, “three quarters of the female students who were classified as victims of sexual assault by incapacitation did not believe they had been raped; even when only incidents involving penetration were counted, nearly two-thirds did not call it rape. Two-thirds did not report the incident to the authorities because they didn’t think it was serious enough.”

In some cases, a victim does change her mind (sometimes after quite some time) and re-classify the incident as rape. For example, a woman who eventually reported being raped twice by a friend explained her delay on the grounds that it took her a while to “to identify what happened as an assault.”

The fact that a victim changed her mind does not, obviously, invalidate her claim that she was raped. However, there is the legitimate concern about what is and is not rape—that is, what is a good definition of an extremely vile thing. After all, when people claim there is an epidemic of campus rapes, they point to statistics claiming that 1 in 5 women will be sexually assaulted in college. This statistic is horrifying, but it is still reasonable to consider what it actually means. Jean Kazez has looked at the numbers in some detail here.

One obvious problem with inquiring into the statistics and examining the definition of “rape” is that the definition has become an ideological matter for some. For some on the left, “rape” is very broadly construed and to raise even rational concerns about the broadness of the definition is to invite accusations of ignorant insensitivity (at best) and charges of misogyny. For some on the right, “rape” is very narrowly defined (including the infamous notion of “legitimate” rape) and to consider expanding the definition is to invite accusations of being politically correct or, in the case of women, being a radical feminist or feminazi.

As the ideological territory is staked out and fortified, the potential for rational discussion is proportionally decreased. In fact, to even suggest that there is a matter to be rationally discussed (with the potential for dispute and disagreement) might be greeted with hostility by some. After all, when a view becomes part of a person’s ideological identity, the person tends to believe that there is nothing left to discuss and any attempt at criticism is both automatically in error and a personal attack.

However, the very fact that there are such distinct ideological fortresses indicates a clear need for rational discussion of this matter and I will endeavor to do so in the following essays.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Science & Self-Identity

Posted in Philosophy, Politics, Reasoning/Logic, Science by Michael LaBossiere on June 9, 2014
The smallpox vaccine diluent in a syringe (Photo credit: Wikipedia)

The assuming an authority of dictating to others, and a forwardness to prescribe to their opinions, is a constant concomitant of this bias and corruption of our judgments. For how almost can it be otherwise, but that he should be ready to impose on another’s belief, who has already imposed on his own? Who can reasonably expect arguments and conviction from him in dealing with others, whose understanding is not accustomed to them in his dealing with himself? Who does violence to his own faculties, tyrannizes over his own mind, and usurps the prerogative that belongs to truth alone, which is to command assent by only its own authority, i.e. by and in proportion to that evidence which it carries with it.

-John Locke

As a philosophy professor who focuses on the practical value of philosophical thinking, one of my main objectives is to train students to be effective critical thinkers. While true critical thinking has been, ironically, threatened by the fact that it has become something of a fad, I stick with a very straightforward and practical view of the subject. As I see it, critical thinking is the rational process of determining whether a claim should be accepted as true, rejected as false, or subject to the suspension of judgment. Roughly put, a critical thinker operates on the principle that the belief in a claim should be proportional to the evidence for it, rather than in proportion to our interests or feelings. In this I follow John Locke’s view: “Whatsoever credit or authority we give to any proposition more than it receives from the principles and proofs it supports itself upon, is owing to our inclinations that way, and is so far a derogation from the love of truth as such: which, as it can receive no evidence from our passions or interests, so it should receive no tincture from them.” Unfortunately, people often fail to follow this principle and do so in matters of considerable importance, such as climate change and vaccinations. To be specific, people reject proofs and evidence in favor of interests and passions.

Despite the fact that the scientific evidence for climate change is overwhelming, there are still people who deny climate change. These people are typically conservatives—although there is nothing about conservatism itself that requires denying climate change.

While rejecting the scientific evidence for climate change can be regarded as irrational, it is easy enough to attribute a rational motive behind this view. After all, there are people who have an economic interest in denying climate change or, at least, preventing action from being taken that they regard as contrary to their interests (such as implementing the cap and trade system on carbon originally proposed by conservative thinkers). This interest would provide a motive to lie (that is, make claims that one knows are not true) as well as a psychological impetus to sincerely hold to a false belief. As such, I can easily make sense of climate change denial in the face of overwhelming evidence: big money is on the line. However, the denial is less rational for the majority of climate change deniers—after all, they are not owners of companies in the fossil fuel business. Still, they could be motivated by a financial stake—after all, addressing climate change could cost them more in terms of their energy bills. Of course, not addressing climate change could cost them much more.

In any case, I get climate denial in that I have a sensible narrative as to why people reject the science on the basis of interest. However, I have been rather more confused by people who deny the science regarding vaccines.

While vaccines are not entirely risk free, the scientific evidence is overwhelming that they are safe and very effective. Scientists have a good understanding of how they work and there is extensive empirical evidence of their positive impact—specifically the massive reduction in cases of diseases such as polio and measles. Oddly enough, there is a significant number of Americans who willfully deny the science of vaccination. What is most unusual is that these people tend to be college educated. They are also predominantly political liberals, thus showing that science denial is bi-partisan. It is fascinating, but also horrifying, to see someone walk through the process of denial—as shown in a segment on the Daily Show. This process is rather complete: evidence is rejected, experts are dismissed and so on—it is as if the person’s mind switched into a Bizarro version of critical thinking (“kritikal tincing” perhaps). This is in marked contrast with the process of rational disagreement in which the methodology of critical thinking is used in defense of an opposing viewpoint. Being a philosopher, I value rational disagreement and I am careful to give opposing views their due. However, the use of fallacious methods and outright rejection of rational methods of reasoning is not acceptable.

As noted above, climate change denial makes a degree of sense—behind the denial is a clear economic interest. However, vaccine science denial seems to lack that motive. While I could be wrong about this, there does not seem to be any economic interest that would benefit from this denial—except, perhaps, the doctors and hospitals that will be treating the outbreaks of preventable diseases. However, doctors and hospitals obviously encourage vaccination. As such, an alternative explanation is needed.

Recent research does provide some insight into the matter and this research is consistent with Locke’s view that people are influenced by both interests and passions. In this case, the motivating passion seems to be a person’s commitment to her concept of self. The idea is that when a person’s self-concept or self-identity is threatened by facts, the person will reject the facts in favor of her self-identity.  In the case of the vaccine science deniers, the belief that vaccines are harmful has somehow become part of their self-identity. Or so goes the theory as to why these deniers reject the evidence.

To be effective, this rejection must be more than simply asserting the facts are wrong. After all, the person is aiming to deceive herself to maintain her self-identity. As such, the person must create an entire narrative which makes their rejection seem sensible and believable to them. A denier must, as Pascal said in regards to his famous wager, make himself believe his denial. In the case of matters of science, a person needs to reject not just the claims made by scientists but also the method by which the scientists support the claims. Roughly put, the narrative of denial must be a complete story that protects itself from criticism. This is, obviously enough, different from a person who denies a claim on the basis of evidence—since there is rational support for the denial, there is no need to create a justifying narrative.

This, I would say, is one of the major dangers of this sort of denial—not the denial of established facts, but the explicit rejection of the methodology that is used to assess facts. While people often excel at compartmentalization, this strategy runs the risk of corrupting the person’s thinking across the board.

As noted above, as a philosopher one of my main tasks is to train people to think critically and rationally. While I would like to believe that everyone can be taught to be an effective and rational thinker, I know that people are far more swayed by rhetoric and (ironically) fallacious reasoning than they are by good logic. As such, there might be little hope that people can be “cured” of their rejection of science and reasoning. Aristotle took this view—while noting that some can be convinced by “arguments and fine ideals” most people cannot. He advocated the use of coercive habituation to get people to behave properly and this could be (and has been) employed to correct incorrect beliefs. However, such a method is agnostic in regards to the truth—people can be coerced into accepting the false as well as the true.

Interestingly enough, a study by Brendan Nyhan shows that reason and persuasion both fail when employed in attempts to change false beliefs that are critical to a person’s self-identity. In the case of Nyhan’s study, there were various attempts to change the beliefs of vaccine science deniers using reason (facts and science) and also various methods of rhetoric/persuasions (appeals to emotions and anecdotes). Since reason and persuasion are the two main ways to convince people, this is certainly a problem.

The study and other research did indicate an avenue that might work. Assuming that it is the threat to a person’s self-concept that triggers the rejection mechanism, the solution is to approach a person in a way that does not trigger this response. To use an analogy, it is like trying to conduct a transplant without triggering the body’s immune system to reject the transplanted organ.

One obvious problem is that once a person has taken a false belief as part of his self-concept, it is rather difficult to get him to regard any attempt to change his mind as anything other than a threat. Addressing this might require changing the person’s self-concept or finding a specific strategy for addressing that belief that is somehow not seen as a threat. Once that is done, the second stage—that of actually addressing the false belief, can begin.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Leadership & Responsibility

Posted in Ethics, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on June 2, 2014
Official image of Secretary of Veterans Affairs Eric Shinseki (Photo credit: Wikipedia)

The recent resignation of Eric Shinseki from his position as the head of the Department of Veterans Affairs raised, once again, the issue of the responsibilities of a leader. While I will not address the specific case of Shinseki, I will use this opportunity to discuss leadership and responsibility in general terms.

Not surprisingly, people often assign responsibility based on ideology. For example, Democrats would be more inclined to regard a Republican leader as being fully responsible for his subordinates while being more forgiving of fellow Democrats. However, judging responsibility based on political ideology is obviously a poor method of assessment. What is needed is, obviously enough, some general principles that can be used to assess the responsibility of leaders in a consistent manner.

Interestingly (or boringly) enough, I usually approach the matter of leadership and responsibility using an analogy to the problem of evil. Oversimplified quite a bit, the problem of evil is the problem of reconciling God being all good, all knowing and all powerful with the existence of evil in the world. If God is all good, then He would tolerate no evil. If God is all powerful, He could prevent all evil. And if God is all knowing, then He would not be ignorant of any evil. Given God’s absolute perfection, He thus has absolute responsibility as a leader: He knows what every subordinate is doing, knows whether it is good or evil and has the power to prevent or cause any behavior. As such, when a subordinate does evil, God has absolute accountability. After all, the responsibility of a leader is a function of what he can know and the extent of his power.

In stark contrast, a human leader (no matter how awesome) falls rather short of God. Such leaders are clearly not perfectly good and they are obviously not all knowing or all powerful. These imperfections thus lower the responsibility of the leader.

In the case of goodness, no human can be expected to be morally perfect. As such, failures of leadership due to moral imperfection can be excusable—within limits. The challenge is, of course, sorting out the extent to which imperfect humans can legitimately be held morally accountable and to what extent our unavoidable moral imperfections provide a legitimate excuse. These standards should be applied consistently to leaders so as to allow for the highest possible degree of objectivity.

In the case of knowledge, no human can be expected to be omniscient—we have extreme limits on our knowledge. The practical challenge is sorting out what a leader can reasonably be expected to know and the responsibility of the leader should be proportional to that extent of knowledge. This is complicated a bit by the fact that there are at least two factors here, namely the capacity to know and what the leader is obligated to know. Obligations to know should not exceed the human capacity to know, but the capacity to know can often exceed the obligation to know. For example, the President could presumably have everyone spied upon (which is apparently what he did do) and thus could, in theory, know a great deal about his subordinates. However, this would seem to exceed what the President is obligated to know (as President) and probably exceeds what he should know.

Obviously enough, what a leader can know and what she is obligated to know will vary greatly based on the leader’s position and responsibilities. For example, as the facilitator of the philosophy & religion unit at my university, my obligation to know about my colleagues is very limited as is my right to know about them. While I have an obligation to know what courses they are teaching, I do not have an obligation or a right to know about their personal lives or whether they are doing their work properly on outside committees. So, if a faculty member skipped out on committee meetings, I would not be responsible for this—it is not something I am obligated to know about.

As another example, the chair of the department has greater obligations and rights in this regard. He has the right and obligation to know if they are teaching their classes, doing their assigned work and so on. Thus, when assessing the responsibility of a leader, sorting out what the leader could know and what she was obligated to know are rather important matters.

In regards to power (taken in a general sense), even the most despotic dictator’s powers are still finite. As such, it is reasonable to consider the extent to which a leader can utilize her authority or use her power to compel subordinates to obey. As with knowledge, responsibility is proportional to power. After all, if a leader lacks the power (or authority) to compel obedience in regards to certain matters, then the leader cannot be accountable for not making the subordinates do or not do certain actions. Using myself as an example, my facilitator position has no power: I cannot demote, fire, reprimand or even put a mean letter into a person’s permanent record. The extent of my influence is limited to my ability to persuade—with no rewards or punishments to offer. As such, my responsibility for the actions of my colleagues is extremely limited.

There are, however, legitimate concerns about the ability of a leader to make people behave correctly and this raises the question of the degree to which a leader is responsible for not being persuasive enough or not using enough power to make people behave. That is, the concern is whether bad behavior that persists in the face of applied authority or power is the fault of the leader or the fault of the resistor. This is similar to the concern about the extent to which responsibility for failing to learn falls upon the teacher and to which it falls on the student. Obviously, even the best teacher cannot reach all students and it would seem reasonable to believe that even the best leader cannot make everyone do what they should be doing.

Thus, when assessing alleged failures of leadership it is important to determine where the failures lie (morality, knowledge or power) and the extent to which the leader has failed. Obviously, principled standards should be applied consistently—though it can be sorely tempting to damn the other guy while forgiving the offenses of one’s own guy.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Talking Points & Climate Change

Posted in Philosophy, Politics, Reasoning/Logic, Science by Michael LaBossiere on May 14, 2014
Animated global map of monthly long term mean surface air temperature (Mollweide projection). (Photo credit: Wikipedia)

While science and philosophy are supposed to be about determining the nature of reality, politics is often aimed at creating perceptions that are alleged to be reality. This is why it is generally wiser to accept claims supported by science and reason over claims “supported” by ideology and interest.

 

The matter of climate change is a matter of both science (since the climate is an objective feature of reality) and politics (since perception of reality can be shaped by rhetoric and ideology). Ideally, the facts of climate change would be left to science and sorting out how to address it via policy would fall, in part, to the politicians. Unfortunately, politicians and other non-scientists have taken it on themselves to make claims about the science, usually in the form of unsupported talking points.

 

On the conservative side, there has been a general shifting in the talking points. Originally, there was one main talking point: there is no climate change and the scientists are wrong. This point was often supported by alleging that the scientists were motivated by ideology to lie about the climate. In contrast, those whose profits could be impacted if climate change was real were taken as objective sources.

 

In the face of mounting evidence and shifting public opinion, this talking point became the claim that while climate change is occurring, it is not caused by humans. This then shifted to the claim that climate change is caused by humans, but there is nothing we can (or should) do now.

 

In response to the latest study, certain Republicans have embraced three talking points. These points seem to concede that climate change is occurring and that humans are responsible. Each point has a foundation that can be regarded as rational and each will be considered in turn.

 

One talking point is that the scientists are exaggerating the impact of climate change and that it will not be as bad as they claim. This does rest on a reasonable concern about any prediction: how accurate is the prediction? In the case of a scientific prediction based on data and models, the reasonable inquiry would focus on the accuracy of the data and how well the models serve as models of the actual world. To use an analogy, the reliability of predictions about the impact of a crash on a vehicle based on a computer model would hinge on the accuracy of the data and the model and both could be reasonable points of inquiry.

 

Since the climate scientists have the data and models used to make the predictions, to properly dispute the predictions would require showing problems with either the data or the models (or both). Simply saying they are wrong would not suffice—what is needed is clear evidence that the data or models (or both) are defective in ways that show the predictions overstate the impact.

 

One indirect way to do this would be to find clear evidence that the scientists are intentionally exaggerating. However, if the scientists are exaggerating, then this would be provable by examining the data and plugging it into an accurate model. That is, the scientific method should be able to be employed to show the scientists are wrong.

 

In some cases people attempt to argue that the scientists are exaggerating because of some nefarious motivation—a liberal agenda, a hatred of oil companies, a desire for fame or some other wickedness. However, even if it could be shown that the scientists have a nefarious motivation, it does not follow that the predictions are wrong. After all, to dismiss a claim because of an alleged defect in the person making the claim is a fallacy. Being suspicious because of a possible nefarious motive can be reasonable, though. So, for example, the fact that the fossil fuel companies have a great deal at stake here does not prove that their claims about climate change are wrong. But the fact that they have considerable incentive to deny certain claims does provide grounds for suspicion regarding their objectivity (and hence credibility).  Naturally, if one is willing to suspect that there is a global conspiracy of scientists, then one should surely be willing to consider that fossil fuel companies and their fellows might be influenced by their financial interests.

 

One could, of course, hold that the scientists are exaggerating for noble reasons—that is, they are claiming it is worse than it will be in order to get people to take action. To use an analogy, parents sometimes exaggerate the possible harms of something to try to persuade their children not to try it. While this is nicer than ascribing nefarious motives to scientists, it is still not evidence against their claims. Also, even if the scientists are exaggerating, there is still the question about how bad things really would be—they might still be quite bad.

 

Naturally, if an objective and properly conducted study can be presented that shows the predictions are in error, then that is the study that I would accept. However, I am still waiting for such a study.

 

The second talking point is that the laws being proposed will not solve the problems. Interestingly, this certainly seems to concede that climate change will cause problems. This point does have a reasonable foundation in that it would be unreasonable to pass laws aimed at climate change that are ineffective in addressing the problems.

 

While crafting the laws is a matter of politics, sorting out whether such proposals would be effective does seem to fall in the domain of science. For example, if a law proposes to cut carbon emissions, there is a legitimate question as to whether or not that would have a meaningful impact on the problem of climate change. Showing this would require having data, models and so on—merely saying that the laws will not work is obviously not enough.

 

Now, if the laws will not work, then the people who confidently make that claim should be equally confident in providing evidence for their claim. It seems reasonable to expect that such evidence be provided and that it be suitable in nature (that is, based in properly gathered data, examined by impartial scientists and so on).

 

The third talking point is that the proposals to address climate change will wreck the American economy. As with the other points, this does have a rational basis—after all, it is sensible to consider the impact on the economy.

 

One way to approach this is on utilitarian grounds: we can accept X environmental harms (such as coastal flooding) in return for Y benefits (jobs and profits generated by fossil fuels). If one is a utilitarian of the proper sort and accepts this value calculation, then one can accept that enduring such harms could be worth the advantages. However, it is well worth noting that, as usual, the costs seem likely to fall most heavily on those who are not profiting. For example, the flooding of Miami and New York will not have a huge impact on fossil fuel company profits (although they will lose some customers).

 

Making decisions about this should involve openly considering the nature of the costs and benefits as well as who will be hurt and who will benefit. Vague claims about damaging the economy do not allow us to make a proper moral and practical assessment of whether the approach is correct. It might turn out that staying the course is the better option—but this needs to be determined with an open and honest assessment. However, there is a long history of such assessments not being made, so I am not optimistic that one will occur here.

 

It is also worth considering that addressing climate change could be good for the economy. After all, preparing coastal towns and cities for the (allegedly) rising waters could be a huge and profitable industry creating many jobs. Developing alternative energy sources could also be profitable, as could developing new crops able to handle the new conditions. There could be a whole new economy created, perhaps one that might rival more traditional economic sectors and newer ones, such as the internet economy. If companies with well-funded armies of lobbyists got into the business of countering climate change, I suspect that a different tune would be playing.

 

To close, the three talking points do raise questions that need to be answered:

 

  • Is climate change going to be as bad as it is claimed?
  • What laws (if any) could effectively and properly address climate change?
  • What would be the cost of addressing climate change and who would bear the cost?

 

 


The Better than Average Delusion

Posted in Reasoning/Logic by Michael LaBossiere on March 28, 2014
Average Joe copy (Photo credit: Wikipedia)

One interesting, but hardly surprising, cognitive bias is the tendency of a person to regard herself as better than average—even when no evidence exists for that view. Surveys in which Americans are asked to compare themselves to their fellows are quite common and nicely illustrate this bias: the overwhelming majority of Americans rank themselves as above average in everything ranging from leadership ability to accuracy in self-assessment.

Obviously enough, the majority of people cannot be better than average—that is just how averages work. As to why people think the way they do, the disparity between what is claimed and what is the case can be explained in at least two ways. One is another well-established cognitive bias, namely the tendency people have to believe that their performance is better than it actually is. Teachers get to see this in action quite often—students generally believe that they did better on the test than they actually did. For example, I have long since lost count of the students who have gotten Cs or worse on papers and then said to me, "but it felt like an A!" I have no doubt that it felt like an A to the student—after all, people tend to rather like their own work. Given that people tend to regard their own performance as better than it is, it certainly makes sense that they would regard their abilities as better than average—after all, we tend to think that we are all really good.
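
As an aside, the arithmetic behind "most people cannot be better than average" is worth making concrete. Reading "average" as the group median, at most half of a group can actually be above it; but if everyone inflates her self-assessment even a little, nearly everyone can sincerely believe she is above it. Here is a minimal sketch of that point in Python (the population size, scores, and uniform inflation are made-up assumptions for illustration, not data from any survey):

```python
import random
import statistics

# Hypothetical illustration: 1,000 people with "actual" scores drawn from
# the same distribution, plus a uniformly inflated self-assessment.
random.seed(0)
actual = [random.gauss(50, 10) for _ in range(1000)]
self_rated = [score + 8 for score in actual]  # everyone flatters themselves a bit

median_actual = statistics.median(actual)

# By definition, at most half can truly be above the group median...
truly_above = sum(score > median_actual for score in actual) / len(actual)
# ...but with inflated self-ratings, most people place themselves above it.
believe_above = sum(rating > median_actual for rating in self_rated) / len(self_rated)

print(f"Actually above the group median: {truly_above:.0%}")
print(f"Believe they are above it: {believe_above:.0%}")
```

Run as written, roughly half the simulated people are actually above the median while most believe they are: the survey pattern, without anyone being dishonest.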

Another reason is yet another bias: people tend to give more weight to the negative than to the positive. As such, when assessing other people, we will tend to regard negative things about them as having more significance than positive things. So, for example, when Sally is assessing the honesty of Bill, she will give more weight to incidents in which Bill was dishonest than to those in which he was honest. As such, Sally will most likely see herself as being more honest than Bill. After enough comparisons, she will most likely see herself as above average.

This self-delusion probably has some positive effects—for example, it no doubt allows people to maintain a sense of value and to enjoy the smug self-satisfaction that they are better than most other folks. This surely helps people get by day-to-day.

There are, of course, downsides to this—after all, a person who does not do a good job assessing himself and others will be operating on the basis of inaccurate information and this rarely leads to good decision making.

Interestingly enough, the better-than-average delusion holds up quite well even in the face of clear evidence to the contrary. For example, a study published in the British Journal of Social Psychology surveyed British prisoners, asking them to compare themselves to other prisoners and to the general population in terms of such traits as honesty, compassion, and trustworthiness. Not surprisingly, the prisoners ranked themselves as above average. They did, however, only rank themselves as average when it came to the trait of law-abidingness. This suggests that reality has some slight impact on people, but not as much as one might hope.
