A Philosopher's Blog

“Don’t Fallacy Me” Web Game

Posted in Reasoning/Logic by Michael LaBossiere on February 26, 2015

Interested in playing a Fallacy game? My 42 Fallacies have been transformed into a game. The link is http://dontfallacy.me/

I’m not associated with the game, other than its use of my fallacies.


Ransoms & Hostages

Posted in Ethics, Law, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on February 20, 2015


While some countries will pay ransoms to free hostages, the United States has a public policy of not doing this. Thanks to ISIS, the issue of whether ransoms should be paid to terrorist groups has returned to the spotlight.

One reason to not pay a ransom for hostages is a matter of principle. This principle could be that bad behavior should not be rewarded or that hostage taking should be punished (or both).

One of the best arguments against paying ransoms for hostages is both a practical and a utilitarian moral argument. The gist of the argument is that paying ransoms gives hostage takers an incentive to take hostages, which means that more people will be taken hostage. The cost of not paying is, of course, the possibility that the hostage takers will harm or kill their initial hostages. However, the argument goes, if hostage takers realize that they will not be paid a ransom, they will lose their incentive to take more hostages, which should reduce the chances that they will take hostages in the future. The calculation is, of course, that the harm done to the existing hostages will be outweighed by the benefits of not having people taken hostage in the future.

This argument assumes, obviously enough, that the hostage takers are primarily motivated by the ransom payment. If they are taking hostages primarily for other reasons, such as for status, to make a statement or to get media attention, then not paying them a ransom will not significantly reduce their incentive to take hostages. This leads to a second reason to not pay ransoms.

In addition to the incentive argument, there is also the funding argument. While a terrorist group might have reasons other than money to take hostages, it certainly benefits from receiving such ransoms. The money received can be used to fund additional operations, such as taking more hostages. Obviously enough, if ransoms are not paid, then such groups lose this avenue of funding, which can impact their operations. Since paying a ransom would be funding terrorism, this provides both a moral and a practical reason not to pay ransoms.

While these arguments have a rational appeal, they are typically countered by a more emotional appeal. A stock approach to arguing that ransoms should be paid is the “in their shoes” appeal. The method is very straightforward and simply involves asking a person whether or not she would want a ransom to be paid for her (or a loved one). Not surprisingly, most people would want the ransom to be paid, assuming doing so would save her (or her loved one). Sometimes the appeal is made explicitly in terms of emotions: “how would you feel if your loved one died because the government refused to pay ransoms?” Obviously, any person would feel awful.

This method does have considerable appeal. The “in their shoes” appeal can seem similar to the golden rule approach (do unto others as you would have them do unto you). To be specific, the appeal is not to do unto others, but to base a policy on how one would want to be treated in that situation. If I would not want the policy applied to me (that is, I would want to be ransomed or have my loved one ransomed), then I should be morally opposed to the policy as a matter of consistency. This certainly makes sense: if I would not want a policy applied in my case, then I should (in general) not support that policy.

One obvious counter is that there seems to be a distinction between what a policy should be and whether or not a person would want that policy applied to herself. For example, some universities have a policy that if a student misses more than three classes, the student fails the course. Naturally, no student wants that policy to be applied to her (and most professors would not have wanted it applied to them when they were students), but this hardly suffices to show that the policy is wrong. As another example, a company might have a policy of not providing health insurance to part-time employees. While the CEO would certainly not like the policy if she were part time, it does not follow that the policy must be a bad one. As such, policies need to be assessed not just in terms of how a person feels about them, but in terms of their merit or lack thereof.

Another obvious counter is to use the same approach, only with a modification. In response to the question “how would you feel if you or a loved one were the hostage?” one could ask “how would you feel if you or a loved one were taken hostage in an operation funded by ransom money?” Or “how would you feel if you or a loved one were taken hostage because the hostage takers learned that people would pay ransoms for hostages?” The answer would be, of course, that one would feel bad about that. However, while how one would feel about this can be useful in discussing the matter, it is not decisive. Settling the matter rationally does require considering more than just how people would feel—it requires looking at the matter with a degree of objectivity. That is, not just asking how people would feel, but what would be right and what would yield the best results in the practical sense.

 


Are Anti-Vaccination People Stupid?

Posted in Medicine/Health, Philosophy, Politics, Reasoning/Logic, Science by Michael LaBossiere on February 18, 2015

Poster from before the 1979 eradication of smallpox, promoting vaccination. (Photo credit: Wikipedia)

The United States recently saw a measles outbreak (644 cases in 27 states), with the overwhelming majority of victims being people who had not been vaccinated. Critics of the anti-vaccination movement have pointed to this as clear proof that the movement is not only misinformed but actually dangerous. Not surprisingly, those who take the anti-vaccination position are often derided as stupid. After all, there is no evidence that vaccines cause the harms that the anti-vaccination people refer to when justifying their position. For example, one common claim is that vaccines cause autism, but this seems to be clearly untrue. There is also the fact that vaccinations have been rather conclusively shown to prevent diseases (though not perfectly, of course).

It is, of course, tempting for those who disagree with the anti-vaccination people to dismiss them uniformly as stupid people who lack the brains to understand science. This, however, is a mistake. One reason it is a mistake is purely pragmatic: those who are pro-vaccination want the anti-vaccination people to change their minds, and calling them stupid, mocking them and insulting them will merely cause them to entrench. Another reason it is a mistake is that the anti-vaccination people are not, in general, stupid. There are, in fact, grounds for people to be skeptical or concerned about matters of health and science. To show this, I will briefly present some points of concern.

One point of rational concern is the fact that scientific research has been plagued with a disturbing amount of corruption, fraud and errors. For example, the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those behind the promotion of antioxidants and omega-3, have been shown to be riddled with inaccuracies. As such, it is hardly stupid to be concerned that scientific research might not be accurate. Somewhat ironically, the study that started the belief that vaccines cause autism is a paradigm example of bad science. However, it is not stupid to consider that the studies that show vaccines are safe might have flaws as well.

Another matter of concern is the influence of corporate lobbyists on matters relating to health. For example, the dietary guidelines and recommendations set forth by the United States Government should be based on the best science. However, the reality is that these matters are influenced quite strongly by industry lobbyists, such as the dairy industry. Given the influence of the corporate lobbyists, it is not foolish to think that the recommendations and guidelines given by the state might not be quite right.

A third point of concern is the fact that the dietary and health guidelines and recommendations undergo what seems to be relentless and unwarranted change. For example, the government has warned us of the dangers of cholesterol for decades, but this recommendation is now being changed. It would, of course, be one thing if the changes were the result of steady improvements in knowledge. However, the recommendations often seem to lack a proper foundation. John P.A. Ioannidis, a professor of medicine and statistics at Stanford, has noted: “Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome. In this literature of epidemic proportions, how many results are correct?” Given such criticism from experts in the field, it hardly seems stupid of people to have doubts and concerns.

There is also the fact that people do suffer adverse drug reactions that can lead to serious medical issues and even death. While the reported numbers vary (one FDA page puts the number of deaths at 100,000 per year), this is certainly a matter of concern. In an interesting coincidence, I was thinking about this essay while watching the Daily Show on Hulu this morning and one of my “ad experiences” was for Januvia, a diabetes drug. As required by law, the ad mentioned the side effects of the drug, and these included some rather serious things, up to and including death. Given that the FDA has approved drugs with dangerous side effects, it is hardly stupid to be concerned about the potential side effects from any medicine or vaccine.

Given the above points, it would certainly not be stupid to be concerned about vaccines. At this point, the reader might suspect that I am about to defend an anti-vaccine position. I will not—in fact, I am a pro-vaccination person. This might seem somewhat surprising given the points I just made. However, I can rationally reconcile these points with my position on vaccines.

The above points do show that there are rational grounds for taking a generally critical and skeptical approach to matters of health, medicine and science. However, this general skepticism needs to be properly rational. That is, it should not be a rejection of science but rather the adoption of a critical approach to these matters in which one considers the best available evidence, assesses experts by the proper standards (those of a good argument from authority), and so on. Also, it is rather important to note that the general skepticism does not automatically justify accepting or rejecting specific claims. For example, the fact that there have been flawed studies does not prove that the specific studies about vaccines are flawed. As another example, the fact that lobbyists influence the dietary recommendations does not prove that vaccines are harmful drugs being pushed on Americans by greedy corporations. As a final example, the fact that some medicines have serious and dangerous side effects does not prove that the measles vaccine is dangerous or causes autism. Just as one should be rationally skeptical about pro-vaccination claims, one should also be rationally skeptical about anti-vaccination claims.

To use an obvious analogy, it is rational to have a general skepticism about the honesty and goodness of people. After all, people do lie and there are bad people. However, this general skepticism does not automatically prove that a specific person is dishonest or evil—that is a matter that must be addressed on the individual level.

To use another analogy, it is rational to have a general concern about engineering. After all, there have been plenty of engineering disasters. However, this general concern does not warrant believing that a specific engineering project is defective or that engineering itself is defective. The specific project would need to be examined and engineering is, in general, the most rational approach to building stuff.

So, the people who are anti-vaccine are not, in general, stupid. However, they do seem to be making the mistake of not rationally considering the specific vaccines and the evidence for their safety and efficacy. It is quite rational to be concerned about medicine in general, just as it is rational to be concerned about the honesty of people in general. However, just as one should not infer that a friend is a liar because there are people who lie, one should not infer that a vaccine must be bad because there is bad science and bad medicine.

Convincing anti-vaccination people to accept vaccination is certainly challenging. One reason is that the issue has become politicized into a battle of values and identity. This is partially due to the fact that the anti-vaccine people have been mocked and attacked, thus leading them to entrench and double down. Another reason is that, as argued above, they do have well-founded concerns about the trustworthiness of the state, the accuracy of scientific studies, and the goodness of corporations. A third reason is that people tend to give more weight to the negative and also tend to weigh potential loss more than potential gain. As such, people would tend to give more weight to negative reasons against vaccines and fear the alleged dangers of vaccines more than they would value their benefits.

Given the importance of vaccinations, it is rather critical that the anti-vaccination movement be addressed. Calling people stupid, mocking them and attacking them are certainly not effective ways of convincing people that vaccines are generally safe and effective. A more rational and hopefully more effective approach is to address their legitimate concerns and consider their fears. After all, the goal should be the health of people and not scoring points.

 


Euphemism

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on January 14, 2015

With the start of a new semester, I have gotten a bit behind on my blogging. But, since I am working on a book on rhetorical devices, I have an easy solution; here is an example from the book:

When I was a kid, people bought used cars. These days, people buy fine pre-owned vehicles. There is (usually) no difference between the meanings of “used car” and “pre-owned car”—both refer to the same thing, namely a car someone else has owned and used. However, “used” sounds a bit nasty, perhaps suggesting that the car might be a bit sticky in places. By substituting “pre-owned” for “used”, the car sounds somehow better, although it is the same car whether it is described as used or pre-owned.

If you need to make something that is negative sound positive without actually making it better, then a euphemism would be your tool of choice. A euphemism is a pleasant or at least inoffensive word or phrase that is substituted for a word or phrase that means the same thing but is unpleasant, offensive or otherwise negative in terms of its connotation. To use an analogy, using a euphemism is like coating a bitter pill with sugar, making it easier to swallow.

Euphemisms and some other rhetorical devices make use of the fact that words or phrases have connotations as well as denotations. Put a bit simply, the denotation of a term is the literal meaning of the term. The connotation of the term is its emotional association. Terms can have the same denotation but very different connotations. For example “child” and “rug rat” have rather different emotional associations.

The way to use a euphemism is to replace the key words or phrases that are negative in their connotation with those that are positive (or at least neutral). Naturally, it helps to know what the target audience regards as positive words, but generically positive words can do the trick quite well.

The defense against a euphemism is to replace the positive term with a neutral term that has the same meaning. For example, for “an American citizen was inadvertently neutralized during a drone strike”, the neutral presentation would be “An American citizen was killed during a drone strike.” While “killed” does have a negative connotation, it does describe the situation with more neutrality.

In some cases, euphemisms are used for commendable reasons, such as being polite in social situations or avoiding exposing children to “adult” concepts. For example, at a funeral it is considered polite to refer to the dead person as “the departed” rather than “the corpse.”

 

Examples

“Pre-owned” for “used.”

“Neutralization” for “killing.”

“Freedom fighter” for “terrorist.”

“Revenue enhancement” for “tax increase.”

“Down-sized” for “fired.”

“Between jobs” for “unemployed.”

“Passed” for “dead.”

“Office manager” for “secretary.”

“Custodian” for “janitor.”

“Detainee” for “prisoner.”

“Enhanced interrogation” for “torture.”

“Self-injurious behavior incidents” for “suicide attempts.”

“Adult entertainment” or “adult material” for “pornography.”

“Sanitation engineer” for “garbage man.”

“Escort”, “call girl”, or “lady of the evening” for “prostitute.”

“Gentlemen’s club” for “strip club.”

“Exotic dancer” for “stripper.”

“A little thin on top” for “bald.”

“In a family way” for “pregnant.”

“Sleeping with” for “having sex with.”

“Police action” for “undeclared war.”


“Wardrobe malfunction” for “exposure.”

“Commandeer” for “steal.”

“Modify the odds in my favor” for “cheat.”

The Teenage Mind & Decision Making

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on November 14, 2014

One of the stereotypes regarding teenagers is that they are poor decision makers and engage in risky behavior. This stereotype is usually explained in terms of the teenage brain (or mind) being immature and lacking the reasoning abilities of adults. Of course, adults often engage in poor decision-making and risky behavior.

Interestingly enough, there is research that shows teenagers use basically the same sort of reasoning as adults and that they even overestimate risks (that is, regard something as more risky than it is). So, if kids use the same processes as adults and also overestimate risk, then what needs to be determined is how teenagers differ, in general, from adults.

Currently, one plausible hypothesis is that teenagers differ from adults in terms of how they evaluate the value of a reward. The main difference, or so the theory goes, is that teenagers place higher value on rewards (at least certain rewards) than adults. If this is correct, it certainly makes sense that teenagers are more willing than adults to engage in risk taking. After all, the rationality of taking a risk is typically a matter of weighing the (perceived) risk against the (perceived) value of the reward. So, a teenager who places higher value on a reward than an adult would be acting rationally (to a degree) if she were willing to take more risk to achieve that reward.
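
To make this concrete, here is a minimal sketch in Python of the weighing just described. The numbers are purely hypothetical (mine, not drawn from the research): both parties perceive exactly the same risk, but the teen assigns a higher value to the reward, which is enough to flip the expected value of taking the risk.

# Minimal sketch of weighing (perceived) risk against (perceived) reward.
# All numbers are hypothetical, chosen only to illustrate the point.
def expected_value(p_bad, cost_of_bad, reward_value):
    # Expected value of taking the risk: the reward if things go well,
    # the cost if they do not.
    return (1 - p_bad) * reward_value - p_bad * cost_of_bad

p_bad = 0.2        # both the teen and the adult see a 20% chance of a bad outcome
cost_of_bad = 50   # and agree on how bad that outcome would be
adult_reward = 10  # the adult's valuation of the reward
teen_reward = 40   # the teen's (higher) valuation of the same reward

print(expected_value(p_bad, cost_of_bad, adult_reward))  # -2.0: not worth the risk
print(expected_value(p_bad, cost_of_bad, teen_reward))   # 22.0: worth the risk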

Obviously enough, adults also vary in their willingness to take risks and some of this difference is, presumably, a matter of the value the adults place on the rewards relative to the risks. So, for example, if Sam values the enjoyment of sex more than Sally, then Sam will (somewhat) rationally accept more risks in regards to sex than Sally. Assuming that teenagers generally value rewards more than adults do, then the greater risk taking behavior of teens relative to adults makes considerable sense.

It might be wondered why teenagers place more value on rewards relative to adults. One current theory is based in the workings of the brain. On this view, the sensitivity of the human brain to dopamine and oxytocin peaks during the teenage years. Dopamine is a neurotransmitter that is supposed to trigger the “reward” mechanisms of the brain. Oxytocin is another neurotransmitter, one that is also linked with the “reward” mechanisms as well as social activity. Assuming that the teenage brain is more sensitive to the reward triggering chemicals, then it makes sense that teenagers would place more value on rewards. This is because they do, in fact, get a greater reward than adults. Or, more accurately, they feel more rewarded. This, of course, might be one and the same thing—perhaps the value of a reward is a matter of how rewarded a person feels. This does raise an interesting subject, namely whether the value of a reward is a subjective or objective matter.

Adults are often critical of what they regard as irrationally risky behavior by teens. While my teen years are well behind me, I have looked back on some of my decisions that seemed like good ideas at the time. They really did seem like good ideas, yet my adult assessment is that they were not good decisions. However, I am weighing these decisions in terms of my adult perspective and in terms of the later consequences of these actions. I also must consider that the rewards that I felt in the past are now naught but faded memories. To use the obvious analogy, it is rather like eating an entire cake. At the time, that sugar rush and taste are quite rewarding and it seems like a good idea while one is eating that cake. But once the sugar rush gives way to the sugar crash and the cake, as my mother would say, “went right to the hips”, then the assessment might be rather different. The food analogy is especially apt: as you might well recall from your own youth, candy and other junk food tasted so good then. Now it is mostly just…junk. This also raises an interesting subject worthy of additional exploration, namely the assessment of value over time.

Going back to the cake, eating the whole thing was enjoyable and seemed like a great idea at the time. Yes, I have eaten an entire cake. With ice cream. But, in my defense, I used to run 95-100 miles per week. Looking back from the perspective of my older self, that seems to have been a bad idea and I certainly would not do that (or really enjoy doing so) today. But, does this change of perspective show that it was a poor choice at the time? I am tempted to think that, at the time, it was a good choice for the kid I was. But, my adult self now judges my kid self rather harshly and perhaps unfairly. After all, there does seem to be considerable relativity to value and it seems to be mere prejudice to say that my current evaluation should be automatically taken as being better than the evaluations of the past.

 


Factions & Fallacies

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on October 29, 2014

In general, human beings readily commit to factions and then engage in very predictable behavior: they regard their own factions as right, good and truthful while casting opposing factions as wrong, evil and deceitful. While the best known factions tend to be political or religious, people can form factions around almost anything, ranging from sports teams to video game consoles.

While there can be rational reasons to form and support a faction, factionalism tends to be fed and watered by cognitive biases and fallacies. The core cognitive bias of factionalism is what is commonly known as in-group bias. This is the psychological tendency to easily form negative views of those outside of the faction. For example, Democrats often regard Republicans in negative terms, casting them as uncaring, sexist, racist and fixated on money. In turn, Republicans typically look at Democrats in negative terms and regard them as fixated on abortion, obsessed with race, eager to take from the rich, and desiring to punish success. This obviously occurs outside of politics as well, with competing religious groups regarding each other as heretics or infidels. It even extends to games and sports, with the battle of #gamergate serving as a nice illustration.

The flip side of this bias is that members of a faction regard their fellows and themselves in a positive light and are thus inclined to attribute to themselves positive qualities. For example, Democrats see themselves as caring about the environment and being concerned about social good. As another example, Tea Party folks cast themselves as true Americans who get what the founding fathers really meant.

This bias is often expressed in terms of and fuelled by stereotypes. For example, critics of the sexist aspects of gaming will make use of the worst stereotypes of male gamers (dateless, pale misogynists who spew their rage around a mouthful of Cheetos). As another example, Democrats will sometimes cast the rich as uncaring and out-of-touch plutocrats. These stereotypes are sometimes taken to the extreme of demonizing: presenting the other faction’s members as not merely wrong or bad but evil in the extreme.

Such stereotypes are easy to accept and many are based on another bias, known as the fundamental attribution error. This is a psychological tendency to fail to realize that the behavior of other people is as much limited by circumstances as our behavior would be if we were in their shoes. For example, a person who was born into a well-off family and enjoyed many advantages in life might fail to realize the challenges faced by people who were not so lucky in their birth. Because of this, she might demonize those who are unsuccessful and attribute their failure to pure laziness.

Factionalism is also strengthened by various common fallacies. The most obvious of these is the appeal to group identity. This fallacy occurs when a person accepts her pride in being in a group as evidence that a claim is true. Roughly put, a person believes it because her faction accepts it as true. The claim might actually be true; the mistake is that the basis of the belief is not rational. For example, a devoted environmentalist might believe in climate change because of her membership in that faction rather than on the basis of evidence (which actually does show that climate change is occurring). This method of belief “protects” group members from evidence and arguments because such beliefs are based on group identity rather than evidence and arguments. While a person can overcome this fallacy, faction-based beliefs tend to change only when the faction changes or the person leaves the faction.

The above-mentioned biases also tend to lean people towards fallacious reasoning. The negative biases tend to motivate people to accept straw man reasoning, which occurs when a person simply ignores another person’s actual position and substitutes a distorted, exaggerated or misrepresented version of that position. Politicians routinely make straw men out of the views they oppose and their faction members typically embrace these. The negative biases also make ad hominem fallacies common. An ad hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack is made against the character of the person making the claim, her circumstances, or her actions (or against the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). For example, opponents of a feminist critic of gaming might reject her claims by claiming that she is only engaged in the criticism so as to become famous and make money. While it might be true that she is doing just that, this does not disprove her claims. The guilt by association fallacy, in which a person rejects a claim simply because it is pointed out that people she dislikes accept the claim, both arises from and contributes to factionalism.

The negative views and stereotypes are also often fed by fallacies that involve poor generalizations. One is misleading vividness, a fallacy in which a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. For example, a person in a faction holding that gamers are violent misogynists might point to the recent death threats against a famous critic of sexism in games as evidence that most gamers are violent misogynists. Misleading vividness is, of course, closely related to hasty generalization, a fallacy in which a person draws a conclusion about a population based on a sample that is not large enough to justify that conclusion. For example, a Democrat might believe that all corporations are bad based on the behavior of BP and Wal-Mart. There is also biased generalization, a fallacy committed when a person draws a conclusion about a population based on a sample that is biased or prejudiced in some manner. This tends to be fed by the confirmation bias—the tendency people have to seek and accept evidence for their view while avoiding or ignoring evidence against their view. For example, a person might come to hold the view that the poor want free stuff for nothing based on visits to web sites featuring YouTube videos selected to show poor people expressing that view.

The positive biases also contribute to fallacious reasoning, often taking the form of a positive ad hominem. A positive ad hominem occurs when a claim is accepted on the basis of some irrelevant fact about the author or person presenting the claim or argument. Typically, this fallacy involves two steps. First, something positive (but irrelevant) is noted about the character of the person making the claim, her circumstances, or her actions. Second, this is taken to be evidence for the claim in question. For example, a Democrat might accept what Bill Clinton says as being true, just because he really likes Bill.

Not surprisingly, factionalism is also supported by faction variations on the appeal to belief (it is true/right because my faction believes it is so), the appeal to common practice (it is right because my faction does it), and the appeal to tradition (it is right because my faction has “always done this”).

Factionalism is both fed by and contributes to such biases and poor reasoning. This is not to say that group membership is a bad thing, just that it is wise to be on guard against the corrupting influence of factionalism.


42 Fallacies for Free in Portuguese

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on October 28, 2014

Thanks to Laércio Lameira, my 42 Fallacies is available in Portuguese as a free PDF.

42 Falacias


Lessons from Ebola

Posted in Ethics, Philosophy, Politics, Reasoning/Logic, Science by Michael LaBossiere on October 24, 2014

Biosafety level 4 hazmat suit: researcher is working with the Ebola virus (Photo credit: Wikipedia)

While Ebola outbreaks are not new, the latest outbreak has provided some important lessons. These lessons are actually nothing new, but the outbreak does provide a focus for discussing them.

The first lesson is that most people are very bad at risk assessment. In the Ebola hot spots it is reasonable to be worried about catching Ebola. It is also reasonable to be concerned about the situation in general. However, many politicians, pundits and citizens in the United States are greatly overestimating the threat presented by Ebola in the United States. There are only a few cases of Ebola in the United States and the disease is, the experts claim, difficult to catch. As such, the chance that an American will catch Ebola in the United States is extremely low. It is also a fact that Ebola outbreaks have been contained before in countries with far fewer medical resources than the United States. So, while it is prudent to prepare, the reaction to Ebola has greatly exceeded its actual threat in the United States. If the concern is with protecting Americans from disease and death, there are far more serious health threats that should be the primary focus of our concern and resources.

The threat of Ebola is overestimated for a variety of reasons. One is that people are rather susceptible to the fallacy of misleading vividness. This is a fallacy in which a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is particularly vivid or dramatic does not make the event more likely to occur, especially in the face of significant statistical evidence. Ebola is indeed scary, but the chance of infection in the United States is extremely low.

Another reason is that people are also susceptible to a variation on the spotlight fallacy. This variant involves inferring the probability that something will happen based on how often you hear about it, rather than based on how often it actually occurs. Ebola has infected the 24 hour news cycle and hearing about it so often creates the psychological impression that infection is likely.

As I have consistently argued, threats should be assessed realistically and the response should be proportional to the actual threat.

The second lesson is that the politicians, media and pundits will exploit scary things for their own advantages. The media folks know that scary stories and fear mongering get viewers, so they are exploiting Ebola to the detriment of the public. Ebola has been made into a political issue, so the politicians and pundits are trying to exploit it for political points. The Republicans are using it as part of their narrative that Obama is an incompetent president and thus are emphasizing the matter. Obama and the Democrats have to strike back in order to keep the Republicans from scoring points. As with the media, the politicians and pundits are exploiting Ebola for their own advantage at the expense of the public.

This willful misleading and exaggeration is clearly morally wrong on the grounds that it misleads the public and makes a rational and proportional response to the problem more difficult.

The third lesson is that people will propose extreme solutions without considering the consequences of those solutions. One example is the push to shut down air travel between the United States and countries experiencing the Ebola outbreak. While this seems intuitively appealing, one main consequence would be that people would still come to the United States from those countries, only they would do so in more roundabout ways. This would make it much harder to track such people and would, ironically, put the United States at greater risk.

As always, solutions should be carefully considered in terms of their consequences, costs and other relevant factors.

The final lesson I will consider is that the situation shows that health is a public good and not just a private good. While most people get that defense and police are public goods, there is the view that health is a private good and something that should be left to the individual to handle. That is, the state should protect the citizen from terrorists and criminals, but she is on her own when it comes to disease and injury. However, as I have argued elsewhere at length, if the state is obligated to protect its citizens from death and harm, this should also apply to disease and injury. After all, disease will kill a person just as effectively as a terrorist’s bomb or a criminal’s bullet.

Interestingly, even many Republicans are pushing for a state response to Ebola. I suspect that one reason Ebola is especially frightening is that it is a disease that comes from outside the United States and was brought by a foreigner. This taps into fears that have been carefully and lovingly crafted during the war on terror and this helps explain why even anti-government people are pushing for government action.

But, if the state has a vital role to play in addressing Ebola, then it would seem to have a similar role to play in regards to other medical threats. While Ebola is scary and foreign, it is a medical threat and thus is like other medical threats. However, consistency is not a strong trait in most people, so some who cry for government action against the Ebola that scares them also cry out against the state playing a role in protecting Americans from things that kill vastly more Americans.

The public health concern also extends beyond borders—diseases do not recognize political boundaries. While there are excellent moral reasons for being concerned about the health of people in other countries, there are also purely pragmatic reasons. One is that in a well-connected world diseases can travel quickly all over the globe. So, an outbreak in Africa can spread to other countries. Another is that the global economy is impacted by outbreaks. So, an outbreak in one country can impact the economy of other countries. As such, there are purely selfish reasons to regard health as a public good.


Gaming Newcomb’s Paradox III: What You Actually Decide

Posted in Metaphysics, Philosophy, Reasoning/Logic by Michael LaBossiere on October 3, 2014


Robert Nozick (Photo credit: Wikipedia)

Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

In this essay I will present the game that creates the paradox and then discuss a specific aspect of Nozick’s version, namely his stipulation regarding the effect of how the player of the game actually decides.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrongly).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000.  If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to Nozick’s condition that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

This stipulation provides some insight into how the Predictor’s prediction ability is supposed to work. This is important because the workings of the Predictor’s ability to predict are, as I argued in my previous essay, rather significant in sorting out how one should decide.

The stipulation mainly serves to indicate how the Predictor’s ability does not work. First, it would seem to indicate that the Predictor does not rely on time travel—that is, it does not go forward in time to observe the decision and then travel back to place (or not place) the money in the box. After all, the prediction in this case would be explained in terms of what the player decided to do. This still leaves it open for the Predictor to visit (or observe) a possible future (or, more accurately, a possible world that is running ahead of the actual world in its time) since the possible future does not reveal what the player actually decides, just what she decides in that possible future. Second, this would seem to indicate that the Predictor is not able to “see” the actual future (perhaps by being able to perceive all of time “at once” rather than linearly as humans do). After all, in this case it would be predicting based on what the player actually decided. Third, this would also rule out any form of backwards causation in which the actual choice was the cause of the prediction. While there are, perhaps, other specific possibilities that are also eliminated, the gist is that the Predictor has to, by Nozick’s stipulation, be limited to information available at the time of the prediction and not information from the future. There are a multitude of possibilities here.

One possibility is that the Predictor is telepathic and can predict based on what it reads regarding the player’s intentions at the time of the prediction. In this case, the best approach would be for the player to think that she will take one box, and then after the prediction is made, take both. Or, alternatively, use some sort of drugs or technology to “trick” the Predictor. The success of this strategy would depend on how well the player can fool the Predictor. If the Predictor cannot be fooled or is unlikely to be fooled then the smart strategy would be to intend to take box B and then just take box B. After all, if the Predictor cannot be fooled, then box B will be empty if the player intends on taking both.

Another possibility is that the Predictor is a researcher—it gathers as much information as it can about the player and makes a shrewd guess based on that information (which might include what the player has written about the paradox). Since Nozick stipulates that the Predictor is “almost certainly” right, the Predictor would need to be an amazing researcher. In this case, the player’s only way to mislead the Predictor is to determine its research methods and try to “game” it so the Predictor will predict that she will just take B, then actually decide to take both. But, once again, the Predictor is stipulated to be “almost certainly” right—so it would seem that the player should just take B. If B is empty, then the Predictor got it wrong, which would “almost certainly” not happen. Of course, it could be contended that since the player does not know how the Predictor will predict based on its research (the player might not know what she will do), then the player should take both. This, of course, assumes that the Predictor has a reasonable chance of being wrong—contrary to the stipulation.

A third possibility is that the Predictor predicts in virtue of its understanding of what it takes to be a deterministic system. Alternatively, the system might be a random system, but one that has probabilities. In either case, the Predictor uses the data available to it at the time and then “does the math” to predict what the player will decide.

If the world really is deterministic, then the Predictor could be wrong if it is determined to make an error in its “math.” So, the player would need to predict how likely this is and then act accordingly. But, of course, the player will simply act as she is determined to act. If the world is probabilistic, then the player would need to estimate the probability that the Predictor will get it right. But, it is stipulated that the Predictor is “almost certainly” right so any strategy used by the player to get one over on the Predictor will “almost certainly” fail, so the player should take box B. Of course, the player will do what “the dice say” and the choice is not a “true” choice.
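
To put rough numbers on why “almost certainly” right points to taking just B, here is a minimal sketch; the 99% figure is simply one way of cashing out the stipulation, not Nozick’s own number.

# Hypothetical accuracy for the Predictor; Nozick only says "almost certainly" right.
p = 0.99
# Taking only B pays $1,000,000 when the prediction is right, $0 when it is wrong.
ev_one_box = p * 1_000_000
# Taking both always pays $1,000, plus $1,000,000 when the Predictor wrongly predicted one-boxing.
ev_two_box = 1_000 + (1 - p) * 1_000_000
print(ev_one_box)  # 990000.0
print(ev_two_box)  # 11000.0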

If the world is one with some sort of metaphysical free will that is in principle unpredictable, then the player’s actual choice would, in principle, be unpredictable. But, of course, this directly violates the stipulation that the Predictor is “almost certainly” right. If the player’s choice is truly unpredictable, then the Predictor might make a shrewd/educated guess, but it would not be “almost certainly” right. In that case, the player could make a rational case for taking both—based on the estimate of how likely it is that the Predictor got it wrong. But this would be a different game, one in which the Predictor is not “almost certainly” right.

This discussion seems to nicely show that the stipulation that “what you actually decide to do is not part of the explanation of why he made the prediction he made” is a red herring. Given the stipulation that the Predictor is “almost certainly” right, it does not really matter how its predictions are explained. The stipulation that what the player actually decides is not part of the explanation simply serves to mislead by creating the false impression that there is a way to “beat” the Predictor by actually deciding to take both boxes and gambling that it has predicted the player will just take B.  As such, the paradox seems to be dissolved—it is the result of some people being misled by one stipulation and not realizing that the stipulation that the Predictor is “almost certainly” right makes the other irrelevant.

 


Gaming Newcomb’s Paradox II: Mechanics

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on October 1, 2014

Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

As a philosopher, a game master (a person who runs a tabletop role-playing game) and an author of game adventures, I am rather fond of puzzles and paradoxes. As a philosopher, I can (like other philosophers) engage in the practice known as “just making stuff up.” As an adventure author, I can do the same—but I need to present the actual mechanics of each problem, puzzle and paradox. For example, a trap description has to specify exactly how the trap works, how it may be overcome and what happens if it is set off. I thought it would be interesting to look at Newcomb’s Paradox from a game master perspective and lay out the possible mechanics for it. But first, I will present the paradox and two stock attempts to solve it.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrongly).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000.  If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to laying out some possible mechanics in gamer terms.

One obvious advantage of crafting the mechanics for a game is that the author and the game master know exactly how the mechanic works. That is, she knows the truth of the matter. While the players in role-playing games know the basic rules, they often do not know the full mechanics of a specific challenge, trap or puzzle. Instead, they need to figure out how it works—which often involves falling into spiked pits or being ground up into wizard burger. Fortunately, Newcomb’s Paradox has very simple game mechanics, but many variants.

In game mechanics, the infallible Predictor is easy to model. The game master’s description would be as follows: “have the player character (PC) playing the Predictor’s game make her choice. The Predictor is infallible, so if the player takes box B, she gets the million. If the player takes both, she gets $1,000.” In this case, the right decision is to take B. After all, the Predictor is infallible. So, the solution is easy.

Predicted choice | Actual choice | Payout
A and B          | A and B       | $1,000
A and B          | B only        | $0
B only           | A and B       | $1,001,000
B only           | B only        | $1,000,000

A less-than-infallible Predictor is also easy to model with dice. The description of the Predictor simply specifies the accuracy of its predictions. So, for example: “The Predictor is correct 99% of the time. After the player character makes her choice, roll D100 (generating a number from 1-100). If you roll 100, the Predictor was wrong. If the PC picked just box B, it is empty and she gets nothing because the Predictor predicted she would take both. If she picked both, B is full and she gets $1,001,000 because the Predictor predicted she would just take one. If you roll 1-99, the Predictor was right. If the PC picked box B, she gets $1,000,000. If she takes both, she gets $1,000 since box B is empty.” In this case, the decision is a gambling matter and the right choice can be calculated by considering the chance the Predictor is right and the relative payoffs. Assuming the Predictor is “almost always right” would make choosing only B the rational choice (unless the player absolutely and desperately needs only $1,000), since the player who picks just B will “almost always” get the $1,000,000 rather than nothing while the player who picks both will “almost always” get just $1,000. But, if the Predictor is “almost always wrong” (or even just usually wrong), then taking both would be the better choice. And so on for all the fine nuances of probability. The solution is relatively easy—it just requires doing some math based on the chance the Predictor is correct in its predictions. As such, if the mechanism of the Predictor is specified, there is no paradox and no problem at all. But, of course, in a role-playing game puzzle, the players should not know the mechanism.
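
As a sketch of that math (my own illustration, assuming the Predictor’s accuracy applies symmetrically to either choice, as in the dice mechanic above), the expected payouts for each strategy and the point where one-boxing overtakes two-boxing can be computed directly:

# Expected payouts as a function of the Predictor's accuracy p (0 to 1).
def expected_payouts(p):
    one_box = p * 1_000_000                # B is full only when the prediction is right
    two_box = 1_000 + (1 - p) * 1_000_000  # A is always $1,000; B is full only on a wrong prediction
    return one_box, two_box

for p in (0.0, 0.5, 0.5005, 0.99, 1.0):
    print(p, expected_payouts(p))
# Taking only B is the better gamble whenever p * 1,000,000 exceeds 1,000 + (1 - p) * 1,000,000,
# i.e. whenever p > 0.5005; below that, taking both boxes has the higher expected payout.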

If the game master is doing her job, when the players are confronted by the Predictor, they will not know the predictor’s predictive powers (and clever players will suspect some sort of trick or trap). The game master will say something like “after explaining the rules, the strange being says ‘my predictions are nearly always right/always right’ and sets two boxes down in front of you.” Really clever players will, of course, make use of spells, items, psionics or technology (depending on the game) to try to determine what is in the box and the capabilities of the Predictor. Most players will also consider just attacking the Predictor and seeing what sort of loot it has. So, for the game to be played in accord with the original version, the game master will need to provide plausible ways to counter all these efforts so that the players have no idea about the abilities of the Predictor or what is in box B. In some ways, this sort of choice would be similar to Pascal’s famous Wager: one knows that the Predictor will get it right or it won’t. But, in this case, the player has no idea about the odds of the Predictor being right. In this case, from the perspective of the player who is acting in ignorance, taking both boxes yields a 100% chance of getting $1,000 and somewhere between 0 and 100% chance of getting the extra $1,000,000. Taking the B box alone yields a 100% chance of not getting the $1,000 and some chance between 0% and 100% of getting $1,000,000. When acting in ignorance, the safe bet is to take both: the player walks away with at least $1,000. Taking just B is a gamble that might or might not pay off. The player might walk away with nothing or $1,000,000.

But, which choice is rational can depend on many possible factors. For example, if the players need $1,000 to buy a weapon to defeat the big boss monster in the dungeon, then picking the safe choice would be the smart choice: they can get the weapon for sure. If they need $1,001,000 to buy the weapon, then picking both would also be a smart choice, since that is the only way to get that sum in this game. If they need $1,000,000 to buy the weapon, then there is no rational way to pick between taking one or both, since they have no idea what gives them the best chance of getting at least $1,000,000. Picking both will get them $1,000 but only gets them the $1,000,000 if the Predictor predicted wrongly. And they have no idea if it will get it wrong. Picking just B only gets them $1,000,000 if the Predictor predicted correctly. And they have no idea if it will get it right.
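
Here is a tiny sketch of that threshold reasoning, using the hypothetical weapon prices from the example above; the decision rule is my own summary of the cases in the paragraph, not part of the original puzzle.

# Which choice makes sense when the Predictor's accuracy is completely unknown?
def pick_under_ignorance(needed):
    if needed <= 1_000:
        return "take both: at least $1,000 is guaranteed"
    if needed > 1_000_000:
        return "take both: $1,001,000 is the only payout that large"
    return "no dominant choice: either pick might or might not reach the sum"

for needed in (1_000, 1_001_000, 1_000_000):
    print(needed, "->", pick_under_ignorance(needed))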

In the actual world, a person playing the game with the Predictor would be in the position of the players in the role-playing game: she does not know how likely it is that the Predictor will get it right. If she believes that the Predictor will probably get it wrong, then she would take both. If she thinks it will get it right, she would take just B. Since she cannot pick randomly (in Nozick’s scenario B is empty if the player decides by chance), that option is not available. As such, Newcomb’s Paradox is an epistemic problem: the player does not know the accuracy of the predictions but if she did, she would know how to pick. But, if it is known (or just assumed) that the Predictor is infallible or almost always right, then taking B is the smart choice (in general, unless the person absolutely must have $1,000). To the degree that the Predictor can be wrong, taking both becomes the smarter choice (if the Predictor is always wrong, taking both is the best choice). So, there seems to be no paradox here. Unless I have it wrong, which I certainly do.

 
