A Philosopher's Blog

Doubling Down

Posted in Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on December 11, 2015
[Image: a diagram of cognitive dissonance theory. Photo credit: Wikipedia]

One interesting phenomenon is the tendency of people to double down on beliefs. For those not familiar with doubling down, this occurs when a person is confronted with evidence against a beloved belief and her belief, far from being weakened by the evidence, is strengthened.

One rather plausible explanation of doubling down rests on Leon Festinger’s classic theory of cognitive dissonance. Roughly put, when a person has a belief that is threatened by evidence, she has two main choices. The first is to adjust her belief in accord with the evidence. If the evidence is plausible and strongly supports the logical inference that the belief is not true, then the rational thing to do is reject the old belief. If the evidence is not plausible or does not strongly support the logical inference that the belief is untrue, then it is rational to stick with the threatened belief on the grounds that the threat is not much of a threat.

As might be suspected, the assessment of what is plausible evidence can be problematic. In general terms, assessing evidence involves considering how it matches one’s own observations, one’s background information about the matter, and credible sources. This assessment can merely push the matter back: the evidence for the evidence will also need to be assessed, which serves to fuel some classic skeptical arguments about the impossibility of knowledge. The idea is that every belief must be assessed and this would lead to an infinite regress, thus making knowing whether a belief is true or not impossible. Naturally, retreating into skepticism will not help when a person is responding to evidence against a beloved belief (unless the beloved belief is a skeptical one)—the person wants her beloved belief to be true. As such, someone defending a beloved belief needs to accept that there is some evidence for the belief—even if the evidence is faith or some sort of revelation.

In terms of assessing the reasoning, the matter is entirely objective when the logic is deductive. A deductive argument that does what it is supposed to do (be valid) is such that if its premises are true, its conclusion must be true. Deductive arguments can be assessed by such means as truth tables, Venn diagrams and proofs, so the reasoning is objectively good or bad. Inductive reasoning is a different matter. While the premises of an inductive argument are supposed to support the conclusion, true premises only make (at best) the conclusion likely to be true. Unlike deductive arguments, inductive arguments vary greatly in strength, and while there are standards of assessment, reasonable people can disagree about the strength of an inductive argument. People can also embrace skepticism here, specifically the problem of induction: even when an inductive argument has all true premises and the reasoning is as good as inductive reasoning gets, the conclusion could still be false. The obvious problem with trying to defend a beloved belief with the problem of induction is that it also cuts against the beloved belief: while any inductive argument against the belief could have a false conclusion, so could any inductive argument for it. As such, a person who wants to hold to a beloved belief in a way that is justified would seem to need to accept argumentation. Naturally, a person can embrace other ways of justifying beliefs; the challenge is showing that these ways should be accepted. This would seem, ironically, to require argumentation.
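The claim that deductive assessment is objective can be illustrated mechanically: a truth table simply enumerates every assignment of truth values and checks whether any row makes all the premises true while the conclusion is false. A minimal sketch in Python (the function and the argument forms shown are my own illustration, not from the post):

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """An argument form is valid iff no row of the truth table
    makes every premise true while the conclusion is false."""
    for values in product([True, False], repeat=num_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample row
    return True

# Modus ponens: P -> Q, P, therefore Q (a classic valid form)
modus_ponens = is_valid(
    premises=[lambda p, q: (not p) or q, lambda p, q: p],
    conclusion=lambda p, q: q,
    num_vars=2,
)

# Affirming the consequent: P -> Q, Q, therefore P (a classic fallacy)
affirming_consequent = is_valid(
    premises=[lambda p, q: (not p) or q, lambda p, q: q],
    conclusion=lambda p, q: p,
    num_vars=2,
)
```

Here the verdicts are objective in exactly the sense described above: `modus_ponens` comes out valid and `affirming_consequent` invalid (the row where P is false and Q is true is a counterexample), and no amount of disagreement changes what the table says. No such mechanical test exists for the strength of an inductive argument.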

A second option is to reject the evidence without undergoing the process of honestly assessing the evidence and rationally considering the logic of the arguments. If a belief is very important to a person, perhaps even central to her identity, then the cost of giving up the belief would be very high. If the person thinks (or just feels) that the evidence and reasoning cannot be engaged fairly without risking the belief, then the person can simply reject the evidence and reasoning using various techniques of self-deception and bad logic (fallacies are commonly employed in this task).

This rejection costs less psychologically than engaging the evidence and reasoning, but is often not free. Since the person probably has some awareness of the self-deception, it needs to be psychologically “justified” and this seems to result in the person strengthening her commitment to the belief. People seem to have all sorts of interesting cognitive biases that help out here, such as confirmation bias and other forms of motivated reasoning. These can be rather hard to defend against, since they derange the very mechanisms that are needed to avoid them.

One interesting way people “defend” their beliefs is by regarding the evidence and opposing argument as an unjust attack, which strengthens their resolve in the face of perceived hostility. After all, people fight harder when they believe they are under attack. Some people even infer that they must be right because they are being criticized. As they see it, if they were not right, people would not be trying to show that they are in error. This is rather problematic reasoning, as shown by the fact that people do not infer that they are in error just because people are supporting them.

People also, as John Locke argued in his work on enthusiasm, take how strongly they feel about a belief as evidence for its truth. When people are challenged, they typically feel angry, and this strong emotion intensifies their feeling about the belief. Hence, when they “check” on the truth of the belief using the measure of feeling, they feel even more strongly that it is true. However, how they feel about it (as Locke argued) is no indication of its truth. Or falsity.

As a closing point, one intriguing rhetorical tactic is to accuse a person with whom one disagrees of doubling down. This accusation, after all, carries the insinuation that the person is in error and is thus irrationally holding to a false belief. The reasonable defense is to show that evidence and arguments are being used in support of the belief. The unreasonable counter is to employ the very tactics of doubling down and refuse to accept such a response. That said, it is worth considering that one person’s double down is often another person’s considered belief. Or, as it might be put: I support my beliefs with logic. My opponents double down.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter


Twitter Mining

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on July 11, 2014

In February 2014, Twitter made all of its tweets available to researchers. As might be suspected, this massive data set is a potential treasure trove for researchers. While one might picture researchers going through the tweets for the obvious content (such as what people eat and drink), this data can be mined in some potentially surprising ways. For example, the spread of infectious diseases can be tracked via an analysis of tweets. This sort of data mining is not new; some years ago I wrote an essay on the ethics of mining data, using as my example Target’s analysis of customer data to determine which customers were pregnant (so as to send targeted ads). What is new is that all the tweets are now available to researchers, thus providing a vast heap of data (and probably a lot of crap).

As might be imagined, there are some ethical concerns about the use of this data. While some might suspect that this creates a brave new world for ethics, this is not the case. While the availability of all the tweets is new and the scale is certainly large, this scenario is old hat for ethics. First, tweets are public communications that are on par morally with yelling statements in public places, posting statements on physical bulletin boards, putting an announcement in the paper and so on. While the tweets are electronic, this is not a morally relevant distinction. As such, researchers delving into the tweets is morally the same as a researcher looking at a bulletin board for data or spending time in public places to see the number of people who go to a specific store.

Second, tweets can (often) be linked to a specific person, and this raises the stock concern about identifying specific people in the research, such as identifying Jane Doe as being likely to have an STD based on an analysis of her tweets. While Twitter provides another context in which this can occur, identifying specific people in research without their consent seems to be well established as being wrong. For example, while a researcher has every right to count the number of people going to a strip club via public spaces, to publish a list of the specific individuals visiting the club in her research would be morally dubious at best. As another example, a researcher has every right to count the number of runners observed in public spaces. However, to publish their names without their consent in her research would also be morally dubious at best. Engaging in speculation about why they run and linking that to specific people would be even worse (“based on the algorithm used to analyze the running patterns, Jane Doe is using her running to cover up her affair with John Roe”).

One counter is, of course, that anyone with access to the data and the right sorts of algorithms could find out this information for herself. This would simply be an extension of the oldest method of research: making inferences from sensory data. In this case the data would be massive and the inferences would be handled by computers, but the basic method is the same. Presumably people do not have a privacy right against inferences based on publicly available data (a subject I have written about before). Speculation would presumably not violate privacy rights, but could enter into the realm of slander, which is distinct from a privacy matter.

However, such inferences would seem to fall under privacy rights in regard to the professional ethics governing researchers; that is, researchers should not identify specific people without their consent, whether they are making inferences or not. To use an analogy, if I infer that Jane Doe and John Roe’s public running patterns indicate they are having an affair, I have not violated their right to privacy (assuming this right also covers affairs). However, if I were engaged in running research and published this inference in a journal article without their permission, then I would presumably be acting in violation of research ethics.

The obvious counter is that as long as a researcher is not engaged in slander (that is, intentionally saying untrue things that harm a person), there would be little grounds for moral condemnation. After all, as long as the data was publicly gathered and the link between the data and the specific person is also in the public realm, nothing wrong has been done. To use an analogy, if someone in a public park wearing a nametag engages in specific behavior, then it seems morally acceptable to report that. This would be similar to the ethics governing journalism: public behavior by identified individuals is fair game. Inferences are also fair game, provided that they do not constitute slander.

In closing, while Twitter has given researchers a new pile of data, the company has not created any new moral territory.
