“I believe in God, and there are things that I believe that I know are crazy. I know they’re not true.”
While Stephen Colbert ended up as a successful comedian, he originally planned to major in philosophy. His past occasionally returns to haunt him with digressions from the land of comedy into the realm of philosophy (though detractors might claim that philosophy is comedy without humor; but that is actually law). Colbert has what seems to be an odd epistemology: he regularly claims that he believes in things he knows are not true, such as guardian angels. While it would be easy enough to dismiss this claim as merely comedic, it does raise many interesting philosophical issues. The main and most obvious issue is whether a person can believe in something they know is not true.
While a thorough examination of this issue would require a deep examination of the concepts of belief, truth and knowledge, I will take a shortcut and go with intuitively plausible stock accounts of these concepts. To believe something is to hold the opinion that it is true. A belief is true, in the common sense view, when it gets reality right—this is the often maligned correspondence theory of truth. The stock simple account of knowledge in philosophy is that a person knows that P when the person believes P, P is true, and the belief in P is properly justified. The justified true belief account of knowledge has been savagely bloodied by countless attacks, but shall suffice for this discussion.
Given this basic analysis, it would seem impossible for a person to believe in something they know is not true. This would require that the person believes something is true when they also believe it is false. To use the example of God, a person would need to believe that it is true that God exists and false that God exists. This would seem to commit the person to believing that a contradiction is true, which is problematic because a contradiction is always false.
One possible response is to point out that the human mind is not beholden to the rules of logic—while a contradiction cannot be true, there are many ways a person can hold to contradictory beliefs. One possibility is that the person does not realize that the beliefs contradict one another and hence they can hold to both. This might be due to an ability to compartmentalize the beliefs so they are never in the consciousness at the same time or due to a failure to recognize the contradiction. Another possibility is that the person does not grasp the notion of contradiction and hence does not realize that they cannot logically accept the truth of two beliefs that are contradictory.
While these responses do have considerable appeal, they do not appear to work in cases in which the person actually claims, as Colbert does, that they believe something they know is not true. After all, making this claim does require considering both beliefs in the same context and, if the claim of knowledge is taken seriously, that the person is aware that the rejection of the belief is justified sufficiently to qualify as knowledge. As such, when a person claims that they believe something they know is not true, that person would seem to be either not telling the truth or ignorant of what the words mean. Or perhaps there are other alternatives.
One possibility is to consider the power of cognitive dissonance management—a person could know that a cherished belief is not true, yet refuse to reject the belief while being fully aware that this is a problem. I will explore this possibility in the context of comfort beliefs in a later essay.
Another possibility is to consider that the term “knowledge” is not being used in the strict philosophical sense of a justified true belief. Rather, it could be taken to refer to strongly believing that something is true—even when it is not. For example, a person might say “I know I turned off the stove” when, in fact, they did not. As another example, a person might say “I knew she loved me, but I was wrong.” What they mean is that they really believed she loved him, but that belief was false.
Using this weaker account of knowledge, a person can believe in something that they know is not true. This just involves believing in something that one also strongly believes is not true. In some cases, this is quite rational. For example, when I roll a twenty-sided die, I strongly believe that I will not roll a 20. However, I do also believe that I will roll a 20 and my belief has a 5% chance of being true. As such, I can believe what I know is not true—assuming that this means that I can believe in something that I believe is less likely than another belief.
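The dice arithmetic here is easy to check with a short simulation. This is a minimal sketch (the function name and trial count are my own illustration); the exact probability of rolling a 20 on a fair d20 is 1/20, or 5%.

```python
import random

def roll_d20_estimate(trials=100_000, seed=42):
    """Estimate the probability of rolling a 20 on a fair d20 by simulation."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.randint(1, 20) == 20)
    return hits / trials

# The estimate should land close to the exact value of 0.05.
print(roll_d20_estimate())  # prints a value close to 0.05
```

So the two attitudes in the example are perfectly compatible: I can believe "I will not roll a 20" (95% likely) while also holding the weaker belief "I will roll a 20" (5% likely).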
People are also strongly influenced by emotional and other factors that are not based in a rational assessment. For example, a gambler might know that their odds of winning are extremely low and thus know they will lose (that is, have a strongly supported belief that they will lose) yet also strongly believe they will win (that is, feel strongly about a weakly supported belief). Likewise, a person could accept that the weight of the evidence is against the existence of God and thus know that God does not exist (that is, have a strongly supported belief that God does not exist) while also believing strongly that God does exist (that is, having considerable faith that is not based in evidence).
In the previous essay on threat assessment I looked at the influence of availability heuristics and fallacies that directly relate to errors in reasoning about statistics and probability. This essay continues the discussion by exploring the influence of fear and anger on threat assessment.
As noted in the previous essay, a rational assessment of a threat involves properly considering how likely it is that a threat will occur and, if it occurs, how severe the consequences might be. As might be suspected, the influence of fear and anger can cause people to engage in poor threat assessment that overestimates the likelihood of a threat or the severity of the threat.
One common starting point for anger and fear is the stereotype. Roughly put, a stereotype is an uncritical generalization about a group. While stereotypes are generally thought of as being negative (that is, attributing undesirable traits such as laziness or greed), there are also positive stereotypes. They are not positive in that the stereotyping itself is good. Rather, the positive stereotype attributes desirable qualities, such as being good at math or skilled at making money. While it makes sense to think that stereotypes that provide a foundation for fear would be negative, they often include a mix of negative and positive qualities. For example, a feared group might be cast as stupid, yet somehow also incredibly cunning and dangerous.
After recent terrorist attacks, many people in the United States have embraced negative stereotypes about Muslims, such as the idea that they are all terrorists. This sort of stereotyping leads to mistakes similar to those that arise from hasty generalizations: reasoning about a threat based on stereotypes will tend to lead to an error in assessment. The defense against a stereotype is to seriously inquire whether the stereotype is true or not.
This stereotype has been used as a base (or fuel) for a stock rhetorical tool, that of demonizing. Demonizing, in this context, involves portraying a group as evil and dangerous. This can be seen as a specialized form of hyperbole in that it exaggerates the evil of the group and the danger it represents. Demonizing is often combined with scapegoating—blaming a person or group for problems they are not actually responsible for. A person can demonize on her own or be subject to the demonizing rhetoric of others.
Demonizing presents a clear threat to rational threat assessment. If a group is demonized successfully, it will be (by definition) regarded as more evil and dangerous than it really is. As such, both the assessment of the probability and severity of the threat will be distorted. For example, the demonization of Muslims by various politicians and pundits influences some people to make errors in assessing the danger presented by Muslims in general and Syrian refugees in particular.
The defense against demonizing is similar to the defense against stereotypes—a serious inquiry into whether the claims are true or are, in fact, demonizing. It is worth noting that what might seem to be demonizing might be an accurate description. This is because demonizing is, like hyperbole, exaggerating the evil of and danger presented by a group. If the description is true, then it would not be demonizing. Put informally, describing a group as evil and dangerous need not be demonizing. For example, this description would match the Khmer Rouge.
While stereotyping and demonizing are mere rhetorical devices, there are also fallacies that distort threat assessment. Not surprisingly, one of these is scare tactics (also known as appeal to fear). This fallacy involves substituting something intended to create fear in the target in place of evidence for a claim. While scare tactics can be used in other ways, it can also be used to distort threat assessment. One aspect of its distortion is the use of fear—when people are afraid, they tend to overestimate the probability and severity of threats. Scare tactics is also used to feed fear—one fear can be used to get people to accept a claim that makes them even more afraid.
One thing that is especially worrisome about scare tactics in the context of terrorism is that in addition to making people afraid, it is also routinely used to “justify” encroachments on rights, massive spending, and the abandonment of important moral values. While courage is an excellent defense against this fallacy, asking two important questions also helps. The first is to ask “should I be afraid?” and the second is to ask “even if I am afraid, is the claim actually true?” For example, scare tactics has been used to “support” the claim that Syrian refugees should not be allowed into the United States. In the face of this tactic, one should inquire whether or not there are grounds to be afraid of Syrian refugees and also inquire into whether or not an appeal to fear justifies the proposed ban (obviously, it does not).
It is worth noting that just because something is scary or makes people afraid it does not follow that it cannot serve as legitimate evidence in a good argument. For example, the possibility of a fatal head injury from a motorcycle accident is scary, but is also a good reason to wear a helmet. The challenge is sorting out “judgments” based merely on fear and judgments that involve good reasoning about scary things.
While fear makes people behave irrationally, so does anger. While anger is an emotion and not a fallacy, it does provide the fuel for the appeal to anger. This fallacy occurs when something that is intended to create anger is substituted for evidence for a claim. For example, a demagogue might work up a crowd’s anger at illegal migrants to get them to accept absurd claims about building a wall along a massive border.
Like scare tactics, the use of an appeal to anger distorts threat assessment. One aspect is that when people are angry, they tend to reason poorly about the likelihood and severity of a threat. For example, the crowd that is enraged against illegal migrants might greatly overestimate the likelihood that the migrants are “taking their jobs” and the extent to which they are “destroying America.” Another aspect is that the appeal to anger, in the context of public policy, is often used to “justify” policies that encroach on rights and do other harms. For example, when people are angry about a mass shooting, proposals follow to limit gun rights in ways that have no relevance to the incident. As another example, the anger at illegal migrants is often used to “justify” policies that would actually be harmful to the United States. As a third example, appeals to anger are often used to justify policies that would be ineffective at addressing terrorism and would do far more harm than good (such as the proposed ban on all Muslims).
It is important to keep in mind that if a claim makes a person angry, it does not follow that the claim cannot be evidence for a conclusion. For example, a person who learns that her husband is having an affair with an underage girl would probably be very angry. But, this would also serve as good evidence for the conclusion that she should report him to the police and then divorce him. As another example, the fact that illegal migrants are here illegally and this is often simply tolerated can make someone mad, but this can also serve as a premise in a good argument in favor of enforcing (or changing) the laws.
One defense against appeal to anger is good anger management skills. Another is to seriously inquire into whether or not there are grounds to be angry and whether or not any evidence is being offered for the claim. If all that is offered is an appeal to anger, then there is no reason to accept the claim on the basis of the appeal.
The rational assessment of threats is important for practical and moral reasons. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. There is also the concern about the harm of creating fear and anger that are unfounded. In addition to the psychological harm to individuals that arise from living in fear and anger, there is also the damage stereotyping, demonizing, scare tactics and appeal to anger do to society as a whole. While anger and fear can unify people, they most often unify by dividing—pitting us against them.
As in my previous essay, I urge people to think through threats rather than giving in to the seductive demons of fear and anger.
When engaged in rational threat assessment, there are two main factors that need to be considered. The first is the probability of the threat. The second is, very broadly speaking, the severity of the threat. These two can be combined into one sweeping question: “how likely is it that this will happen and, if it does, how bad will it be?”
Making rational decisions about dangers involves considering both of these factors. For example, consider the risks of going to a crowded area such as a movie theater or school. There is a high probability of being exposed to the cold virus, but it is a very low severity threat. There is an exceedingly low probability that there will be a mass shooting, but it is a high severity threat since it can result in injury or death.
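The two factors can be combined into a rough expected-harm score. The sketch below is my own illustration of the idea, and the probability and severity numbers in it are placeholders, not real statistics.

```python
def expected_harm(probability, severity):
    """Combine the two factors of threat assessment:
    how likely the threat is, and how bad it would be if it occurred."""
    return probability * severity

# Illustrative placeholder values (not real statistics):
# catching a cold at the theater is likely but mild...
cold = expected_harm(probability=0.30, severity=1)
# ...while a mass shooting is exceedingly unlikely but severe.
shooting = expected_harm(probability=0.000001, severity=1_000_000)
```

On a toy model like this, a high-probability/low-severity threat and a low-probability/high-severity threat can come out comparable, which is exactly why a rational assessment must weigh both factors rather than either one alone.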
While humans have done a fairly good job at surviving, this seems to have been despite our amazingly bad skills at rational threat assessment. To be specific, the worry people feel in regards to a threat generally does not match up with the actual probability of the threat occurring. People do seem somewhat better at assessing the severity, though they are also often in error about this.
One excellent example of poor threat assessment is in regards to the fear Americans have in regards to domestic terrorism. As of December 15, 2015 there have been 45 people killed in the United States in attacks classified as “violent jihadist attacks” and 48 people killed in attacks classified as “far right wing attacks” since 9/11/2001. In contrast, there were 301,797 gun deaths from 2005-2015 in the United States and over 30,000 people are killed each year in motor vehicle crashes in the United States.
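The disparity in these figures becomes even starker on a common per-year scale. This quick calculation uses only the numbers cited above; the time spans are approximate (about 14 years from 9/11/2001 through 2015, and 11 years for the 2005-2015 gun-death figure).

```python
# Rough per-year rates from the figures cited in the text.
YEARS_SINCE_911 = 14          # approximate: 9/11/2001 through 2015
YEARS_GUN_FIGURE = 11         # the 2005-2015 span, inclusive

jihadist_per_year = 45 / YEARS_SINCE_911
far_right_per_year = 48 / YEARS_SINCE_911
gun_deaths_per_year = 301_797 / YEARS_GUN_FIGURE
vehicle_deaths_per_year = 30_000  # "over 30,000 ... each year"

print(round(jihadist_per_year, 1))   # ≈ 3.2 deaths per year
print(round(gun_deaths_per_year))    # ≈ 27436 deaths per year
```

Even with generous rounding, the annual death toll from guns or motor vehicles exceeds that of domestic terrorism by roughly four orders of magnitude.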
Despite the incredibly low likelihood of a person being killed by an act of terrorism in the United States, many people are terrified by terrorism (which is, of course, the goal of terrorism) and have become rather focused on the matter since the murders in San Bernardino. Although there have been no acts of terrorism on the part of refugees in the United States, many people are terrified of refugees and this has led to calls for refusing to accept Syrian refugees; Donald Trump has famously called for a ban on all Muslims entering the United States.
Given that an American is vastly more likely to be killed while driving than killed by a terrorist, it might be wondered why people are so incredibly bad at this sort of threat assessment. The answer, in regards to having fear vastly out of proportion to the probability, is easy enough—it involves a cognitive bias and some classic fallacies.
People follow general rules when they estimate probabilities and the ones we use unconsciously are called heuristics. While the right way to estimate probability is to use proper statistical methods, people generally fall victim to the bias known as the availability heuristic. The idea is that a person unconsciously assigns a probability to something based on how often they think of that sort of event. While an event that occurs often will tend to be thought of often, the fact that something is often thought of does not make it more likely to occur.
After an incident of domestic terrorism, people think about terrorism far more often and thus tend to unconsciously believe that the chance of terrorism occurring is far higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of it occurring are high—even though the probability is incredibly low (driving to the beach is vastly more likely to kill you than a shark is). The defense against this bias is to find reliable statistical data and use that as the basis for inferences about threats—that is, think it through rather than trying to feel through it. This is, of course, very difficult: people tend to regard their feelings, however unwarranted, as the best evidence—even though they are usually the worst evidence.
People are also misled about probability by various fallacies. One is the spotlight fallacy. The spotlight fallacy is committed when a person uncritically assumes that all (or many) members or cases of a certain class or type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention is focused on that fact, leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the exact sort of mistake that would occur if it were inferred that most Christians are terrorists because the media covered a terrorist who was Christian (who shot up a Planned Parenthood). If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a domestic terrorist attack by Muslims.
Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. This fallacy is similar to hasty generalization and a similar sort of error is committed, namely drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample.
People often fall victim to this fallacy because stories and anecdotes tend to have more psychological influence than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence in favor of said anecdote. Not surprisingly, people most commonly commit this fallacy because they want to believe that what is true in the anecdote is true for the whole population.
In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of domestic terrorism or tell the story about a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring. For example, people point to the claim that one of the terrorists in Paris masqueraded as a refugee and infer that refugees pose a great threat to the United States. Or they tell the story about the one attacker in San Bernardino who arrived in the states on a K-1 (“fiancé”) visa and make unwarranted conclusions about the danger of the visa system (which is used by about 25,000 people a year).
One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is particularly vivid or dramatic does not make the event more likely to occur, especially in the face of significant statistical evidence to the contrary.
People often accept this sort of “reasoning” because particularly vivid or dramatic cases tend to make a very strong impression on the human mind. For example, mass shootings by domestic terrorists are vivid and awful, so it is hardly surprising that people feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more a person feels that it will occur.
It should be kept in mind that taking into account the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide to never go sky diving because the effects of an accident can be very, very dramatic. If he knows that, statistically, the chances of an accident happening are very low but he considers even a small risk to be unacceptable, then he would not be making this error in reasoning. This then becomes a matter of value judgment—how much risk is a person willing to tolerate relative to the severity of the potential harm?
The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than what the facts show.
Such rational assessment of threats is rather important for both practical and moral reasons. The matter of terrorism is no exception to this. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. For example, spending billions to counter a miniscule threat while spending little on leading causes of harm would be irrational (if the goal is to protect people from harm). There is also the concern about the harm of creating fear that is unfounded. In addition to the psychological harm to individuals, there is also the damage to the social fabric. There has already been an increase in attacks on Muslims in America and people are seriously considering abandoning core American values, such as the freedom of religion and being good Samaritans.
In light of the above, I urge people to think rather than feel their way through their concerns about terrorism. Also, I urge people to stop listening to Donald Trump. He has the right of free expression, but people also have the right of free listening.
One interesting phenomenon is the tendency of people to double down on beliefs. For those not familiar with doubling down, this occurs when a person is confronted with evidence against a beloved belief and her belief, far from being weakened by the evidence, is strengthened.
One rather plausible explanation of doubling down rests on Leon Festinger’s classic theory of cognitive dissonance. Roughly put, when a person has a belief that is threatened by evidence, she has two main choices. The first is to adjust her belief in accord with the evidence. If the evidence is plausible and strongly supports the logical inference that the belief is not true, then the rational thing to do is reject the old belief. If the evidence is not plausible or does not strongly support the logical inference that the belief is untrue, then it is rational to stick with the threatened belief on the grounds that the threat is not much of a threat.
As might be suspected, the assessment of what is plausible evidence can be problematic. In general terms, assessing evidence involves considering how it matches one’s own observations, one’s background information about the matter, and credible sources. This assessment can merely push the matter back: the evidence for the evidence will also need to be assessed, which serves to fuel some classic skeptical arguments about the impossibility of knowledge. The idea is that every belief must be assessed and this would lead to an infinite regress, thus making knowing whether a belief is true or not impossible. Naturally, retreating into skepticism will not help when a person is responding to evidence against a beloved belief (unless the beloved belief is a skeptical one)—the person wants her beloved belief to be true. As such, someone defending a beloved belief needs to accept that there is some evidence for the belief—even if the evidence is faith or some sort of revelation.
In terms of assessing the reasoning, the matter is entirely objective if it is deductive logic. Deductive logic is such that if an argument is doing what it is supposed to do (be valid), then if the premises are true, then the conclusion must be true. Deductive arguments can be assessed by such things as truth tables, Venn diagrams and proofs, thus the reasoning is objectively good or bad. Inductive reasoning is a different matter. While the premises of an inductive argument are supposed to support the conclusion, inductive arguments are such that true premises only make (at best) the conclusion likely to be true. Unlike deductive arguments, inductive arguments vary greatly in strength and while there are standards of assessment, reasonable people can disagree about the strength of an inductive argument. People can also embrace skepticism here, specifically the problem of induction: even when an inductive argument has all true premises and the reasoning is as good as inductive reasoning gets, the conclusion could still be false. The obvious problem with trying to defend a beloved belief with the problem of induction is that it also cuts against the beloved belief—while any inductive argument against the belief could have a false conclusion, so could any inductive argument for it. As such, a person who wants to hold to a beloved belief in a way that is justified would seem to need to accept argumentation. Naturally, a person can embrace other ways of justifying beliefs—the challenge is showing that these ways should be accepted. This would seem, ironically, to require argumentation.
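The claim that deductive validity is objectively checkable can be made concrete with a brute-force truth table: an argument is valid exactly when no assignment of truth values makes all the premises true and the conclusion false. The sketch below is my own illustration; the function names and argument forms are not from the text.

```python
from itertools import product

def implies(p, q):
    """Material conditional: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

def is_valid(premises, conclusion, n_vars):
    """Check validity by exhausting the truth table: valid iff no row
    makes every premise true while the conclusion is false."""
    for values in product([True, False], repeat=n_vars):
        if all(prem(*values) for prem in premises) and not conclusion(*values):
            return False
    return True

# Modus ponens: from P -> Q and P, infer Q. Objectively valid.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q, 2))   # True

# Affirming the consequent: from P -> Q and Q, infer P. Objectively invalid.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p, 2))   # False
```

No comparable mechanical test exists for inductive strength, which is precisely the asymmetry the paragraph above describes: deductive assessment is objective, while reasonable people can disagree about how strong an inductive argument is.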
A second option is to reject the evidence without undergoing the process of honestly assessing the evidence and rationally considering the logic of the arguments. If a belief is very important to a person, perhaps even central to her identity, then the cost of giving up the belief would be very high. If the person thinks (or just feels) that the evidence and reasoning cannot be engaged fairly without risking the belief, then the person can simply reject the evidence and reasoning using various techniques of self-deception and bad logic (fallacies are commonly employed in this task).
This rejection costs less psychologically than engaging the evidence and reasoning, but is often not free. Since the person probably has some awareness of the self-deception, it needs to be psychologically “justified” and this seems to result in the person strengthening her commitment to the belief. People seem to have all sorts of interesting cognitive biases that help out here, such as confirmation bias and other forms of motivated reasoning. These can be rather hard to defend against, since they derange the very mechanisms that are needed to avoid them.
One interesting way people “defend” their beliefs is by regarding the evidence and opposing arguments as an unjust attack, which strengthens their resolve in the face of perceived hostility. After all, people fight harder when they believe they are under attack. Some people even infer that they must be right because they are being criticized. As they see it, if they were not right, people would not be trying to show that they are in error. This is rather problematic reasoning—as shown by the fact that people do not infer that they are in error just because people are supporting them.
People also, as John Locke argued in his work on enthusiasm, consider how strongly they feel about a belief as evidence for its truth. When people are challenged, they typically feel angry and this strong emotion makes them feel even more strongly. Hence, when they “check” on the truth of the belief using the measure of feeling, they feel even stronger that it is true. However, how they feel about it (as Locke argued) is no indication of its truth. Or falsity.
As a closing point, one intriguing rhetorical tactic is to accuse a person who disagrees with one of doubling down. This accusation, after all, comes with the insinuation that the person is in error and is thus irrationally holding to a false belief. The reasonable defense is to show that evidence and arguments are being used in support of the belief. The unreasonable counter is to employ the very tactics of doubling down and refuse to accept such a response. That said, it is worth considering that one person’s double down is often another person’s considered belief. Or, as it might be put, I support my beliefs with logic. My opponents double down.
In Art of the Deal Donald Trump calls one of his rhetorical tools “truthful hyperbole.” He both defends and praises it as “an innocent form of exaggeration — and a very effective form of promotion.” As a promoter, Trump made extensive use of this technique. Now he is using it in his bid for President.
Hyperbole is an extravagant overstatement and it can be either positive or negative in character. When describing himself and his plans, Trump makes extensive use of positive hyperbole: he is the best and every plan of his is the best. He also makes extensive use of negative hyperbole—often to a degree that seems to cross over from exaggeration to fabrication. In any case, his concept of “truthful hyperbole” is well worth considering.
From a logical standpoint, “truthful hyperbole” is an impossibility. This is because hyperbole is, by definition, not true. Hyperbole is not merely a matter of using extreme language. After all, extreme language might accurately describe something. For example, describing Daesh as monstrous and evil would be spot on. Hyperbole is a matter of exaggeration that goes beyond the actual facts. For example, describing Donald Trump as monstrously evil would be hyperbole. As such, hyperbole is always untrue. Because of this, the phrase “truthful hyperbole” says the same thing as “accurate exaggeration”, which nicely reveals the problem.
Trump, a brilliant master of rhetoric, is right about the rhetorical value of hyperbole—it can have considerable psychological force. It, however, lacks logical force—it provides no logical reason to accept a claim. Trump also seems to be right in that there can be innocent exaggeration. I will now turn to the ethics of hyperbole.
Since hyperbole is by definition untrue, there are two main concerns. One is how far the hyperbole deviates from the truth. The other is whether the exaggeration is harmless or not. I will begin with consideration of the truth.
While a hyperbolic claim is necessarily untrue, it can deviate from the truth in varying degrees. As with fish stories, there does seem to be some moral wiggle room in regards to proximity to the truth. While there is no exact line (to require that would be to fall into the line drawing fallacy) that defines the exact boundary of morally acceptable exaggeration, some untruths go beyond that line. This line varies with the circumstances—the ethics of fish stories, for example, differs from the ethics of job interviews.
While hyperbole is untrue, it does have to have at least some anchor in the truth. If it does not, then it is not exaggeration but fabrication. This is the difference between being close to the truth and being completely untrue. Naturally, hyperbole can be mixed in with fabrication.
For example, if it is claimed that some people in America celebrated the terrorism of 9/11, then that is almost certainly true—there was surely at least one person who did this. If someone claims that dozens of people celebrated in public in America on 9/11 and this was shown on TV, then this might be an exaggeration (we do not know how many people in America celebrated) but it certainly includes a fabrication (the TV part). If it is claimed that hundreds did so, the exaggeration might be considerable—but it still contains a key fabrication. When the claim reaches thousands, the exaggeration might be extreme. Or it might not—thousands might have celebrated in secret. However, the claim that people were seen celebrating in public and video existed for Trump to see is false. So, his remarks might be an exaggeration, but they definitely contain fabrication. This could, of course, lead to a debate about the distinction between exaggeration and fabrication. For example, suppose that someone filmed himself celebrating on 9/11 and showed it to someone else. This could be “exaggerated” into the claim that thousands celebrated on video and people saw it. However, saying this is an exaggeration would seem to be an understatement. Fabrication would seem the far better fit in this hypothetical case.
One way to help determine the ethical boundaries of hyperbole is to consider the second concern, namely whether the hyperbole (untruth) is harmless or not. Trump is right to claim there can be innocent forms of exaggeration. This can be taken as exaggeration that is morally acceptable and can be used as a basis to distinguish such hyperbole from lying.
One realm in which exaggeration can be quite innocent is that of storytelling. Aristotle, in the Poetics, notes that “everyone tells a story with his own addition, knowing his hearers like it.” While a lover of truth, Aristotle recognized the role of untruth in good storytelling, saying that “Homer has chiefly taught other poets the art of telling lies skillfully.” The telling of tall tales that feature even extravagant exaggeration is morally acceptable because the tales are intended to entertain—that is, the intention is good. In the case of exaggerating in stories to entertain the audience or adding a small bit of rhetorical “shine” to polish a point, the exaggeration is harmless—which ties back to the possibility that Trump sees himself as an entertainer and not an actual candidate.
In contrast, exaggerations that have a malign intent would be morally wrong. Exaggerations that are not intended to be harmful, yet prove to be so, would also be problematic—but discussing the complexities of intent and consequences would take the essay too far afield.
The extent of the exaggeration would also be relevant here—the greater the exaggeration that is aimed at malign purposes or that has harmful consequences, the worse it would be morally. After all, if deviating from the truth is (generally) wrong, then deviating from it more would be worse. In the case of Trump’s claim about thousands of people celebrating on 9/11, this untruth feeds into fear, racism and religious intolerance. As such, it is not an innocent exaggeration, but a malign untruth.
Robert Dear is alleged to have murdered three people and wounded nine others at a Planned Parenthood clinic on November 29, 2015. This incident is, unfortunately, part of two recurring themes. One is that of mass shootings in the United States. The other is domestic terrorism on the part of right-wing individuals and groups including self-proclaimed Christian anti-abortionists. While a discussion of these matters could take place in many contexts, I will use a framework provided by Fox News host Andrea Tantaros.
Tantaros began her discussion of the matter by criticizing the left for “indicting an entire religion” of “Christian white Republicans” and then noted that “the same people who hesitate using the phrase Islamic terrorism were very quick to use the term ‘Christian’.”
Tantaros is correct to criticize those who indict an entire religion on the basis of the actions of the worst elements of that faith. While the alleged shooter Robert Dear has identified himself as a Christian and seems to have been motivated by his religious beliefs, it would be an error to infer that what is true of Dear is true of all Christians. Such a leap would be a classic hasty generalization—a fallacy in which an inference is made about a group on the basis of a sample that is too small to justify the inference. This inference would also be an error because it is well known that the vast majority of Christians, like the vast majority of people of other faiths, are not inclined to murder. As such, the claim that Dear represents all Christians contradicts the known facts and his alleged acts of violence should not be labeled as “Christian terrorism”, despite the fact that there have been more acts of domestic terrorism committed against abortion clinics by self-proclaimed Christians than there have been acts of domestic terrorism inflicted by self-proclaimed Muslims since September 11, 2001.
While there are some arguments in favor of labeling terrorism by the religions claimed by the terrorists, there are excellent reasons to avoid such labeling. One is that it unfairly connects all the members of a faith to the terrorists. Another is that it unfairly implies that such acts of terror are endorsed or encouraged by the faith. To label Dear as a “Christian terrorist” is to connect him to the millions of Christians who would not slaughter other people and to imply that the acts of terror are in accord with the values of Christianity.
It could, of course, be objected that such terrorists are connected to other members of the faith, despite the differences and that such terrorists are acting in accord with the values of the faith as they see it. It could even be objected, as people often do in the case of both Islam and Christianity, that all members of the faith are potential terrorists and that acts of terror are perfectly in accord with the faith. Naturally, some people of one faith insist that it is not true of them, even when they are insisting that it is true of the other faith.
The discussion on Fox then turned to claims that Republican politicians had incited the violence with their rhetoric about “selling baby parts” on the grounds that Dear is alleged to have said “no more baby parts.” Fox’s Charles Payne made the point that Republicans should not be blamed because they were asserting that public funds should not pay for abortions. Perhaps without realizing what he was doing, he immediately said “particularly when they’re talking about selling baby parts”, thus bringing up the sort of rhetoric that has been condemned as inflammatory.
Continuing the thread, Tantaros blamed the left for exploiting the shooting for political purposes, including trying to “muzzle” Republicans for talking about “the illegal harvesting of baby parts on the off chance that some lunatic out there might hear that rhetoric and decide to go shoot up a clinic.”
Before moving to the main issues, it is important to note that there is no evidence, despite numerous investigations, that Planned Parenthood has ever been involved in “the illegal harvesting of baby parts.” It is certainly ironic that as part of their denial of the influence of such rhetoric, the folks at Fox would bring up exactly that rhetoric. But, now to the issues.
One issue that is a matter of psychology and causation is whether or not such rhetoric was a causal factor in the actions of the alleged shooter. As others have argued, given that there has only been one such attack since the rhetoric heated up, its causal influence must be incredibly small. There is also the very reasonable point that even if the alleged shooter were motivated by the rhetoric, this would be but one factor among many others. As such, to place moral blame for the shootings upon the Republican rhetoric would be an error.
The second issue is one of free speech. Legally, the Republican rhetoric is protected by the 1st Amendment and rightfully so. As long as they do not cross the line and start telling people to commit crimes, they have every legal right to engage in such heated rhetoric. Lying of the sort that is used in rhetoric is also not against the law. However, there is also the moral issue: should the Republicans use such rhetoric?
One answer is linked to the psychological issue—as long as the Republicans are not knowingly causing people to engage in acts of violence, the moral right to free speech would entail that their actions are morally tolerable. The mere fact that the rhetoric is extreme and offensive to some is not grounds for regarding it as morally wrong. However, being merely morally tolerable is not a very exalted status. I have a preference for civil discourse that avoids needlessly heated rhetoric, but perhaps this is but a personal preference.
Another answer is linked to the untruths that have been used in the rhetoric. While truth seems to matter little in politics, it still matters in ethics. As such, intentionally making untrue claims about Planned Parenthood would seem to be wrong, at least on the assumption that lying is wrong. It could, of course, be argued that such untruths can be justified on utilitarian grounds—which is a standard way to justify lying.
Since the killings at the clinic constituted a mass shooting, the conversation would not be complete without the raising of a stock talking point about good guys with guns. The honor fell to Payne to say “And also, what if more people had guns there, guys?”
The issue of whether or not the presence of armed civilians would prevent or mitigate a mass shooting is certainly one of considerable controversy. But it is essentially an empirical matter that can, in theory, be settled by examining the data. Those who support the claim that a solution to gun violence is being armed point to cases in which armed civilians use their guns to prevent or at least mitigate crimes. Those who disagree with this claim point to cases in which things did not work out so well and present arguments against the deterrence value and effectiveness of armed civilians.
One problem with reaching a rational conclusion about the effectiveness of armed civilians in preventing or mitigating crime is that there is a lack of good data on gun violence. Pointing to some examples in which the good guy with a gun saved the day is relevant, but is still essentially anecdotal evidence. Likewise, pointing to examples in which it did not work out is also relevant, yet still anecdotal. As such, my view is that claims about the value of guns in this regard are largely unsupported—as are claims about their lack of value. However, it is certainly possible to speculate based on the available information and that seems to indicate that the crime fighting value of armed civilians is a rather mixed bag.
As this is being written at the end of November, Donald Trump is still the leading Republican presidential candidate. While some might take the view that this is in spite of the outrageous and terrible things Trump says, a better explanation is that he is doing well because of this behavior. Some regard it as evidence of his authenticity and find it appealing in the face of so many slick and seemingly inauthentic politicians (Hillary Clinton and Jeb Bush are regarded by some as examples of this). Some agree with what Trump says and thus find this behavior very appealing.
Trump was once again in the media spotlight for an outrageous claim. This time, he made a claim about something he believed happened on 9/11: “Hey, I watched when the World Trade Center came tumbling down. And I watched in Jersey City, New Jersey, where thousands and thousands of people were cheering as that building was coming down. Thousands of people were cheering.”
Trump was immediately called on this claim on the grounds that it is completely untrue. While it would be as reasonable to dismiss Trump’s false claim as it would be to dismiss any other claim that is quite obviously untrue, the Washington Post and Politifact undertook a detailed investigation. On the one hand, it seems needless to dignify such a falsehood with investigation. On the other hand, since Trump is the leading Republican candidate, his claims could be regarded as meriting the courtesy of a fact check rather than simple dismissal as being patently ludicrous. As should be expected, while they did find some urban myths and rumors, they found absolutely no evidence supporting Trump’s claim.
Rather impressively, Trump decided to double down on his claim rather than acknowledging that his claim is manifestly false. His confidence has also caused some of his supporters to accept his claim, typically with vague references about having some memory of something that would support Trump’s claim. This is consistent with the way ideologically motivated “reasoning” works: when confronted with evidence against a claim that is part of one’s ideological identity, the strength of the belief becomes even stronger. This holds true across the political spectrum and into other areas as well. For example, people who strongly identify with the anti-vaccination movement not only dismiss the overwhelming scientific evidence against their views, they often double down on their beliefs and some even take criticism as more proof that they are right.
This tendency does make psychological sense—when part of a person’s identity is at risk, it is natural to engage in a form of wishful thinking and accept (or reject) a claim because one really wants the claim to be true (or false). However, wishful thinking is fallacious thinking—wanting a claim to be true does not make it true. As such, this tendency is a defect in a person’s rationality and giving in to it will generally lead to poor decision making.
There is also the fact that since at least the time of Nixon a narrative about liberal media bias has been constructed and implanted into the minds of many. This provides an all-purpose defense against almost any negative claims made by the media about conservatives. Because of this, Trump’s defenders can allege that the media covered up the story (which would, of course, contradict his claim that he saw all those people in another city celebrating 9/11) or that they are now engaged in a conspiracy against Trump.
A rather obvious problem with the claim that the media is engaged in some sort of conspiracy is that if Trump saw those thousands celebrating in New Jersey, then there should be no shortage of witnesses and video evidence. However, there are no witnesses and no such video evidence. This is because Trump’s claim is not true.
While it would be easy to claim that Trump is simply lying, this might not be the case. As discussed in an earlier essay I wrote about presidential candidate Ben Carson’s untrue claims, a claim being false is not sufficient to make it a lie. For example, a person might say that she has $20 in her pocket but be wrong because a pickpocket stole it a few minutes ago. Her claim would be untrue, but it would be a mistake to accuse her of being a liar. While this oversimplifies things quite a bit, for Trump to be lying about this he would need to believe that what he is saying is not true and be engaged in the right (or rather, wrong) sort of intent. The matter of intent is important for obvious reasons, such as distinguishing fiction writers from liars. If Trump believes what he is saying, then he would not be lying.
While it might seem inconceivable that Trump really believes such an obvious untruth, it could very well be the case. Memory, as has been well-established, is notoriously unreliable. People forget things and fill in the missing pieces with bits of fiction they think are facts. This happens to all of us because of our imperfect memories and a need for a coherent narrative. There is also the fact that people can convince themselves that something is true—often by using various rhetorical techniques on themselves. One common way this is done is by repetition—the more often people hear a claim repeated, the more likely it is that they will accept it as true, even when there is no evidence for the claim. This is why the use of repeated talking points is such a popular strategy among politicians, pundits and purveyors. Trump might have told himself his story so many times that he now sincerely believes it and once it is cemented in his mind, it will be all but impossible for any evidence or criticism to dislodge his narrative. If this is the case, in his mind there were such massive celebrations and he probably can even “remember” the images and sounds—such is the power of the mind.
Trump could, of course, be well aware that he is saying something untrue but has decided to stick with his claim. This would make considerable sense—while people are supposed to pay a price for being completely wrong and an even higher price for lying, Trump has been rewarded with more coverage and more support with each new outrageous thing he does or says. Because of this success, Trump has excellent reasons to continue doing what he has been doing. It might take him all the way to the White House.
At the end of October, 2015 the remaining Republican candidates engaged in what the media billed as a debate. Many people have been rather critical of the way the moderators managed the debate and the questions they raised. While some attribute their behavior to political bias and present it as evidence of the dreaded liberal bias of the media, a better explanation is that the main concern of the moderators was to maximize the number of eyeballs watching the event. Substantive questions about the nuances of policy and their answers tend to bore people. Questions that compare Donald Trump to a comic book villain and pit the contestants against each other are entertaining and more likely to draw an audience.
While the quality and intent of the “debate” moderators are matters of interest, my main concern in this essay is the matter of truth and its relevance to politics. I am well aware of the cynical view that truth matters in politics about as much as Jek Porkins mattered in the attack on the Death Star. While people often shrug and make jokes about politicians being liars when the matter of truth in politics comes up, if truth does not matter much in politics, then we are to blame. We accept the untruths of those who share our ideology, though we pounce with ferocity on the lies of the opposing team. In fact, we even pounce on the truths of the opposing team. This is largely due to well-studied psychological biases. Fortunately, there are those who make it their business to assess the claims of the political class. Most notable among them is Politifact.
While politicians grow a bounteous crop of untruths in their minds, I will focus on one interesting example: Ben Carson and his relationship with Mannatech. Carl Quintanilla, one of the moderators, asked Carson about his relationship with this company:
Quintanilla: There’s a company called Mannatech, a maker of nutritional supplements, with which you had a ten-year relationship. They offered claims that they could cure autism, cancer. They paid $7 million dollars to settle a deceptive marketing lawsuit in Texas, and yet your involvement continued. Why?
Carson: Well, it’s easy to answer. I didn’t have an involvement with them. That is total propaganda and this is what happens in our society. Total propaganda. I did a couple of speeches for them, I did speeches for other people, they were paid speeches, it is absolutely absurd to say that I had any kind of relationship with them. Do I take the product? Yes. I think it’s a good product.
While some might regard this as a “gotcha” question or an example of the liberal media bias, it is actually as reasonable to ask this of Carson as it would be to ask Hillary Clinton about her various financial connections. These sorts of questions are legitimate inquiries about judgment, character and the sort of interests that might influence a politician. They also are relevant in terms of what potential scandals might emerge.
Carson was also given a clear test of character: would he bear false witness in regards to his own deeds and say something false or would he set himself free with the truth? His choice was to say he “didn’t have an involvement with them.” Unfortunately, this claim contradicts the known facts. The Wall Street Journal has laid out Carson’s ties to this company. Politifact has, not surprisingly, rated his claim as false. They did not, however, apply the lowest ranking, that of Pants on Fire.
No doubt aware that Carson had made an untrue claim, the moderator endeavored to press him on this point:
Quintanilla: To be fair, you were on the homepage of the website with the logo over your shoulder.
Carson: If somebody put me on their homepage, they did it without my permission.
Quintanilla: Does that not speak to your vetting process or judgement in any way?
Carson: No, it speaks to the fact that I don’t know those… See, they know.
At this point, the audience began to boo Quintanilla, presumably in defense of Carson. This was not particularly surprising: Fox News and conservative politicians have been pushing the “liberal media” and “gotcha” question talking points very effectively and a dislike for non-conservative media is very strong in many conservatives. To be fair to the audience, they might not have known that Carson said something untrue and that the moderator was endeavoring to make that clear—which is what should be expected in a forum that should involve challenging questions.
However, the audience did not need to be aware of the particular facts that made Carson’s claim untrue. There was no need for them to have done research since he refuted his own claim about not being involved with the company in his reply. Carson said, “I did a couple of speeches for them, I did speeches for other people, they were paid speeches, it is absolutely absurd to say that I had any kind of relationship with them.” A reasonable interpretation of the claim that he “didn’t have any kind of relationship with them” is that he had no relationship with the company. Doing paid speeches for the company would certainly seem to be involvement and a relationship. The facts, of course, point to a significant involvement with the company. But, even without those facts, his own claim he was paid by the company shows that he was involved with the company.
It could be claimed that Carson meant something else by “involvement” and “relationship” and he could be defended on semantic grounds—much as Bill Clinton attempted a definitional defense regarding the word “sex” when pressed by the Republicans. To be fair, “involvement” and “relationship” could be taken to require more than being paid to give speeches. To use an analogy, while a hooker might be paid to provide a man with sex, this does not entail that she is involved with him or that she has a relationship with him. As such, the audience could be forgiven for booing the moderator for endeavoring to take Carson to task for saying an untruth. The audience members might have honestly believed that being paid to give speeches does not count as involvement or a relationship and presumably they would extend this same principle to Democratic candidates who have been paid by various interests yet are not “involved” with them.
When pressed by Jim Geraghty of the National Review about this matter, the Carson camp engaged in an intriguing semantic defense involving the path by which the compensation reached Carson and what actually counts as an endorsement. The reader is invited to view the video featuring Dr. Carson talking about Mannatech and judge whether or not this should be considered an endorsement. To my untrained eye, this seems indistinguishable from other paid endorsements I have seen. As such, it seems reasonable to hold that Carson spoke an untruth and some of the audience rushed to defend him for this. Neither is surprising, but both are disappointing. Especially since Carson’s poll numbers are doing just fine. Either his supporters do not believe he said untrue things or they do not care. Both of these explanations are worrying.
As I write this, the number of Republican presidential contenders is in the double digits. While businessman and reality TV show star Donald Trump is still regarded as leading the pack, neurosurgeon Ben Carson has been gaining ground and some polls put him ahead of Trump.
In an earlier essay I did an analysis of how someone like Trump could sustain his lead despite what would have been politically fatal remarks by most other candidates. In this essay I will examine the question of why Trump and Carson are doing well and will do so in the context of the notion of expertise.
From a rational standpoint, a person should consider an elected office as a job and herself as the employer who is engaged in evaluating the candidate. As such, the expertise of the candidate should be a rather important factor. What should also be considered are the personal qualities needed to do the job well, such as dependability, integrity and so on. A person should also consider the extent to which the candidate will act in her self-interest and also the extent to which the candidate will act in accord with her values. While a person’s self-interest and values can be consistent with each other, there can be a conflict. For example, it might be in the self-interest of a wealthy person for taxes on the rich to be lowered, but his values might be such that he favors shifting more of the tax burden to the wealthy.
When considering whether a candidate has the needed expertise or not, the main factors include education, experience, accomplishments, position, and reputation. I will begin by considering education.
While education is usually looked at in terms of formal education, it can also include what is learned outside of the classroom. While there is no degree offered in being-the-president, it is certainly worth considering the education of candidates and its relevance towards the office they are seeking. In this case, the office is the presidency. Carson has an M.D. and is clearly well educated. Trump is also an educated man, albeit not a brain surgeon.
Interestingly, influential elements in the Republican Party have pushed an anti-intellectual and anti-science line over the years. As such, it is hardly a surprise that some Republicans like to compare Obama to a professor and intend for this comparison to be an insult. The anti-science leaning has, in recent years, been very strong in regards to the science of climate change. However, it is well worth noting that the opposition to science and intellectualism seems to be driven primarily by an ideological opposition to specific positions in science. Those on the left are often cast as being in favor of science and intellectualism—in large part, perhaps, due to the fact that scientists and intellectuals tend to lean more left than right. However, a plausible case can be made that some of the pro-science and pro-intellectual leaning of the left also comes from ideology—that is, leftists like the science and intellectualism that matches their world views. As an example, the left tends to be pro-environment and this fits in nicely with the science of climate change. Interestingly, when science goes against a view held by some left-leaning folks, they will attack and reject science with the same sort of “arguments” that are employed by their fellows on the right. One good example of this is the sort of anti-vaccination people who reject the scientific evidence in favor of their ideology.
Given the fact that Carson is a neurosurgeon and Trump has an education, it might be wondered how they are doing so well given the alleged anti-science and anti-intellectual views of some Republicans. In the case of Trump, the answer is easy and obvious: what he says tends to nicely fit into this view. While Trump has authored several books, no one would accuse him of being an intellectual.
Carson’s case is a bit more complicated. On the one hand, he is a well-educated neurosurgeon and is regarded as intelligent and thoughtful. On the other hand, he tends to make remarks that make him appear anti-intellectual and anti-science. Some claim that he is doing this in a calculated way to appeal to the baser nature of some of the Republican base. Others assert that his apparent missteps are due to his lack of experience in the realm of politics. Coincidentally, this leads to the next subject of consideration.
Since the presidency is not an entry level job, it seems reasonable to expect that a candidate have relevant experience in similar jobs. It also seems reasonable to expect that the candidate would be accomplished in relevant ways, have held relevant positions, and have a good reputation that is relevant to the presidency.
This is why many past presidents have been governors, military leaders or in congress before they moved to the oval office. While Trump has had experience in business and reality TV, he has not held political office. While some claim that executive business experience is relevant, it is certainly reasonable to consider that it is not an adequate substitute for experience in a political position. I, for example, would not claim that my experience in chairing committees, captaining athletic teams, and running classes would qualify me to be president.
While Carson has some administrative experience, he is primarily a neurosurgeon. While this is certainly impressive, it does not seem relevant to his ability to be president. I, for example, am also a doctor and have written numerous books—but these would not seem to be large points in favor of me being president.
Given the relatively weak qualifications of Trump and Carson in these areas, it might seem odd that they are currently trouncing former governor Jeb Bush, Senator Marco Rubio, Governor Scott Walker, Senator Rand Paul and former governor John Kasich.
One easy explanation for the success of Trump and Carson is that Republican politicians and pundits adopted a tactic of waging rhetorical war against politicians, insiders, the establishment and government itself. In contrast, being a non-politician, a political outsider, a non-establishment person and against government were lauded as virtues. This tactic seems to have been too successful: the firehose that the Republican strategists struggled to keep targeted on Democrats seems to have slipped from their grip and is now hosing the more qualified candidates while Trump and Carson stay dry. The irony here is that those who are probably the best qualified to actually run the country (such as Rubio, Bush and Kasich) are currently regarded as undesirable precisely because of the qualities that make them qualified.
What might also be ironic is that it seems the Republican rhetoric of attacking politicians for being politicians has helped Bernie Sanders in his bid to become the Democratic candidate. While Sanders is a senator, he is very plausible as an outsider and a non-establishment person. He is even convincing as a non-politician politician: though he has plenty of political experience, he seems to have an authenticity and integrity that is all too uncommon among the polished, packaged and marketed politicians (most notably Hillary Clinton).
As a final point, many pundits take the view that Trump, Carson and Sanders will inevitably fade in the polls and be replaced by the more traditional candidates. Pundits who like to hedge their bets a bit will usually also add that even if Trump or Carson becomes the Republican nominee, they cannot win the general election. The pundits also claim that even if Sanders gets the nomination, he will lose in the general election. Of course, if the 2016 election is Sanders versus Trump or Carson, one of them has to win.