A Philosopher's Blog

Trump’s Enquiring Rhetoric

Posted in Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on May 4, 2016

As this is being written, Donald Trump is the last surviving Republican presidential candidate. His final opponents, Cruz and Kasich, suspended their campaigns, though perhaps visions of a contested convention still haunt their dreams.

Cruz left the field of battle with a bizarre Trump arrow lodged in his buttocks: Trump had attacked Cruz by alleging that Ted Cruz’ father was associated with Lee Harvey Oswald. The basis for this claim was an article in the National Enquirer, a tabloid that has claimed Justice Scalia was assassinated by a hooker working for the CIA. While this tabloid has no credibility, the fact that Trump used it as a source necessitated an investigation into the claim about Cruz’ father. As should be expected, Politifact ranked it as Pants on Fire. I almost suspect that Trump is trolling the media and laughing about how he has forced them to seriously consider and thoroughly investigate claims that are utterly lacking in evidence (such as his claims about televised celebrations in America after the 9/11 attacks).

When confronted about his claim about an Oswald-Cruz connection, Trump followed his winning strategy: he refused to apologize and engaged in some Trump-Fu as his “defense.” When interviewed on ABC, his defense was as follows:  “What I was doing was referring to a picture reported and in a magazine, and I think they didn’t deny it. I don’t think anybody denied it. No, I don’t know what it was exactly, but it was a major story and a major publication, and it was picked up by many other publications. …I’m just referring to an article that appeared. I mean, it has nothing to do with me.”

This response begins with what appears to be a fallacy: he is asserting that if a claim is not denied, then it is true (I am guessing the “they” is either the Cruz folks or the National Enquirer folks). This can be seen as a variation on the classic appeal to ignorance fallacy. In this fallacy, a person infers that if there is a lack of evidence against a claim, then the claim is true. However, proving a claim requires that there be adequate evidence for the claim, not just a lack of evidence against it. There is no evidence that I do not have a magical undetectable pet dragon that only I can sense. This, however, does not prove that I have such a pet.

While a failure to deny a claim might be regarded as suspicious, not denying a claim is not proof the claim is true. It might not even be known that a claim has been made (so it would not be denied). For example, Kanye West is not denying that he plans to become master of the Pan flute—but this is not proof he intends to do this. It can also be a good idea not to lend a claim psychological credence by denying it—some people think that denial of a claim is evidence it is true. Naturally, Cruz did end up denying the claim.

Trump next appears to be asserting the claim is true because it was “major” and repeated. He failed to note the “major” publication is a tabloid that is lacking in credibility. As such, Trump could be seen as engaging in a fallacious appeal to authority. In this case, the National Enquirer lacks the credibility needed to serve as the basis for a non-fallacious argument from authority. Roughly put, a good argument from authority is such that the credibility of the authority provides good grounds for accepting a claim. Trump did not have a good argument from authority.

Trump also uses a fascinating technique of “own and deny.” He does this by launching an attack and then both “owning” and denying it. It is as if he punched Cruz in the face and then said, “it wasn’t me, someone else did the punching. But I will punch Cruz again. Although it wasn’t me.” I am not sure if this is a rhetorical technique or a pathological condition. However, it does allow him the best of both worlds: he can appear tough and authentic by “owning it” yet also appear to not be responsible for the attack. This seems to be quite appealing to his followers, although it is obviously logically problematic: one must either own or deny; both cannot be true.

He also makes use of an established technique:  he gets media attention drawn to a story and then uses this attention to “prove” the story is true (because it is “major” and repeated). While effective, this technique does not prove a claim is true.

Trump was also interviewed on NBC and asked why he attacked Cruz in the face of almost certain victory in Indiana.  In response, he said, “Well, because I didn’t know I had it in the grasp. …I had no idea early in the morning that was — the voting booths just starting — the voting booths were practically not even opened when I made this call. It was a call to a show. And they ran a clip of some terrible remarks made by the father about me. And all I did is refer him to these articles that appeared about his picture. And — you know, not such a bad thing.”

This does provide something of a defense for Trump. As he rightly says, he did not know he would win and he hoped that his attack would help his chances. While the fact that a practice is common does not justify it (this would be the common practice fallacy), Trump seems to be playing within the rules of negative campaigning. That said, the use of the National Enquirer as a source is a new twist, as is linking an opponent to the JFK assassination. This is not to say that Trump is acting in a morally laudable manner, just that he is operating within the rules of the game. To use an analogy, while the brutal hits of football might be regarded as morally problematic, they are within the rules of the game. Likewise, such attacks are within the rules of politics.

However, Trump goes on to commit the “two wrongs make a right” fallacy: since bad things were said about Trump, he concludes that he has the right to strike back. While Trump has every right to respond to attacks, he does not have a right to respond with a completely fabricated accusation.

Trump then moves to downplaying what he did and engages in one of his signature moves: he is not really to blame (he just pointed out the articles). So, his defense is essentially “I am just punching the guy back. But, I really didn’t punch him. I just pointed out that someone else punched him. And that punching was not a bad thing.”

 


My Old Husky & Philosophy III: Experiments & Studies

Posted in Medicine/Health, Philosophy, Reasoning/Logic by Michael LaBossiere on April 8, 2016

While my husky, Isis, and I have both slowed down since we teamed up in 2004, she is doing remarkably well these days. As I often say, pulling so many years will slow down man and dog. While Isis faced a crisis, most likely due to the wear of time on her spine, the steroids seemed to have addressed the pain and inflammation so that we have resumed our usual adventures. Tail up and bright eyed is the way she is now and the way she should be.

In my previous essay I looked at using causal reasoning on a small scale by applying the methods of difference and agreement. In this essay I will look at thinking critically about experiments and studies.

The gold standard in science is the controlled cause to effect experiment. The objective of this experiment is to determine the effect of a cause. As such, the question is “I wonder what this does?” While the actual conducting of such an experiment can be complicated and difficult, the basic idea is rather simple. The first step is to have a question about a causal agent. For example, it might be wondered what effect steroids have on arthritis in elderly dogs. The second step is to determine the target population, which might already be taken care of in the first step—for example, elderly dogs would be the target population. The third step is to pull a random sample from the target population. This sample needs to be representative (that is, it needs to be like the target population and should ideally be a perfect match in miniature). For example, a sample from the population of elderly dogs would ideally include all breeds of dogs, male dogs, female dogs, and so on for all relevant qualities of dogs. The problem with a biased sample is that the inference drawn from the experiment will be weak because the sample might not be adequately like the target population. The sample also needs to be large enough—a sample that is too small will also fail to adequately support the inference drawn from the experiment.

The fourth step involves splitting the sample into the control group and the experimental group. These groups need to be as similar as possible (and can actually be made of the same individuals). The reason they need to be alike is because in the fifth step the experimenters introduce the cause (such as steroids) to the experimental group and the experiment is run to see what difference this makes between the two groups. The final step is getting the results and determining if the difference is statistically significant. This occurs when the difference between the two groups can be confidently attributed to the presence of the cause (as opposed to chance or other factors). While calculating this properly can be complicated, when assessing an experiment (such as a clinical trial) it is easy enough to compare the number of individuals in the sample to the difference between the experimental and control groups. This handy table from Critical Thinking makes this quite easy and also shows the importance of having a large enough sample.

 

Number in Experimental Group            Approximate Figure That the Difference
(with similarly sized control group)    Must Exceed to Be Statistically
                                        Significant (in percentage points)

   10                                   40
   25                                   27
   50                                   19
  100                                   13
  250                                    8
  500                                    6
1,000                                    4
1,500                                    3

 

Many “clinical trials” mentioned in articles and blog posts have very small sample sizes and this often makes their results meaningless. This table also shows why anecdotal evidence is fallacious: a sample size of one is all but completely useless when it comes to an experiment.
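To make this concrete, here is a minimal Python sketch of steps four through six, paired with an approximation of the table. The outcome rates are pure invention, and the threshold formula is my own reconstruction: the table above does not state its derivation, but a worst-case two-proportion margin at roughly the 95% confidence level reproduces its figures to within a few percentage points.

```python
import math
import random

def significance_threshold(n, z=1.96):
    # Assumed reconstruction of the table: worst-case two-proportion
    # margin (p = 0.5) at ~95% confidence, in percentage points.
    # It tracks the table's figures (10 -> ~44, 100 -> ~14, 1,000 -> ~4)
    # but is not necessarily the book's own derivation.
    return 100 * z * math.sqrt(0.5 / n)

def improves(dose):
    # Toy outcome model (invented numbers): 60% of treated dogs
    # improve versus 30% of untreated dogs.
    return random.random() < (0.6 if dose else 0.3)

def run_trial(n):
    # Steps four to six in miniature: equal-sized experimental and
    # control groups, the cause introduced only to the former, and
    # the difference in outcome rates reported in percentage points.
    exp_rate = sum(improves(True) for _ in range(n)) / n
    ctl_rate = sum(improves(False) for _ in range(n)) / n
    return 100 * (exp_rate - ctl_rate)

for n in (10, 100, 1000):
    diff = run_trial(n)
    needed = significance_threshold(n)
    verdict = "significant" if diff > needed else "not significant"
    print(f"n={n:>4}: difference {diff:5.1f} pts, needs {needed:4.1f} -> {verdict}")
```

With ten dogs per group, even this real 30-point effect will usually fail to clear the roughly 44-point bar; with a thousand per group it clears it easily. That is the table’s point about sample size.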

The above table also assumes that the experiment is run correctly: the sample was representative, the control group was adequately matched to the experimental group, the experimenters were not biased, and so on for all the relevant factors. As such, when considering the results of an experiment it is important to consider those factors as well. If, for example, you are reading an article about an herbal supplement for arthritic dogs and it mentions a clinical trial, you would want to check the sample size and the difference between the two groups, and determine whether the experiment was properly conducted. Without this information, you would need to rely entirely on the credibility of the source. If the source is credible and claims that the experiment was conducted properly, then it would be reasonable to trust the results. If the source’s credibility is in question, then trust should be withheld. Assessing credibility is a matter of determining expertise and the goal is to avoid being a victim of a fallacious appeal to authority. Here is a short checklist for determining whether a person (or source) is an expert or not:

 

  • The person has sufficient expertise in the subject matter in question.
  • The claim being made by the person is within her area(s) of expertise.
  • There is an adequate degree of agreement among the other experts in the subject in question.
  • The person in question is not significantly biased.
  • The area of expertise is a legitimate area or discipline.
  • The authority in question must be identified.

 

While the experiment is the gold standard, there are times when it cannot be used. In some cases, this is a matter of ethics: exposing people or animals to something potentially dangerous might be deemed morally unacceptable. In other cases, it is a matter of practicality or necessity. In such cases, studies are used.

One type of study is the non-experimental cause to effect study. This is identical to the cause to effect experiment with one rather critical difference: the experimental group is not exposed to the cause by those running the study. For example, a study might be conducted of dogs who recovered from Lyme disease to see what long term effects it has on them.

The study, as would be expected, runs in the same basic way as the experiment and if there is a statistically significant difference between the two groups (and it has been adequately conducted) then it is reasonable to make the relevant inference about the effect of the cause in question.

While useful, this sort of study is weaker than the experiment. This is because those conducting the study have to take what they get—the experimental group is already exposed to the cause and this can create problems in properly sorting out the effect of the cause in question. As such, while a properly run experiment can still get erroneous results, a properly run study is even more likely to have issues.

A second type of study is the effect to cause study. It differs from the cause to effect experiment and study in that the effect is known but the cause is not. Hence, the goal is to infer an unknown cause from the known effect. It also differs from the experiment in that those conducting the study obviously do not introduce the cause.

This study is conducted by comparing the experimental group and the control group (which are, ideally, as similar as possible) to sort out a likely cause by considering the differences between them. As would be expected, this method is far less reliable than the others since those doing the study are trying to backtrack from an effect to a cause. If considerable time has passed since the suspected cause, this can make the matter even more difficult to sort out. Those conducting the study also have to work with the experimental group they happen to get and this can introduce many complications into the study, making a strong inference problematic.

An example of this would be a study of elderly dogs who suffer from paw knuckling (the paw flips over so the dog is walking on the top of the paw) to determine the cause of this effect. As one might suspect, finding the cause would be challenging—there would be a multitude of potential causes in the history of the dogs ranging from injury to disease. It is also quite likely that there are many causes in play here, and this would require sorting out the different causes for this same effect. Because of such factors, the effect to cause study is the weakest of the three and supports the lowest level of confidence in its results even when conducted properly. This explains why it can be so difficult for researchers to determine the causes of many problems that, for example, elderly dogs suffer from.

In the case of Isis, the steroids that she is taking have been well-studied, so it is quite reasonable for me to believe that they are a causal factor in her remarkable recovery. I do not, however, know for sure what caused her knuckling—there are so many potential causes for that effect. However, the important thing is that she is now walking normally about 90% of the time and her tail is back in the air, showing that she is a happy husky.

 


Philosophy & My Old Husky II: Difference & Agreement

Posted in Medicine/Health, Philosophy, Reasoning/Logic by Michael LaBossiere on April 6, 2016

As mentioned in my previous essay, Isis (my Siberian husky) fell victim to the ravages of time. Once a fast sprinting and long running blur of fur, she now merely saunters along. Still, lesser beasts fear her (and to a husky, all creatures are lesser beasts) and the sun is warm—so her life is still good.

Faced with the challenge of keeping her healthy and happy, I have relied a great deal on what I learned as a philosopher. As noted in the preceding essay, I learned to avoid falling victim to the post hoc fallacy and the fallacy of anecdotal evidence. In this essay I will focus on two basic, but extremely useful methods of causal reasoning.

One of the most useful tools for causal reasoning is the method of difference. This method was famously developed by the philosopher John Stuart Mill and has been a staple in critical thinking classes since way before my time. The purpose of the method is figuring out the cause of an effect, such as a husky suffering from a knuckling paw (a paw that folds over, so the dog is walking on the top of the foot rather than the bottom). The method can also be used to try to sort out the effect of a suspected cause, such as the efficacy of an herbal supplement in treating canine arthritis.

Fortunately, the method is quite simple. To use it, you need at least two cases: one in which the effect has occurred and one in which it has not. In terms of working out the cause, more cases are better—although more cases of something bad (like arthritis pain) would certainly be undesirable from other standpoints. The two cases can actually involve the same individual at different times—it need not be different individuals (though the method works in those cases as well). For example, when sorting out Isis’ knuckling problem the case in which the effect occurred was when Isis was suffering from knuckling and the case in which it did not was when Isis was not suffering from this problem. I also looked into other cases in which dogs suffered from knuckling issues and when they did not.

The cases in which the effect is present and those in which it is absent are then compared in order to determine the difference between the cases. The goal is to sort out which factor or factors made the difference. When doing this, it is important to keep in mind that it is easy to fall victim to the post hoc fallacy—to conclude without adequate evidence that a difference is a cause because the effect occurred after that difference. Avoiding this mistake requires considering that the “connection” between the suspected cause and the effect might be purely a matter of coincidence. For example, Isis ate some peanut butter the day she started knuckling, but it is unlikely that had any effect—especially since she has been eating peanut butter her whole life. It is also important to consider that an alleged cause might actually be an effect caused by a factor that is also producing the effect one is concerned about. For example, a person might think that a dog’s limping is causing the knuckling, but they might both be effects of a third factor, such as arthritis or nerve damage. You must also keep in mind the possibility of reversed causation—that the alleged cause is actually the effect. For example, a person might think that the limping is causing the knuckling, but it might turn out that the knuckling is the cause of the limping.

In some cases, sorting out the cause can be very easy. For example, if a dog slips and falls, then has trouble walking, then the most likely cause is the fall (but it could still be something else—perhaps the fall and walking trouble were caused by something else). In other cases, sorting out the cause can be very difficult. It might be because there are many possible causal factors. For example, knuckling can be caused by many things (apparently even Lyme disease). It might also be because there are no clear differences (such as when a dog starts limping with no clear preceding event). One useful approach is to do research using reliable sources. Another, which is a good idea with pet problems, is to refer to an expert—such as a vet. Medical tests, for example, are useful for sorting out the difference and finding a likely cause.

The same basic method can also be used in reverse, such as determining the effectiveness of a dietary supplement for treating canine arthritis. For example, when Isis started slowing down and showing signs of some soreness, I started giving her senior dog food, glucosamine and some extra protein. What followed was an improvement in her mobility and the absence of the signs of soreness. While the change might have been a mere coincidence, it is reasonable to consider that one or more of these factors helped her. After all, there is some scientific evidence that diet can have an influence on these things. From a practical standpoint, I decided to keep to this plan since the cost of the extras is low, they have no harmful side effects, and there is some indication that they work. I do consider that I could be wrong. Fortunately, I do have good evidence that the steroids Isis has been prescribed work—she made a remarkable improvement after starting the steroids and there is solid scientific evidence that they are effective at treating pain and inflammation. As such, it is rational to accept that the steroids are the cause of her improvement—though this could also be a coincidence.

The second method is the method of agreement. Like difference, this requires at least two cases. Unlike difference, the effect is present in all the cases. In this method, the cases exhibiting the effect (such as knuckling) are considered in order to find a common thread in all the cases. For example, each incident of knuckling would be examined to determine what they all have in common. The common factor (or factors) that is the most plausible cause of the effect is what should be taken as the likely cause. As with the method of difference, it is important to consider such factors as coincidence so as to avoid falling into a post hoc fallacy.

The method of agreement is most often used to form a hypothesis about a likely cause. The next step is, if possible, to apply the method of difference by comparing similar cases in which the effect did not occur. Roughly put, the approach would be to ask what all the cases have in common, then determine if that common factor is absent in cases in which the effect is also absent. For example, a person investigating knuckling might begin by considering what all the knuckling cases have in common and then see if that common factor is absent in cases in which knuckling did not occur.
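Since both methods boil down to set operations over the factors present in each case, they are easy to sketch in code. Here is a toy Python version; the factor names and cases are entirely hypothetical:

```python
def method_of_agreement(cases_with_effect):
    # Candidate causes: factors common to every case showing the effect.
    common = set(cases_with_effect[0])
    for case in cases_with_effect[1:]:
        common &= set(case)
    return common

def method_of_difference(factors_with_effect, factors_without_effect):
    # Factors present when the effect occurred but absent when it did not.
    return set(factors_with_effect) - set(factors_without_effect)

# Hypothetical cases: the factors present during each knuckling episode.
knuckling_cases = [
    {"old age", "arthritis", "long walk", "peanut butter"},
    {"old age", "arthritis", "slipped on floor"},
]
# A hypothetical similar case in which no knuckling occurred.
healthy_case = {"old age", "long walk", "peanut butter"}

candidates = method_of_agreement(knuckling_cases)   # {'old age', 'arthritis'}
candidates = method_of_difference(candidates, healthy_case)
print(candidates)                                   # {'arthritis'}
```

The set arithmetic only narrows the field of candidates; the hard work is building honest case descriptions and then ruling out coincidence, common causes and reversed causation, as discussed above.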

One of the main weaknesses of these methods is that they tend to have very small sample sizes—sometimes just one individual, such as my husky. While these methods are quite useful, they can be supplemented by general causal reasoning in the form of experiments and studies—the subject of the next essay in this series.

 


Philosophy & My Old Husky I: Post Hoc & Anecdotal Evidence

Posted in Medicine/Health, Philosophy, Reasoning/Logic by Michael LaBossiere on April 4, 2016

My Siberian husky, Isis, joined the pack in 2004 at the age of one. It took her a little while to realize that my house was now her house—she set out to chew all that could be chewed, presumably as part of some sort of imperative of destruction. Eventually, she came to realize that she was chewing her stuff—or so I like to say. More likely, joining me on 8-16 mile runs wore the chew out of her.

As the years went by, we both slowed down. Eventually, she could no longer run with me (despite my slower pace) and we went on slower adventures (one does not walk a husky; one goes adventuring with a husky). Despite her advanced age, she remained active—at least until recently. After an adventure, she seemed slow and sore. She cried once in pain, but then seemed to recover. Then she got worse, requiring a trip to the emergency veterinarian (pets seem to know the regular vet hours and to prefer that their woes take place on weekends).

The good news was that the x-rays showed no serious damage—just indications of the wear and tear of age. She also had some unusual test results, perhaps indicating cancer. Because of her age, the main concern was with her mobility and pain—as long as she could get about and be happy, then that was what mattered. She was prescribed an assortment of medications and a follow-up appointment was scheduled with the regular vet. By then, she had gotten worse in some ways—her right foot was “knuckling” over, making walking difficult. This is often a sign of nerve issues. She was prescribed steroids and had to go through a washout period before starting the new medicine. As might be imagined, neither of us got much sleep during this time.

While all stories eventually end, her story is still ongoing—the steroids seemed to have done the trick. She can go on slow adventures and enjoys basking in the sun—watching the birds and squirrels, willing the squirrels to fall from the tree and into her mouth.

While philosophy is often derided as useless, it was actually very helpful to me during this time and I decided to write about this usefulness as both a defense of philosophy and, perhaps, as something useful for others who face similar circumstances with an aging canine.

Isis’ emergency visit was focused on pain management and one drug she was prescribed was Carprofen (more infamously known by the name Rimadyl). Carprofen is an NSAID that is supposed to be safer for canines than those designed for humans (like aspirin) and is commonly used to manage arthritis in elderly dogs. Being a curious and cautious sort, I researched all the medications (having access to professional journals and a Ph.D.  is handy here). As is often the case with medications, I ran across numerous forums which included people’s sad and often angry stories about how Carprofen killed their pets. The typical story involved what one would expect: a dog was prescribed Carprofen and then died or was found to have cancer shortly thereafter. I found such stories worrisome and was concerned—I did not want my dog to be killed by her medicine. But, I also knew that without medication, she would be in terrible pain and unable to move. I wanted to make the right choice for her and knew this would require making a rational decision.

My regular vet decided to go with the steroid option, one that also has the potential for side effects—complete with the usual horror stories on the web. Once again, it was a matter of choosing between the risks of medication and the consequences of doing without. In addition to my research into the medication, I also investigated various other options for treating arthritis and pain in older dogs. She was already on glucosamine (which might be beneficial, but seems to have no serious side effects), but the web poured forth an abundance of options ranging from acupuncture to herbal remedies. I even ran across the claim that copper bracelets could help pain in dogs.

While some of the alternatives had been subject to actual scientific investigation, the majority of the discussions involved a mix of miracle and horror stories. One person might write glowingly about how an herbal product brought his dog back from death’s door while another might claim that after he gave his dog the product, the dog died because of it. Sorting through all these claims, anecdotes and studies turned out to be a fair amount of work. Fortunately, I had numerous philosophical tools that helped a great deal with such cases, specifically of the sort where it is claimed that “I gave my dog X, then he got better/died and X was the cause.” Knowing about two common fallacies is very useful in these cases.

The first is what is known as Post Hoc Ergo Propter Hoc (“after this, therefore because of this”).  This fallacy has the following form:

 

  1. A occurs before B.
  2. Therefore A is the cause of B.

 

This fallacy is committed when it is concluded that one event causes another simply because the proposed cause occurred before the proposed effect. More formally, the fallacy involves concluding that A causes or caused B because A occurs before B and there is not sufficient evidence to actually warrant such a claim.

While cause does precede effect (at least in the normal flow of time), proper causal reasoning, as will be discussed in an upcoming essay, involves sorting out whether A occurring before B is just a matter of coincidence or not. In the case of medication involving an old dog, it could entirely be a matter of coincidence that the dog died or was diagnosed with cancer after the medicine was administered. That is, the dog might have died anyway or might have already had cancer. Without a proper investigation, simply assuming that the medication was the cause would be an error. The same holds true for beneficial effects. For example, a dog might go lame after a walk and then recover after being given an herbal supplement for several days. While it would be tempting to attribute the recovery to the herbs, they might have had no effect at all. After all, lameness often goes away on its own or some other factor might have been the cause.
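A toy simulation shows how easily a post hoc pattern can arise from pure coincidence. Suppose (an invented model, not real data) that lameness always resolves on its own within two weeks, and that owners start a completely inert supplement on day three:

```python
import random

HERB_START_DAY = 3
TRIALS = 10_000

# Invented model: every dog's lameness resolves by itself on a
# uniformly random day between 1 and 14; the supplement does nothing.
recovered_after = sum(
    random.randint(1, 14) >= HERB_START_DAY for _ in range(TRIALS)
)
print(f"{100 * recovered_after / TRIALS:.0f}% of recoveries came after "
      "the supplement was started")  # ~86%, with zero causation
```

On this model, roughly 86% of owners would see recovery follow the supplement even though the supplement does nothing—exactly the pattern that post hoc reasoning mistakes for evidence.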

This is not to say that such stories should be rejected out of hand—it is to say that they should be approached with due consideration that the reasoning involved is post hoc. In concrete terms, if you are afraid to give your dog medicine she was prescribed because you heard of cases in which a dog had the medicine and then died, you should investigate more (such as talking to your vet) about whether there really is a risk of death. As another example, if someone praises an herbal supplement because her dog perked up after taking it, then you should see if there is evidence for this claim beyond the post hoc situation.

Fortunately, there has been considerable research into medications and treatments that provide a basis for making a rational choice. When considering such data, it is important not to be lured into rejecting data by the seductive power of the Fallacy of Anecdotal Evidence.

This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is considered by some to be a variation on hasty generalization.  It has the following forms:

Form One

  1. Anecdote A is told about a member (or small number of members) of Population P.
  2. Conclusion C is drawn about Population P based on Anecdote A.

For example, a person might hear anecdotes about dogs that died after taking a prescribed medication and infer that the medicine is likely to kill dogs.

Form Two

  1. Reasonable statistical evidence S exists for general claim C.
  2. Anecdote A is presented that is an exception to or goes against general claim C.
  3. Conclusion: General claim C is rejected.

For example, the statistical evidence shows that the claim that glucosamine-chondroitin can treat arthritis is, at best, very weakly supported. But, a person might tell a story about how their aging husky “was like a new dog” after she started getting a daily dose of the supplement. To accept this as proof that the data is wrong would be to fall for this fallacy. That said, I do give my dog glucosamine-chondroitin because it is cheap, has no serious side effects and might have some benefit. I am fully aware of the data and do not reject it—I am gambling that it might do my husky some good.

The way to avoid becoming a victim of anecdotal evidence is to seek reliable, objective statistical data about the matter in question (a vet should be a good source). This, I hasten to say, can be quite a challenge when it comes to treatments for pets. In many cases, there are no adequate studies or trials that provide statistical data and all the information available is in the form of anecdotes. One option is, of course, to investigate the anecdotes and try to do your own statistics. So, if the majority of anecdotes indicate something harmful (or something beneficial) then this would be weak evidence for the claim. In any case, it is wise to approach anecdotes with due care—a story is not proof.

Cannot Dump the Trump

Posted in Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on March 9, 2016

As of March, 2016 Donald Trump has continued as the leading Republican presidential candidate. Before his string of victories, Trump was regarded by most pundits as a joke candidate, one that would burn out like a hair fire. After his victories, the Republican establishment and its allies launched a massive (and massively expensive) attack on Trump. So far, this attack has failed and the Republican elite have been unable to dump Trump.

It would be foolish to claim that Trump’s nomination is inevitable. But, it would be equally foolish to cling to the belief that Trump will be taken down by the establishment or that he will gaffe himself to political death. While I have examined how Trump magnificently filled a niche crafted by the Republican party, in this essay I will examine why Trump can probably not be dumped.

As I have argued before, the Republican party is largely responsible for creating the opening for Trump. They have also made it very difficult for attacks on Trump to succeed. This is because the party has systematically undermined (at least for many Republicans) the institutions that could effectively criticize Trump. These include the media, the political establishment, the academy, and the church (broadly construed).

Since about the time of Nixon, the Republican party has engaged in a systematic campaign to cast the mainstream media as liberal and biased. This has been a rather effective campaign (thanks, in part, to the media itself) and there is considerable distrust and distaste regarding the media among Republicans. Trump has worked hard to reinforce this view—lashing out at the media that has enabled him to grow so very fat politically.

While this sustained demolition of the media has paid handsome dividends for the Republicans, the Republicans who oppose Trump now find themselves victims of their own successful tactic: Trump is effectively immune to criticism coming from the media. When attacked, even by conservative media, he can simply avail himself of the well-worn Republican talking points. This result is exactly as should be expected: degrading an important public institution cannot be good for the health of a democratic state.

While modern Republicans have preached small government, the party has firmly embraced the anti-establishment position in recent years. In the past, this approach has been rather ironic: well-entrenched Republicans would wax poetic about their outsider status in order to get re-elected term after term. While the establishment no doubt hoped it could keep milking the inconsistent cow of outside insiders, Trump has turned this rhetoric against the established insiders. This time, the insiders are the Republicans.

This provides Trump with a readymade set of tools to counter criticisms and attacks from the Republican establishment—tools that this establishment forged. As such, Trump has little to fear from the attacks of the establishment Republicans. In fact, he should welcome their attacks: each criticism can be melted down and remade as support for Trump being an anti-establishment outsider.

While there have been some significant conservative intellectuals and scholars, the Republican party has made a practice of bashing the academy (colleges, universities and intellectuals in general) as being a foul pit of liberalism. There has also been a sustained campaign against reason and expertise—with Republicans actually making ludicrous claims that ignorance is better than knowledge and that expertise is a mark of incompetence.

This approach served the Republicans fairly well when it came to certain political matters, such as climate change. However, this discrediting of the academy in the eyes of many Republican voters has served to protect Trump. Any criticism of Trump from academics or intellectuals can be dismissed with the same rhetorical weapons deployed so often in the past by the same Republicans who now weep at the prospect of a Trump victory. While the sleep of reason breeds monsters, the attack on reason has allowed Trump to flourish. This should be taken as a warning sign of what can follow Trump: when the rational defenses of society are weakened, monsters are free to take the stage.

While the Republican party often embraces religion, this embrace is often limited to anti-abortion, anti-contraception and anti-gay matters. When religious leaders, such as Pope Francis, stray beyond this zone and start taking God’s command to love each other as He loves us seriously, the Republican party generally reacts with hostility. Witness, for example, the incredibly ironic calls of the Republicans for the Pope to keep religion out of politics.

In general, the Republican party has been fine with religion that matches a conservative social agenda and does not stray into positive ethics of social responsibility and moral criticism of an ethics of selfishness (what philosophers call ethical egoism). Straying beyond this, as noted above, results in hostile attacks. To this end, the party has taken steps to undermine these aspects of religion.

One impact of this has been that Trump is able to use these same tools against religious and moral criticisms. He has even been able to go head-to-head with the Pope, thus showing that even religion cannot oppose Trump. Interestingly, many evangelical leaders have condemned Trump—although their flocks seem to rather like him. Since the conservatives like to cast the left as being the foe of religion and ethics, there is considerable irony here.

In addition to taking advantage of the systematic degrading of critical institutions, Trump can also count on the fact that the methods used against him will most likely be ineffective. Some pundits and some establishment members have endeavored to use rational argumentation against Trump. Mitt Romney, for example, has presented a well-reasoned critique of Trump that is right on the mark. Trump responded by asserting that Romney would have been happy to blow him in 2012.

The argumentation approach is not working and will almost certainly not work. As Aristotle argued, the vast majority of people are not convinced by “arguments and fine ideals” but are ruled by their emotions. In fact, all the people are ruled by emotions some of the time and some of the people are ruled by emotions all the time. As such, it is no surprise that philosophers have established that reason is largely ineffective as a tool of persuasion—it is trumped by rhetoric and fallacies (that is, no logic and bad logic). Bringing logic to an emotion fight is a losing proposition.

There is also the fact that the Republican party has, as noted above, consistently bashed intellectualism and expertise—thus making it even less likely that reasoning will be effective against Trump in regards to turning his supporters against him.

Political commitment, like being a sports fan, is also more a matter of irrational feeling than of considered logic. Just as one is unlikely to get a dedicated Cubs fan to abandon her team via syllogisms, one is not going to turn a Trump supporter by logic. Ditto for Sanders and Hillary supporters. This is not to say their supporters are stupid, just that politics is not a game of logic.

Since Trump is effectively immune to argumentation, his opponents might try to use rhetoric and emotion against him. His Republican opponents face a serious challenge here: they are simply not as good at it as Trump. Trump has also managed to get the battle for the nomination down to the level of basic cable stand-up comedy or a junior high locker room: dick jokes, blow job innuendo, and other presidential subjects. Trump is a master, albeit short-fingered, vulgarian.  Only fellow masters and fools go up against a master vulgarian in vulgarity. While Rubio has tried some stand-up against Trump, he cannot match the man. Cruz and Kasich also lack what it takes to get into the pit with Trump and if they do, it will simply be a case of grabbing a fecal-baby (like the metaphorical tar baby, but worse).

One avenue is to avoid the pit and employ high road rhetoric and emotion against Trump. Unfortunately, the Republican contenders seem utterly inept at doing this and Trump is quite skilled at throwing rhetorical feces on anything that catches his eye. As such, it seems that Trump will not be dumped. What remains to be seen is whether or not these factors will be as effective in the general election against Hillary or Sanders. Assuming, of course, that Trump gets the nomination.

 


Believing What You Know is Not True

Posted in Epistemology, Philosophy, Reasoning/Logic by Michael LaBossiere on February 5, 2016

“I believe in God, and there are things that I believe that I know are crazy. I know they’re not true.”

Stephen Colbert

While Stephen Colbert ended up as a successful comedian, he originally planned to major in philosophy. His past occasionally returns to haunt him with digressions from the land of comedy into the realm of philosophy (though detractors might claim that philosophy is comedy without humor; but that is actually law). Colbert has what seems to be an odd epistemology: he regularly claims that he believes in things he knows are not true, such as guardian angels. While it would be easy enough to dismiss this claim as merely comedic, it does raise many interesting philosophical issues. The main and most obvious issue is whether a person can believe in something they know is not true.

While a thorough examination of this issue would require a deep examination of the concepts of belief, truth and knowledge, I will take a shortcut and go with intuitively plausible stock accounts of these concepts. To believe something is to hold the opinion that it is true. A belief is true, in the common sense view, when it gets reality right—this is the often maligned correspondence theory of truth. The stock simple account of knowledge in philosophy is that a person knows that P when the person believes P, P is true, and the belief in P is properly justified. The justified true belief account of knowledge has been savagely bloodied by countless attacks, but shall suffice for this discussion.

Given this basic analysis, it would seem impossible for a person to believe in something they know is not true. This would require that the person believes something is true when they also believe it is false. To use the example of God, a person would need to believe that it is true that God exists and false that God exists. This would seem to commit the person to believing that a contradiction is true, which is problematic because a contradiction is always false.

One possible response is to point out that the human mind is not beholden to the rules of logic—while a contradiction cannot be true, there are many ways a person can hold to contradictory beliefs. One possibility is that the person does not realize that the beliefs contradict one another and hence they can hold to both.  This might be due to an ability to compartmentalize the beliefs so they are never in the consciousness at the same time or due to a failure to recognize the contradiction. Another possibility is that the person does not grasp the notion of contradiction and hence does not realize that they cannot logically accept the truth of two beliefs that are contradictory.

While these responses do have considerable appeal, they do not appear to work in cases in which the person actually claims, as Colbert does, that they believe something they know is not true. After all, making this claim does require considering both beliefs in the same context and, if the claim of knowledge is taken seriously, that the person is aware that the rejection of the belief is justified sufficiently to qualify as knowledge. As such, when a person claims that they believe something they know is not true, that person would seem to be either not telling the truth or ignorant of what the words mean. Or perhaps there are other alternatives.

One possibility is to consider the power of cognitive dissonance management—a person could know that a cherished belief is not true, yet refuse to reject the belief while being fully aware that this is a problem. I will explore this possibility in the context of comfort beliefs in a later essay.

Another possibility is to consider that the term “knowledge” is not being used in the strict philosophical sense of a justified true belief. Rather, it could be taken to refer to strongly believing that something is true—even when it is not. For example, a person might say “I know I turned off the stove” when, in fact, they did not. As another example, a person might say “I knew she loved me, but I was wrong.” What they mean is that they really believed she loved him, but that belief was false.

Using this weaker account of knowledge, a person can believe in something that they know is not true. This just involves believing in something that one also strongly believes is not true. In some cases, this is quite rational. For example, when I roll a twenty-sided die, I strongly believe that I will not roll a 20. However, I also believe that I will roll a 20, and that belief has a 5% chance of being true. As such, I can believe what I know is not true—assuming that this means that I can believe in something that I believe is less likely than another belief.

People are also strongly influenced by emotional and other factors that are not based in a rational assessment. For example, a gambler might know that their odds of winning are extremely low and thus know they will lose (that is, have a strongly supported belief that they will lose) yet also strongly believe they will win (that is, feel strongly about a weakly supported belief). Likewise, a person could accept that the weight of the evidence is against the existence of God and thus know that God does not exist (that is, have a strongly supported belief that God does not exist) while also believing strongly that God does exist (that is, having considerable faith that is not based in evidence).

 


Threat Assessment II: Demons of Fear & Anger

Posted in Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on December 18, 2015

In the previous essay on threat assessment I looked at the influence of availability heuristics and fallacies that directly relate to errors in reasoning about statistics and probability. This essay continues the discussion by exploring the influence of fear and anger on threat assessment.

As noted in the previous essay, a rational assessment of a threat involves properly considering how likely it is that a threat will occur and, if it occurs, how severe the consequences might be. As might be suspected, the influence of fear and anger can cause people to engage in poor threat assessment that overestimates the likelihood of a threat or the severity of the threat.

One common starting point for anger and fear is the stereotype. Roughly put, a stereotype is an uncritical generalization about a group. While stereotypes are generally thought of as being negative (that is, attributing undesirable traits such as laziness or greed), there are also positive stereotypes. They are not positive in that the stereotyping itself is good. Rather, the positive stereotype attributes desirable qualities, such as being good at math or skilled at making money. While it makes sense to think that stereotypes that provide a foundation for fear would be negative, they often include a mix of negative and positive qualities. For example, a feared group might be cast as stupid, yet somehow also incredibly cunning and dangerous.

After recent terrorist attacks, many people in the United States have embraced negative stereotypes about Muslims, such as the idea that they are all terrorists. This sort of stereotyping leads to the same sort of mistakes that arise from hasty generalizations: reasoning about a threat based on stereotypes will tend to lead to an error in assessment. The defense against a stereotype is to seriously inquire whether the stereotype is true or not.

This stereotype has been used as a base (or fuel) for a stock rhetorical tool, that of demonizing. Demonizing, in this context, involves portraying a group as evil and dangerous. This can be seen as a specialized form of hyperbole in that it exaggerates the evil of the group and the danger it represents. Demonizing is often combined with scapegoating—blaming a person or group for problems they are not actually responsible for. A person can demonize on her own or be subject to the demonizing rhetoric of others.

Demonizing presents a clear threat to rational threat assessment. If a group is demonized successfully, it will be (by definition) regarded as more evil and dangerous than it really is. As such, both the assessment of the probability and severity of the threat will be distorted. For example, the demonization of Muslims by various politicians and pundits influences some people to make errors in assessing the danger presented by Muslims in general and Syrian refugees in particular.

The defense against demonizing is similar to the defense against stereotypes—a serious inquiry into whether the claims are true or are, in fact, demonizing. It is worth noting that what might seem to be demonizing might be an accurate description. This is because demonizing is, like hyperbole, exaggerating the evil of and danger presented by a group. If the description is true, then it would not be demonizing. Put informally, describing a group as evil and dangerous need not be demonizing. For example, this description would match the Khmer Rouge.

While stereotyping and demonizing are mere rhetorical devices, there are also fallacies that distort threat assessment. Not surprisingly, one of these is scare tactics (also known as appeal to fear). This fallacy involves substituting something intended to create fear in the target in place of evidence for a claim. While scare tactics can be used in other ways, it can be used to distort threat assessment. One aspect of its distortion is the use of fear—when people are afraid, they tend to overestimate the probability and severity of threats. Scare tactics is also used to feed fear—one fear can be used to get people to accept a claim that makes them even more afraid.

One thing that is especially worrisome about scare tactics in the context of terrorism is that in addition to making people afraid, it is also routinely used to “justify” encroachments on rights, massive spending, and the abandonment of important moral values. While courage is an excellent defense against this fallacy, asking two important questions also helps. The first is to ask “should I be afraid?” and the second is to ask “even if I am afraid, is the claim actually true?” For example, scare tactics has been used to “support” the claim that Syrian refugees should not be allowed into the United States. In the face of this tactic, one should inquire whether or not there are grounds to be afraid of Syrian refugees and also inquire into whether or not an appeal to fear justifies the proposed ban (obviously, it does not).

It is worth noting that just because something is scary or makes people afraid it does not follow that it cannot serve as legitimate evidence in a good argument. For example, the possibility of a fatal head injury from a motorcycle accident is scary, but is also a good reason to wear a helmet. The challenge is sorting out “judgments” based merely on fear and judgments that involve good reasoning about scary things.

While fear makes people behave irrationally, so does anger. While anger is an emotion and not a fallacy, it does provide the fuel for the appeal to anger. This fallacy occurs when something that is intended to create anger is substituted for evidence for a claim. For example, a demagogue might work up a crowd’s anger at illegal migrants to get them to accept absurd claims about building a wall along a massive border.

Like scare tactics, the use of an appeal to anger distorts threat assessment. One aspect is that when people are angry, they tend to reason poorly about the likelihood and severity of a threat. For example, the crowd that is enraged against illegal migrants might greatly overestimate the likelihood that the migrants are “taking their jobs” and the extent to which they are “destroying America.” Another aspect is that the appeal to anger, in the context of public policy, is often used to “justify” policies that encroach on rights and do other harms. For example, when people are angry about a mass shooting, proposals to limit gun rights often follow that have no actual relevance to the incident. As another example, the anger at illegal migrants is often used to “justify” policies that would actually be harmful to the United States. As a third example, appeals to anger are often used to justify policies that would be ineffective at addressing terrorism and would do far more harm than good (such as the proposed ban on all Muslims).

It is important to keep in mind that if a claim makes a person angry, it does not follow that the claim cannot be evidence for a conclusion. For example, a person who learns that her husband is having an affair with an underage girl would probably be very angry. But, this would also serve as good evidence for the conclusion that she should report him to the police and then divorce him. As another example, the fact that illegal migrants are here illegally and this is often simply tolerated can make someone mad, but this can also serve as a premise in a good argument in favor of enforcing (or changing) the laws.

One defense against appeal to anger is good anger management skills. Another is to seriously inquire into whether or not there are grounds to be angry and whether or not any evidence is being offered for the claim. If all that is offered is an appeal to anger, then there is no reason to accept the claim on the basis of the appeal.

The rational assessment of threats is important for practical and moral reasons. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. There is also the concern about the harm of creating fear and anger that are unfounded. In addition to the psychological harm to individuals that arise from living in fear and anger, there is also the damage stereotyping, demonizing, scare tactics and appeal to anger do to society as a whole. While anger and fear can unify people, they most often unify by dividing—pitting us against them.

As in my previous essay, I urge people to think through threats rather than giving in to the seductive demons of fear and anger.

 

 


Threat Assessment I: A Vivid Spotlight

Posted in Ethics, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on December 16, 2015

When engaged in rational threat assessment, there are two main factors that need to be considered. The first is the probability of the threat. The second is, very broadly speaking, the severity of the threat. These two can be combined into one sweeping question: “how likely is it that this will happen and, if it does, how bad will it be?”

Making rational decisions about dangers involves considering both of these factors. For example, consider the risks of going to a crowded area such as a movie theater or school. There is a high probability of being exposed to the cold virus, but it is a very low severity threat. There is an exceedingly low probability that there will be a mass shooting, but it is a high severity threat since it can result in injury or death.

While humans have done a fairly good job at surviving, this seems to have been despite our amazingly bad skills at rational threat assessment. To be specific, the worry people feel in regards to a threat generally does not match up with the actual probability of the threat occurring. People do seem somewhat better at assessing the severity, though they are also often in error about this.

One excellent example of poor threat assessment is in regards to the fear Americans have in regards to domestic terrorism. As of December 15, 2015 there have been 45 people killed in the United States in attacks classified as “violent jihadist attacks” and 48 people killed in attacks classified as “far right wing attacks” since 9/11/2001.  In contrast, there were 301,797 gun deaths from 2005-2015 in the United States and over 30,000 people are killed each year in motor vehicle crashes in the United States.
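Putting these figures side by side as rough annual per-capita risks makes the mismatch vivid. The death counts are those cited above; the population figure (about 320 million) and the time spans are my own back-of-the-envelope assumptions:

```python
US_POPULATION = 320_000_000  # assumed, not from the figures above

terror_deaths_per_year = (45 + 48) / 14.2  # 9/11/2001 to 12/2015, ~14.2 years
gun_deaths_per_year = 301_797 / 11         # 2005-2015
vehicle_deaths_per_year = 30_000           # per year

for label, per_year in [
    ("domestic terrorism", terror_deaths_per_year),
    ("guns", gun_deaths_per_year),
    ("motor vehicles", vehicle_deaths_per_year),
]:
    print(f"{label:>18}: about 1 death per {US_POPULATION / per_year:,.0f} people per year")
```

On these rough numbers, the annual risk of dying in a motor vehicle crash is several thousand times greater than the annual risk of dying in a domestic terrorist attack—yet the fear runs the other way.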

Despite the incredibly low likelihood of a person being killed by an act of terrorism in the United States, many people are terrified by terrorism (which is, of course, the goal of terrorism) and have become rather focused on the matter since the murders in San Bernardino. Although there have been no acts of terrorism committed by refugees in the United States, many people are terrified of refugees; this has led to calls to refuse Syrian refugees, and Donald Trump has famously called for a ban on all Muslims entering the United States.

Given that an American is vastly more likely to be killed while driving than killed by a terrorist, it might be wondered why people are so incredibly bad at this sort of threat assessment. The answer, at least in regards to fear vastly out of proportion to the probability, is easy enough—it involves a cognitive bias and some classic fallacies.

People follow general rules when they estimate probabilities, and the rules we use unconsciously are called heuristics. While the right way to estimate probability is to use proper statistical methods, people generally rely instead on the availability heuristic and fall victim to the bias it creates. The idea is that a person unconsciously assigns a probability to something based on how often they think of that sort of event. While an event that occurs often will tend to be thought of often, the fact that something is often thought of does not make it more likely to occur.

After an incident of domestic terrorism, people think about terrorism far more often and thus tend to unconsciously believe that the chance of terrorism occurring is far higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of one occurring are high—even though the probability is incredibly low (driving to the beach is vastly more likely to kill you than a shark is). The defense against this bias is to find reliable statistical data and use that as the basis for inferences about threats—that is, to think it through rather than trying to feel through it. This is, of course, very difficult: people tend to regard their feelings, however unwarranted, as the best evidence, even though they are usually the worst.

People are also misled about probability by various fallacies. One is the spotlight fallacy. The spotlight fallacy is committed when a person uncritically assumes that all (or many) members or cases of a certain class or type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention is focused on that fact, leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the exact sort of mistake that would occur if it were inferred that most Christians are terrorists because the media covered a terrorist who was Christian (who shot up a Planned Parenthood). If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a domestic terrorist attack by Muslims.

Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. This fallacy is similar to hasty generalization and a similar sort of error is committed, namely drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample.
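To see why a tiny sample supports so little, consider a rough 95% margin of error for estimating a population proportion. The sketch below uses the standard worst-case normal approximation; that approximation is itself unreliable for very small samples, which only strengthens the point.

```python
import math

# Rough 95% margin of error for estimating a population proportion
# from a sample of size n (conservative worst case, p = 0.5).
def margin_of_error(n: int) -> float:
    return 1.96 * math.sqrt(0.25 / n)

for n in (1, 3, 30, 1000):
    print(f"n = {n:4d}: margin of error is about ±{margin_of_error(n):.0%}")
```

An anecdote is, in effect, a sample of one, and whatever it suggests about a population comes with a margin of error so wide as to be nearly useless.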

People often fall victim to this fallacy because stories and anecdotes tend to have more psychological influence than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence in favor of said anecdote. Not surprisingly, people most commonly commit this fallacy because they want to believe that what is true in the anecdote is true for the whole population.

In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of domestic terrorism or tell the story of a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring. For example, people point to the claim that one of the terrorists in Paris masqueraded as a refugee and infer that refugees pose a great threat to the United States. Or they tell the story of the one attacker in San Bernardino who arrived in the States on a K-1 (“fiancé”) visa and draw unwarranted conclusions about the danger of the visa system (which is used by about 25,000 people a year).
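The K-1 example also illustrates the base rate problem. Using the essay's own figure of about 25,000 visas a year, and counting only a single year's worth of visas (which understates the denominator), a sketch of the arithmetic looks like this:

```python
# Base rate suggested by the single K-1 case cited above, counting
# only one year's worth of visas (which understates the denominator).
attackers = 1
visas_per_year = 25_000

rate = attackers / visas_per_year
print(f"about {rate:.4%} of one year's K-1 visa holders")  # about 0.0040%
```

Drawing a conclusion about the danger of the visa system from one case in tens of thousands is the anecdote doing all the work.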

One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is particularly vivid or dramatic does not make the event more likely to occur, especially in the face of significant statistical evidence to the contrary.

People often accept this sort of “reasoning” because particularly vivid or dramatic cases tend to make a very strong impression on the human mind. For example, mass shootings by domestic terrorists are vivid and awful, so it is hardly surprising that people feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more a person feels that it will occur.

It should be kept in mind that taking into account the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide to never go sky diving because the effects of an accident can be very, very dramatic. If he knows that, statistically, the chances of an accident happening are very low but considers even a small risk to be unacceptable, then he would not be making this error in reasoning. This then becomes a matter of value judgment: how much risk is a person willing to tolerate relative to the severity of the potential harm?

The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than what the facts show.

Such rational assessment of threats is rather important for both practical and moral reasons. The matter of terrorism is no exception to this. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. For example, spending billions to counter a minuscule threat while spending little on leading causes of harm would be irrational (if the goal is to protect people from harm). There is also the concern about the harm of creating fear that is unfounded. In addition to the psychological harm to individuals, there is also the damage to the social fabric. There has already been an increase in attacks on Muslims in America, and people are seriously considering abandoning core American values, such as freedom of religion and being good Samaritans.

In light of the above, I urge people to think rather than feel their way through their concerns about terrorism. Also, I urge people to stop listening to Donald Trump. He has the right of free expression, but people also have the right of free listening.



Doubling Down

Posted in Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on December 11, 2015
[Image: a diagram of cognitive dissonance theory. Photo credit: Wikipedia]

One interesting phenomenon is the tendency of people to double down on beliefs. For those not familiar with doubling down, this occurs when a person is confronted with evidence against a beloved belief and her belief, far from being weakened by the evidence, is strengthened.

One rather plausible explanation of doubling down rests on Leon Festinger’s classic theory of cognitive dissonance. Roughly put, when a person has a belief that is threatened by evidence, she has two main choices. The first is to adjust her belief in accord with the evidence. If the evidence is plausible and strongly supports the logical inference that the belief is not true, then the rational thing to do is reject the old belief. If the evidence is not plausible or does not strongly support the logical inference that the belief is untrue, then it is rational to stick with the threatened belief on the grounds that the threat is not much of a threat.

As might be suspected, the assessment of what is plausible evidence can be problematic. In general terms, assessing evidence involves considering how it matches one’s own observations, one’s background information about the matter, and credible sources. This assessment can merely push the matter back: the evidence for the evidence will also need to be assessed, which serves to fuel some classic skeptical arguments about the impossibility of knowledge. The idea is that every belief must be assessed and this would lead to an infinite regress, thus making knowing whether a belief is true or not impossible. Naturally, retreating into skepticism will not help when a person is responding to evidence against a beloved belief (unless the beloved belief is a skeptical one)—the person wants her beloved belief to be true. As such, someone defending a beloved belief needs to accept that there is some evidence for the belief—even if the evidence is faith or some sort of revelation.

In terms of assessing the reasoning, the matter is entirely objective if it is deductive logic. Deductive logic is such that if an argument is doing what it is supposed to do (be valid), then if the premises are true, the conclusion must be true. Deductive arguments can be assessed by such things as truth tables, Venn diagrams and proofs; thus the reasoning is objectively good or bad.

Inductive reasoning is a different matter. While the premises of an inductive argument are supposed to support the conclusion, inductive arguments are such that true premises only make (at best) the conclusion likely to be true. Unlike deductive arguments, inductive arguments vary greatly in strength, and while there are standards of assessment, reasonable people can disagree about the strength of an inductive argument. People can also embrace skepticism here, specifically the problem of induction: even when an inductive argument has all true premises and the reasoning is as good as inductive reasoning gets, the conclusion could still be false. The obvious problem with trying to defend a beloved belief with the problem of induction is that it also cuts against the beloved belief—while any inductive argument against the belief could have a false conclusion, so could any inductive argument for it.

As such, a person who wants to hold to a beloved belief in a way that is justified would seem to need to accept argumentation. Naturally, a person can embrace other ways of justifying beliefs—the challenge is showing that these ways should be accepted. This would seem, ironically, to require argumentation.
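The point that deductive validity is objectively checkable can be made concrete with a brute-force truth table. The sketch below tests two argument forms by enumerating every assignment of truth values; it is a toy for two propositional variables, not a general proof system.

```python
from itertools import product

# An argument form is valid iff no row of the truth table makes all
# premises true while the conclusion is false.
def is_valid(premises, conclusion):
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # counterexample row found
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: "if p then q; p; therefore q" -- valid.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))   # True

# Affirming the consequent: "if p then q; q; therefore p" -- invalid.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p))   # False
```

No such mechanical test exists for inductive strength, which is part of why reasonable people can disagree about it.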

A second option is to reject the evidence without undergoing the process of honestly assessing the evidence and rationally considering the logic of the arguments. If a belief is very important to a person, perhaps even central to her identity, then the cost of giving up the belief would be very high. If the person thinks (or just feels) that the evidence and reasoning cannot be engaged fairly without risking the belief, then the person can simply reject the evidence and reasoning using various techniques of self-deception and bad logic (fallacies are commonly employed in this task).

This rejection costs less psychologically than engaging the evidence and reasoning, but is often not free. Since the person probably has some awareness of the self-deception, it needs to be psychologically “justified” and this seems to result in the person strengthening her commitment to the belief. People seem to have all sorts of interesting cognitive biases that help out here, such as confirmation bias and other forms of motivated reasoning. These can be rather hard to defend against, since they derange the very mechanisms that are needed to avoid them.

One interesting way people “defend” their beliefs is by regarding the evidence and opposing arguments as an unjust attack, which strengthens their resolve in the face of perceived hostility. After all, people fight harder when they believe they are under attack. Some people even infer that they must be right because they are being criticized. As they see it, if they were not right, people would not be trying to show that they are in error. This is rather problematic reasoning—as shown by the fact that people do not infer that they are in error just because people are supporting them.

People also, as John Locke argued in his work on enthusiasm, treat how strongly they feel about a belief as evidence for its truth. When people are challenged, they typically feel angry, and this strong emotion intensifies their conviction. Hence, when they “check” on the truth of the belief using the measure of feeling, they feel even more strongly that it is true. However, how they feel about it (as Locke argued) is no indication of its truth. Or falsity.

As a closing point, one intriguing rhetorical tactic is to accuse a person who disagrees with one of doubling down. This accusation, after all, comes with the insinuation that the person is in error and is thus irrationally holding to a false belief. The reasonable defense is to show that evidence and arguments are being used in support of the belief. The unreasonable counter is to employ the very tactics of doubling down and refuse to accept such a response. That said, it is worth considering that one person’s double down is often another person’s considered belief. Or, as it might be put, I support my beliefs with logic. My opponents double down.



Trump & Truthful Hyperbole

Posted in Ethics, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on December 4, 2015

In The Art of the Deal, Donald Trump calls one of his rhetorical tools “truthful hyperbole.” He both defends and praises it as “an innocent form of exaggeration — and a very effective form of promotion.” As a promoter, Trump made extensive use of this technique. Now he is using it in his bid for the presidency.

Hyperbole is an extravagant overstatement and it can be either positive or negative in character. When describing himself and his plans, Trump makes extensive use of positive hyperbole: he is the best and every plan of his is the best. He also makes extensive use of negative hyperbole—often to a degree that seems to cross over from exaggeration to fabrication. In any case, his concept of “truthful hyperbole” is well worth considering.

From a logical standpoint, “truthful hyperbole” is an impossibility. This is because hyperbole is, by definition, not true. Hyperbole is not merely a matter of using extreme language. After all, extreme language might accurately describe something. For example, describing Daesh as monstrous and evil would be spot on. Hyperbole is a matter of exaggeration that goes beyond the actual facts. For example, describing Donald Trump as monstrously evil would be hyperbole. As such, hyperbole is always untrue. Because of this, the phrase “truthful hyperbole” says the same thing as “accurate exaggeration”, which nicely reveals the problem.

Trump, a brilliant master of rhetoric, is right about the rhetorical value of hyperbole—it can have considerable psychological force. It, however, lacks logical force—it provides no logical reason to accept a claim. Trump also seems to be right in that there can be innocent exaggeration. I will now turn to the ethics of hyperbole.

Since hyperbole is by definition untrue, there are two main concerns. One is how far the hyperbole deviates from the truth. The other is whether the exaggeration is harmless or not. I will begin with consideration of the truth.

While a hyperbolic claim is necessarily untrue, it can deviate from the truth in varying degrees. As with fish stories, there does seem to be some moral wiggle room in regards to proximity to the truth. While there is no exact line (to require that would be to fall into the line drawing fallacy) that defines the exact boundary of morally acceptable exaggeration, some untruths go beyond that line. This line varies with the circumstances—the ethics of fish stories, for example, differs from the ethics of job interviews.

While hyperbole is untrue, it does have to have at least some anchor in the truth. If it does not, then it is not exaggeration but fabrication. This is the difference between being close to the truth and being completely untrue. Naturally, hyperbole can be mixed in with fabrication.

For example, if it is claimed that some people in America celebrated the terrorism of 9/11, then that is almost certainly true—there was surely at least one person who did this. If someone claims that dozens of people celebrated in public in America on 9/11 and this was shown on TV, then this might be an exaggeration (we do not know how many people in America celebrated) but it certainly includes a fabrication (the TV part). If it is claimed that hundreds did so, the exaggeration might be considerable—but it still contains a key fabrication. When the claim reaches thousands, the exaggeration might be extreme. Or it might not—thousands might have celebrated in secret. However, the claim that people were seen celebrating in public and video existed for Trump to see is false. So, his remarks might be an exaggeration, but they definitely contain fabrication. This could, of course, lead to a debate about the distinction between exaggeration and fabrication. For example, suppose that someone filmed himself celebrating on 9/11 and showed it to someone else. This could be “exaggerated” into the claim that thousands celebrated on video and people saw it. However, saying this is an exaggeration would seem to be an understatement. Fabrication would seem the far better fit in this hypothetical case.

One way to help determine the ethical boundaries of hyperbole is to consider the second concern, namely whether the hyperbole (untruth) is harmless or not. Trump is right to claim there can be innocent forms of exaggeration. This can be taken as exaggeration that is morally acceptable and can be used as a basis to distinguish such hyperbole from lying.

One realm in which exaggeration can be quite innocent is that of storytelling. Aristotle, in the Poetics, notes that “everyone tells a story with his own addition, knowing his hearers like it.” While a lover of truth, Aristotle recognized the role of untruth in good storytelling, saying that “Homer has chiefly taught other poets the art of telling lies skillfully.” The telling of tall tales that feature even extravagant exaggeration is morally acceptable because the tales are intended to entertain—that is, the intention is good. In the case of exaggerating in stories to entertain the audience or adding a small bit of rhetorical “shine” to polish a point, the exaggeration is harmless—which ties back to the possibility that Trump sees himself as an entertainer and not an actual candidate.

In contrast, exaggerations that have a malign intent would be morally wrong. Exaggerations that are not intended to be harmful, yet prove to be so, would also be problematic—but discussing the complexities of intent and consequences would take the essay too far afield.

The extent of the exaggeration would also be relevant here—the greater the exaggeration that is aimed at malign purposes or that has harmful consequences, the worse it would be morally. After all, if deviating from the truth is (generally) wrong, then deviating from it more would be worse. In the case of Trump’s claim about thousands of people celebrating on 9/11, this untruth feeds into fear, racism and religious intolerance. As such, it is not an innocent exaggeration, but a malign untruth.

