Trump, Treason and Joking
During President Trump’s first State of the Union address, the Democrats were clearly not interested in praising him. Trump took this slight very seriously and rushed to hold a rally to soothe his wounded pride. At the event, he accused the Democrats in Congress of committing treason: “They were like death and un-American. Un-American. Somebody said, ‘Treasonous.’ I mean, yeah, I guess, why not.” Since treason is one of the worst crimes and not applauding a president is not treason, Democrats and many Republicans condemned Trump’s remarks. The response from the White House was that people, especially in the liberal media, need to get a sense of humor because “the President was obviously joking…”
As should be expected, the view people hold on this depends on their feelings about the president. His detractors believe that he was serious or at least did something very wrong. His proponents think he was joking and that the snowflakes in the liberal media and Democratic party need to man up. His most devoted fans might believe that he was serious and see this as a good thing.
Since Trump seems to have no respect for truth and faces clear challenges with grasping reality, it is difficult to tell what he means when he says words. If he was serious, then he is clearly wrong and is wandering deeper into the territory in which dwell would-be authoritarians. If he was not serious, then he was also wrong—accusing people of treason is nothing to joke about. As many have said on many other occasions, Trump is able to grossly violate norms of behavior and simply keep on going through what would be career enders for almost all other human beings. Imagine, if you will, what the response would have been on Fox News and elsewhere if Obama had “jokingly” said Republican Representative Wilson was committing treason when he yelled “you lie” at the president. I am certain they would not have chortled in appreciation at his little bon mot. They would have been, rightly enough, outraged at such behavior.
While President Trump’s behavior is morally problematic, it does provide an excellent example of a rhetorical device that could be called an “appeal to joking.” Rhetorical devices are intended to sway people’s feelings and thus influence their beliefs. Being rhetorical in nature, they lack logical force—they do not actually prove or disprove anything. The gist of the method, as noted above, is to defuse criticism by insisting that the awful thing a person said was but a joke. The method can also be developed into a fallacy by making it into a full, but bad, argument. The appeal to joking has the following form:
- Person A says B, which is something horrible.
- There is criticism of or backlash against B.
- A (or A’s spokespeople) insist that A was joking.
- Conclusion: Therefore, A should not be criticized or held accountable for saying B.
The reason that the conclusion does not follow from the premises is that merely claiming that the horrible thing was a joke does not entail that the person should not be criticized or held accountable for saying it. One reason for this is that merely making the claim does not prove the person was, in fact, joking. A second reason is that even if the person was joking, this does not entail that they are thus free from criticism or accountability. After all, a person is still accountable for their jokes.
As with many fallacies, there are good arguments that resemble it. If a person can show that they were, in fact, not serious in their remark and intended it to be a joke, then they can advance a good argument that they should not be criticized or held accountable as if they were serious. The challenge is, of course, making a convincing case that it really was a joke rather than an attempt to walk back something awful by pretending it was a joke. This form of reasoning, which is good, would be as follows:
- Person A says B, which is something horrible.
- There is criticism of or backlash against B.
- A (or A’s spokespeople) provides credible evidence that A was really joking.
- Conclusion: Therefore, A should not be as strongly criticized or held as accountable for saying B as A would be if they were serious.
From a moral standpoint, it is sensible to accept such reasoning since saying something awful as a joke is not as bad as actually meaning it. This is not to say that jokes are without moral consequences of their own. For example, while joking about assassinating the president is not as bad as seriously planning to assassinate the president, it is still morally wrong.
Not surprisingly, defenders of a person who uses the appeal to joking will tend to think that credible evidence has been provided that the person is “just joking.” In some cases, the alleged evidence might be that the claim is so absurd or horrible that no one could be serious about it. For example, Trump’s claim that the Democrats were treasonous for not applauding for him is so absurdly over the top that one would have to either believe that Trump is joking or that he is some sort of deranged authoritarian who believes that his whims should be law and that a failure to praise him is the act of traitors.
Why Do Good People Do Bad?
Recent events have raised the old question of why (seemingly) good people do bad things. For example, Matt Lauer and Garrison Keillor were both widely respected, but have now fallen before accusations of sexual misdeeds. As another example, the legendary Democrat John Conyers was regarded as a heroic figure by some, but is now “retiring” in the face of accusations.
One easy and obvious way to explain why people who seem good do bad things is that they merely appeared to be good. Like Plato’s unjust man from the story of the Ring of Gyges, these people presented a virtuous front to the world. But, unlike the perfectly unjust man, their misdeeds were finally exposed to the world. On this view, these are not cases of good people doing bad; they are cases of bad people who masqueraded as good people and finally lost their masks. While this cynical and jaded approach does have considerable appeal, there are alternatives that are worth considering. It must be noted that the situations of individuals obviously vary a great deal and it is not being claimed that one explanation fits everyone.
An alternative explanation of why seemingly good people do bad things is the fact that people tend to be complicated rather than simple when it comes to ethics. Or, as is often said in popular culture, everyone is a mix of good and evil. As such, it is no wonder that even those who are good people (that is, more good than evil) sometimes do bad things. There is also the obvious fact that people are imperfect creatures who fail to always act in accord with their best principles.
One way to understand this is to use a method that the philosopher David Hume was rather fond of: he would routinely ask his readers to consider their own experiences and see if they matched his views. In the case of why good people do bad, I will ask the reader to think of the very worst thing they ever did and to think of why they did it. Presumably each of us, including you, thinks of ourselves as a good person. But, we all do bad things—and honestly considering why we do these things will help us understand the motivations and reasons of others.
A third option explains why seemingly good people do bad in terms of why people might think a bad person is good (other than deception). One possibility is that people often confuse a person being good at their profession, being charming, being beautiful or possessing other such positive qualities (virtues) with being a good person. For example, Kevin Spacey is a skilled actor and this no doubt led some people to think he was thus a good person. As another example, Garrison Keillor is a master storyteller and created a show that is beloved by many—and some no doubt regarded him as a good person because of these talents.
Both Plato and Kant were aware of this sort of problem—the danger of a person with only some of the virtues, or in Kant’s terms, lacking a good will. Plato warned of the clever rogue: “Did you never observe the narrow intelligence flashing from the keen eye of a clever rogue—how eager he is, how clearly his paltry soul sees the way to his end; he is the reverse of blind, but his keen eye-sight is forced into the service of evil, and he is mischievous in proportion to his cleverness?” Kant, in his Fundamental Principles of the Metaphysics of Morals, raises a similar point:
Moderation in the affections and passions, self-control, and calm deliberation are not only good in many respects, but even seem to constitute part of the intrinsic worth of the person; but they, are far from deserving to be called good without qualification, although they have been so unconditionally praised by the ancients. For without the principles of a good will, they may become extremely bad; and the coolness of a villain not only makes him far more dangerous, but also directly makes him more abominable in our eyes than he would have been without it.
This should be taken as a warning about judging people—while the positive virtues of a person can easily lead people to judge them a good person, judging the whole person based on a few qualities can easily lead to errors. This is not to say that it should be assumed that people are always bad, but it is to say that it should not be inferred that a person is good based on a limited set of positive traits or accomplishments.
Another possibility is that a person will think another person is good because they agree with their professed values, religion, ideology, etc. The person’s reasoning is probably something like this:
Premise 1: I believe in value V.
Premise 2: Person A professes belief in value V.
Premise 3: I (think I) am a good person (because I believe V).
Conclusion: Person A is a good person.
For example, Democrats would be more inclined to think that Bill Clinton, John Conyers and Al Franken are good people—because they are fellow Democrats. Likewise, Republicans would be more inclined to think that Trump and Roy Moore are good people. This sort of reasoning is also fueled by various cognitive biases, such as the tendency of people to regard members of their own group as better than those outside the group.
While this reasoning is not entirely terrible, those using it need to carefully consider whether Person A really holds to value V, whether believing in V really is a mark of goodness, and whether they really are a good person. Not surprisingly, people do tend to uncritically accept the professed goodness of those who profess to share their values and this cuts across the entire political spectrum, across all religions and so on. People even hold to their assessment in the face of evidence that contradicts person A’s professed belief in value V.
This discussion does not, of course, exhaust possible explanations as to why (seemingly) good people do bad things. But it does present some possible accounts that are worth considering when trying to answer this question in specific cases.
Whataboutism
While Whataboutism has long served as a tool for Soviet (and now Russian) propagandists, it has now become entrenched in American political discourse. It is, as noted by comedian John Oliver, a beloved tool of Fox News and President Trump.
Whataboutism is a variant of the classic ad hominem tu quoque fallacy. In the standard tu quoque fallacy it is concluded that a person’s claim is false because 1) it is inconsistent with something else a person has said or 2) what a person says is inconsistent with her actions. This type of “argument” has the following form:
- Person A makes claim X.
- Person B asserts that A’s actions or past claims are inconsistent with the truth of claim X.
- Therefore X is false.
The fact that a person makes inconsistent claims does not make any particular claim he makes false (although of any pair of inconsistent claims only one can be true—but both can be false). Also, the fact that a person’s claims are not consistent with his actions might indicate that the person is a hypocrite, but this does not prove his claims are false. For those noting the similarity to the Wikipedia entry on this fallacy, you will note that the citation for the form and example is to my work.
As would be expected, while the Russians used this tactic against the West, Americans use it against each other along political lines. For example, a Republican might “defend” Roy Moore by saying “what about Harvey Weinstein?” A Democrat might do the reverse. I mention that Democrats can use this in anticipation of comments to the effect of “what about Democrats using whataboutism?” People are, of course, free to use Bill Clinton in the example, if they prefer. To return to the subject, the “reasoning” in both cases would be fallacious as is evident when the “logic” is laid bare:
- Premise 1: Person A of affiliation 1 is accused of X by person B of Affiliation 2.
- Premise 2: Person C of affiliation 2 is accused of X by person D of affiliation 1.
- Conclusion: Therefore, A did not do X.
Obviously enough, whether C did X is irrelevant to whether or not it is true that A did X.
Alternatively:
- Premise 1: Person A of affiliation 1 is accused of X by person B of Affiliation 2.
- Premise 2: Person C of affiliation 2 is accused of X by person D of affiliation 1.
- Conclusion: Therefore, it is not wrong that A did X.
Clearly, even if C did X it does not follow that A doing X was not wrong. This sort of “reasoning” can also be seen as a variant on the classic appeal to common practice fallacy. This fallacy has the following structure:
Premise 1. X is a common action.
Conclusion. Therefore X is correct/moral/justified/reasonable, etc.
The basic idea behind the fallacy is that the fact that most people do X is used as “evidence” to support the action or practice. It is a fallacy because the mere fact that most people do something does not make it correct, moral, justified, or reasonable. In the case of whataboutism, the structure would be like this:
Premise 1. You said X is done by my side.
Premise 2. Whatabout X done by your side?
Premise 3. So, X is commonly done/we both do X.
Conclusion: Therefore, X is correct/moral/justified/reasonable, etc.
It is also common for the tactic of false equivalency to be used in whataboutism. In the form above, the X of premise 1 would not be the moral equivalent of the X of premise 2. In fact, the form should be modified to account for the use of false equivalency:
Premise 1. You said X is done by my side.
Premise 2. Whatabout Y, which I say is just as bad as X, done by your side?
Premise 3. So, things just as bad as X are commonly done/we both do things as bad as X.
Conclusion: Therefore, X is correct/moral/justified/reasonable, etc.
This would be a not-uncommon double fallacy: not only is the comparison between X and Y a false one, but even if they were equivalent, the fact that both sides do things that are equally bad would still not support the conclusion. Obviously enough, you should not accept this sort of reasoning—especially when it is being used to “support” a conclusion that is appealing.
Whataboutism can also be employed as a tool for creating a red herring. A Red Herring is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue. The basic idea is to “win” an argument by leading attention away from the argument and to another topic. This sort of “reasoning” has the following form:
- Topic A is under discussion.
- Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).
- Topic A is abandoned.
In the case of a whataboutism, the structure would be as follows:
- Topic A, my side doing X, is under discussion.
- Topic B is introduced: whatabout X done by the other side?
- Topic A is abandoned.
In closing, it should be noted that if two sides are being compared, then it is obviously relevant to consider the flaws of both sides. For example, if the issue is whether to vote for candidate A or B, then it is reasonable to consider the flaws of both A and B in comparison. However, the flaws of A do not show that B does not have flaws and vice versa. Also, if the issue being discussed is the bad action of A, then bringing up B’s bad action does nothing to mitigate the badness of A’s action. Unless, of course, A had to take a seemingly bad action to protect themselves from B’s unwarranted bad action. For example, if A is accused of punching a person and it is shown that this was because B tried to kill A, then that would obviously be relevant to assessing the ethics of A’s action. But, if A assaulted women and B assaulted women, then bringing up B in a whataboutism to defend A would be an error in logic. Both would be bad.
As far as why you should be worried about whataboutism, the obvious reason is that it is a corrosive that eats at the very structure of truth and morality. While it is a tempting tool to deploy against one’s hated enemies (such as fellow Americans), it is not a precise weapon—each public use splashes the body of society with vile acid.
Reasoning & Natural Disasters II: Inductive Reasoning
Fortunately for my adopted state of Florida, Irma weakened considerably as it moved northward. When it reached my adopted city of Tallahassee, it was barely a tropical storm. While it did some damage, it was nothing compared to last year’s storm. While this was a good thing, there can be a very minor downside when dire predictions turn out to be not so dire.
The problem is, of course, that people might take such dire predictions less seriously in the future. There is even a term for this: hurricane fatigue. When people are warned numerous times about storms and they do not prove as bad as predicted, people tend to get tired of going through the process of preparation. Hence, they tend to slack off in their preparations—especially if they took the last prediction very seriously and engaged in extensive preparations, such as buying absurd amounts of bottled water. The problem is, of course, that the storm a person does not prepare for properly might turn out to be as bad or worse than predicted. Interestingly enough, inductive reasoning is the heart of this matter in two ways.
Inductive reasoning is, of course, logic in which the premises provide some degree of support (but always less than complete) for the conclusion. Inductive arguments deal in probability and this places them in contrast with deductive arguments, which are supposed to deal in certainty. That is, having all true premises in a deductive argument is supposed to guarantee a true conclusion. While there are philosophers who believe that predictions about such things as the weather can be made deductively, the best current reasoning only allows inductive reasoning regarding weather prediction. To use a simple illustration, when a forecast says there is a 50% chance of rain, what is meant is that on 50% of the days like this one it rained. This is, in fact, an argument by analogy. With such a prediction, it should be no more surprising that it rains than it does not.
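The frequency reading of a forecast can be sketched in a few lines of code. This is only an illustration of the analogy argument described above, not how forecasters actually compute probabilities; the day records here are invented for the example.

```python
# A minimal sketch of the frequency reading of "50% chance of rain":
# the forecast projects the observed frequency among past days
# analogous to today onto today itself. (Hypothetical data.)
past_similar_days = [
    {"date": "2016-06-01", "rained": True},
    {"date": "2016-06-14", "rained": False},
    {"date": "2017-05-30", "rained": True},
    {"date": "2017-06-09", "rained": False},
]

# Inductive step: the conclusion about the unobserved day can still
# be false no matter how many analogous days support it.
rain_frequency = sum(d["rained"] for d in past_similar_days) / len(past_similar_days)
print(f"Chance of rain: {rain_frequency:.0%}")  # → Chance of rain: 50%
```

The “leap” of induction is visible in the last line: the number summarizes only the observed days, while the prediction is about a day not in the data.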
While the computer modeling of hurricanes is rather complex, the predictions are still inductive in nature: all the evidence used in the reasoning can be true while the conclusion can still be false. This is because of the famous problem of induction—the gap between the premises and the conclusion means that no matter how strong the reasoning of an inductive argument, the conclusion can still be false. As such, any weather prediction can turn out to be false—even if the prediction is 99.99% likely to be accurate. As such, it should be expected that weather predictions will often be wrong—especially since the models do not have complete information and are limited by the available processing power. That is, there is also a gap between reality and the models. There is also the philosophical question of whether the world is deterministic or not—in a deterministic world, weather would be fully predictable if there was enough information and processing power available to create a perfect model of reality. In a non-deterministic world, even a perfect model could still fail to predict what will happen in the real world. As such, there is both a problem in epistemology (what do we know) and metaphysics (what is the nature of reality).
Interestingly enough, when people start to distrust predictions after past predictions turn out to be wrong, they are also engaging in inductive reasoning. To be specific, if many predictions have turned out to be wrong, then it can be reasonable to infer that the next prediction could be wrong. That is certainly reasonable and thinking that an inductive argument could have a false conclusion is no error.
Where people go wrong is when they place too much confidence in the conclusion that the prediction will be wrong. One way this can happen is through a variation on the gambler’s fallacy. In the classic gambler’s fallacy, a person assumes that a departure from what occurs on average or in the long term will be corrected in the short term. For example, if a person concludes that tails is due because they have gotten heads six times in a row, then they have committed this fallacy. In the case of the “hurricane fallacy” a person overconfidently infers that the streak of failed predictions must continue. The person could, of course, turn out to be right. The error lies in the overconfidence in the conclusion that the prediction will be wrong. Sorting out the confidence one should have in their doubt is a rather challenging matter because it requires understanding the accuracy of the predictions.
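The coin-flip version of the gambler’s fallacy can be checked directly by simulation. The sketch below (a hypothetical illustration, not part of any formal argument above) estimates the chance of tails on the flip immediately following a run of six heads; for a fair coin it stays near one half, because the flips are independent and no outcome is ever “due.”

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def next_flip_after_streak(streak_len, trials=100_000):
    """Estimate P(tails) on the flip immediately following a run
    of `streak_len` consecutive heads, over many simulated flips."""
    tails_after = 0
    observed = 0
    heads_run = 0  # length of the current run of heads
    for _ in range(trials):
        flip_is_heads = random.random() < 0.5
        if heads_run >= streak_len:
            # This flip follows a qualifying streak; record its outcome.
            observed += 1
            if not flip_is_heads:
                tails_after += 1
        heads_run = heads_run + 1 if flip_is_heads else 0
    return tails_after / observed

# Prints a value near 0.5: tails is not "due" after six heads.
print(round(next_flip_after_streak(6), 2))
```

The “hurricane fallacy” inverts the streak but repeats the error: a run of mild storms no more guarantees the next forecast will be wrong than a run of heads guarantees tails.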
As a practical matter, one way to address hurricane fatigue is to follow some excellent advice: rather than going through mad bursts of last second preparation, always be prepared at the recommended minimum level. That is, have enough food and water on hand for three days and make basic preparations for being without power or evacuating. Much of this can easily be integrated into one’s normal life. For example, consuming and replacing canned and dried goods throughout the year means that one will have suitable food on hand. There are also one-time preparations, such as acquiring some crank-powered lights, a small solar panel for charging smart phones, and getting a basic camp stove and a few propane canisters to store.
This does lead to a final closing point, namely the cost of preparation. Since I have a decent income, I can afford to take the extra steps of being always ready for a disaster. That is, I can buy the lights, stove, propane, and such and store them. However, this is not true of everyone. When I was at Publix before the storm, I spoke to some people who said that it was hard for them to get ready for storms—they needed their money for other things and could not afford to have a stockpile of unused supplies, let alone things like solar panels or generators. The upfront cost of stockpiling in preparation for the storm was also a challenge—there are, as far as I know, no emergency “storm loans” or rapid aid to help people gear up for impending storms. No doubt some folks would be terrified that storm moochers would be living fat on the public’s money during storms. However, storm aid does sound like a decent idea and could even be a cost saver for the state. After all, the better prepared people are before the storm, the less the state and others must do during and after the storm.
Reasoning & Natural Disasters
As this is being written, Irma is scouring its way across the Atlantic and my adopted state of Florida will soon feel her terrible embrace. Nearby, Texas is still endeavoring to dry out from its own recent watery disaster. The forces of nature can be overwhelming in their destructive power, but poor reasoning on the part of humans can contribute to the magnitude of a natural disaster. As such, it is worth considering how poor reasoning impacts disaster planning, or the lack of planning, both by individuals and by the state.
While human activity can impact nature, the power of nature can kill any human and sweep away anything we can construct. As such, even the best planning can come to nothing. To think that because perfect planning is impossible we should simply let nature shake the dice for us would be to fall into the classic perfectionist fallacy. This is to engage in a false dilemma in which the two assumed options are doing nothing or having a perfect option. While there are no perfect options, there are almost always those that are better than nothing. As such, the first form of bad reasoning to overcome is this (fortunately relatively rare) view that there is no point in planning because something can always go wrong.
Another reason why people tend to not prepare properly is another classic fallacy, that of wishful thinking. This is an error of reasoning in which a person concludes that because they really want something to be true, it follows that it is true. While people do know that a disaster can impact them, it is natural to reject the possibility until it becomes a reality. In many cases, people engage in wishful thinking while the disaster is approaching, feeling that since they do not want it to arrive it follows that it will not. As such, they put off planning and preparation—perhaps until it is too late. This is not to say that people should fall into a form of woeful thinking (the inference that whatever one does not wish to happen will happen)—that would be equally a mistake. Rather, people should engage in the rather difficult task of believing what is supported by the best available evidence.
People also engage in the practice of discounting the future. This is a mistake of valuing a near good more than a future good simply because of the time factor. This is not, of course, to deny that time is a relevant factor in considering value. In the case of mitigating disasters, preparing now incurs a cost in time and resources that will not pay off until later (or even never). For example, money a city spends building storm surge protection is money that will not be available to improve the city parks.
Connected to the matter of time is also the matter of probability—as noted, while disaster preparation might yield benefits in the future, they might not. As such, there is a double discount: time and probability. As such, a rational assessment of the value of disaster preparation needs to consider both time and chance—will disasters strike and if so, when will they strike?
As would be suspected, the more distant a disaster (such as a “500 year flood”) and the less likely the disaster (such as a big meteor hitting the earth), the less people are willing to expend resources now. This can be rational, provided that these factors are given due consideration. There is also the fact that these considerations become quite philosophical in that they are considerations of value rather than purely mathematical calculations. To illustrate, determining whether I should contribute to preparing against a disaster that will not arrive until well after I am dead of old age is a matter of moral consideration and thus requires philosophical reasoning to sort out. Such reasoning need not be bad reasoning and these considerations show why disaster planning can be quite problematic even when people are reasoning well. However, problems do arise when people are unclear (or dishonest) about what values are in play. As such, reasoning well about disaster preparation requires being clear about the values that are informing the decision-making process. Since such considerations typically involve politics and economics, deceit is to be expected.
Another factor is nicely illustrated by a story from Sun Tzu’s Art of War. According to the tale, a lord of ancient China once asked his physician, a member of a family of healers, which of them was the most skilled in the art:
The physician, whose reputation was such that his name became synonymous with medical science in China, replied, “My eldest brother sees the spirit of sickness and removes it before it takes shape, so his name does not get out of the house.
“My elder brother cures sickness when it is still extremely minute, so his name does not get out of the neighborhood.
“As for me, I puncture veins, prescribe potions, and massage skin, so from time to time my name gets out and is heard among the lords.”
While there are some exceptions, politicians and leaders often act to get attention and credit for their deeds. As the above story indicates, there is little fame to gain by quietly preventing disasters. There is, however, considerable attention and credit to be gained by publicly handling a disaster well (and great infamy to be gained by handling it badly). As such, there is little appeal in preparation, for it earns no glory.
There is also the fact that while people can assess what has happened, sorting out what was prevented is rather more challenging. For example, while people clearly notice when a city loses power due to a storm, few would realize when effective planning and infrastructure modification prevented a storm from knocking out the power. After all, the power just keeps on going. Motivating people by trying to appeal to what will be prevented (or what was prevented) can be quite challenging. This can also be illustrated by how some people look at running. Whenever a runner drops dead, my non-running friends will rush to point this out to me, claiming that it is great they do not run because otherwise they would die. When I try to point to the millions of runners who are healthier and live longer than non-runners, they find the absence of early death far less influential.
To be fair, sorting out that something did not happen and why it did not happen can be rather complicated. However, what seems to be an ever-increasing frequency of natural disasters requires that these matters be addressed. While it might not be possible to persuade people of the value of prevention so that they will commit adequate resources to the effort, it is something that must be attempted.
Weight Loss, Philosophy & Science
When I was young and running 90-100 miles a week, I could eat all the things without gaining weight. Time is doubly cruel in that it slowed my metabolism and reduced my ability to endure high mileage. Inundated with the usual abundance of high calorie foods, I found I was building an unsightly pudge band around my middle. My first reaction was to try to get back to my old mileage, but I found that I now top out at 70 miles a week and anything more starts breaking me down. Since I could not exercise more, I was faced with the terrible option of eating less. Being something of an expert on critical thinking, I dismissed all the fad diets and turned to science to glean the best way to beat the bulge. Being a philosopher, I naturally misapplied the philosophy of science to this problem with some interesting results.
Before getting into the discussion, I am morally obligated to point out that I am not a medical professional. As such, what follows should be regarded with due criticism and you should consult a properly credentialed expert before embarking on changes to your exercise or nutrition practices. Or you might die. Probably not; but maybe.
As any philosopher will tell you, while the math used in science is deductive (the premises are supposed to guarantee the conclusion with certainty) scientific reasoning is inductive (the premises provide some degree of support for the conclusion that is less than complete). Because of this, science suffers from the problem of induction. In practical terms, this means that no matter how carefully the reasoning is conducted and no matter how good the evidence is, the conclusion drawn from the evidence can still be false. The basis for this problem is the fact that inductive reasoning involves a “leap” from the evidence/premises (what has been observed) to the conclusion (what has not been observed). Put bluntly, inductive reasoning can always lead to a false conclusion.
Scientists and philosophers have long endeavored to make science a deductive matter. For example, Descartes believed that he could find truths that he could know with certainty and then use valid deductive reasoning to generate a true conclusion with absolute certainty. Unfortunately, this science of certainty is the science of the future and always will be. So, we are stuck with induction.
The problem of induction obviously applies to the sciences that study nutrition, exercise and weight loss and, as such, the conclusions made in these sciences can always be wrong. This helps explain why the recommendations about these matters change relentlessly.
While there are philosophers of science who would disagree, science is mostly a matter of trying to figure things out by doing the best that can be done at the time. This is limited by the resources (such as technology) available at the time and by human epistemic capabilities. As such, whatever science is presenting at the moment is almost certainly at least partially wrong; but the wrongs get reduced over time. Or increase sometimes. This is true of all the sciences—consider, for example, the changes in physics since Thales began it. This also helps explain why the recommendations about diet and exercise change constantly.
While science is sometimes presented as a field of pure reason outside of social influences, science is obviously a social activity conducted by humans. Because of this, science is influenced by the usual social factors and human flaws. For example, scientists need money to fund their research and can thus be vulnerable to corporations looking to “prove” various claims that are in their interest. As another example, scientific matters can become issues of political controversy, such as evolution and climate change. This politicization tends to derange science. As a final example, scientists can be motivated by pride and ambition to fudge or fake results. Because of these factors, the sciences dealing with nutrition and exercise are significantly corrupted and this makes it difficult to make a rational judgment about which claims are true. One excellent example is how the sugar industry paid scientists at Harvard to downplay the health risks presented by sugar and play up those presented by fat. Another illustration is the fact that the food pyramid endorsed by the US government has been shaped by the food industries rather than being based entirely on good science.
Given these problems, it might be tempting to abandon mainstream science and go with whatever fad or food ideology one finds appealing. That would be a bad idea. While science suffers from these problems, mainstream science is vastly better than the nonscientific alternatives—they tend to have all of the problems of science without having its strengths. So, what should one do? The rational approach is to accept the majority opinion of the qualified and credible experts. One should also keep in mind the above problems and approach the science with due skepticism.
So, what are some of the things the best science of today says about weight loss? First, humans evolved as hunter-gatherers and getting enough calories was a challenge. As such, humans tend to be very good at storing energy in the form of fat, which is one reason the calorie-rich environment of modern society contributes to obesity. Crudely put, it is in our nature to overeat—because that once meant the difference between life and death.
Second, while exercise does burn calories, it burns far less than many imagine. For most people, the majority of calorie burning is a result of the body staying alive. As an example, I burn about 4,000 calories on my major workout days (estimated based on my Fitbit and activity calculations). But, about 2,500 of those calories are burned just staying alive. On those days I work out about four hours and I am fairly active the rest of the day. As such, while exercising more will help a person lose weight, the calorie impact of exercise is surprisingly low—unless you are willing to commit considerable time to exercise. That said, you should exercise—in addition to burning calories it has a wide range of health benefits.
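The arithmetic behind this point can be made explicit. The sketch below simply uses the two figures given above (roughly 4,000 total calories on a major workout day, about 2,500 of them from just staying alive); these are personal estimates from the text, not general recommendations.

```python
# Rough illustration using the figures from the text (not medical advice):
# even on a heavy workout day, resting metabolism accounts for the
# majority of calories burned.
total_burn = 4000    # estimated total calories burned on a major workout day
resting_burn = 2500  # estimated calories burned just staying alive

exercise_burn = total_burn - resting_burn
resting_share = resting_burn / total_burn

print(f"Exercise calories: {exercise_burn}")            # 1500
print(f"Resting share of total: {resting_share:.1%}")   # 62.5%
```

Even with roughly four hours of exercise, the workout accounts for well under half the day's calorie burn, which is the point being made: exercise helps, but it is not where most of the calories go.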
Third, hunger is a function of the brain and the brain responds differently to different foods. Foods high in protein and fiber create a feeling of fullness that tends to turn off the hunger signal. Foods with a high glycemic index (like cake) tend to stimulate the brain to cause people to consume more calories. As such, manipulating your brain is an effective way to increase the chance of losing weight. Interestingly, as Aristotle argued, habituation to foods can train the brain to prefer foods that are healthier—that is, you can train yourself to prefer things like nuts, broccoli and oatmeal over cookies, cake, and soda. This takes time and effort, but can obviously be done.
Fourth, weight loss has diminishing returns: as one loses weight, one’s metabolism slows and less energy is needed. As such, losing weight makes it harder to lose weight, which is something to keep in mind. Naturally, all of these claims could be disproven in the next round of scientific investigation—but they seem quite reasonable now.
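The diminishing-returns point above can be illustrated with one commonly used estimate of resting metabolism, the Mifflin-St Jeor equation (the version for adult men is shown; the example weights, height and age are invented for illustration):

```python
# Sketch of why losing weight makes it harder to lose weight: a common
# estimate of resting calorie needs (Mifflin-St Jeor, adult male form)
# predicts that resting burn drops as body weight drops.
def bmr_mifflin_st_jeor(weight_kg, height_cm, age_years):
    """Estimated resting calories per day for an adult male."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_years + 5

# Hypothetical example: the same runner before and after losing 10 kg.
before = bmr_mifflin_st_jeor(weight_kg=90, height_cm=180, age_years=50)
after = bmr_mifflin_st_jeor(weight_kg=80, height_cm=180, age_years=50)

print(f"BMR at 90 kg: {before:.0f} kcal/day")  # 1780
print(f"BMR at 80 kg: {after:.0f} kcal/day")   # 1680
```

On this estimate, losing 10 kg trims about 100 calories a day from resting burn, so the same diet and exercise routine produces a smaller daily deficit than it did before the weight loss.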
Poverty & the Brain
A key part of the American mythology is the belief that a person can rise to the pinnacle of success from the depths of poverty. While this does occur, most understand that poverty presents a considerable obstacle to success. In fact, the legendary tales that tell of such success typically embrace an interesting double vision of poverty: they praise the hero for overcoming the incredible obstacle of poverty while also asserting that anyone with gumption should be able to achieve this success.
Outside of myths and legends, it is a fact that poverty is difficult to overcome. There are, of course, the obvious challenges of poverty. For example, a person born into poverty will not have the same educational opportunities as the affluent. As another example, they will have less access to technology such as computers and high-speed internet. As a third example, there are the impacts of diet and health care—both necessities are expensive and the poor typically have less access to good food and good care. There is also recent research by scientists such as Kimberly G. Noble that suggests a link between poverty and brain development.
While the most direct way to study the impact of poverty on the brain is by imaging the brain, this (as researchers have noted) is expensive. However, the research that has been conducted shows a correlation between family income and the size of some surface areas of the cortex. For children whose families make under $50,000 per year, there is a strong correlation between income and the surface area of the cortex. While greater income is correlated with greater cortical surface area, the apparent impact is reduced once the income exceeds $50,000 a year. This suggests, but does not prove, that poverty has a negative impact on the development of the cortex and this impact is proportional to the degree of poverty.
Because of the cost of direct research on the brain, most research focuses on cognitive tests that indirectly test for the functionality of the brain. As might be expected, children from lower income families perform worse than their more affluent peers in their language skills, memory, self-control and focus. This performance disparity cuts across ethnicity and gender.
As would be expected, there are individuals who do not conform to the general correlation. That is, there are children from disadvantaged families who perform well on the tests and children from advantaged families who do poorly. As such, knowing the economic class of a child does not tell one what their individual capabilities are. However, there is a clear correlation when the matter is considered in terms of populations rather than single individuals. This is important to consider when assessing the impact of anecdotes about people successfully rising from poverty—as with all appeals to anecdotal evidence, they do not outweigh the bulk of statistical evidence.
To use an analogy, boys tend to be stronger than girls but knowing that Sally is a girl does not entail that one knows that Sally is weaker than Bob the boy. Sally might be much stronger than Bob. An anecdote about how Sally is stronger than Bob also does not show that girls are stronger than boys; it just shows that Sally is unusual in her strength. Likewise, if Sally lives in poverty but does exceptionally well on the cognitive tests and has a normal cortex, this does not prove that poverty does not have a negative impact on the brain. This leads to the obvious question about whether poverty is a causal factor in brain development.
Those with even passing familiarity with causal reasoning know that correlation is not causation. To infer that because there is a correlation between poverty and cognitive abilities that there must be a causal connection would be to fall victim to the most basic of causal fallacies. One possibility is that the correlation is a mere coincidence and there is no causal connection. Another possibility is that there is a third factor that is causing both—that is, poverty and the cognitive abilities are both effects.
There is also the possibility that the causal connection has been reversed. That is, it is not poverty that increases the chances a person has less cortical surface (and corresponding capabilities). Rather, it is having less cortical surface area that is a causal factor in poverty.
This view does have considerable appeal. As noted above, children in poverty tend to do worse on tests for language skills, memory, self-control and focus. These are the capabilities that are needed for success and it seems reasonable to think that people who were less capable would thus be less successful. To use an analogy, there is a clear correlation between running speed and success in track races. It is not, of course, losing races that makes a person slow. It is being slow that causes a person to lose races.
Despite the appeal of this interpretation of the data, to rush to the conclusion that it is the cognitive abilities that cause poverty would be as much a fallacy as rushing to the conclusion that poverty influences brain development. Both views do seem plausible and it is certainly possible that there is causation going in both directions. The challenge, then, is to sort out the causation. The obvious approach is to conduct the controlled experiment suggested by Noble—providing the experimental group of low-income families with an income supplement and providing the control group with a relatively tiny supplement. If the experiment is conducted properly and the sample size is large enough, the results could achieve statistical significance and provide an answer to the question of the causal connection.
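To make the idea of such an experiment concrete, here is a minimal sketch of how its results might be analyzed, using a permutation test on entirely invented cognitive scores (this is an illustration of the statistical logic, not Noble's actual design or data):

```python
# Sketch: comparing hypothetical cognitive scores between a group whose
# families received an income supplement and a control group, using a
# simple one-sided permutation test. All numbers are invented.
import random

random.seed(42)  # for reproducibility of the illustration

supplement = [102, 98, 110, 105, 99, 108, 104, 101, 107, 103]
control = [97, 95, 100, 96, 99, 94, 98, 101, 93, 97]

# Observed difference in mean scores between the groups
observed = sum(supplement) / len(supplement) - sum(control) / len(control)

# Permutation test: if group labels were meaningless, how often would a
# random relabeling produce a difference at least this large?
pooled = supplement + control
n = len(supplement)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"Observed difference: {observed:.1f}, p ~ {p_value:.4f}")
```

A small p-value would indicate that the gap between the groups is unlikely to be a labeling accident, which is what "statistically significant" amounts to here; it would not, by itself, rule out confounding factors in a badly controlled experiment.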
Intuitively, it makes sense that an adequate family income would generally have a positive impact on the development of children. After all, this income would allow access to adequate food, care and education. It would also tend to have a positive impact on family conditions, such as emotional stress. This is not to say that throwing money at poverty is the cure; but reducing poverty is certainly a worthwhile goal regardless of its connection to brain development. If it does turn out that poverty does have a negative impact on development, then those who are concerned with the well-being of children should be motivated to combat poverty. It would also serve to undercut another American myth, that the poor are stuck in poverty simply because they are lazy. If poverty has the damaging impact on the brain it seems to have, then this would help explain why poverty is such a trap.
False Allegiance
One of the key distinctions in critical thinking is that between persuasion and argumentation. While an argument can be used to persuade, the object of an argument is truth. More specifically, the goal is to present evidence/reasons (known as premises) that logically support the conclusion. In contrast, the goal of persuasion is the acceptance of a claim as true, whether the claim is true or not. As should be expected, argumentation is rather ineffective as a tool of persuasion. Rhetorical devices, which are linguistic tools aimed at persuading, are rather more effective in achieving this goal. While there are many different rhetorical devices, one rather interesting one is what can be called False Allegiance. Formalized, the device is simple:
- A false statement of allegiance to a group, ideology or such is made.
- A statement that seems contrary to the professed allegiance is made, typically presented as being done with reluctance. This is often criticism or an attack.
While there is clearly no logical connection between the (false) statement of allegiance and the accuracy of the statement, a psychological connection can be made. The user’s intent is that their claim of allegiance will grant them credibility and thus make their claim more believable. This perceived credibility could be a matter of the target believing that the critic has knowledge of the matter because of their alleged allegiance. However, the main driving force behind the perceived credibility is typically the assumption that a person who professes allegiance to something will be honest in their claims about their alleged group. That is, they would not attack what they profess allegiance to unless there was truth behind the attack.
Like almost all rhetorical devices, False Allegiance has no allegiance of its own and can be pressed into service for any cause. As an illustration, it works just as well to proclaim a false allegiance to the Democrats as it does to the Republicans. For example, “Although I am a life-long Democrat, and it pains me to do so, I must agree that Trump is right about voter fraud. We need to ensure that illegals are not casting votes in our elections and so voter ID laws are a great idea.” As another example, “I have always voted for Republicans, so it is with great reluctance that I say that Trumpcare is a terrible idea.”
Looking at these examples, one might point out that these claims could be made with complete sincerity. A Democrat could really believe that voter ID laws are a great idea and a Republican could think that Trumpcare is a terrible idea; that is, the professed allegiance could be genuine. This is certainly a point worth considering, and not everything that looks like a case of False Allegiance need be this rhetorical device.
In cases in which the person making the claims is known, it is possible to determine if the allegiance is false or not. For example, if John McCain says, “Although I am a loyal Republican I…”, then it is reasonable to infer this is not a case of false allegiance. However, if the identity and allegiance of the person making the claims cannot be confirmed, then the possibility that this device is being used remains.
Fortunately, defending against this device does not require being able to confirm (or deny) the allegiance of the person making the relevant claims. This is because the truth (or falsity) of the assertions being made are obviously independent of the allegiance and identity of the person making the claims. If the claims are adequately supported by evidence or reasons, then it would be reasonable to accept them—regardless of who makes the claims or why they are being made. If the claims are not adequately supported, then it would be unreasonable to accept them. This does not entail that they should be rejected—after all, just as a rhetorical device does not prove anything, its usage does not disprove anything.
It needs to be emphasized that even if it is shown that the person making the claim has a true allegiance, it does not follow that their claim is thus true. After all, this reasoning is clearly fallacious: “I have an allegiance to X, so what I say about X is true.” They would not be using the False Allegiance rhetorical device, but could be using an appeal to allegiance, which would simply be another type of rhetoric.
In practical terms, when assessing a claim one should simply ignore such professions of allegiance. This is because they have no logical relevance to the claim being made. They can, obviously enough, have psychological force—but this merely is a matter of the power to persuade and not the power to prove.
The Curse of Springtime
As a professional philosopher, I am not inclined to believe in curses. However, my experiences over the years have convinced me that I am the victim of what I call the Curse of Springtime. As far as I know, this curse is limited to me and I do not want anyone to have the impression that I regard Springtime Tallahassee in a negative light. Here is the tale of the curse.
For runners, the most important part of Springtime is the Springtime 10K (and now the 5K). Since I moved to Tallahassee in 1993 I have had something bad happen right before or during the race. Some examples: one year I had a horrible sinus infection. Another year I had my first ever muscle pull. Yet another year I was kicking down the kickstand of my Yamaha, slipped and fell, thus injuring my back. 2008 saw the most powerful manifestation of the curse.
On the Thursday before the race, my skylight started leaking. So, I (stupidly) went up to fix it. When I was coming down, the ladder shot out from under me. I landed badly and suffered a full quadriceps tendon tear that took me out of running for months. When Springtime rolled around in 2009 I believed that the curse might kill me and I was extra cautious. The curse seemed to have spent most of its energy on that injury, because although the curse did strike, it was minor. But, the curse continued: I would either get sick or injured soon before the race, or suffer an injury during the race. This year, 2017, was no exception. My knees and right foot started bothering me a week before the race and although I rested up and took care of myself, I was unable to run on Thursday. I hobbled through the 10K on Saturday, cursing the curse.
Since I teach critical thinking, I have carefully considered the Curse of Springtime and have found it makes a good example for applying methods of causal reasoning. I started with the obvious, considering that I was falling victim to the classic post hoc, ergo propter hoc (“after this, therefore because of this”). This fallacy occurs when it is uncritically assumed that because B follows A, that A must be the cause of B. To infer just because I always have something bad happen as Springtime arrives that Springtime is causing it would be to fall into this fallacy. To avoid this fallacy, I would need to sort out a possible causal mechanism—mere correlation is not causation.
One thing that might explain some of the injuries and illnesses is the fact that the race occurs at the same time each year. By the time Springtime rolls around, I have been racing hard since January and training hard as well—so it could be that I am always worn out at this time of year. As such, I would be at peak injury and illness vulnerability. On this hypothesis, there is no Curse—I just get worn down at the same time each year because I have the same sort of schedule each year. However, this explanation does not account for all the incidents—as noted above, I have also suffered injuries that had nothing to do with running, such as falls. Also, sometimes I am healthy and injury free before the race, then have something bad happen in the race itself. As such, the challenge is to find an explanation that accounts for all the adverse events.
It is certainly worth considering that while the injuries and illnesses can be explained as noted above, the rest of the incidents are mere coincidences: it just so happens that when I am not otherwise ill or injured, something has happened. While improbable, this is not impossible. That is, it is not beyond the realm of possibility for random things to always happen for the same race year after year.
It is also worth considering that it only seems that there is a curse because I am ignoring the other bad races I have and considering only the bad Springtime races. If I have many bad races each year, it would not be unusual for Springtime to be consistently bad. Fortunately, I have records of all my races and can look at it objectively: while I do have some other bad races, Springtime is unique in that something bad has happened every year. The same is not true of any other races. As such, I do not seem to be falling into a sort of Texas Sharpshooter Fallacy by only considering the Springtime race data and not all my race data.
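The check described above can be sketched in a few lines: tally, for each race, how many of its years involved something bad, and see whether one race stands out. The race log here is invented for illustration; my actual records cover far more races and years.

```python
# Sketch of the Texas Sharpshooter check: tally bad years per race from
# a race log to see whether one race is uniquely unlucky.
# The log entries are hypothetical examples, not my real records.
from collections import defaultdict

# (year, race_name, something_bad_happened)
race_log = [
    (2015, "Springtime 10K", True),
    (2015, "Turkey Trot", False),
    (2016, "Springtime 10K", True),
    (2016, "Turkey Trot", True),
    (2017, "Springtime 10K", True),
    (2017, "Turkey Trot", False),
]

bad_years = defaultdict(int)
total_years = defaultdict(int)
for year, race, bad in race_log:
    total_years[race] += 1
    if bad:
        bad_years[race] += 1

for race in total_years:
    rate = bad_years[race] / total_years[race]
    print(f"{race}: bad in {bad_years[race]} of {total_years[race]} years ({rate:.0%})")
```

A race that is bad in every recorded year, when no other race comes close, is at least evidence that the pattern is not an artifact of cherry-picking one race's data.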
There is certainly the possibility that the Curse of Springtime is psychological: because I think something bad will happen it becomes a self-fulfilling prophecy. Alternatively, it could be that because I expect something bad to happen, I carefully search for bad things and overestimate their badness, thus falling into the mistake of confirmation bias: Springtime seems cursed because I am actively searching for evidence of the curse and interpreting events in a way that supports the curse hypothesis. This is certainly a possibility and perhaps any race could appear cursed if one spent enough effort seeking evidence of an alleged curse. That said, there is no such consistent occurrence of unfortunate events for any other race, even those that I have run every year since I moved here. This inclines me to believe that there is some causal mechanism at play here. Or a curse. But, I am aware of the vagaries of chance and it could simply be an unfortunate set of coincidences that every Springtime since 1994 has seemed cursed. But, perhaps in 2018 everything will go well and I can dismiss my belief in the curse as mere superstition. Unless the curse kills me then. You know, because curse.
Conservative Conservation
While the scientific evidence for climate change is overwhelming, it has become an ideological matter. In the case of conservatives, climate change denial has become something of a stock position. In the case of liberals, belief in human-caused climate change is a standard position. Because of the way ideological commitments influence thought, those who are committed to climate change denial tend to become immune to evidence or reasons offered against their view. In fact, they tend to double-down in the face of evidence—which is a standard defense people use to protect their ideological identity. This is not to say that all conservatives deny climate change; many accept it is occurring. However, conservatives who accept the reality of climate change tend to deny that it is caused by humans.
This spectrum of beliefs does tend to match the shifting position on climate change held by influential conservatives such as Charles Koch. The initial position was a denial of climate change. This shifted to the acceptance of climate change, but a rejection of the claim that it is caused by humans. The next shift was to accept that climate change is caused by humans, but that it is either not as significant as the scientists claim or that it is not possible to solve the problem. One obvious concern about this slow shift is that it facilitates the delay of action in response to the perils of climate change. If the delay continues long enough, there really will be nothing that can be done about climate change.
Since many conservatives are moving towards accepting human caused climate change, one interesting problem is how to convince them to accept the science and to support effective actions to offset the change. As I teach the students in my Critical Inquiry class, using logic and evidence to try to persuade people tends to be a poor option. Fallacies and rhetoric are vastly more effective in convincing people. As such, the best practical approach to winning over conservatives is not by focusing on the science and trying to advance rational arguments. Instead, the focus should be on finding the right rhetorical tools to win people over.
This does raise a moral concern about whether it is acceptable to use such tactics to get people to believe in climate change and to persuade them to act. One way to justify this approach is on utilitarian grounds: preventing the harms of climate change morally outweighs the moral concerns about using rhetoric rather than reason to convince people. Another way to justify this approach is to note that the goal is not to get people to accept an untruth or to do something morally questionable. Quite the contrary, the goal is to get people to accept scientifically established facts and to act in defense of the wellbeing of humans in particular and the ecosystem in general. As such, using rhetoric when reason fails seems warranted in this case. The question is then what sort of rhetoric would work best.
Interestingly, many conservative talking points can be deployed to support acting against climate change. For example, many American conservatives favor energy independence and keeping jobs in America. Developing sustainable energy within the United States, such as wind and solar power, would help with both. After all, while oil can be shipped from Saudi Arabia, shipping solar power is not a viable option (at least not until massive and efficient batteries become economically viable). The trick is, of course, to use rhetorical camouflage to hide the fact that the purpose is to address climate change and environmental issues. As another example, many American conservatives tend to be pro-life—this can be used as a rhetorical angle to argue against pollution that harms fetuses. Of course, this is not likely to be a very effective approach if the main reasons someone is anti-abortion are not based in concern about human life and well-being. As a final example, clean water is a valuable resource for business because industry needs clean water and, of course, humans do as well. Thus, environmental protection of water can be sold with the rhetorical cover of being pro-business rather than pro-environment.
Thanks to a German study, there is evidence that one effective way to persuade conservatives to be concerned about climate change is to appeal to the fact that conservatives value preserving the past. This study showed that conservatives were influenced significantly more by appeals to restoring the earth to the way it was than by appeals to preventing future environmental harms. That is, conservatives were more swayed by appeals to conservation than by appeals to worries about future harms. As such, those wishing to gain conservative support for combating climate change should focus not on preventing the harms that will arise, but on making the earth great again. Many conservatives enjoy hunting, fishing and the outdoors and no doubt the older ones remember (or think they remember) how things were better when they were young. As examples, I’ve heard people talk about how much better the hunting used to be and how the fish were so much bigger, back in the good old days. This provides an excellent narrative for getting conservatives on board with addressing climate change and environmental issues. After all, presenting environmental protection as part of being a hunter and getting back to the memorable hunts of old is far more appealing than an appeal to hippie style tree-hugging.