A Philosopher's Blog

Posted in Ethics, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on November 15, 2017

While Whataboutism has long served as a tool for Soviet (and now Russian) propagandists, it has now become entrenched in American political discourse. It is, as noted by comedian John Oliver, a beloved tool of Fox News and President Trump.

Whataboutism is a variant of the classic ad hominem tu quoque fallacy. In the standard tu quoque fallacy it is concluded that a person’s claim is false because 1) it is inconsistent with something else the person has said or 2) what the person says is inconsistent with her actions. This type of “argument” has the following form:


  1. Person A makes claim X.
  2. Person B asserts that A’s actions or past claims are inconsistent with the truth of claim X.
  3. Therefore X is false.


The fact that a person makes inconsistent claims does not make any particular claim he makes false (although of any pair of inconsistent claims only one can be true—but both can be false). Also, the fact that a person’s claims are not consistent with his actions might indicate that the person is a hypocrite, but this does not prove his claims are false. Those noting the similarity to the Wikipedia entry on this fallacy will find that the citation for the form and example is to my work.

As would be expected, while the Russians used this tactic against the West, Americans use it against each other along political lines. For example, a Republican might “defend” Roy Moore by saying “what about Harvey Weinstein?” A Democrat might do the reverse. I mention that Democrats can use this in anticipation of comments to the effect of “what about Democrats using whataboutism?” People are, of course, free to use Bill Clinton in the example, if they prefer.  To return to the subject, the “reasoning” in both cases would be fallacious as is evident when the “logic” is laid bare:


  1. Premise 1: Person A of affiliation 1 is accused of X by person B of affiliation 2.
  2. Premise 2: Person C of affiliation 2 is accused of X by person D of affiliation 1.
  3. Conclusion: Therefore, A did not do X.


Obviously enough, whether C did X is irrelevant to whether or not it is true that A did X.


Alternatively:


  1. Premise 1: Person A of affiliation 1 is accused of X by person B of affiliation 2.
  2. Premise 2: Person C of affiliation 2 is accused of X by person D of affiliation 1.
  3. Conclusion: Therefore, it is not wrong that A did X.


Clearly, even if C did X it does not follow that A doing X was not wrong. This sort of “reasoning” can also be seen as a variant on the classic appeal to common practice fallacy. This fallacy has the following structure:


Premise 1. X is a common action.

Conclusion. Therefore, X is correct/moral/justified/reasonable, etc.


The basic idea behind the fallacy is that the fact that most people do X is used as “evidence” to support the action or practice. It is a fallacy because the mere fact that most people do something does not make it correct, moral, justified, or reasonable. In the case of whataboutism, the structure would be like this:


Premise 1. You said X is done by my side.

Premise 2. What about X done by your side?

Premise 3. So, X is commonly done/we both do X.

Conclusion: Therefore, X is correct/moral/justified/reasonable, etc.


It is also common for the tactic of false equivalency to be used in whataboutism: in actual use, the action attributed to one side is often not the moral equivalent of the action attributed to the other. As such, the form should be modified to account for the use of false equivalency:


Premise 1. You said X is done by my side.

Premise 2. What about Y, which I say is just as bad as X, done by your side?

Premise 3. So, things just as bad as X are commonly done/we both do things as bad as X.

Conclusion: Therefore, X is correct/moral/justified/reasonable, etc.


This would be a not-uncommon double fallacy. In this case, not only is the comparison between X and Y a false one, but even if they were equivalent, the fact that both sides do things that are equally bad would still not support the conclusion. Obviously enough, you should not accept this sort of reasoning—especially when it is being used to “support” a conclusion that is appealing.

Whataboutism can also be employed as a tool for creating a red herring. A Red Herring is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue. The basic idea is to “win” an argument by leading attention away from the argument and to another topic. This sort of “reasoning” has the following form:


  1. Topic A is under discussion.
  2. Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).
  3. Topic A is abandoned.


In the case of whataboutism, the structure would be as follows:


  1. Topic A, my side doing X, is under discussion.
  2. Topic B is introduced: what about X done by the other side?
  3. Topic A is abandoned.


In closing, it should be noted that if two sides are being compared, then it is obviously relevant to consider the flaws of both sides. For example, if the issue is whether to vote for candidate A or B, then it is reasonable to consider the flaws of both A and B in comparison. However, the flaws of A do not show that B does not have flaws and vice versa. Also, if the issue being discussed is the bad action of A, then bringing up B’s bad action does nothing to mitigate the badness of A’s action. Unless, of course, A had to take a seemingly bad action to protect themselves from B’s unwarranted bad action. For example, if A is accused of punching a person and it is shown that this was because B tried to kill A, then that would obviously be relevant to assessing the ethics of A’s action. But, if A assaulted women and B assaulted women, then bringing up B in a whataboutism to defend A would be an error in logic. Both would be bad.

As far as why you should be worried about whataboutism, the obvious reason is that it is a corrosive that eats at the very structure of truth and morality. While it is a tempting tool to deploy against one’s hated enemies (such as fellow Americans), it is not a precise weapon—each public use splashes the body of society with vile acid.


Reasoning & Natural Disasters II: Inductive Reasoning

Posted in Philosophy, Reasoning/Logic, Uncategorized by Michael LaBossiere on September 15, 2017

Fortunately for my adopted state of Florida, Irma weakened considerably as it moved northward. When it reached my adopted city of Tallahassee, it was barely a tropical storm. While it did some damage, it was nothing compared to last year’s storm. While this was a good thing, there can be a very minor downside when dire predictions turn out to be not so dire.

The problem is, of course, that people might take such dire predictions less seriously in the future. There is even a term for this: hurricane fatigue.  When people are warned numerous times about storms and they do not prove as bad as predicted, people tend to get tired of going through the process of preparation. Hence, they tend to slack off in their preparations—especially if they took the last prediction very seriously and engaged in extensive preparations. Such as buying absurd amounts of bottled water. The problem is, of course, that the storm a person does not prepare for properly might turn out to be as bad or worse than predicted. Interestingly enough, inductive reasoning is the heart of this matter in two ways.

Inductive reasoning is, of course, logic in which the premises provide some degree of support (but always less than complete) for the conclusion. Inductive arguments deal in probability and this places them in contrast with deductive arguments, which are supposed to deal in certainty. That is, having all true premises in a deductive argument is supposed to guarantee a true conclusion. While there are philosophers who believe that predictions about such things as the weather can be made deductively, the best current reasoning only allows inductive reasoning regarding weather prediction. To use a simple illustration, when a forecast says there is a 50% chance of rain, what is meant is that on 50% of the days like this one it rained. This is, in fact, an argument by analogy. With such a prediction, it should be no more surprising that it rains than it does not.
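
For readers who like to see the frequency reading in concrete terms, here is a minimal Python sketch; the day records are invented for illustration, but the idea is just that the forecast probability is the proportion of comparable past days on which it rained.

```python
# A minimal sketch of the frequency reading of a forecast: the "chance of rain"
# is the fraction of comparable past days on which it rained.
# The day records below are invented for illustration.

similar_days = [
    {"date": "2015-06-01", "rained": True},
    {"date": "2015-07-12", "rained": False},
    {"date": "2016-06-20", "rained": True},
    {"date": "2016-08-03", "rained": False},
]

def chance_of_rain(days):
    """Return the proportion of comparable past days on which it rained."""
    rainy = sum(1 for day in days if day["rained"])
    return rainy / len(days)

print(f"Chance of rain: {chance_of_rain(similar_days):.0%}")  # -> 50%
```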

While the computer modeling of hurricanes is rather complex, the predictions are still inductive in nature: all the evidence used in the reasoning can be true while the conclusion can still be false. This is because of the famous problem of induction—the gap between the premises and the conclusion means that no matter how strong the reasoning of an inductive argument, the conclusion can still be false. As such, any weather prediction can turn out to be false—even if the prediction is 99.99% likely to be accurate.  As such, it should be expected that weather predictions will often be wrong—especially since the models do not have complete information and are limited by the available processing power. That is, there is also a gap between reality and the models. There is also the philosophical question of whether the world is deterministic or not—in a deterministic world, weather would be fully predictable if there was enough information and processing power available to create a perfect model of reality. In a non-deterministic world, even a perfect model could still fail to predict what will happen in the real world. As such, there is both a problem in epistemology (what do we know) and metaphysics (what is the nature of reality).

Interestingly enough, when people start to distrust predictions after past predictions turn out to be wrong, they are also engaging in inductive reasoning. To be specific, if many predictions have turned out to be wrong, then it can be reasonable to infer that the next prediction could be wrong. That is certainly reasonable and thinking that an inductive argument could have a false conclusion is no error.

Where people go wrong is when they place too much confidence in the conclusion that the prediction will be wrong. One way this can happen is through a variation on the gambler’s fallacy. In the classic gambler’s fallacy, a person assumes that a departure from what occurs on average or in the long term will be corrected in the short term. For example, if a person concludes that tails is due because they have gotten heads six times in a row, then they have committed this fallacy. In the case of the “hurricane fallacy” a person overconfidently infers that the streak of failed predictions must continue. The person could, of course, turn out to be right. The error lies in the overconfidence in the conclusion that the prediction will be wrong. Sorting out the confidence one should have in their doubt is a rather challenging matter because it requires understanding the accuracy of the predictions.
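
For those who want to see the gambler’s fallacy exposed numerically, here is a small Python simulation of a fair coin (simulated flips, not real forecast data): even right after a run of six heads, tails shows up only about half the time.

```python
# A fair-coin simulation: even immediately after six heads in a row, the next
# flip comes up tails only about half the time, so the streak does not make
# tails "due."
import random

random.seed(0)
flips = [random.choice("HT") for _ in range(500_000)]

# Collect the flip that follows every run of six heads.
after_streak = [
    flips[i]
    for i in range(6, len(flips))
    if flips[i - 6:i] == list("HHHHHH")
]

share_tails = after_streak.count("T") / len(after_streak)
print(f"Share of tails right after six heads: {share_tails:.3f}")  # roughly 0.5
```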

As a practical matter, one way to address hurricane fatigue is to follow some excellent advice: rather than going through mad bursts of last-second preparation, always be prepared at the recommended minimum level. That is, have enough food and water on hand for three days and make basic preparations for being without power or evacuating. Much of this can easily be integrated into one’s normal life. For example, consuming and replacing canned and dried goods throughout the year means that one will have suitable food on hand. There are also one-time preparations, such as acquiring some crank-powered lights, a small solar panel for charging smart phones, and a basic camp stove with a few propane canisters to store.

This does lead to a final closing point, namely the cost of preparation. Since I have a decent income, I can afford to take the extra steps of being always ready for a disaster. That is, I can buy the lights, stove, propane, and such and store them. However, this is not true of everyone. When I was at Publix before the storm, I spoke to some people who said that it was hard for them to get ready for storms—they needed their money for other things and could not afford to have a stockpile of unused supplies, let alone things like solar panels or generators. The upfront cost of stockpiling in preparation for the storm was also a challenge—there are, as far as I know, no emergency “storm loans” or rapid aid to help people gear up for impending storms. No doubt some folks would be terrified that storm moochers would be living fat on the public’s money during storms. However, storm aid does sound like a decent idea and could even be a cost saver for the state. After all, the better prepared people are before the storm, the less the state and others must do during and after the storm.


Reasoning & Natural Disasters

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on September 8, 2017

As this is being written, Irma is scouring her way across the Atlantic and my adopted state of Florida will soon feel her terrible embrace. Nearby, Texas is still endeavoring to dry out from its own recent watery disaster. The forces of nature can be overwhelming in their destructive power, but poor reasoning on the part of humans can contribute to the magnitude of a natural disaster. As such, it is worth considering how poor reasoning impacts disaster planning (or the lack of planning) both by individuals and by the state.

While human activity can impact nature, the power of nature can kill any human and sweep away anything we can construct. As such, even the best planning can come to nothing. To think that because perfect planning is impossible we should simply let nature shake the dice for us would be to fall into the classic perfectionist fallacy. This is to engage in a false dilemma in which the two assumed options are doing nothing or having a perfect option. While there are no perfect options, there are almost always those that are better than nothing. As such, the first form of bad reasoning to overcome is this (fortunately relatively rare) view that there is no point in planning because something can always go wrong.

Another reason why people tend to not prepare properly is another classic fallacy, that of wishful thinking. This is an error of reasoning in which a person concludes that because they really want something to be true, it follows that it is true. While people do know that a disaster can impact them, it is natural to reject the possibility until it becomes a reality. In many cases, people engage in wishful thinking while the disaster is approaching, feeling that since they do not want it to arrive it follows that it will not. As such, they put off planning and preparation—perhaps until it is too late. This is not to say that people should fall into a form of woeful thinking (the inference that whatever one does not wish to happen will happen)—that would be equally a mistake. Rather, people should engage in the rather difficult task of believing what is supported by the best available evidence.

People also engage in the practice of discounting the future. This is a mistake of valuing a near good more than a future good simply because of the time factor. This is not, of course, to deny that time is a relevant factor in considering value. In the case of mitigating disasters, preparing now incurs a cost in time and resources that will not pay off until later (or even never). For example, money a city spends building storm surge protection is money that will not be available to improve the city parks.

Connected to the matter of time is also the matter of probability—as noted, while disaster preparation might yield benefits in the future, it might not. There is thus a double discount: time and probability. As such, a rational assessment of the value of disaster preparation needs to consider both time and chance—will disasters strike and, if so, when will they strike?
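
To make the double discount concrete, here is a hedged sketch in Python; the dollar amounts, probability, and discount rate are invented for illustration, not real figures.

```python
# A hedged sketch of the "double discount": the present value of a preparation
# measure is its future benefit, discounted both for time and for the chance
# that the disaster never arrives. All figures are invented for illustration.

def discounted_expected_benefit(benefit, probability, years, discount_rate):
    """Expected future benefit, discounted back to the present."""
    return probability * benefit / (1 + discount_rate) ** years

# A measure that would avert $1,000,000 in damage from a disaster with a 10%
# chance of striking 20 years from now, using a 3% annual discount rate:
value_now = discounted_expected_benefit(1_000_000, 0.10, 20, 0.03)
print(f"Present expected value of the avoided damage: ${value_now:,.0f}")
# Roughly $55,000, which helps explain why distant, unlikely disasters attract
# so little spending now.
```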

As would be suspected, the more distant a disaster (such as a “500 year flood”) and the less likely the disaster (such as a big meteor hitting the earth), the less people are willing to expend resources now. This can be rational, provided that these factors are given due consideration. There is also the fact that these considerations become quite philosophical in that they are considerations of value rather than purely mathematical calculations. To illustrate, determining whether I should contribute to preparing against a disaster that will not arrive until well after I am dead of old age is a matter of moral consideration and thus requires philosophical reasoning to sort out. Such reasoning need not be bad reasoning and these considerations show why disaster planning can be quite problematic even when people are reasoning well. However, problems do arise when people are unclear (or dishonest) about what values are in play. As such, reasoning well about disaster preparation requires being clear about the values that are informing the decision-making process. Since such considerations typically involve politics and economics, deceit is to be expected.

Another factor is nicely illustrated by a story from Sun Tzu’s Art of War, which relates how a lord of ancient China asked his physician, a member of a family of healers, which of them was the most skilled in the art:


The physician, whose reputation was such that his name became synonymous with medical science in China, replied, “My eldest brother sees the spirit of sickness and removes it before it takes shape, so his name does not get out of the house.

“My elder brother cures sickness when it is still extremely minute, so his name does not get out of the neighborhood.

“As for me, I puncture veins, prescribe potions, and massage skin, so from time to time my name gets out and is heard among the lords.”


While there are some exceptions, politicians and leaders often act to get attention and credit for their deeds. As the above story indicates, there is little fame to gain by quietly preventing disasters. There is, however, considerable attention and credit to be gained by publicly handling a disaster well (and great infamy to be gained by handling it badly). As such, there is little appeal in preparation, for it earns no glory.

There is also the fact that while people can assess what has happened, sorting out what was prevented is rather more challenging. For example, while people clearly notice when a city loses power due to a storm, few would realize when effective planning and infrastructure modification prevented a storm from knocking out the power. After all, the power just keeps on going. Motivating people by trying to appeal to what will be prevented (or what was prevented) can be quite challenging. This can also be illustrated by how some people look at running. Whenever a runner drops dead, my non-running friends will rush to point this out to me, claiming that it is great they do not run because otherwise they would die. When I try to point to the millions of runners who are healthier and live longer than non-runners, they find the absence of early death far less influential.

To be fair, sorting out that something did not happen and why it did not happen can be rather complicated. However, what seems to be an ever-increasing frequency of natural disasters requires that these matters be addressed. While it might not be possible to persuade people of the value of prevention so that they will commit adequate resources to the effort, it is something that must be attempted.


Weight Loss, Philosophy & Science

Posted in Philosophy, Reasoning/Logic, Running, Science, Sports/Athletics by Michael LaBossiere on August 2, 2017

When I was young and running 90-100 miles a week, I could eat all the things without gaining weight. Time is doubly cruel in that it slowed my metabolism and reduced my ability to endure high mileage. Inundated with the usual abundance of high calorie foods, I found I was building an unsightly pudge band around my middle. My first reaction was to try to get back to my old mileage, but I found that I now top out at 70 miles a week and anything more starts breaking me down. Since I could not exercise more, I was faced with the terrible option of eating less. Being something of an expert on critical thinking, I dismissed all the fad diets and turned to science to glean the best way to beat the bulge. Being a philosopher, I naturally misapplied the philosophy of science to this problem with some interesting results.

Before getting into the discussion, I am morally obligated to point out that I am not a medical professional. As such, what follows should be regarded with due criticism and you should consult a properly credentialed expert before embarking on changes to your exercise or nutrition practices. Or you might die. Probably not; but maybe.

As any philosopher will tell you, while the math used in science is deductive (the premises are supposed to guarantee the conclusion with certainty) scientific reasoning is inductive (the premises provide some degree of support for the conclusion that is less than complete). Because of this, science suffers from the problem of induction. In practical terms, this means that no matter how carefully the reasoning is conducted and no matter how good the evidence is, the conclusion drawn from the evidence can still be false. The basis for this problem is the fact that inductive reasoning involves a “leap” from the evidence/premises (what has been observed) to the conclusion (what has not been observed). Put bluntly, inductive reasoning can always lead to a false conclusion.

Scientists and philosophers have long endeavored to make science a deductive matter. For example, Descartes believed that he could find truths that he could know with certainty and then use valid deductive reasoning to generate a true conclusion with absolute certainty. Unfortunately, this science of certainty is the science of the future and always will be. So, we are stuck with induction.

The problem of induction obviously applies to the sciences that study nutrition, exercise and weight loss and, as such, the conclusions made in these sciences can always be wrong. This helps explain why the recommendations about these matters change relentlessly.

While there are philosophers of science who would disagree, science is mostly a matter of trying to figure things out by doing the best that can be done at the time. This is limited by the resources (such as technology) available at the time and by human epistemic capabilities. As such, whatever science is presenting at the moment is almost certainly at least partially wrong; but the wrongs get reduced over time. Or increase sometimes. This is true of all the sciences—consider, for example, the changes in physics since Thales began it. This also helps explain why the recommendations about diet and exercise change constantly.

While science is sometimes presented as a field of pure reason outside of social influences, science is obviously a social activity conducted by humans. Because of this, science is influenced by the usual social factors and human flaws. For example, scientists need money to fund their research and can thus be vulnerable to corporations looking to “prove” various claims that are in their interest. As another example, scientific matters can become issues of political controversy, such as evolution and climate change. This politicization tends to derange science. As a final example, scientists can be motivated by pride and ambition to fudge or fake results. Because of these factors, the sciences dealing with nutrition and exercise are significantly corrupted and this makes it difficult to make a rational judgment about which claims are true. One excellent example is how the sugar industry paid scientists at Harvard to downplay the health risks presented by sugar and play up those presented by fat. Another illustration is the fact that the food pyramid endorsed by the US government has been shaped by the food industries rather than being based entirely on good science.

Given these problems it might be tempting to abandon mainstream science and go with whatever fad or food ideology one finds appealing. That would be a bad idea. While science suffers from these problems, mainstream science is vastly better than the nonscientific alternatives—they tend to have all of the problems of science without having its strengths. So, what should one do? The rational approach is to accept the majority opinion of the qualified and credible experts. One should also keep in mind the above problems and approach the science with due skepticism.

So, what are some of the things the best science of today says about weight loss? First, humans evolved as hunter-gatherers and getting enough calories was a challenge. As such, humans tend to be very good at storing energy in the form of fat, which is one reason the calorie-rich environment of modern society contributes to obesity. Crudely put, it is in our nature to overeat—because that once meant the difference between life and death.

Second, while exercise does burn calories, it burns far less than many imagine. For most people, the majority of calorie burning is a result of the body staying alive. As an example, I burn about 4,000 calories on my major workout days (estimated based on my Fitbit and activity calculations). But, about 2,500 of those calories are burned just staying alive. On those days I work out about four hours and I am fairly active the rest of the day. As such, while exercising more will help a person lose weight, the calorie impact of exercise is surprisingly low—unless you are willing to commit considerable time to exercise. That said, you should exercise—in addition to burning calories it has a wide range of health benefits.
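
For the numerically inclined, the rough arithmetic behind this point, using my own estimates from above (Fitbit-based guesses, not lab measurements), looks like this:

```python
# The rough split of my estimated numbers for a major workout day
# (Fitbit-based estimates, not lab measurements).
total_burn = 4000    # estimated total burn on a heavy workout day (kcal)
resting_burn = 2500  # estimated burn from simply staying alive (kcal)

activity_burn = total_burn - resting_burn
share = activity_burn / total_burn
print(f"Exercise and daily activity: {activity_burn} kcal ({share:.0%} of the total)")
# About 1,500 kcal, or roughly 38% of the day's burn, despite about four hours
# of working out.
```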

Third, hunger is a function of the brain and the brain responds differently to different foods. Foods high in protein and fiber create a feeling of fullness that tends to turn off the hunger signal. Foods with a high glycemic index (like cake) tend to stimulate the brain to cause people to consume more calories. As such, manipulating your brain is an effective way to increase the chance of losing weight. Interestingly, as Aristotle argued, habituation to foods can train the brain to prefer foods that are healthier—that is, you can train yourself to prefer things like nuts, broccoli and oatmeal over cookies, cake, and soda. This takes time and effort, but can obviously be done.

Fourth, weight loss has diminishing returns: as one loses weight, one’s metabolism slows and less energy is needed. As such, losing weight makes it harder to lose weight, which is something to keep in mind.  Naturally, all of these claims could be disproven in the next round of scientific investigation—but they seem quite reasonable now.
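
To put a rough number on that fourth point, here is a hedged Python sketch using the Mifflin-St Jeor estimate of resting metabolic rate; the height, age, and weights are hypothetical and the formula is itself only an approximation.

```python
# A hedged illustration of why the resting burn shrinks as weight drops, using
# the Mifflin-St Jeor estimate for men (10*kg + 6.25*cm - 5*age + 5).
# The height, age, and weights below are hypothetical.

def resting_burn_kcal(weight_kg, height_cm, age_years):
    """Estimated resting metabolic rate for a man, in kcal/day (Mifflin-St Jeor)."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_years + 5

height_cm, age_years = 178, 50
for weight_kg in (90, 80, 70):
    rmr = resting_burn_kcal(weight_kg, height_cm, age_years)
    print(f"{weight_kg} kg -> about {rmr:.0f} kcal/day at rest")
# Each 10 kg lost trims roughly 100 kcal/day from the resting burn, so the same
# diet produces a smaller deficit as the weight comes off.
```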


Poverty & the Brain

Posted in Business, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on July 14, 2017

A key part of the American mythology is the belief that a person can rise to the pinnacle of success from the depths of poverty. While this does occur, most understand that poverty presents a considerable obstacle to success. In fact, the legendary tales that tell of such success typically embrace an interesting double vision of poverty: they praise the hero for overcoming the incredible obstacle of poverty while also asserting that anyone with gumption should be able to achieve this success.

Outside of myths and legends, it is a fact that poverty is difficult to overcome. There are, of course, the obvious challenges of poverty. For example, a person born into poverty will not have the same educational opportunities as the affluent. As another example, they will have less access to technology such as computers and high-speed internet. As a third example, there are the impacts of diet and health care—both necessities are expensive and the poor typically have less access to good food and good care. There is also recent research by scientists such as Kimberly G. Noble  that suggests a link between poverty and brain development.

While the most direct way to study the impact of poverty on the brain is by imaging the brain, this (as researchers have noted) is expensive. However, the research that has been conducted shows a correlation between family income and the size of some surface areas of the cortex. For children whose families make under $50,000 per year, there is a strong correlation between income and the surface area of the cortex. While greater income is correlated with greater cortical surface area, the apparent impact is reduced once the income exceeds $50,000 a year. This suggests, but does not prove, that poverty has a negative impact on the development of the cortex and this impact is proportional to the degree of poverty.

Because of the cost of direct research on the brain, most research focuses on cognitive tests that indirectly test for the functionality of the brain. As might be expected, children from lower income families perform worse than their more affluent peers in their language skills, memory, self-control and focus. This performance disparity cuts across ethnicity and gender.

As would be expected, there are individuals who do not conform to the general correlation. That is, there are children from disadvantaged families who perform well on the tests and children from advantaged families who do poorly. As such, knowing the economic class of a child does not tell one what their individual capabilities are. However, there is a clear correlation when the matter is considered in terms of populations rather than single individuals. This is important to consider when assessing the impact of anecdotes of successfully rising from poverty—as with all appeals to anecdotal evidence, they do not outweigh the bulk of statistical evidence.

To use an analogy, boys tend to be stronger than girls but knowing that Sally is a girl does not entail that one knows that Sally is weaker than Bob the boy. Sally might be much stronger than Bob. An anecdote about how Sally is stronger than Bob also does not show that girls are stronger than boys; it just shows that Sally is unusual in her strength. Likewise, if Sally lives in poverty but does exceptionally well on the cognitive tests and has a normal cortex, this does not prove that poverty does not have a negative impact on the brain. This leads to the obvious question about whether poverty is a causal factor in brain development.

Those with even passing familiarity with causal reasoning know that correlation is not causation. To infer that there must be a causal connection simply because there is a correlation between poverty and cognitive abilities would be to fall victim to the most basic of causal fallacies. One possibility is that the correlation is a mere coincidence and there is no causal connection. Another possibility is that there is a third factor that is causing both—that is, poverty and the cognitive abilities are both effects.
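
A toy simulation (with invented numbers that stand in for no real measurements) shows how a hidden third factor can do this:

```python
# A toy illustration of a third-factor (common cause) explanation: x and y end
# up correlated even though neither causes the other. The numbers are invented.
import random

random.seed(1)

data = []
for _ in range(10_000):
    z = random.gauss(0, 1)       # unobserved common cause
    x = z + random.gauss(0, 1)   # first effect of z
    y = z + random.gauss(0, 1)   # second effect of z
    data.append((x, y))

def correlation(pairs):
    """Pearson correlation, computed by hand to keep the sketch self-contained."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs) / n
    sd_x = (sum((x - mean_x) ** 2 for x, _ in pairs) / n) ** 0.5
    sd_y = (sum((y - mean_y) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sd_x * sd_y)

print(f"Correlation between x and y: {correlation(data):.2f}")  # about 0.5
```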

There is also the possibility that the causal connection has been reversed. That is, it is not poverty that increases the chances a person has less cortical surface (and corresponding capabilities). Rather, it is having less cortical surface area that is a causal factor in poverty.

This view does have considerable appeal. As noted above, children in poverty tend to do worse on tests for language skills, memory, self-control and focus. These are the capabilities that are needed for success and it seems reasonable to think that people who were less capable would thus be less successful. To use an analogy, there is a clear correlation between running speed and success in track races. It is not, of course, losing races that makes a person slow. It is being slow that causes a person to lose races.

Despite the appeal of this interpretation of the data, to rush to the conclusion that it is the cognitive abilities that cause poverty would be as much a fallacy as rushing to the conclusion that poverty influences brain development. Both views do seem plausible and it is certainly possible that there is causation going in both directions. The challenge, then, is to sort the causation. The obvious approach is to conduct the controlled experiment suggested by Noble—providing the experimental group of low income families with an income supplement and providing the control group with a relatively tiny supplement. If the experiment is conducted properly and the sample size is large enough, the results would be statistically significant and provide an answer to the question of the causal connection.
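
For the statistically inclined, here is a hedged sketch of the sort of comparison such an experiment would require; the outcome scores are invented, and a simple permutation test stands in for whatever analysis the actual researchers would use.

```python
# A hedged sketch of analyzing such an experiment: is the gap in outcome scores
# between the supplemented group and the control group larger than chance alone
# would plausibly produce? All scores are invented for illustration.
import random

random.seed(2)
supplemented = [random.gauss(102, 15) for _ in range(200)]  # hypothetical scores
control = [random.gauss(100, 15) for _ in range(200)]

def mean(xs):
    return sum(xs) / len(xs)

observed_gap = mean(supplemented) - mean(control)

# Permutation test: shuffle the group labels many times and count how often a
# gap at least as large as the observed one arises by chance.
pooled = supplemented + control
extreme = 0
trials = 5_000
for _ in range(trials):
    random.shuffle(pooled)
    gap = mean(pooled[:200]) - mean(pooled[200:])
    if abs(gap) >= abs(observed_gap):
        extreme += 1

print(f"Observed gap: {observed_gap:.2f} points; p is roughly {extreme / trials:.3f}")
```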

Intuitively, it makes sense that an adequate family income would generally have a positive impact on the development of children. After all, this income would allow access to adequate food, care and education. It would also tend to have a positive impact on family conditions, such as emotional stress. This is not to say that throwing money at poverty is the cure; but reducing poverty is certainly a worthwhile goal regardless of its connection to brain development. If it does turn out that poverty does have a negative impact on development, then those who are concerned with the well-being of children should be motivated to combat poverty. It would also serve to undercut another American myth, that the poor are stuck in poverty simply because they are lazy. If poverty has the damaging impact on the brain it seems to have, then this would help explain why poverty is such a trap.


False Allegiance

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on July 12, 2017

One of the key distinctions in critical thinking is that between persuasion and argumentation. While an argument can be used to persuade, the object of an argument is truth. More specifically, the goal is to present evidence/reasons (known as premises) that logically support the conclusion. In contrast, the goal of persuasion is the acceptance of a claim as true, whether the claim is true or not. As should be expected, argumentation is rather ineffective as a tool of persuasion. Rhetorical devices, which are linguistic tools aimed at persuading, are rather more effective in achieving this goal. While there are many different rhetorical devices, one rather interesting one is what can be called False Allegiance. Formalized, the device is simple:

  1. A false statement of allegiance to a group, ideology or such is made.
  2. A statement that seems contrary to the professed allegiance is made, typically presented as being done with reluctance. This is often criticism or an attack.

While there is clearly no logical connection between the (false) statement of allegiance and the accuracy of the statement, a psychological connection can be made. The user’s intent is that their claim of allegiance will grant them credibility and thus make their claim more believable. This perceived credibility could be a matter of the target believing that the critic has knowledge of the matter because of their alleged allegiance. However, the main driving force behind the perceived credibility is typically the assumption that a person who professes allegiance to something will be honest in their claims about their alleged group. That is, they would not attack what they profess allegiance to unless there was truth behind the attack.

Like almost all rhetorical devices, False Allegiance has no allegiance of its own and can be pressed into service for any cause. As an illustration, it works just as well to proclaim a false allegiance to the Democrats as it does to the Republicans. For example, “Although I am a life-long Democrat, and it pains me to do so, I must agree that Trump is right about voter fraud. We need to ensure that illegals are not casting votes in our elections and so voter ID laws are a great idea.” As another example, “I have always voted for Republicans, so it is with great reluctance that I say that Trumpcare is a terrible idea.”

Looking at these examples, one might point out that these claims could be made with complete sincerity. That is, a Democrat could really believe that voter ID laws are a great idea and a Republican could think that Trumpcare is a terrible idea. In other words, the professed allegiance could be sincere. This is certainly a point worth considering, and not everything that looks like a case of False Allegiance is actually this rhetorical device.

In cases in which the person making the claims is known, it is possible to determine if the allegiance is false or not. For example, if John McCain says, “Although I am a loyal Republican I…”, then it is reasonable to infer this is not a case of false allegiance. However, if the identity and allegiance of the person making the claims cannot be confirmed, then the possibility that this device is being used remains.

Fortunately, defending against this device does not require being able to confirm (or deny) the allegiance of the person making the relevant claims. This is because the truth (or falsity) of the assertions being made is obviously independent of the allegiance and identity of the person making the claims. If the claims are adequately supported by evidence or reasons, then it would be reasonable to accept them—regardless of who makes the claims or why they are being made. If the claims are not adequately supported, then it would be unreasonable to accept them. This does not entail that they should be rejected—after all, just as a rhetorical device does not prove anything, its usage does not disprove anything.

It needs to be emphasized that even if it is shown that the person making the claim has a true allegiance, it does not follow that their claim is thus true. After all, this reasoning is clearly fallacious: “I have an allegiance to X, so what I say about X is true.” They would not be using the False Allegiance rhetorical device, but could be using an appeal to allegiance, which would simply be another type of rhetoric.

In practical terms, when assessing a claim one should simply ignore such professions of allegiance. This is because they have no logical relevance to the claim being made. They can, obviously enough, have psychological force—but this is merely a matter of the power to persuade and not the power to prove.


The Curse of Springtime

Posted in Philosophy, Reasoning/Logic, Uncategorized by Michael LaBossiere on April 3, 2017

As a professional philosopher, I am not inclined to believe in curses. However, my experiences over the years have convinced me that I am the victim of what I call the Curse of Springtime. As far as I know, this curse is limited to me and I do not want anyone to have the impression that I regard Springtime Tallahassee in a negative light. Here is the tale of the curse.

For runners, the most important part of Springtime is the Springtime 10K (and now the 5K). Since I moved to Tallahassee in 1993, I have had something bad happen right before or during the race. Some examples: one year I had a horrible sinus infection. Another year I had my first ever muscle pull. Yet another year I was kicking the kickstand of my Yamaha, slipped and fell, thus injuring my back. 2008 saw the most powerful manifestation of the curse.

On the Thursday before the race, my skylight started leaking. So, I (stupidly) went up to fix it. When I was coming down, the ladder shot out from under me. I landed badly and suffered a full quadriceps tendon tear that took me out of running for months. When Springtime rolled around in 2009 I believed that the curse might kill me and I was extra cautious. The curse seemed to have spent most of its energy on that injury, because although the curse did strike, it was minor. But, the curse continued: I would either get sick or injured soon before the race, or suffer an injury during the race. This year, 2017, was no exception. My knees and right foot started bothering me a week before the race and although I rested up and took care of myself, I was unable to run on Thursday. I hobbled through the 10K on Saturday, cursing the curse.

Since I teach critical thinking, I have carefully considered the Curse of Springtime and have found it makes a good example for applying methods of causal reasoning. I started with the obvious, considering that I was falling victim to the classic post hoc, ergo propter hoc (“after this, therefore because of this”). This fallacy occurs when it is uncritically assumed that because B follows A, A must be the cause of B. To infer that Springtime is causing the bad events just because they always happen as Springtime arrives would be to fall into this fallacy. To avoid this fallacy, I would need to sort out a possible causal mechanism—mere correlation is not causation.

One thing that might explain some of the injuries and illnesses is the fact that the race occurs at the same time each year. By the time Springtime rolls around, I have been racing hard since January and training hard as well—so it could be that I am always worn out at this time of year. As such, I would be at peak injury and illness vulnerability. On this hypothesis, there is no Curse—I just get worn down at the same time each year because I have the same sort of schedule each year. However, this explanation does not account for all the incidents—as noted above, I have also suffered injuries that had nothing to do with running, such as falls. Also, sometimes I am healthy and injury free before the race, then have something bad happen in the race itself. As such, the challenge is to find an explanation that accounts for all the adverse events.

It is certainly worth considering that while the injuries and illnesses can be explained as noted above, the rest of the incidents are mere coincidences: it just so happens that when I am not otherwise ill or injured, something has happened. While improbable, this is not impossible. That is, it is not beyond the realm of possibility for random things to always happen for the same race year after year.

It is also worth considering that it only seems that there is a curse because I am ignoring the other bad races I have and considering only the bad Springtime races. If I have many bad races each year, it would not be unusual for Springtime to be consistently bad. Fortunately, I have records of all my races and can look at it objectively: while I do have some other bad races, Springtime is unique in that something bad has happened every year. The same is not true of any other races. As such, I do not seem to be falling into a sort of Texas Sharpshooter Fallacy by only considering the Springtime race data and not all my race data.

There is certainly the possibility that the Curse of Springtime is psychological: because I think something bad will happen it becomes a self-fulfilling prophecy. Alternatively, it could be that because I expect something bad to happen, I carefully search for bad things and overestimate their badness, thus falling into the mistake of confirmation bias: Springtime seems cursed because I am actively searching for evidence of the curse and interpreting events in a way that support the curse hypothesis. This is certainly a possibility and perhaps any race could appear cursed if one spent enough effort seeking evidence of an alleged curse. That said, there is no such consistent occurrence of unfortunate events for any other race, even those that I have run every year since I moved here. This inclines me to believe that there is some causal mechanism at play here. Or a curse. But, I am aware of the vagaries of chance and it could simply be an unfortunate set of coincidences that every Springtime since 1994 has seemed cursed. But, perhaps in 2018 everything will go well and I can dismiss my belief in the curse as mere superstition. Unless the curse kills me then. You know, because curse.


Conservative Conservation

Posted in Ethics, Politics, Reasoning/Logic, Science by Michael LaBossiere on March 1, 2017

While the scientific evidence for climate change is overwhelming, it has become an ideological matter. In the case of conservatives, climate change denial has become something of a stock position. In the case of liberals, belief in human-caused climate change is a standard position. Because of the way ideological commitments influence thought, those who are committed to climate change denial tend to become immune to evidence or reasons offered against their view. In fact, they tend to double down in the face of evidence—which is a standard defense people use to protect their ideological identity. This is not to say that all conservatives deny climate change; many accept it is occurring. However, conservatives who accept the reality of climate change tend to deny that it is caused by humans.

This spectrum of beliefs does tend to match the shifting position on climate change held by influential conservatives such as Charles Koch. The initial position was a denial of climate change. This shifted to the acceptance of climate change, but a rejection of the claim that it is caused by humans. The next shift was to accept that climate change is caused by humans, but that it is either not as significant as the scientists claim or that it is not possible to solve the problem. One obvious concern about this slow shift is that it facilitates the delay of action in response to the perils of climate change. If the delay continues long enough, there really will be nothing that can be done about climate change.

Since many conservatives are moving towards accepting human caused climate change, one interesting problem is how to convince them to accept the science and to support effective actions to offset the change. As I teach the students in my Critical Inquiry class, using logic and evidence to try to persuade people tends to be a poor option. Fallacies and rhetoric are vastly more effective in convincing people. As such, the best practical approach to winning over conservatives is not by focusing on the science and trying to advance rational arguments. Instead, the focus should be on finding the right rhetorical tools to win people over.

This does raise a moral concern about whether it is acceptable to use such tactics to get people to believe in climate change and to persuade them to act. One way to justify this approach is on utilitarian grounds: preventing the harms of climate change morally outweighs the moral concerns about using rhetoric rather than reason to convince people. Another way to justify this approach is to note that the goal is not to get people to accept an untruth or to do something morally questionable. Quite the contrary, the goal is to get people to accept scientifically established facts and to act in defense of the wellbeing of humans in particular and the ecosystem in general. As such, using rhetoric when reason fails seems warranted in this case. The question is then what sort of rhetoric would work best.

Interestingly, many conservative talking points can be deployed to support acting against climate change. For example, many American conservatives favor energy independence and keeping jobs in America. Developing sustainable energy within the United States, such as wind and solar power, would help with both. After all, while oil can be shipped from Saudi Arabia, shipping solar power is not a viable option (at least not until massive and efficient batteries become economically viable). The trick is, of course, to use rhetorical camouflage to hide the fact that the purpose is to address climate change and environmental issues. As another example, many American conservatives tend to be pro-life—this can be used as a rhetorical angle to argue against pollution that harms fetuses. Of course, this is not likely to be a very effective approach if the main reasons someone is anti-abortion are not based in concern about human life and well-being. As a final example, clean water is a valuable resource for business because industry needs clean water and, of course, humans do as well. Thus, environmental protection of water can be sold with the rhetorical cover of being pro-business rather than pro-environment.

Thanks to a German study, there is evidence that one effective way to persuade conservatives to be concerned about climate change is to appeal to the fact that conservatives value preserving the past. This study showed that conservatives were influenced significantly more by appeals to restoring the earth to the way it was than by appeals to preventing future environmental harms. That is, conservatives were more swayed by appeals to conservation than by appeals to worries about future harms. As such, those wishing to gain conservative support for combating climate change should focus not on preventing the harms that will arise, but on making the earth great again. Many conservatives enjoy hunting, fishing and the outdoors and no doubt the older ones remember (or think they remember) how things were better when they were young. As examples, I’ve heard people talk about how much better the hunting used to be and how the fish were so much bigger, back in the good old days. This provides an excellent narrative for getting conservatives on board with addressing climate change and environmental issues. After all, presenting environmental protection as part of being a hunter and getting back to the memorable hunts of old is far more appealing than an appeal to hippie style tree-hugging.


The Democrats and the Ku Klux Klan

Posted in Ethics, Philosophy, Politics, Reasoning/Logic, Uncategorized by Michael LaBossiere on February 13, 2017

One interesting tactic employed by the Republicans is to assert, in response to charges of racism against one of their number, that the Democrats are “the party of the Ku Klux Klan.” This tactic was most recently used by Senator Ted Cruz in defense of Jeff Sessions, Trump’s nominee for attorney general.

Cruz went beyond merely claiming the Democrats formed the Klan; he also asserted that the Democrats were responsible for segregation and the infamous Jim Crow laws. As Cruz sees it, the Democrats’ tactic is to “…just accuse anyone they disagree with of being racist.”

Ted Cruz is right about the history of the Democratic party. After the Civil War, the southern Democratic Party explicitly identified itself as the “white man’s party” and accused the Republican party of being “negro dominated.” Some Southern Democrats did indeed support Jim Crow and joined the KKK.

What Ted fails to mention is that as the Democrats became the party associated with civil rights, the Republicans engaged in what has become known as the “southern strategy.” In short, the Republicans appealed to racism against blacks in order to gain political power in the south. Though ironic given the history of the two parties, this strategy proved to be very effective and many southern Democrats became southern Republicans. In some ways, the result was analogous to exchanging the wine in two bottles: the labels remain the same, but the contents have been swapped. As such, while Ted has the history correct, he is criticizing the label rather than the wine.

Another metaphor is the science fiction brain transplant. If Bill and Sam swapped brains, it would appear that Sam was guilty of whatever Bill did, because he now has Bill’s body. However, when it comes to such responsibility what matters is the brain. Likewise for the swapping of political parties in the south: the Southern Democrats condemned by Cruz became the southern Republicans that he now praises. Using the analogy, Ted is condemning the body for what the old brain did while praising that old brain because it is in a new body.

As a final metaphor, consider two cars and two drivers. Driving a blue car, Bill runs over a person. Sam, driving a red car, stops to help the victim. Bill then hops in the red car and drives away while Sam drives the victim to the hospital in the blue car. When asked about the crime, Ted insists that Sam is guilty because he is in the blue car now and praises Bill because he is in the red car now. Obviously enough, the swapping of parties no more swaps responsibility than the swapping of cars.

There is also the fact that Cruz is engaged in the genetic fallacy—he is rejecting what the Democrats are saying now because of a defect in the Democratic party of the past. The fact that the Democrats of that era did back Jim Crow and segregation is irrelevant to the merit of claims made by current Democrats about Jeff Sessions (or anything else). When the logic is laid bare, the fallacy is quite evident:


Premise 1: Some Southern Democrats once joined the KKK.

Premise 2: Some Southern Democrats once backed segregation and Jim Crow Laws.

Conclusion: The current Democrats claims about Jeff Sessions are untrue.


As should be evident, the premises have no logical connection to the conclusion, hence Cruz’s reasoning is fallacious. Since Cruz is a smart guy, he obviously knows this—just as he is aware that fallacies are far better persuasive tools than good arguments.

The other part of Cruz’s KKK gambit is to say that the Democrats rely on accusations of racism as their tactic. Cruz is right that a mere accusation of racism does not prove that a person is racist. If it is an unsupported attack, then it proves nothing. Cruz’s tactic does gain some credibility from the fact that accusations of racism are all too often made without adequate support. Both ethics and critical thought require that one properly review the evidence for such accusations and not simply accept them. As such, if the Democrats were merely launching empty ad hominem attacks on Sessions (or anyone), then these attacks should be dismissed.

In making his attack on the Southern Democrats of the past, Cruz embraces the view that racism is a bad thing. After all, his condemnation of the current Democrats requires that he condemn the past Democrats for their support of racism, segregation and Jim Crow laws. As such, he purports to agree with the current Democrats’ professed view that racism is bad. But, he condemns them for making what he claims are untrue charges of racism. This, then, is the relevant concern: which claims, if any, made by the Democrats about Sessions being a racist are true? The Democrats claimed that they were offering evidence of Sessions’ racism while Cruz’s approach was to accuse the Democrats of being racists of old and engaging in empty accusations today. He did not, however, address the claims made by the Democrats or their evidence. As such, Cruz’s response has no merit from the perspective of logic. As a rhetorical move, however, it has proven reasonably successful.


Bans & BS

Posted in Philosophy, Politics, Reasoning/Logic, Uncategorized by Michael LaBossiere on February 10, 2017

As this is being written, Trump’s travel ban remains suspended by  the courts. The poor wording and implementation of the ban indicates that amateurs are now in charge. Or, alternatively, that Trump’s strategists are intentionally trying to exhaust the opposition. As such, either the ban has been a setback for Trump or a small victory.

While the actual experts on national security (from both parties) have generally expressed opposition to the Trump ban, Trump’s surrogates and some Republican politicians have endeavored to defend it. The fountain of falsehoods, Kellyanne Conway, has been extremely active in defense of the ban. Her zeal in its defense has led her to uncover terrorist attacks beyond our own reality, such as the Bowling Green Massacre that occurred in some other timeline. In that alternative timeline, the Trump ban might be effectively addressing a real problem; but not in the actual world.

More reasonable defenders of the ban endeavor to use at least some facts from this world when making their case. For example, Republican representative Mike Johnson recently defended the ban by making reference to a report by Fordham Law School’s Center on National Security. He claimed that “They determined that nearly 20 percent of alleged facilitators in ISIS prosecutions, in our country, do involve refugees and asylees. I mean, those kinds of facts are not as widely publicized, but they should be. I think the American people have a right to know that.” This approach employs four rather effective rhetorical techniques which I will address in reverse order of use.

By saying “the American people have a right to know”, Johnson seems to be employing innuendo to suggest that the rights of Americans are being violated—that is, there is some sort of conspiracy against the American people afoot. This conspiracy is, of course, that the (presumably liberal) media is not publicizing certain facts. This rhetorical tool is rather clever, for it not only suggests the media is up to something nefarious, but that there are secret facts out there that support the ban. At the very least, this can incline people to think that there are other facts backing Trump that are being intentionally kept secret. This can make people more vulnerable to untrue claims purporting to offer such facts.

Johnson’s lead techniques are, coincidentally enough, rhetorical methods I recently covered in my critical thinking class. One technique is what is often called a “weasler” in which a person protects a claim by weakening it. In this case, the weasel word is “nearly.” If Johnson were called on the correct percentage, which is 18%, he could reply that 18% is nearly 20%, which is true. However, “nearly 20%” certainly creates the impression that it is more than 18%, which is misleading. Why not just say “18%”? Since the exaggeration is relatively small, it does not qualify as hyperbole. Naturally, a reasonable reply would be that this is nitpicking—“nearly 20%” is close enough to “18%” and Johnson might have simply failed to recall the exact number during the interview. This is certainly a fair point.

Another technique involves presenting numerical claims without proper context, thus creating a misleading impression. In this case, Johnson claims, correctly, that “nearly 20 percent of alleged facilitators in ISIS prosecutions, in our country, do involve refugees and asylees.” The main problem is that no context is given for the “nearly 20%.” Without context, one does not know whether this is a significant matter or not. For example, if I claimed that sales of one of my books increased 20% last year, then you would have no idea how significant my book sales were. If I sold 10 of those books in 2015 and 12 in 2016, then my sales did increase 20%, but my sales would be utterly insignificant in the context of book sales.
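
The book example in plain arithmetic:

```python
# The book-sales example in numbers: a 20% increase can describe a trivially
# small absolute change, which is why a bare percentage can mislead.
sales_2015, sales_2016 = 10, 12

increase = (sales_2016 - sales_2015) / sales_2015
print(f"Increase: {increase:.0%}")                 # 20%
print(f"Extra copies: {sales_2016 - sales_2015}")  # just 2 books
```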

In the case of the facilitators Johnson mentioned, the Fordham report includes 19 facilitators and 3 of these (18%) were as Johnson described. So, of the thousands of refugees and asylum seekers the United States took in, there have been three people who were involved in this facilitation. This mostly involved encouraging people to go overseas to fight—these three people were (obviously) not involved in terrorist attacks in the United States. Such a microscopic threat level does not justify the travel ban under any rational threat assessment and response analysis.

The United States does, of course, face some danger from terrorist attacks. However, the most likely source of these attacks is from US-born citizens. While the threat from foreigners is not zero, an American is 253 times more likely to be the victim of a “normal” homicide than to be killed by a foreigner engaged in a terrorist attack in the United States. And the odds of being the victim of a homicide are very low. As such, trying to justify the ban with accurate information is all but impossible, which presumably explains why the Republicans are resorting to lies and rhetoric.

While there are clear political advantages to stoking the fear of ill-informed Americans, there are plenty of real problems that Trump and the Republicans could be addressing—responsible leaders would be focusing on these problems, rather than weaving fictions and feeding unfounded fears.
