In my last essay I looked briefly at how to pick between experts. While people often rely on experts when making arguments, they also rely on studies (and experiments). Since most people do not do their own research, the studies mentioned are typically those conducted by others. While using study results in an argument is quite reasonable, making a good argument based on study results requires being able to pick between studies rationally.
Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick a study based on the fact that it agrees with what you already believe. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it is reasonable to believe it.
Another common approach is to accept a study as correct because the results match what you really want to be true. For example, a liberal might accept a study that claims liberals are smarter and more generous than conservatives. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).
In some cases, people try to create their own “studies” by appealing to their own anecdotal data about some matter. For example, a person might claim that poor people are lazy based on his experience with some poor people. While anecdotes can be interesting, to take an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.
While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations of studies, provided that they have the relevant information about the study. The following provides a concise guide to studies—and experiments.
In everyday use, people often lump studies and experiments together. While this is fine for informal purposes, the distinction is actually important. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.
The objective of the experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible—the more alike the two groups, the better the experiment.
The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.
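The general method just described can be sketched in code. The following is a minimal illustration using made-up data (the population of baseline scores, the 5-point effect, and the function name are all hypothetical), not a description of any actual experiment:

```python
import random

def run_experiment(population, effect, n=100, seed=1):
    """Sketch of a controlled cause-to-effect experiment: draw a random
    sample, split it into experimental and control groups, expose only
    the experimental group to the causal factor, then compare outcomes."""
    rng = random.Random(seed)
    sample = rng.sample(population, n)            # step 1: random sample
    experimental = sample[: n // 2]               # step 2: split into two groups
    control = sample[n // 2:]                     #         (sample order is random)
    treated = [x + effect for x in experimental]  # step 3: expose one group only
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)          # step 4: compare the results

# Hypothetical population with baseline scores around 50
population = [50 + (i % 7) for i in range(1000)]
difference = run_experiment(population, effect=5.0)
```

Because assignment to the groups is random, the observed difference should land near the true 5-point effect, with any gap reflecting chance variation between the groups; whether an observed difference is statistically significant is a separate question.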
Assuming that the experiment was conducted properly, whether or not the results are statistically significant depends on the size of the sample and the difference between the control group and experimental group. The key idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment actually shows there is a causal connection it is important to know the size of the sample used. After all, the difference between the experimental and control groups might be rather large, yet still not be significant. For example, imagine that an experiment is conducted involving 10 people. 5 people get a diet drug (experimental group) while 5 do not (control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it is actually not statistically significant: the sample is so small that the difference could be due entirely to chance. The following table shows some information about statistical significance.
Table: Sample Size (Control Group + Experimental Group) vs. the Approximate Figure That the Difference Must Exceed to Be Statistically Significant (in percentage points)
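Figures of this sort can be approximated with a standard back-of-the-envelope calculation. The sketch below uses the normal approximation for the difference between two proportions at the 95% confidence level (my own choice of method, not necessarily the one behind the table above), assuming two equal-sized groups and a worst-case proportion of 50%:

```python
import math

def significance_margin(total, z=1.96):
    """Approximate difference, in percentage points, that two groups
    must differ by to be statistically significant at the 95% level
    (normal approximation, equal groups, worst-case p = 0.5)."""
    n = total / 2                         # per-group size
    se = math.sqrt(0.5 * 0.5 * (2 / n))  # standard error of the difference
    return 100 * z * se

for total in (10, 100, 500, 1000, 1500):
    print(f"{total:5d} participants: difference must exceed "
          f"~{significance_margin(total):.0f} percentage points")
```

On this approximation, a ten-person experiment would need a difference of over 60 percentage points to be significant, which illustrates why the tiny diet-drug experiment above proves nothing.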
While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to radiation to test its effect would be immoral. In such cases studies are used rather than experiments.
One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a suspected cause. The main difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to radiation by an accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.
After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, merely having a large difference between the groups need not be statistically significant.
Since the study relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the experiment. After all, in the study the researchers have to take what they can find rather than conducting a proper experiment.
In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness, but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to sort things out.
Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are those showing the effect.
Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining if there is a statistically significant suspected causal factor. If such a factor is found, then that can be tentatively taken as a causal factor—one that will probably require additional study. As with the other study and the experiment, the statistical significance of the results depends on the size of the study—which is why a study of adequate size is important.
Of the three methods, this is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would face the problem that those who chew tobacco are often ex-smokers. As such, the smoking might be the actual cause. To sort this out would involve a study involving chewers who are not ex-smokers.
It is also worth referring back to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.
As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.
Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double blind experiment/study.
Overall, here are some key questions to ask when picking a study:
Was the study/experiment properly conducted?
Was the sample size large enough?
Were the results statistically significant?
Were those conducting the study/experiment experts?
One fairly common way to argue is the argument from authority. While people rarely follow the “strict” form of the argument, the basic idea is to infer that a claim is true based on the allegation that the person making the claim is an expert. For example, someone might claim that second hand smoke does not cause cancer because Michael Crichton claimed that it does not. As another example, someone might claim that astral projection/travel is real because Michael Crichton claims it does occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, there are medical experts who claim that second hand smoke does cause cancer.
If you are an expert in the field in question, you can endeavor to pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that second hand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an expert, a person is presumably qualified to make an informed pick. The obvious problem is, of course, that experts themselves pick different experts to accept as being correct.
The problem is even greater when it comes to non-experts who are trying to pick between experts. Being non-experts, they lack the expertise to make authoritative picks between the actual experts based on their own knowledge of the fields. This raises the rather important concern of how to pick between experts when you are not an expert.
Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick an expert based on the fact that she agrees with what you already believe. That is, to infer that the expert is right because you believe what she says. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).
Another common approach is to believe an expert because he makes a claim that you really want to be true. For example, a smoker might elect to believe an expert who claims second hand smoke does not cause cancer because he does not want to believe that he might be increasing the risk that his children will get cancer by his smoking around them. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).
People also pick their expert based on qualities they perceive as positive but that are, in fact, irrelevant to the person’s actual credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not actually relevant to assessing a person’s expertise. For example, a person might be very likeable, yet know nothing about the subject being discussed.
Fortunately, there are some straightforward standards for picking and believing an expert. They are as follows.
1. The person has sufficient expertise in the subject matter in question.
Claims made by a person who lacks the needed degree of expertise to make a reliable claim will, obviously, not be well supported. In contrast, claims made by a person with the needed degree of expertise will be supported by the person’s reliability in the area. One rather obvious challenge here is being able to judge that a person has sufficient expertise. In general, the question is whether or not a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.
2. The claim being made by the person is within her area(s) of expertise.
If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. People often mistake expertise in one area (acting, for example) for expertise in another area (politics, for example).
3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.
This is perhaps the most important factor. As a general rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The basic idea is that the majority of experts are more likely to be right than those who disagree with the majority.
It is important to keep in mind that no field has complete agreement, so some degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.
It is also important to be aware that the majority could turn out to be wrong. That said, the reason it is still reasonable for non-experts to go with the majority opinion is that non-experts are, by definition, not experts. After all, if I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.
4. The person in question is not significantly biased.
This is also a rather important standard. Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then the person’s credibility as an authority is reduced. This is because there would be reason to believe that the expert might not be making a claim because she has carefully considered it using her expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice. A biased expert can still be making claims that are true—however, the person’s bias lowers her credibility.
It is important to remember that no person is completely objective. At the very least, a person will be favorable towards her own views (otherwise she would probably not hold them). Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that researchers who receive funding from pharmaceutical companies might be biased while others might claim that the money would not sway them if the drugs proved to be ineffective or harmful.
Disagreement over bias can itself be a very significant dispute. For example, those who doubt that climate change is real often assert that the experts in question are biased in some manner that causes them to say untrue things about the climate. Questioning an expert based on potential bias is a legitimate approach—provided that there is adequate evidence of bias that would be strong enough to unduly influence the expert. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from a claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that an expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.
These standards are clearly not infallible. However, they do provide a good general guide to picking an expert logically, which is certainly more logical than just picking the one who says things one likes.
While same-sex marriage seems to have momentum in its favor in the United States, there is still considerable opposition to its acceptance. This opposition is well supplied with stock arguments against the practice. One of these is the slippery slope argument: if same-sex marriage is allowed, then people will be allowed to marry turtles, dolphins, trees, cats, corpses or iPads. Since this would be bad/absurd, same-sex marriage should not be allowed. This is, of course, the classic slippery slope fallacy.
This is a fallacy in which a person asserts that some event must inevitably follow from another without any argument for the inevitability of the event in question. In most cases, there are a series of steps or gradations between one event and the one in question and no reason is given as to why the intervening steps or gradations will simply be bypassed. This “argument” has the following form:
1. Event X has occurred (or will or might occur).
2. Therefore event Y will inevitably happen.
This sort of “reasoning” is fallacious because there is no reason to believe that one event must inevitably follow from another without adequate evidence for such a claim. This is especially clear in cases in which there are a significant number of steps or gradations between one event and another.
In the case of same-sex marriage the folks who claim these dire results do not make the causal link needed to infer, for example, that allowing same-sex marriage will lead to people marrying goats. As such, they are committing this fallacy and inviting others to join them in their error.
While I have written a reply to this fallacious argument before, hearing someone making the argument using goat marriage and corpse marriage got me thinking about the matter once again.
Using goat marriage as an example, the idea is that if same-sex marriage is allowed, then there is no way to stop the slide into people marrying goats. Presumably people marrying goats would be bad, so this should be avoided. In the case of corpse marriage, the gist is that if same-sex marriage is allowed, then there would be no way to stop the slide into people marrying corpses. This would presumably be bad and hence must be avoided.
The slide down the slippery slope, it must be assumed, would occur because a principled distinction cannot be drawn between humans and goats. Nor can a principled distinction be drawn between living humans and corpses. After all, if such principled distinctions could be drawn, then the slide from same-sex marriage to goat marriage and corpse marriage could be stopped in a principled way, thus allowing same-sex marriage without the alleged dire consequences.
For the slippery slope arguments to work, there must not be a way to stop the slide. That is, there is a smooth and well-lubricated transition between humans and goats and between living humans and corpses. Since this is a conceptual matter rather than a matter of actual slopes, the slide would go both ways. That is, if we do not have an adequate wall between goats and humans, then the wall can be jumped from either direction. Likewise for corpses.
So, for the sake of argument, let it be supposed that there are not such adequate walls—that once we start moving, we are over the walls or down the slopes. This would, apparently, show that same-sex marriage would lead to goat marriage and corpse marriage. Of course, it would also show that different-sex marriage would lead to a slide into goat marriage and corpse marriage (I argued this point in my book, For Better or Worse Reasoning, so I will not repeat the argument here).
Somewhat more interestingly, the supposition of a low wall (or slippery slope) between humans and animals would also lead to some interesting results. For example, if we allow animals to be hunted and there is no solid wall between humans and animals in terms of laws and practices, then that would put us on the slippery slope to the hunting of humans. So, by the logic of the slippery slope, we should not allow humans to hunt animals. Ditto for eating animals—after all, if same-sex marriage leads to goat marriage, then eating beef must surely lead to cannibalism.
In the case of the low wall (or slippery slope) between corpses and humans, then there would also be some odd results. For example, if we allow corpses to be buried or cremated and there is no solid wall between the living and the dead, then this would put us on the slippery slope to burying or cremating the living. So, by the logic of the slippery slope, we should not allow corpses to be buried or cremated. Ditto for denying the dead the right to vote. After all, if allowing same-sex marriage would warrant necrophilia, then denying corpses the vote would warrant denying the living the right to vote.
Obviously, people will want to say that we can clearly distinguish between animals and humans as well as between the living and corpses. However, if we can do this, then the slippery slope argument against same-sex marriage would lose its slip.
Hyperbole is a rhetorical device in which a person uses an exaggeration or overstatement in order to create a negative or positive feeling. Hyperbole is often combined with a rhetorical analogy. For example, a person might say that someone told “the biggest lie in human history” in order to create a negative impression. It should be noted that not all vivid or extreme language is hyperbole: if the extreme language matches the reality, then it is not hyperbole. So, if the lie was actually the biggest lie in human history, then it would not be hyperbole to make that claim.
People often make use of hyperbole when making rhetorical analogies/comparisons. A rhetorical analogy involves comparing two (or more) things in order to create a negative or positive impression. For example, a person might be said to be as timid as a mouse or as smart as Einstein. By adding in hyperbole, the comparison can be made more vivid (or possibly ridiculous). For example, a professor who assigns a homework assignment that is due the day before spring break might be compared to Hitler. Speaking of Hitler, hyperbole and rhetorical analogies are stock items in political discourse.
Some Republicans have decided that Obamacare is going to be their main battleground. As such, it is hardly surprising that they have been breaking out the hyperbole in attacking it. Dr. Ben Carson launched an attack by seeming to compare Obamacare to slavery, but the response to this led him to “clarify” his remarks to mean that he thinks Obamacare is not like slavery, but merely the worst thing to happen to the United States since slavery. This would, of course, make it worse than all the wars, the Great Depression, 9/11 and so on.
While he did not make a slavery comparison, Ted Cruz made a Nazi comparison during his filibuster. As Carson did, Cruz and his supporters did their best to “clarify” the remark.
Since slavery and Nazis had been taken, Rick Santorum decided to use the death of Mandela as an opportunity to compare Obamacare to Apartheid.
When not going after Obamacare, Obama himself is a prime target for hyperbole. John McCain, who called out Cruz on his Nazi comparison, could not resist making use of some Nazi hyperbole in his own comparison. When Obama shook Raul Castro’s hand, McCain could not resist comparing Obama to Chamberlain and Castro to Hitler.
Democrats and Independents are not complete strangers to hyperbole, but they do not seem to wield it quite as often (or as awkwardly) as Republicans. There have been exceptions, of course: the sweet allure of a Nazi comparison is bipartisan. However, my main concern here is not to fill out political scorecards regarding hyperbole. Rather, it is to discuss why such uses of negative hyperbole are problematic.
One point of note is that while hyperbole can be effective at making people feel a certain way (such as angry), its use often suggests that the user has little in the way of substance. After all, if something is truly bad, then there would seem to be no legitimate need to make exaggerated comparisons. In the case of Obamacare, if it is truly awful, then it should suffice to describe its awfulness rather than make comparisons to Nazis, slavery and Apartheid. Of course, it would also be fair to show how it is like these things. Fortunately for America, it is obviously not like them.
One point of moral concern is the fact that making such unreasonable comparisons is an insult to the people who suffered from or fought against such evils. After all, such comparisons transform such horrors as slavery and Apartheid into mere rhetorical chips in the latest political game. To use an analogy, it is somewhat like a person who has played Call of Duty comparing himself to combat veterans of actual wars. Out of respect for those who suffered from and fought against these horrors, they should not be used so lightly and for such base political gameplay.
From the standpoint of critical thinking, such hyperbole should be avoided because it has no logical weight and serves to confuse matters by playing on the emotions. While that is the intent of hyperbole, this is an ill intent. While rhetoric does have its legitimate place (mainly in making speeches less boring) such absurd overstatements impede rather than advance rational discussion and problem solving.
After a defeat, it is natural for people to try to explain why they were defeated. In some cases, the explanation provided is aimed at doing what an explanation is supposed to do: to provide an illuminating account of how or why something occurred. In other cases, the explanation is aimed primarily at influencing people’s attitudes and behavior. Not surprisingly, an explanation that is aimed at achieving these goals is a rhetorical device known as a rhetorical explanation.
This is not to say that a rhetorical explanation need be in error—it could provide an accurate account of how or why something occurred. Being a rhetorical explanation is more a matter of intent—that is, those offering it do so at least in part to cause people to have a positive or negative feeling about a matter.
Back in 2012, the Republicans lost the presidential election and various people endeavored to explain how this happened. Some folks pointed to the demographics of America and how minorities played a critical role in the election. Others claimed that the media’s love for Obama handed him the victory. One of the more interesting explanations was that the Republicans lost because they were not conservative enough.
More recently, the Republicans lost in their bid to get the Democrats to agree to delay or defund Obamacare. After this defeat, various explanations have been offered and among them is the claim that it was the result of the Republicans not being conservative enough. In this context, this seems to mean not being willing to let the shutdown of the government slide into defaulting on the national debt.
On the face of it, presenting the claim that the Republicans lost because they were not conservative enough seems to be a rhetorical explanation. After all, it seems to be aimed (in part) at chastising the Republicans who are being accused of not being adequately conservative. As such, people are supposed to feel negatively about these Republicans. It also seems to be aimed (in part) at creating positive feelings towards the conservative Republicans—it is supposed to be believed that they had the winning approach (but were betrayed by the Republicans in Name Only). This explanation might prove to have some bite—many Republicans are taking pains to cast themselves as being very conservative and repudiating the charge that they might be moderates.
While rhetorical explanations such as this are often used to make other people feel a certain way (positively or negatively), people can also use them on themselves. Whether the explanation is inflicted on others or self-inflicted, the problem is that such appealing explanations can make it very easy for a person to buy into an explanation that is not correct, thus leading to obvious problems. As such, it is worth considering whether the explanation about these defeats is correct or not.
If the explanation for the 2012 election was correct, then the prediction that would follow would be that the Republicans would have won if they had been more conservative. In this case, winning is clear—Mitt Romney (or a more conservative Republican like Michelle Bachmann) would have been elected rather than Obama.
For this to happen, more people would have had to vote for the Republican than Obama. Since this did not happen, for the explanation at hand to be correct, there seem to be three main options (and perhaps others).
One is that some conservatives voted for Obama because Romney was not conservative enough. They would have, however, voted for someone who was conservative enough. It seems reasonable enough to dismiss this option out of hand on the grounds that such people would not vote for Obama. Thus, it seems rather implausible to think that a more conservative Republican would have pulled votes away from Obama.
A second one is that some conservatives voted for someone other than the two main candidates or wrote in someone else rather than voting for Romney, thus allowing Obama to win. This is more plausible than the first option, but is still fairly unlikely. That is, it does not seem likely that enough people to change the election voted in this manner because Romney was not conservative enough.
A third option is that some conservatives decided to not vote at all because they thought Romney was not conservative enough, thus allowing Obama to win. Of the three, this is the most plausible. Elections in the United States have a low turnout and it certainly is possible that some of those who did not vote would have voted if there had been a candidate that was conservative enough. These voters would thus seem to have preferred allowing Obama to win over voting for Romney, but this would assume that the voters were rationally considering the consequences of their failure to vote. It could be a simple matter of motivation—they were not inspired enough by Romney (or their dislike of Obama) to vote.
It is also worth considering that the explanation is in error because a more conservative Republican would have merely increased the votes for Obama. As noted above, a more conservative Republican would not have pulled votes from Obama. What seems more likely is that a more conservative Republican would have lost the more moderate voters who voted for Romney. As such, if the Republican candidate in 2012 had been “conservative enough,” Obama would have either still won or won with an even larger number of votes. After all, most Americans are not extremely conservative and being “conservative enough” would seem to involve holding views that most Americans do not hold. Thus, the explanation seems to fail.
Jumping ahead to the most recent defeat, the matter is somewhat more complicated in that the victory conditions are not so clearly defined. At the start of the battle, the Republicans wanted to defund or delay Obamacare—that would have been a win. However, as the shutdown continued, the Republicans seemed to become less clear about what they wanted—especially when Obama made it clear that he was not going to negotiate Obamacare.
Interestingly enough, the shutdown was explained by some as being the fault of the Democrats and after the Republican defeat, the more conservative Republicans are using the narrative that they would have won if the Republicans had been conservative enough—thus creating dueling rhetorical explanations.
But, to get back to the main point, the victory conditions were not clear. However, it could be speculated that a win would involve the Republicans getting more of whatever they ended up wanting than the Democrats got of what they wanted. So, I will go with that.
There is also the question of what it meant to be conservative enough. Given the rhetoric, it seems that what this means is being willing to take the United States into default if one does not get what one wants. If so, the Republicans being conservative enough would not seem to have yielded a win—unless what they wanted was a default on the debt and the ensuing economic and political disaster. If this is what counts as a win, then being conservative enough would have led to that “win”—a win that almost everyone else would regard as a disaster.
Most Americans disapproved of what Congress was doing and most blamed the Republicans. Presumably if the Republicans had been more conservative, this would have merely made people more annoyed with them—after all, the view of most people was that what was going on was bad, not that it did not go far enough into this badness. As such, it would seem that the problem was not that the Republicans were not conservative enough. They lost because they had a poor strategy and most Americans did not like what they were doing. The solution is, obviously enough, not being more of that—the result will just be worse for the Republicans.
On October 7, 2013 Health and Human Services Secretary Kathleen Sebelius was the guest on the Daily Show. Given that Jon Stewart is often regarded as a liberal mouthpiece, most folks probably expected that this would be a mutual admiration sort of interview. However, things certainly turned out rather differently as Stewart did what “real” journalists rarely do: he raised an important concern and refused to allow the person to shift the issue.
The question raised was one that certainly should be answered, namely the question of why large businesses were granted a delay in their implementation of Obamacare while individuals did not receive the same delay. While there should certainly be a fair and rational answer to this question, Sebelius went into verbal acrobatics to avoid answering it. This tactic is known as the smokescreen/red herring in philosophy:
A Red Herring is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue. The basic idea is to “win” an argument by leading attention away from the argument and to another topic. A common variation on this is the smokescreen: it functions like a red herring, but the attempt at diversion involves piling on complexities on the original issue until it is lost in the verbal smoke. This sort of “reasoning” has the following form:
1. Topic A is under discussion.
2. Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).
3. Topic A is abandoned.
This sort of “reasoning” is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim.
In the case of Sebelius, her attempts to switch to other issues and to pile on other matters did not answer Stewart’s reasonable question. In general, people use this tactic in response to a question when they either 1) have no answer to the question or 2) have a problematic or bad answer to the question. In the case of Sebelius, I would suspect that the second option holds: she almost certainly has an answer, but it almost certainly is not a good one.
Stewart seems to have drawn this sort of conclusion regarding Sebelius’ maneuvering:
“I still don’t understand why individuals have to sign up and businesses don’t, because if the businesses — if she’s saying, ‘well, they get a delay because that doesn’t matter anyway because they already give health care,’ then you think to yourself, ‘fuck it, then why do they have to sign up at all. And then I think to myself, ‘well, maybe she’s just lying to me.’”
In terms of why this question matters, one obvious point of concern is the matter of fairness. If large businesses get a year delay, then fairness would seem to require that the same courtesy be extended to individuals.
It might be countered that there is a relevant difference between large businesses and individuals that warrants the difference in treatment. If so, Sebelius should have simply presented this difference or differences, and that would have quickly settled the matter. For example, it might be the case that a large business would need more time to implement the change on such a large scale, while an individual just has to implement it for herself. But Sebelius provided no such relevant difference and spent her time trying to throw out red herrings and blow smoke. This suggests that she was either ignorant of a relevant difference or was aware that the difference did not actually justify the difference in treatment, that is, that there is no legitimate relevant difference. Given her position, the explanation based on her ignorance seems unlikely, so the reasonable conclusion is that she knew the answer but believed that giving it would make things look even worse than engaging in evasion. Of course, it is also possible that such evasions are just a matter of how politicians operate: like the famous scorpion being carried across the river, they cannot deviate from their nature. In any case, Sebelius’ behavior creates the impression that something is wrong here, and creating this impression is, I am sure, not what she intended in her interview.
Interestingly, while this difference between businesses and individuals is a legitimate point of criticism, the Republicans seem to have little interest in engaging Obamacare in depth on points where it actually generates legitimate concerns. While the Republicans have noted that they want to defund or delay Obamacare, they seem to be unable to avoid hyperbole and other excesses of defective rhetoric. I suspect that this occurs for a variety of reasons. One possibility is that they are also like that scorpion: they simply cannot bring themselves to engage the matter of Obamacare in a rational way—instead, they have to sting away with crazy rhetoric and a government shutdown. Another possibility is that they believe that engaging in the actual issues will be bad for them in some manner. A third possibility, which is more specific than the second, is that they believe that their target audience is best played to by such rhetoric and behavior and that they would be ill served politically by engaging on actual issues in a rational manner. As a final possibility, they might not actually care about Obamacare as such—rather, they are simply out to oppose Obama and Obamacare happens to be the point of contention. To use an analogy, they are like Meletus in the Apology—they are not concerned with what they claim to be concerned about, they are just out to get their man.
With the ever-increasing cost of a college education, there is ever more reason to consider whether or not college is worth it. While much of this assessment can be in terms of income, there is also the academic question of whether or not students actually benefit intellectually from college.
The 2011 study Academically Adrift showed that a significant percentage of students received little or no benefit from college, which is obviously a matter of considerable concern. Not surprisingly, there have been additional studies aimed at assessing this matter. Of special concern to me is the claim that a new study shows that students do improve in critical thinking skills. While this study can be questioned, I will attest to the fact that the weight of evidence shows that American college students are generally weak at critical thinking. This is hardly shocking given that most people are weak at critical thinking.
My university, like so many others, has engaged in a concerted effort to enhance the critical thinking skills of students. However, there are reasonable concerns regarding the methodology used in such attempts. There is also the concern as to whether or not it is even possible, in practical terms, to significantly enhance the critical thinking skills of college students over the span of a two- or four-year (or more) degree. While I am something of an expert at critical thinking (I mean actual critical thinking, not the stuff that sprang up so people could profit from being “critical thinking” experts), my optimism in this matter is somewhat weak. This is because I have given due consideration to the practical problems involved and have been teaching the subject for over two decades.
As with any form of education, it is wise to begin by considering the general qualities of human beings. For example, if humans are naturally good, then teaching virtue would be easier. In the case at hand, the question would be whether or not humans (in general) are naturally good at critical thinking.
While Aristotle famously regarded humans as rational animals, he also noted that most people are not swayed by arguments or fine ideals. Rather, they are dominated by their emotions and must be ruled by pain. While I will not comment on ruling with pain, I will note that Aristotle’s view about human rationality has been borne out by experience. To fast forward to now, experts speak of the various cognitive biases and emotional factors that impede human rationality. This matches my own experience and I am confident that it matches that of others. To misquote Lincoln, some people are irrational all the time and all the people are irrational some of the time. As such, trying to transform people into competent critical thinkers will generally be very difficult, perhaps as hard as making people virtuous.
In addition to the biological foundation, there is also the matter of preparation. For most students, their first exposure to a substantial course or even coverage of critical thinking occurs in college. It seems unlikely that students who have gone almost two decades without proper training in critical thinking will be significantly altered by college. One obvious solution, taken from Aristotle, is to begin proper training in critical thinking at an early age.
Another matter of serious concern is the fact that students are exposed to influences that discourage critical thinking and actively encourage irrationality. One example of this is the domain of politics. Political discourse tends to be, at best, rhetoric, and typically involves the use of a wide range of fallacies such as the straw man, scare tactics and ad hominems of all varieties. For those who are ill-prepared in critical thinking, exposure to these influences can have a very detrimental effect, and they can be led far away from reason. I would call for politicians to cease this behavior, but they seem devoted to the tools of irrationality. There is a certain irony in politicians who exploit and encourage poor reasoning being among those lamenting the weak critical thinking skills of students and endeavoring to blame colleges for the problems they themselves have helped create.
Another example of this is the domain of entertainment. As Plato argued in the Republic, exposure to corrupting influences can corrupt. While the usual arguments about corruption from entertainment focus on violence and sexuality, it is also important to consider the impact of certain amusements upon the reasoning skills of students. Television, which has long been said to “rot the brain”, certainly seems to shovel forth fare that is hardly contributing to good reasoning. While I would not suggest censorship, I would encourage students to discriminate and steer clear of shows that seem likely to have a corrosive impact on reasoning. While it might be an overstatement to claim that entertainment can corrode reason, it does seem sensible to note that much of it contributes nothing positive to a person’s mind.
A third example of this is advertising. As with politics, advertising is the domain of persuasion. While good reasoning can persuade, it is (for most people) the weakest tool of persuasion. As such, advertisers flood us with ads employing what they regard as effective tools of persuasion. These typically involve various rhetorical devices and also the use of fallacies. Sadly, the bad logic of fallacies is generally far more persuasive than good reasoning. Students are generally exposed to significant amounts of advertising (they no doubt spend more time exposed to ads than critical thinking) and it makes sense that this exposure would impact them in detrimental ways, at least if they are not already equipped to properly assess such ads with critical thinking skills.
A final example is, of course, everyday life. Students will typically be exposed to significant amounts of poor reasoning and this will have a significant influence on them. Students will also learn what the politicians and advertisers know: the tools of irrational persuasion will serve them better in our society than the tools of reason.
Given these anti-critical thinking influences, it is something of a wonder that students develop any critical thinking skills.
The current narrative is that the Obama administration is floundering in three major scandals: Benghazi, IRS TPT (Tea Party Targeting), and the DOJ’s AP incident. I agree with Socrates’ view that the “gadflies” have a duty to keep the “horse” that is the state from falling into laziness and corruption. But, of course, I also agree with Socrates’ view that we should better ourselves rather than endeavoring to tear others down with deceits. As such, I believe it is rather important to find and properly consider the truth in these matters.
During the first four years of Obama’s administration, those who wished to attack Obama generally had to rely on made-up and often absurd attacks, such as the infamous Birther and Secret Muslim movements. Obama was also charged with being a socialist, a communist, a tyrant and so on. However, these charges only seemed to stick within certain minds: those who wished to believe the worst of the president regardless of the evidence. Interestingly, real problems such as drone assassinations, the grotesque disparities in wealth, the endemic problems in the VA, and so on were largely ignored by most folks on the left and the right. Someone more cynical than I might suspect that the pundits and politicians work to focus public rage in what they regard as safe channels.
The start of the second term saw what the folks at Fox probably regarded as a gift from on high, given that they had been desperately flogging Benghazi with little effect: two scandals that might actually have some substance. Interestingly, even the “liberal” media jumped onto the scandal bandwagon. However, the question remains as to whether or not there is any true substance behind these alleged scandals.
Again, someone more cynical than I might suggest that the pundits and politicians are focused primarily on scoring political points against Obama rather than operating from a desire for justice and ethical government. After all, some of the conservative pundits who are expressing outrage at Obama are the same people who embraced contrary views when their favorites engaged in worse misdeeds. Peggy Noonan is, of course, one of the outstanding examples: when it came to Iran-Contra, she claimed that Reagan did not know and was failed by his people. In the case of Obama, she contends that the President is fully accountable. Such blatant inconsistencies nicely reveal the truth of the matter. Naturally, folks on the left do the same thing: many of those who railed against Bush give Obama a pass on the same matters, presumably because he is their guy and Bush was not. But, left or right, such inconsistency is intellectually and morally wrong.
Someone far more cynical than I might even spin a tale of conspiracy: that outrage is generated, managed and directed so as to divert attention from real problems. After all, if the media and the people are in a froth over the IRS or the DOJ, then they have little outrage to spare for such matters as the pathetic state of our infrastructure or the fact that Congress engages in legal insider trading. But, to get back to the main subject, I turn to the IRS scandal.
On the face of it, the IRS scandal is being sold as the IRS specifically targeting conservative groups. The flames of the scandal certainly have been fanned by the fact that Lois Lerner pleaded the Fifth before Congress. While she might have been reacting out of fear because of the inflammatory rhetoric, this sort of thing is rather like when Romney refused to release his tax information: it leads people to believe that the damage that could be done by whatever is being hidden is far worse than the damage done by trying to hide it. However, let us go with the facts that are actually available.
One key part of the narrative is that the IRS only targeted conservative groups. However, the numbers show that this is not the case: only 70 of the 300 groups looked at were tea party organizations. There is also the fact that the IRS is required to determine whether or not those applying for tax-exemption are “social welfare” groups or are engaged in the sort of political activity that is forbidden to such groups. As such, the IRS was actually looking for exactly what the law required. As far as why they flagged the 300 rather than everyone, this seems to be a practical matter: the IRS was apparently faced with a flood of documents.
Another part of the narrative is that the IRS harmed those targeted for this review. However, the tax exempt status is not actually contingent on the IRS approving it: such groups can operate with that status even before official approval. Somewhat ironically, the only groups denied this status were three progressive groups: Emerge Nevada, Emerge Maine, and Emerge Massachusetts. The reason they were denied approval was because they were created to support Democrats, a violation of the law. The IRS commissioner at the time was a Bush appointee.
The facts would seem to reveal that there is not much here in the way of a scandal. The IRS and the administration can, however, be dinged for their poor handling of the matter. The Obama administration does have a poor track record of addressing scandal-mongering from the right. Most infamously, they threw Shirley Sherrod to the wolves without even bothering to check on the facts. As such, I would say that one true scandal of the administration is how it handles allegations of scandals.
Interestingly, some conservatives are still trying to turn Benghazi into a scandal, and ABC News’ Jonathan Karl apparently engaged in fabrication, only to be exposed by CNN. The real scandal here would seem to be on the part of those who are trying to make Benghazi into a scandal.
It might be countered that the Obama administration is so bad (perhaps a socialist, communist, Muslim tyranny) that all of these tactics are justified. That, for example, it is acceptable to manufacture a scandal so as to undercut Obama’s support (and pave the way to the White House in 2016). The easy and obvious reply to this is that if the Obama administration is truly as bad as claimed, then there would be no need to manufacture scandals. One would merely need to provide evidence of the badness and that should suffice.
I do actually think that there is considerable badness. However, this badness is of the sort that neither party wishes to expose or bring to attention of the public. Thus, we generally get a war of manufactured scandals while the real problems remain festering in the shadows.
There can, of course, be real scandals. However, what is to be rationally expected is actual objective evidence from credible sources supporting the key claims, as well as a rational value assessment regarding the seriousness of the scandal. For example, the DOJ AP scandal might be a real problem. If so, a presentation of the actual facts and a rational evaluation of the wrongdoing should reveal the scandal. These rational standards are generally ignored in favor of partisan interests and the desire to keep the eyes of America looking a certain way.
While there is an abundance of violence in the real world, there is also considerable focus on the virtual violence of video games. Interestingly, some people (such as the head of the NRA) blame real violence on the virtual violence of video games. The idea that art can corrupt people is nothing new and dates back at least to Plato’s discussion of the corrupting influence of art. While he was mainly worried about the corrupting influence of tragedy and comedy, he also raised concerns about violence and sex. These days we generally do not worry about the nefarious influence of tragedy and comedy, but there is considerable concern about violence.
While I am a gamer, I do have concerns about the possible influence of video games on actual behavior. For example, one of my published essays is on the distinction between virtual vice and virtual virtue and in this essay I raise concerns about the potential dangers of video games that are focused on vice. While I do have concerns about the impact of video games, there has been little in the way of significant evidence supporting the claim that video games have a meaningful role in causing real-world violence. However, such studies are fairly popular and generally get attention from the media.
The most recent study purports to show that teenage boys might become desensitized to violence because of extensive playing of video games. While some folks will take this study as showing a connection between video games and violence, it is well worth considering the details of the study in the context of causal reasoning involving populations.
When conducting a cause-to-effect experiment, one rather important factor is the size of the experimental group (those exposed to the cause) and the control group (those not exposed to the cause). The smaller the number of subjects, the more likely it is that the difference between the groups is due to factors other than the (alleged) causal factor. There is also the concern of generalizing the results from the experiment to the whole population.
The experiment in question consisted of 30 boys (ages 13-15) in total. As a sample for determining a causal connection, this is too small for real confidence to be placed in the results. There is also the fact that the sample is far too small to support a generalization from the 30 boys to the general population of teenage boys. In fact, the experiment hardly seems worth conducting with such a small sample and is certainly not worth reporting on, except as an illustration of how research should not be conducted.
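To see why a 30-subject experiment (15 per group) inspires so little confidence, consider a rough simulation. The numbers below (a mean of 70 and a standard deviation of 10, loosely suggestive of a resting heart rate) are made up purely for illustration; the point is only that, with two small groups drawn from the very same distribution, a sizable difference between the group averages will appear by chance alone quite often, while it almost never does with larger groups:

```python
import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

def false_positive_rate(n_per_group, threshold, trials=5_000):
    """Simulate experiments where the 'cause' has NO real effect:
    both groups are drawn from the same distribution. Count how often
    the group means still differ by at least `threshold`."""
    hits = 0
    for _ in range(trials):
        control = [random.gauss(70, 10) for _ in range(n_per_group)]
        experimental = [random.gauss(70, 10) for _ in range(n_per_group)]
        if abs(mean(experimental) - mean(control)) >= threshold:
            hits += 1
    return hits / trials

# With 15 subjects per group, a 5-unit gap between group means arises
# by pure chance fairly often; with 150 per group it is very rare.
small = false_positive_rate(15, threshold=5)
large = false_positive_rate(150, threshold=5)
print(f"n=15:  chance difference >= 5 in {small:.1%} of trials")
print(f"n=150: chance difference >= 5 in {large:.1%} of trials")
```

This is only a sketch, not a reconstruction of the actual study, but it illustrates the general principle: the smaller the groups, the easier it is for ordinary random variation to masquerade as a causal effect.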
The researchers had the boys play a violent video game and a non-violent video game in the evening and compared the results. According to the researchers, those who played the violent video game had faster heart rates and lower sleep quality. They also reported “increased feelings of sadness.” After playing the violent game, the boys had greater stress and anxiety.
According to one researcher, “The violent game seems to have elicited more stress at bedtime in both groups, and it also seems as if the violent game in general caused some kind of exhaustion. However, the exhaustion didn’t seem to be of the kind that normally promotes good sleep, but rather as a stressful factor that can impair sleep quality.”
Being a veteran of violent video games, I find these results consistent with my own experiences. I have found that if I play a combat game, be it a first person shooter, an MMO or a real time strategy game, too close to bedtime, I have trouble sleeping. Crudely put, I find that I am “keyed up,” and if I am unable to “calm down” before trying to sleep, my sleep is generally not very restful. I really noticed this when I was raiding in WoW. A raid is a high stress situation (game stress, anyway) that requires hyper-vigilance, and it takes time to “come down” from that. I have experienced the same thing with actual fighting (martial arts training, not random violence). I’ve even experienced something comparable when I’ve been awoken by a big spider crawling on my face; I did not sleep quite so well after that. Graduate school, as might be imagined, put me into this state of poor sleep for about five years.
In general, then, it makes sense that violent video games would have this effect, which is why it is not a good idea to game right up until bedtime if you want to get a good night’s sleep. Of course, it is generally a good idea to relax for about an hour before bedtime: don’t check email, don’t get on Facebook, don’t do work and so on.
While not playing games before bedtime is a good idea, the question remains as to how these findings connect to violence and video games. According to the researchers, the differences between the two groups “suggest that frequent exposure to violent video games may have a desensitizing effect.”
Laying aside the problem that the sample is far too small to provide significant results that can be reliably extended to the general population of teenage boys, there is also the problem that there seems to be a rather large chasm between the observed behavior (anxiety and lower sleep quality) and being desensitized to violence. The researchers do note that the cause and effect relationship was not established and they did consider the possibility of reversed causation (that the video games are not causing these traits, but that boys with those traits are drawn to violent video games). As such, the main impact of the study seems to be that it got media attention for the researchers. This would suggest another avenue of research: the corrupting influence of media attention on researching video games and violence.