A Philosopher's Blog

The Speed of Rage

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on July 9, 2014

The rise of social media has created an entirely new world for social researchers. One focus of the research has been on determining how quickly and broadly emotions spread online. The April 2014 issue of the Smithsonian featured an article on this subject by Matthew Shaer.

Not surprisingly, researchers at Beijing University found that the emotion of rage spread the fastest and farthest online. Researchers in the United States found that anger was a speed leader, but not the fastest in the study: awe was even faster than rage. But rage was quite fast. As might be expected, sadness was a slow spreader and had a limited expansion.

This research certainly makes sense—rage tends to be a strong motivator and sadness tends to be a de-motivator. The power of awe was an interesting finding, but some reflection does indicate that this would make sense—the emotion tends to move people to want to share (in the real world, think of people eagerly drawing the attention of strangers to things like beautiful sunsets, impressive feats or majestic animals).

In general, awe is a positive emotion and hence it seems to be a good thing that it travels far and wide on the internet. Rage is, however, something of a mixed bag.

When people share their rage via social media, they are sharing with an intent to express (“I am angry!”) and to infect others with this rage (“you should be angry, too!”). Rage, like many infectious agents, also has the effect of weakening the host’s “immune system.” In the case of anger, the immune system is reason and emotional control. As such, rage tends to suppress reason and lower emotional control. This serves to make people even more vulnerable to rage and quite susceptible to the classic fallacy of appeal to anger—this is the fallacy in which a person accepts her anger as proof that a claim is true. Roughly put, the person “reasons” like this: “this makes me angry, so it is true.” This infection also renders people susceptible to related emotions (and fallacies), such as fear (and appeal to force).

Because of these qualities of anger, it is easy for untrue claims to be accepted far and wide via the internet. This is, obviously enough, the negative side of anger.  Anger can also be positive—to use an analogy, it can be like a cleansing fire that sweeps away brambles and refuse.

For anger to be a positive factor, it would need to be a virtuous anger (to follow Aristotle). Put a bit simply, it would need to be the right degree of anger, felt for the right reasons and directed at the right target. This sort of anger can mobilize people to do good. For example, people might learn of a specific corruption rotting away their society and be moved to act against it. As another example, people might learn of an injustice and be mobilized to fight against it.

The challenge is, of course, to distinguish between warranted and unwarranted anger. This is a rather serious challenge—as noted above, people tend to feel that they are right because they are angry rather than inquiring as to whether their rage is justified or not.

So, when you see a post or Tweet that moves you to anger, think before adding fuel to the fire.

 


Defining Rape I: Definitions

Posted in Law, Politics, Reasoning/Logic, Universities & Colleges by Michael LaBossiere on June 25, 2014

One of the basic lessons of philosophy dating back to at least Socrates is that terms need to be properly defined. Oversimplifying things a bit, a good definition needs to avoid being too narrow and also avoid being too broad. A definition that is too narrow leaves out things that the term should include. One that is too broad allows in too much. A handy analogy for this is the firewall that your computer should have: if it is doing its job properly, it lets in what should be allowed into your computer while keeping attacks out. An example of a definition that is too narrow would be to define “art” as “any product of the visual arts, such as painting and sculpture.” This is too narrow because it leaves out what is manifestly art, such as movies and literature. As an example of a definition that is too broad, defining “art” as “that which creates an emotional effect” would be defective since it would count such things as being punched in the face or winning the lottery as art. A perfect definition would thus be like perfect security: all that belongs is allowed in and all that does not is excluded.

While people have a general understanding of the meaning of “rape”, the usual view covers what my colleague Jean Kazez calls “classic” rape—an attack that involves the clear use of force, threat or coercion. As she notes, another sort of rape is what is called “date” rape—a form of assault that, on college campuses, often involves intoxication rather than overt violence.

In many cases the victims of sexual assault do not classify the assault as rape. According to Cathy Young, “three quarters of the female students who were classified as victims of sexual assault by incapacitation did not believe they had been raped; even when only incidents involving penetration were counted, nearly two-thirds did not call it rape. Two-thirds did not report the incident to the authorities because they didn’t think it was serious enough.”

In some cases, a victim does change her mind (sometimes after quite some time) and re-classify the incident as rape. For example, a woman who eventually reported being raped twice by a friend explained her delay on the grounds that it took her a while “to identify what happened as an assault.”

The fact that a victim changed her mind does not, obviously, invalidate her claim that she was raped. However, there is the legitimate concern about what is and is not rape—that is, what is a good definition of an extremely vile thing. After all, when people claim there is an epidemic of campus rapes, they point to statistics claiming that 1 in 5 women will be sexually assaulted in college. This statistic is horrifying, but it is still reasonable to consider what it actually means. Jean Kazez has looked at the numbers in some detail here.

One obvious problem with inquiring into the statistics and examining the definition of “rape” is that the definition has become an ideological matter for some. For some on the left, “rape” is very broadly construed and to raise even rational concerns about the broadness of the definition is to invite accusations of ignorant insensitivity (at best) and charges of misogyny. For some on the right, “rape” is very narrowly defined (including the infamous notion of “legitimate” rape) and to consider expanding the definition is to invite accusations of being politically correct or, in the case of women, being a radical feminist or feminazi.

As the ideological territory is staked out and fortified, the potential for rational discussion is proportionally decreased. In fact, to even suggest that there is a matter to be rationally discussed (with the potential for dispute and disagreement) might be greeted with hostility by some. After all, when a view becomes part of a person’s ideological identity, the person tends to believe that there is nothing left to discuss and any attempt at criticism is both automatically in error and a personal attack.

However, the very fact that there are such distinct ideological fortresses indicates a clear need for rational discussion of this matter and I will endeavor to do so in the following essays.

 


Science & Self-Identity

Posted in Philosophy, Politics, Science, Reasoning/Logic by Michael LaBossiere on June 9, 2014

The assuming an authority of dictating to others, and a forwardness to prescribe to their opinions, is a constant concomitant of this bias and corruption of our judgments. For how almost can it be otherwise, but that he should be ready to impose on another’s belief, who has already imposed on his own? Who can reasonably expect arguments and conviction from him in dealing with others, whose understanding is not accustomed to them in his dealing with himself? Who does violence to his own faculties, tyrannizes over his own mind, and usurps the prerogative that belongs to truth alone, which is to command assent by only its own authority, i.e. by and in proportion to that evidence which it carries with it.

-John Locke

As a philosophy professor who focuses on the practical value of philosophical thinking, one of my main objectives is to train students to be effective critical thinkers. While true critical thinking has been, ironically, threatened by the fact that it has become something of a fad, I stick with a very straightforward and practical view of the subject. As I see it, critical thinking is the rational process of determining whether a claim should be accepted as true, rejected as false, or subjected to the suspension of judgment. Roughly put, a critical thinker operates on the principle that the belief in a claim should be proportional to the evidence for it, rather than in proportion to our interests or feelings. In this I follow John Locke’s view: “Whatsoever credit or authority we give to any proposition more than it receives from the principles and proofs it supports itself upon, is owing to our inclinations that way, and is so far a derogation from the love of truth as such: which, as it can receive no evidence from our passions or interests, so it should receive no tincture from them.” Unfortunately, people often fail to follow this principle and do so in matters of considerable importance, such as climate change and vaccinations. To be specific, people reject proofs and evidence in favor of interests and passions.

Despite the fact that the scientific evidence for climate change is overwhelming, there are still people who deny climate change. These people are typically conservatives—although there is nothing about conservatism itself that requires denying climate change.

While rejecting the scientific evidence for climate change can be regarded as irrational, it is easy enough to attribute a rational motive behind this view. After all, there are people who have an economic interest in denying climate change or, at least, preventing action from being taken that they regard as contrary to their interests (such as implementing the cap and trade system on carbon originally proposed by conservative thinkers). This interest would provide a motive to lie (that is, make claims that one knows are not true) as well as a psychological impetus to sincerely hold to a false belief. As such, I can easily make sense of climate change denial in the face of overwhelming evidence: big money is on the line. However, the denial is less rational for the majority of climate change deniers—after all, they are not owners of companies in the fossil fuel business. Still, they could be motivated by a financial stake—after all, addressing climate change could cost them more in terms of their energy bills. Of course, not addressing climate change could cost them much more.

In any case, I get climate denial in that I have a sensible narrative as to why people reject the science on the basis of interest. However, I have been rather more confused by people who deny the science regarding vaccines.

While vaccines are not entirely risk free, the scientific evidence is overwhelming that they are safe and very effective. Scientists have a good understanding of how they work and there is extensive empirical evidence of their positive impact—specifically the massive reduction in cases of diseases such as polio and measles. Oddly enough, there is a significant number of Americans who willfully deny the science of vaccination. What is most unusual is that these people tend to be college educated. They are also predominantly political liberals, thus showing that science denial is bi-partisan. It is fascinating, but also horrifying, to see someone walk through the process of denial—as shown in a segment on the Daily Show. This process is rather complete: evidence is rejected, experts are dismissed and so on—it is as if the person’s mind switched into a Bizarro version of critical thinking (“kritikal tincing” perhaps). This is in marked contrast with the process of rational disagreement in which the methodology of critical thinking is used in defense of an opposing viewpoint. Being a philosopher, I value rational disagreement and I am careful to give opposing views their due. However, the use of fallacious methods and outright rejection of rational methods of reasoning is not acceptable.

As noted above, climate change denial makes a degree of sense—behind the denial is a clear economic interest. However, vaccine science denial seems to lack that motive. While I could be wrong about this, there does not seem to be any economic interest that would benefit from this denial—except, perhaps, the doctors and hospitals that will be treating the outbreaks of preventable diseases. However, doctors and hospitals obviously encourage vaccination. As such, an alternative explanation is needed.

Recent research does provide some insight into the matter and this research is consistent with Locke’s view that people are influenced by both interests and passions. In this case, the motivating passion seems to be a person’s commitment to her concept of self. The idea is that when a person’s self-concept or self-identity is threatened by facts, the person will reject the facts in favor of her self-identity.  In the case of the vaccine science deniers, the belief that vaccines are harmful has somehow become part of their self-identity. Or so goes the theory as to why these deniers reject the evidence.

To be effective, this rejection must be more than simply asserting the facts are wrong. After all, the person is aiming to deceive herself to maintain her self-identity. As such, the person must create an entire narrative that makes her rejection seem sensible and believable to her. A denier must, as Pascal said in regards to his famous wager, make himself believe his denial. In the case of matters of science, a person needs to reject not just the claims made by scientists but also the method by which the scientists support the claims. Roughly put, the narrative of denial must be a complete story that protects itself from criticism. This is, obviously enough, different from a person who denies a claim on the basis of evidence—since there is rational support for the denial, there is no need to create a justifying narrative.

This, I would say, is one of the major dangers of this sort of denial—not the denial of established facts, but the explicit rejection of the methodology that is used to assess facts. While people often excel at compartmentalization, this strategy runs the risk of corrupting the person’s thinking across the board.

As noted above, as a philosopher one of my main tasks is to train people to think critically and rationally. While I would like to believe that everyone can be taught to be an effective and rational thinker, I know that people are far more swayed by rhetoric and (ironically) fallacious reasoning than they are swayed by good logic. As such, there might be little hope that people can be “cured” of their rejection of science and reasoning. Aristotle took this view—while noting that some can be convinced by “arguments and fine ideals” most people cannot. He advocated the use of coercive habituation to get people to behave properly and this could be (and has been) employed to correct incorrect beliefs. However, such a method is agnostic in regards to the truth—people can be coerced into accepting the false as well as the true.

Interestingly enough, a study by Brendan Nyhan shows that reason and persuasion both fail when employed in attempts to change false beliefs that are critical to a person’s self-identity. In the case of Nyhan’s study, there were various attempts to change the beliefs of vaccine science deniers using reason (facts and science) and also various methods of rhetoric/persuasion (appeals to emotions and anecdotes). Since reason and persuasion are the two main ways to convince people, this is certainly a problem.

The study and other research did indicate an avenue that might work. Assuming that it is the threat to a person’s self-concept that triggers the rejection mechanism, the solution is to approach a person in a way that does not trigger this response. To use an analogy, it is like trying to conduct a transplant without triggering the body’s immune system to reject the transplanted organ.

One obvious problem is that once a person has taken a false belief as part of his self-concept, it is rather difficult to get him to regard any attempt to change his mind as anything other than a threat. Addressing this might require changing the person’s self-concept or finding a specific strategy for addressing that belief that is somehow not seen as a threat. Once that is done, the second stage, that of actually addressing the false belief, can begin.

 


Leadership & Responsibility

Posted in Ethics, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on June 2, 2014

The recent resignation of Eric Shinseki from his position as the head of the Department of Veterans Affairs raised, once again, the issue of the responsibilities of a leader. While I will not address the specific case of Shinseki, I will use this opportunity to discuss leadership and responsibility in general terms.

Not surprisingly, people often assign responsibility based on ideology. For example, Democrats would be more inclined to regard a Republican leader as being fully responsible for his subordinates while being more forgiving of fellow Democrats. However, judging responsibility based on political ideology is obviously a poor method of assessment. What is needed is, obviously enough, some general principles that can be used to assess the responsibility of leaders in a consistent manner.

Interestingly (or boringly) enough, I usually approach the matter of leadership and responsibility using an analogy to the problem of evil. Oversimplified quite a bit, the problem of evil is the problem of reconciling God being all good, all knowing and all powerful with the existence of evil in the world. If God is all good, then He would tolerate no evil. If God is all powerful, He could prevent all evil. And if God is all knowing, then He would not be ignorant of any evil. Given God’s absolute perfection, He thus has absolute responsibility as a leader: He knows what every subordinate is doing, knows whether it is good or evil and has the power to prevent or cause any behavior. As such, when a subordinate does evil, God has absolute accountability. After all, the responsibility of a leader is a function of what he can know and the extent of his power.

In stark contrast, a human leader (no matter how awesome) falls rather short of God. Such leaders are clearly not perfectly good and they are obviously not all knowing or all powerful. These imperfections thus lower the responsibility of the leader.

In the case of goodness, no human can be expected to be morally perfect. As such, failures of leadership due to moral imperfection can be excusable—within limits. The challenge is, of course, sorting out the extent to which imperfect humans can legitimately be held morally accountable and to what extent our unavoidable moral imperfections provide a legitimate excuse. These standards should be applied consistently to leaders so as to allow for the highest possible degree of objectivity.

In the case of knowledge, no human can be expected to be omniscient—we have extreme limits on our knowledge. The practical challenge is sorting out what a leader can reasonably be expected to know and the responsibility of the leader should be proportional to that extent of knowledge. This is complicated a bit by the fact that there are at least two factors here, namely the capacity to know and what the leader is obligated to know. Obligations to know should not exceed the human capacity to know, but the capacity to know can often exceed the obligation to know. For example, the President could presumably have everyone spied upon (which is apparently what he did do) and thus could, in theory, know a great deal about his subordinates. However, this would seem to exceed what the President is obligated to know (as President) and probably exceeds what he should know.

Obviously enough, what a leader can know and what she is obligated to know will vary greatly based on the leader’s position and responsibilities. For example, as the facilitator of the philosophy & religion unit at my university, my obligation to know about my colleagues is very limited as is my right to know about them. While I have an obligation to know what courses they are teaching, I do not have an obligation or a right to know about their personal lives or whether they are doing their work properly on outside committees. So, if a faculty member skipped out on committee meetings, I would not be responsible for this—it is not something I am obligated to know about.

As another example, the chair of the department has greater obligations and rights in this regard. He has the right and obligation to know if they are teaching their classes, doing their assigned work and so on. Thus, when assessing the responsibility of a leader, sorting out what the leader could know and what she was obligated to know are rather important matters.

In regards to power (taken in a general sense), even the most despotic dictator’s powers are still finite. As such, it is reasonable to consider the extent to which a leader can utilize her authority or use up her power to compel subordinates to obey. As with knowledge, responsibility is proportional to power. After all, if a leader lacks the power (or authority) to compel obedience in regards to certain matters, then the leader cannot be accountable for not making the subordinates do or not do certain actions. Using myself as an example, my facilitator position has no power: I cannot demote, fire, reprimand or even put a mean letter into a person’s permanent record. The extent of my influence is limited to my ability to persuade—with no rewards or punishments to offer. As such, my responsibility for the actions of my colleagues is extremely limited.

There are, however, legitimate concerns about the ability of a leader to make people behave correctly and this raises the question of the degree to which a leader is responsible for not being persuasive enough or not using enough power to make people behave. That is, the concern is whether bad behavior that persists in the face of applied authority or power is the fault of the leader or the fault of the resistor. This is similar to the concern about the extent to which responsibility for failing to learn falls upon the teacher and to which it falls on the student. Obviously, even the best teacher cannot reach all students and it would seem reasonable to believe that even the best leader cannot make everyone do what they should be doing.

Thus, when assessing alleged failures of leadership it is important to determine where the failures lie (morality, knowledge or power) and the extent to which the leader has failed. Obviously, principled standards should be applied consistently—though it can be sorely tempting to damn the other guy while forgiving the offenses of one’s own guy.

 


Talking Points & Climate Change

Posted in Philosophy, Politics, Reasoning/Logic, Science by Michael LaBossiere on May 14, 2014

While science and philosophy are supposed to be about determining the nature of reality, politics is often aimed at creating perceptions that are alleged to be reality. This is why it is generally wiser to accept claims supported by science and reason over claims “supported” by ideology and interest.

 

The matter of climate change is a matter of both science (since the climate is an objective feature of reality) and politics (since perception of reality can be shaped by rhetoric and ideology). Ideally, the facts of climate change would be left to science and sorting out how to address it via policy would fall, in part, to the politicians. Unfortunately, politicians and other non-scientists have taken it on themselves to make claims about the science, usually in the form of unsupported talking points.

 

On the conservative side, there has been a general shifting in the talking points. Originally, there was one main talking point: there is no climate change and the scientists are wrong. This point was often supported by alleging that the scientists were motivated by ideology to lie about the climate. In contrast, those whose profits could be impacted if climate change was real were taken as objective sources.

 

In the face of mounting evidence and shifting public opinion, this talking point became the claim that while climate change is occurring, it is not caused by humans. This then shifted to the claim that climate change is caused by humans, but there is nothing we can (or should) do now.

 

In response to the latest study, certain Republicans have embraced three talking points. These points do seem to concede that climate change is occurring and that humans are responsible. These points do have a foundation that can be regarded as rational and each will be considered in turn.

 

One talking point is that the scientists are exaggerating the impact of climate change and that it will not be as bad as they claim. This does rest on a reasonable concern about any prediction: how accurate is the prediction? In the case of a scientific prediction based on data and models, the reasonable inquiry would focus on the accuracy of the data and how well the models serve as models of the actual world. To use an analogy, the reliability of predictions about the impact of a crash on a vehicle based on a computer model would hinge on the accuracy of the data and the model and both could be reasonable points of inquiry.

 

Since the climate scientists have the data and models used to make the predictions, to properly dispute the predictions would require showing problems with either the data or the models (or both). Simply saying they are wrong would not suffice—what is needed is clear evidence that the data or models (or both) are defective in ways that would show the predictions are excessive in terms of the predicted impact.

 

One indirect way to do this would be to find clear evidence that the scientists are intentionally exaggerating. However, if the scientists are exaggerating, then this would be provable by examining the data and plugging it into an accurate model. That is, the scientific method should be able to be employed to show the scientists are wrong.

 

In some cases people attempt to argue that the scientists are exaggerating because of some nefarious motivation—a liberal agenda, a hatred of oil companies, a desire for fame or some other wickedness. However, even if it could be shown that the scientists have a nefarious motivation, it does not follow that the predictions are wrong. After all, to dismiss a claim because of an alleged defect in the person making the claim is a fallacy. Being suspicious because of a possible nefarious motive can be reasonable, though. So, for example, the fact that the fossil fuel companies have a great deal at stake here does not prove that their claims about climate change are wrong. But the fact that they have considerable incentive to deny certain claims does provide grounds for suspicion regarding their objectivity (and hence credibility).  Naturally, if one is willing to suspect that there is a global conspiracy of scientists, then one should surely be willing to consider that fossil fuel companies and their fellows might be influenced by their financial interests.

 

One could, of course, hold that the scientists are exaggerating for noble reasons—that is, they are claiming it is worse than it will be in order to get people to take action. To use an analogy, parents sometimes exaggerate the possible harms of something to try to persuade their children not to try it. While this is nicer than ascribing nefarious motives to scientists, it is still not evidence against their claims. Also, even if the scientists are exaggerating, there is still the question about how bad things really would be—they might still be quite bad.

 

Naturally, if an objective and properly conducted study can be presented that shows the predictions are in error, then that is the study that I would accept. However, I am still waiting for such a study.

 

The second talking point is that the laws being proposed will not solve the problems. Interestingly, this certainly seems to concede that climate change will cause problems. This point does have a reasonable foundation in that it would be unreasonable to pass laws aimed at climate change that are ineffective in addressing the problems.

 

While crafting the laws is a matter of politics, sorting out whether such proposals would be effective does seem to fall in the domain of science. For example, if a law proposes to cut carbon emissions, there is a legitimate question as to whether or not that would have a meaningful impact on the problem of climate change. Showing this would require having data, models and so on—merely saying that the laws will not work is obviously not enough.

 

Now, if the laws will not work, then the people who confidently make that claim should be equally confident in providing evidence for their claim. It seems reasonable to expect that such evidence be provided and that it be suitable in nature (that is, based in properly gathered data, examined by impartial scientists and so on).

 

The third talking point is that the proposals to address climate change will wreck the American economy. As with the other points, this does have a rational basis—after all, it is sensible to consider the impact on the economy.

 

One way to approach this is on utilitarian grounds: that we can accept X environmental harms (such as coastal flooding) in return for Y (jobs and profits generated by fossil fuels). Assuming that one is a utilitarian of the proper sort and that one accepts this value calculation, then one can accept that enduring such harms could be worth the advantages. However, it is well worth noting that as usual, the costs will seem to fall heavily on those who are not profiting. For example, the flooding of Miami and New York will not have a huge impact on fossil fuel company profits (although they will lose some customers).

 

Making the decisions about this should involve openly considering the nature of the costs and benefits as well as who will be hurt and who will benefit. Vague claims about damaging the economy do not allow us to make a proper moral and practical assessment of whether the approach will be correct or not. It might turn out that staying the course is the better option—but this needs to be determined with an open and honest assessment. However, there is a long history of such assessments not occurring—so I am not optimistic.

 

It is also worth considering that addressing climate change could be good for the economy. After all, preparing coastal towns and cities for the (allegedly) rising waters could be a huge and profitable industry creating many jobs. Developing alternative energy sources could also be profitable as could developing new crops able to handle the new conditions. There could be a whole new economy created, perhaps one that might rival more traditional economic sectors and newer ones, such as the internet economy. If companies with well-funded armies of lobbyists got into the climate change countering business, I suspect that a different tune would be playing.

 

To close, the three talking points do raise questions that need to be answered:

 

  • Is climate change going to be as bad as it is claimed?
  • What laws (if any) could effectively and properly address climate change?
  • What would be the cost of addressing climate change and who would bear the cost?

 

 


The Better than Average Delusion

Posted in Reasoning/Logic by Michael LaBossiere on March 28, 2014

One interesting, but hardly surprising, cognitive bias is the tendency of a person to regard herself as better than average—even when no evidence exists for that view. Surveys in which Americans are asked to compare themselves to their fellows are quite common and nicely illustrate this bias: the overwhelming majority of Americans rank themselves as above average in everything ranging from leadership ability to accuracy in self-assessment.

Obviously enough, the majority of people cannot be better than average—that is just how averages work. As to why people think the way they do, the disparity between what is claimed and what is the case can be explained in at least two ways. One is another well-established cognitive bias, namely the tendency people have to believe that their performance is better than it actually is. Teachers get to see this in action quite often—students generally believe that they did better on the test than they actually did. For example, I have long since lost count of the people who, having gotten Cs or worse on papers, say to me “but it felt like an A!” I have no doubt that it felt like an A to the student—after all, people tend to rather like their own work. Given that people tend to regard their own performance as better than it is, it certainly makes sense that they would regard their abilities as better than average—after all, we tend to think that we are all really good.

Another reason is yet another bias: people tend to give more weight to the negative over the positive. As such, when assessing other people, we will tend to consider negative things about them as having more significance than the positive things. So, for example, when Sally is assessing the honesty of Bill, she will give more weight to incidents in which Bill was dishonest relative to those in which he was honest. As such, Sally will most likely see herself as being more honest than Bill. After enough comparisons, she will most likely see herself as above average.
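
To make this mechanism concrete, here is a small simulation—my own sketch, not from any study the post cites. The assumptions are labeled in the comments: everyone’s true trait level comes from the same distribution, negatives count double when judging others, and each person judges herself by her unweighted record.

    import random

    random.seed(1)
    N = 10_000
    honesty = [random.gauss(0, 1) for _ in range(N)]  # true trait levels

    def as_seen_by_others(x, negativity_weight=2.0):
        # Assumed bias: negative traits count double when judging others.
        return x if x >= 0 else x * negativity_weight

    perceived = [as_seen_by_others(x) for x in honesty]
    avg_perceived = sum(perceived) / N  # the "average person" as others are seen

    # Each person compares her own unweighted level to that perceived average.
    above = sum(1 for x in honesty if x > avg_perceived)
    print(f"{above / N:.0%} rank themselves above average")  # roughly 65%

The weight of 2.0 is an arbitrary assumption; the point is that any over-weighting of the negative produces a majority that sincerely sees itself as above average.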

This self-delusion probably has some positive effects—for example, it no doubt allows people to maintain a sense of value and to enjoy the smug self-satisfaction that they are better than most other folks. This surely helps people get by day-to-day.

There are, of course, downsides to this—after all, a person who does not do a good job assessing himself and others will be operating on the basis of inaccurate information and this rarely leads to good decision making.

Interestingly enough, the better-than-average delusion holds up quite well even in the face of clear evidence to the contrary. For example, a study published in the British Journal of Social Psychology asked British prisoners to compare themselves to other prisoners and the general population in terms of such traits as honesty, compassion, and trustworthiness. Not surprisingly, the prisoners ranked themselves as above average. They did, however, only rank themselves as average when it came to the trait of law-abidingness. This suggests that reality has some slight impact on people, but not as much as one might hope.


Picking between Studies

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on January 31, 2014


In my last essay I looked briefly at how to pick between experts. While people often rely on experts when making arguments, they also rely on studies (and experiments). Since most people do not do their own research, the studies mentioned are typically those conducted by others. While using study results in an argument is quite reasonable, making a good argument based on study results requires being able to pick between studies rationally.

Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick a study based on the fact that it agrees with what you already believe. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it is reasonable to believe it.

Another common approach is to accept a study as correct because the results match what you really want to be true. For example, a liberal might accept a study that claims liberals are smarter and more generous than conservatives.  This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

In some cases, people try to create their own “studies” by appealing to their own anecdotal data about some matter. For example, a person might claim that poor people are lazy based on his experience with some poor people. While anecdotes can be interesting, to take an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.

While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations of studies, provided that they have the relevant information about the study. The following provides a concise guide to studies—and experiments.

In normal use, people often jam together studies and experiments. While this is fine for informal purposes, this distinction is actually important. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.

The objective of the experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible—the more alike the two groups, the better the experiment.

The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.

Assuming that the experiment was conducted properly, whether or not the results are statistically significant depends on the size of the sample and the difference between the control group and experimental group. The key idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment actually shows there is a causal connection it is important to know the size of the sample used. After all, the difference between the experimental and control groups might be rather large, but might not be significant. For example, imagine that an experiment is conducted involving 10 people. 5 people get a diet drug (experimental group) while 5 do not (control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it is actually not statistically significant: the sample is so small, the difference could be due entirely to chance. The following table shows some information about statistical significance.

Sample size                   Approximate figure the difference must exceed
(control + experimental)      to be statistically significant (in percentage points)

10                            40
100                           13
500                           6
1,000                         4
1,500                         3
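
One can check the diet-drug example directly with a permutation test: pool the two groups, shuffle the group labels many times, and count how often chance alone produces a gap in group means at least as large as the observed one. Below is a minimal Python sketch with invented weight-loss numbers; with only five people per group and this much spread, even a roughly 30% difference in average weight loss comes out well above the usual 0.05 cutoff.

    import random

    def permutation_p_value(group_a, group_b, trials=10_000):
        # Fraction of random relabelings of the pooled data that produce
        # a difference in group means at least as large as the observed one.
        observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
        pooled = group_a + group_b
        n = len(group_a)
        hits = 0
        for _ in range(trials):
            random.shuffle(pooled)
            gap = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n))
            if gap >= observed:
                hits += 1
        return hits / trials

    random.seed(1)
    drug = [15, 5, 12, 9, 14]    # hypothetical pounds lost; mean 11.0
    no_drug = [10, 6, 13, 4, 9]  # mean 8.4, so the drug group lost ~30% more
    print(permutation_p_value(drug, no_drug))  # comes out well above 0.05

Rerunning the same test with the same percentage gap but larger groups drives the p-value down, which is just what the table above summarizes.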

While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to radiation to test its effect would be immoral. In such cases studies are used rather than experiments.

One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a suspected cause. The main difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to radiation by an accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.

After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, merely having a large difference between the groups need not be statistically significant.

Since the study relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the experiment. After all, in the study the researchers have to take what they can find rather than conducting a proper experiment.

In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness, but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to sort things out.

Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are those showing the effect.

Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining if there is a statistically significant suspected causal factor. If such a factor is found, then that can be tentatively taken as a causal factor—one that will probably require additional study. As with the other study and experiment, the statistical significance of the results depends on the size of the study—which is why a study of adequate size is important.

Of the three methods, this is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would face the problem that those who chew tobacco are often ex-smokers. As such, the smoking might be the actual cause. To sort this out would involve a study involving chewers who are not ex-smokers.

It is also worth referring back to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.

As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.

Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double blind experiment/study.

Overall, here are some key questions to ask when picking a study:

  • Was the study/experiment properly conducted?
  • Was the sample size large enough?
  • Were the results statistically significant?
  • Were those conducting the study/experiment experts?


Picking between Experts

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on January 29, 2014

One fairly common way to argue is the argument from authority. While people rarely follow the “strict” form of the argument, the basic idea is to infer that a claim is true based on the allegation that the person making the claim is an expert. For example, someone might claim that second hand smoke does not cause cancer because Michael Crichton claimed that it does not. As another example, someone might claim that astral projection/travel is real because Michael Crichton claims it does occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, there are medical experts who claim that second hand smoke does cause cancer.

If you are an expert in the field in question, you can endeavor to pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that second hand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an expert, a person is presumably qualified to make an informed pick. The obvious problem is, of course, that experts themselves pick different experts to accept as being correct.

The problem is even greater when it comes to non-experts who are trying to pick between experts. Being non-experts, they lack the expertise to make authoritative picks between the actual experts based on their own knowledge of the fields. This raises the rather important concern of how to pick between experts when you are not an expert.

Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick an expert based on the fact that she agrees with what you already believe. That is, to infer that the expert is right because you believe what she says. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).

Another common approach is to believe an expert because he makes a claim that you really want to be true. For example, a smoker might elect to believe an expert who claims second hand smoke does not cause cancer because he does not want to believe that he might be increasing the risk that his children will get cancer by his smoking around them. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

People also pick their expert based on qualities they perceive as positive but that are, in fact, irrelevant to the person’s actual credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not actually relevant to assessing a person’s expertise. For example, a person might be very likeable, but not know a thing about what they are talking about.

Fortunately, there are some straightforward standards for picking and believing an expert. They are as follows.

 

1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim will, obviously, not be well supported. In contrast, claims made by a person with the needed degree of expertise will be supported by the person’s reliability in the area. One rather obvious challenge here is being able to judge that a person has sufficient expertise. In general, the question is whether or not a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.

 

2. The claim being made by the person is within her area(s) of expertise.

If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. People often mistake expertise in one area (acting, for example) for expertise in another area (politics, for example).

 

3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.

This is perhaps the most important factor. As a general rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The basic idea is that the majority of experts are more likely to be right than those who disagree with the majority.

It is important to keep in mind that no field has complete agreement, so some degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.

It is also important to be aware that the majority could turn out to be wrong. That said, the reason it is still reasonable for non-experts to go with the majority opinion is that non-experts are, by definition, not experts. After all, if I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.

 

4. The person in question is not significantly biased.

This is also a rather important standard. Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then the person’s credibility as an authority is reduced. This is because there would be reason to believe that the expert might not be making a claim because he has carefully considered it using his expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice. A biased expert can still be making claims that are true—however, the person’s bias lowers her credibility.

It is important to remember that no person is completely objective. At the very least, a person will be favorable towards her own views (otherwise she would probably not hold them). Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that researchers who receive funding from pharmaceutical companies might be biased while others might claim that the money would not sway them if the drugs proved to be ineffective or harmful.

Disagreement over bias can itself be a very significant dispute. For example, those who doubt that climate change is real often assert that the experts in question are biased in some manner that causes them to say untrue things about the climate. Questioning an expert based on potential bias is a legitimate approach—provided that there is adequate evidence of bias that would be strong enough to unduly influence the expert. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from a claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that an expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.

These standards are clearly not infallible. However, they do provide a good general guide to logically picking an expert. Certainly more logical than just picking the one who says things one likes.

 


Slippery Slope, Same Sex Marriage, Goats & Corpses

Posted in Ethics, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on January 8, 2014

While same-sex marriage seems to have momentum in its favor in the United States, there is still considerable opposition to its acceptance. This opposition is well stocked up with stock arguments against this practice. One of these is the slippery slope argument: if same-sex marriage is allowed, then people will be allowed to marry turtles, dolphins, trees, cats, corpses or iPads. Since this would be bad/absurd, same-sex marriage should not be allowed. This is, of course, the classic slippery slope fallacy.

This is a fallacy in which a person asserts that some event must inevitably follow from another without any argument for the inevitability of the event in question. In most cases, there are a series of steps or gradations between one event and the one in question and no reason is given as to why the intervening steps or gradations will simply be bypassed. This “argument” has the following form:

1. Event X has occurred (or will or might occur).
2. Therefore event Y will inevitably happen.

This sort of “reasoning” is fallacious because there is no reason to believe that one event must inevitably follow from another without adequate evidence for such a claim. This is especially clear in cases in which there are a significant number of steps or gradations between one event and another.

In the case of same-sex marriage the folks who claim these dire results do not make the causal link needed to infer, for example, that allowing same-sex marriage will lead to people marrying goats.  As such, they are committing this fallacy and inviting others to join them in their error.

While I have written a reply to this fallacious argument before, hearing someone making the argument using goat marriage and corpse marriage got me thinking about the matter once again.

Using goat marriage as an example, the idea is that if same-sex marriage is allowed, then there is no way to stop the slide into people marrying goats. Presumably people marrying goats would be bad, so this should be avoided. In the case of corpse marriage, the gist is that if same-sex marriage is allowed, then there would be no way to stop the slide into people marrying corpses. This would presumably be bad and hence must be avoided.

The slide down the slippery slope, it must be assumed, would occur because a principled distinction cannot be drawn between humans and goats. Nor can a principled distinction be drawn between living humans and corpses. After all, if such principled distinctions could be drawn, then the slide from same-sex marriage to goat marriage and corpse marriage could be stopped in a principled way, thus allowing same-sex marriage without the alleged dire consequences.

For the slippery slope arguments to work, there must not be a way to stop the slide. That is, there is a smooth and well-lubricated transition between humans and goats and between living humans and corpses. Since this is a conceptual matter rather than a matter of actual slopes, the slide would go both ways. That is, if we do not have an adequate wall between goats and humans, then the wall can be jumped from either direction. Likewise for corpses.

So, for the sake of argument, let it be supposed that there are no such adequate walls—that once we start moving, we are over the walls or down the slopes. This would, apparently, show that same-sex marriage would lead to goat marriage and corpse marriage. Of course, it would also show that different-sex marriage would lead to a slide into goat marriage and corpse marriage (I argued this point in my book, For Better or Worse Reasoning, so I will not repeat the argument here).

Somewhat more interestingly, the supposition of a low wall (or slippery slope) between humans and animals would also lead to some interesting results. For example, if we allow animals to be hunted and there is no solid wall between humans and animals in terms of laws and practices, then that would put us on the slippery slope to the hunting of humans. So, by the logic of the slippery slope, we should not allow humans to hunt animals. Ditto for eating animals—after all, if same-sex marriage leads to goat marriage, then eating beef must surely lead to cannibalism.

In the case of the low wall (or slippery slope) between corpses and humans, then there would also be some odd results. For example, if we allow corpses to be buried or cremated and there is no solid wall between the living and the dead, then this would put us on the slippery slope to burying or cremating the living. So, by the logic of the slippery slope, we should not allow corpses to be buried or cremated. Ditto for denying the dead the right to vote. After all, if allowing same-sex marriage would warrant necrophilia, then denying corpses the vote would warrant denying the living the right to vote.

Obviously, people will want to say that we can clearly distinguish between animals and humans as well as between the living and corpses. However, if we can do this, then the slippery slope argument against same-sex marriage would lose its slip.


Hyperbole, Again

Posted in Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on December 16, 2013

Hyperbole is a rhetorical device in which a person uses an exaggeration or overstatement in order to create a negative or positive feeling. Hyperbole is often combined with a rhetorical analogy. For example, a person might say that someone told “the biggest lie in human history” in order to create a negative impression. It should be noted that not all vivid or extreme language is hyperbole: if the extreme language matches the reality, then it is not hyperbole. So, if the lie was actually the biggest lie in human history, then it would not be hyperbole to make that claim.

People often make use of hyperbole when making rhetorical analogies/comparisons. A rhetorical analogy involves comparing two (or more) things in order to create a negative or positive impression.  For example, a person might be said to be as timid as a mouse or as smart as Einstein. By adding in hyperbole, the comparison can be made more vivid (or possibly ridiculous). For example, a professor who assigns a homework assignment that is due the day before spring break might be compared to Hitler. Speaking of Hitler, hyperbole and rhetorical analogies are stock items in political discourse.

Some Republicans have decided that Obamacare is going to be their main battleground. As such, it is hardly surprising that they have been breaking out the hyperbole in attacking it. Dr. Ben Carson launched an attack by seeming to compare Obamacare to slavery, but the response to this led him to “clarify” his remarks to mean that he thinks Obamacare is not like slavery, but merely the worst thing to happen to the United States since slavery. This would, of course, make it worse than all the wars, the Great Depression, 9/11 and so on.

While he did not make a slavery comparison, Ted Cruz made a Nazi comparison during his filibuster. As Carson did, Cruz and his supporters did their best to “clarify” the remark.

Since slavery and Nazis had been taken, Rick Santorum decided to use the death of Mandela as an opportunity to compare Obamacare to Apartheid.

When not going after Obamacare, Obama himself is a prime target for hyperbole. John McCain, who called out Cruz on his Nazi comparison, could not resist making use of some Nazi hyperbole in his own comparison. When Obama shook Raul Castro’s hand, McCain could not resist comparing Obama to Chamberlain and Castro to Hitler.

Democrats and Independents are not complete strangers to hyperbole, but they do not seem to wield it quite as often (or as awkwardly) as Republicans. There have been exceptions, of course: the sweet allure of a Nazi comparison is bipartisan. However, my main concern here is not to fill out political scorecards regarding hyperbole. Rather, it is to discuss why such uses of negative hyperbole are problematic.

One point of note is that while hyperbole can be effective at making people feel a certain way (such as angry), its use often suggests that the user has little in the way of substance. After all, if something is truly bad, then there would seem to be no legitimate need to make exaggerated comparisons. In the case of Obamacare, if it is truly awful, then it should suffice to describe its awfulness rather than make comparisons to Nazis, slavery and Apartheid. Of course, it would also be fair to show how it is like these things. Fortunately for America, it is obviously not like them.

One point of moral concern is the fact that making such unreasonable comparisons is an insult to the people who suffered from or fought against such evils. After all, such comparisons transform such horrors as slavery and Apartheid into mere rhetorical chips in the latest political game. To use an analogy, it is somewhat like a person who has played Call of Duty comparing himself to combat veterans of actual wars. Out of respect for those who suffered from and fought against these horrors, they should not be used so lightly and for such base political gameplay.

From the standpoint of critical thinking, such hyperbole should be avoided because it has no logical weight and serves to confuse matters by playing on the emotions. While that is the intent of hyperbole, this is an ill intent. While rhetoric does have its legitimate place (mainly in making speeches less boring) such absurd overstatements impede rather than advance rational discussion and problem solving.
