A Philosopher's Blog

The Democrats and the Ku Klux Klan

Posted in Ethics, Philosophy, Politics, Reasoning/Logic, Uncategorized by Michael LaBossiere on February 13, 2017

One interesting tactic employed by the Republicans is to assert, in response to charges of racism against one of their number, that the Democrats are “the party of the Ku Klux Klan.” This tactic was most recently used by Senator Ted Cruz in defense of Jeff Sessions, Trump’s nominee for attorney general.

Cruz went beyond merely claiming the Democrats formed the Klan; he also asserted that the Democrats were responsible for segregation and the infamous Jim Crow laws. As Cruz sees it, the Democrats’ tactic is to “…just accuse anyone they disagree with of being racist.”

Ted Cruz is right about the history of the Democratic party. After the Civil War, the southern Democratic Party explicitly identified itself as the “white man’s party” and accused the Republican party of being “negro dominated.” Some Southern Democrats did indeed support Jim Crow and joined the KKK.

What Ted fails to mention is that as the Democrats became the party associated with civil rights, the Republicans engaged in what has become known as the “southern strategy.” In short, the Republicans appealed to racism against blacks in order to gain political power in the south. Though ironic given the history of the two parties, this strategy proved to be very effective and many southern Democrats became southern Republicans. In some ways, the result was analogous to exchanging the wine in two bottles: the labels remain the same, but the contents have been swapped. As such, while Ted has the history correct, he is criticizing the label rather than the wine.

Another metaphor is the science fiction brain transplant. If Bill and Sam swapped brains, it would appear that Sam was guilty of whatever Bill did, because he now has Bill’s body. However, when it comes to such responsibility, what matters is the brain. Likewise for the swapping of political parties in the south: the Southern Democrats condemned by Cruz became the southern Republicans that he now praises. Using the analogy, Ted is condemning the body for what the old brain did while praising that old brain because it is in a new body.

As a final metaphor, consider two cars and two drivers. Driving a blue car, Bill runs over a person. Sam, driving a red car, stops to help the victim. Bill then hops in the red car and drives away while Sam drives the victim to the hospital in the blue car. When asked about the crime, Ted insists that Sam is guilty because he is in the blue car now and praises Bill because he is in the red car now. Obviously enough, the swapping of parties no more swaps responsibility than the swapping of cars.

There is also the fact that Cruz is engaged in the genetic fallacy—he is rejecting what the Democrats are saying now because of a defect in the Democratic party of the past. The fact that the Democrats of that era did back Jim Crow and segregation is irrelevant to the merit of claims made by current Democrats about Jeff Sessions (or anything else). When the logic is laid bare, the fallacy is quite evident:

 

Premise 1: Some Southern Democrats once joined the KKK.

Premise 2: Some Southern Democrats once backed segregation and Jim Crow Laws.

Conclusion: The current Democrats’ claims about Jeff Sessions are untrue.

 

As should be evident, the premises have no logical connection to the conclusion, hence Cruz’s reasoning is fallacious. Since Cruz is a smart guy, he obviously knows this—just as he is aware that fallacies are far better persuasive tools than good arguments.

The other part of Cruz’s KKK gambit is to say that the Democrats rely on accusations of racism as their tactic. Cruz is right that a mere accusation of racism does not prove that a person is racist. If it is an unsupported attack, then it proves nothing. Cruz’s tactic does gain some credibility from the fact that accusations of racism are all too often made without adequate support. Both ethics and critical thought require that one properly review the evidence for such accusations and not simply accept them. As such, if the Democrats were merely launching empty ad hominem attacks on Sessions (or anyone), then these attacks should be dismissed.

In making his attack on the Southern Democrats of the past, Cruz embraces the view that racism is a bad thing. After all, his condemnation of the current Democrats requires that he condemn the past Democrats for their support of racism, segregation and Jim Crow laws. As such, he purports to agree with the current Democrats’ professed view that racism is bad. But, he condemns them for making what he claims are untrue charges of racism. This, then, is the relevant concern: which claims, if any, made by the Democrats about Sessions being a racist are true? The Democrats claimed that they were offering evidence of Sessions’ racism while Cruz’s approach was to accuse the Democrats of being racists of old and engaging in empty accusations today. He did not, however, address the claims made by the Democrats or their evidence. As such, Cruz’s response has no merit from the perspective of logic. As a rhetorical move, however, it has proven reasonably successful.

 


The Gun and I: Feeling & Thinking

Posted in Ethics, Philosophy, Politics by Michael LaBossiere on June 22, 2016

After each eruption of gun violence, there is also a corresponding eruption in the debates over gun issues. As with all highly charged issues, people are primarily driven by their emotions rather than by reason. Being a philosopher, I like to delude myself with the thought that it is possible to approach an issue with pure reason. Like many other philosophers, I am irritated when people say things like “I feel that there should be more gun control” or “I feel that gun rights are important.” Because of this, when I read student papers I strike through all “inappropriate” uses of “feel” and replace them with “think.” This is, of course, done with a subconscious sense of smug superiority. Or so it was before I started reflecting on emotions in the context of gun issues. In this essay I will endeavor to journey through the treacherous landscape of feeling and thinking in relation to gun issues. I’ll begin with arguments.

As any competent philosopher can tell you, an argument consists of a claim, the conclusion, that is supposed to be supported by the evidence or reasons, the premises, that are given. In the context of logic, as opposed to that of persuasion, there are two standards for assessing an argument. The first is an assessment of the quality of the logic: determining how well the premises support the conclusion. The second is an assessment of the plausibility of the premises: determining the quality of the evidence.

On the face of it, assessing the quality of the logic should be a matter of perfect objectivity. For deductive arguments (arguments whose premises are supposed to guarantee the truth of the conclusion), this is the case. Deductive arguments, as anyone who has had some basic logic knows, can be checked for validity using such things as Venn diagrams, truth tables and proofs. As long as a person knows what she is doing, she can confirm beyond all doubt whether a deductive argument is valid or not. A valid argument is, of course, an argument such that if its premises were true, then its conclusion must be true. While a person might stubbornly refuse to accept a valid argument as valid, this would be as foolish as stubbornly refusing to accept that 2+2=4 or that triangles have three sides. As an example, consider the following valid argument:

 

Premise 1: If an assault weapon ban would reduce gun violence, then congress should pass an assault weapon ban.

Premise 2: An assault weapon ban would reduce gun violence.

Conclusion: Congress should pass an assault weapon ban.

 

This argument is valid; in fact, it is an example of the classic deductive argument known as modus ponens (also known as affirming the antecedent). As such, questioning the logic of the argument would just reveal one’s ignorance of logic. Before anyone gets outraged, it is important to note that an argument being valid does not entail that any of its content is actually true. This endlessly confuses students: although a valid argument with all true premises must have a true conclusion, a valid argument need not have true premises or a true conclusion. Because of this, while the validity of the above argument is beyond question, one could take issue with the premises. They could, along with the conclusion, be false—although the argument is unquestionably valid. For those who might be interested, an argument that is valid and has all true premises is a sound argument. An argument that does not meet these conditions is unsound.
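Since validity is a purely mechanical matter, it can even be checked by a program. As a minimal sketch (my own illustration, not from any logic library), the following Python snippet brute-forces the truth table for modus ponens and confirms that the conclusion is true on every row where both premises are true:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# P = "an assault weapon ban would reduce gun violence"
# Q = "congress should pass an assault weapon ban"
# The form is valid if Q holds on every truth-table row where
# both premises (P -> Q, and P) are true.
valid = all(
    q
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p
)
print(valid)  # True: modus ponens is valid, whatever the premises' actual truth values
```

Note that the check says nothing about whether the premises are true; it only confirms that the conclusion cannot be false while all the premises are true.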

Unfortunately, the assessment of premises does not (in general) admit of a perfectly objective test on par with the tests for validity. In general, premises are assessed in terms of how well they match observations, background information and credible claims from credible sources (which leads right to concerns about determining credibility). As should be expected, people tend to accept premises that are in accord with how they feel rather than based on a cold assessment of the facts. This is true for everyone, be that person the head of the NRA or a latte-sipping liberal academic who shivers at the thought of even seeing a gun. Because of this, a person who wants to fairly and justly assess the premises of any argument has to be willing to understand her own feelings and work out how they influence her judgment. Since people, as John Locke noted in his classic essay on enthusiasm, tend to evaluate claims based on the strength of their feelings, doing this is exceptionally difficult. People think they are right because they feel strongly about something and are least likely to engage in critical assessment when they feel strongly.

While deductive logic allows for perfectly objective assessment, it is not the logic that is commonly used in debates over political issues or in general. The most commonly used logic is inductive logic.

Inductive arguments are arguments, so an inductive argument will have one or more premises that are supposed to support a conclusion. Unlike deductive arguments, inductive arguments do not offer certainty—they deal in likelihood. A logically good inductive argument is called a strong argument: one whose premises, if true, would probably make the conclusion true. A bad inductive argument is a weak one. Unlike the case of validity, the strength of an inductive argument is judged by applying the standards specific to that sort of inductive argument to the argument in question. Consider, as an example, the following argument:

 

Premise 1: Tens of thousands of people die each year as a result of automobiles.

Premise 2: Tens of thousands of people die each year as a result of guns.

Premise 3: The tens of thousands of deaths by automobiles are morally acceptable.

Conclusion: The tens of thousands of deaths by gun are also morally acceptable.

 

This is a simple argument by analogy in which it is argued that since cars and guns are alike, if we accept automobile fatalities then we should also accept gun fatalities. Being an inductive argument, there is no perfect, objective test to determine whether the argument is strong or not. Rather, the argument is assessed in terms of how well it meets the standards of an argument by analogy. The gist of these standards is that the more alike the two things (guns and cars) are, the stronger the argument. Likewise, the less alike they are, the weaker the argument.
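As a crude illustration of this standard, the strength of an analogy can be modeled as the overlap between the relevant properties of the two things being compared. This is only a toy sketch with made-up property lists, not a serious method of assessment:

```python
def analogy_strength(props_a: set, props_b: set) -> float:
    # Toy measure: shared relevant properties over all relevant properties
    # (Jaccard similarity). A serious assessment would also weight each
    # property by its relevance to the conclusion being drawn.
    return len(props_a & props_b) / len(props_a | props_b)

# Illustrative, contestable property lists; choosing them is exactly where
# people's feelings color their judgments.
cars = {"widely owned", "lethal when misused", "regulated", "users licensed", "designed for transport"}
guns = {"widely owned", "lethal when misused", "regulated", "designed as weapons"}

print(analogy_strength(cars, guns))  # 0.5: some similarity, notable differences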

While the standards are reasonably objective, their application admits of considerable subjectivity. In the case of guns and cars, people will differ greatly in terms of how they see them in regards to similarities and differences. As would be suspected, the lenses through which people see this matter will be deeply colored by their emotions and psychological backstory. As such, rationally assessing inductive arguments is especially challenging: a person must sort through the influence of emotions and psychology on her evaluation of both the premises and the reasoning. Since arguments about guns are generally inductive, it is no wonder it is a mess—even on the rare occasions when people are sincerely trying to be rational and objective.

The lesson here is that a person needs to think about how she feels before she can think about what she thinks. Since this also applies to me, my next essay will be about exploring my psychological backstory in regards to guns.

 



Threat Assessment I: A Vivid Spotlight

Posted in Ethics, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on December 16, 2015

When engaged in rational threat assessment, there are two main factors that need to be considered. The first is the probability of the threat. The second is, very broadly speaking, the severity of the threat. These two can be combined into one sweeping question: “how likely is it that this will happen and, if it does, how bad will it be?”

Making rational decisions about dangers involves considering both of these factors. For example, consider the risks of going to a crowded area such as a movie theater or school. There is a high probability of being exposed to the cold virus, but it is a very low severity threat. There is an exceedingly low probability that there will be a mass shooting, but it is a high severity threat since it can result in injury or death.
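One way to make the combination of the two factors explicit is an expected-harm calculation: probability multiplied by severity. The sketch below uses made-up numbers purely for illustration (the post itself offers no figures):

```python
# Expected harm = probability x severity.
# All numbers are illustrative assumptions, not data from the post.
threats = {
    "catching a cold at the theater": (0.20, 1),        # quite likely, but mild
    "mass shooting at the theater":   (1e-7, 10_000),   # exceedingly unlikely, but severe
}

for name, (probability, severity) in threats.items():
    print(f"{name}: expected harm = {probability * severity}")
```

On these toy numbers the cold actually carries the higher expected harm, which illustrates the general point: a high-severity threat does not automatically dominate once its very low probability is taken into account.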

While humans have done a fairly good job at surviving, this seems to have been despite our amazingly bad skills at rational threat assessment. To be specific, the worry people feel in regards to a threat generally does not match up with the actual probability of the threat occurring. People do seem somewhat better at assessing the severity, though they are also often in error about this.

One excellent example of poor threat assessment is in regards to the fear Americans have in regards to domestic terrorism. As of December 15, 2015 there have been 45 people killed in the United States in attacks classified as “violent jihadist attacks” and 48 people killed in attacks classified as “far right wing attacks” since 9/11/2001.  In contrast, there were 301,797 gun deaths from 2005-2015 in the United States and over 30,000 people are killed each year in motor vehicle crashes in the United States.

Despite the incredibly low likelihood of a person being killed by an act of terrorism in the United States, many people are terrified by terrorism (which is, of course, the goal of terrorism) and have become rather focused on the matter since the murders in San Bernardino. Although there have been no acts of terrorism on the part of refugees in the United States, many people are terrified of refugees. This has led to calls for refusing to accept Syrian refugees, and Donald Trump has famously called for a ban on all Muslims entering the United States.

Given that an American is vastly more likely to be killed while driving than killed by a terrorist, it might be wondered why people are so incredibly bad at this sort of threat assessment. The answer, in regards to having fear vastly out of proportion to the probability, is easy enough—it involves a cognitive bias and some classic fallacies.

People follow general rules when they estimate probabilities and the ones we use unconsciously are called heuristics. While the right way to estimate probability is to use proper statistical methods, people generally fall victim to the bias known as the availability heuristic. The idea is that a person unconsciously assigns a probability to something based on how often they think of that sort of event. While an event that occurs often will tend to be thought of often, the fact that something is often thought of does not make it more likely to occur.

After an incident of domestic terrorism, people think about terrorism far more often and thus tend to unconsciously believe that the chance of terrorism occurring is far higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of it occurring are high—even though the probability is incredibly low (driving to the beach is vastly more likely to kill you than a shark is). The defense against this bias is to find reliable statistical data and use that as the basis for inferences about threats—that is, think it through rather than trying to feel through it. This is, of course, very difficult: people tend to regard their feelings, however unwarranted, as the best evidence—despite the fact that it is usually the worst evidence.
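To show what "thinking it through" with the statistics cited above might look like, here is a rough back-of-the-envelope comparison. The U.S. population figure (roughly 320 million in 2015) is an added assumption, not a number from the post:

```python
US_POPULATION = 320_000_000   # rough 2015 estimate (an assumption, not from the post)
YEARS_SINCE_911 = 14          # 9/11/2001 through late 2015

jihadist_deaths_per_year = 45 / YEARS_SINCE_911   # total deaths cited above
vehicle_deaths_per_year = 30_000                  # annual deaths cited above

print(f"annual risk of death from jihadist terrorism: 1 in {US_POPULATION / jihadist_deaths_per_year:,.0f}")
print(f"annual risk of death from motor vehicles:     1 in {US_POPULATION / vehicle_deaths_per_year:,.0f}")
```

On these figures, the annual risk from jihadist terrorism works out to roughly 1 in 100 million, versus roughly 1 in 11,000 for motor vehicles: a gap of about four orders of magnitude, which is precisely the disproportion the availability heuristic hides.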

People are also misled about probability by various fallacies. One is the spotlight fallacy. The spotlight fallacy is committed when a person uncritically assumes that all (or many) members or cases of a certain class or type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention is focused on that fact, leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the exact sort of mistake that would occur if it were inferred that most Christians are terrorists because the media covered a terrorist who was Christian (who shot up a Planned Parenthood). If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a domestic terrorist attack by Muslims.

Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. This fallacy is similar to hasty generalization and a similar sort of error is committed, namely drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample.

People often fall victim to this fallacy because stories and anecdotes tend to have more psychological influence than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence in favor of said anecdote. Not surprisingly, people most commonly accept this fallacy because they want to believe that what is true in the anecdote is true for the whole population.

In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of domestic terrorism or tell the story about a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring. For example, people point to the claim that one of the terrorists in Paris masqueraded as a refugee and infer that refugees pose a great threat to the United States. Or they tell the story about the one attacker in San Bernardino who arrived in the states on a K-1 (“fiancé”) visa and make unwarranted conclusions about the danger of the visa system (which is used by about 25,000 people a year).

One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is particularly vivid or dramatic does not make the event more likely to occur, especially in the face of significant statistical evidence to the contrary.

People often accept this sort of “reasoning” because particularly vivid or dramatic cases tend to make a very strong impression on the human mind. For example, mass shootings by domestic terrorists are vivid and awful, so it is hardly surprising that people feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more a person feels that it will occur.

It should be kept in mind that taking into account the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide to never go sky diving because the effects of an accident can be very, very dramatic. If he knows that, statistically, the chances of the accident happening are very low but he considers even a small risk to be unacceptable, then he would not be making this error in reasoning. This then becomes a matter of value judgment—how much risk is a person willing to tolerate relative to the severity of the potential harm.

The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than what the facts show.

Such rational assessment of threats is rather important for both practical and moral reasons. The matter of terrorism is no exception to this. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. For example, spending billions to counter a minuscule threat while spending little on leading causes of harm would be irrational (if the goal is to protect people from harm). There is also the concern about the harm of creating fear that is unfounded. In addition to the psychological harm to individuals, there is also the damage to the social fabric. There has already been an increase in attacks on Muslims in America and people are seriously considering abandoning core American values, such as the freedom of religion and being good Samaritans.

In light of the above, I urge people to think rather than feel their way through their concerns about terrorism. Also, I urge people to stop listening to Donald Trump. He has the right of free expression, but people also have the right of free listening.

 


Doubling Down

Posted in Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on December 11, 2015

One interesting phenomenon is the tendency of people to double down on beliefs. For those not familiar with doubling down, this occurs when a person is confronted with evidence against a beloved belief and her belief, far from being weakened by the evidence, is strengthened.

One rather plausible explanation of doubling down rests on Leon Festinger’s classic theory of cognitive dissonance. Roughly put, when a person has a belief that is threatened by evidence, she has two main choices. The first is to adjust her belief in accord with the evidence. If the evidence is plausible and strongly supports the logical inference that the belief is not true, then the rational thing to do is reject the old belief. If the evidence is not plausible or does not strongly support the logical inference that the belief is untrue, then it is rational to stick with the threatened belief on the grounds that the threat is not much of a threat.

As might be suspected, the assessment of what is plausible evidence can be problematic. In general terms, assessing evidence involves considering how it matches one’s own observations, one’s background information about the matter, and credible sources. This assessment can merely push the matter back: the evidence for the evidence will also need to be assessed, which serves to fuel some classic skeptical arguments about the impossibility of knowledge. The idea is that every belief must be assessed and this would lead to an infinite regress, thus making knowing whether a belief is true or not impossible. Naturally, retreating into skepticism will not help when a person is responding to evidence against a beloved belief (unless the beloved belief is a skeptical one)—the person wants her beloved belief to be true. As such, someone defending a beloved belief needs to accept that there is some evidence for the belief—even if the evidence is faith or some sort of revelation.

In terms of assessing the reasoning, the matter is entirely objective if it is deductive logic.  Deductive logic is such that if an argument is doing what it is supposed to do (be valid), then if the premises are true, then the conclusion must be true. Deductive arguments can be assessed by such things as truth tables, Venn diagrams and proofs, thus the reasoning is objectively good or bad. Inductive reasoning is a different matter. While the premises of an inductive argument are supposed to support the conclusion, inductive arguments are such that true premises only make (at best) the conclusion likely to be true. Unlike deductive arguments, inductive arguments vary greatly in strength and while there are standards of assessment, reasonable people can disagree about the strength of an inductive argument. People can also embrace skepticism here, specifically the problem of induction: even when an inductive argument has all true premises and the reasoning is as good as inductive reasoning gets, the conclusion could still be false. The obvious problem with trying to defend a beloved belief with the problem of induction is that it also cuts against the beloved belief—while any inductive argument against the belief could have a false conclusion, so could any inductive argument for it. As such, a person who wants to hold to a beloved belief in a way that is justified would seem to need to accept argumentation. Naturally, a person can embrace other ways of justifying beliefs—the challenge is showing that these ways should be accepted. This would seem, ironically, to require argumentation.

A second option is to reject the evidence without undergoing the process of honestly assessing the evidence and rationally considering the logic of the arguments. If a belief is very important to a person, perhaps even central to her identity, then the cost of giving up the belief would be very high. If the person thinks (or just feels) that the evidence and reasoning cannot be engaged fairly without risking the belief, then the person can simply reject the evidence and reasoning using various techniques of self-deception and bad logic (fallacies are commonly employed in this task).

This rejection costs less psychologically than engaging the evidence and reasoning, but is often not free. Since the person probably has some awareness of the self-deception, it needs to be psychologically “justified” and this seems to result in the person strengthening her commitment to the belief. People seem to have all sorts of interesting cognitive biases that help out here, such as confirmation bias and other forms of motivated reasoning. These can be rather hard to defend against, since they derange the very mechanisms that are needed to avoid them.

One interesting way people “defend” their beliefs is by regarding the evidence and opposing argument as an unjust attack, which strengthens their resolve in the face of perceived hostility. After all, people fight harder when they believe they are under attack. Some people even infer that they must be right because they are being criticized. As they see it, if they were not right, people would not be trying to show that they are in error. This is rather problematic reasoning—as shown by the fact that people do not infer that they are in error just because people are supporting them.

People also, as John Locke argued in his work on enthusiasm, consider how strongly they feel about a belief as evidence for its truth. When people are challenged, they typically feel angry and this strong emotion makes them feel even more strongly. Hence, when they “check” on the truth of the belief using the measure of feeling, they feel even more strongly that it is true. However, how they feel about it (as Locke argued) is no indication of its truth. Or falsity.

As a closing point, one intriguing rhetorical tactic is to accuse a person who disagrees with one of doubling down. This accusation, after all, comes with the insinuation that the person is in error and is thus irrationally holding to a false belief. The reasonable defense is to show that evidence and arguments are being used in support of the belief. The unreasonable counter is to employ the very tactics of doubling down and refuse to accept such a response. That said, it is worth considering that one person’s double down is often another person’s considered belief. Or, as it might be put, I support my beliefs with logic. My opponents double down.

 



Ex Machina & Other Minds II: Is the Android a Psychopath?

Posted in Epistemology, Ethics, Philosophy, Technology by Michael LaBossiere on September 9, 2015

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” As with that essay, there will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and clearly passes the Cartesian test (she uses true language appropriately). But, Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking down doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to learn to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.) which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a rather positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people in order to achieve one’s goals. In the movie, Ava clearly has what might be called Manipulative Intelligence (M.Q.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—thus suggesting she is a psychopath.

While the term “psychopath” gets thrown around quite a bit, it is important to be a bit more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not get the consequences for others and for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying.

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it seems rather important to be able to determine whether a machine is a psychopath or not and to do so well before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often quite charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and are able to pass themselves off as normal people.

While Ava is a fictional android, the movie does present a rather effective appeal to intuition by creating a plausible android psychopath. She is able to manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter well worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals might be rather positive. For example, it is easy to imagine a medical or care-giving robot that uses its MQ to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MQ to please its partners. However, these goals might be rather negative—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath or not matters is the same reason why being able to determine if a human is a psychopath or not matters. Roughly put, it is important to know whether or not someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same function (only more reliably) as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence and doing so without creating problems as bad or worse than what they were intended to prevent (that is, a HAL 9000 sort of situation).

In regards to testing machines, what would be needed would be something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants do not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, there would be the possibility that a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since an artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would still remain the question of what is really going on in that mind.

 


The “Two Bads” Fallacy & Racism

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on June 24, 2015

The murder of nine people in the Emanuel AME Church in South Carolina ignited an intense discussion of race and violence. While there has been near-universal condemnation of the murders, some people have taken pains to argue that these killings are part of a broader problem of racism in America. This claim is supported by reference to the well-known history of systematic violence against blacks in America as well as consideration of data from today. Interestingly, some people respond to this approach by asserting that more blacks are killed by blacks than by whites. Some even seem to feel obligated to add the extra fact that more whites are killed by blacks than blacks are killed by whites.

While these points are often just “thrown out there” without being forged into part of a coherent argument, presumably the intent of such claims is to somehow disprove or at least diminish the significance of claims regarding violence against blacks by whites. To be fair, there might be other reasons for bringing up such claims—perhaps the person is engaged in an effort to broaden the discussion to all violence out of a genuine concern for the well-being of all people.

In cases in which the claims about the number of blacks killed by blacks are brought forth in response to incidents such as the church shooting, this tactic appears to be a specific form of the red herring. This is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue. The basic idea is to “win” an argument by leading attention away from the argument and to another topic.

This sort of “reasoning” has the following form:

  1. Topic A is under discussion.
  2. Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).
  3. Topic A is abandoned.

In the case of the church shooting, the pattern would be as follows:

  1. The topic of racist violence against blacks is being discussed, specifically the church shooting.
  2. The topic of blacks killing other blacks is brought up.
  3. The topic of racist violence against blacks is abandoned in favor of focusing on blacks killing other blacks.

 

This sort of “reasoning” is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim. In the specific case at hand, switching the topic to black on black violence does nothing to address the topic of racist violence against blacks.

While the red herring label would certainly suffice for these cases, it is appealing to craft a more specific sort of fallacy for cases in which something bad is “countered” by bringing up another bad. The obvious name for this fallacy is the “two bads fallacy.” This is a fallacy in which a second bad thing is presented in response to a bad thing with the intent of distracting attention from the first bad thing (or with the intent of diminishing the badness of the first bad thing).

This fallacy has the following pattern:

  1. Bad thing A is under discussion.
  2. Bad thing B is introduced under the guise of being relevant to A (when B is actually not relevant to A in this context).
  3. Bad thing A is ignored, or the badness of A is regarded as diminished or refuted.

In the case of the church shooting, the pattern would be as follows:

  1. The murder of nine people in the AME church, which is bad, is being discussed.
  2. Blacks killing other blacks, which is bad, is brought up.
  3. The badness of the murder of the nine people is abandoned, or its badness is regarded as diminished or refuted.

This sort of “reasoning” is fallacious because the mere fact that something else is bad does not entail that another bad thing thus has its badness lessened or refuted. After all, the fact that there are worse things than something does not entail that it is not bad. In cases in which there is not an emotional or ideological factor, the poorness of this reasoning is usually evident:

Sam: “I broke my arm, which is bad.”
Bill: “Well, some people have two broken arms and two broken legs.”
Joe: “Yeah, so much for your broken arm being bad. You are just fine. Get back to work.”

What seems to lend this sort of “reasoning” some legitimacy is that comparing two things that are bad is relevant to determining relative badness. If a person is arguing about how bad something is, it is certainly reasonable to consider it in the context of other bad things. For example, the following would not be fallacious reasoning:

Sam: “I broke my arm, which is bad.”
Bill: “Some people have two broken arms and two broken legs.”
Joe: “That is worse than one broken arm.”
Sam: “Indeed it is.”
Joe: “But having a broken arm must still suck.”
Sam: “Indeed it does.”

Because of this, it is important to distinguish between cases of the fallacy (X is bad, but Y is also bad, so X is not bad) and cases in which a legitimate comparison is being made (X is bad, but Y is worse, so X is less bad than Y, but still bad).


42 Fallacies for Free in Portuguese

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on October 28, 2014

Thanks to Laércio Lameira, my 42 Fallacies is available in Portuguese as a free PDF.

42 Falacias


Education & Negativity Bias

Posted in Universities & Colleges by Michael LaBossiere on March 3, 2014

In general, people suffer from a wide range of cognitive biases. One of these is known as negativity bias and it is manifested by the tendency people have to give more weight to the negative than to the positive. For example, people tend to weigh the wrongs done to them more heavily than the good done to them. As another example, people tend to be more swayed by negative political advertisements than by positive ones. This bias can also have an impact on education.

A colleague of mine asks his logic students each semester how many of them are planning on law school. In the past, he had many students. Now, the number is considerably smaller. Curious about this, he checked and found that logic had switched from being a requirement for pre-law to being a mere recommendation. My colleague noted that it seemed irrational for students who plan on taking the LSAT and becoming lawyers to avoid the logic class, given that the LSAT is largely a logic test and that law school requires skill in logic. He made the point that students often prefer to avoid the useful when it is not required and only grudgingly take what is required. We discussed a bit how this relates to the negativity bias: a student who did not take the logic class when it was required would be punished by being unable to graduate. Now that the class is optional, there is only the positive benefit of a likely improvement on the LSAT and better performance in law school. Since people weigh punishments more than rewards, this behavior makes sense—but it is still irrational, especially since many of the students who skip the logic class will end up spending money taking LSAT preparation classes that will endeavor to spackle over their lack of skills in logic.

I have seen a similar sort of thing in my own classes. My university’s policy allows us to lower student grades on the basis of a lack of attendance. We are even permitted to fail a student for excessive absences. While attendance is mandatory in my classes, I do not have a special punishment for missing class. Not surprisingly, when the students figure this out around week three or four, attendance plummets and then stabilizes at a low level. Before I used BlackBoard for quizzes, exams and for turning in assignments and papers, attendance would spike back up for days on which something had to be done in class. Since students can do their work via BlackBoard, these spikes are gone. They are, however, replaced by post-exam spikes when students do badly on the exams because they have not been in class. Then attendance slumps again. Interestingly, students often claim that they think the class is interesting and useful. But, since there is no direct and immediate punishment for not attending (just a delayed “punishment” in terms of lower grades and a lack of learning), many students are not motivated to attend class.

Naturally, I do consider the possibility that I am a bad professor who is teaching a subject that students regard as useless or boring. However, my evaluations are consistently good, former students have returned to say good things about me and my classes, and so on. That said, perhaps I am merely deluding myself and being humored. In any case, it is easy enough to draw an analogy to exercise: exercise does not provide immediate rewards and there is no immediate punishment for not staying fit—just a loss of benefits. Most people elect to under-exercise or avoid it altogether. This, and similar things, does show that people generally avoid that which is difficult now but yields lasting benefits later.

I have, of course, considered going to the punishment model for my classes. However, I have resisted this for a variety of reasons. The first is that my personality is such that I am more inclined to want to offer benefits rather than punishments. This seems to be a clear mistake given the general psychology of people. The second is that I believe in free choice: like God, I think people should be free to make bad choices and not be coerced into doing what is right. It has to be a free choice. Naturally, choosing poorly brings its own punishment—albeit later on. The third is the hassle of dealing with attendance: the paperwork, having to handle excuses, being lied to regularly and so on. The fourth is the fact that classes are generally better for the good students when the students who do not want to be in class elect not to attend. While I want everyone to learn, I would rather have the people who would prefer not to learn not be in class disrupting the learning of others—college is not the place where the educator should have to spend time dealing with behavioral issues in the classroom. The fifth is that I prefer to reduce the amount of lying that students think they have to engage in.

In terms of why I have been considering using the punishment model, there are three reasons. One is that if students are compelled to attend, they might very well inadvertently learn something. The second is that this model is a lesson for what the workplace will be like for most of the students—so habituating them to this (or, rather, keeping the habituation they should have acquired in K-12) would be valuable. After all, they will probably need to endure awful jobs until they retire or die. The third is that perhaps many people lack the discipline to do what they should and they simply must be compelled by punishment—this is, of course, the model put forth by thinkers like Aristotle and Hobbes.


30 More Fallacies in Print

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on March 15, 2013


Now available in print on Amazon and other booksellers.

30 More Fallacies is a companion book for 42 Fallacies. 42 Fallacies is not, however, required to use this book. It provides concise descriptions and examples of thirty common informal fallacies.

Accent, Fallacy of
Accident, Fallacy of
Amphiboly, Fallacy of
Appeal to Envy
Appeal to Group Identity
Appeal to Guilt
Appeal to Silence
Appeal to Vanity/Elitism
Argumentum ad Hitlerum
Complex Question
Confusing Explanations and Excuses
Cum Hoc, Ergo Propter Hoc
Equivocation, Fallacy of
Fallacious Example
Fallacy Fallacy
Historian’s Fallacy
Illicit Conversion
Incomplete Evidence
Moving the Goal Posts
Oversimplified Cause
Overconfident Inference from Unknown Statistics
Pathetic Fallacy
Positive Ad Hominem
Proving X, Concluding Y
Psychologist’s Fallacy
Rationalization
Reification, Fallacy of
Texas Sharpshooter Fallacy
Victim Fallacy
Weak Analogy


Icelandic Logic

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on June 8, 2012

Valgarður Guðjónsson is presenting some of my fallacy material in Icelandic, thus helping to expand the empire of reason.

If you can read Icelandic (or not), you can check out his first blog post on the subject: http://blog.eyjan.is/valgardur/2012/06/07/rokvillur/
