A Philosopher's Blog

Aristotle & the Quantified Life

Posted in Ethics, Philosophy, Sports/Athletics, Technology by Michael LaBossiere on May 13, 2015

While Aristotle was writing centuries before the rise of wearable technology, his view of moral education provides a solid foundation for the theory behind what I like to call the benign tyranny of the device. Or, if one prefers, the bearable tyranny of the wearable.

In his Nicomachean Ethics, Aristotle addressed the very practical problem of how to make people good. He was well aware that merely listening to discourses on morality would not make people good. In a very apt analogy, he noted that such people would be like invalids who listened to their doctors, but did not carry out their instructions—they would get no benefit.

His primary solution to the problem is one that is routinely endorsed and condemned today: to use the compulsive power of the state to make people behave well and thus become conditioned in that behavior. Obviously, most people are quite happy to have the state compel people to act as they would like them to act; yet equally unhappy when it comes to the state imposing on them. Aristotle was also well aware of the importance of training people from an early age—something later developed by the Nazis and Madison Avenue.

While there have been some attempts in the United States and other Western nations to use the compulsive power of the state to force people to engage in healthy practices, these have been fairly unsuccessful and are usually opposed as draconian violations of the liberty to be out of shape. While the idea of a Fitness Force chasing people around to make them exercise amuses me, I certainly would oppose such impositions on both practical and moral grounds. However, most people do need some external coercion to force them to engage in healthy behavior. Those who are well-off can hire a personal trainer and a fitness coach. Those who are less well off can appeal to the tyranny of friends who are already self-tyrannizing. However, there are many obvious problems with relying on other people. This is where the tyranny of the device comes in.

While the quantified life via electronics is in its relative infancy, there is already a multitude of devices ranging from smart fitness watches, to smart plates, to smart scales, to smart forks. All of these devices offer measurements of activities to quantify the self and most of them offer coercion ranging from annoying noises, to automatic social media posts (“today my feet did not patter, so now my ass grows fatter”), to the old school electric shock (really).

While the devices vary in their specifics, Aristotle laid out the basic requirements back when lightning was believed to come from Zeus. Aristotle noted that a person must do no wrong either with or against her will. In the case of fitness, this would be acting in ways contrary to health.

What is needed, according to Aristotle, is “the guidance of some intelligence or right system that has effective force.” The first part of this is that the device or app must be the “right system.” That is to say, the device must provide correct guidance in terms of health and well-being. Unfortunately, health is often ruled by fad and not actual science.

The second part of this is the matter of “effective force.” That is, the device or app must have the power to compel. Aristotle noted that individuals lacked such compulsive power, so he favored the power of law. Good law has practical wisdom and also compulsive force. However, unless the state is going to get into the business of compelling health, this option is out.

Interestingly, Aristotle claims that “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While this seems not entirely true, he did seem to be right in that people find the law less annoying than being bossed around by individuals acting as individuals (like that bossy neighbor telling you to turn down the music).

The same could be true of devices—while being bossed around by a person (“hey fatty, you’ve had enough ice cream, get out and run some”) would annoy most people, being bossed by an app or device could be far less annoying. In fact, most people are already fully conditioned by their devices—they obey every command to pick up their smartphones and pay attention to whatever is beeping or flashing. Some people do this even when doing so puts people at risk, such as when they are driving. This certainly provides a vast ocean of psychological conditioning to tap into, but for a better cause. So, instead of mindlessly flipping through Instagram or texting words of nothingness, a person would be compelled by her digital master to exercise more, eat less crap, and get more sleep.  Soon the machine tyrants will have very fit hosts to carry them around.

So, Aristotle has provided the perfect theoretical foundation for designing the tyrannical device. To recap, it needs the following features:

  1. Practical wisdom: the health science for the device or app needs to be correct and the guidance effective.
  2. Compulsive power: the device or app must be able to compel users effectively and make them obey.
  3. Not too annoying: while it must have compulsive power, this power must not generate annoyance that exceeds its ability to compel.
  4. A cool name.
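
For those inclined to tinker, here is a toy sketch in Python of how the first three features might fit together in an app's daily nagging routine. The step goal, thresholds, and messages are entirely made up for illustration; this is a sketch of the recipe above, not anyone's actual product (the cool name is left as an exercise).

def daily_tyranny(steps_taken: int, step_goal: int = 8000,
                  nags_sent_today: int = 0, max_nags: int = 3) -> str:
    """Return the device's nudge for the day (all numbers are illustrative)."""
    # 1. Practical wisdom: the guidance rests on a (hypothetical) evidence-based goal.
    if steps_taken >= step_goal:
        return "Goal met. The tyrant rests."

    shortfall = step_goal - steps_taken

    # 3. Not too annoying: cap the nagging so irritation never outweighs compulsion.
    if nags_sent_today >= max_nags:
        return "Silence: today's annoyance budget is spent."

    # 2. Compulsive power: escalate from a gentle buzz to public shaming.
    if shortfall < 2000:
        return f"Gentle buzz: only {shortfall} steps to go."
    return f"Drafting social media post: 'Today my feet did not patter...' ({shortfall} steps short)."

print(daily_tyranny(steps_taken=5500))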

So, get to work on those devices and apps. The age of machine tyranny is not going to impose itself. At least not yet.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Obesity, Disability, & Accommodation

Posted in Ethics, Philosophy, Politics by Michael LaBossiere on February 11, 2015

It is estimated that almost 30% of humans are overweight or obese and this percentage seems likely to increase. Given this large number of large people, it is not surprising that various moral and legal issues have arisen regarding the accommodation of the obese. It is also not surprising that people arguing in favor of accommodating the obese contend that obesity is a disability. The legal issues are, of course, simply a matter of law and are settled by lawsuits. Since I am not a lawyer, I will focus on the ethics of the matter and will address two main issues. The first is whether or not obesity is a disability. The second is whether or not obesity is a disability that morally justifies making accommodations.

On the face of it, obesity is disabling. That is, a person who is obese will have reduced capabilities relative to a person who is not obese. An obese person will tend to have much lower endurance than a non-obese person, less speed, less mobility, less flexibility and so on. An obese person will also tend to suffer from more health issues and be at greater risk for various illnesses. Because of this, an obese person might find it difficult or impossible to perform certain job tasks, such as those involving strenuous physical activity or walking moderate distances.

The larger size and weight of obese individuals also presents challenges regarding such things as standard sized chairs, doors, equipment, clothing and vehicles. For example, an obese person might be unable to operate a forklift with the standard seating and safety belt. As another example, an obese person might not be able to fit in one airline seat and instead require two (or more).  As a third example, an obese student might not be able to fit into a standard classroom desk. As such, obesity could make it difficult or impossible for a person to work or make use of certain goods and services.

Obviously enough, obese people are not the only ones who are disabled. There are people with short term disabilities due to illness or injury. I experienced this myself when I had a complete quadriceps tendon tear—my left leg was locked in an immobilizer for weeks, then all but useless for months. With this injury, I was considerably slower, had difficulty with stairs, could not carry heavy loads, and could not drive. There are also people who have long term or permanent disabilities, such as people who are paralyzed, blind, or are missing limbs due to accidents or war. These people can face considerable challenges in performing tasks at work and in life in general. For example, a person who is permanently confined to a wheelchair due to a spinal injury will find navigating stairs or working in the woods or working at muddy construction sites rather challenging.

In general, there seems to be no moral problem with requiring employees, businesses, schools and so on to make reasonable accommodations for people who are disabled. The basic principle that justifies that is the principle of equal treatment: people should be afforded equal access, even when doing so requires some additional accommodation. As such, while having ramps in addition to stairs costs more, it is a reasonable requirement given that some people cannot fully use their legs. Given that the obese are disabled, it seems easy enough to conclude that they should be accommodated just as the blind and paralyzed are accommodated.

Naturally, it could be argued that there is no moral obligation to provide accommodations for anyone. If this is the case, then there would be no obligation to accommodate the obese. However, it would seem to be rather difficult to prove, for example, that disabled veterans returning to school should just have to work their way up the steps in their wheelchairs. For the sake of the discussion to follow I will assume that there is a moral obligation to accommodate the disabled. However, there is still the question of whether or not this should apply to the obese.

One obvious way to argue against accommodations for the obese is to argue that there is a morally relevant difference between those disabled by obesity and those disabled by injury, birth defects, etc. One difference that people often point to is that obesity is a matter of choice and other disabilities are not. That is, a person’s decisions resulted in her being fat and hence she is responsible in a way a person crippled in an accident is not.

It could be pointed out that some people who are disabled by injury were disabled as the result of their decisions. For example, a person might have driven while drunk and ended up paralyzed. But, of course, the person would not be denied access to handicapped parking or the use of automatic doors because his disability was self-inflicted. The same reasoning could be used for the obese: though their disability is self-inflicted, it is still a disability and thus should be accommodated.

The easy and obvious reply to this is that there is still a relevant difference. While a person crippled in a self-inflicted drunken crash caused his own disability, there is little he can do about that disability. He can change his diet and exercise but this will not restore functionality to his legs. That is, he is permanently stuck with the results of that decision. In contrast, an obese person has to maintain her obesity. While some people are genetically predisposed to being obese, how much a person eats and how much she exercises is a matter of choice. Since the obese could reduce their weight, the rest of us are under no obligation to provide special accommodations for them. This is because they could take reasonable steps to remove the need for such accommodations. To use an analogy, imagine someone who insisted that she be provided with a Seeing Eye dog because she wants to wear opaque glasses all the time. These glasses would result in her being disabled since she would be blind. However, since she does not need to wear such glasses and could easily do without them, there is no obligation to provide her with the dog. In contrast, a person who is actually blind cannot just get new eyes and hence it is reasonable for society to accommodate her.

It can be replied that obesity is not a matter of choice. One approach would be to argue for metaphysical determinism—the obese are obese by necessity and could not be otherwise. The easy reply here would be to say that we are, sadly enough, metaphysically determined not to provide accommodations.

A more sensible approach would be to argue that obesity is, in some cases, a medical condition that is beyond the ability of a person to control—that is, the person lacks agency in regards to his eating and exercise. The most likely avenue of support for this claim would come from neuroscience. If it can be shown that people are incapable of controlling their weight, then obesity would be a true disability, on par with having one’s arm blasted off by an IED or being born with a degenerative neural disorder. This would, of course, require abandoning agency (at least in this context).

It could also be argued that a person does have some choice, but that acting on the choice would be so difficult that it is more reasonable for society to accommodate the individual than it is for the individual to struggle to not be obese. To use an analogy, a disabled person might be able to regain enough functionality to operate in a “mostly normal” way, but doing so might require agonizing effort that is beyond what could be expected of a person. In such a case, one would surely not begrudge the person the accommodations. So, it could be argued that since it is easier for society to accommodate the obese than it is for the obese to not be obese, society should do so.

There is, however, a legitimate concern here. If the principle is adopted that society must accommodate the obese because they are disabled and they cannot help their obesity, then others could appeal to that same sort of principle and perhaps over-extend the realm of disabilities that must be accommodated. For example, people who are addicted to drugs could make a similar argument: they are disabled, yet their addiction is not a matter of choice. As another example, people who are irresponsible or lazy can claim they are disabled as well and should be accommodated on the grounds that they cannot be other than they are. But, perhaps the line can be drawn in a principled way so that the obese are disabled, but others are not.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Ebola, Safety & Ethics

Posted in Ethics, Medicine/Health, Philosophy by Michael LaBossiere on October 31, 2014
Color-enhanced electron micrograph of Ebola virus particles. (Photo credit: Wikipedia)

Kaci Hickox, a nurse from my home state of Maine, returned to the United States after serving as a health care worker in the Ebola outbreak. Rather than being greeted as a hero, she was confined to an unheated tent with a box for a toilet and no shower. She did not have any symptoms and tested negative for Ebola. After threatening a lawsuit, she was released and allowed to return to Maine. After arriving home, she refused to be quarantined again. She did, however, state that she would be following the CDC protocols. Her situation puts a face on a general moral concern, namely the ethics of balancing rights with safety.

While past outbreaks of Ebola in Africa were met largely with indifference from the West (aside from those who went to render aid, of course), the current outbreak has infected the United States with a severe case of fear. Some folks in the media have fanned the flames of this fear knowing that it will attract viewers. Politicians have also contributed to the fear. Some have worked hard to make Ebola into a political game piece that will allow them to bash their opponents and score points by appeasing fears they have helped create. Because of this fear, most Americans have claimed they support a travel ban in regards to Ebola infected countries and some states have started imposing mandatory quarantines. While it is to be expected that politicians will often pander to the fears of the public, the ethics of the matter should be considered rationally.

While Ebola is scary, the basic “formula” for sorting out the matter is rather simple. It is an approach that I use for all situations in which rights (or liberties) are in conflict with safety. The basic idea is this. The first step is sorting out the level of risk. This includes determining the probability that the harm will occur as well as the severity of the harm (both in quantity and quality). In the case of Ebola, the probability that someone will get it in the United States is extremely low. As the actual experts have pointed out, infection requires direct contact with bodily fluids while a person is infectious. Even then, the infection rate seems relatively low, at least in the United States. In terms of the harm, Ebola can be fatal. However, timely treatment in a well-equipped facility has been shown to be very effective. In terms of the things that are likely to harm or kill an American in the United States, Ebola is near the bottom of the list. As such, a rational assessment of the threat is that it is a small one in the United States.

The second step is determining key facts about the proposals to create safety. One obvious concern is the effectiveness of the proposed method. As an example, the 21-day mandatory quarantine would be effective at containing Ebola. If someone shows no symptoms during that time, then she is almost certainly Ebola free and can be released. If a person shows symptoms, then she can be treated immediately. An alternative, namely tracking and monitoring people rather than locking them up would also be fairly effective—it has worked so far. However, there are the worries that this method could fail—bureaucratic failures might happen or people might refuse to cooperate. A second concern is the cost of the method in terms of both practical costs and other consequences. In the case of the 21-day quarantine, there are the obvious economic and psychological costs to the person being quarantined. After all, most people will not be able to work from quarantine and the person will be isolated from others. There is also the cost of the quarantine itself. In terms of other consequences, it has been argued that imposing this quarantine will discourage volunteers from going to help out and this will be worse for the United States. This is because it is best for the rest of the world if Ebola is stopped in Africa and this will require volunteers from around the world. In the case of the tracking and monitoring approach, there would be a cost—but far less than a mandatory quarantine.

From a practical standpoint, assessing a proposed method of safety is a utilitarian calculation: does the risk warrant the cost of the method? To use some non-Ebola examples, every aircraft could be made as safe as Air Force One, every car could be made as safe as a NASCAR vehicle, and all guns could be taken away to prevent gun accidents and homicides. However, we have decided that the cost of such safety would be too high and hence we are willing to allow some number of people to die. In the case of Ebola, the calculation is a question of considering the risk presented against the effectiveness and cost of the proposed method. Since I am not a medical expert, I am reluctant to make a definite claim. However, the medical experts do seem to hold that the quarantine approach is not warranted in the case of people who lack symptoms and test negative.
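
To make the shape of that calculation explicit, here is a minimal sketch in Python. The numbers are placeholders chosen only to show the structure of the comparison, not estimates of actual Ebola risk, and the third step, the moral weighing of rights, is deliberately left out of the arithmetic.

# Minimal sketch of the utilitarian 'formula' described above.
# All numbers are placeholders for illustration, not real risk estimates.

def expected_harm(probability: float, severity: float) -> float:
    """Step 1: risk is the probability of the harm times its severity."""
    return probability * severity

def worth_imposing(risk_without: float, risk_with: float, cost_of_measure: float) -> bool:
    """Step 2: does the harm the measure would avoid outweigh what the measure costs?"""
    harm_avoided = risk_without - risk_with
    return harm_avoided > cost_of_measure

# Hypothetical mandatory-quarantine example, in arbitrary 'harm units'.
risk_without_quarantine = expected_harm(probability=0.0001, severity=1000)
risk_with_quarantine = expected_harm(probability=0.00001, severity=1000)
print(worth_imposing(risk_without_quarantine, risk_with_quarantine, cost_of_measure=5.0))
# Prints False: a tiny reduction in an already tiny risk does not cover a real cost.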

The third concern is the moral concern. Sorting out the moral aspect involves weighing the practical concerns (risk, effectiveness and cost) against the right (or liberty) in question. Some also include the legal aspects of the matter here as well, although law and morality are distinct (except, obviously, for those who are legalists and regard the law as determining morality). Since I am not a lawyer, I will leave the legal aspects to experts in that area and focus on the ethics of the matter.

When working through the moral aspect of the matter, the challenge is determining whether or not the practical concerns morally justify restricting or even eliminating rights (or liberties) in the name of safety. This should, obviously enough, be based on consistent principles in regards to balancing safety and rights. Unfortunately, people tend to be wildly inconsistent in this matter. In the case of Ebola, some people have expressed the “better safe than sorry” view and have elected to impose or support mandatory quarantines at the expense of the rights and liberties of those being quarantined. In the case of gun rights, these are often taken as trumping concerns about safety. The same holds true of the “right” or liberty to operate automobiles: tens of thousands of people die each year on the roads, yet any proposal to deny people this right would be rejected. In general, people assess these matters based on feelings, prejudices, biases, ideology and other non-rational factors—this explains the lack of consistency. So, people are willing to impose on basic rights for little or no gain to safety, while also being content to refuse even modest infringements in matters that result in great harm. However, there are also legitimate grounds for differences: people can, after due consideration, assess the weight of rights against safety very differently.

Turning back to Ebola, the main moral question is whether or not the safety gained by imposing the quarantine (or travel ban) would justify denying people their rights. In the case of someone who is infectious, the answer would seem to be “yes.” After all, the harm done to the person (being quarantined) is greatly exceeded by the harm that would be inflicted on others by his putting them at risk of infection. In the case of people who are showing no symptoms, who test negative and who are relatively low risk (no known specific exposure to infection), then a mandatory quarantine would not be justified. Naturally, some would argue that “it is better to be safe than sorry” and hence the mandatory quarantine should be imposed. However, if it was justified in the case of Ebola, it would also be justified in other cases in which imposing on rights has even a slight chance of preventing harm. This would seem to justify taking away private vehicles and guns: these kill more people than Ebola. It might also justify imposing mandatory diets and exercise on people to protect them from harm. After all, poor health habits are major causes of health issues and premature deaths. To be consistent, if imposing a mandatory quarantine is warranted on the grounds that rights can be set aside even when the risk is incredibly slight, then this same principle must be applied across the board. This seems rather unreasonable and hence the mandatory quarantine of people who are not infectious is also unreasonable and not morally acceptable.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Ethics & E-Cigarettes

Posted in Ethics, Philosophy by Michael LaBossiere on May 26, 2014
Electronic Cigarette Smoking (Photo credit: planetc1)

While the patent for an e-cigarette like device dates back to 1965, it is only fairly recently that e-cigarettes (e-cigs) have become popular and readily available. Thanks, in part, to the devastating health impact of traditional cigarettes, there is considerable concern about the e-cig.

A typical e-cig works by electronically heating a cartridge containing nicotine, flavoring and propylene glycol to release a vapor. This vapor is inhaled by the user, delivering the nicotine (and flavor). From the standpoint of ethics, the main concern is whether or not the e-cigs are harmful to the user.

At this point, the health threat, if any, of e-cigs is largely unknown—primarily because of the lack of adequate studies of the product.

While propylene glycol is regarded as safe by the FDA (it is used in soft drinks, shampoos and other products that are consumed or applied to the body), it is not known what effect the substance has if it is heated and inhaled. It might be harmless or it might not. Nicotine, which is regarded as being addictive, might also be harmful. There are also concerns about the “other stuff” in the cartridge that are heated into vapor—there is some indication that the vapors contain carcinogens.  However, e-cigs are largely an unknown—aside from the general notion that inhaling particles generated from burning something is often not a great idea.

From a moral standpoint, there is the obvious concern that people are being exposed to a product whose health impact is not yet known. As of this writing, regulation of e-cigs seems to be rather limited and is often inconsistently enforced. Given that the e-cig is largely an unknown, it certainly seems reasonable to determine their potential impact on the consumer so as to provide a rational basis for regulation (which might be to have no regulation).

One stock argument in favor of e-cigs can be cast on utilitarian grounds. While the health impact of e-cigs is unknown, it seems reasonable to accept (at least initially) that they are probably not as bad for people as traditional cigarettes. If people elect to use e-cigs rather than traditional tobacco products, then they will be harmed less than if they used the tobacco products. This reduced harm would thus make e-cigs morally preferable to traditional tobacco products. Naturally, if e-cigs turn out to be worse than traditional tobacco products (which seems somewhat unlikely), then things would be rather different.

There is also the moral (and health) concern that people who would not use tobacco products would use e-cigs on the grounds that they are safer than the tobacco products. If the e-cigs are still harmful, then this would be of moral concern since people would be harmed who otherwise would not be harmed.

One obvious point of consideration is my view that people have a moral right to self-abuse. This is based on Mill’s arguments regarding liberty—others have no moral right to compel a person to do or not do something merely because doing so would be better, healthier or wiser for a person. The right to compel does cover cases in which a person is harming others—so, while I do hold that I have no right to compel people to not smoke, I do have the right to compel people to not expose me to smoke. As such, I can rightfully forbid people from smoking in my house, but not from smoking in their own.

Given the right of self-abuse, people would thus have every right to use e-cigs, provided that they are not harming others (so, for example, I can rightfully forbid people from using them in my house)—even if the e-cigs are very harmful.

However, I also hold to the importance of informed self-abuse: the person has to be able to determine (if she wants to) whether or not the activity is harmful in order for the self-abuse to be morally acceptable. That is, the person needs to be able to determine whether she is, in fact, engaging in self-abuse or not. If the person is unable to acquire the needed information, then this makes the matter a bit more morally complicated.

If the person is being intentionally deceived, then the deceiver is clearly subject to moral blame—especially if the person would not engage in the activity if she was not so deceived. For example, selling people a product that causes health problems and intentionally concealing this fact would be immoral. Or, to use another example, giving people brownies containing marijuana and not telling them would be immoral.

If there is no information available, then the ethics of the situation become rather more debatable. On the one hand, if I know that the effect of a product is unknown and I elect to use it, then it would seem that my decision puts most (if not all) of the moral blame on me, should the product prove to be harmful. This would be, it might be argued, like eating some mushroom found in the woods: if you don’t know what it will do, yet you eat it anyway and it hurts you, shame on you.

On the other hand, it seems reasonable to expect that people who sell products intended for consumption be compelled to determine whether these products will be harmful or not. To use another analogy, if I have dinner at someone’s house, I have the moral expectation that they will not throw some unknown mushrooms from the woods onto the pizza they are making for dinner. Likewise, if a company sells e-cigs, the customers have a legitimate moral expectation that the product will not hurt them. Being permitted to sell products whose effect is not known is morally dubious at best. But, it should be said, people who use such a product do bear some of the moral responsibility—they have an obligation to consider that a product that has not been tested could be harmful before using it. To use an analogy, if I buy a pizza and I know that I have no idea what the mushrooms on it will do to me, then if it kills me some of the blame rests on me—I should know better. But, the person who sells pizza also has an obligation to know what is going on that pizza; they should not sell death pizza.

The same applies to e-cigs: they should not be sold until their effects are at least reasonably determined. But, if people insist on using them without having any real idea whether they are safe or not, they are choosing poorly and deserve some of the moral blame.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


The Better than Average Delusion

Posted in Reasoning/Logic by Michael LaBossiere on March 28, 2014
Average Joe copy (Photo credit: Wikipedia)

One interesting, but hardly surprising, cognitive bias is the tendency of a person to regard herself as better than average—even when no evidence exists for that view. Surveys in which Americans are asked to compare themselves to their fellows are quite common and nicely illustrate this bias: the overwhelming majority of Americans rank themselves as above average in everything ranging from leadership ability to accuracy in self-assessment.

Obviously enough, the majority of people cannot be better than average—that is just how averages work (for the roughly symmetric traits these surveys ask about, only about half can sit above the mean, and no more than half can ever be above the median). As to why people think the way they do, the disparity between what is claimed and what is the case can be explained in at least two ways. One is another well-established cognitive bias, namely the tendency people have to believe that their performance is better than it actually is. Teachers get to see this in action quite often—students generally believe that they did better on the test than they actually did. For example, I have long since lost count of the people who have gotten Cs or worse on papers who say to me “but it felt like an A!” I have no doubt that it felt like an A to the student—after all, people tend to rather like their own work. Given that people tend to regard their own performance as better than it is, it certainly makes sense that they would regard their abilities as better than average—after all, we tend to think that we are all really good.
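
As a purely numerical aside, a few lines of Python make the point concrete: simulate any roughly symmetric trait and only about half of the population can sit above its average, nowhere near the overwhelming majorities who claim that position in surveys. The scores below are randomly generated for illustration; no real survey data is involved.

# Illustration: for a symmetric trait, only about half of people can be above average.
import random

random.seed(42)
population = [random.gauss(mu=100, sigma=15) for _ in range(100_000)]  # hypothetical 'leadership scores'
mean_score = sum(population) / len(population)

share_above_average = sum(score > mean_score for score in population) / len(population)
print(f"Share actually above average: {share_above_average:.1%}")  # roughly 50%, not 80-90%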

Another reason is yet another bias: people tend to give more weight to the negative over the positive. As such, when assessing other people, we will tend to consider negative things about them as having more significance than the positive things. So, for example, when Sally is assessing the honesty of Bill, she will give more weight to incidents in which Bill was dishonest relative to those in which he was honest. As such, Sally will most likely see herself as being more honest than Bill. After enough comparisons, she will most likely see herself as above average.

This self-delusion probably has some positive effects—for example, it no doubt allows people to maintain a sense of value and to enjoy the smug self-satisfaction that they are better than most other folks. This surely helps people get by day-to-day.

There are, of course, downsides to this—after all, a person who does not do a good job assessing himself and others will be operating on the basis of inaccurate information and this rarely leads to good decision making.

Interestingly enough, the better-than-average delusion holds up quite well even in the face of clear evidence to the contrary. For example, a study published in the British Journal of Social Psychology surveyed British prisoners, asking them to compare themselves to other prisoners and the general population in terms of such traits as honesty, compassion, and trustworthiness. Not surprisingly, the prisoners ranked themselves as above average. They did, however, only rank themselves as average when it came to the trait of law-abidingness. This suggests that reality has some slight impact on people, but not as much as one might hope.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Why Runners are not Masochists (Usually)

Posted in Ethics, Philosophy, Running, Sports/Athletics by Michael LaBossiere on February 10, 2014

Palace 5K

As a runner, I am often accused of being a masochist or at least having masochistic tendencies. Given that I routinely subject myself to pain and recently wrote an essay about running and freedom that was rather pain focused, this is hardly surprising. Other runners, especially those masochistic ultra-marathon runners, are also commonly accused of masochism.

In some cases, the accusation is made in jest or at least not seriously. That is, the person making it is not actually claiming that runners derive pleasure (perhaps even sexual gratification) from their pain. What seems to be going on is merely the observation that runners do things that clearly hurt and that make little sense to many folks. However, some folks do regard runners as masochists in the strict sense of the term. Being a runner and a philosopher, I find this a bit interesting—especially when I am the one being accused of being a masochist.

It is worth noting that some people accuse runners of being masochists with some seriousness. While some people say runners are masochists in jest or with some respect for the toughness of runners, it is sometimes presented as an actual accusation: that there is something mentally wrong with runners and that when they run they are engaged in deviant behavior. While runners do like to joke about being odd and different, I think we generally prefer to not be seen as actually mentally ill or as engaging in deviant behavior. After all, that would indicate that we are doing something wrong—which I believe is (usually) not the case. Based on my experience over years of running and meeting thousands of runners, I think that runners are generally not masochists.

Given that runners engage in some rather painful activities (such as speed work and racing marathons) and that they often just run on despite injuries, it is tempting to believe that runners are really masochists and that I am in denial about the deviant nature of runners.

While this does have some appeal, it rests on a confusion about masochism in regards to matters of means and ends. For the masochist, pain is a means to the end of pleasure. That is, the masochist does not seek pain for the sake of pain, but seeks pain to achieve pleasure. However, there is a special connection between the means of pain and the end of pleasure: for the masochist, the pleasure generated specifically by pain is the pleasure that is desired. While a masochist can get pleasure by other means (such as drugs or cake), it is the desire for pleasure caused by pain that defines the masochist. As such, the pain is not an optional matter—mere pleasure is not the end, but pleasure caused by pain.

This is rather different from those who endure pain as part of achieving an end, be that end pleasure or some other end. For those who endure pain to achieve an end, the pain can be seen as part of the means or, perhaps more accurately, as an effect of the means. It is valuing the end that causes the person to endure the pain to achieve the end—the pain is not sought out as being the “proper cause” of the end. In the case of the masochist, the pain is not endured to achieve an end—it is the “proper cause” of the end, which is pleasure.

In the case of running, runners typically regard pain as something to be endured as part of the process of achieving the desired ends, such as fitness or victory. However, runners generally prefer to avoid pain when they can. For example, while I will endure pain to run a good race, I prefer running well with as little pain as possible. To use an analogy, a person will put up with the unpleasant aspects of a job in order to make money—but she would certainly prefer to have as little unpleasantness as possible. After all, she is in it for the money, not the unpleasant experiences of work. Likewise, a runner is typically running for some other end (or ends) than hurting herself. It just so happens that achieving that end (or ends) requires doing things that cause pain.

In my essay on running and freedom, I described how I endured the pain in my leg while running the Tallahassee Half Marathon. If I were a masochist, experiencing pleasure by means of that pain would have been my primary end. However, my primary end was to run the half marathon well and the pain was actually an obstacle to that end. As such, I would have been glad to have had a painless start and I was pleased when the pain diminished. I enjoy the running and I do actually enjoy overcoming pain, but I do not enjoy the pain itself—hence the aspirin and Icy Hot in my medicine cabinet.

While I cannot speak for all runners, my experience has been that runners do not run for pain, they run despite the pain. Thus, we are not masochists. We might, however, show some poor judgment when it comes to pain and injury—but that is another matter.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Picking between Experts

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on January 29, 2014
A logic diagram proposed for WP OR to handle a situation where two equal experts disagree. (Photo credit: Wikipedia)

One fairly common way to argue is the argument from authority. While people rarely follow the “strict” form of the argument, the basic idea is to infer that a claim is true based on the allegation that the person making the claim is an expert. For example, someone might claim that second hand smoke does not cause cancer because Michael Crichton claimed that it does not. As another example, someone might claim that astral projection/travel is real because Michael Crichton claims it does occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, there are medical experts who claim that second hand smoke does cause cancer.

If you are an expert in the field in question, you can endeavor to pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that second hand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an expert, a person is presumably qualified to make an informed pick. The obvious problem is, of course, that experts themselves pick different experts to accept as being correct.

The problem is even greater when it comes to non-experts who are trying to pick between experts. Being non-experts, they lack the expertise to make authoritative picks between the actual experts based on their own knowledge of the fields. This raises the rather important concern of how to pick between experts when you are not an expert.

Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick an expert based on the fact that she agrees with what you already believe. That is, to infer that the expert is right because you believe what she says. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).

Another common approach is to believe an expert because he makes a claim that you really want to be true. For example, a smoker might elect to believe an expert who claims second hand smoke does not cause cancer because he does not want to believe that he might be increasing the risk that his children will get cancer by his smoking around them. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

People also pick their expert based on qualities they perceive as positive but that are, in fact, irrelevant to the person’s actual credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not actually relevant to assessing a person’s expertise. For example, a person might be very likeable, but not know a thing about what they are talking about.

Fortunately, there are some straightforward standards for picking and believing an expert. They are as follows.


1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim will, obviously, not be well supported. In contrast, claims made by a person with the needed degree of expertise will be supported by the person’s reliability in the area. One rather obvious challenge here is being able to judge that a person has sufficient expertise. In general, the question is whether or not a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.


2. The claim being made by the person is within her area(s) of expertise.

If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. People often mistake expertise in one area (acting, for example) for expertise in another area (politics, for example).


3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.

This is perhaps the most important factor. As a general rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The basic idea is that the majority of experts are more likely to be right than those who disagree with the majority.

It is important to keep in mind that no field has complete agreement, so some degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.

It is also important to be aware that the majority could turn out to be wrong. That said, the reason it is still reasonable for non-experts to go with the majority opinion is that non-experts are, by definition, not experts. After all, if I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.


4. The person in question is not significantly biased.

This is also a rather important standard. Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then the person’s credibility as an authority is reduced. This is because there would be reason to believe that the expert might not be making a claim because he has carefully considered it using his expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice. A biased expert can still be making claims that are true—however, the person’s bias lowers her credibility.

It is important to remember that no person is completely objective. At the very least, a person will be favorable towards her own views (otherwise she would probably not hold them). Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that researchers who receive funding from pharmaceutical companies might be biased while others might claim that the money would not sway them if the drugs proved to be ineffective or harmful.

Disagreement over bias can itself be a very significant dispute. For example, those who doubt that climate change is real often assert that the experts in question are biased in some manner that causes them to say untrue things about the climate. Questioning an expert based on potential bias is a legitimate approach—provided that there is adequate evidence of bias that would be strong enough to unduly influence the expert. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from a claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that an expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.

These standards are clearly not infallible. However, they do provide a good general guide to logically picking an expert. Certainly more logical than just picking the one who says things one likes.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Poor Fraud

Posted in Business, Ethics, Law, Philosophy, Politics by Michael LaBossiere on January 17, 2014
Fox News Channel (Photo credit: Wikipedia)


One stock narrative is the tale of the fraud committed by the poor in regards to government programs. Donald Trump, for example, has claimed that a lot of fraud occurs. Fox News also pushes the idea that government programs aimed to help the poor are fraught with fraud. Interestingly enough, the “evidence” presented in support of such claims seems to be that the people making the claim think or feel that there must be a lot of fraud. However, there seems little inclination to actually look for supporting evidence—presumably if someone feels strongly enough that a claim is true, that is good enough.


The claim that the system is dominated by fraud is commonly used to argue that the system should be cut back or even eliminated.  The basic idea is that the poor are “takers” who are fraudulently living off the “makers.” While fraud is clearly wrong, it is rather important to consider some key questions.


The first question is this: what is the actual percentage of fraud that occurs in such programs? While, as noted above, certain people speak of lots of fraud, the actual statistical data tells another story. In the case of unemployment insurance, the rate of fraud is estimated to be less than 2%. This is lower than the rate of fraud in the private sector. In the case of welfare, fraud is sometimes reported as being 20%-40% at the state level. However, the “fraud” seems to be primarily the result of errors on the part of bureaucrats rather than fraud committed by the recipients. Naturally, an error rate that high is unacceptable—but that is a rather different narrative than that of the wicked poor.


Food stamp fraud does occur—but most of it is committed by businesses rather than the recipients of the stamps.  While there is some fraud on the part of recipients, the best data indicates that fraud accounts for about 1% of the payments. Given the rate of fraud in the private sector, that is exceptionally good.


Given this data, the overwhelming majority of those who receive assistance are not engaged in fraud. This is not to say that fraud should not be a concern—in fact, it is the concern with fraud on the part of the recipients that has resulted in such low incidents of fraud. Interestingly, about one third of fraud involving government money involves not the poor, but defense contractors who account for about $100 billion in fraud per year. Medicare and Medicaid combined have about $100 billion in fraudulent expenditures per year. While there is also a narrative of the wicked poor in regards to Medicare and Medicaid, the fraud is usually perpetrated by the providers of health care rather than the recipients. As such, it would seem that the focus on fraud should shift from the poor recipients of aid to defense contractors and to address Medicare/Medicaid issues. That is, it is not the wicked poor who are siphoning away money with fraud, it is the wicked wealthy who are sucking on the teat of the state. As such the narrative of the poor defrauding the state is a flawed narrative. Certainly it does happen: the percentage of fraud is greater than zero. However, the overall level of fraud on the part of the poor recipients seems to be less than 2%. The majority of fraud, contrary to the narrative, is committed by those who are not poor. While the existence of fraud does show a need to address that fraud, the narrative has cast the wrong people as the villains.


While the idea of mass welfare cheating is thus unfounded, there is still a legitimate concern as to whether or not the poor should be receiving such support from the state. After all,  even if the overwhelming majority of recipients are honestly following the rules and not engaged in fraud, there is still the question of whether or not the state should be providing welfare, food stamps, Medicare, Medicaid and similar such benefits. Of course, the narrative does lose some of its rhetorical power if the poor are not cast as frauds.


My Amazon Author Page


My Paizo Page


My DriveThru RPG Page



Cogs of Self

Posted in Philosophy, Relationships/Dating by Michael LaBossiere on November 20, 2013
Clockwork at the Liverpool World Museum (Photo credit: Wikipedia)

While on a post-race cool down run with a friend, we discussed the failure of relationships. I was asked what I thought about the causes of such failures and, as usual, I came up with an analogy.

While there are many ways to see people, one way is to regard them as wonderful clockworks of cogs. These cogs are metaphors for the qualities, values, interests and other aspects of the personality of the person. Some of the cogs are at the surface of the person’s cog self—these are the ones that interact with the cogs of others. These tend to be the smaller, or minor, cogs. The deep self is made up of the core cogs—which would tend to be the larger cogs of a person. These could be regarded as the large cogs and the greater cogs.

When people interact, their outer cogs meet up. If the cogs spin together well, then the people get along and are compatible. If the cogs clash, then there will be problems.

When a person is in a relationship with another person, their minor cogs will interact and then, if things go well, some of their larger cogs will rotate in sync. While there will be clashes between the cogs, if enough of them spin well together, the relationship will go on. At least for a while.

Over time a person’s minor cogs will change. What she once found amusing will no longer amuse her. A hobby he once liked will no longer hold its charm. The poetry that once bored her will now touch her heart. And so on. A person’s larger cogs can also change, such as in a significant change of values.

In the case of a relationship, the impact of the changes will be doubled—the cogs that once rolled together smoothly might now spin against each other, creating a grinding in the machinery of the soul. If the change is great enough, the cogs can actually destroy each other, doing damage to the person or persons. At a certain point, the clash will doom the interaction, spelling the end of the relationship—or at least dooming those involved.

In other cases, the cogs can grow ever more in sync—spinning together ever closer. Presumably that sometimes happens.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Aesthetic Masochism

Posted in Aesthetics, Philosophy by Michael LaBossiere on July 5, 2013
Double Stuf Oreos, by Nabisco. (Photo credit: Wikipedia)

Like many philosophers, I am rather drawn to science-fiction movies. One of my colleagues, Stephen, deviates from this usual path: while he does not dislike science fiction, his experience with the genre was somewhat limited. After learning that I was “big into sci-fi,” he asked me for some recommendations. While he did like some of the films I suggested, he regarded some as rather awful. As should come as no surprise, this got me thinking about the enjoyment (or lack thereof) of bad films.

As my colleague pointed out, one common approach to explaining the enjoyment of bad films is to appeal to the notion that something can be so bad that it is good. On the face of it, bad would seem to be, well, bad. As such, there is a need to sort out what it could mean for something to be so bad that it is good.

One possibility is what could be called accidental aesthetic success. This is when a work succeeds not at what it was intended to be, but rather in being an accidental parody or mockery of the genre. Using the example of science fiction, this commonly occurs when the work is so absurd that although it is horrible science-fiction, it succeeds as an unintentional comedy. Thus the work is a failure in one sense (to borrow from Aristotle, it fails to produce the  intended effect on the audience). But it succeeds in another sense, by producing an unintended but valuable aesthetic experience for the audience.

While this view is certainly tempting, it can also be disputed by contending that the work does not actually succeed. To be specific, while it does produce an effect on the audience, this is a matter of accident rather than intent and hence to credit the work with success would be an error. To use an analogy, if someone intends to defend himself with a devastating martial arts attack, but slips on a banana to great comic effect, then he has not succeeded. Rather, his failure has caused the sort of mocking amusement reserved for failures.

That said, I am willing to extend a certain sort of aesthetic success to works that are so accidentally bad that they are good. There are, of course, works that endeavor to be good at being bad (such as the film Black Dynamite). These works can be assessed at how well they succeed at being good at what is attempted. Intentionally making a work that is good at being bad does open the possibility that the work could fail at being bad in such a way that makes it good in another way. But perhaps in that way lies madness.

Another approach to good badness can be used by drawing an analogy to junk food. Junk food is, by its nature, bad food. At least, it is bad food in terms of its nutritional value. However, people do rather like junk food and regard it as good in regards to how it tastes. The reverse holds for other types of food. For example, as a runner I have tried a wide variety of food products designed for athletes. While such food is often rather good in terms of its nutritional content, the taste is often rather bad (leading to my bad jokes about junk food and anti-junk food).

Going with the food analogy, some works that are so bad they are good could be rather like junk food. That is, they are deficient in what might be regarded as aesthetic nutritional value, yet have a certain tastiness, at least while they are being experienced. As with junk food, the after effects can be rather less pleasant. For example: for me, watching True Blood is like eating a mix of Cheetos and Oreo Cookies washed down with Mountain Dew. Somehow it is enjoyable while it is happening, but after it is done I wonder why the hell I did that…and I feel vaguely sick.

In such cases, I am willing to grant that such works have some sort of aesthetic value, much as I am willing to grant that junk food has some sort of value. However, the value does often seem rather dubious.

One counter to this is to contend that valuing “junk food” aesthetic value is just as big a mistake as valuing actual junk food. While a person might enjoy such experiences, she is making an error. In the case of food, she is making a poor nutritional choice that is masked by a pleasant taste. In the case of art, she is making a poor aesthetic choice, masked by a superficially pleasant experience.

It could be responded that a work might seem to be junk, but that it is actually better than it seems (or sounds, to steal from Twain). Going back to the food analogy, this could have some appeal. After all, food could be bad in one area (taste) but excel in another (nutrition). So, a food could actually be much better than it tastes. However, this sort of approach only works when the thing in question does actually have the capacity to be better than it seems.

In the case of aesthetic experiences it would certainly seem that a work cannot be better than it seems. After all, the aesthetic experience would be the seeming and it is exactly what it is. For example, consider a song that sounds awful. To claim it is better than it sounds would seem to be an error. After all, the song is what it sounds like and if it sounds bad, it is bad. There is nothing beyond the sound that could be appealed to in order to claim that it is better than it sounds. After all, it sounds like what it sounds like. This is, of course, in contrast with many other things. For example, it makes sense to say of a wound that it looks worse than it is: the appearance (lots of blood, for example) is distinct from the seriousness of the wound. As another example, it makes sense to say that a car is better than it looks: it might look like a junker on the outside, but the engine might be brand new. Naturally, if it can be shown that art has these multiple aspects, then this matter could be properly addressed.

Before moving on, I must note that I am aware that a work of art can be good or bad in various aspects. For example, a song could have great lyrics, but be sung poorly. As another example, a film could have terrible special effects, but a brilliant story. This is, however, a different matter; in the above I am considering the aesthetic experience as a whole. To use an analogy, while a hamburger might have good cheese but a crappy burger, what would be considered is the overall experience of eating the hamburger.

Like other folks I know, I will sometimes indulge in watching a bad sci-fi/horror/fantasy movie that I recognize as being awful, which prevents any appeal to the idea of good badness. As might be suspected, my colleague asked me why I would waste my time on bad movies that I actually admitted were bad.

My initial response was a somewhat practical one: there are only a limited number of good movies in those genres and when I get a craving for a genre, sometimes the only option is something bad. To use an analogy, this is like getting a craving for a certain food late at night and the only place that is open is rather bad. So, the only options are going without or going bad. In some cases, just as a bad burger is better than no burger, a bad film is better than no film. However, in other cases nothing is better than something bad.

My second response arose from conversations that my colleague and I had about running. While we are both runners, my colleague is the sort of runner who runs for himself and has no real interest in training for or competing in races. I am, however, very much into training and competition. In addition to enjoying the competition, I must admit that I enjoy the painful experience of hard training and running. That is, I obviously have some mild sort of masochism going on in this area which my colleague lacks.

This difference seems to extend beyond running and into aesthetics: I can actually enjoy suffering through a bad movie. Since I know other folks who are the same way, I believe that there is a certain aesthetic masochism that some people possess. I have not worked out a full theory of this, but given the volume of bad films and shows, this does seem like a promising area.

Test your aesthetic masochism on My Amazon Author Page.

