While there is an abundance of violence in the real world, there is also considerable focus on the virtual violence of video games. Interestingly, some people (such as the head of the NRA) blame real violence on the virtual violence of video games. The idea that art can corrupt people is nothing new and dates back at least to Plato’s discussion of the corrupting influence of art. While he was mainly worried about the corrupting influence of tragedy and comedy, he also raised concerns about violence and sex. These days we generally do not worry about the nefarious influence of tragedy and comedy, but there is considerable concern about violence.
While I am a gamer, I do have concerns about the possible influence of video games on actual behavior. For example, one of my published essays is on the distinction between virtual vice and virtual virtue and in this essay I raise concerns about the potential dangers of video games that are focused on vice. While I do have concerns about the impact of video games, there has been little in the way of significant evidence supporting the claim that video games have a meaningful role in causing real-world violence. However, such studies are fairly popular and generally get attention from the media.
The most recent study purports to show that teenage boys might become desensitized to violence because of extensive playing of video games. While some folks will take this study as showing a connection between video games and violence, it is well worth considering the details of the study in the context of causal reasoning involving populations.
When conducting a cause-to-effect experiment, one rather important factor is the size of the experimental group (those exposed to the cause) and the control group (those not exposed to the cause). The smaller the number of subjects, the more likely it is that the difference between the groups is due to factors other than the (alleged) causal factor. There is also the concern of generalizing the results from the experiment to the whole population.
The experiment in question consisted of 30 boys (ages 13-15) in total. As a sample for determining a causal connection, this is too small for real confidence to be placed in the results. The sample is also far too small to support a generalization from the 30 boys to the general population of teenage boys. In fact, the experiment hardly seems worth conducting with such a small sample and is certainly not worth reporting on, except as an illustration of how research should not be conducted.
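The worry about a 30-subject sample can be made concrete with a toy simulation: draw two groups of 15 from the same distribution (so there is, by construction, no real effect at all) and see how large an apparent difference in group means shows up by chance alone. The "heart rate" parameters below are invented purely for illustration and have nothing to do with the actual study.

```python
import random
import statistics

def simulated_difference(n_per_group=15, trials=10000, seed=42):
    """Draw two groups of n_per_group values from the SAME distribution
    (no real effect) and record the apparent gap in group means per trial."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        # Invented numbers for illustration: resting heart rate ~ N(70 bpm, sd 10)
        a = [rng.gauss(70, 10) for _ in range(n_per_group)]
        b = [rng.gauss(70, 10) for _ in range(n_per_group)]
        diffs.append(abs(statistics.mean(a) - statistics.mean(b)))
    return diffs

diffs = simulated_difference()
print(f"median chance difference: {statistics.median(diffs):.1f} bpm")
print(f"5% of trials exceed: {sorted(diffs)[int(0.95 * len(diffs))]:.1f} bpm")
```

With groups this small, a gap of several beats per minute routinely appears between two groups that are, by construction, identical, which is why a 30-subject study cannot reliably distinguish a modest real effect from noise.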
The researchers had the boys play a violent video game and a non-violent video game in the evening and compared the results. According to the researchers, those who played the violent video game had faster heart rates and lower sleep quality. They also reported “increased feelings of sadness.” After playing the violent game, the boys had greater stress and anxiety.
According to one researcher, “The violent game seems to have elicited more stress at bedtime in both groups, and it also seems as if the violent game in general caused some kind of exhaustion. However, the exhaustion didn’t seem to be of the kind that normally promotes good sleep, but rather as a stressful factor that can impair sleep quality.”
As a veteran of violent video games, I find these results consistent with my own experiences. I have found that if I play a combat game, be it a first-person shooter, an MMO or a real-time strategy game, too close to bedtime, I have trouble sleeping. Crudely put, I find that I am “keyed up” and if I am unable to “calm down” before trying to sleep, my sleep is generally not very restful. I really noticed this when I was raiding in WoW. A raid is a high-stress situation (game stress, anyway) that requires hyper-vigilance, and it takes time to “come down” from that. I have experienced the same thing with actual fighting (martial arts training, not random violence). I’ve even experienced something comparable when I’ve been awoken by a big spider crawling on my face; I did not sleep quite so well after that. Graduate school, as might be imagined, put me into this state of poor sleep for about five years.
In general, then, it makes sense that violent video games would have this effect, which is why it is not a good idea to game right up until bedtime if you want a good night’s sleep. Of course, it is generally a good idea to relax for about an hour before bedtime: don’t check email, don’t get on Facebook, don’t do work and so on.
While not playing games before bedtime is a good idea, the question remains as to how these findings connect to violence and video games. According to the researchers, the differences between the two groups “suggest that frequent exposure to violent video games may have a desensitizing effect.”
Laying aside the problem that the sample is far too small to provide significant results that can be reliably extended to the general population of teenage boys, there is also the problem that there seems to be a rather large chasm between the observed behavior (anxiety and lower sleep quality) and being desensitized to violence. The researchers do note that the cause and effect relationship was not established and they did consider the possibility of reversed causation (that the video games are not causing these traits, but that boys with those traits are drawn to violent video games). As such, the main impact of the study seems to be that it got media attention for the researchers. This would suggest another avenue of research: the corrupting influence of media attention on researching video games and violence.
While it sounds a bit like science fiction, the issue of whether or not human genes can be owned has become a matter of concern. While the legal issue is interesting, my focus will be on the philosophical aspects of the matter. After all, it was once perfectly legal to own human beings—so what is legal is rather different from what is right.
Perhaps the most compelling argument for the ownership of genes is a stock consequentialist argument. If corporations cannot patent and thus profit from genes, then they will have no incentive to engage in expensive genetic research (such as developing tests for specific genes that are linked to cancer). The lack of such research will mean that numerous benefits to individuals and society will not be acquired (such as treatments for specific genetic conditions). As such, not allowing patents on human genes would be wrong.
While this argument does have considerable appeal, it can be countered by another consequentialist argument. If human genes can be patented, then this will allow corporations to take exclusive ownership of these genes, thus allowing them a monopoly. Such patents will allow them to control the allowed research conducted even at non-profit institutions such as universities (who sometimes do research for the sake of research), thus restricting the expansion of knowledge and potentially slowing down the development of treatments. This monopoly would also allow the corporation to set the pricing for relevant products or services without any competition. This is likely to result in artificially high prices which could very well deny people needed medical services or products simply because they cannot meet the artificially high prices arising from the lack of competition. As such, allowing patents on human genes would be wrong.
Naturally, this counter argument can be countered. However, the harms of allowing the ownership of human genes would seem to outweigh the benefits—at least when the general good is considered. Obviously, such ownership would be very good for the corporation that owns the patent.
In addition to the moral concerns regarding the consequences, there is also the general matter of whether it is reasonable to regard a gene as something that can be owned. Addressing this properly requires some consideration of the basis of property.
John Locke presents a fairly plausible account of property: a person owns her body and thus her labor. While everything is initially common property, a person makes something her own property by mixing her labor with it. To use a simple example, if Bill and Sally are shipwrecked on an ownerless island and Sally gathers coconuts from the trees and builds a hut for herself, then the coconuts and hut are her property. If Bill wants coconuts or a hut, he’ll have to either do work or ask Sally for access to her property.
On Locke’s account, perhaps researchers could mix their labor with the gene and make it their own. Or perhaps not—I do not, for example, gain ownership of the word “word” in general because I mixed my labor with it by typing it out. I just own the work I have created in particular. That is, I own this essay, not the words making it up.
Sticking with Locke’s account, he also claims that we are owned by God because He created us. Interestingly, for folks who believe that God created the world, it would seem to follow that a corporation cannot own a human gene. After all, God is the creator of the genes and they are thus His property. As such, any attempt to patent a human gene would be an infringement on God’s property rights.
It could be countered that although God created everything, since He allows us to own the stuff He created (like land, gold, and apples), then He would be fine with people owning human genes. However, the basis for owning a gene would still seem problematic—it would be a case of someone trying to patent an invention which was invented by another person—after all, if God exists then He invented our genes, so a corporation cannot claim to have invented them. If the corporation claims to have a right to ownership because they worked hard and spent a lot of money, the obvious reply is that working hard and spending a lot of money to discover what is already owned by another would not transfer ownership. To use an analogy, if a company worked hard and spent a lot to figure out the secret formula to Coke, it would not thus be entitled to own Coca Cola’s formula.
Naturally, if there is no God, then the matter changes (unless we were created by something else, of course). In this case, the gene is not the property of a creator, but something that arose naturally. In this case, while someone can rightfully claim to be the first to discover a gene, no one could claim to be the inventor of a naturally occurring gene. As such, the idea that ownership would be conferred by mere discovery would seem to be a rather odd one, at least in the case of a gene.
The obvious counter is that people claim ownership of land, oil, gold and other resources by discovering them. One could thus argue that genes are analogous to gold or oil: discovering them turns them into property of the discoverer. There are, of course, those who claim that the ownership of land and such is unjustified, but this concern will be set aside for the sake of the argument (but not ignored—if discovery does not confer ownership, then gene ownership would be right out in regards to natural genes).
While the analogy is appealing, the obvious reply is that when someone discovers a natural resource, she gains ownership of that specific find and not all instances of what she found. For example, when someone discovers gold, they own that gold but not gold itself. As another example, if I am the first human to stumble across naturally occurring Unobtanium on an ownerless alien world, I do not thereby gain ownership of all instances of Unobtanium even if it cost me a lot of money and work to find it. However, if I artificially create it in my philosophy lab, then it would seem to be rightfully mine. As such, the researchers that found the gene could claim ownership of that particular genetic object, but not the gene in general, on the grounds that they merely found it rather than created it. However, if they had created a new artificial gene that occurs nowhere in nature, then they would have grounds for a claim of ownership—at least to the degree they created the gene.
There are many ways to die, but the public concern tends to focus on whatever is illuminated in the media spotlight. 2012 saw considerable focus on guns and some modest attention on a somewhat unexpected and perhaps ironic killer, namely pain medication. In the United States, about 20,000 people die each year (about one every 26 minutes) due to pain medication. This typically occurs from what is called “stacking”: a person will take multiple pain medications and sometimes add alcohol to the mix, resulting in death. While some people might elect to use this as a method of suicide, most of the deaths appear to be accidental—that is, the person had no intention of ending his life.
The number of deaths is so high in part because of the volume of painkillers being consumed in the United States. Americans consume 80% of the world’s painkillers and the consumption jumped 600% from 1997 to 2007. Of course, one rather important matter is the reasons why there is such an excessive consumption of pain pills.
One reason is that doctors have been complicit in the increased use of pain medications. While there have been some efforts to cut back on prescribing pain medication, medical professionals were generally willing to write prescriptions for pain medication even in cases when such medicine was not medically necessary. This is similar to the over-prescribing of antibiotics that has come back to haunt us with drug resistant strains of bacteria. In some cases doctors no doubt simply prescribed the drugs to appease patients. In other cases profit was perhaps a motive. Fortunately, there have been serious efforts to address this matter in the medical community.
A second reason is that pharmaceutical companies did a good job selling their pain medications and encouraged doctors to prescribe them and patients to use them. While the industry had no intention of killing its customers, the pushing of pain medication has had that effect.
Of course, the doctors and pharmaceutical companies do not bear the main blame. While the companies supplied the product and the doctors provided the prescriptions, the patients had to want the drugs and use the drugs in order for this problem to reach the level of an epidemic.
The main causal factor would seem to be that American attitudes towards pain changed, resulting in the above-mentioned 600% increase in the consumption of painkillers. In the past, Americans seemed more willing to tolerate pain and less willing to use heavy-duty pain medications to treat relatively minor pains. These attitudes changed and now Americans are generally less willing to tolerate pain and more willing to turn to prescription painkillers. I regard this as a moral failing on the part of Americans.
As an athlete, I am no stranger to pain. I have suffered the usual assortment of injuries that go along with being a competitive runner and a martial artist. I also received some advanced education in pain when a fall tore my quadriceps tendon. As might be imagined, I have received numerous prescriptions for pain medication. However, I have used pain medications incredibly sparingly and if I do get a prescription filled, I usually end up properly disposing of the vast majority of the medication. I do admit that I made use of pain medication when recovering from my tendon tear—the surgery involved a seven-inch incision in my leg that cut down until the tendon was exposed. The doctor had to retrieve the tendon, drill holes through my kneecap to re-attach the tendon and then close the incision. As might be imagined, this was a source of considerable pain. However, I only used the pain medicine when I needed to sleep at night—I found that the pain tended to keep me awake at first. Some people did ask me if I had any problem resisting the lure of the pain medication (and a few people, jokingly I hope, asked for my extras). I had no trouble at all. Naturally, given that so many people are abusing pain medication, I did wonder about the differences between myself and my fellows who are hooked on pain medication—sometimes to the point of death.
A key part of the explanation is my system of values. When I was a kid, I was rather weak in regards to pain. I infer this is true of most people. However, my father and others endeavored to teach me that a boy should be tough in the face of pain. When I started running, I learned a lot about pain (I first started running in basketball shoes and got huge, bleeding blisters). My main lesson was that an athlete did not let pain defeat him and certainly did not let down the team just because something hurt. When I started martial arts, I learned a lot more about pain and how to endure it. This training instilled me with the belief that one should endure pain and that to give in to it would be dishonorable and wrong. This also includes the idea that the use of painkillers is undesirable. This was balanced by the accompanying belief, namely that a person should not needlessly injure his body. As might be suspected, I learned to distinguish between mere pain and actual damage occurring to my body.
Of course, the above just explains why I believe what I do—it does not serve to provide a moral argument for enduring pain and resisting the abuse of pain medication. What is wanted are reasons to think that my view is morally commendable and that the alternative is to be condemned. Not surprisingly, I will turn to Aristotle here.
Following Aristotle, one becomes better able to endure pain by habituation. In my case, running and martial arts built my tolerance for pain, allowing me to handle the pain ever more effectively, both mentally and physically. Because of this, when I fell from my roof and tore my quadriceps tendon, I was able to drive myself to the doctor—I had one working leg, which is all I needed. This ability to endure pain also serves me well in lesser situations, such as racing, enduring committee meetings and grading papers.
This, of course, provides a practical reason to learn to endure pain—a person is much more capable of facing problems involving pain when she is properly trained in the matter. Someone who lacks this training and ability will be at a disadvantage when facing situations involving pain and this could prove harmful or even fatal. Naturally, a person who relies on pain medication to deal with pain will not be training herself to endure. Rather, she will be training herself to give in to pain and become dependent on medication that will become increasingly ineffective. In fact, some people end up becoming even more sensitive to pain because of their pain medication.
From a moral standpoint, a person who does not learn to endure pain properly and instead turns unnecessarily to pain medication is doing harm to himself and this can even lead to an untimely death. Naturally, as Aristotle would argue, there is also an excess when it comes to dealing with pain: a person who forces herself to endure pain beyond her limits or when doing so causes actual damage is not acting wisely or virtuously, but self-destructively. This can be used in a utilitarian argument to establish the wrongness of relying on pain medication unnecessarily as well as the wrongness of enduring pain stupidly. Obviously, it can also be used in the context of virtue theory: a person who turns to medication too quickly is defective in terms of deficiency; one who harms herself by suffering beyond the point of reason is defective in terms of excess.
Currently, Americans are, in general, suffering from a moral deficiency in regards to the matter of pain tolerance and it is killing us at an alarming rate. As might be suspected, there have been attempts to address the matter through laws and regulations regarding pain medication prescriptions. This supplies people with a will surrogate—if a person cannot get pain medication, then she will have to endure the pain. Of course, people are rather adept at getting drugs illegally and hence such laws and regulations are of limited effectiveness.
What is also needed is a change in values. As noted above, Americans are generally less willing to tolerate even minor pains and are generally willing to turn towards powerful pain medication. Since this was not always the case, it seems clear that this could be changed via proper training and values. What people need is, as discussed in an earlier essay, training of the will to endure pain that should be endured and resist the easy fix of medication.
In closing, I am obligated to add that there are cases in which the use of pain medication is legitimate. After all, the body and will are not limitless in their capacities and there are times when pain should be killed rather than endured. Obvious cases include severe injuries and illnesses. The challenge, then, is sorting out which pain should be endured and which should not. Since I am a crazy runner, I tend to err on the side of enduring pain—sometimes foolishly so. As such, I would probably not be the best person to address this matter.
When a person does terrible things that seem utterly senseless, like murder children, there is sometimes a division in the assessment of the person. Some people will take the view that the person is mentally ill on the grounds that a normal, sane person would not do something so terrible and senseless. Others take the view that the person is evil on the grounds that a normal, non-evil person would not do something so terrible and senseless. Both of these views express an attempt to explain and understand what occurred. As might be imagined, the distinction between being evil and being mentally ill is a matter of significant concern.
One key point of concern is the matter of responsibility and the correct way to respond to a person who has done something terrible. If a person acts from mental illness rather than evil, then it seems somewhat reasonable to regard them as not being accountable for the action (at least to the degree the person is ill). After all, if something terrible occurs because a person suffers from a physical illness, the person is generally not held accountable (there are, obviously, exceptions). For example, my running friend Jay told me about a situation in which a person driving on his street had an unexpected seizure. Oddly, the person’s foot stomped down on the gas pedal and the car rocketed down the street, smashing into another car and coming to a stop in someone’s back yard. The car could have easily plowed over my friend, injuring or killing him. However, since the person was not physically in control of his actions (and he had no reason to think he would have a seizure), he was not held morally accountable. That is, he did nothing wrong. If a person had intentionally tried to murder my friend with his car, then that would be seen as an evil action. Unless, perhaps, the driver was mentally ill in a way that disabled him as completely as the seizure did. In that case, the driver might be as “innocent” as the seizure victim.
There seem to be at least two ways that a mentally ill person might be absolved of moral responsibility (at least to the degree she is mentally ill).
First, the person might be suffering from what could be classified as perceptual and interpretative disorders. That is, they have mental defects that cause them to perceive and interpret reality incorrectly. For example, a person suffering from extreme paranoia might think that my friend Jay intends to steal his brain, even though Jay has no such intention. In such a case, it seems reasonable to not regard the person as evil if he tries to harm Jay—after all, he is acting in what he thinks is legitimate self-defense rather than from a wicked motivation. In contrast, someone who wanted to kill Jay to rob his house or just for fun would be acting in an evil way. Put in general terms, mental conditions that distort a person’s perception and interpretation of reality might lead him to engage in acts of wrongful violence even though his moral reasoning might remain normal. Following Thomas Aquinas, it seems sensible to consider that such people might be following their conscience as best they can, only they have distorted information to work with in their decision-making process and this distortion results from mental illness.
Second, the person might be suffering from what could be regarded as a disorder of judgment. That is, the person’s ability to engage in reasoning is damaged or defective due to a mental illness. The person might (or might not) have correct information to work with, but the processing is defective in a way that causes a person to make judgments that would be regarded as evil if made by a “normal” person. For example, a person might infer from the fact that someone is wearing a blue hat that the person should be killed.
One obvious point of concern is that “normal” people are generally bad at reasoning and commit fallacies with alarming regularity. As such, there would be a need to sort out the sort of reasoning that is merely bad reasoning from reasoning that would count as being mentally ill. One point worth considering is that bad reasoning could be fixed by education whereas a mental illness would not be fixed by learning, for example, logic.
A second obvious point of concern is discerning between mental illness as a cause of such judgments and evil as a cause of such judgments. After all, evil people can be seen as having a distorted sense of judgment in regards to value. In fact, some philosophers (such as Kant and Socrates) regard evil as a mental defect or a form of irrationality. This has some intuitive appeal—after all, people who do terrible and senseless things would certainly seem to have something wrong with them. Whether this is a moral wrongness or health wrongness is, of course, the big question here.
One of the main reasons to try to sort out the difference is figuring out whether a person should be treated (cured) or punished (which might also cure the person). As noted above, a person who did something terrible because of mental illness would (to a degree) not be accountable for the act and hence should not be punished (or the punishment should be duly tempered). For some it is tempting to claim that the choice of evil is an illusion because there is no actual free choice (that is, we do what we do because of the biochemical and electrical workings of the bodies that are us). As such, people should not be punished, rather they should be repaired. Of course, there is a certain irony in such advice: if we do not have choice, then advising us to not punish makes no sense since we will just do what we do. Of course, the person advising against punishment would presumably have no choice but to give such advice.
While high fructose corn syrup (which is typically a blend of fructose and glucose) is a ubiquitous food ingredient, it now has something of a poor reputation. For example, the front label on the ketchup bottle in my fridge has “no high fructose corn syrup!” in large letters. This is, presumably, considered a positive selling point. Given that many consumers are less than enamored with high fructose corn syrup, it is hardly surprising that the Corn Refiners Association tried to get the United States FDA to accept its proposal to rename the syrup “corn sugar.” While there are health concerns regarding sugar, sugar is generally looked upon more favorably than high fructose corn syrup. The FDA denied this request on the grounds that high fructose corn syrup does not meet the definition of “sugar”; thus, by an argument from definition, high fructose corn syrup is not sugar. According to the FDA, sugar must be “a solid, dried and crystallized food.” High fructose corn syrup, being a syrup, obviously does not meet this definition and thus is not a sugar.
The Corn Refiners Association has also been engaged in a marketing campaign to convince the public that high fructose corn syrup is a form of sugar and is comparable to table sugar. Not surprisingly, the Sugar Association brought a lawsuit against the Corn Refiners Association in response to this campaign. This sort of battle of names in the food industry is nothing new or particularly unusual. For example, there has also been a battle between the producers of milk and the makers of soy milk over whether or not soy milk should be legally allowed to be called “soy milk.”
It might, of course, be wondered why the naming matters. In the case of high fructose corn syrup, the most likely reason was noted above: while high fructose corn syrup has gotten a bad reputation (deserved or not), sugar still has a better reputation (deserved or not). As such, replacing “high fructose corn syrup” with “corn sugar” on ingredient labels would tend to lead uninformed consumers to believe that they were not consuming high fructose corn syrup but something else. Given that the reputation of high fructose corn syrup seems to be suffering, such re-naming would probably result in more sales relative to attempting to sell products with the original name.
Because of government subsidies for corn, high fructose corn syrup is cheaper than table sugar and thus is widely used because it provides more sweetness for the dollar. As such, high fructose corn syrup is a competitor to sugar that enjoys a price advantage. As might be suspected, it seems reasonable that the Sugar Association would not want to allow a major competitor to change the name of their signature product and thus yield an important advantage in regards to reputation.
For those who recall basic chemistry, this dispute might seem a bit odd. After all, fructose is chemically classified as a sugar (as is, obviously, glucose). As such, it is tempting to agree with the Corn Refiners Association: high fructose corn syrup is sugar. However, the FDA does not define “sugar” chemically, but rather in terms of its state (it must be a solid, at least in its “normal” state). As such, syrup is not a sugar, even if it is chemically a sugar (or two sugars mixed together). This might suggest that the dispute is the result of some sort of arcane legal process in which the definition of what seems to be a chemical term is set by bureaucrats and lobbyists rather than by chemists. Given that chemists are presumably the legitimate experts on what counts as a sugar, it would seem more rational to rely on the scientific rather than the seemingly political definition of “sugar.”
One obvious reply is that the FDA might have a legitimate reason for classifying sugar in a way that involves it being a specific sort of solid rather than based on its chemical composition. After all, looking at the matter from the standpoint of food classifications, there does seem to be a reasonable distinction between a syrup and sugar. To use an appeal to intuition, imagine that you ask for some sugar for your coffee and you are handed a bottle of syrup. If the person said “this is fructose syrup, which is a sugar”, then you would probably reply that you meant the white crystal stuff. As such, looked at from the standpoint of how consumers understand “sugar” and “syrup”, high fructose corn syrup would be syrup and not corn sugar. This leads to the second point.
A second obvious reply is that renaming high fructose corn syrup would seem to primarily serve to mislead consumers. As noted above, until consumers sorted out that “corn sugar” actually refers to high fructose corn syrup, they are likely to buy products thinking that they are avoiding an ingredient they do not want. While I will not make any claims about the true intentions of the Corn Refiners Association, misleading the public in this way is morally dubious at best.
My second reply could be countered by arguing that the name change is not intended to mislead consumers but to offset an unfounded bad reputation that high fructose corn syrup does not deserve. After all, consumers often seem to regard high fructose corn syrup as bad, or at least as worse than sugar. If this is not the case, then the syrup is being judged unfairly. If this is the case, then it could be contended that the name change would merely allow the maligned ingredient to shed its unearned bad reputation. This could be seen as a person who has been falsely accused of misdeeds electing to change his name for a fresh start because he has been unable to erase the stain on his original name. This does have a certain moral appeal to it, but I am inclined to think that it is offset by the fact that most consumers would be ignorant of the name change and hence would be misled by such labels. As such, I do agree with the FDA’s decision to not allow the name change.
In terms of the science, the American Medical Association takes the view that there is not adequate evidence to start restricting the use of the syrup. More research, however, is planned. The Center for Science in the Public Interest currently takes the view that the syrup is no worse than table sugar, but that Americans consume too much of both sweeteners.
The United States Supreme Court is considering the constitutionality of the Affordable Care Act and this has created quite a political stir. One of the main points of concern is the individual mandate. The gist of this is that individuals are required to buy health insurance. Those who fail to do so will be fined.
Setting aside the rabid rhetoric, the main philosophical issue seems to be whether or not the state has a legitimate right to impose this mandate. Or, as opponents of the mandate put it, whether or not the state has the right to require people to buy a private product.
On the face of it, I am inclined to agree that the state does not have a general right to compel citizens to buy products even when it would be wise and good to do so. As critics have noted, while broccoli is good for people, the state would seem to have no legitimate right to compel people to buy it. This sort of reasoning is consistent with my own view of liberty, which is roughly based on that of John Stuart Mill. The general idea is that society only has a moral right to compel people when the actions in question can cause unwarranted harm to others. Even if doing something would be good or wise, society has no right to compel an individual into doing (or not doing) something when it is not society’s legitimate concern (that is, when it does not involve harm to others).
Because of my adherence to this view of liberty, I would be against the state compelling people to buy broccoli, to exercise, or to quit smoking. After all, in such matters the individual is sovereign. Since I endeavor to be consistent in my principles, I also oppose laws criminalizing recreational drugs as well as any law that would ban same-sex marriage. After all, if it would violate liberty to force someone to buy broccoli because it is good for them, it would also seem to violate liberty to force someone to forgo marijuana because it is bad for them or to forgo same-sex marriage because some people do not like it. Not surprisingly, some folks are not quite consistent in these matters: they scream for freedom when an individual mandate is on the line but are quite happy to impose on others when the issue turns to same-sex marriage.
Given my view on a broccoli mandate, it might be suspected that I would oppose the individual mandate. However, this is not the case: I actually support it. Naturally, some folks might accuse me of supporting it out of blind liberalism. However, my reasons for supporting it are classically conservative ones. This should not be at all shocking, since the individual mandate actually has a fine conservative pedigree.
Given its origin, it might be tempting to argue that the conservative assault on the mandate is misguided. However, to claim that something is good (or bad) based on its origin would be an error (specifically, the genetic fallacy). It might also be tempting to argue that conservatives are being inconsistent in attacking a mandate they supported in the past, but this would be a mere ad hominem tu quoque. That said, it is certainly interesting to note that the conservative opposition to the mandate seems to be driven by opposition to Obama rather than by a reasoned repudiation of the conservative arguments in favor of the mandate. As such, one might suspect that the rejection of the mandate is motivated in part by an ad hominem attack amounting to “Obama and the Democrats are for it, so it must be bad.” However, my goal is not to consider the history and psychology of the matter, but to present conservative arguments for the mandate.
One stock conservative principle is that people should take responsibility for themselves. This principle is often taken to entail more specific principles, such as the one that people should pay for what they receive and the one that the state should always endeavor to avoid providing welfare and its ilk.
These principles seem eminently reasonable. After all, if I fail to take responsibility and because of this I get aid from the state that I have not paid for, it would seem reasonable to regard me as a thief. To use a specific example, if I decide that I am tired of working and quit my job to go on welfare, then I would seem to be stealing from my fellows. After all, I could support myself and merely would have chosen not to do so. To use another example, if my company gets subsidies from the state when it is profitable on its own, I would thus seem to be robbing my fellows. After all, my company can easily support itself without sponging off the taxpayers.
At this point, one might be wondering what these principles have to do with the individual mandate. After all, it has been cast as the state imposing on liberty by forcing people to buy a product. However, this is not the proper way to see the mandate. To see that this is the case, consider the following.
Back in 1986 the United States Congress passed the Emergency Medical Treatment and Labor Act. This act mandates that hospitals cannot turn away or transfer a patient unnecessarily when there is an emergency condition. While hospitals can ask about the patient’s ability to pay, they cannot delay or refuse treatment based on a lack of ability to pay. Hospitals can, of course, refuse to provide treatment or examination in non-emergency situations. Hospitals that violate the law can be fined as can doctors who are complicit in declaring a patient’s condition to be a non-emergency when it actually was.
Since people know that hospitals cannot turn away emergency cases, people who do not have insurance often turn to emergency rooms for medical treatment. In some cases, they do so even for routine care on the assumption that the medical personnel will provide at least some care even in the case of non-emergencies. While there has been some dispute over the exact numbers, this has been a problem in many hospitals for quite some time.
Obviously enough, when a hospital provides “free” medical care to the uninsured, it still must be paid for. After all, medical personnel do not work for free nor do the supplies and equipment needed for treatment come free. While hospitals do try to collect from the uninsured patients, this often does not cover the bill. After all, most people who are uninsured are without insurance because they cannot afford it rather than as a matter of choosing to forgo it. As such, the costs must be passed on to those who have insurance as well as on to the state. It is estimated that covering the bills of the uninsured adds $1500 to a family’s insurance premiums and about $500 to that of an individual.
As such, under the current system hospitals are required to provide services to those who cannot pay, and the insured and the taxpayers are compelled to pay the bill. Thus, some people are not taking responsibility by paying for what they receive and others are left to pick up the tab, including the state. This is exactly the sort of situation that one would expect a conservative to rail against. After all, it involves people getting something for nothing as well as other people being compelled to pay more. And, of course, it also involves the state in providing “handouts.”
In this situation, there seem to be two main legitimate conservative options. The first is to ensure an end to the free ride and the government handouts by compelling people to get insurance. This way they would be paying for what they received and not being free riders. This, coupled with the Affordable Care Act, would also have the benefit of allowing people affordable access to non-emergency preventative care which would be better for their health and also reduce the strain on emergency rooms. There is, however, a second option.
A second way to address this problem is to repeal the part of the Emergency Medical Treatment and Labor Act that requires hospitals to provide emergency care to people who cannot pay. If those without insurance or money were not treated, then there would be no extra cost to pass on to the insured or to the state, thus solving the problem at hand.
Obviously, while the second solution would save some people money, it would not come without a price. It would require accepting that people should be left to die if they lack the financial resources to pay for vastly overpriced medical care. I would certainly hope that this is not a value that my fellow Americans would endorse, but perhaps this is not the case. Perhaps we should be free of the burden of caring for others and they should be free to die on the curb of a hospital because the job creators did not create an adequate job for them.
It seems a bit odd arguing about contraception in 2012. After all, the matter seemed to have been largely resolved some time ago. While it is tempting to say that Contraception 2012 is a manufactured conflict, there do seem to be some points worth addressing in this context.
One talking point that has been presented by some folks, such as mainstream American media personality Rush Limbaugh, is that insurance coverage of contraception is the same thing as paying someone to have sex.
In the case of people who are prescribed contraceptives because of medical conditions (such as ovarian cysts), this is obviously not the case. In cases in which the person is simply using the contraception as contraception, she is still not being paid to have sex any more than the coverage of Viagra and comparable medicine for men is paying men to have sex. At most, what is being paid for is the means to have sex (Viagra) and the means to avoid getting pregnant (contraception). True, these are connected to sex, but covering either is not the same thing as paying people to have sex.
Another common talking point is that the plan to cover contraception will be “using people’s money” to pay for something they do not approve of.
One obvious reply to this is that for most folks insurance coverage is either paid for by the individual or provided as part of a benefit package for a job. Either way, the person is earning her coverage. To use an analogy, my insurance covered my quadriceps tendon repair (mostly). This was not using other people’s money, since I pay for my insurance. Likewise, if a woman gets contraception covered by her insurance, she is paying for it (either directly or by earning benefits as part of her compensation).
It might be countered that some women get coverage from the state, so tax dollars could go to pay for birth control. Since some folks are against contraception or do not want to pay for it, this should not be done.
The stock reply to this is that our tax dollars are routinely used to pay for things that we might not want to pay for or that we might even oppose. For example, I’d rather not have my tax dollars pay for subsidies to corporations and I certainly don’t want to be paying for other dudes’ Viagra. This is the way democracy works: provided that the spending is set up through due process, by agreeing to the legitimacy of the state we also give our consent to the spending, even for things we would rather not contribute to.
Naturally, it can be argued that we should not be required to pay for anything we oppose, and this has considerable appeal (see Thoreau’s arguments about civil disobedience for an interesting look at this matter). However, if we adopt this principle for contraception, it must also apply across the board. So, for example, folks who are against war can insist that war should not be paid for using tax dollars, and so on. It seems likely that for every proposed spending there will be a person who opposes it; thus the state should not spend money on anything. While this would solve the deficit problem, it would seem a rather absurd solution.
A third talking point is that contraception should not be covered because it does not treat a condition. This is most often brought up when defending the coverage of Viagra (which restores a natural function).
The easy reply to this is that some forms of contraception are used to treat medical conditions (such as ovarian cysts). As such, this use should be covered. But, of course, this would not warrant the coverage of contraception as contraception.
One reply worth considering is that the framing of the debate begs the question against women. After all, the claim is that anything that is covered must treat or prevent a harmful condition, and this would exclude contraception (except in cases in which a woman would be medically harmed by being pregnant). However, this framing tends to be simply assumed rather than argued for, which is rather unfair to women. After all, the matter of pregnancy seems to be unique (and limited to women), and hence it seems questionable to insist that it must automatically fall under the framing in question. It can, of course, be argued that it does, but an argument is wanted here to show that this is the case.
While some might be tempted to cast pregnancy as the harmful medical condition that is being prevented by contraception, the idea of casting pregnancy as a harmful medical condition has rather limited appeal. After all, while pregnancy puts considerable strain on the woman, it seems rather difficult to cast it as an illness that needs to be prevented or treated as if it were comparable to measles or cancer.
A more fruitful line of approach is to argue that contraception provides medical control over a woman’s quality of life. That is, it enables her to choose whether to be pregnant or not. Doing this clearly falls under the domain of medicine, and women do seem to have a legitimate claim to this right. After all, much of medicine deals with maintaining a desired quality of life, and women would seem to have as much right to that as men.
Naturally, it might be countered that I am treating pregnancy as a disease (which would score some major rhetorical points against me). But this is not the case. All I am claiming is that, given that pregnancy can be rather challenging for a woman and, of course, that a child is a major consumer of resources, a woman has a legitimate right to use medical means to maintain her desired quality of life, just as a man has a legitimate right to use Viagra and its ilk to maintain his. Just as Viagra is covered as a quality-of-life drug, so should contraception be.
A fourth, somewhat uncommon, talking point is that contraception is on par with abortion, so covering contraception is covering abortion.
One stock reply is the obvious fact that contraception lowers the number of unwanted pregnancies and this lowers the number of abortions. As such, folks who are worried about abortion would seem to have a good reason to favor covering contraception.
Of course, some folks contend that contraception is like abortion in that it prevents a possible person from becoming an actual person. While this does have some philosophical interest, it would seem to entail that every moment I am not out and about impregnating women, I am engaged in acts comparable to abortion. After all, by not impregnating as many women as possible, I am preventing some possible people from becoming actual people. Put a bit less absurdly, if I am practicing abstinence, then I am effectively engaged in abortion since all those possible people will never become actualized.
It could be countered that this only applies to cases in which I am actually having sex (and presumably that I should only be having sex with a woman I am married to). That is, every time I have sex, there should be a roll of the dice to see whether or not the woman gets pregnant. Presumably if either of us chooses to use any method that lowers the probability of pregnancy, then this would be on par with attempting an abortion. As such, the only acceptable family planning would be to decide to have sex only when one plans on a pregnancy, since intentionally preventing it would be unacceptable. I would be interested in seeing some arguments for this that do not involve an appeal to theology.
When the Catholic Church and conservatives decided to make an issue of the coverage of contraceptives in health care plans, it appeared that the Democrats were going to take a beating. After all, the narrative had been presented as one of religious freedom: the tyrannical hand of government had reached out to force Catholic institutions to violate their moral stance on contraception. This fired up the conservative base and even gave a few religious liberals pause. Despite the resurgent economy, it appeared that God had smiled down upon the Republicans and granted them a stick with which to beat Obama.
And then Rush Limbaugh called Sandra Fluke a slut and a prostitute (and, creepily, requested that she post sex videos on YouTube) because she defended the coverage of contraception by health insurance plans, which shifted the narrative. Instead of a morality play in which the cruel liberal state was imposing on the faithful, the morality play became one in which a young woman was being branded a slut and a prostitute for speaking out for the rights of women. This, as might be imagined, shifted the narrative in favor of the Democrats and Obama.
Not surprisingly, some folks decided to “play politics” with this and also attempted to use the situation to raise funds for Obama and the Democrats. This was met with righteous indignation from the right, who were no doubt angry that they had seemingly lost their political and fundraising advantage by this narrative shift. Of course, both parties are right: they each happily play politics and exploit events for fundraising. In this regard, they both seem to be in the wrong.
While I am usually branded a liberal (but never a slut), I do agree that there is a legitimate moral issue regarding the state requiring employers with a religious affiliation to provide health care that conflicts with the professed morality of said institutions. After all, the liberty of conscience is a basic liberty (as per Mill’s arguments), and alleged impositions on this liberty should be taken seriously. However, I do believe that the Church’s officials are in error regarding birth control and have argued for this elsewhere. As such, I believe that their appeal to conscience is unjustified and that they do not have adequate moral grounds to deny their employees such coverage. I do, however, respect the fact that they are taking a moral stand and that the Church does provide arguments in support of the official line. Of course, this is rather a moot point now: the insurance companies will pick up the tab so the Catholic Church’s money can remain untainted by sin (well, aside from the money they pay their employees, who might use it to buy birth control).
As will shock no one, I believe that Rush acted wrongly (both in terms of ethics and in terms of reasoning) in accusing Sandra Fluke of being a slut and a prostitute. As Rush saw it, Fluke wanted to be paid to have sex. However, Fluke never made that claim. Rather, she contended that insurance should cover the cost of contraception. This is no more paying women to have sex than the coverage of Viagra is paying men to have sex. Rather, medicine is being covered by health insurance-which is, as far as I know, what it is supposed to do. As such, even if the state is paying for contraception (or Viagra) it is not paying people to have sex. Thus, Rush’s reasoning is (shockingly enough) flawed.
In terms of the moral aspect of the matter, accusing a woman of being a slut and a prostitute involves two rather serious and insulting accusations. As such, to make such accusations without warrant is certainly unethical. There is also the fact that such accusations are usually used to dismiss or attack women who dare to stand up for themselves and speak out for their rights. In the case of Fluke, this seems to be exactly what occurred. This bashing of women in an attempt to silence or dismiss them is clearly unacceptable in a democracy. There is also the matter of liberty of conscience and expression: just as the Catholic Church has the right to present its moral view without being attacked with hateful slurs and unwarranted accusations, so does Sandra Fluke. Liberty is supposed to apply to all of us, not just men.
While I do expect such behavior from Rush, I did expect more from the Republican candidates. The gist of their replies seemed to be that their disagreement was with Rush’s choice of words. That is, they disagreed with his semantic choices. Given that these candidates speak relentlessly about moral values, their replies are tepid at best. I do understand why they are failing to show moral backbone: while many of Rush’s advertisers are dropping him, he is still a force to be reckoned with in regards to the conservative base (and the base conservatives). There is also the possibility that the candidates actually accept the misogyny behind Rush’s savage attack. Santorum, for example, has said some rather questionable things about women.
While the Republicans are no doubt trying to appeal to a certain part of the base, they are playing a rather risky game. While there are many conservative women, most American women hold to what can be seen as classically liberal views on many issues that are regarded as women’s issues (such as access to contraception, having equal opportunity, having equal rights, not being sexually harassed at work, and so on). As such, the Republicans should rethink what seems to be a strategy aimed at rolling back the rights of American women. While that might play well in some quarters, it will most assuredly not play well in the general election.
In contrast to the Republican candidates, Obama took a proper moral stance in condemning these remarks. While it is easy to dismiss this as mere political game playing, this action was certainly consistent with both Obama’s professed values and the fact that he is the father of two girls. In short, he did the right thing. I would like to see the Republican candidates do this as well, if only to show that they have the political sense to realize that they are not scoring points with most women voters.