On the face of it, the idea seems reasonable enough: if a person has health insurance, then she is less likely to use the emergency room. To expand on this a bit, what seems sensible is that a person with health insurance will be more likely to use primary care and thus less likely to need to use the emergency room. It also seems to make sense that a person with insurance would get more preventative care and thus be less likely to need a trip to the emergency room.
Intuitively, reducing emergency room visits would be a good thing. One reason is that emergency room care is rather expensive and reducing it would save money—which is good for patients and also good for those who have to pay the bills for the uninsured. Another reason is that the emergency room should be for emergencies—reducing the number of visits can help free up resources and lower waiting times.
As such, extending insurance coverage to everyone should be a good thing: it would reduce emergency room visits and this is good. However, it turns out that extending insurance might actually increase emergency room visits: what seems to be an excellent study found that insurance coverage actually results in more visits.
One obvious explanation is that people who are insured would be more likely to use medical services for the same reason that insured motorists are likely to use the service of mechanics: they are more likely to be able to pay the bills for repairs.
On the face of it, this would not be so bad. After all, if people can afford to go to the emergency room and be treated because they have insurance, that is certainly better than having people suffer simply because they lack insurance or the money to pay for care. However, what is most interesting about the study is that the expansion of Medicaid coverage resulted in an increase in emergency room visits for treatments that would have been more suitable in a primary care environment. That is, people decided to go to the emergency room for non-emergencies. The increase in emergency room use was substantial, about 40%, and the study was large enough for this result to be statistically significant.
Given that Obamacare aims to both expand Medicaid and ensure that everyone is insured, it is certainly worth being concerned about the impact of these changes on the emergency room situation, especially since one key claim has been that these changes would reduce costs by reducing emergency room visits.
One possibility is that the results from the Medicaid study will hold true across the country and will also apply to the insurance expansion. If so, there would be a significant increase in emergency room visits and this would certainly not result in a reduction of health care costs—especially if people go to the expensive emergency room rather than the less costly primary care options. Given the size and nature of the study, this concern is certainly legitimate in regards to the Medicaid expansion.
The general insurance expansion might not result in significantly more non-necessary emergency room visits. The reason is that private insurance companies often try to deter emergency room visits by imposing higher payments on patients. In contrast, Medicaid does not impose this higher cost. Thus, those with private insurance will tend to have a financial incentive to avoid the emergency room while those on Medicaid will not. While it would be wrong to impose a draconian penalty for going to the emergency room, one obvious solution is a modest financial disincentive for emergency room visits—preferably tied to using the emergency room for services that can be provided by primary care facilities. This can be quite reasonable, given that emergency room treatment is more expensive than comparable primary care treatment. In my own case, I know that the emergency room costs me more than visiting my primary care doctor—which gives me yet another good reason to avoid the emergency room.
There is also some reason to think that people use emergency rooms rather than primary care because they do not know their options. That is, if more people were better educated about their medical options, they would choose primary care over the emergency room when they did not need emergency room services. Given that going to the emergency room is generally stressful and typically involves a long wait (especially for non-emergencies), people are likely to elect for primary care when they know they have that option. This is not to say education will be a cure-all, but it is likely to help reduce unnecessary emergency room visits, which is certainly a worthwhile objective.
On October 7, 2013 Health and Human Services Secretary Kathleen Sebelius was the guest on the Daily Show. Given that Jon Stewart is often regarded as a liberal mouthpiece, most folks probably expected that this would be a mutual admiration sort of interview. However, things certainly turned out rather differently as Stewart did what “real” journalists rarely do: he raised an important concern and refused to allow the person to shift the issue.
The question raised was one that certainly should be answered, namely the question of why large businesses were granted a delay in their implementation of Obamacare while individuals did not receive the same delay. While there should certainly be a fair and rational answer to this question, Sebelius went into verbal acrobatics to avoid answering it. This tactic is known as the smokescreen/red herring in philosophy:
A Red Herring is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue. The basic idea is to “win” an argument by leading attention away from the argument and to another topic. A common variation on this is the smokescreen: it functions like a red herring, but the attempt at diversion involves piling on complexities on the original issue until it is lost in the verbal smoke. This sort of “reasoning” has the following form:
1. Topic A is under discussion.
2. Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).
3. Topic A is abandoned.
This sort of “reasoning” is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim.
In the case of Sebelius, her attempts to switch to other issues and to pile on other matters did not answer Stewart’s reasonable question. In general, people use this tactic in response to a question when they either 1) have no answer to the question or 2) have a problematic or bad answer to the question. In the case of Sebelius, I would suspect that the second option holds: she almost certainly has an answer, but it almost certainly is not a good one.
Stewart seems to have drawn this sort of conclusion regarding Sebelius’ maneuvering:
“I still don’t understand why individuals have to sign up and businesses don’t, because if the businesses — if she’s saying, ‘well, they get a delay because that doesn’t matter anyway because they already give health care,’ then you think to yourself, ‘fuck it, then why do they have to sign up at all. And then I think to myself, ‘well, maybe she’s just lying to me.’”
In terms of why this question matters, one obvious point of concern is the matter of fairness. If large businesses get a year delay, then fairness would seem to require that the same courtesy be extended to individuals.
It might be countered that there is a relevant difference between large businesses and individuals that warrants the difference in treatment. If so, Sebelius should have simply presented this difference or differences and that would have quickly settled the matter. For example, it might be the case that a large business would need more time to implement the change on such a large scale, while an individual just has to implement it for herself. But, Sebelius provided no such relevant difference and spent her time trying to throw out red herrings and blow smoke. This suggests that she was either ignorant of a relevant difference or was aware that the difference did not actually justify the difference in treatment. That is, there is no legitimate relevant difference. Given her position, the explanation based on her ignorance seems unlikely, so the reasonable conclusion is that she knew the answer, but believed that giving it would make things look worse than engaging in evasion. Of course, it is also possible that such evasions are just a matter of how politicians operate—like the famous scorpion being carried across the river, they cannot deviate from their nature. In any case, Sebelius’ behavior creates the impression that something is wrong here and creating this impression is, I am sure, not what she intended in her interview.
Interestingly, while this difference between businesses and individuals is a legitimate point of criticism, the Republicans seem to have little interest in engaging Obamacare in depth on points where it actually generates legitimate concerns. While the Republicans have noted that they want to defund or delay Obamacare, they seem to be unable to avoid hyperbole and other excesses of defective rhetoric. I suspect that this occurs for a variety of reasons. One possibility is that they are also like that scorpion: they simply cannot bring themselves to engage the matter of Obamacare in a rational way—instead, they have to sting away with crazy rhetoric and a government shutdown. Another possibility is that they believe that engaging in the actual issues will be bad for them in some manner. A third possibility, which is more specific than the second, is that they believe that their target audience is best played to by such rhetoric and behavior and that they would be ill served politically by engaging on actual issues in a rational manner. As a final possibility, they might not actually care about Obamacare as such—rather, they are simply out to oppose Obama and Obamacare happens to be the point of contention. To use an analogy, they are like Meletus in the Apology—they are not concerned with what they claim to be concerned about, they are just out to get their man.
Once again, the United States government has been shut down. As is to be expected, the politicians and pundits are engaging in the blame game. A key Republican talking point is that Obama and the Democrats are to blame because they would not compromise on the matter of Obamacare. If, say the Tea Party Republicans, Obama had been willing to defund or delay Obamacare, then they would not have been forced to do what they did.
The obvious counter to this is that Obamacare became a law via the proper constitutional process and hence this is no longer a compromise situation. It should also be noted that the proposed compromise is a rather odd one. It is as if the Republicans in question are saying: “here is our compromise: we get our way on Obamacare and, in return, we will not shut down the government.” That hardly seems like a reasonable compromise. To use an analogy, it would be like being in a bus heading to an event that was voted on by the people on the bus. Then some folks say that they do not like where the bus is going and one of them grabs the wheel. He then says “here is my compromise: we go where I want to go, or I’ll drive us into a tree.” That is hardly a compromise. Or even sane.
It could be argued that Obama and the Democrats should have done a better job in the past in terms of getting Republican buy-in on Obamacare. Or that the fact that the Republicans are a majority in the House shows that Americans want to be rid of Obamacare. These are not unreasonable points. However, they do not justify shutting down the government.
While I believe that Obamacare is chock full of problems and will have a variety of unpleasant consequences, I also believe in the importance of following the constitution. That is, I believe in the process of law. Obamacare went through that process and properly became a law. As such, there do not seem to be any grounds for claiming that it should be stopped because it is somehow an improperly passed law.
There have been claims that Obamacare is unconstitutional. There are some merits to these claims, but the matter was properly settled by the Supreme Court. Presumably the matter could be reconsidered at a later date, but the constitutional process has been properly followed. As such, the rhetorical points that Obamacare is unconstitutional lack merit. However, even if there were a new and most excellent legal argument for this claim, this would not warrant shutting down the government to block the law. It would warrant having the Supreme Court consider the argument. That is proper procedure—that is how a system of government should operate. Using the threat of a shutdown against a law is certainly not how things should be done. That is essentially attempting to “govern” by threats, coercion and blackmail.
To use an analogy, imagine a night baseball game in which one side is losing. That side has argued every call repeatedly and used all the rules of the game to try to not lose. But it is still losing. So the coach of the losing team says that his team will turn out the lights, take all the balls, rip up the bases, and throw away the bats unless the other team “compromises” and gives them all the points they want. That would obviously be absurd. Likewise for the Tea Party Republican shut down.
A possible approach to warranting the shutdown is based on the idea of popular democracy. Some have argued that Obamacare is unpopular with most Americans. While this seems true, it also is true that most Americans do not seem to have enough of an understanding of Obamacare to have a rational opinion and much of the alleged dislike seems to stem from how the questions are asked. Interestingly, many people seem to really like things like the fact that people cannot be denied coverage because of pre-existing conditions and that children can stay on their parents’ insurance until they are 26.
Since this is supposed to be something of a democracy, considering the will of the people (however confused and ill-informed the people might be) seems reasonable. However, this would need to be a consistent principle. That is, if the Tea Party Republicans say that they are warranted in shutting down the government because a majority of Americans are opposed to Obamacare, then they would need to accept that the same principle applies in the case of other laws as well. So, if most Americans believe that X should be a law or that X should not be a law, then that is what must be done—and if it is not done, the government must be shut down. Given the overwhelming support for certain gun control laws that Congress refused to pass, if this principle is accepted then these laws must pass—or the government must be shut down.
However, the Tea Party Republicans are clearly not operating on a principle here, unless it is the principle of “we’ll shut down the government if we don’t get what we want”—but that is hardly a reasonable or democratic principle.
Another plausible approach to countering this is to argue that a shutdown can be justified on the grounds that a legitimately passed, Supreme Court tested law is so bad that action must be taken. While this could not be warranted on constitutional grounds, it could be justified on moral grounds, most likely utilitarian grounds. The idea would be that the consequences of allowing the law to go into effect would be so dire that the consequences of shutting down the government are offset by the achievement of a greater good. Or, rather, the prevention of a greater bad.
Interestingly, this could be seen as a variation on civil disobedience. But, rather than have citizens breaking an unjust law to get arrested, there are lawmakers breaking the government—or at least the parts that don’t pay their salary.
Since I find Thoreau’s arguments in favor of such civil disobedience appealing, I have considerable sympathy for lawmakers deciding to serve the state with their consciences. However, what needs to be shown is that the law is so unjust that it warrants such a serious act of civil disobedience.
Ted Cruz and other Tea Party Republicans have made various dire claims about Obamacare—it will result in people being fired, it will cause employers to cut hours so that workers become part-time workers, and so on. Cruz even brought out a comparison to the Nazis, which did not go over well with the Republican senator John McCain. Interestingly, Cruz and others have attributed backwards causation powers to Obamacare: the stock talking points well before Obamacare went into effect included claims that Americans were already suffering under Obamacare—despite the fact that it was not in effect.
When pressed on the damage that Obamacare will do, the Tea Party Republicans tend to be rather vague—they throw out claims about how it will come between a patient and her doctor and so on. However, they never got around to presenting an objective, coherent, supported case regarding the likely harms of Obamacare. This is hardly surprising. As a general rule, if someone busts out a Nazi analogy, then this is a fairly reliable sign that they have nothing substantial to say. This is, I think, unfortunate and unnecessary: Obamacare no doubt has plenty of problems and if it is as bad as the Tea Party Republicans claim, they should have been able to present a clear list without having to resort to rhetoric, scare tactics, hyperbole and Nazi analogies. So, I ask for such a clear case for the harms of Obamacare.
As a final point, Obama has made the reasonable point that he has been asking the Republicans for their input and their alternative plan for health care for quite some time. Some Republicans have advocated the emergency room, which I wrote about earlier, but their main offering seems to be purely negative: get rid of Obamacare. In terms of a positive alternative, they seem to have nothing. But, I am a fair person and merely ask for at least an outline of their alternative plan.
Ted Cruz undertook a near-marathon talking session against Obamacare. Not surprisingly, he does not have any need of Obamacare. As a senator, he already has access to government funded healthcare. However, he also does not need this coverage as, apparently, he falls under his wife’s Goldman Sachs coverage. Interestingly, while one of the anti-Obamacare talking points is that the cost of providing insurance will destroy business, the top executives at Goldman Sachs have their $40,500+ family premiums paid for by the company. As a point of comparison, the median household income in the United States is $50,000.
Naturally, to attack Cruz’s claims by pointing out his health care situation would be a mere ad hominem. However, his situation does serve to illustrate the incredible health care gap between the wealthy pundits and politicians attacking Obamacare and average Americans. It is certainly a thing of beauty to see a man with incredible coverage provided for by his wife’s employer rail against a law that would require many employers to provide lesser coverage to their employees.
It also illustrates an interesting inconsistency, namely that he seems to hold to the position that his wife should receive health care benefits from her employer but that the same is not true for other Americans. Of course, it is consistent with the view that the wealthy should be treated differently from everyone else.
It might, however, be objected that Cruz is right. After all, Goldman Sachs is incredibly profitable and can easily afford such premiums as part of the very generous (some might say excessive) compensation packages they offer to their “top talent.” Lesser businesses, those run by and employing the little people, cannot afford to provide even the minimum health care benefits required by Obamacare and, apparently, the employees do not deserve such coverage. As such, health care benefits from employers are for the wealthy but not for the little people.
While this approach has some merit when it comes to small businesses, the obvious counter is that the smaller businesses are exempt from this requirement. However, the potential economic impact of Obamacare is worth considering. As is the potential economic damage of the threatened government shut down.
It has been claimed that the cost of implementing Obamacare will cause businesses to fire people and to cut employee hours so that they are not full time employees. Presumably this will not impact the wealthy—Cruz did not seem worried that Goldman Sachs would fire his wife or cut her hours so they would not need to provide healthcare benefits.
While cost is a point of concern, there is the obvious question of whether businesses would actually need to fire people and reduce hours as a rational response to Obamacare. That is, would the increased cost be so onerous that the firing and cutting would be a matter of survival? Or would it merely be a matter of slightly lower profits? After all, some businesses obviously believe they can afford to provide extremely generous health care benefits to some people, so perhaps those affected can afford to provide lesser benefits to their workers.
This does, of course, raise some interesting questions about what benefits employees should receive and what constitutes economic necessity. However, these matters go beyond the scope of this essay. That said, I will note that I agree that health care should not be linked to employment and that it should not be the responsibility of businesses to provide health care coverage. Unfortunately, the structure of health care benefits in the United States is such that having businesses as the provider is the main viable option. The other is, of course, having it provided by the state. Unless, of course, health care could be reformed to the point where average individuals could afford quality health care on their average incomes.
Oddly enough, Cruz and others have spoken of all the terrible damage that Obamacare has done and is doing. While this might be merely a slip of tenses, Obamacare cannot be doing any damage yet—it has not gone into effect. As such, it is an error to speak of the damage it has done—at least until it starts doing damage.
Cruz also made use of hyperbole and a rhetorical analogy by trotting out the absurd comparison of Obamacare to the Nazis. In the past, I have advocated a bi-partisan ban on this (Democrats use it, too) and I still support this proposal. As a general rule, only things that are comparable in badness to the Nazis should be compared to the Nazis. Even if Obamacare does all the awful things that certain Republicans claim it will do, it will obviously fall far short of starting a world war and engaging in genocide. Making the Nazi comparison seems to show that a person has nothing substantial to say or that he has an impaired grasp of reality.
While Obamacare will certainly have problems, Cruz and his fellows have not offered any alternative plan of any substance. For the most part they make vague claims about market reforms and some even advance the absurd idea that people can just rely on the emergency room. While it is fair to be critical of a law even when one does not have an alternative, the Republicans need to offer something other than threats to shut down the government. This makes these Republicans seem rather crazy.
As Obamacare marches onward, its opponents are still endeavoring to stop its advance and send it packing. Of course, the opponents need to provide an alternative system. Interestingly, certain Republicans such as Rick Perry and Jim DeMint have claimed that uninsured Americans are better off relying on the emergency room for treatment. While the battle over Obamacare is largely ideological, the viability of using the emergency room would seem to be an objective matter.
On the positive side, anyone can go to the emergency room and hospitals cannot refuse to treat people with legitimate medical needs—even people who lack insurance or cannot pay.
However, there are numerous problems with the uninsured (or even the insured) relying on the emergency room. The first is the matter of cost. The emergency room is generally more expensive than the non-emergency options. It is certainly more expensive than routine preventative care that can keep a person out of the emergency room. The high costs are problematic because of the burden on the uninsured (medical bills are a leading cause of bankruptcy in America) and also because when the uninsured cannot pay, the cost is passed on to the rest of us (most often in the form of higher health insurance premiums). Thus, relying on the emergency room to treat the uninsured places a heavy burden on everyone and is actually a form of highly inefficient socialism in which those with insurance pay for needlessly expensive treatment for the uninsured. From a purely economic standpoint, if we are going to have medical socialism, we should at least go with the more economically efficient version.
The second is the matter of preventative medicine and ongoing treatments, such as routine checkups and dialysis. The emergency room hardly seems to be set up for these medical matters, although people who cannot avail themselves of such care stand a significant chance of ending up in the emergency room, thus taking us back to the first problem. As such, the emergency room option does not seem to be a viable alternative to Obamacare. This is not to say that Obamacare is the only option or even a good option—just that it is better than the emergency-room-for-the-uninsured option.
The third is the matter of compassion. While hospitals cannot deny people necessary medical care, such care is certainly not charity: either the patient must pay or the cost is passed on to the rest of us. As such, relying on the emergency room as a matter of social policy is essentially saying to people that they can get treatment, provided that it is an emergency and that either the patient can pay or the cost can be passed on to everyone else. It is generally agreed that we should collectively protect each other from terrorism, foreign enemies, and our own criminals. This same concern should also extend to protecting each other from disease and injury. After all, whether Sally is dead because of cancer, a criminal’s bullet or a terrorist’s bomb, she is still dead. So, if we can have a huge collective defense against these other threats, we surely can have a developed collective defense against medical threats—one that is better than the emergency room.
While there is an abundance of violence in the real world, there is also considerable focus on the virtual violence of video games. Interestingly, some people (such as the head of the NRA) blame real violence on the virtual violence of video games. The idea that art can corrupt people is nothing new and dates back at least to Plato’s discussion of the corrupting influence of art. While he was mainly worried about the corrupting influence of tragedy and comedy, he also raised concerns about violence and sex. These days we generally do not worry about the nefarious influence of tragedy and comedy, but there is considerable concern about violence.
While I am a gamer, I do have concerns about the possible influence of video games on actual behavior. For example, one of my published essays is on the distinction between virtual vice and virtual virtue and in this essay I raise concerns about the potential dangers of video games that are focused on vice. While I do have concerns about the impact of video games, there has been little in the way of significant evidence supporting the claim that video games have a meaningful role in causing real-world violence. However, such studies are fairly popular and generally get attention from the media.
The most recent study purports to show that teenage boys might become desensitized to violence because of extensive playing of video games. While some folks will take this study as showing a connection between video games and violence, it is well worth considering the details of the study in the context of causal reasoning involving populations.
When conducting a cause to effect experiment, one rather important factor is the size of the experimental group (those exposed to the cause) and the control group (those not exposed to the cause). The smaller the number of subjects, the more likely it is that the difference between the groups is due to factors other than the (alleged) causal factor. There is also the concern with generalizing the results from the experiment to the whole population.
The experiment in question consisted of 30 boys (ages 13-15) in total. As a sample for determining a causal connection, this is too small for real confidence to be placed in the results. The sample is also far too small to support a generalization from the 30 boys to the general population of teenage boys. In fact, the experiment hardly seems worth conducting with such a small sample and is certainly not worth reporting on, except as an illustration of how research should not be conducted.
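The small-sample worry can be made concrete with a quick simulation (a hypothetical sketch, not anything drawn from the study itself): if two groups of 15 are sampled from the very same population, chance alone will typically produce a noticeable gap between their averages, and that chance gap shrinks as the groups grow.

```python
import random
import statistics

# Hypothetical illustration (not from the study): both "groups" are drawn
# from the SAME population, so any difference between their means is due
# purely to chance. With small groups, that chance gap can be sizable.

random.seed(42)

def mean_gap(n, trials=2000):
    """Average absolute difference between the means of two groups of
    size n, each sampled from a normal population (mean 0, sd 1)."""
    total = 0.0
    for _ in range(trials):
        group_a = [random.gauss(0, 1) for _ in range(n)]
        group_b = [random.gauss(0, 1) for _ in range(n)]
        total += abs(statistics.mean(group_a) - statistics.mean(group_b))
    return total / trials

for n in (15, 150, 1500):
    print(f"group size {n:>4}: typical chance gap = {mean_gap(n):.3f} sd")
```

With groups of 15, a purely chance gap of roughly a quarter to a third of a standard deviation is typical, which is part of why differences observed between two groups of 15 boys warrant so little confidence.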
The researchers had the boys play a violent video game and a non-violent video game in the evening and compared the results. According to the researchers, those who played the violent video game had faster heart rates and lower sleep quality. They also reported “increased feelings of sadness.” After playing the violent game, the boys had greater stress and anxiety.
According to one researcher, “The violent game seems to have elicited more stress at bedtime in both groups, and it also seems as if the violent game in general caused some kind of exhaustion. However, the exhaustion didn’t seem to be of the kind that normally promotes good sleep, but rather as a stressful factor that can impair sleep quality.”
As a veteran of violent video games, I find these results consistent with my own experiences. I have found that if I play a combat game, be it a first person shooter, an MMO or a real time strategy game, too close to bedtime, I have trouble sleeping. Crudely put, I find that I am “keyed” up and if I am unable to “calm down” before trying to sleep, my sleep is generally not very restful. I really noticed this when I was raiding in WOW. A raid is a high stress situation (game stress, anyway) that requires hyper-vigilance and it takes time to “come down” from that. I have experienced the same thing with actual fighting (martial arts training, not random violence). I’ve even experienced something comparable when I’ve been awoken by a big spider crawling on my face; I did not sleep quite so well after that. Graduate school, as might be imagined, put me into this state of poor sleep for about five years.
In general, then, it makes sense that violent video games would have this effect, which is why it is not a good idea to game right up until bedtime if you want to get a good night’s sleep. Of course, it is generally a good idea to relax about an hour before bedtime: don’t check email, don’t get on Facebook, don’t do work and so on.
While not playing games before bedtime is a good idea, the question remains as to how these findings connect to violence and video games. According to the researchers, the differences between the two groups “suggest that frequent exposure to violent video games may have a desensitizing effect.”
Laying aside the problem that the sample is far too small to provide significant results that can be reliably extended to the general population of teenage boys, there is also the problem that there seems to be a rather large chasm between the observed behavior (anxiety and lower sleep quality) and being desensitized to violence. The researchers do note that the cause and effect relationship was not established and they did consider the possibility of reversed causation (that the video games are not causing these traits, but that boys with those traits are drawn to violent video games). As such, the main impact of the study seems to be that it got media attention for the researchers. This would suggest another avenue of research: the corrupting influence of media attention on researching video games and violence.
While it sounds a bit like science fiction, the issue of whether or not human genes can be owned has become a matter of concern. While the legal issue is interesting, my focus will be on the philosophical aspects of the matter. After all, it was once perfectly legal to own human beings—so what is legal is rather different from what is right.
Perhaps the most compelling argument for the ownership of genes is a stock consequentialist argument. If corporations cannot patent and thus profit from genes, then they will have no incentive to engage in expensive genetic research (such as developing tests for specific genes that are linked to cancer). The lack of such research will mean that numerous benefits to individuals and society will not be acquired (such as treatments for specific genetic conditions). As such, not allowing patents on human genes would be wrong.
While this argument does have considerable appeal, it can be countered by another consequentialist argument. If human genes can be patented, then this will allow corporations to take exclusive ownership of these genes, thus allowing them a monopoly. Such patents will allow them to control what research is conducted, even at non-profit institutions such as universities (which sometimes do research for the sake of research), thus restricting the expansion of knowledge and potentially slowing down the development of treatments. This monopoly would also allow the corporation to set the pricing for relevant products or services without any competition. This is likely to result in artificially high prices which could very well deny people needed medical services or products simply because they cannot meet the artificially high prices arising from the lack of competition. As such, allowing patents on human genes would be wrong.
Naturally, this counter argument can be countered. However, the harms of allowing the ownership of human genes would seem to outweigh the benefits—at least when the general good is considered. Obviously, such ownership would be very good for the corporation that owns the patent.
In addition to the moral concerns regarding the consequences, there is also the general matter of whether it is reasonable to regard a gene as something that can be owned. Addressing this properly requires some consideration of the basis of property.
John Locke presents a fairly plausible account of property: a person owns her body and thus her labor. While everything is initially common property, a person makes something her own property by mixing her labor with it. To use a simple example, if Bill and Sally are shipwrecked on an ownerless island and Sally gathers coconuts from the trees and builds a hut for herself, then the coconuts and hut are her property. If Bill wants coconuts or a hut, he’ll have to either do work or ask Sally for access to her property.
On Locke’s account, perhaps researchers could mix their labor with the gene and make it their own. Or perhaps not—I do not, for example, gain ownership of the word “word” in general because I mixed my labor with it by typing it out. I just own the work I have created in particular. That is, I own this essay, not the words making it up.
Sticking with Locke’s account, he also claims that we are owned by God because He created us. Interestingly, for folks who believe that God created the world, it would seem to follow that a corporation cannot own a human gene. After all, God is the creator of the genes and they are thus His property. As such, any attempt to patent a human gene would be an infringement on God’s property rights.
It could be countered that although God created everything, since He allows us to own the stuff He created (like land, gold, and apples), then He would be fine with people owning human genes. However, the basis for owning a gene would still seem problematic—it would be a case of someone trying to patent an invention that was invented by another person—after all, if God exists then He invented our genes, so a corporation cannot claim to have invented them. If the corporation claims to have a right to ownership because it worked hard and spent a lot of money, the obvious reply is that working hard and spending a lot of money to discover what is already owned by another would not transfer ownership. To use an analogy, if a company worked hard and spent a lot to figure out the secret formula to Coke, it would not thus be entitled to own Coca-Cola’s formula.
Naturally, if there is no God, then the matter changes (unless we were created by something else, of course). In this case, the gene is not the property of a creator, but something that arose naturally. In this case, while someone can rightfully claim to be the first to discover a gene, no one could claim to be the inventor of a naturally occurring gene. As such, the idea that ownership would be conferred by mere discovery would seem to be a rather odd one, at least in the case of a gene.
The obvious counter is that people claim ownership of land, oil, gold and other resources by discovering them. One could thus argue that genes are analogous to gold or oil: discovering them turns them into property of the discoverer. There are, of course, those who claim that the ownership of land and such is unjustified, but this concern will be set aside for the sake of the argument (but not ignored—if discovery does not confer ownership, then gene ownership would be right out in regards to natural genes).
While the analogy is appealing, the obvious reply is that when someone discovers a natural resource, she gains ownership of that specific find and not all instances of what she found. For example, when someone discovers gold, they own that gold but not gold itself. As another example, if I am the first human to stumble across naturally occurring Unobtanium on an ownerless alien world, I thus do not gain ownership of all instances of Unobtanium even if it cost me a lot of money and work to find it. However, if I artificially create it in my philosophy lab, then it would seem to be rightfully mine. As such, the researchers who found the gene could claim ownership of that particular genetic object, but not the gene in general, on the grounds that they merely found it rather than created it. Also, if they had created a new artificial gene that occurs nowhere in nature, then they would have grounds for a claim of ownership—at least to the degree they created the gene.
There are many ways to die, but the public concern tends to focus on whatever is illuminated in the media spotlight. 2012 saw considerable focus on guns and some modest attention on a somewhat unexpected and perhaps ironic killer, namely pain medication. In the United States, about 20,000 people die each year (about one every 19 minutes) due to pain medication. This typically occurs from what is called “stacking”: a person will take multiple pain medications and sometimes add alcohol to the mix resulting in death. While some people might elect to use this as a method of suicide, most of the deaths appear to be accidental—that is, the person had no intention of ending his life.
The number of deaths is so high in part because of the volume of painkillers being consumed in the United States. Americans consume 80% of the world’s painkillers and the consumption jumped 600% from 1997 to 2007. Of course, one rather important matter is the reasons why there is such an excessive consumption of pain pills.
One reason is that doctors have been complicit in the increased use of pain medications. While there have been some efforts to cut back on prescribing pain medication, medical professionals were generally willing to write prescriptions for pain medication even in cases when such medicine was not medically necessary. This is similar to the over-prescribing of antibiotics that has come back to haunt us with drug resistant strains of bacteria. In some cases doctors no doubt simply prescribed the drugs to appease patients. In other cases profit was perhaps a motive. Fortunately, there have been serious efforts to address this matter in the medical community.
A second reason is that pharmaceutical companies did a good job selling their pain medications and encouraged doctors to prescribe them and patients to use them. While the industry had no intention of killing its customers, the pushing of pain medication has had that effect.
Of course, the doctors and pharmaceutical companies do not bear the main blame. While the companies supplied the product and the doctors provided the prescriptions, the patients had to want the drugs and use the drugs in order for this problem to reach the level of an epidemic.
The main causal factor would seem to be that the American attitude towards pain changed and resulted in the above-mentioned 600% increase in the consumption of painkillers. In the past, Americans seemed more willing to tolerate pain and less willing to use heavy-duty pain medications to treat relatively minor pains. These attitudes changed and now Americans are generally less willing to tolerate pain and more willing to turn to prescription painkillers. I regard this as a moral failing on the part of Americans.
As an athlete, I am no stranger to pain. I have suffered the usual assortment of injuries that go along with being a competitive runner and a martial artist. I also received some advanced education in pain when a fall tore my quadriceps tendon. As might be imagined, I have received numerous prescriptions for pain medication. However, I have used pain medications incredibly sparingly and if I do get a prescription filled, I usually end up properly disposing of the vast majority of the medication. I do admit that I did make use of pain medication when recovering from my tendon tear—the surgery involved a seven-inch incision in my leg that cut down until the tendon was exposed. The doctor had to retrieve the tendon, drill holes through my kneecap to reattach the tendon and then close the incision. As might be imagined, this was a source of considerable pain. However, I only used the pain medicine when I needed to sleep at night—I found that the pain tended to keep me awake at first. Some people did ask me if I had any problem resisting the lure of the pain medication (and a few people, jokingly I hope, asked for my extras). I had no trouble at all. Naturally, given that so many people are abusing pain medication, I did wonder about the differences between myself and my fellows who are hooked on pain medication—sometimes to the point of death.
A key part of the explanation is my system of values. When I was a kid, I was rather weak in regards to pain. I infer this is true of most people. However, my father and others endeavored to teach me that a boy should be tough in the face of pain. When I started running, I learned a lot about pain (I first started running in basketball shoes and got huge, bleeding blisters). My main lesson was that an athlete did not let pain defeat him and certainly did not let down the team just because something hurt. When I started martial arts, I learned a lot more about pain and how to endure it. This training instilled in me the belief that one should endure pain and that giving in to it would be dishonorable and wrong. This also includes the idea that the use of painkillers is undesirable. This was balanced by an accompanying belief: a person should not needlessly injure his body. As might be suspected, I learned to distinguish between mere pain and actual damage occurring to my body.
Of course, the above just explains why I believe what I do—it does not serve to provide a moral argument for enduring pain and resisting the abuse of pain medication. What is wanted are reasons to think that my view is morally commendable and that the alternative is to be condemned. Not surprisingly, I will turn to Aristotle here.
Following Aristotle, one becomes better able to endure pain by habituation. In my case, running and martial arts built my tolerance for pain, allowing me to handle the pain ever more effectively, both mentally and physically. Because of this, when I fell from my roof and tore my quadriceps tendon, I was able to drive myself to the doctor—I had one working leg, which is all I needed. This ability to endure pain also serves me well in lesser situations, such as racing, enduring committee meetings and grading papers.
This, of course, provides a practical reason to learn to endure pain—a person is much more capable of facing problems involving pain when she is properly trained in the matter. Someone who lacks this training and ability will be at a disadvantage when facing situations involving pain and this could prove harmful or even fatal. Naturally, a person who relies on pain medication to deal with pain will not be training herself to endure. Rather, she will be training herself to give in to pain and become dependent on medication that will become increasingly ineffective. In fact, some people end up becoming even more sensitive to pain because of their pain medication.
From a moral standpoint, a person who does not learn to endure pain properly and instead turns unnecessarily to pain medication is doing harm to himself and this can even lead to an untimely death. Naturally, as Aristotle would argue, there is also an excess when it comes to dealing with pain: a person who forces herself to endure pain beyond her limits or when doing so causes actual damage is not acting wisely or virtuously, but self-destructively. This can be used in a utilitarian argument to establish the wrongness of relying on pain medication unnecessarily as well as the wrongness of enduring pain stupidly. Obviously, it can also be used in the context of virtue theory: a person who turns to medication too quickly is defective in terms of deficiency; one who harms herself by suffering beyond the point of reason is defective in terms of excess.
Currently, Americans are, in general, suffering from a moral deficiency in regards to the matter of pain tolerance and it is killing us at an alarming rate. As might be suspected, there have been attempts to address the matter through laws and regulations regarding pain medication prescriptions. This supplies people with a will surrogate—if a person cannot get pain medication, then she will have to endure the pain. Of course, people are rather adept at getting drugs illegally and hence such laws and regulations are of limited effectiveness.
What is also needed is a change in values. As noted above, Americans are generally less willing to tolerate even minor pains and are generally willing to turn towards powerful pain medication. Since this was not always the case, it seems clear that this could be changed via proper training and values. What people need is, as discussed in an earlier essay, training of the will to endure pain that should be endured and resist the easy fix of medication.
In closing, I am obligated to add that there are cases in which the use of pain medication is legitimate. After all, the body and will are not limitless in their capacities and there are times when pain should be killed rather than endured. Obvious cases include severe injuries and illnesses. The challenge then, is sorting out what pain should be endured and what should not. Since I am a crazy runner, I tend to err on the side of enduring pain—sometimes foolishly so. As such, I would probably not be the best person to address this matter.
When a person does terrible things that seem utterly senseless, like murder children, there is sometimes a division in the assessment of the person. Some people will take the view that the person is mentally ill on the grounds that a normal, sane person would not do something so terrible and senseless. Others take the view that the person is evil on the grounds that a normal, non-evil person would not do something so terrible and senseless. Both of these views express an attempt to explain and understand what occurred. As might be imagined, the distinction between being evil and being mentally ill is a matter of significant concern.
One key point of concern is the matter of responsibility and the correct way to respond to a person who has done something terrible. If a person acts from mental illness rather than evil, then it seems somewhat reasonable to regard them as not being accountable for the action (at least to the degree the person is ill). After all, if something terrible occurs because a person suffers from a physical illness, the person is generally not held accountable (there are, obviously, exceptions). For example, my running friend Jay told me about a situation in which a person driving on his street had an unexpected seizure. Oddly, the person’s foot stomped down on the gas pedal and the car rocketed down the street, smashing into another car and coming to a stop in someone’s back yard. The car could have easily plowed over my friend, injuring or killing him. However, since the person was not physically in control of his actions (and he had no reason to think he would have a seizure) he was not held morally accountable. That is, he did nothing wrong. If a person had intentionally tried to murder my friend with his car, then that would be seen as an evil action. Unless, perhaps, the driver was mentally ill in a way that disabled him in a way comparable to the seizure. In that case, the driver might be as “innocent” as the seizure victim.
There seem to be at least two ways that a mentally ill person might be absolved of moral responsibility (at least to the degree she is mentally ill).
First, the person might be suffering from what could be classified as perceptual and interpretative disorders. That is, they have mental defects that cause them to perceive and interpret reality incorrectly. For example, a person suffering from extreme paranoia might think that my friend Jay intends to steal his brain, even though Jay has no such intention. In such a case, it seems reasonable to not regard the person as evil if he tries to harm Jay—after all, he is acting in what he thinks is legitimate self-defense rather than from a wicked motivation. In contrast, someone who wanted to kill Jay to rob his house or just for fun would be acting in an evil way. Put in general terms, mental conditions that distort a person’s perception and interpretation of reality might lead him to engage in acts of wrongful violence even though his moral reasoning might remain normal. Following Thomas Aquinas, it seems sensible to consider that such people might be following their conscience as best they can, only they have distorted information to work with in their decision making process and this distortion results from mental illness.
Second, the person might be suffering from what could be regarded as a disorder of judgment. That is, the person’s ability to engage in reasoning is damaged or defective due to a mental illness. The person might (or might not) have correct information to work with, but the processing is defective in a way that causes a person to make judgments that would be regarded as evil if made by a “normal” person. For example, a person might infer from the fact that someone is wearing a blue hat that the person should be killed.
One obvious point of concern is that “normal” people are generally bad at reasoning and commit fallacies with alarming regularity. As such, there would be a need to sort out the sort of reasoning that is merely bad reasoning from reasoning that would count as being mentally ill. One point worth considering is that bad reasoning could be fixed by education whereas a mental illness would not be fixed by learning, for example, logic.
A second obvious point of concern is discerning between mental illness as a cause of such judgments and evil as a cause of such judgments. After all, evil people can be seen as having a distorted sense of judgment in regards to value. In fact, some philosophers (such as Kant and Socrates) regard evil as a mental defect or a form of irrationality. This has some intuitive appeal—after all, people who do terrible and senseless things would certainly seem to have something wrong with them. Whether this is a moral wrongness or health wrongness is, of course, the big question here.
One of the main reasons to try to sort out the difference is figuring out whether a person should be treated (cured) or punished (which might also cure the person). As noted above, a person who did something terrible because of mental illness would (to a degree) not be accountable for the act and hence should not be punished (or the punishment should be duly tempered). For some it is tempting to claim that the choice of evil is an illusion because there is no actual free choice (that is, we do what we do because of the biochemical and electrical workings of the bodies that are us). As such, people should not be punished, rather they should be repaired. Of course, there is a certain irony in such advice: if we do not have choice, then advising us to not punish makes no sense since we will just do what we do. Of course, the person advising against punishment would presumably have no choice but to give such advice.
While high fructose corn syrup (which is typically a blend of fructose and glucose) is a ubiquitous food ingredient, it now has something of a poor reputation. For example, the front label on the ketchup bottle in my fridge has “no high fructose corn syrup!” in large letters. This is, presumably, considered a positive selling point. Given that many consumers are less than enamored with high fructose corn syrup, it is hardly surprising that the Corn Refiners Association tried to get the United States FDA to accept their proposal to rename the syrup “corn sugar.” While there are health concerns regarding sugar, sugar is generally looked upon more favorably than high fructose corn syrup. The FDA denied this request on the grounds that it does not meet the definition of “sugar” and thus, using an argument by definition, it follows that high fructose corn syrup is not sugar. According to the FDA, sugar must be “a solid, dried and crystallized food.” High fructose corn syrup, being syrup, obviously does not meet this definition; thus it is not a sugar.
The Corn Refiners Association has also been engaged in a marketing campaign to convince the public that high fructose corn syrup is a form of sugar and is comparable to table sugar. Not surprisingly, the Sugar Association brought a lawsuit against the Corn Refiners Association in response to this campaign. This sort of battle of names in the food industry is nothing new or particularly unusual. For example, there has also been a battle between the producers of milk and the makers of soy milk over whether or not soy milk should be legally allowed to be called “soy milk.”
It might, of course, be wondered why the naming matters. In the case of high fructose corn syrup, the most likely reason was noted above: while high fructose corn syrup has gotten a bad reputation (deserved or not), sugar still has a better reputation (deserved or not). As such, replacing “high fructose corn syrup” with “corn sugar” on ingredient labels would tend to lead uninformed consumers to believe that they were not consuming high fructose corn syrup but something else. Given that the reputation of high fructose corn syrup seems to be suffering, such re-naming would probably result in more sales relative to attempting to sell products with the original name.
Because of government subsidies for corn, high fructose corn syrup is cheaper than table sugar and thus is widely used because it provides more sweetness for the dollar. As such, high fructose corn syrup is a competitor to sugar that enjoys a price advantage. As might be suspected, it seems reasonable that the Sugar Association would not want to allow a major competitor to change the name of their signature product and thus yield an important advantage in regards to reputation.
For those who recall basic chemistry, this dispute might seem a bit odd. After all, fructose is chemically classified as a sugar (as is, obviously, glucose). As such, it is tempting to agree with the Corn Refiners Association: high fructose corn syrup is sugar. However, the FDA does not define “sugar” chemically, but in terms of its state (it must be a solid, at least in its “normal” state). As such, syrup is not a sugar, even if it is chemically a sugar (or two sugars mixed together). This, of course, might suggest that the dispute is thus the result of some sort of arcane legal process in which the definition of what seems to be a chemical term is set by bureaucrats and lobbyists rather than by chemists. Given that chemists are presumably the legitimate experts on what counts as a sugar, it would seem more rational to rely on the scientific rather than seemingly political definition of “sugar.”
One obvious reply is that the FDA might have a legitimate reason for classifying sugar in a way that involves it being a specific sort of solid rather than based on its chemical composition. After all, looking at the matter from the standpoint of food classifications, there does seem to be a reasonable distinction between a syrup and sugar. To use an appeal to intuition, imagine that you ask for some sugar for your coffee and you are handed a bottle of syrup. If the person said “this is fructose syrup, which is a sugar,” then you would probably reply that you meant the white crystal stuff. As such, looked at from the standpoint of how consumers understand “sugar” and “syrup,” high fructose corn syrup would be syrup and not corn sugar. This leads to the second point.
A second obvious reply is that renaming high fructose corn syrup would seem to primarily serve to mislead consumers. As noted above, until consumers sorted out that “corn sugar” actually refers to high fructose corn syrup, they are likely to buy products thinking that they are avoiding an ingredient they do not want. While I will not make any claims about the true intentions of the Corn Refiners Association, misleading the public in this way is morally dubious at best.
My second reply could be countered by arguing that the name change is not intended to mislead consumers but to offset an unfounded bad reputation that high fructose corn syrup does not deserve. After all, consumers often seem to regard high fructose corn syrup as bad, or at least as worse than sugar. If this is not the case, then the syrup is being judged unfairly. If this is the case, then it could be contended that the name change would merely allow the maligned ingredient to shed its unearned bad reputation. This could be seen as a person who has been falsely accused of misdeeds electing to change his name for a fresh start because he has been unable to erase the stain on his original name. This does have a certain moral appeal to it, but I am inclined to think that it is offset by the fact that most consumers would be ignorant of the name change and hence would be misled by such labels. As such, I do agree with the FDA’s decision to not allow the name change.
In terms of the science, the American Medical Association takes the view that there is not adequate evidence to start restricting the use of the syrup. More research is, however, planned. The Center for Science in the Public Interest currently takes the view that the syrup is no worse than table sugar, but that Americans consume too much of both sweeteners.