In general, people suffer from a wide range of cognitive biases. One of these, negativity bias, manifests as the tendency to give more weight to the negative than to the positive. For example, people tend to weigh the wrongs done to them more heavily than the good done to them. As another example, people tend to be more swayed by negative political advertisements than by positive ones. This bias can also have an impact on education.
A colleague of mine asks his logic students each semester how many of them are planning on law school. In the past, he had many such students. Now, the number is considerably smaller. Curious about this, he checked and found that logic had switched from being a requirement for pre-law to being a mere recommendation. My colleague noted that it seemed irrational for students who plan on taking the LSAT and becoming lawyers to avoid the logic class, given that the LSAT is largely a logic test and that law school requires skill in logic. He made the point that students often prefer to avoid the useful when it is not required and only grudgingly take what is required. We discussed a bit how this relates to the negativity bias: a student who did not take the logic class when it was required would be punished by being unable to graduate. Now that the class is optional, there is only the positive benefit of a likely improvement on the LSAT and better performance in law school. Since people weigh punishments more than rewards, this behavior makes sense—but it is still irrational, especially since many of the students who skip the logic class will end up spending money on LSAT preparation classes that will endeavor to spackle over their lack of skill in logic.
I have seen a similar sort of thing in my own classes. My university’s policy allows us to lower student grades on the basis of a lack of attendance. We are even permitted to fail a student for excessive absences. While attendance is mandatory in my classes, I do not impose a special punishment for missing class. Not surprisingly, when the students figure this out around week three or four, attendance plummets and then stabilizes at a low level. Before I used BlackBoard for quizzes, exams, and turning in assignments and papers, attendance would spike back up on days when something had to be done in class. Since students can now do their work via BlackBoard, these spikes are gone. They are, however, replaced by post-exam spikes when students do badly on the exams because they have not been in class. Then attendance slumps again. Interestingly, students often claim that they find the class interesting and useful. But, since there is no direct and immediate punishment for not attending (just a delayed “punishment” in terms of lower grades and a lack of learning), many students are not motivated to attend class.
Naturally, I do consider the possibility that I am a bad professor who is teaching a subject that students regard as useless or boring. However, my evaluations are consistently good, former students have returned to say good things about me and my classes, and so on. That said, perhaps I am merely deluding myself and being humored. In any case, it is easy enough to draw an analogy to exercise: exercise does not provide immediate rewards and there is no immediate punishment for not staying fit—just a loss of benefits. Most people elect to under-exercise or avoid it altogether. This, and similar things, shows that people generally avoid that which is difficult now but yields lasting benefits later.
I have, of course, considered going to the punishment model for my classes. However, I have resisted this for a variety of reasons. The first is that my personality is such that I am more inclined to offer benefits than punishments. This seems to be a clear mistake given the general psychology of people. The second is that I believe in free choice: like God, I think people should be free to make bad choices and not be coerced into doing what is right. It has to be a free choice. Naturally, choosing poorly brings its own punishment—albeit later on. The third is the hassle of dealing with attendance: the paperwork, having to handle excuses, being lied to regularly and so on. The fourth is the fact that classes are generally better for the good students when the students who do not want to be in class elect not to attend. While I want everyone to learn, I would rather have the people who would prefer not to learn stay out of class rather than disrupt the learning of others—college is not the place where the educator should have to spend time dealing with behavioral issues in the classroom. The fifth is that I prefer to reduce the amount of lying that students think they have to engage in.
In terms of why I have been considering using the punishment model, there are three reasons. One is that if students are compelled to attend, they might very well inadvertently learn something. The second is that this model is a lesson for what the workplace will be like for most of the students—so habituating them to this (or, rather, keeping the habituation they should have acquired in K-12) would be valuable. After all, they will probably need to endure awful jobs until they retire or die. The third is that perhaps many people lack the discipline to do what they should and they simply must be compelled by punishment—this is, of course, the model put forth by thinkers like Aristotle and Hobbes.
As I write this, it is finals week. Obviously, one of my last duties in regard to a class is to record the grade for each student, be it an A, B, C, D or F. Or the newer option, WF. For those not in the know, an F grade is what a student gets when she fails the course (I do not fail students; I merely record their failure). A WF is a sort-of-new thing in which a student fails by “walking away” from the course. To be specific, if a student earns an F but last attended only prior to the withdrawal deadline (November 8 this year), then the student gets a WF.
The distinction is rather important: if a student earns an F, she fails but gets to keep the financial aid for the course. If a student gets a WF, then she (or the university) has to pay the money back. In order for financial aid to be released, a student has to attend at least once. To keep it, a student needs to attend once more, after the withdrawal deadline (in theory, a student could just attend once, as long as it is after that deadline). Every semester I get at least one student who never attends class. Ever. I also always get 3-6 WF students. Some attend only one class, then never again. Others last attend within two days of the withdrawal deadline and thus just miss keeping the money. Presumably enduring one more class with me is too much. Or perhaps they get the date wrong.
In addition to the WF policy, my university also has a general attendance policy. A student gets three unexcused absences without any consequences or questions. After that, faculty are permitted to impose penalties, such as lowering the overall grade one letter grade for each extra unexcused absence. Some faculty are very strict about this and require students to be on time and remain the entire class, tracking each student as she enters and leaves the classroom. Woe to the student who misses too often, arrives too late or leaves too early: the F of doom looms.
I do keep track of attendance, mainly for two reasons. One is my own curiosity: how often do students show up? The other is the need to distinguish between the F and WF grades. Since money is on the line, I have to be sure to get the attendance right, although students do tend to try to sign in for their fellows.
I have never, however, lowered (or raised) a grade simply because of attendance. My general view has been that if a student can do the work and earn a grade, then that grade should not be arbitrarily lowered simply because the student failed to bask in my radiant knowledge (or shiver in my shadowy ignorance). I also take the view that students are (in theory) adults and hence have the choice as to whether they wish to attend or not. If they elect not to attend and do not learn, then the grade they earn will reflect this. If they elect not to attend, yet still learn, then the grade they earn will reflect that. Some people like the customer metaphor: a student has bought a ticket to the show, but it is her choice to go or not. The seat is paid for, but the student is under no obligation to fill it. Naturally, if the student is attending on someone else’s dime, then matters become a bit more complex, especially if the student is expected to maintain a certain grade to keep the support.
Of course, there is something to be said for enforcing attendance with punishment. My experience, which matches the data from studies of human behavior, is that people weigh the negative more heavily than the positive. In the case of a class, the (alleged) reward of education from attending has little impact on many students. However, the stick of failure for not attending is a strong motivator, especially for those who have little interest in education (as opposed to getting the paper to get the job to get the money…and then die). There is also the view that most people, even adults, must be ruled by pain rather than fine ideals or arguments (as per Aristotle). Less extreme is the view that college kids are just that, kids: many are incapable of using the freedom to attend or not attend wisely, and hence the professor must use her wisdom to guide them to good behavior by punishing a failure to attend. It could even be argued that a professor, like a high school teacher or nanny, has a moral obligation to force students to attend for their own good.
I tend to go with God’s policy: people are free to do as they will, they get every chance, but they get what they earn.
In an earlier essay I looked at the matter of the ethics of overhead in regard to charities. In that essay, I focused on Dan Pallotta’s discussion of the matter; in this essay I will discuss the matter more generally.
While people do vary in their opinions of the matter, there does seem to be a general moral intuition that a charitable non-profit should have minimal overhead. The idea is, presumably, that the money should go to the charitable cause rather than to the cost of overhead. Thus, the lower the overhead, the greater the virtue. In this context it is assumed that the overhead is generally legitimate (that is, the money for overhead is not simply wasted or misused).
The obvious way to discuss this matter in the context of ethics is to consider it within established approaches to ethics, specifically those of virtue theory, Kant and utilitarianism.
Borrowing from Aristotle and Aquinas, when assessing charity one needs to consider such factors as the object of the action, the circumstances of the action, and the end of the action. Aristotle, in defining what it is to act virtuously, puts considerable emphasis on the idea that a person must do the virtuous act for its own sake. Using the example of giving to charity, exercising the virtue of charity (or generosity) requires that the giving be done for the sake of giving. If, for example, I give for the sake of getting a tax break, then I am not exercising the virtue of charity. This would seem to provide some foundation for the intuition that charities should have low overhead. After all, for those engaged in the charitable function (be it a road race, a bake sale or something else) to be acting from the virtue of charity, they would need to engage in the activity for its own sake. If, for example, I work for a charity to get a salary, then it would seem that I am not acting virtuously. As such, to be acting virtuously, those involved in a charity would need to be engaged in it for its own sake. This would certainly seem to involve the expectation that they make sacrifices for the charity, since they are supposed to be acting for its sake and not for some other sake, such as making a large salary.
Not surprisingly, people are praised for making sacrifices for charity—be it a person who volunteers for free or a person who could be a CEO of a major corporation but instead works for a charity for a mere fraction of what she could make in the for-profit sector.
Kant claimed that what matters morally is the good will and not what the good will accomplishes. Roughly put, if a person wills the moral law, then that is what matters. Whether the person accomplishes anything practical or not is not relevant to the ethics of the matter. In the case of a charity, what would presumably matter is that a person wills in the appropriately good way; the consequences would not matter morally. This would certainly match the idea that what matters in a charity is the good will, which would be shown by focusing on minimizing overhead and maximizing what goes to the charitable cause. Naturally, a person can will the good and also have success in terms of the consequences. However, people are praised for their intent. So, as Pallotta noted, those running a bake sale with a low overhead that raises a tiny amount of money are regarded as morally superior to those running a high-overhead event that raises a great deal of money. It is presumably assumed that those with the low overhead are focused on (willing) charity while those involved in the high-overhead operation are really concerned with their own income.
In the case of utilitarianism, the focus is not on the intentions of those involved nor on what they will or do not will. Rather, what matters is the consequences. On this moral view, it would certainly seem that a high-overhead charity could be superior to a low-overhead charity in terms of the consequences. In fact, Pallotta seems to be giving what amounts to a utilitarian argument: what matters is the overall consequences. On this view, a charity is assessed rather like any business: in terms of costs and benefits. So, for example, if a charity has large expenses in terms of salaries and promotions, yet successfully raises millions for its cause, then it is better than a charity with tiny expenses that raises a tiny amount of money.
While it is tempting to claim that those operating from the utilitarian perspective would be doing so in a way that rejects the idea of the true virtue of charity, this need not be the case. Acting in a virtuous manner presumably does not require that a person act less effectively. As such, if a person accepts a large salary to work at a charity for the sake of the charity, then the person can still be regarded as virtuous, albeit well compensated for her virtue.
The obvious counter is that a person who was truly motivated by a sense of charity would accept a much lower salary so that more would go to the cause. This is certainly a legitimate concern and raises the question of how much a person should sacrifice in order to be virtuous. In this case, a person who could make a huge salary effectively selling bottled water to the masses but instead elects to make a large salary effectively combating malaria could be regarded as being virtuous—provided that she chose the one over the other for the sake of helping others. While a person who accepted a lower salary for doing the job could (and perhaps should) be regarded as more virtuous, it does seem misguided to automatically regard someone who is doing good as lacking virtue merely because she receives such compensation. If only in a practical sense, it seems like a good idea to reward people for doing what is good.
If, however, a person picks the charitable job for other reasons (such as location or to boost his image for a planned political run), then the person would not be acting virtuously even if he happened to do good. We do not, of course, always know what is motivating a person. This probably explains why people tend to praise charities with lower overhead—since those involved are obviously not getting anything for themselves (in terms of money), they surely must be motivated for charity’s sake. Or so it is assumed.
One longstanding philosophical concern is the matter of why people behave badly. One example of this that filled the American news in July of 2013 was the new chapter in the sordid tale of former congressman Anthony Weiner. Weiner was previously best known for resigning from office after a scandal involving his internet activities and his failed campaign of deception regarding said activities. Weiner decided to make a return to politics by running for mayor of New York. However, his bid for office was overshadowed by revelations that he was sexting under the nom de sext “Carlos Danger” even after his resignation and promise to stop such behavior.
While his behavior has been more creepy and pathetic than evil, it does provide a context for discussing the matter of why people behave badly.
Socrates, famously, gave the answer that people do wrong out of ignorance. He did not mean that people elect to do wrong because they lack factual knowledge (such as being unaware that stabbing people hurts them). This is not to say that bad behavior cannot stem from mere factual ignorance. For example, a person might be unaware that his joke about a rabbit caused someone great pain because she had just lost her beloved Mr. Bunny to a tragic weed whacker accident. In the case of Weiner, there is some possibility that ignorance of facts played a role in his bad behavior. For example, it seems that Weiner was in error about his chances of getting caught again, despite the fact that he had been caught before. Interestingly, Weiner’s fellow New York politician and Democrat Eliot Spitzer was caught in his scandal using the exact methods he himself had previously used and even described on television. In this case, the ignorance in question could be an arrogant overestimation of ability.
While such factual ignorance might play a role in a person’s decision to behave badly, there would presumably need to be much more in play in cases such as Weiner’s. For him to act on his (alleged) ignorance he would also need an additional cause or causes to engage in that specific behavior. For Socrates, this cause would be a certain sort of ignorance, namely a lack of wisdom.
While Socrates’ view has been extensively criticized (Aristotle noted that it contradicted the facts), it does have a certain appeal.
One way to consider such ignorance is to focus on the possibility that Weiner is ignorant of certain values. To be specific, it could be contended that Weiner acted badly because he did not truly know that he was choosing something worse (engaging in sexting) over something better (being faithful to his wife). In such cases a person might claim that he knows he has picked the lesser over the greater, but it could be replied that doing this repeatedly displays an ignorance of the proper hierarchy of values. That is, it could be claimed that Weiner acted badly because he did not have proper knowledge of the good. To use an analogy, a person who is offered a simple choice (that is, one with no bizarre philosophy counter-example conditions) between $5 and $100 and picks the $5 over the $100 would seem to show a failure to grasp that 100 is greater than 5.
Socrates presented the obvious solution to evil: if evil arises from ignorance, then knowledge of the good attained via philosophy is just what is needed.
The easy and obvious reply is that knowledge of what is better and what is worse is consistent with a person choosing to behave badly rather than better. To use an analogy, people who eat poorly and do not exercise profess to value health while acting in ways that directly prevent them from being healthy. This is often explained not in terms of a defect in values but, rather, in a lack of will. The idea that a person could have or at least understand the proper values but fail to act consistently with them because of weakness is certainly intuitively appealing. As such, one plausible explanation for Weiner’s actions is that while he knows he is doing wrong, he lacks the strength to prevent himself from doing so. Going back to the money analogy, it is not that the person who picks the $5 over the $100 does not know that 100 is greater than 5. Rather, in this scenario the $5 is easy to get and the $100 requires a strength the person lacks: she wants the $100, but simply cannot jump high enough to reach it.
Assuming a person knows what is good, the solution to this cause of evil would be, as Aristotle argued, proper training to make people stronger (or, at least, to condition them to select the better out of fear of punishment) so they can act on their knowledge of the good properly.
According to the hype, 3D printers are going to change the world in many positive ways. For example, home 3D printers will allow people to create replacement parts when something breaks. As another example, home 3D printers will allow anyone (with the money) to create their own objects (although much of this will be plastic junk). As a third example, the fact that 3D printers are almost universal machines (that is, they can theoretically make almost anything) will allow cheaper manufacturing. Not surprisingly, there is also a dark side to 3D printing.
One obvious point of moral concern is that such printers can allow people to print their own weapons and use these to harm people. While the first printed gun is not much of a weapon (it is essentially a plastic “zip gun”), it did show that guns can be printed using the current technology. As the technology improves, it seems reasonable to believe that much better weapons could be printed, thus allowing the usual suspects (criminals, terrorists, and so on) to secretly print up their own weapons.
While this is a concern, people can and do already make their own weapons. While these weapons are usually fairly crude, they can be quite deadly—as the Boston Marathon bombing of 2013 showed. As such, 3D printing would not seem to significantly increase this sort of threat.
People can also get the metalworking tools needed to make more sophisticated weapons, although these are rather expensive and require skill to operate. Because of this, 3D printing might present a genuine new threat: a person does not need any special skills to print up a gun, although a printer capable of making an effective gun would probably be rather expensive.
Overall, until the technology is cheap enough and capable of printing effective guns (that is, guns comparable to manufactured firearms), 3D printers will not present a significant threat. As such, there seems to be (as of now) little moral reason to be worried about this use of 3D printing.
Another matter of obvious moral concern is that 3D printers will allow people to easily and secretly duplicate patented and copyrighted objects. Using a currently available home 3D printer, a person could print up copies of toys, miniatures (for games like D&D), parts and so on. Thus, 3D printing will allow people to do with objects what they have been doing with music, movies and software, namely engaging in piracy.
“Solid piracy” or “3D piracy” does differ from digital piracy in at least one key respect. In the case of printing an object, a person is not stealing the physical object that the manufacturer made. For example, if I were to print a copy of a copyrighted dragon (or gargoyle) miniature for my Pathfinder game, this is rather different from me going to the local gaming store and shoplifting that miniature.
On the one hand, this does seem to be a meaningful difference: by printing the dragon, I am not actually stealing the object. After all, no one is deprived of the object. As such, copying and printing a patented or copyrighted object would not be theft in the usual sense of stealing an actual object. Similar arguments have, of course, been given as to why pirating software, movies and music is not theft.
On the other hand, this does still seem to be theft. While I am not guilty of stealing the matter that makes up my dragon (assuming I did not steal that), I did steal the design of the dragon. For something like a plastic dragon miniature, the matter that makes it up is not the valuable component. Rather, to go with Aristotle, what is valuable is the form of the matter: in this case, the form of an imaginary dragon.
This sort of theft of design is nothing new—people have been stealing designs and producing their own objects for quite some time. What is different about 3D printing is that it makes such theft of form very easy. Sticking with my dragon example, before 3D printing it would have been very difficult for me to steal the dragon design/form: I would have had to create a mold of the dragon, melt down the plastic to cast it, and so on. It would, obviously, have been cheaper and easier to just buy the dragon. However, 3D printing would allow me to easily copy the dragon. While there would be the cost of the printer (and perhaps a 3D scanner) and the materials, if I did enough copying and the material was cheap enough, it would eventually become cheaper to steal the dragon design than to buy the dragon.
However, it would still be theft—I would be using the design owned by someone else without providing just compensation and this would be just as wrong as stealing a movie, software or music. Of course, there are those who contend that copying movies, software or music is not theft and they would presumably hold the same view about solid/3D piracy.
With the ever-increasing cost of college education there is ever more reason to consider whether or not college is worth it. While much of this assessment can be done in terms of income, there is also the academic question of whether or not students actually benefit intellectually from college.
The 2011 study Academically Adrift showed that a significant percentage of students received little or no benefit from college, which is obviously a matter of considerable concern. Not surprisingly, there have been additional studies aimed at assessing this matter. Of special concern to me is the claim that a new study shows that students do improve in critical thinking skills. While this study can be questioned, I will attest that the weight of evidence shows that American college students are generally weak at critical thinking. This is hardly shocking given that most people are weak at critical thinking.
My university, like so many others, has engaged in a concerted effort to enhance the critical thinking skills of students. However, there are reasonable concerns regarding the methodology used in such attempts. There is also the concern as to whether or not it is even possible, in practical terms, to significantly enhance the critical thinking skills of college students over the span of a two- or four-year (or more) degree. While I am something of an expert at critical thinking (I mean actual critical thinking, not the stuff that sprang up so people could profit from being “critical thinking” experts), my optimism in this matter is somewhat weak. This is because I have given due consideration to the practical problems of the matter and have been teaching the subject for over two decades.
As with any form of education, it is wise to begin by considering the general qualities of human beings. For example, if humans are naturally good, then teaching virtue would be easier. In the case at hand, the question would be whether or not humans (in general) are naturally good at critical thinking.
While Aristotle famously regarded humans as rational animals, he also noted that most people are not swayed by arguments or fine ideals. Rather, they are dominated by their emotions and must be ruled by pain. While I will not comment on ruling with pain, I will note that Aristotle’s view about human rationality has been borne out by experience. To fast forward to now, experts speak of the various cognitive biases and emotional factors that impede human rationality. This matches my own experience and I am confident that it matches that of others. To misquote Lincoln, some people are irrational all the time and all the people are irrational some of the time. As such, trying to transform people into competent critical thinkers will generally be very difficult, perhaps as hard as making people virtuous.
In addition to the biological foundation, there is also the matter of preparation. For most students, their first substantial course in (or even coverage of) critical thinking occurs in college. It seems unlikely that students who have gone almost two decades without proper training in critical thinking will be significantly altered by college. One obvious solution, taken from Aristotle, is to begin proper training in critical thinking at an early age.
Another matter of serious concern is the fact that students are exposed to influences that discourage critical thinking and actively encourage irrationality. One example of this is the domain of politics. Political discourse tends to be, at best, rhetoric, and typically involves the use of a wide range of fallacies such as the straw man, scare tactics and ad hominems of all varieties. For those who are ill-prepared in critical thinking, exposure to these influences can have a very detrimental effect, and they can be led far away from reason. I would call for politicians to cease this behavior, but they seem devoted to the tools of irrationality. There is a certain irony in politicians who exploit and encourage poor reasoning being among those lamenting the weak critical thinking skills of students and endeavoring to blame colleges for the problems they themselves have helped create.
Another example of this is the domain of entertainment. As Plato argued in the Republic, exposure to corrupting influences can corrupt. While the usual arguments about corruption from entertainment focus on violence and sexuality, it is also important to consider the impact of certain amusements upon the reasoning skills of students. Television, which has long been said to “rot the brain”, certainly seems to shovel forth fare that hardly contributes to good reasoning. While I would not suggest censorship, I would encourage students to discriminate and steer clear of shows that seem likely to have a corrosive impact on reasoning. While it might be an overstatement to claim that entertainment can corrode reason, it does seem sensible to note that much of it contributes nothing positive to a person’s mind.
A third example of this is advertising. As with politics, advertising is the domain of persuasion. While good reasoning can persuade, it is (for most people) the weakest tool of persuasion. As such, advertisers flood us with ads employing what they regard as effective tools of persuasion. These typically involve various rhetorical devices and also the use of fallacies. Sadly, the bad logic of fallacies is generally far more persuasive than good reasoning. Students are generally exposed to significant amounts of advertising (they no doubt spend more time exposed to ads than critical thinking) and it makes sense that this exposure would impact them in detrimental ways, at least if they are not already equipped to properly assess such ads with critical thinking skills.
A final example is, of course, everyday life. Students will typically be exposed to significant amounts of poor reasoning and this will have a significant influence on them. Students will also learn what the politicians and advertisers know: the tools of irrational persuasion will serve them better in our society than the tools of reason.
Given these anti-critical thinking influences, it is something of a wonder that students develop any critical thinking skills.
There are many ways to die, but public concern tends to focus on whatever is illuminated in the media spotlight. 2012 saw considerable focus on guns and some modest attention on a somewhat unexpected and perhaps ironic killer, namely pain medication. In the United States, about 20,000 people die each year (roughly one every 26 minutes) due to pain medication. This typically occurs through what is called “stacking”: a person takes multiple pain medications, sometimes adding alcohol to the mix, resulting in death. While some people might elect to use this as a method of suicide, most of the deaths appear to be accidental—that is, the person had no intention of ending his life.
The number of deaths is so high in part because of the volume of painkillers being consumed in the United States. Americans consume 80% of the world’s painkillers, and consumption jumped 600% from 1997 to 2007. Of course, one rather important matter is why there is such excessive consumption of pain pills.
One reason is that doctors have been complicit in the increased use of pain medications. While there have been some efforts to cut back on prescribing pain medication, medical professionals have generally been willing to write prescriptions for pain medication even in cases in which such medicine was not medically necessary. This is similar to the over-prescribing of antibiotics that has come back to haunt us with drug-resistant strains of bacteria. In some cases doctors no doubt simply prescribed the drugs to appease patients. In other cases profit was perhaps a motive. Fortunately, there have been serious efforts in the medical community to address this matter.
A second reason is that pharmaceutical companies did a good job selling their pain medications and encouraged doctors to prescribe them and patients to use them. While the industry had no intention of killing its customers, the pushing of pain medication has had that effect.
Of course, the doctors and pharmaceutical companies do not bear the main blame. While the companies supplied the product and the doctors provided the prescriptions, the patients had to want the drugs and use the drugs in order for this problem to reach the level of an epidemic.
The main causal factor would seem to be that the American attitude towards pain changed, resulting in the above-mentioned 600% increase in the consumption of painkillers. In the past, Americans seemed more willing to tolerate pain and less willing to use heavy-duty pain medications to treat relatively minor pains. These attitudes changed, and now Americans are generally less willing to tolerate pain and more willing to turn to prescription painkillers. I regard this as a moral failing on the part of Americans.
As an athlete, I am no stranger to pain. I have suffered the usual assortment of injuries that go along with being a competitive runner and a martial artist. I also received some advanced education in pain when a fall tore my quadriceps tendon. As might be imagined, I have received numerous prescriptions for pain medication. However, I have used pain medications incredibly sparingly, and if I do get a prescription filled, I usually end up properly disposing of the vast majority of the medication. I do admit that I made use of pain medication when recovering from my tendon tear—the surgery involved a seven-inch incision in my leg that cut down until the tendon was exposed. The doctor had to retrieve the tendon, drill holes through my kneecap to reattach the tendon, and then close the incision. As might be imagined, this was a source of considerable pain. However, I only used the pain medicine when I needed to sleep at night—I found that the pain tended to keep me awake at first. Some people did ask me if I had any problem resisting the lure of the pain medication (and a few people, jokingly I hope, asked for my extras). I had no trouble at all. Naturally, given that so many people are abusing pain medication, I did wonder about the differences between myself and my fellows who are hooked on pain medication—sometimes to the point of death.
A key part of the explanation is my system of values. When I was a kid, I was rather weak with regard to pain. I infer this is true of most people. However, my father and others endeavored to teach me that a boy should be tough in the face of pain. When I started running, I learned a lot about pain (I first started running in basketball shoes and got huge, bleeding blisters). My main lesson was that an athlete did not let pain defeat him and certainly did not let down the team just because something hurt. When I started martial arts, I learned a lot more about pain and how to endure it. This training instilled in me the belief that one should endure pain and that to give in to it would be dishonorable and wrong. This also includes the idea that the use of painkillers is undesirable. This was balanced by an accompanying belief: that a person should not needlessly injure his body. As might be suspected, I learned to distinguish between mere pain and actual damage occurring to my body.
Of course, the above just explains why I believe what I do—it does not serve to provide a moral argument for enduring pain and resisting the abuse of pain medication. What is wanted are reasons to think that my view is morally commendable and that the alternative is to be condemned. Not surprisingly, I will turn to Aristotle here.
Following Aristotle, one becomes better able to endure pain by habituation. In my case, running and martial arts built my tolerance for pain, allowing me to handle the pain ever more effectively, both mentally and physically. Because of this, when I fell from my roof and tore my quadriceps tendon, I was able to drive myself to the doctor—I had one working leg, which is all I needed. This ability to endure pain also serves me well in lesser situations, such as racing, enduring committee meetings and grading papers.
This, of course, provides a practical reason to learn to endure pain—a person is much more capable of facing problems involving pain when she is properly trained in the matter. Someone who lacks this training and ability will be at a disadvantage when facing situations involving pain, and this could prove harmful or even fatal. Naturally, a person who relies on pain medication to deal with pain will not be training herself to endure. Rather, she will be training herself to give in to pain and become dependent on medication that will become increasingly ineffective. In fact, some people end up becoming even more sensitive to pain because of their pain medication.
From a moral standpoint, a person who does not learn to endure pain properly and instead turns unnecessarily to pain medication is doing harm to himself, and this can even lead to an untimely death. Naturally, as Aristotle would argue, there is also an excess when it comes to dealing with pain: a person who forces herself to endure pain beyond her limits, or when doing so causes actual damage, is not acting wisely or virtuously, but self-destructively. This can be used in a utilitarian argument to establish the wrongness of relying on pain medication unnecessarily as well as the wrongness of enduring pain stupidly. Obviously, it can also be used in the context of virtue theory: a person who turns to medication too quickly is defective in terms of deficiency; one who harms herself by suffering beyond the point of reason is defective in terms of excess.
Currently, Americans are, in general, suffering from a moral deficiency with regard to pain tolerance, and it is killing us at an alarming rate. As might be suspected, there have been attempts to address the matter through laws and regulations regarding pain medication prescriptions. This supplies people with a surrogate for the will—if a person cannot get pain medication, then she will have to endure the pain. Of course, people are rather adept at getting drugs illegally, and hence such laws and regulations are of limited effectiveness.
What is also needed is a change in values. As noted above, Americans are generally less willing to tolerate even minor pains and are generally willing to turn to powerful pain medication. Since this was not always the case, it seems clear that this could be changed via proper training and values. What people need is, as discussed in an earlier essay, training of the will to endure pain that should be endured and to resist the easy fix of medication.
In closing, I am obligated to add that there are cases in which the use of pain medication is legitimate. After all, the body and will are not limitless in their capacities, and there are times when pain should be killed rather than endured. Obvious cases include severe injuries and illnesses. The challenge, then, is sorting out what pain should be endured and what should not. Since I am a crazy runner, I tend to err on the side of enduring pain—sometimes foolishly so. As such, I would probably not be the best person to address this matter.