While driving to yet another committee meeting, I heard an advertisement for cool shaping, which apparently is some sort of method for shaping body fat to make a person appear less fat. What struck me about the commercial was the claim that cool shaping would give a person the body they deserve. While this is certainly a clever advertising phrase, it does raise a matter worth considering.
On the face of it, a person who has not suffered an unfortunate accident or illness would have exactly the body he deserves. After all, the body a person has is the body he has forged by his efforts (or lack thereof), diet and lifestyle. That is to say, the body one has is the product of one’s choices and is thus deserved in that it has been properly earned. So, if a person is fit and lean or soft and flabby, then he has just what he deserves. If this is plausible, then something like cool shaping would not give a person the body they deserve, since the person already has exactly that body.
It could be countered that a person could have a body he does not deserve by arguing that while a person does earn his body by his actions and choices, the body he starts with is not one that he has chosen. After all, a person is born with (or as, for those who are materialists in the philosophical sense) whatever body he happens to get, and this body is not something a person earns or deserves. One just gets (or is) it. Naturally, it could be claimed that Karma or some other metaphysical system ensures that a person does get the body he deserves (such as being reborn as a banana slug), but I will set aside those considerations and just go with the view that the body one is born with is not deserved.
A person born with a genetic predisposition towards packing on the pounds would not deserve this predisposition and hence, it could be claimed, would not have the body he deserves. However, this leads to the obvious question: what sort of body does a person deserve? Do people, in general, deserve to have better bodies than they have? Or is this absurd?
I am inclined to stick with my original view, namely that even though people just get (or are) whatever body they are born with without deserving it, people do (in general) end up with the body that they deserve in the sense that they get what they earn-and that is what deserving is all about. In fact, aside from cases of unfortunate accidents, diseases and other such dire undeserved circumstances, this is one of the rare cases in which a person does get exactly what he deserves-that is, the body he has forged.
When a person does terrible things that seem utterly senseless, such as murdering children, there is sometimes a division in the assessment of the person. Some people will take the view that the person is mentally ill on the grounds that a normal, sane person would not do something so terrible and senseless. Others take the view that the person is evil on the grounds that a normal, non-evil person would not do something so terrible and senseless. Both of these views express an attempt to explain and understand what occurred. As might be imagined, the distinction between being evil and being mentally ill is a matter of significant concern.
One key point of concern is the matter of responsibility and the correct way to respond to a person who has done something terrible. If a person acts from mental illness rather than evil, then it seems somewhat reasonable to regard them as not being accountable for the action (at least to the degree the person is ill). After all, if something terrible occurs because a person suffers from a physical illness, the person is generally not held accountable (there are, obviously, exceptions). For example, my running friend Jay told me about a situation in which a person driving on his street had an unexpected seizure. Oddly, the person’s foot stomped down on the gas pedal and the car rocketed down the street, smashing into another car and coming to a stop in someone’s back yard. The car could have easily plowed over my friend, injuring or killing him. However, since the person was not physically in control of his actions (and he had no reason to think he would have a seizure) he was not held morally accountable. That is, he did nothing wrong. If a person had intentionally tried to murder my friend with his car, then that would be seen as an evil action. Unless, perhaps, the driver was mentally ill in a way that disabled him as thoroughly as the seizure did. In that case, the driver might be as “innocent” as the seizure victim.
There seem to be at least two ways that a mentally ill person might be absolved of moral responsibility (at least to the degree she is mentally ill).
First, the person might be suffering from what could be classified as perceptual and interpretative disorders. That is, they have mental defects that cause them to perceive and interpret reality incorrectly. For example, a person suffering from extreme paranoia might think that my friend Jay intends to steal his brain, even though Jay has no such intention. In such a case, it seems reasonable to not regard the person as evil if he tries to harm Jay—after all, he is acting in what he thinks is legitimate self-defense rather than from a wicked motivation. In contrast, someone who wanted to kill Jay to rob his house or just for fun would be acting in an evil way. Put in general terms, mental conditions that distort a person’s perception and interpretation of reality might lead him to engage in acts of wrongful violence even though his moral reasoning might remain normal. Following Thomas Aquinas, it seems sensible to consider that such people might be following their conscience as best they can, only they have distorted information to work with in their decision making process and this distortion results from mental illness.
Second, the person might be suffering from what could be regarded as a disorder of judgment. That is, the person’s ability to engage in reasoning is damaged or defective due to a mental illness. The person might (or might not) have correct information to work with, but the processing is defective in a way that causes a person to make judgments that would be regarded as evil if made by a “normal” person. For example, a person might infer from the fact that someone is wearing a blue hat that the person should be killed.
One obvious point of concern is that “normal” people are generally bad at reasoning and commit fallacies with alarming regularity. As such, there would be a need to sort out the sort of reasoning that is merely bad reasoning from reasoning that would count as being mentally ill. One point worth considering is that bad reasoning could be fixed by education whereas a mental illness would not be fixed by learning, for example, logic.
A second obvious point of concern is discerning between mental illness as a cause of such judgments and evil as a cause of such judgments. After all, evil people can be seen as having a distorted sense of judgment in regards to value. In fact, some philosophers (such as Kant and Socrates) regard evil as a mental defect or a form of irrationality. This has some intuitive appeal—after all, people who do terrible and senseless things would certainly seem to have something wrong with them. Whether this is a moral wrongness or health wrongness is, of course, the big question here.
One of the main reasons to try to sort out the difference is figuring out whether a person should be treated (cured) or punished (which might also cure the person). As noted above, a person who did something terrible because of mental illness would (to a degree) not be accountable for the act and hence should not be punished (or the punishment should be duly tempered). For some it is tempting to claim that the choice of evil is an illusion because there is no actual free choice (that is, we do what we do because of the biochemical and electrical workings of the bodies that are us). As such, people should not be punished; rather, they should be repaired. Of course, there is a certain irony in such advice: if we do not have choice, then advising us to not punish makes no sense since we will just do what we do. Of course, the person advising against punishment would presumably have no choice but to give such advice.
The mass murder that occurred at Sandy Hook Elementary School has created significant interest in both gun control and mental health. In this essay I will focus on the matter of mental health.
When watching the coverage on CNN, I saw a segment in which Dr. Gupta noted that currently people can only be involuntarily detained for mental health issues when they present an imminent danger. He expressed concern about this high threshold, noting that it has the practical impact that authorities generally cannot act until someone has done something harmful, by which point it can be rather too late. One rather important matter is sorting out what the threshold for official intervention should be.
On the one hand, it can be argued that the relevant authorities need to be proactive. They should not wait until they learn that someone with a mental issue is plotting to shoot children before acting. They certainly should not wait until after someone with a mental issue has murdered dozens of people. They have to determine whether or not a person with a mental issue (or issues) is likely to engage in such behavior and deal with the person well before people are hurt. That is, the authorities need to catch and deal with the person while he is still a pre-criminal rather than an actual criminal.
In terms of arguing in favor of this, a plausible line of approach would be a utilitarian argument: dealing with people with mental issues before they commit acts of violence will prevent the harmful consequences that otherwise would have occurred.
On the other hand, there is the obvious moral concern with allowing authorities to detain and deal with people not for something they have done or have even plotted to do but merely might do. Obviously, there is a rather serious practical challenge in sorting out what a person might do when he is not actually conspiring or planning a misdeed. There is also the moral concern of justifying coercing or detaining a person for what they might do. Intuitively, the mere fact that a person could or might do something wrong does not warrant acting against the person. The obvious exception is when there is adequate evidence to establish that a person is plotting or conspiring to commit a crime. However, these sorts of things are already covered by the law, so what would seem to be under consideration would be coercing people without adequate evidence that they are plotting or conspiring to commit crimes. On the face of it, this would seem unacceptable.
One obvious way to justify using the coercive power of the state against those with mental issues before they commit or even plan a crime is to argue that certain mental issues are themselves adequate evidence that a person is reasonably likely to engage in a crime, even though nothing she has done meets the imminent danger threshold.
On an abstract level, this does have a certain appeal. To use an analogy to physical health, if certain factors indicate a high risk of a condition occurring, then it makes sense to treat for that condition before it manifests. Likewise, if certain factors indicate a high risk of a person with mental issues engaging in violence against others, then it makes sense to treat for that condition before it manifests.
It might be objected that people can refuse medical treatment for physical conditions and hence they should be able to do the same for dangerous mental issues. The obvious reply is that if a person refuses treatment for a physical ailment, he is only endangering himself. But if someone refuses treatment for a condition that can result in her engaging in violence against others, then she is putting others in danger without their consent and she does not have the liberty or right to do this.
Moving into the realm of the concrete, the matter becomes rather problematic. One rather obvious point of concern is that mental health science is lagging far behind the physical health sciences (I am using the popular rather than philosophical distinction between mental and physical here) and the physical health sciences are still rather limited. As such, using the best mental health science of the day to predict how likely a person is to engage in violence (in the absence of evidence of planning and actual past crimes) will typically result in a prediction of dubious accuracy. To use the coercive power of the state against an individual on the basis of such dubious evidence would not be morally acceptable. After all, a person can only be justly denied liberty on adequate grounds and such a prediction does not seem strong enough to warrant such action.
It might be countered that in the light of such events as the shootings at Sandy Hook and Colorado, there are legitimate grounds to use the coercive power of the state against people who might engage in such actions on the grounds that preventing another mass murder is worth the price of denying people their freedom on mere suspicion.
As might be imagined, without very clear guidelines and limitations, this sort of principle could easily be extended to anyone who might commit a crime—thus justifying locking up people for being potential criminals. This would certainly be wrong.
It might be countered that there is no danger of the principle being extended and that such worries are worries based on a slippery slope. After all, one might say, the principle only applies to those deemed to have the right (or rather wrong) sort of mental issues. Normal people, one might say in a calm voice, have nothing to worry about.
However, it seems that normal people might have something to worry about. After all, it is normal for people to have the occasional mental issue (such as depression) and there is the concern that the application of the fuzzy science of mental health might result in incorrect determinations of mental issues.
To close, I am not saying that we should not reconsider the threshold for applying the coercive power of the state to people with mental issues. Rather, my point is that this should be done with due care to avoid creating more harm than it would prevent.
As a runner, martial artist and philosopher I have considerable interest in the matter of the will. As might be imagined, my view of the will is shaped mostly by my training and competitions. Naturally enough, I see the will from my own perspective and in my own mind. As such, much as Hume noted in his discussion of personal identity, I am obligated to note that other people might find that their experiences vary considerably. That is, other people might see their will as very different or they might even not believe that they have a will at all.
As a gamer, I also have the odd habit of modeling reality in terms of game rules and statistics—I am approaching the will in the same manner. This is, of course, similar to modeling reality in other ways, such as using mathematical models.
In my experience, my will functions as a mental resource that allows me to remain in control of my actions. To be a bit more specific, the use of the will allows me to prevent other factors from forcing me to act or not act in certain ways. In game terms, I see the will as being like “hit points” that get used up in the battle against these other factors. As with hit points, running out of “will points” results in defeat. Since this is rather abstract, I will illustrate this with two examples.
This morning (as I write this) I did my usual Tuesday workout: two hours of martial arts followed by about two hours of running. Part of my running workout was doing hill repeats in the park—this involves running up and down the hill over and over (rather like marching up and down the square). Not surprisingly, this becomes increasingly painful and fatiguing. As such, the pain and fatigue were “trying” to stop me. I wanted to keep running up and down the hill and doing this required expending those will points. This is because without my will the pain and fatigue would stop me well before I am actually physically incapable of running anymore. Roughly put, as long as I have will points to expend I can keep running until I collapse from exhaustion. At that point no amount of will can move the muscles and my capacity to exercise my will in this matter would also be exhausted. Naturally, I know that training to the point of exhaustion would do more harm than good, so I will myself to stop running even though I desire to keep going. I also know from experience that my will can run out while racing or training—that is, I give in to fatigue or pain before my body is actually at the point of physically failing. These occurrences are failures of will and nicely illustrate that the will can run out or be overcome.
After my run, I had my breakfast and faced the temptation of two boxes of assorted chocolates. Like all humans, I really like sugar and hence there was a conflict between my hunger for chocolate and my choice to not shove lots of extra calories and junk into my pie port. My hunger, of course, “wants” to control me. But, of course, if I yield to the hunger for chocolate then I am not in control—the desire is directing me against my will. Of course, the hunger is not going to simply “give up” and it must be controlled by expending will and doing this keeps me in control of my actions by making them my choice.
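For fellow gamers, the hit-point analogy above can be put into code. The following is a toy sketch, not a serious psychological theory: the class name, point values, and “pressures” are purely illustrative.

```python
class Will:
    """Toy model of the will as a pool of 'will points' (like hit points)."""

    def __init__(self, points):
        self.points = points

    def resist(self, pressure):
        """Spend will points to resist a pressure (pain, fatigue, temptation).

        Returns True if the pressure is resisted; False if the will is
        exhausted and the pressure wins (a failure of will).
        """
        if self.points >= pressure:
            self.points -= pressure
            return True
        self.points = 0  # the will is spent
        return False


# One morning's worth of will, spent against successive pressures.
will = Will(points=10)
print(will.resist(4))  # hill repeats: resisted (True)
print(will.resist(4))  # chocolate: resisted (True)
print(will.resist(4))  # a third temptation: the will runs out (False)
```

The point of the model is simply that resisting each pressure draws on the same finite pool, so enough pressures in a row will eventually defeat even a strong will.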
Naturally, many alternatives to the will can be presented. For example, Hobbes’ account of deliberation is that competing desires (or aversions) “battle it out”, but the stronger always wins and thus there is no matter of will or choice. However, I prefer my view, as it seems to match my intuitions and experiences.
The United States and other countries face the rather odd problem of having a significant portion of the population both overweight and malnourished. One factor that contributes to this is that calorie dense and nutritionally empty foods are cheap and accessible while foods that are nutritionally rich tend to be more expensive and less accessible.
To anticipate a likely response, a person’s diet is also obviously a matter of choice-people are not forced to down Twinkies, burgers, chips and cola at gun point. However, a person’s choices are obviously impacted by factors like cost and accessibility (as well as marketing). As such, it is hardly surprising that people tend to choose the food that is cheaper and more readily accessible over the food that is more expensive and takes more effort to acquire.
While calorie dense and nutritionally lacking food tends to be cheaper than more nutritionally rich food for a variety of reasons, one reason for price differences lies in differences in state subsidies. In an interesting irony, the federal nutrition recommendations are a reverse of the federal food subsidies. This is nicely illustrated by the following pyramids:
This, as might be imagined, raises some interesting moral concerns in the area of food ethics. The most obvious concern is that the United States government’s subsidies impact the pricing of food in such a way that the food we should (by the government’s own recommendations) eat less of tends to be cheaper than the food we should be eating more of. As such, as the heading says, a salad will tend to be more expensive than a fast food burger, despite the fact that the salad is better for a person nutritionally.
To focus directly on the ethics, by making less healthy food cheaper through subsidies, the state is making it more likely that people will make harmful dietary choices. That is, that they will pick the calorie-rich and nutritionally lacking foods (such as fast food and junk food) over the nutritionally rich food. In short, the folks who make these decisions are contributing to harming people, which certainly seems to be wrong (if only on utilitarian grounds).
If the state is going to subsidize foods, then the rational and ethical approach would be to subsidize foods based on legitimate scientific recommendations. That is, the food that is better for people should be subsidized and food that tends to not be good for people (or is actually harmful) should either not be subsidized or should be subsidized proportional to its nutritional value.
The reality is, of course, that subsidies are not based on concerns of health or food ethics, but rather based on political influence. As such, the subsidies help create a situation in which unhealthy food is cheap and hence tends to be consumed more than healthy foods. This in turn contributes to health problems (obesity, for example) which costs us even more. Thus, we are paying to eat poorly and then paying again for the effects of these poor diets. This seems to be something we should not be doing, both from a practical and a moral standpoint.
Having been around a while, I have seen celebrity endorsed fad diets come and go. One of the most recent trends is the gluten-free diet. This diet has been presented as a way of losing weight and some have even suggested that it can help with autism. While various celebrities have promoted this diet, health advice from celebrities should be subject to proper critical assessment.
As might be imagined, people have a tendency to confuse celebrity status with expertise. That is, people often believe what a celebrity claims is true because the celebrity is famous. However, while reputation is a factor in assessing expertise, the reputation has to be within the field in which the person is making the claim. So, for example, a person’s fame as an actor has no relevance to her ability to make credible claims about diets. There is also the fact that a person’s expertise depends primarily not on their fame but on such factors as education, experience, and accomplishments within the field. A lack of excessive bias is also an important factor in assessing the claims of an expert. Accepting claims based on unwarranted authority (such as buying into a diet simply because a celebrity endorses it) would be to fall victim to a fallacious argument from authority.
Relying on experts is not, of course, a fallacy. However, one has to be careful to turn to the right experts-that is, people who have the knowledge and experience to be able to make informed claims and who have the objectivity and lack of bias to be trustworthy. As might be imagined, celebrities who are pushing specific products would tend to be lacking in both areas.
As a specific example, consider the fad of gluten free diets. Like some fad diets, there is some truth behind the fad. In the case of gluten, there is a condition called Celiac Disease. People with this disease need to have a gluten free diet in order to avoid various health problems. While this is a real condition, only about 1% of the US population has Celiac Disease. As such, 99% of the population does not need a gluten free diet.
However, those pushing a gluten free diet claim that it has health benefits for people who do not have this disease. If so, then the diet would be worth considering. However, there seems to be no objective scientific data supporting these claims-thus there would seem to be no reason for people who lack the disease to go on such a diet.
But, one of the main reasons for going on a diet is weight loss and the gluten free diet has been pushed as a means of losing weight. However, the evidence is that the gluten free diet has no special capacity to cause weight loss. See, for example, Wendy Marcason’s “Is the Evidence to Support the Claim that a Gluten-Free Diet Should Be Used for Weight Loss”, page 1786 in the November 2011 Journal of the American Dietetic Association.
As has long been known, weight loss is primarily a matter of expending more calories than you take in. While gluten products do have calories, gluten calories are simply calories-as are non-gluten calories. In fact, as Marcason points out, some gluten free products have more calories and fat than their gluten containing counterparts. Eating such products instead of the lower calorie versions will, obviously enough, not promote weight loss.
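The underlying arithmetic is simple enough to sketch. Using the common (if rough) rule of thumb that a pound of body fat corresponds to about 3,500 calories, weight change is just a function of the running calorie balance. This is a back-of-the-envelope illustration, not dietary advice, and the sample numbers are made up:

```python
CALORIES_PER_POUND = 3500  # rough rule of thumb for a pound of body fat

def weekly_weight_change(calories_in_per_day, calories_out_per_day):
    """Estimated weight change in pounds over one week.

    Negative means weight loss; the source of the calories
    (gluten or gluten free) does not enter into it.
    """
    daily_balance = calories_in_per_day - calories_out_per_day
    return 7 * daily_balance / CALORIES_PER_POUND

# A 500-calorie daily deficit loses about a pound a week,
# regardless of which fad diet produced the deficit.
print(weekly_weight_change(2000, 2500))  # -1.0
```

Note what the equation does not contain: any term for gluten. Swapping gluten calories for an equal number of gluten free calories leaves the balance, and hence the weight change, unchanged.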
From the standpoint of thinking well about these matters, there are three main points to take away from this. First, celebrities are not (unless they are also health experts) experts on dieting and health. Second, advice about dieting should be sought from the actual experts-who are generally not celebrities and who tend to give mundane advice like “eat less, eat better and exercise more”. Third, losing weight is a matter of expending more calories than one takes in and there is obviously no fad diet that can change this basic equation. Naturally, a good diet is also more than just a matter of calories-there is also the rather critical matter of nutrients (ironically, there are people who are both obese and malnourished at the same time). But, do not take my word for it-listen to the experts.
Although scientists and philosophers have speculated that time is not real (though they have never missed lunchtime on that basis), it certainly seems to be real enough as an opponent.
When I hit 40 and won my first Master’s award (Master=old), I started looking into the impact of aging on running. I had, of course, learned about aging back when I took anatomy and physiology, but this was a bit more real. While I will spare you the details, the gist of it is that once we humans hit our mid to late twenties, we start a slow spiral downwards (or rapid, depending on how one handles it). While everyone notices this, competitive runners tend to notice it more. This is not because we are somehow more realistic or more perceptive. Rather, it is the fact that we get to see the aging play out in cold, objective numbers as our times get slower and slower. There is also the subjective factor: runs seem to hurt more, one’s stride feels less snappy, and recovery seems to take longer. Or maybe gravity is just increasing in a selective manner-that is, under me.
Fortunately, there is some compensation for these harsh facts: running and exercise in general can be used to fight time. Running is especially effective at literally keeping the cells younger (no magic, just biology) which is why runners often look younger than they are (or, more aptly, other folks look older than they should). Exercise is also critical to resisting two major problems of aging: muscle and bone loss. Like an eroding sandbar, time eats away at the very makeup of our body. Fortunately, exercise that builds muscle and bone can slow down this loss, thus enabling the body to handle aging better. Exercise can also help with balance. Since falls tend to be a major threat to the elderly, building up your fall avoidance and resistance is a smart thing.
Exercise alone, as they say about losing weight, is not enough: diet is also important. When I was young, it mattered less what I ate (or so I thought). Being older, I have less margin of junk (so to speak), and I have had to change my diet to be significantly more healthy. What is actually pretty cool is that what I eat now is not only better for me, but it actually tastes better than much of what I used to eat. It does help that I am not a poor graduate student: eating well is not a luxury, but it is not as cheap as ramen and generic rice puff cereal.
My main goal is not to live really long (although I am fine with that) but to have a good life as long as possible. That seems to be something almost any of us can do, with a little planning and a lot of sweat.
In the end, however, time kills us all. But all races must end and the glory is in the running.
“With each puff, the victim inhaled polonium, unaware that her cigarette was killing her.” While this might sound like a line from a bad spy novel (no doubt featuring rogue former KGB agents), it is actually what smokers experience with each puff from a normal cigarette. Tobacco plants pick up polonium via contamination from fertilizer which is made from phosphate rock that happens to be rich in uranium. Some of this contamination comes through absorption via the roots and some comes from contamination via the leaves.
Obviously, people will point out that the radioactive material in tobacco is not the major threat. After all, tobacco is chock full of dangerous stuff and it is generally a bad idea to inhale any smoke. That is quite correct, but it is worth noting that the polonium makes tobacco even more dangerous. It is, in fact, estimated that it causes about 2% of smoking-caused lung cancers. This means that it kills thousands of people each year.
It is tempting to simply say that people know the risks and if they prefer to harm themselves, then that is their right. Laying aside the matter of second hand smoke and the fact that smokers often become health care burdens for the rest of us, there is also the fact that tobacco could be made less dangerous by reducing or eliminating the polonium. This can actually be done without undue hardship on the part of the tobacco industry.
In fact, the industry studied the matter for quite some time and found that changing fertilizer would have a significant impact as would using a different sort of filter. Also, something as simple as washing the tobacco leaves would have a significant effect (it might also remove other contaminants). Not surprisingly, the industry decided to stay quiet about its findings and elected to not actually address the polonium problem because “removal of these materials would have no commercial advantage.”
While my natural hatred of tobacco inclines me to advocate simply outlawing it, my moral principles require me to allow people to engage in self-harm under certain conditions (such as knowing what they are doing). As such, I have to oppose an actual ban on tobacco. I can, however, consistently support bans on public smoking (you can smoke, but I do not want to share your smoke). I can also support making tobacco less dangerous to those who smoke it on the grounds that this would enable them to get their enjoyment while harming themselves less. I am assuming that people who smoke do so for the allegedly enjoyable aspects of the drug use rather than the aspects that involve cancer and death. As such, I would infer that smokers would not object to having a product that would be somewhat less likely to hurt them. If they do smoke for the harm, then they can easily find substitutes, such as burning and inhaling plastic.
Tobacco companies might object to the cost of making their product less dangerous, but it would seem odd for them to claim they have a right to poison people with radiation when they could easily remove it. To use an analogy, imagine if cell phones actually gave off cancer causing radiation that caused thousands of deaths and that this flaw could be cheaply and easily rectified. It would seem to be unacceptable for companies to refuse to do so on the grounds that they would gain “no commercial advantage” and that it would cost them a little money to kill fewer people. The same would seem to hold for tobacco.
Anya Kamenetz recently wrote an article, “Bribing the Poor”, about Esther Duflo’s strategy of giving the poor incentives to be immunized. While the article mainly just reported on the practice, it did get me thinking about the ethics of this approach. But before getting to the moral matter, a little background is in order.
In developed countries, about 90% of children receive immunization. This has had a significant impact on the health of the population. In contrast, less-developed countries tend to have far lower immunization rates. For example, India has an overall rate of 44%, but specific areas have rates that drop to 22% or even 2%. While humans can have natural resistance to diseases, the lack of immunization means that people get sick (and sometimes die) needlessly.
Duflo focused on India, and hence the best information is available for that country. Duflo found that there were various obstacles to immunization. The first is that many clinics in the rural area Duflo studied were closed because the government-paid nurses did not show up for work. The second is superstition. Many people still believe in supernatural causes of illness, and such people will tend to not put much faith in immunization (unless, perhaps, it was presented as magic, something that Duflo did not propose). The third is that immunizations have an image problem. When they work, there is nothing to see. When they do not work or they cause a harmful effect, the results are visible and tend to stick in people's minds. People then tend to "reason" that immunizations are harmful in general, thus falling victim to misleading vividness, hasty generalization or the fallacy of anecdotal evidence. This is not, of course, confined to the developing world. In the United States, unfounded fears about vaccination causing autism caused people to forgo immunization for their children. Irrationality, like disease, is a global phenomenon. The fourth obstacle is that getting immunization can require effort. The fifth is that a clear and obvious incentive (other than avoiding disease) was not provided.
Duflo’s solution involved two parts. The first was aimed at making immunization easy. This was done by setting up camps in villages. To ensure that the nurses showed up, they were paid only when they did so. This provided the nurses with a financial incentive to actually do their jobs. Making it easier to get the shots boosted the rate of immunization from 2% to 18%.
The second part was aimed at giving people a clear incentive to get immunized. As many thinkers have noted, people tend to place less value on the future and also seem to find a negative (not getting a disease) less appealing than a positive (a gain, such as a gift). As such, the incentive to get an immunization that will prevent something from happening later will tend to be relatively low. However, an incentive that involves getting something right now will tend to be more effective. Duflo's solution was to offer a $1 bag of lentils as an incentive to get one's child immunized. This tactic increased the immunization rate from 2% to 38%, which is certainly a significant boost. As an added bonus, the overall cost was lower: the nurses are paid by the hour, so more people were immunized in less time.
While this seems like a very sensible approach, people on both the left and the right have attacked it as unethical (which might be taken as evidence in its favor).
People on the left tend to advance the argument that bribing the poor to get immunized is patronizing and paternalistic. To use an analogy, it could be compared to giving a child a treat so she will cooperate and get her shots. While this is fine with an actual child (children do not know better), it might well be regarded as condescending paternalism that casts the poor as children who must be bribed to do what a rational person would do without a bribe. This would seem to be wrong.
While this does have some appeal, it can be countered. One reply would be to follow John Stuart Mill’s view: “Despotism is a legitimate mode of government in dealing with barbarians, provided the end be their improvement, and the means justified by actually effecting that end.” Swap out “paternalism” for “despotism” and keep the appeal to consequences, and this would be a possible approach. After all, the good that is done for the children and others would seem to outweigh any harm done by giving people an incentive to get immunized.
A second reply is that this incentive approach need not be paternalistic. After all, offering people an incentive hardly seems to be inherently patronizing. To use an example, students might be offered extra credit to go to an event that would benefit them. This hardly seems paternalistic. Or, to use another example, companies often provide free stuff at expos to get people to look at their goods and services. That hardly seems patronizing. Another point worth considering is that people do not claim that paying the nurses to give the immunizations is patronizing. If paying the nurse to do her duty is not patronizing, then paying the people to do their social duty is not patronizing either.
On the right, the usual objection is that the poor should be responsible and should not be given a handout. As a moral argument it does have some appeal. After all, bribing someone to do what they should do because it is right does seem to be morally questionable (at least on some grounds). To use an analogy, if a person is given $1 when she tells the truth and tells the truth for the sake of the money, then she is not acting on the basis of morality. The person who bribes her might have good intentions, but s/he can be seen as acting wrongly, at least on some views. For example, Kant would regard this in a rather negative light: for him, people are supposed to do good out of a sense of duty rather than a desire for gain.
Despite the appeal, this can be countered in various ways. One obvious way is to argue on utilitarian grounds: handing out free lentils with the free immunizations ends up preventing the harms of illness and death. Put in the financial terms so beloved to the right, it is a good investment in terms of the money saved on later medical care and the worker productivity that would be lost to illness and death. A second way to argue it is that while the parents are being bribed to do the right thing, the folks on the right should be more worried about the children than the adults. While it might be wrong to bribe parents to get their children immunized, it would be worse to allow children to go without immunization. As such, while it might be claimed that the parents have acted wrongly, it would seem that the people doing the bribing have acted rightly. Finally, the folks on the right should appreciate the value of providing financial incentives to get people to do things. After all, that is what capitalism is all about.
In light of the above arguments, bribing the poor in this manner seems to be morally acceptable.