Modern agriculture does deserve considerable praise for the good that it does. Food is plentiful, relatively cheap and easy to acquire. Instead of having to struggle with raising crops and livestock or hunting and gathering, I can simply drive to the supermarket and stock up with the food I need to not die. However, as with all things, there is a price.
The modern agricultural complex is now highly centralized and industrialized, which does have its advantages and disadvantages. There are also the harms of specific, chosen practices aimed at maximizing profits. While there are many ways to maximize profits, two common ones are to pay the lowest wages possible (which the agricultural industry does—and not just to the migrant laborers, but to the ranchers and farmers) and to shift the costs to others. I will look, briefly, at one area of cost shifting: the widespread use of antibiotics in meat production.
While most people think of antibiotics as a means of treating diseases, food animals are now routinely given antibiotics when they are healthy. One reason for this is to prevent infections: factory farming techniques, as might be imagined, vastly increase the chances of a disease spreading like wildfire among an animal population. Antibiotics, it is claimed, can help reduce the risk of bacterial infections (antibiotics are useless against viruses, of course). A second reason is that antibiotics increase the growth rate of healthy animals, allowing them to pack on more meat in less time—and time is money. These uses allow the industry to continue factory farming and maintain high productivity—which initially seems laudable. The problem is, however, that this use of antibiotics comes with a high price that is paid for by everyone else.
Eric Schlosser wrote “A Safer Food Future, Now”, which appeared in the May 2016 issue of Consumer Reports. In this article, he notes that this practice has contributed significantly to the rise of antibiotic-resistant bacteria. Each year, about two million Americans are infected with resistant strains and about 20,000 die. The healthcare cost is about $20 billion. To be fair, the agricultural industry is not the only contributor to this problem: improper use of antibiotics in humans has also added to it. That said, the agricultural use of antibiotics accounts for about 75% of all antibiotic usage in the United States, thus converting the factory farms into breeding grounds for resistant bacteria.
The harmful consequences of this antibiotic use have been known for years and there have, not surprisingly, been attempts to address the problem through legislation. It should, however, come as little surprise that our elected leaders have failed to take action. One likely explanation is that lobbying on the part of the relevant corporations has been successful in preventing action. After all, there is a strong incentive on the part of industry to keep antibiotics in use: this increases profits by enabling factory farming and the faster growth of animals. That said, it could be contended that lawmakers are ignorant of the harms, doubt there are harms from antibiotics, or honestly believe that the harms arising from their use are outweighed by the benefits to society. That is, the lawmakers have credible reasons other than straight-up political bribery (or “lobbying,” as it is known in polite company). This is a factual matter, albeit one that is difficult to settle: no professional politician who has been swayed by lobbying will attribute her decision to anything but the purest of motivations.
This matter is certainly one of ethical concern and, like most large scale ethical matters that involve competing interests, is one that seems best approached by utilitarian considerations. On the side of using the antibiotics, there is the increased productivity (and profits) of the factory farming system of producing food. This allows more and cheaper food to be provided to the population, which can be regarded as pluses. The main reason not to use the antibiotics, as noted above, is that they contribute to the creation of antibiotic resistant strains that sicken and kill many people (vastly more Americans than are killed by terrorism). This inflicts considerable costs on the sickened and those who are killed as well as those who care about them. There are also the monetary costs in the health care system (although the increased revenue can be tagged as a plus for health care providers). In addition to these costs, there are also other social and economic costs, such as lost hours of work. As this indicates, the cost (illness, death, etc.) of the use of the antibiotics is shifted: the industry does not pay these costs; rather, they are paid by everyone else.
Using a utilitarian calculation requires weighing the cost to the general population against the profits of the industry and the claimed benefits to the general population. Put roughly, the moral question is whether the improved profits and greater food production outweigh the illness, deaths and costs suffered by the public. The people in the government seem to believe that the answer is “yes.”
If the United States were in a food crisis in which the absence of the increased productivity afforded by antibiotics would cause more suffering and death than their presence, then their use would be morally acceptable. However, this does not seem to be the case—while banning this sort of antibiotic use would decrease productivity (and impact profits), the harm of doing this would seem to be vastly exceeded by the reduction in illness, deaths and health care costs. However, if an objective assessment of the matter showed that the ban on antibiotics would not create more benefits than harms, then it would be reasonable and morally acceptable to continue to use them. This is partially a matter of value (in terms of how the harms and benefits are weighted) and partially an objective matter (in terms of monetary and health costs). I am inclined to agree that the general harm of using the antibiotics exceeds the general benefits, but I could be convinced otherwise by objective data.
All professions have their problem members and the field of medicine is no exception. Fortunately, the percentage of bad doctors is rather low—but this small percentage can do considerable harm. After all, when your professor is incompetent, you might not learn as much as you should. If your doctor is incompetent, she could kill you.
The May 2016 issue of Consumer Reports includes a detailed article by Rachel Rabkin Peachman covering the subject of bad doctors and the difficulty patients face in learning whether a physician is a good doctor or a disaster.
Based on the research in the article, there are three main problems. The first is that there are bad doctors. The article presents numerous examples to add color to the dry statistics, and these include such tales of terror as doctors molesting patients, doctors removing healthy body parts, and patient deaths due to negligence, impairment or incompetence. These are all obvious moral and professional failings on the part of the doctors, and they should clearly not be engaged in such misdeeds.
The second is that, according to Peachman, the disciplinary actions taken by the profession tend to be rather less than ideal. While doctors should enjoy the protection of due process, the hurdles are, perhaps, too high. There is also the problem that the responses to the misdeeds are often very mild. For example, a doctor whose negligence has resulted in the death of patients can be allowed to keep practicing with only minor limitations. As another example, a doctor who has engaged in sexual misconduct might continue practicing after a class or two on ethics and with the requirement that someone else be present when he is seeing patients. In addition to the practical concerns about this, there is also the moral concern that the disciplinary boards are failing to protect patients.
One possible argument against harsher punishments is that there is a shortage of doctors and taking a doctor out of practice would have worse consequences than allowing a bad doctor to keep practicing. This would be the basis for a utilitarian argument for continuing mild punishments. Crudely put, it is better to have a doctor who might kill a patient or two than no doctor at all.
This argument does have some appeal. However, there is the factual question of whether or not the mild punishments do more good than harm. If they do, then one would need to accept that this approach is morally tolerable. If not, then the argument would fail. There is also the response that consequences are not what matters—people should be reprimanded based on their misdeeds and not based on some calculation of utility. This also has some intuitive appeal.
It could also be argued that it should be left to patients to judge if they want to take the risk. If a doctor is known for sexual misdeeds with female patients but is fine with male patients, then a man who has few or no other options might decide that the doctor is his best choice. This leads to the third problem.
The third problem is that it is very difficult for patients to learn about bad doctors. While there is a National Practitioner Data Bank (NPDB), it is off limits to patients and is limited to people in law enforcement, hospital administration, insurance and a few other groups.
The main argument advanced against allowing public access to the NPDB is based on the premise that it contains inaccurate information which could be harmful to innocent doctors. Interestingly enough, this makes it similar to credit report data, which is notorious for containing harmful inaccuracies that can plague people.
While the possibility of incorrect data is a matter of concern, that premise best supports the conclusion that the NPDB should be reviewed regularly to ensure that the information is accurate. While perfect accuracy is not possible, it would seem to be well within the realm of possibility for the information to meet a reasonable standard of accuracy. This could be aided by providing robust tools for doctors to inform those running the NPDB of errors and to inform doctors about the content of their files. As such, the error argument is easily defeated.
Patients do have some access to data about doctors, but there are many barriers in place. In some cases, there is a financial cost to access data. In almost all cases, the patient will need to grind through lengthy documents and penetrate the code of legalese. There is also the fact that this data is often incomplete and inaccurate. While it could be argued that a responsible patient would expend the resources needed to research a doctor, this seems to be an unreasonable request—a patient should not need to do all this just to know that the doctor is competent. A reason for this is that a patient might be in rough shape and expecting her to engage in all this work would seem unfair. There is also the fact that one legitimate role of the state is to protect citizens from harm and having a clear means of identifying bad doctors would seem to fall within this.
Given the above, it seems reasonable to accept that a patient has the right to know about her doctor’s competence and should have an easy means of acquiring accurate information. This enables a patient to make an informed choice about her physician without facing an undue burden. This will also help the profession—good doctors will attract more patients and bad doctors will have a greater incentive to improve their practice.
While my husky, Isis, and I have both slowed down since we teamed up in 2004, she is doing remarkably well these days. As I often say, pulling so many years will slow down man and dog. While Isis faced a crisis, most likely due to the wear of time on her spine, the steroids seem to have addressed the pain and inflammation, so we have resumed our usual adventures. Tail up and bright eyed is the way she is now and the way she should be.
In my previous essay I looked at using causal reasoning on a small scale by applying the methods of difference and agreement. In this essay I will look at thinking critically about experiments and studies.
The gold standard in science is the controlled cause to effect experiment. The objective of this experiment is to determine the effect of a cause. As such, the question is “I wonder what this does?” While the actual conducting of such an experiment can be complicated and difficult, the basic idea is rather simple. The first step is to have a question about a causal agent. For example, it might be wondered what effect steroids have on arthritis in elderly dogs. The second step is to determine the target population, which might already be taken care of in the first step—for example, elderly dogs would be the target population. The third step is to pull a random sample from the target population. This sample needs to be representative (that is, it needs to be like the target population and should ideally be a perfect match in miniature). For example, a sample from the population of elderly dogs would ideally include all breeds of dogs, male dogs, female dogs, and so on for all relevant qualities of dogs. The problem with a biased sample is that the inference drawn from the experiment will be weak because the sample might not be adequately like the general population. The sample also needs to be large enough—a sample that is too small will also fail to adequately support the inference drawn from the experiment.
The fourth step involves splitting the sample into the control group and the experimental group. These groups need to be as similar as possible (and can actually be made of the same individuals). The reason they need to be alike is because in the fifth step the experimenters introduce the cause (such as steroids) to the experimental group and the experiment is run to see what difference this makes between the two groups. The final step is getting the results and determining if the difference is statistically significant. This occurs when the difference between the two groups can be confidently attributed to the presence of the cause (as opposed to chance or other factors). While calculating this properly can be complicated, when assessing an experiment (such as a clinical trial) it is easy enough to compare the number of individuals in the sample to the difference between the experimental and control groups. This handy table from Critical Thinking makes this quite easy and also shows the importance of having a large enough sample.
[The table lists the number in the experimental group (with a similarly sized control group) alongside the approximate figure, in percentage points, that the difference must exceed to be statistically significant.]
Many “clinical trials” mentioned in articles and blog posts have very small sample sizes and this often makes their results meaningless. This table also shows why anecdotal evidence is fallacious: a sample size of one is all but completely useless when it comes to an experiment.
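The lesson about sample size can be made concrete with a quick calculation. The sketch below is my own illustration (the group sizes, recovery counts, and the use of a simple two-proportion z-test are assumptions for the example, not figures from the table): it checks whether the difference between an experimental group and an equally sized control group is statistically significant.

```python
import math

def significant_difference(hits_exp, hits_ctrl, n, alpha=0.05):
    """Rough two-proportion z-test: is the difference between an
    experimental group and an equally sized control group (n each)
    statistically significant at level alpha?"""
    p_exp, p_ctrl = hits_exp / n, hits_ctrl / n
    pooled = (hits_exp + hits_ctrl) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * (2 / n))
    if se == 0:
        return False  # no variation at all, so no detectable difference
    z = abs(p_exp - p_ctrl) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return p_value < alpha

# The same 30-point difference that means nothing with 10 dogs per
# group is clearly significant with 100 per group.
print(significant_difference(8, 5, 10))     # False
print(significant_difference(80, 50, 100))  # True
```

The point is the one the table makes: the smaller the groups, the larger the difference must be before chance can be confidently ruled out.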
The above table also assumes that the experiment is run correctly: the sample was representative, the control group was adequately matched to the experimental group, the experimenters were not biased, and so on for all the relevant factors. As such, when considering the results of an experiment it is important to consider those factors as well. If, for example, you are reading an article about an herbal supplement for arthritic dogs and it mentions a clinical trial, you would want to check on the sample size, the difference between the two groups and determine whether the experiment was also properly conducted. Without this information, you would need to rely entirely on the credibility of the source. If the source is credible and claims that the experiment was conducted properly, then it would be reasonable to trust the results. If the source’s credibility is in question, then trust should be withheld. Assessing credibility is a matter of determining expertise and the goal is to avoid being a victim of a fallacious appeal to authority. Here is a short checklist for determining whether a person (or source) is an expert or not:
- The person has sufficient expertise in the subject matter in question.
- The claim being made by the person is within her area(s) of expertise.
- There is an adequate degree of agreement among the other experts in the subject in question.
- The person in question is not significantly biased.
- The area of expertise is a legitimate area or discipline.
- The authority in question must be identified.
While the experiment is the gold standard, there are times when it cannot be used. In some cases, this is a matter of ethics: exposing people or animals to something potentially dangerous might be deemed morally unacceptable. In other cases, it is a matter of practicality or necessity. In such cases, studies are used.
One type of study is the non-experimental cause to effect study. This is identical to the cause to effect experiment with one rather critical difference: the experimental group is not exposed to the cause by those running the study. For example, a study might be conducted of dogs who recovered from Lyme disease to see what long term effects it has on them.
The study, as would be expected, runs in the same basic way as the experiment and if there is a statistically significant difference between the two groups (and it has been adequately conducted) then it is reasonable to make the relevant inference about the effect of the cause in question.
While useful, this sort of study is weaker than the experiment. This is because those conducting the study have to take what they get—the experimental group is already exposed to the cause and this can create problems in properly sorting out the effect of the cause in question. As such, while a properly run experiment can still get erroneous results, a properly run study is even more likely to have issues.
A second type of study is the effect to cause study. It differs from the cause to effect experiment and study in that the effect is known but the cause is not. Hence, the goal is to infer an unknown cause from the known effect. It also differs from the experiment in that those conducting the study obviously do not introduce the cause.
This study is conducted by comparing the experimental group and the control group (which are, ideally, as similar as possible) to sort out a likely cause by considering the differences between them. As would be expected, this method is far less reliable than the others since those doing the study are trying to backtrack from an effect to a cause. If considerable time has passed since the suspected cause, this can make the matter even more difficult to sort out. Those conducting the study also have to work with the experimental group they happen to get and this can introduce many complications into the study, making a strong inference problematic.
An example of this would be a study of elderly dogs who suffer from paw knuckling (the paw flips over so the dog is walking on the top of the paw) to determine the cause of this effect. As one might suspect, finding the cause would be challenging—there would be a multitude of potential causes in the history of the dogs ranging from injury to disease. It is also quite likely that there are many causes in play here, and this would require sorting out the different causes for this same effect. Because of such factors, the effect to cause study is the weakest of the three and supports the lowest level of confidence in its results even when conducted properly. This explains why it can be so difficult for researchers to determine the causes of many problems that, for example, elderly dogs suffer from.
In the case of Isis, the steroids that she is taking have been well-studied, so it is quite reasonable for me to believe that they are a causal factor in her remarkable recovery. I do not, however, know for sure what caused her knuckling—there are so many potential causes for that effect. However, the important thing is that she is now walking normally about 90% of the time and her tail is back in the air, showing that she is a happy husky.
As mentioned in my previous essay, Isis (my Siberian husky) fell victim to the ravages of time. Once a fast sprinting and long running blur of fur, she now merely saunters along. Still, lesser beasts fear her (and to a husky, all creatures are lesser beasts) and the sun is warm—so her life is still good.
Faced with the challenge of keeping her healthy and happy, I have relied a great deal on what I learned as a philosopher. As noted in the preceding essay, I learned to avoid falling victim to the post hoc fallacy and the fallacy of anecdotal evidence. In this essay I will focus on two basic, but extremely useful methods of causal reasoning.
One of the most useful tools for causal reasoning is the method of difference. This method was famously developed by the philosopher John Stuart Mill and has been a staple in critical thinking classes since way before my time. The purpose of the method is figuring out the cause of an effect, such as a husky suffering from a knuckling paw (a paw that folds over, so the dog is walking on the top of the foot rather than the bottom). The method can also be used to try to sort out the effect of a suspected cause, such as the efficacy of an herbal supplement in treating canine arthritis.
Fortunately, the method is quite simple. To use it, you need at least two cases: one in which the effect has occurred and one in which it has not. In terms of working out the cause, more cases are better—although more cases of something bad (like arthritis pain) would certainly be undesirable from other standpoints. The two cases can actually involve the same individual at different times—it need not be different individuals (though it also works in those cases as well). For example, when sorting out Isis’ knuckling problem the case in which the effect occurred was when Isis was suffering from knuckling and the case in which it did not was when Isis was not suffering from this problem. I also looked into other cases in which dogs suffered from knuckling issues and when they did not.
The cases in which the effect is present and those in which it is absent are then compared in order to determine the difference between the cases. The goal is to sort out which factor or factors made the difference. When doing this, it is important to keep in mind that it is easy to fall victim to the post hoc fallacy—to conclude without adequate evidence that a difference is a cause because the effect occurred after that difference. Avoiding this mistake requires considering that the “connection” between the suspected cause and the effect might be purely a matter of coincidence. For example, Isis ate some peanut butter the day she started knuckling, but it is unlikely that had any effect—especially since she has been eating peanut butter her whole life. It is also important to consider that an alleged cause might actually be an effect caused by a factor that is also producing the effect one is concerned about. For example, a person might think that a dog’s limping is causing the knuckling, but they might both be effects of a third factor, such as arthritis or nerve damage. You must also keep in mind the possibility of reversed causation—that the alleged cause is actually the effect. For example, a person might think that the limping is causing the knuckling, but it might turn out that the knuckling is the cause of the limping.
In some cases, sorting out the cause can be very easy. For example, if a dog slips and falls, then has trouble walking, then the most likely cause is the fall (but it could still be something else—perhaps the fall and walking trouble were caused by something else). In other cases, sorting out the cause can be very difficult. It might be because there are many possible causal factors. For example, knuckling can be caused by many things (apparently even Lyme disease). It might also be because there are no clear differences (such as when a dog starts limping with no clear preceding event). One useful approach is to do research using reliable sources. Another, which is a good idea with pet problems, is to refer to an expert—such as a vet. Medical tests, for example, are useful for sorting out the difference and finding a likely cause.
The same basic method can also be used in reverse, such as determining the effectiveness of a dietary supplement for treating canine arthritis. For example, when Isis started slowing down and showing signs of some soreness, I started giving her senior dog food, glucosamine and some extra protein. What followed was an improvement in her mobility and the absence of the signs of soreness. While the change might have been a mere coincidence, it is reasonable to consider that one or more of these factors helped her. After all, there is some scientific evidence that diet can have an influence on these things. From a practical standpoint, I decided to keep to this plan since the cost of the extras is low, they have no harmful side effects, and there is some indication that they work. I do consider that I could be wrong. Fortunately, I do have good evidence that the steroids Isis has been prescribed work—she made a remarkable improvement after starting the steroids and there is solid scientific evidence that they are effective at treating pain and inflammation. As such, it is rational to accept that the steroids are the cause of her improvement—though this could also be a coincidence.
The second method is the method of agreement. Like difference, this requires at least two cases. Unlike difference, the effect is present in all the cases. In this method, the cases exhibiting the effect (such as knuckling) are considered in order to find a common thread in all the cases. For example, each incident of knuckling would be examined to determine what they all have in common. The common factor (or factors) that is the most plausible cause of the effect is what should be taken as the likely cause. As with the method of difference, it is important to consider such factors as coincidence so as to avoid falling into a post hoc fallacy.
The method of agreement is most often used to form a hypothesis about a likely cause. The next step is, if possible, to apply the method of difference by comparing similar cases in which the effect did not occur. Roughly put, the approach would be to ask what all the cases have in common, then determine if that common factor is absent in cases in which the effect is also absent. For example, a person investigating knuckling might begin by considering what all the knuckling cases have in common and then see if that common factor is absent in cases in which knuckling did not occur.
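This combined approach amounts to simple set logic, and a toy sketch may make that clear. The cases and factors below are hypothetical examples of my own invention, not data from any actual dog:

```python
def method_of_agreement(effect_cases):
    """Factors common to every case in which the effect occurred."""
    common = set(effect_cases[0])
    for case in effect_cases[1:]:
        common &= set(case)
    return common

def method_of_difference(effect_factors, no_effect_case):
    """Factors present when the effect occurred but absent when it did not."""
    return set(effect_factors) - set(no_effect_case)

# Hypothetical cases of knuckling, each described by its observed factors.
knuckling_cases = [
    {"old age", "arthritis", "slippery floor"},
    {"old age", "arthritis", "long walk"},
]
# A hypothetical, similar case in which no knuckling occurred.
healthy_case = {"old age", "long walk"}

# Agreement narrows to what all knuckling cases share ("old age", "arthritis");
# difference then removes what the healthy case also has.
candidates = method_of_agreement(knuckling_cases)
candidates = method_of_difference(candidates, healthy_case)
print(candidates)  # {'arthritis'}
```

Real cases rarely sort out this cleanly, which is why the methods are best treated as ways of generating hypotheses rather than as proofs.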
One of the main weaknesses of these methods is that they tend to have very small sample sizes—sometimes just one individual, such as my husky. While these methods are quite useful, they can be supplemented by general causal reasoning in the form of experiments and studies—the subject of the next essay in this series.
As part of my critical thinking class, I cover the usual topics of credibility and experiments/studies. Since people often find critical thinking a dull subject, I regularly look for real-world examples that might be marginally interesting to students. As such, I was intrigued by John Bohannon’s detailed account of how he “fooled millions into thinking chocolate helps weight loss.”
Bohannon’s con provides an excellent cautionary tale for critical thinkers. First, he lays out in detail how easy it is to rig an experiment to get (apparently) significant results. As I point out to my students, a small experiment or study can generate results that seem significant, but really are not. This is why it is important to have an adequate sample size—as a starter. What is also needed is proper control, proper selection of the groups, and so on.
Second, he provides a clear example of a disgraceful stain on academic publishing, namely “pay to publish” journals that do not engage in legitimate peer review. While some bad science does slip through peer review, these journals apparently publish almost anything—provided that the fee is paid. Since the journals have reputable sounding names and most people do not know which journals are credible and which are not, it is rather easy to generate a credible seeming journal publication. This is why I cover the importance of checking sources in my class.
Third, he details how various news outlets published or posted the story without making even perfunctory efforts to check its credibility. Not surprisingly, I also cover the media in my class both from the standpoint of being a journalist and being a consumer of news. I stress the importance of confirming credibility before accepting claims—especially when doing so is one’s job.
While Bohannon’s con does provide clear evidence of problems in regards to corrupt journals, uncritical reporting and consumer credulity, the situation does raise some points worth considering. One is that while he might have “fooled millions” of people, he seems to have fooled relatively few journalists (13 out of about 5,000 reporters who subscribe to the Newswise feed Bohannon used), and these seem to be more along the lines of the Huffington Post and Cosmopolitan than what might be regarded as more serious health news sources. While it is not known why the other reporters did not run the story, it is worth considering that some of them did look at it critically and rejected it. In any case, the fact that a small number of reporters fell for a dubious story is hardly shocking. It is, in fact, just what would be expected given the long history of journalism.
Another point of concern is the ethics of engaging in such a con. It is possible to argue that Bohannon acted ethically. One way to do this is to note that using deceit to expose a problem can be justified on utilitarian grounds. For example, it seems morally acceptable for a journalist or police officer to use deceit and go undercover to expose criminal activity. As such, Bohannon could contend that his con was effectively an undercover operation—he and his fellows pretended to be the bad guys to expose a problem and thus his deceit was morally justified by the fact that it exposed problems.
One obvious objection to this is that Bohannon’s deceit did not just expose corrupt journals and incautious reporters. It also misinformed the audience who read or saw the stories. To be fair, the harm would certainly be fairly minimal—at worst, people who believed the story would consume dark chocolate and this is not exactly a health hazard. However, intentionally spreading such misinformation seems morally problematic—especially since story retractions or corrections tend to get far less attention than the original story.
One way to counter this objection is to draw an analogy to the exposure of flaws by hackers. These hackers reveal vulnerabilities in software with the stated intent of forcing companies to address the vulnerabilities. Exposing such vulnerabilities can do some harm by informing the bad guys, but the usual argument is that this is outweighed by the good done when the vulnerability is fixed.
While this does have some appeal, there is the concern that the harm done might not outweigh the good done. In Bohannon’s case it could be argued that he has done more harm than good. After all, it is already well-established that the “pay to publish” journals are corrupt, that there are incautious journalists and credulous consumers. As such, Bohannon has not exposed anything new—he has merely added more misinformation to the pile.
It could be countered that although these problems are well known, it does help to continue to bring them to the attention of the public. Going back to the analogy of software vulnerabilities, it could be argued that if a vulnerability is exposed, but nothing is done to patch it, then the problem should be brought up until it is fixed, “for it is the doom of men that they forget.” Bohannon has certainly brought these problems into the spotlight and this might do more good than harm. If so, then this con would be morally acceptable—at least on utilitarian grounds.
The United States recently saw an outbreak of the measles (644 cases in 27 states) with the overwhelming majority of victims being people who had not been vaccinated. Critics of the anti-vaccination movement have pointed to this as clear proof that the movement is not only misinformed but also actually dangerous. Not surprisingly, those who take the anti-vaccination position are often derided as stupid. After all, there is no evidence that vaccines cause the harms that the anti-vaccination people refer to when justifying their position. For example, one common claim is that vaccines cause autism, but this seems to be clearly untrue. There is also the fact that vaccinations have been rather conclusively shown to prevent diseases (though not perfectly, of course).
It is, of course, tempting for those who disagree with the anti-vaccination people to dismiss them uniformly as stupid people who lack the brains to understand science. This, however, is a mistake. One reason it is a mistake is purely pragmatic: those who are pro-vaccination want the anti-vaccination people to change their minds and calling them stupid, mocking and insulting them will merely cause them to entrench. Another reason it is a mistake is that the anti-vaccination people are not, in general, stupid. There are, in fact, grounds for people to be skeptical or concerned about matters of health and science. To show this, I will briefly present some points of concern.
One point of rational concern is the fact that scientific research has been plagued with a disturbing amount of corruption, fraud and errors. For example, the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those behind the promotion of antioxidants and omega-3, have been shown to be riddled with inaccuracies. As such, it is hardly stupid to be concerned that scientific research might not be accurate. Somewhat ironically, the study that started the belief that vaccines cause autism is a paradigm example of bad science. However, it is not stupid to consider that the studies that show vaccines are safe might have flaws as well.
Another matter of concern is the influence of corporate lobbyists on matters relating to health. For example, the dietary guidelines and recommendations set forth by the United States Government should be set on the basis of the best science. However, the reality is that these matters are influenced quite strongly by industry lobbyists, such as the dairy industry. Given the influence of the corporate lobbyists, it is not foolish to think that the recommendations and guidelines given by the state might not be quite right.
A third point of concern is the fact that the dietary and health guidelines and recommendations undergo what seems to be relentless and unwarranted change. For example, the government has warned us of the dangers of cholesterol for decades, but this recommendation is being changed. It would, of course, be one thing if the changes were the result of steady improvements in knowledge. However, the recommendations often seem to lack a proper foundation. John P.A. Ioannidis, a professor of medicine and statistics at Stanford, has noted “Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome. In this literature of epidemic proportions, how many results are correct?” Given such criticism from experts in the field, it hardly seems stupid of people to have doubts and concerns.
There is also the fact that people do suffer adverse drug reactions that can lead to serious medical issues and even death. While the reported numbers vary (one FDA page puts the number of deaths at 100,000 per year) this is certainly a matter of concern. In an interesting coincidence, I was thinking about this essay while watching the Daily Show on Hulu this morning and one of my “ad experiences” was for Januvia, a diabetes drug. As required by law, the ad mentioned all the side effects of the drug and these include some rather serious things, including death. Given that the FDA has approved drugs with dangerous side effects, it is hardly stupid to be concerned about the potential side effects from any medicine or vaccine.
Given the above points, it would certainly not be stupid to be concerned about vaccines. At this point, the reader might suspect that I am about to defend an anti-vaccine position. I will not—in fact, I am a pro-vaccination person. This might seem somewhat surprising given the points I just made. However, I can rationally reconcile these points with my position on vaccines.
The above points do show that there are rational grounds for taking a general critical and skeptical approach to matters of health, medicine and science. However, this general skepticism needs to be properly rational. That is, it should not be a rejection of science but rather the adoption of a critical approach to these matters in which one considers the best available evidence, assesses experts by the proper standards (those of a good argument from authority), and so on. Also, it is rather important to note that the general skepticism does not automatically justify accepting or rejecting specific claims. For example, the fact that there have been flawed studies does not prove that the specific studies about vaccines are flawed. As another example, the fact that lobbyists influence the dietary recommendations does not prove that vaccines are harmful drugs being pushed on Americans by greedy corporations. As a final example, the fact that some medicines have serious and dangerous side effects does not prove that the measles vaccine is dangerous or causes autism. Just as one should be rationally skeptical about pro-vaccination claims one should also be rationally skeptical about anti-vaccination claims.
To use an obvious analogy, it is rational to have a general skepticism about the honesty and goodness of people. After all, people do lie and there are bad people. However, this general skepticism does not automatically prove that a specific person is dishonest or evil—that is a matter that must be addressed on the individual level.
To use another analogy, it is rational to have a general concern about engineering. After all, there have been plenty of engineering disasters. However, this general concern does not warrant believing that a specific engineering project is defective or that engineering itself is defective. The specific project would need to be examined and engineering is, in general, the most rational approach to building stuff.
So, the people who are anti-vaccine are not, in general, stupid. However, they do seem to be making the mistake of not rationally considering the specific vaccines and the evidence for their safety and efficacy. It is quite rational to be concerned about medicine in general, just as it is rational to be concerned about the honesty of people in general. However, just as one should not infer that a friend is a liar because there are people who lie, one should not infer that a vaccine must be bad because there is bad science and bad medicine.
Convincing anti-vaccination people to accept vaccination is certainly challenging. One reason is that the issue has become politicized into a battle of values and identity. This is partially due to the fact that the anti-vaccine people have been mocked and attacked, thus leading them to entrench and double down. Another reason is that, as argued above, they do have well-founded concerns about the trustworthiness of the state, the accuracy of scientific studies, and the goodness of corporations. A third reason is that people tend to give more weight to the negative and also tend to weigh potential loss more than potential gain. As such, people would tend to give more weight to negative reasons against vaccines and fear the alleged dangers of vaccines more than they would value their benefits.
Given the importance of vaccinations, it is rather critical that the anti-vaccination movement be addressed. Calling people stupid, mocking them and attacking them are certainly not effective ways of convincing people that vaccines are generally safe and effective. A more rational and hopefully more effective approach is to address their legitimate concerns and consider their fears. After all, the goal should be the health of people and not scoring points.
Kaci Hickox, a nurse from my home state of Maine, returned to the United States after serving as a health care worker in the Ebola outbreak. Rather than being greeted as a hero, she was confined to an unheated tent with a box for a toilet and no shower. She did not have any symptoms and tested negative for Ebola. After threatening a lawsuit, she was released and allowed to return to Maine. After arriving home, she refused to be quarantined again. She did, however, state that she would be following the CDC protocols. Her situation puts a face on a general moral concern, namely the ethics of balancing rights with safety.
While past outbreaks of Ebola in Africa were met largely with indifference from the West (aside from those who went to render aid, of course), the current outbreak has infected the United States with a severe case of fear. Some folks in the media have fanned the flames of this fear knowing that it will attract viewers. Politicians have also contributed to the fear. Some have worked hard to make Ebola into a political game piece that will allow them to bash their opponents and score points by appeasing fears they have helped create. Because of this fear, most Americans have claimed they support a travel ban in regards to Ebola infected countries and some states have started imposing mandatory quarantines. While it is to be expected that politicians will often pander to the fears of the public, the ethics of the matter should be considered rationally.
While Ebola is scary, the basic “formula” for sorting out the matter is rather simple. It is an approach that I use for all situations in which rights (or liberties) are in conflict with safety. The basic idea is this. The first step is sorting out the level of risk. This includes determining the probability that the harm will occur as well as the severity of the harm (both in quantity and quality). In the case of Ebola, the probability that someone will get it in the United States is extremely low. As the actual experts have pointed out, infection requires direct contact with bodily fluids while a person is infectious. Even then, the infection rate seems relatively low, at least in the United States. In terms of the harm, Ebola can be fatal. However, timely treatment in a well-equipped facility has been shown to be very effective. In terms of the things that are likely to harm or kill an American in the United States, Ebola is near the bottom of the list. As such, a rational assessment of the threat is that it is a small one in the United States.
The second step is determining key facts about the proposals to create safety. One obvious concern is the effectiveness of the proposed method. As an example, the 21-day mandatory quarantine would be effective at containing Ebola. If someone shows no symptoms during that time, then she is almost certainly Ebola free and can be released. If a person shows symptoms, then she can be treated immediately. An alternative, namely tracking and monitoring people rather than locking them up, would also be fairly effective—it has worked so far. However, there are the worries that this method could fail—bureaucratic failures might happen or people might refuse to cooperate. A second concern is the cost of the method in terms of both practical costs and other consequences. In the case of the 21-day quarantine, there are the obvious economic and psychological costs to the person being quarantined. After all, most people will not be able to work from quarantine and the person will be isolated from others. There is also the cost of the quarantine itself. In terms of other consequences, it has been argued that imposing this quarantine will discourage volunteers from going to help out and this will be worse for the United States. This is because it is best for the rest of the world if Ebola is stopped in Africa and this will require volunteers from around the world. In the case of the tracking and monitoring approach, there would be a cost—but far less than a mandatory quarantine.
From a practical standpoint, assessing a proposed method of safety is a utilitarian calculation: does the risk warrant the cost of the method? To use some non-Ebola examples, every aircraft could be made as safe as Air Force One, every car could be made as safe as a NASCAR vehicle, and all guns could be taken away to prevent gun accidents and homicides. However, we have decided that the cost of such safety would be too high and hence we are willing to allow some number of people to die. In the case of Ebola, the calculation is a question of considering the risk presented against the effectiveness and cost of the proposed method. Since I am not a medical expert, I am reluctant to make a definite claim. However, the medical experts do seem to hold that the quarantine approach is not warranted in the case of people who lack symptoms and test negative.
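The utilitarian calculation sketched above can be expressed as a toy model: expected harm is the probability of the harm times its severity, and a safety measure is warranted when the harm it averts exceeds its cost. This is only an illustrative sketch; the function names and every number below are made up for the example, not actual estimates of Ebola risk.

```python
# A minimal sketch of the utilitarian calculation described above.
# All numbers are illustrative placeholders, not real estimates.

def expected_harm(probability: float, severity: float) -> float:
    """Expected harm = probability of the harm occurring x its severity."""
    return probability * severity

def safety_measure_warranted(risk_probability: float,
                             harm_severity: float,
                             measure_effectiveness: float,
                             measure_cost: float) -> bool:
    """A measure is warranted when the harm it averts exceeds its cost."""
    harm_averted = expected_harm(risk_probability, harm_severity) * measure_effectiveness
    return harm_averted > measure_cost

# Illustrative only: a tiny infection probability means even a highly
# effective but costly measure (such as a mandatory quarantine) fails the test.
print(safety_measure_warranted(0.000001, 1_000_000, 0.99, 50_000))  # → False
```

The point of the sketch is the structure of the reasoning, not the numbers: disagreements about quarantines are, on this model, disagreements about the inputs (how probable and how severe the harm is, and how costly the measure is), not about the formula.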
The third concern is the moral concern. Sorting out the moral aspect involves weighing the practical concerns (risk, effectiveness and cost) against the right (or liberty) in question. Some also include the legal aspects of the matter here as well, although law and morality are distinct (except, obviously, for those who are legalists and regard the law as determining morality). Since I am not a lawyer, I will leave the legal aspects to experts in that area and focus on the ethics of the matter.
When working through the moral aspect of the matter, the challenge is determining whether or not the practical concerns morally justify restricting or even eliminating rights (or liberties) in the name of safety. This should, obviously enough, be based on consistent principles in regards to balancing safety and rights. Unfortunately, people tend to be wildly inconsistent in this matter. In the case of Ebola, some people have expressed the “better safe than sorry” view and have elected to impose or support mandatory quarantines at the expense of the rights and liberties of those being quarantined. In the case of gun rights, these are often taken as trumping concerns about safety. The same holds true of the “right” or liberty to operate automobiles: tens of thousands of people die each year on the roads, yet any proposal to deny people this right would be rejected. In general, people assess these matters based on feelings, prejudices, biases, ideology and other non-rational factors—this explains the lack of consistency. So, people are willing to impose on basic rights for little or no gain to safety, while also being content to refuse even modest infringements in matters that result in great harm. However, there are also legitimate grounds for differences: people can, after due consideration, assess the weight of rights against safety very differently.
Turning back to Ebola, the main moral question is whether or not the safety gained by imposing the quarantine (or travel ban) would justify denying people their rights. In the case of someone who is infectious, the answer would seem to be “yes.” After all, the harm done to the person (being quarantined) is greatly exceeded by the harm that would be inflicted on others by his putting them at risk of infection. In the case of people who are showing no symptoms, who test negative and who are relatively low risk (no known specific exposure to infection), then a mandatory quarantine would not be justified. Naturally, some would argue that “it is better to be safe than sorry” and hence the mandatory quarantine should be imposed. However, if it was justified in the case of Ebola, it would also be justified in other cases in which imposing on rights has even a slight chance of preventing harm. This would seem to justify taking away private vehicles and guns: these kill more people than Ebola. It might also justify imposing mandatory diets and exercise on people to protect them from harm. After all, poor health habits are major causes of health issues and premature deaths. To be consistent, if imposing a mandatory quarantine is warranted on the grounds that rights can be set aside even when the risk is incredibly slight, then this same principle must be applied across the board. This seems rather unreasonable and hence the mandatory quarantine of people who are not infectious is also unreasonable and not morally acceptable.
In science fiction stories, movies and games, automated medical services are quite common. Some take the form of autodocs—essentially an autonomous robotic pod that treats the patient within its confines. Medbots, as distinct from the autodoc, are robots that do not enclose the patient, but do their work in a way similar to a traditional doctor or medic. There are also non-robotic options using remote-controlled machines—this would be an advanced form of telemedicine in which the patient can actually be treated remotely. Naturally, robots can be built that can be switched from robotic (autonomous) to remote controlled mode. For example, a medbot might gather data about the patient and then a human doctor might take control to diagnose and treat the patient.
One of the main and morally commendable reasons to create medical robots and telemedicine capabilities is to provide treatment to people in areas that do not have enough human medical professionals. For example, a medical specialist who lives in the United States could diagnose and treat patients in a remote part of the world using a suitable machine. With such machines, a patient could (in theory) have access to any medical professional in the world and this would certainly change medicine. True medical robots would change medicine even more profoundly—after all, a medical robot would never get tired and such robots could, in theory, be sent all over the world to provide medical care. There is, of course, the usual concern about the impact of technology on jobs—if a robot can replace medical personnel and do so in a way that increases profits, that will certainly happen. While robots would excel at programmable surgery and similar tasks, it will certainly be quite some time before robots are advanced enough to replace human medical professionals on a large scale.
Another excellent reason to create medical robots and telemedicine capabilities has been made clear by the Ebola outbreak: medical personnel, paramedics and body handlers can be infected. While protective gear and protocols do exist, the gear is cumbersome, flawed and hot and people often fail to properly follow the protocols. While many people are moral heroes and put themselves at risk to treat the ill and bury the dead, there are no doubt people who are deterred by the very real possibility of a horrible death. Medical robots and telemedicine seem ideal for handling such cases.
First, human diseases cannot infect machines: a robot cannot get Ebola. So, a doctor using telemedicine to treat Ebola patients would be at no risk. This lack of risk would presumably increase the number of people willing to treat such diseases and also lower the impact of such diseases on medical professionals. That is, far fewer would die trying to treat people.
Second, while a machine can be contaminated, decontaminating a properly designed medical robot or telemedicine machine would be much easier than disinfecting a human being. After all, a sealed machine could be completely hosed down by another machine without concerns about it being poisoned, etc. While numerous patients might be exposed to a machine, machines do not go home—so a contaminated machine would not spread a disease like an infected or contaminated human would.
Third, medical machines could be sent, even air-dropped, into remote and isolated areas that lack doctors yet are often the starting points of diseases. This would allow a rapid response that would help the people there and also help stop a disease before it makes its way into heavily populated areas. While some doctors and medical professionals are willing to be dropped into isolated areas, there are no doubt many more who would be willing to remotely operate a medical machine that has been dropped into a remote area suffering from a deadly disease.
There are, of course, some concerns about the medical machines, be they medbots, autodocs or telemedicine devices.
One is that such medical machines might be so expensive that it would be cost prohibitive to use them in situations in which they would be ideal (namely in isolated or impoverished areas). While politicians and pundits often talk about human life being priceless, human life is rather often given a price and one that is quite low. So, the challenge would be to develop medical machines that are effective yet inexpensive enough that they would be deployed where they would be needed.
Another is that there might be a psychological impact on the patient. When patients who have been treated by medical personnel in hazard suits speak about their experiences, they often remark on the lack of human contact. If a machine is treating the patient, even one remotely operated by a person, there will be a lack of human contact. But, the harm done to the patient would presumably be outweighed by the vastly lowered risk of the disease spreading. Also, machines could be designed to provide more in the way of human interaction—for example, a telemedicine machine could have a screen that allows the patient to see the doctor’s face and talk to her.
A third concern is that such machines could malfunction or be intentionally interfered with. For example, someone might “hack” into a telemedicine device as an act of terrorism. While it might be wondered why someone would do this, it seems to be a general rule that if someone can do something evil, then someone will do something evil. As such, these devices would need to be safeguarded. While no device will be perfect, it would certainly be wise to consider possible problems ahead of time—although the usual process is to have something horrible occur and then fix it. Or at least talk about fixing it.
In sum, the recent Ebola outbreak has shown the importance of developing effective medical machines that can enable treatment while taking medical and other personnel out of harm’s way.
There are about three million Americans and about 170 million people around the world infected with Hepatitis C. In the recent past, the cost of treatment could be up to $300,000 in extreme cases. A new drug, Sovaldi, would reduce that cost to about $84,000. On the face of it, that seems like a great deal. However, the company manufacturing the drug has generated some outrage. The reason is simple: the company, Gilead, plans to charge $1,000 per pill.
While $1,000 for a pill might seem exorbitant, Gilead has made the reasonable point that they have the right to recover the cost of developing the medicine. This is certainly correct—the expense of developing a product can be legitimately passed on to the customers.
In the case of Sovaldi, Gilead “developed” it by buying the company that developed it for $11 billion. While this is certainly a large sum of money, if 150,000 people are treated at the asking price of $1,000 per pill, the company will have recovered what it spent to buy the company that developed it. This is a not uncommon practice in areas with high initial development costs. For example, new technology initially comes at a premium price and then the price drops as a company recovers its development costs.
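The arithmetic behind this claim is easy to check. A quick back-of-the-envelope calculation, using only the rounded figures from the text (a course costing $84,000 at $1,000 per pill implies an 84-pill course):

```python
# Back-of-the-envelope check of the figures above (rounded, illustrative):
acquisition_cost = 11_000_000_000   # what Gilead paid for the developer
price_per_pill = 1_000
pills_per_treatment = 84            # an $84,000 course at $1,000/pill
patients = 150_000

revenue = patients * pills_per_treatment * price_per_pill
print(revenue)                      # 12,600,000,000
print(revenue >= acquisition_cost)  # True: 150,000 treatments recover the $11B
```

So roughly 150,000 treatments at the asking price more than cover the purchase price, before even counting further sales.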
When asked if Gilead would reduce the cost once it recovered its money, the vice president of the company said, “That’s very unlikely that we would do that. I appreciate the thought.” One way to justify this is by contending that the cost of producing the pill warrants keeping the price high. After all, the cost of production is clearly a legitimate factor in calculating a fair price for a product.
However, the drug is most likely fairly cheap to produce. According to Andrew Hill, who is in the Department of Pharmacology and Therapeutics at the University of Liverpool, the cost per treatment would be $150-250 per person. If this is correct, the company would be making truly massive profits off a drug that is rather cheap to produce. On the face of it, such a mark-up would seem to be unfair.
It might be contended that the free-market will sort this out. However, there are two major concerns here. The first is that Gilead’s ownership of the drug rather limits the competitive force of the market. Until another company produces a competing drug, Gilead has an effective monopoly. Competing companies would need to spend considerable sums to develop a competing drug and they would have to avoid infringing on the ownership rights of Gilead. Whether this is seen as wrong or not depends on how one looks at the matter. On the one hand, there is the view that a company has the right to its government enforced monopoly and can use this to charge any amount it deems fit until competition forces it to reduce prices. On the other hand, there is the view that it is wrong for a company to use the coercive power of the state (the state ensures that the drug cannot be copied and sold by others) to exploit the very citizens that the state is supposed to protect from exploitation. The second is that the treatment is not a luxury item for the patients but a necessity—without it they risk severe illness and death. As such, the customers are coerced by their condition and this is being exploited by Gilead. If Gilead were selling $84,000 watches or cars, people could elect to buy them or not—so Gilead would need to make the product match the price. In the case of medicine, Gilead can set its price and give people a choice between buying and dying.
Interestingly, Gilead does plan to offer lower prices in countries such as India, Pakistan, Egypt and China. While the price is not set, the estimate is that “It’ll be from the high hundreds to low thousands for these types of markets.” This rather obviously indicates that Gilead could sell the pills for less in the United States. This lower cost could be seen in at least two ways. One is that Gilead is being nice by offering people in these other countries a price break. Another is that Gilead knows that it will simply not be able to sell the pills for $1,000 each in such countries and are settling for taking what they can get. That is, some profit is better than none.
If Gilead is giving patients in these countries a real break—that is, selling the product with a very narrow profit margin, then the company would seem to be acting in a laudable way by providing an important treatment while making only a modest profit. However, given the estimated cost of providing the treatment ($150-250), the company would be making very large profits by selling the treatments for the high hundreds to low thousands. The company would also be making what might be regarded as obscene profits in countries like the United States where the pills would sell for $1,000 each.
Given that Gilead would recover its costs quickly and the actual cost of providing treatment is relatively low, what remains to be determined is what would warrant charging such a high price for an essential treatment.
Alton, the Gilead vice president quoted above, presents a standard reason for this: “Those who are bold and go out and innovate like this and take the risk — there needs to be more of a reward on that. Otherwise, it would be very difficult for people to make that investment.”
Alton’s basic point is reasonable. Developing new medicines is a risky business since most drugs never actually make it to being a sellable product. As such, this increases what companies must spend to actually develop a product they can sell.
One point of concern is the degree of risk that Gilead took when it bought the company that developed the drug. If that company took risks and developed the drug, then that company certainly earned the right to recover the cost of the risks it took. However, it is not clear that Gilead was bold, innovative and risk taking by buying that company.
Another point of concern is determining the cost and value of risk. That is, sorting out how risk taking legitimately contributes to a higher price. Oversimplifying things a bit, it would seem fair to consider the cost of legitimate attempts to develop drugs that failed as part of the legitimate operating expenses of a company and thus these can justly be passed on to the consumer. However, as noted above, Gilead will recover the cost of buying the developer of the drug quickly and hence will lose the justification that it must charge a high price in compensation for its risk. Even if it is granted that risk taking warrants charging high prices, this should not warrant the high prices when the cost of the risk has been recovered. At that point a new justification would be needed for the high price. In the case of the medicine, the cost of providing the treatment would not warrant the high price. Also as noted above, the market is effectively not free since the state ensures that Gilead has a monopoly on the medicine it bought and the patients are coerced by their illness. If the patients tried to produce the medicine on their own by copying the pills, the state would send police to arrest them and they would face severe legal action.
It could be replied that $84,000 is a bargain compared to the current cost and this justifies the high price. To use an analogy, if one surgeon charges $300,000 to do a procedure and I will provide the same results for $84,000 then that seems like a good deal. However, if it only costs me $250 to treat the person, that would hardly seem to be a fair price. It would be a better price—but better is not the same as fair.
I freely admit that I have not settled the matter of what is a fair price. However, it does seem clear that $1,000 per pill is not a fair price.