Human flesh is weak and metal is strong. So, it is no surprise that military science fiction has often featured soldiers enhanced by cybernetics ranging from the minor to the extreme. An example of a minor cybernetic is an implanted radio. The most extreme example would be a full body conversion: the brain is removed from the original body and placed within a mechanical body. This body might look like a human (known as a Gemini full conversion in Cyberpunk) or be a vehicle such as a tank, as in Keith Laumer’s A Plague of Demons.
One obvious point of moral concern with cybernetics is the involuntary “upgrading” of soldiers, such as the sort practiced by the Cybermen of Doctor Who. While important, the issue of involuntary augmentation is not unique to cybernetics and was addressed in the second essay in this series. For the sake of this essay, it will be assumed that the soldiers volunteer for their cybernetics and are not coerced or deceived. This then shifts the moral concern to the ethics of the cybernetics themselves.
Restorative cybernetics are devices used to restore (hopefully) normal functions to a wounded soldier. Examples would include cyberoptics (replacement eyes), cyberlimbs (replacement legs and arms), and cyberorgans (such as an artificial heart). Soldiers are already being fitted with such devices, although by the standards of science fiction they are still primitive. Given that these devices merely restore functionality and the ethics of prosthetics and similar replacements is well established, there seems to be no moral concern about using such technology in what is essentially a medical role. In fact, it could be argued that nations have a moral obligation to use such technology to restore their wounded soldiers.
While enhancement cybernetics might incidentally restore functionality to a wounded soldier, they go beyond mere restoration. By definition, they are intended to improve on the original. These enhancements break down into two main classes. The first class consists of replacement cybernetics—these devices require the removal of the original part (be it an eye, limb or organ) and serve as replacements that improve on the original in some manner. For example, cyberoptics could provide a soldier with night vision, telescopic vision and immunity to being blinded by flares and flashes. As another example, cybernetic limbs could provide greater speed, strength and endurance. And, of course, a full conversion could provide a soldier with a vast array of superhuman abilities.
The obvious moral concern with these devices is that they require the removal of the original organic parts—something that certainly seems problematic, even if they do offer enhanced abilities. This could, of course, be offset if the original parts were preserved and restored when the soldier left the service. There is also the concern raised in science fiction about the mental effects of such removals and replacements—the Cyberpunk role playing game developed the notion of cyberpsychosis, a form of insanity caused by having flesh replaced by machines. Obviously, it is not yet known what negative effects (if any) such enhancements will have on people. As in any case of weighing harms and benefits, the likely approach would be utilitarian: are the advantages of the technology worth the cost to the soldier?
A second type of enhancement is an add-on which does not replace existing organic parts. Instead, as the name implies, an add-on involves the addition of a device to the body of the soldier. Add-on cybernetics differ from wearables and standard gear in that they are actually implanted in or attached to the soldier’s body. As such, removal can be rather problematic.
A fairly minor example would be something like an implanted radio. A rather extreme example would be the case of the comic book villain Doctor Octopus—his mechanical limbs are add-ons. Other examples of add-ons include such things as implanted sensors, implanted armor, implanted weapons (such as in the comic book hero Wolverine), and other such augmentations.
Since these devices do not involve removal of healthy parts, they do avoid that moral concern. However, there are still legitimate concerns about the physical and mental harms that might be caused by such devices. It is easy enough to imagine implanted devices having serious side effects on soldiers. As noted above, these matters would probably be best addressed by utilitarian ethics—weighing the harms against the benefits.
Both types of enhancements also raise a moral concern about returning the soldier to the civilian population after her term of service. In the case of restorative grade devices, there is not as much concern—these soldiers would, ideally, function as they did before their injuries. However, the enhancements do present a potential problem since they, by definition, give the soldier capabilities that exceed those of normal humans. In some cases, re-integration would probably not be a problem. For example, a soldier with enhanced cyberoptics would presumably present no special problems. However, certain augmentations would present serious problems, such as implanted weapons or full conversions. Ideally, augmented soldiers could be restored to normal after their service has ended, but there could obviously be cases in which this was not done—either because of the cost or because the augmentation could not be reversed. This has been explored in science fiction—soldiers who can never stop being soldiers because they are machines of war. While this could be justified on utilitarian grounds (after all, war itself is often justified on such grounds), it is certainly a matter of concern—or will be.
It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own, and these fine lawmakers will make those drafts into laws.
If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.
While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?
While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which not,” that is, an indispensable condition) as developed by H.L.A. Hart and A.M. Honoré.
Roughly put, this is the “but for” view of causation. X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive) then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.
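The two ideas just described—the “but for” test and responsibility proportional to causal involvement—can be sketched in code. This is a toy illustration, not a legal or moral calculus; the scenario labels and weights are invented for the example.

```python
def but_for_cause(outcome_with_factor: bool, outcome_without_factor: bool) -> bool:
    """X is a but-for cause of Y if Y occurs with X but would not occur without X."""
    return outcome_with_factor and not outcome_without_factor


def share_responsibility(causal_weights: dict) -> dict:
    """Split responsibility among agents in proportion to their causal involvement."""
    total = sum(causal_weights.values())
    return {agent: weight / total for agent, weight in causal_weights.items()}


# The harm occurs with the kick and would not occur without it: a but-for cause.
print(but_for_cause(True, False))  # True

# The lawnmower case: defect, poor maintenance, and the kick jointly cause the harm.
# The weights here are purely illustrative.
print(share_responsibility({"defect": 1, "neglect": 1, "kick": 2}))
```

The second function makes the “shared responsibility” point concrete: with these made-up weights, the kicker carries half the responsibility and the other two factors a quarter each.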
While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.
The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.
It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.
To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person who lost the foot stupidly kicked the lawnmower and lost a foot, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for his bad code, the person would still have his foot.
As with issues involving non-autonomous machines there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.
Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.
Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person of its own volition. That is, the lawnmower would need to possess a degree of free will.
Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.
Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes (the 1964 original and the 1995 remake) of the Outer Limits, which were based on Eando Binder’s short story of the same name.
The bookshelves of the world abound with tomes on self-help. Many of these profess to help people with various emotional woes, such as sadness, and make vague promises about happiness. Interestingly enough, philosophers have long been in the business of offering advice on how to be happy. Or at least not too sad.
Each spring semester I teach Modern Philosophy and cover our good dead friend Spinoza. In addition to an exciting career as a lens grinder, he also managed to avoid being killed by an assassin. However, breathing in all that glass dust seems to have ultimately contributed to his untimely death. But enough about his life and death; it is time to get to the point of this essay.
As Spinoza saw it, people are slaves to their emotions and chained to what they love, such as fame, fortune and other people. This inevitably leads to sadness: the people we love betray us or die. That fancy Tesla can be smashed in a wreck. The beach house can be swept away by the rising tide. A job can be lost as a company seeks to boost its stock prices by downsizing the job fillers. And so on, through all the ways things can go badly.
While Spinoza was a pantheist and believed that everything is God and God is everything, his view of human beings is similar to that of the philosophical mechanist: humans are not magically exempt from the laws of nature. He was also a strict determinist: each event occurs from necessity and cannot be otherwise—there is no chance or choice. So, for example, the Seahawks could not have won the 2015 Super Bowl. As another example, I could not have written this essay in any other manner, so I had to make that remark about the Seahawks losing rather than mentioning their 2014 victory.
Buying into determinism, Spinoza took the view that human behavior and motivations can be examined as one might examine “lines, planes or bodies.” More precisely, he took the view that emotions follow the same necessity as all other things, thus making the effects of the emotions predictable—provided that one has enough knowledge. Spinoza then used this idea as the basis for his “self-help” advice.
According to Spinoza all emotions are responses to the past, present or future. For example, a person might feel regret because she believes she could have made her last relationship work if she had only put more effort into it. As another example, a person might worry because he thinks that he might lose his job in the next round of downsizing at his company. These negative feelings rest, as Spinoza sees it, on the false belief that the past could have been otherwise and that the future is undetermined. Once a person realizes nothing could have been any different and the future cannot be anything other than what it will be, then that person will suffer less from the emotions. Thus, for Spinoza, freedom from the enslaving chains is the recognition and acceptance that what was could not have been otherwise and what will be cannot be otherwise.
This view does have a certain appeal and it does make sense that it can have some value. In regards to the past, people do often beat themselves up emotionally over what they regard as past mistakes. This can lead a person to be chained by regrets and thus be partially trapped in the past as she spends countless hours wondering “what if?” This is not to say that feeling regret or guilt is wrong—far from it. But, it is to say that lamenting about the past to the detriment of now is a problem. It is also a problem to believe that things could have been different when they, in fact, could not have been different.
This is also not to say that a person should not reflect on the past—after all, a person who does not learn from her mistakes is doomed to repeat them. People can, of course, also be trapped by the past because of what they see as good things about the past—they are chained to what they (think they) once had or once were (such as being the big woman on campus back in college).
In regards to the future, it is very easy to be trapped by anxiety, fear and even hope. It can be reassuring to embrace the view that what will be will be and to not worry and be happy. This is not to say that one should be foolish about the future, of course.
There is, unfortunately, one crushing and obvious problem with Spinoza’s advice. If everything is necessary and determined, his advice makes no sense: what is, must be and cannot be otherwise. To use an analogy, it would be like shouting advice at someone watching a cut scene in a video game. This is pointless, since the person cannot do anything to change what is occurring. What occurs must occur and cannot be otherwise. For Spinoza, while we might think life is like a game, it is like that cut scene: we are spectators of the show and not players controlling the game.
The obvious counter is to say “but I feel free! I feel like I am making choices!” Spinoza was well aware of this objection. In response, he claims that if a stone were conscious and hurled through the air, it would think it was free to choose to move and land where it does. People think they are free because they are “conscious of their own actions, and ignorant of the causes by which those actions are determined.” In other words, we think we are free because we do not know better. Going back to the video game analogy, we think we are in control as we push the buttons, but this is because we do not know how the game actually works—that is, we are just along for the ride and not in control.
Since everything is determined, whether or not a person heeds Spinoza’s advice is also determined—if you do, then you do and you could not do otherwise. If you do not, you could not do otherwise. As such, his advice would seem to be beyond useless. This is a stock paradox faced by determinists who give advice: their theory says that people cannot choose to follow this advice—they will just do what they are determined to do. That said, it is possible to salvage some useful advice from Spinoza.
The first step is for me to reject his view that I lack free will. I have a stock argument for this that goes as follows. Obviously, I have free will or I do not. It is equally obvious that there is no way to tell whether I do or not. From an empirical standpoint, a universe with free will looks and feels just like a universe without free will: you just observe people doing stuff and apparently making decisions while thinking and feeling that you are doing the same.
Suppose someone rejects free will and they are wrong. In this case they are not only mistaken but also consciously rejecting real freedom.
Suppose someone rejects free will and they are correct. In that case, they are right—but not in the sense that they made the correct choice. They would have been determined to have that view and it would just so happen that it matches reality.
Suppose someone accepts free will and they are right. In this case, they have the correct view. They have also made the right choice—since choice would be real, making right and wrong choices is possible. More importantly, if they act consistently with this view, then they will be doing things right—not in the moral sense, but in the sense that they are acting in accord with how the universe works.
Suppose someone accepts free will and they are wrong. In this case they are in error, but have not made an incorrect choice (for obvious reasons). They believe they are freely making choices, but obviously are not.
If I can choose, then I should obviously choose free will. If I cannot choose, then I will think I chose whatever it is I am determined to believe. If I can choose and choose to think I cannot, I am in error. Since I cannot know which option is correct, it seems best to accept free will. If I am actually free, I am right. If I am not free, then I am mistaken but had no choice.
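The wager-style argument above has the shape of a 2x2 decision matrix: stance taken (accept or reject free will) against how the world actually is. The sketch below simply lays the four cases out; the short verdict strings are my glosses on the preceding paragraphs, not the author's exact wording.

```python
# Decision matrix for the free-will wager: (stance, world_has_free_will) -> verdict.
outcomes = {
    ("accept", True):  "right, and the acceptance is a genuine correct choice",
    ("accept", False): "mistaken, but no real choice was made",
    ("reject", True):  "mistaken, and real freedom was consciously rejected",
    ("reject", False): "correct, but only by determined accident, not by choice",
}

# Only under acceptance-when-true can a stance be both correct and chosen,
# which is the argument's reason for accepting free will.
for (stance, world_free), verdict in outcomes.items():
    print(f"{stance} free will / world has free will = {world_free}: {verdict}")
```

Laid out this way, the asymmetry the argument trades on is visible: rejecting free will is never a creditable choice in any cell, while accepting it is creditable in exactly one.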
Given the above argument, I accept that I have agency. This makes it possible for me to meaningfully give and accept (or reject) advice. Turning back to Spinoza, I obviously cannot accept his advice that I am enslaved by determinism. However, I can accept some of his claims, namely that I am acted upon by my attachments and emotions. As he sees it, the emotions are things that act upon us—on my view, they would thus be things that impinge upon our agency. As I love to do, I will use an analogy to running.
As I ran this morning, I was thinking about this essay and focused on the fact that feelings of pain (I have various old and new injuries) and tiredness were impinging on me in a manner similar to the way the cold or rain might impinge on me. In the case of pain and tiredness, the attack is from inside. In the case of the cold or rain, the attack is from the outside. Whether the attack is from inside or out, the attack is trying to make the choice for me—to rob me of my agency as a runner. If the pain, cold or rain makes me stop, then I am not acting. I am being acted upon. If I choose to stop, then I am acting. If I choose to go on, I am also acting. And acting rightly. As a runner I know the difference between choosing to stop and being forced to stop.
Being aware of this is very useful for running—thanks to decades of experience I understand, in a way Spinoza might approve, the workings of pain, fatigue and so on. To use a specific example, I know that I am being acted upon by the pain and I understand quite well how it works. As such, the pain is not in control—I am. If I wish, I can run myself to ruin (and I have done just this). Or I can be wiser and avoid damaging myself.
Turning back to emotions, feelings impinge upon me in ways analogous to pain and fatigue. I do not have full control over how I feel—the emotions simply occur, perhaps in response to events or perhaps simply as the result of an electrochemical imbalance. To use a specific example, like most folks, I will sometimes feel depressed and know that I have no reason to feel that way. It is like the cold or fatigue—it is just impinging on me. As Spinoza argued, my knowledge of how this works is critical to dealing with it. While I cannot fully control the feeling, I understand why I feel that way. It is like the cold I felt running in the Maine winters—it is a natural phenomenon that is, from my perspective, trying to destroy me. In the case of the cold, I can wear warmer clothing and stay moving—knowing how it works enables me to choose how to combat it. Likewise, knowing how the negative feelings work enables me to choose how to combat them. If I am depressed for no reason, I know it is just my brain trying to kill me. It is not pleasant, but it does not get to make the decisions for me. Fortunately, our good dead friend Aristotle has some excellent advice for training oneself to handle the emotions.
That said, the analogy to cold is particularly apt. The ice of the winter can kill even those who understand it and know how to resist it—sometimes the cold is just too much for the body. Likewise, the emotions can be like the howling icy wind—they can be too much for the mind. We are, after all, only human and have our limits. Knowing these is a part of wisdom. Sometimes you just need to come in from the cold or it will kill you. Have some hot chocolate. With marshmallows.
Interested in playing a Fallacy game? My 42 Fallacies have been transformed into a game. The link is http://dontfallacy.me/
I’m not associated with the game, other than their use of my fallacies.
The Keystone XL Pipeline has become a powerful symbol in American politics. Those who oppose it can take it as a symbol of all that is wrong: environmental dangers, global warming, big corporations, and other such evils. Those who support it can take it as a symbol of all that is good: jobs, profits, big corporations and other such goods. While I am no expert when it comes to pipelines, I thought it would be worthwhile to present a concise discussion of the matter.
The main substantial objections against the pipeline are environmental. One concern is that pipelines do suffer from leaks and these leaks can inflict considerable damage to the environment (including the water sources that are used by people). The material that will be transported by the Keystone XL pipeline is supposed to be rather damaging to the environment and rather problematic in terms of its cleanup.
Those who support the pipeline counter these objections by claiming that the pipelines are relatively safe—but this generally does not reassure people who have seen the impact of previous leaks. Another approach used by supporters is to point out that if the material is not transported by pipeline, companies will transport it by truck and by train. These methods, some claim, are more dangerous than the pipelines. Recent explosions of trains carrying such material do tend to serve as evidence for this claim. There is also the claim that using trucks and trains as a means of transport will create more CO2 output and hence the pipeline is a better choice in regards to the environment.
Some of those who oppose the pipeline contend that the higher cost of using trucks and trains will deter companies from using them (especially with oil prices so low). So, if the pipeline is not constructed, there would not be the predicted increase in CO2 levels from the use of these means of transportation. The obvious counter to this is that companies are already using trucks and trains to transport this material, so they already seem to be willing to pay the higher cost. It can also be pointed out that there are already a lot of pipelines so that one more would not make that much difference.
In addition to the leaks, there is also the concern about the environmental impact of acquiring the material to be transported by the pipeline and the impact of using the fossil fuels created from this material. Those opposed to the pipeline point out how it will contribute to global warming and pollution.
Those who support the pipeline tend to deny climate change or accept climate change but deny that humans cause it, or accept that humans cause it but contend that there is nothing that we can do that would be effective (mainly because China and other countries will just keep polluting). Another approach is to argue that the economic benefits outweigh any alleged harms.
Proponents of the pipeline claim that it will create a massive number of jobs. Opponents point out that while there will be some job creation when it is built (construction workers will be needed), the number of long term jobs will be very low. The opponents seem to be right—leaving out cleanup jobs, it does not take a lot of people to maintain a modern pipeline. Also, it is not like businesses will open up along the pipeline once it is constructed—it is not like the oil needs hotels or food. It is, of course, true that the pipeline can be a moneymaker for the companies—but it does seem unlikely that this pipeline will have a significant impact on the economy. After all, it would just be one more pipeline among many.
As might be guessed, some of the debate is over the matters of fact discussed above, such as the environmental impact of building or not building the pipeline. Because many of the parties presenting the (alleged) facts have a stake in the matter, this makes getting objective information a bit of a problem. After all, those who have a financial or ideological interest in the pipeline will tend to present numbers that support the pipeline—that it creates many jobs and will not have much negative impact. Those who oppose it will tend to do the opposite—their numbers will tend to tell against the pipeline. This is not to claim that people are lying, but to simply point out the obvious influences of biases.
Even if the factual disputes could be settled, the matter is rather more than a factual disagreement—it is also a dispute over values. Environmental issues are generally political in the United States, with the right usually taking stances for business and against the environment and the left taking pro-environment and anti-business stances. The Keystone XL pipeline is no exception and has, in fact, become a symbol of general issues in regards to the environment and business.
As noted above, those who support the pipeline (with some interesting exceptions) generally reject or downplay the environmental concerns in favor of their ideological leaning. Those who oppose it generally reject or downplay the economic concerns in favor of their ideological leaning.
While I am pro-environment, I do not have a strong rational opposition to the pipeline. The main reasons are that there are already many pipelines, that the absence of the pipeline would not lower fossil fuel consumption, and that companies would most likely expand the use of trains and trucks (which would create more pollution and potentially create greater risks). However, if I were convinced that not having the pipeline would be better than having it, I would certainly change my position.
There is, of course, also the matter of symbolism—that one should fight or support something based on its symbolic value. It could be contended that the pipeline is just such an important symbol and that being pro-environment obligates a person to fight it, regardless of the facts. Likewise, someone who is pro-business would be obligated to support it, regardless of the facts.
While I do appreciate the value of symbols, the idea of supporting or opposing something regardless of the facts strikes me as both irrational and immoral.
While some countries will pay ransoms to free hostages, the United States has a public policy of not doing this. Thanks to ISIS, the issue of whether ransoms should be paid to terrorist groups has returned to the spotlight.
One reason to not pay a ransom for hostages is a matter of principle. This principle could be that bad behavior should not be rewarded or that hostage taking should be punished (or both).
One of the best arguments against paying ransoms for hostages is both a practical and a utilitarian moral argument. The gist of the argument is that paying ransoms gives hostage takers an incentive to take hostages. This incentive will mean that more people will be taken hostage. The cost of not paying is, of course, the possibility that the hostage takers will harm or kill their initial hostages. However, the argument goes, if hostage takers realize that they will not be paid a ransom, they will not have an incentive to take more hostages. This will, presumably, reduce the chances that the hostage takers will take hostages. The calculation is, of course, that the harm done to the existing hostages will be outweighed by the benefits of not having people taken hostage in the future.
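The utilitarian calculation in the argument above can be made concrete with a toy expected-harm comparison. All of the numbers below are invented purely for illustration; the argument's force turns on how strongly paying ransoms raises the rate of future hostage taking.

```python
def expected_harm(pay: bool,
                  current_hostages: int = 1,
                  harm_per_hostage: float = 1.0,
                  future_kidnappings_if_pay: int = 5,
                  p_harm_if_not_paid: float = 0.8) -> float:
    """Toy expected harm under a pay / no-pay policy (all parameters illustrative)."""
    if pay:
        # Current hostages are saved, but the incentive creates future victims.
        return future_kidnappings_if_pay * harm_per_hostage
    # Current hostages are at risk, but no incentive for future hostage taking.
    return current_hostages * harm_per_hostage * p_harm_if_not_paid

print(expected_harm(pay=True))   # 5.0
print(expected_harm(pay=False))  # 0.8
```

With these made-up values, refusing to pay minimizes expected harm, which is the conclusion the incentive argument aims at; an opponent of the policy would contest the parameters (for example, denying that paying meaningfully increases future kidnappings), not the arithmetic.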
This argument assumes, obviously enough, that the hostage takers are primarily motivated by the ransom payment. If they are taking hostages primarily for other reasons, such as for status, to make a statement or to get media attention, then not paying them a ransom will not significantly reduce their incentive to take hostages. This leads to a second reason to not pay ransoms.
In addition to the incentive argument, there is also the funding argument. While a terrorist group might have reasons other than money to take hostages, it certainly benefits from getting such ransoms. The money the group receives can be used to fund additional operations, such as taking more hostages. Obviously enough, if ransoms are not paid, then such groups do lose this avenue of funding, which can impact their operations. Since paying a ransom would be funding terrorism, this provides both a moral and a practical reason not to pay ransoms.
While these arguments have a rational appeal, they are typically countered by a more emotional appeal. A stock approach to arguing that ransoms should be paid is the “in their shoes” appeal. The method is very straightforward and simply involves asking a person whether or not she would want a ransom to be paid for her (or a loved one). Not surprisingly, most people would want the ransom to be paid, assuming doing so would save her (or her loved one). Sometimes the appeal is made explicitly in terms of emotions: “how would you feel if your loved one died because the government refuses to pay ransoms?” Obviously, any person would feel awful.
This method does have considerable appeal. The “in their shoes” appeal can be seen as similar to the golden rule approach (do unto others as you would have them do unto you). To be specific, the appeal is not to do unto others, but to base a policy on how one would want to be treated in that situation. If I would not want the policy applied to me (that is, I would want to be ransomed or have my loved one ransomed), then I should be morally opposed to the policy as a matter of consistency. This certainly makes sense: if I would not want a policy applied in my case, then I should (in general) not support that policy.
One obvious counter is that there seems to be a distinction between what a policy should be and whether or not a person would want that policy applied to herself. For example, some universities have a policy that if a student misses more than three classes, the student fails the course. Naturally, no student wants that policy to be applied to her (and most professors would not have wanted it applied to them when they were students), but this hardly suffices to show that the policy is wrong. As another example, a company might have a policy of not providing health insurance to part-time employees. While the CEO would certainly not like the policy if she were part time, it does not follow that the policy must be a bad one. As such, policies need to be assessed not just in terms of how a person feels about them, but in terms of their merit or lack thereof.
Another obvious counter is to use the same approach, only with a modification. In response to the question “how would you feel if you were the hostage or the hostage were a loved one?” one could ask “how would you feel if you or a loved one were taken hostage in an operation funded by ransom money?” Or “how would you feel if you or a loved one were taken hostage because the hostage takers learned that people would pay ransoms for hostages?” The answer would be, of course, that one would feel bad about that. However, while how one would feel about this can be useful in discussing the matter, it is not decisive. Settling the matter rationally does require considering more than just how people would feel—it requires looking at the matter with a degree of objectivity. That is, not just asking how people would feel, but what would be right and what would yield the best results in the practical sense.
The United States recently saw an outbreak of measles (644 cases in 27 states), with the overwhelming majority of victims being people who had not been vaccinated. Critics of the anti-vaccination movement have pointed to this as clear proof that the movement is not only misinformed but actually dangerous. Not surprisingly, those who take the anti-vaccination position are often derided as stupid. After all, there is no evidence that vaccines cause the harms that the anti-vaccination people refer to when justifying their position. For example, one common claim is that vaccines cause autism, but this seems to be clearly untrue. There is also the fact that vaccinations have been rather conclusively shown to prevent diseases (though not perfectly, of course).
It is, of course, tempting for those who disagree with the anti-vaccination people to dismiss them uniformly as stupid people who lack the brains to understand science. This, however, is a mistake. One reason it is a mistake is purely pragmatic: those who are pro-vaccination want the anti-vaccination people to change their minds and calling them stupid, mocking and insulting them will merely cause them to entrench. Another reason it is a mistake is that the anti-vaccination people are not, in general, stupid. There are, in fact, grounds for people to be skeptical or concerned about matters of health and science. To show this, I will briefly present some points of concern.
One point of rational concern is the fact that scientific research has been plagued with a disturbing amount of corruption, fraud and error. For example, the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those behind the promotion of antioxidants and omega-3, have been shown to be riddled with inaccuracies. As such, it is hardly stupid to be concerned that scientific research might not be accurate. Somewhat ironically, the study that started the belief that vaccines cause autism is a paradigm example of bad science. However, it is not stupid to consider that the studies showing vaccines are safe might have flaws as well.
Another matter of concern is the influence of corporate lobbyists on matters relating to health. For example, the dietary guidelines and recommendations set forth by the United States Government should be set on the basis of the best science. However, the reality is that these matters are influenced quite strongly by industry lobbyists, such as the dairy industry. Given the influence of the corporate lobbyists, it is not foolish to think that the recommendations and guidelines given by the state might not be quite right.
A third point of concern is the fact that dietary and health guidelines and recommendations undergo what seems to be relentless and unwarranted change. For example, the government has warned us of the dangers of cholesterol for decades, but this recommendation is now being changed. It would, of course, be one thing if the changes were the result of steady improvements in knowledge. However, the recommendations often seem to lack a proper foundation. John P.A. Ioannidis, a professor of medicine and statistics at Stanford, has noted: “Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome. In this literature of epidemic proportions, how many results are correct?” Given such criticism from experts in the field, it hardly seems stupid of people to have doubts and concerns.
There is also the fact that people do suffer adverse drug reactions that can lead to serious medical issues and even death. While the reported numbers vary (one FDA page puts the number of deaths at 100,000 per year) this is certainly a matter of concern. In an interesting coincidence, I was thinking about this essay while watching the Daily Show on Hulu this morning and one of my “ad experiences” was for Januvia, a diabetes drug. As required by law, the ad mentioned all the side effects of the drug and these include some rather serious things, including death. Given that the FDA has approved drugs with dangerous side effects, it is hardly stupid to be concerned about the potential side effects from any medicine or vaccine.
Given the above points, it would certainly not be stupid to be concerned about vaccines. At this point, the reader might suspect that I am about to defend an anti-vaccine position. I will not—in fact, I am a pro-vaccination person. This might seem somewhat surprising given the points I just made. However, I can rationally reconcile these points with my position on vaccines.
The above points do show that there are rational grounds for taking a generally critical and skeptical approach to matters of health, medicine and science. However, this general skepticism needs to be properly rational. That is, it should not be a rejection of science but rather the adoption of a critical approach to these matters in which one considers the best available evidence, assesses experts by the proper standards (those of a good argument from authority), and so on. Also, it is rather important to note that this general skepticism does not automatically justify accepting or rejecting specific claims. For example, the fact that there have been flawed studies does not prove that the specific studies about vaccines are flawed. As another example, the fact that lobbyists influence the dietary recommendations does not prove that vaccines are harmful drugs being pushed on Americans by greedy corporations. As a final example, the fact that some medicines have serious and dangerous side effects does not prove that the measles vaccine is dangerous or causes autism. Just as one should be rationally skeptical about pro-vaccination claims, one should also be rationally skeptical about anti-vaccination claims.
To use an obvious analogy, it is rational to have a general skepticism about the honesty and goodness of people. After all, people do lie and there are bad people. However, this general skepticism does not automatically prove that a specific person is dishonest or evil—that is a matter that must be addressed on the individual level.
To use another analogy, it is rational to have a general concern about engineering. After all, there have been plenty of engineering disasters. However, this general concern does not warrant believing that a specific engineering project is defective or that engineering itself is defective. The specific project would need to be examined and engineering is, in general, the most rational approach to building stuff.
So, the people who are anti-vaccine are not, in general, stupid. However, they do seem to be making the mistake of not rationally considering the specific vaccines and the evidence for their safety and efficacy. It is quite rational to be concerned about medicine in general, just as it is rational to be concerned about the honesty of people in general. However, just as one should not infer that a friend is a liar because there are people who lie, one should not infer that a vaccine must be bad because there is bad science and bad medicine.
Convincing anti-vaccination people to accept vaccination is certainly challenging. One reason is that the issue has become politicized into a battle of values and identity. This is partially due to the fact that the anti-vaccine people have been mocked and attacked, thus leading them to entrench and double down. Another reason is that, as argued above, they do have well-founded concerns about the trustworthiness of the state, the accuracy of scientific studies, and the goodness of corporations. A third reason is that people tend to give more weight to the negative and also tend to weigh potential loss more than potential gain. As such, people would tend to give more weight to negative reasons against vaccines and fear the alleged dangers of vaccines more than they would value their benefits.
Given the importance of vaccinations, it is rather critical that the anti-vaccination movement be addressed. Calling people stupid, mocking them and attacking them are certainly not effective ways of convincing people that vaccines are generally safe and effective. A more rational and hopefully more effective approach is to address their legitimate concerns and consider their fears. After all, the goal should be the health of people and not scoring points.