Health Care Workers and Moral Objections II: Patients/Clients
As noted in an earlier essay, the Trump administration plans to modify the Health and Human Services (HHS) civil rights office to protect health care workers who have moral or religious objections to performing certain medical procedures or treating certain patients. In that essay I addressed the general moral issue of whether health workers have the moral right to refuse certain services. I now turn to the general issue of whether they have the moral right to refuse to treat certain patients (or clients) based on the identity of the patients (or clients). The legal matter, of course, is something for the courts to settle.
As noted in the earlier essay, a person does not surrender their moral rights or conscience when they enter a profession. As such, it should not simply be assumed that a health care worker cannot refuse to treat a person because of the worker’s values. But, of course, it should also not be assumed that the moral or religious values of a health care worker grant them the right to refuse treatment based on the identity of the patient.
One moral argument for the right to refuse treatment because of the patient’s identity is based on the general right to refuse to provide a good or service. A key freedom, one might argue, is this freedom from compulsion. For example, an author has every right to determine who they will and will not write for.
Another moral argument for the right to refuse is a general one about the right to not be forced to interact with people whom one regards as evil or at least immoral. This can also be augmented by contending that serving the needs of an immoral person is to engage in an immoral action, if only by association. For example, a Jewish painter has every right to refuse to paint a mural for Nazis.
While these arguments have considerable appeal, especially in cases in which the refusal is directed at the sorts of people one dislikes, it is important to consider the implications of a right of refusal based on values. One obvious implication is that such a right could warrant a health care worker to refuse to treat you if they regarded you as immoral. In general terms, moral rights need to be assessed by applying a moral method I call reversing the situation. Parents and others often employ this method informally by asking questions such as “how would you feel if someone did that to you?”
Somewhat more formally, this method is based on the Golden Rule: “do unto others as you would have them do unto you.” Assuming this rule is correct, if a person is unwilling to abide by his own principles when the situation is reversed, then it is reasonable to question those principles. In the case at hand, while a person might be fine with the right to refuse services to those they dislike because of their values, they would presumably not be fine with it if the situation were reversed.
An obvious objection is that reversing the situation would, strictly speaking, only apply to health workers themselves. Fortunately, there is a modified version of this method that would apply to everyone. In this method one test of a moral right, principle or rule is for a person to replace the proposed target of the right, principle or rule with themselves or a group (or groups) they belong to. For example, a Christian who thinks it is morally fine to refuse services to transgender people based on religious freedom should consider their thoughts on atheists refusing services to Christians based on religious freedom. Naturally, a person could insist that the right, rule or principle should only be applied to those they do not like—but if anyone can make this move, then it would seem everyone could as well, thus the objection would fail.
One reasonable reply to this method is to point out that there are clear exceptions to its application. For example, while most Christians are fine with convicted murderers being locked up, it does not follow that they are wrong about this because they would not want to be locked up for being Christians. In such cases, which also arise when reversing the situation, it can be argued that there is a morally relevant difference between the two people or groups that justifies the difference in treatment. For example, convicted murderers generally deserve to be punished for being murderers, while Christians obviously do not merit punishment just for being Christians. As such, when considering the moral right of health care workers to refuse services based on the identity of the patient (or client), the possibility of relevant differences must be given due consideration.
The obvious problem with relevant difference considerations is that people will tend to think there is a relevant difference between themselves and those to whom they want to apply the right of refusal. For example, a person who is a social justice warrior might regard a member of the alt-right as an evil racist and see this as a relevant difference that warrants refusing service to such a person. One solution is to appeal to an objective moral judge—but this creates the obvious problem of finding such a person. Another solution is for the person to take special pains to be objective—but this is rather difficult and especially so in cases in which objectivity is often most needed.
A final relevant consideration is the fact that while entering a profession does not strip a person of their conscience or moral agency, it often imposes professional ethics on the person that supersede their own values within the professional context. For example, lawyers must accept a professional ethics that requires them to keep certain secrets their client might have (the most obvious being that the client committed the crime) even when doing so might violate their personal ethics. As another example, lawyers (especially public defenders) are expected to defend their clients even if they find their clients morally awful. As a third example, as a professor I (in general) cannot insist that a student be removed from my class by appealing to my religious or moral values regarding the identity of the student. As a professor, I am obligated to teach anyone enrolled in my class, provided they do not engage in behavior that would warrant their removal (such as assaulting other students). Health care workers generally fall under professional ethics as well and these typically include requirements to render care to people regardless of what the worker thinks of the morality of the person. For example, a doctor does not have the right to refuse to perform surgery on someone just because they committed adultery, are a compulsive liar, have engaged in shady and even illegal business practices or expressed their proclivity to grab people by a certain part of their anatomy. This is not to say that there cannot be exceptions, but professional medical ethics would seem to forbid refusing service just because of the moral judgment by the service provider of the patient (or client). This, obviously enough, is distinct from refusing services because a patient or client has engaged in behavior that warrants refusal, such as attacking the service provider.
The Corruption of Academic Science
STEM (Science, Technology, Engineering and Mathematics) fields are supposed to be the new darlings of the academy, so I was slightly surprised when I heard an NPR piece on how researchers are struggling for funding. After all, even the politicians devoted to cutting education funding have spoken glowingly of STEM. My own university recently split the venerable College of Arts & Sciences, presumably to allow more money to flow to STEM without risking that professors in the soft sciences and the humanities might inadvertently get some of the cash. As such I was somewhat curious about this problem, but mostly attributed it to a side-effect of the general trend of defunding public education. Then I read “Bad Science” by Llewellyn Hinkes-Jones. This article was originally published in issue 14, 2014 of Jacobin Magazine. I will focus on the ethical aspects of the matters Hinkes-Jones discussed in this article, which is centered on the Bayh-Dole Act.
The Bayh-Dole Act was passed in 1980 and was presented as having very laudable goals. Before the act was passed, universities were limited with regard to what they could do with the fruits of their scientific research. After the act was passed, schools could sell their patents or engage in exclusive licensing deals with private companies (that is, monopolies on the patents). Supporters asserted this act would be beneficial in three main ways. The first is that it would secure more private funding for universities because corporations would provide money in return for the patents or exclusive licenses. The second is that it would bring the power of the profit motive to public research: since researchers and schools could profit, they would be more motivated to engage in research. The third is that the private sector would be motivated to implement the research in the form of profitable products.
On the face of it, the act was a great success. Researchers at Columbia University patented the process of DNA cotransformation and added millions to the coffers of the school. A patent on recombinant DNA earned Stanford over $200 million. Companies, in turn, profited greatly. For example, researchers at the University of Utah created Myriad Genetics and took ownership of their patent on the BRCA1 and BRCA2 tests for breast cancer. The current cost of the test is $4,000 (in comparison a full sequencing of human DNA costs $1,000) and the company has a monopoly on the test.
Given these apparent benefits, it is easy enough to advance a utilitarian argument in favor of the act and its consequences. After all, if it allows universities to fund their research and corporations to make profits, then its benefits would seem to be considerable, thus making it morally good. However, a proper calculation requires considering the harmful consequences of the act.
The first harm is that the current situation imposes a triple cost on the public. One cost is that the taxpayers fund the schools that conduct the research. The next is that thanks to the monopolies on patents the taxpayers have to pay whatever prices the companies wish to charge, such as the $4,000 for a test that should cost far less. In an actual free market there would be competition and lower prices—but what we have is a state controlled and regulated market. Ironically, those who are often crying the loudest against government regulation and for the value of competition are quite silent on this point. The final cost of the three is that the corporations can typically write off their contributions on their taxes, thus leaving other taxpayers to pick up their slack. These costs seem to be clear harms and do much to offset the benefits—at least when looked at from the perspective of the whole society and not just focusing on those reaping the benefits.
The second harm is that, ironically, this system makes research more expensive. Since processes, strains of bacteria and many other things needed for research are protected by monopolistic patents the researchers who do not hold these patents have to pay to use them. The costs are usually quite high, so while the patent holders benefit, research in general suffers. In order to pay for these things, researchers need more funding, thus either imposing more cost on taxpayers or forcing them to turn to private funding (which will typically result in more monopolistic patents).
The third harm is the corruption of researchers. Researchers are literally paid to put their names on positive journal articles that advance the interests of corporations. They are also paid to promote drugs and other products while presenting themselves as researchers rather than paid promoters. If the researchers are not simply bought, the money is clearly a biasing factor. Since we are depending on these researchers to inform the public and policy makers about these products, this is clearly a problem and presents a clear danger to the public good.
A fourth harm is that even the honest researchers who have not been bought are under great pressure to produce “sexy science” that will attract grants and funding. While it has always been “publish or perish” in modern academics, the competition is even fiercer in the sciences now. As such, researchers are under great pressure to crank out publications. The effect has been rather negative as evidenced by the fact that the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those behind the promotion of antioxidants and omega-3s, have been shown to be riddled with inaccuracies. Far from driving advances in science, the act has served as an engine of corruption, fraud and bad science. This would be bad enough, but there is also the impact on a misled and misinformed public. I must admit that I fell for the antioxidant and omega-3 “research”—I modified my diet to include more antioxidants and omega-3s. While this bad science does get debunked, the debunking takes a long time and most people never hear about it. For example, how many people know that the antioxidant and omega-3 “research” is flawed and how many still pop omega-3 “fish oil pills” and drink “antioxidant teas”?
A fifth harm is that universities have rushed to cash in on the research, driven by the success of the research schools that have managed to score with profitable patents. However, setting up research labs aimed at creating million dollar patents is incredibly expensive. In most cases the investment will not yield the hoped for returns, thus leaving many schools with considerable expenses and little revenue.
To help lower costs, schools have turned to employing adjuncts to do the teaching and research, thus creating a situation in which highly educated but very low-paid professionals are toiling away to secure millions for the star researchers, the administrators and their corporate benefactors. It is, in effect, sweat-shop science.
This also shows another dark side to the push for STEM: as the number of STEM graduates increases, the value of the degrees will decrease and wages for the workers will continue to fall. This is great for the elite, but terrible for those hoping that a STEM degree will mean a good job and a bright future.
These harms would seem to outweigh the alleged benefits of the act, thus indicating it is morally wrong. Naturally, it can be countered that the costs are worth it. After all, one might argue, the incredible advances in science since 1980 have been driven by the profit motive and this has been beneficial overall. Without the profit motive, the research might have been conducted, but most of the discoveries would have been left on the shelves. The easy and obvious response is to point to all the advances that occurred due to public university research prior to 1980 as well as the research that began before then and came to fruition.
While solving this problem is a complex matter, there seem to be some easy and obvious steps. The first would be to restore public funding of state schools. In the past, the publicly funded universities drove America’s worldwide dominance in research and helped fuel massive economic growth while also contributing to the public good. The second would be replacing the Bayh-Dole Act with an act that would allow universities to benefit from the research, but prevent the licensing monopolies that have proven so damaging. Naturally, this would not eliminate patents but would restore competition to what is supposed to be a competitive free market by eliminating the creation of monopolies from public university research. The folks who complain about the state regulating business and who praise the competitive free market will surely get behind this proposal.
It might also be objected that the inability to profit massively from research will be a disincentive. The easy and obvious reply is that people conduct research and teach with great passion for very little financial compensation. The folks that run universities and corporations know this—after all, they pay such people very little yet still often get exceptional work. True, there are some people who are solely motivated by profit—but those are typically the folks who are making the massive profit rather than doing the actual research and work that makes it all possible.
Obligations to People We Don’t Know
One of the classic moral problems is the issue of whether or not we have moral obligations to people we do not know. If we do have such obligations, then there are also questions about the foundation, nature and extent of these obligations. If we do not have such obligations, then there is the obvious question about why there are no such obligations. I will start by considering some stock arguments regarding our obligations to others.
One approach to the matter of moral obligations to others is to ground them on religion. This requires two main steps. The first is establishing that the religion imposes such obligations. The second is making the transition from the realm of religion to the domain of ethics.
Many religions do impose such obligations on their followers. For example, John 15:12 conveys God’s command: “This is my commandment, That you love one another, as I have loved you.” If love involves obligations (which it seems to), then this would certainly seem to place us under these obligations. Other faiths also include injunctions to assist others.
In terms of transitioning from religion to ethics, one easy way is to appeal to divine command theory—the moral theory that what God commands is right because He commands it. This does raise the classic Euthyphro problem: is something good because God commands it, or is it commanded because it is good? If the former, goodness seems arbitrary. If the latter, then morality would be independent of God and divine command theory would be false.
Using religion as the basis for moral obligation is also problematic because doing so would require proving that the religion is correct—this would be no easy task. There is also the practical problem that people differ in their faiths and this would make a universal grounding for moral obligations difficult.
Another approach is to argue for moral obligations by using the moral method of reversing the situation. This method is based on the Golden Rule (“do unto others as you would have them do unto you”) and the basic idea is that consistency requires that a person treat others as she would wish to be treated.
To make the method work, a person would need to want others to act as if they had obligations to her and this would thus obligate the person to act as if she had obligations to them. For example, if I would want someone to help me if I were struck by a car and bleeding out in the street, then consistency would require that I accept the same obligation on my part. That is, if I accept that I should be helped, then consistency requires that I must accept I should help others.
This approach is somewhat like that taken by Immanuel Kant. He argues that because a person necessarily regards herself as an end (and not just a means to an end), then she must also regard others as ends and not merely as means. He endeavors to use this to argue in favor of various obligations and duties, such as helping others in need.
There are, unfortunately, at least two counters to this sort of approach. The first is that it is easy enough to imagine a person who is willing to forgo the assistance of others and as such can consistently refuse to accept obligations to others. So, for example, a person might be willing to starve rather than accept assistance from other people. While such people might seem a bit crazy, if they are sincere then they cannot be accused of inconsistency.
The second is that a person can argue that there is a relevant difference between himself and others that would justify their obligations to him while freeing him from obligations to them. For example, a person of a high social or economic class might assert that her status obligates people of lesser classes while freeing her from any obligations to them. Naturally, the person must provide reasons in support of this alleged relevant difference.
A third approach is to present a utilitarian argument. For a utilitarian, like John Stuart Mill, morality is assessed in terms of consequences: the correct action is the one that creates the greatest utility (typically happiness) for the greatest number. A utilitarian argument for obligations to people we do not know would be rather straightforward. The first step would be to estimate the utility generated by accepting a specific obligation to people we do not know, such as rendering aid to an intoxicated person who is about to become the victim of sexual assault. The second step is to estimate the disutility generated by imposing that specific obligation. The third step is to weigh the utility against the disutility. If the utility is greater, then such an obligation should be imposed. If the disutility is greater, then it should not.
This approach, obviously enough, rests on the acceptance of utilitarianism. There are numerous arguments against this moral theory and these can be employed against attempts to ground obligations on utility. Even for those who accept utilitarianism, there is the open possibility that there will always be greater utility in not imposing obligations, thus undermining the claim that we have obligations to others.
A fourth approach is to consider the matter in terms of rational self-interest and operate from the assumption that people should act in their self-interest. In terms of a moral theory, this would be ethical egoism: the moral theory that a person should act in her self-interest rather than acting in an altruistic manner.
While accepting that others have obligations to me would certainly be in my self-interest, it initially appears that accepting obligations to others would be contrary to my self-interest. That is, I would be best served if others did unto me as I would like to be done unto, but I was free to do unto them as I wished. If I could get away with this sort of thing, it would be ideal (assuming that I am selfish). However, as a matter of fact people tend to notice and respond negatively to a lack of reciprocation. So, if having others accept that they have some obligations to me were in my self-interest, then it would seem that it would be in my self-interest to pay the price for such obligations by accepting obligations to them.
For those who like evolutionary just-so stories in the context of providing foundations for ethics, the tale is easy to tell: those who accept obligations to others would be more successful than those who do not.
The stock counter to the self-interest argument is the problem of Glaucon’s unjust man and Hume’s sensible knave. While it certainly seems rational to accept obligations to others in return for getting them to accept similar obligations, it seems preferable to exploit their acceptance of obligations while avoiding one’s supposed obligations to others whenever possible. Assuming that a person should act in accord with self-interest, then this is what a person should do.
It can be argued that this approach would be self-defeating: if people exploited others without reciprocation, the system of obligations would eventually fall apart. As such, each person has an interest in ensuring that others hold to their obligations. Humans do, in fact, seem to act this way—those who fail in their obligations often get a bad reputation and are distrusted. From a purely practical standpoint, acting as if one has obligations to others would thus seem to be in a person’s self-interest because the benefits would generally outweigh the costs.
The counter to this is that each person still has an interest in avoiding the cost of fulfilling obligations and there are various practical ways to do this by the use of deceit, power and such. As such, a classic moral question arises once again: why act on your alleged obligations if you can get away with not doing so? Aside from the practical reply given above, there seems to be no answer from self-interest.
A fifth option is to look at obligations to others as a matter of debts. A person is born into an established human civilization built on thousands of years of human effort. Since each person arrives as a helpless infant, each person’s survival is dependent on others. As the person grows up, she also depends on the efforts of countless other people she does not know. These include soldiers that defend her society, the people who maintain the infrastructure, firefighters who keep fire from sweeping away the town or city, the taxpayers who pay for all this, and so on for all the many others who make human civilization possible. As such, each member of civilization owes a considerable debt to those who have come before and those who are here now.
If debt imposes an obligation, then each person who did not arise ex-nihilo owes a debt to those who have made and continue to make their survival and existence in society possible. At the very least, the person is obligated to make contributions to continue human civilization as a repayment to these others.
One objection to this is for a person to claim that she owes no such debt because her special status obligates others to provide all this for her with nothing owed in return. The obvious challenge is for a person to prove such an exalted status.
Another objection is for a person to claim that all this is a gift that requires no repayment on the part of anyone and hence does not impose any obligation. The challenge is, of course, to prove this implausible claim.
A final option I will consider is that offered by virtue theory. Virtue theory, famously presented by thinkers like Aristotle and Confucius, holds that people should develop their virtues. These classic virtues include generosity, loyalty and other virtues that involve obligations and duties to others. Confucius explicitly argued in favor of duties and obligations as being key components of virtues.
In terms of why a person should have such virtues and accept such obligations, the standard answer is that being virtuous will make a person happy.
Virtue theory is not without its detractors and the criticism of the theory can be employed to undercut it, thus undermining its role in arguing that we have obligations to people we do not know.
Anyone Home?
As I tell my students, the metaphysical question of personal identity has important moral implications. One scenario I present is that of a human in what seems to be a persistent vegetative state. I say “human” rather than “person”, because the human body in question might no longer be a person. To use a common view, if a person is her soul and the soul has abandoned the shell, then the person is gone.
If the human is still a person, then it seems reasonable to believe that she has a different moral status than a mass of flesh that was once a person (or once served as the body of a person). This is not to say that a non-person human would have no moral status at all—I do not want to be interpreted as holding that view. Rather, my view is that personhood is a relevant factor in the morality of how an entity is treated.
To use a concrete example, consider a human in what seems to be a vegetative state. While the body is kept alive, people do not talk to the body and no attempt is made to entertain the body, such as playing music or audiobooks. If there is no person present or if there is a person present but she has no sensory access at all, then this treatment would seem to be acceptable—after all it would make no difference whether people talked to the body or not.
There is also the moral question of whether such a body should be kept alive—after all, if the person is gone, there would not seem to be a compelling reason to keep an empty shell alive. To use an extreme example, it would seem wrong to keep a headless body alive just because it can be kept alive. If the body is no longer a person (or no longer hosts a person), then this would be analogous to keeping the headless body alive.
But, if despite appearances, there is still a person present who is aware of what is going on around her, then the matter is significantly different. In this case, the person has been effectively isolated—which is certainly not good for a person.
In regards to keeping the body alive, if there is a person present, then the situation would be morally different. After all, the moral status of a person is different from that of a mass of merely living flesh. The moral challenge, then, is deciding what to do.
One option is, obviously enough, to treat all seemingly vegetative (as opposed to brain dead) bodies as if the person was still present. That is, the body would be accorded the moral status of a person and treated as such.
This is a morally safe option—it would presumably be better that some non-persons get treated as persons rather than risk persons being treated as non-persons. That said, it would still seem both useful and important to know whether a person is present.
One reason to know is purely practical: if people know that a person is present, then they would presumably be more inclined to take the effort to treat the person as a person. So, for example, if the family and medical staff know that Bill is still Bill and not just an empty shell, they would tend to be more diligent in treating Bill as a person.
Another reason to know is both practical and moral: should scenarios arise in which hard choices have to be made, knowing whether a person is present or not would be rather critical. That said, given that one might not know for sure that the body is no longer a person, it could be correct to keep treating the alleged shell as a person even when it seems likely that he is not. This brings up the obvious practical problem: how to tell when a person is present.
Most of the time we judge there is a person present based on appearance, using the assumption that a human is a person. Of course, there might be non-human people and there might be biological humans that are not people (headless bodies, for example). A somewhat more sophisticated approach is to use Descartes’s test: things that use true language are people. Descartes, being a smart person, did not limit language to speaking or writing—he included making signs of the sort used to communicate with the deaf. In a practical sense, getting an intelligent response to an inquiry can be seen as a sign that a person is present.
In the case of a body in an apparent vegetative state, applying this test is quite a challenge. After all, this state is marked by an inability to show awareness. In some cases, the apparent vegetative state is exactly what it appears to be. In other cases, a person might be in what is called “locked-in syndrome.” The person is conscious, but can be mistaken for being minimally conscious or in a vegetative state. Since the person cannot, typically, respond by giving an external sign, some other means is necessary.
One breakthrough in this area is due to Adrian M. Owen. Oversimplifying things considerably, he found that if a person is asked to visualize certain activities (playing tennis, for example), doing so will trigger different areas of the brain. This activity can be detected using the appropriate machines. So, a person can ask a question such as “did you go to college at Michigan State?” and request that the person visualize playing tennis for “yes” or visualize walking around her house for “no.” This method provides a way of determining that the person is still present with a reasonable degree of confidence. Naturally, a failure to respond would not prove that a person is not present—the person could still remain, yet be unable (or unwilling) to hear or respond.
One moral issue this method can help address is that of terminating life support. “Pulling the plug” on what might be a person without consent is, to say the least, morally problematic. If a person is still present and can be reached by Owen’s method, then this would allow the person to agree to or request that she be taken off life support. Naturally, there would be practical questions about the accuracy of the method, but this is distinct from the more abstract ethical issue.
It must be noted that the consent of the person would not automatically make termination morally acceptable—after all, there are moral objections to letting a person die in this manner even when the person is fully and clearly conscious. Once it is established that the method adequately shows consent (or lack of consent), the broader moral issue of the right to die would need to be addressed.
The Robots of Deon
The United States military has expressed interest in developing robots capable of moral reasoning and has provided grant money to some well-connected universities to address this problem (or to at least create the impression that the problem is being considered).
The notion of instilling robots with ethics is a common theme in science fiction, the most famous example being Asimov’s Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the robot has an electro-mechanical seizure if he is ordered to cause harm to a human being (or an id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines (like Saberhagen’s Berserkers) of science fiction tend to be free of the constraints of ethics.
While there are various reasons to imbue (or limit) robots with ethics (or at least engage in the pretense of doing so), one of these is public relations. Thanks to science fiction dating back at least to Frankenstein, people tend to worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics serves to reassure the public (or so it is hoped). Another reason is to make the public relations gimmick a reality—to actually place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot who refuses to kill on moral grounds.
While science fiction features ethical robots, the authors (like philosophers who discuss the ethics of robots) are extremely vague about how robot ethics actually works. In the case of truly intelligent robots, their ethics might work the way our ethics works—which is something that is still a mystery debated by philosophers and scientists to this day. We are not yet to the point of having such robots, so the current practical challenge is to develop ethics for the sort of autonomous or semi-autonomous robots we can build now.
While creating ethics for robots might seem daunting, the limitations of current robot technology mean that robot ethics is essentially a matter of programming these machines to operate in specific ways defined by whatever ethical system is being employed as the guide. One way to look at programming such robots with ethics is that they are being programmed with safety features. To use a simple example, suppose that I regard shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize whether a human is armed and have some code saying, in effect, “if unarmedhuman = true, then firetokill = false” or, in normal English, if the human is unarmed, do not shoot her.
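The rule described above can be sketched in ordinary code. This is a minimal illustration, not any real weapons system: the function and variable names are hypothetical, and the point is simply that the “ethics” here is nothing more than a conditional check imposed on the firing routine.

```python
def fire_permitted(target_is_human: bool, target_is_armed: bool) -> bool:
    """Illustrative 'ethical' constraint: if the human is unarmed, do not shoot.

    The robot is not making a moral judgment; it is evaluating a
    programmer-imposed conditional, just like an automatic door.
    """
    if target_is_human and not target_is_armed:
        return False  # unarmed human: firing forbidden by the programmed rule
    return True       # everything else is permitted by this (very crude) rule
```

The simplicity of the sketch is the point: the robot’s “morality” is exhausted by whatever conditions the programmer thought to write down.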
While a suitably programmed robot would act in a way that seemed ethical, the robot is obviously not engaged in ethical behavior. After all, it is merely a more complex version of the automatic door. The supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you in the face because its cameras show that you are unarmed is not ethical. The killbot that chops you into meaty chunks is not unethical. Following Kant, since the killbot’s programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.
To be fair to the killbots, perhaps we humans are not ethical or unethical under these requirements for ethics—we could just be meat-bots operating under the illusion of ethics. Also, it is certainly sensible to focus on the practical aspect of the matter: if you are a civilian being targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you or not. As such, the general practical problem is getting our killbots to behave in accord with our ethical values.
Achieving this goal involves three main steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter and not an exercise in philosophical inquiry, this will presumably involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating the ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into behavioral sets of allowed and forbidden behavior. This would require creating a definition of civilian (or perhaps just an unarmed person) that would allow recognition using the sensors of the robot. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize. The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers would need to worry about clever combatants trying to “deceive” the killbot to take advantage of its programming (like pretending to surrender so as to get close enough to destroy the killbot).
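The second and third steps above can be sketched together: a moral principle is translated into behavioral definitions the robot’s sensors could in principle check, and those definitions are then coded as rules. All of the predicate names below are hypothetical placeholders; real perception code for recognizing “unarmed” or “surrendering” would be vastly harder than this suggests.

```python
# Behavioral definitions standing in for moral principles:
# 'do not kill civilians' and 'accept surrender' become target categories
# that the engagement logic is forbidden to fire upon.
FORBIDDEN_TARGETS = {"civilian", "surrendering_combatant"}

def classify_target(is_armed: bool, hands_raised: bool) -> str:
    """Crude behavioral definitions (step two of the process described above).

    An unarmed person counts as a civilian; an armed person with raised
    hands counts as surrendering. A clever combatant could, of course,
    'deceive' exactly these definitions by feigning surrender.
    """
    if not is_armed:
        return "civilian"
    if hands_raised:
        return "surrendering_combatant"
    return "combatant"

def engagement_allowed(is_armed: bool, hands_raised: bool) -> bool:
    """Step three: the coded rule forbidding engagement of protected categories."""
    return classify_target(is_armed, hands_raised) not in FORBIDDEN_TARGETS
```

Notice that the hard philosophical work (what counts as a civilian? what counts as surrender?) is hidden inside the behavioral definitions, which is precisely where the translation from ethics to code is most fragile.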
Since these robots would be following programmed rules, they would presumably be controlled by deontological ethics—that is, ethics based on following rules. Thus, they would be (with due apologies to Asimov), the Robots of Deon.
An interesting practical question is whether or not the “ethical” programming would allow for overrides or reprogramming. Since the robot’s “ethics” would just be behavior-governing code, it could be changed, and it is easy enough to imagine ethics preference settings through which a commander could selectively (or not so selectively) turn off behavioral limitations. And, of course, killbots could simply be programmed without such ethics (or programmed to be “evil”).
The largest impact of the government funding for this sort of research will be that properly connected academics will get surprisingly large amounts of cash to live the science-fiction dream of teaching robots to be good. That way the robots will feel a little bad when they kill us all.
Kant & Economic Justice
One of the basic concerns in ethics is the matter of how people should be treated. This is often formulated in terms of our obligations to other people, and the question is “what, if anything, do we owe other people?” While it does seem that some would like to exclude the economic realm from the realm of ethics, the burden of proof would rest on those who would claim that economics deserves a special exemption from ethics. This could, of course, be attempted. However, since this is a brief essay, I will start with the assumption that economic activity is not exempt from morality.
While I subscribe to virtue theory as my main ethics, I do find Kant’s ethics both appealing and interesting. In regards to how we should treat others, Kant takes as foundational that “rational nature exists as an end in itself.”
It is reasonable to inquire why this should be accepted. Kant’s reasoning certainly seems sensible enough. He notes that “a man necessarily conceives his own existence as such” and this applies to all rational beings. That is, Kant claims that a rational being sees itself as being an end, rather than a thing to be used as a means to an end. So, for example, I see myself as a person who is an end and not as a mere thing that exists to serve the ends of others.
Of course, the mere fact that I see myself as an end would not seem to require that I extend this to other rational beings (that is, other people). After all, I could apparently regard myself as an end and regard others as means to my ends—to be used for my profit as, for example, underpaid workers or slaves.
However, Kant claims that I must regard other rational beings as ends as well. The reason is fairly straightforward and is a matter of consistency: if I am an end rather than a means because I am a rational being, then consistency requires that I accept that other rational beings are ends as well. After all, if being a rational being makes me an end, it would do the same for others. Naturally, it could be argued that there is a relevant difference between myself and other rational beings that would warrant my treating them as means only and not as ends. People have, obviously enough, endeavored to justify treating other people as things. However, there seems to be no principled way to insist on my own status as an end while denying the same to other rational beings.
From this, Kant derives his practical imperative: “so act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.” This imperative does not entail that I cannot ever treat a person as a means—that is allowed, provided I do not treat the person as a means only. So, for example, I would be morally forbidden from being a pimp who uses women as mere means of revenue. I would, however, not be forbidden from having someone check me out at the grocery store—provided that I treated the person as a person and not a mere means.
One obvious challenge is sorting out what it is to treat a person as an end as opposed to just a means to an end. That is, the problem is figuring out when a person is being treated as a mere means and thus the action would be immoral.
Interestingly enough, many economic relationships would seem to clearly violate Kant’s imperative in that they treat people as mere means and not at all as ends. To use the obvious example, if an employer treats her employees merely as means to making a profit and does not treat them as ends in themselves, then she is acting immorally by Kant’s standard. After all, being an employee does not rob a person of personhood.
One obvious reply is to question my starting assumption, namely that economics is not exempt from ethics. It could be argued that the relationship between employer and employee is purely economic and only economic considerations matter. That is, the workers are to be regarded as means to profit and treated in accord with this—even if doing so means treating them as things rather than persons. The challenge is, of course, to show that the economic realm grants a special exemption in regards to ethics. Of course, if it does this, then the exemption would presumably be a general one. So, for example, people who decided to take money from the rich at gunpoint would be exempt from ethics as well. After all, if everyone is a means in economics, then the rich are just as much means as employees and if economic coercion against people is acceptable, then so too is coercion via firearms.
Another obvious reply is to contend that might makes right. That is, the employer has the power and owes nothing to the employees beyond what they can force him to provide. This would make economics rather like the state of nature—where, as Hobbes said, “profit is the measure of right.” Of course, this leads to the same problem as the previous reply: if economics is a matter of might making right, then people have the same right to use might against employers and other folks—that is, the state of nature applies to all.