A Philosopher's Blog

Robopunishment

Posted in Ethics, Law, Philosophy by Michael LaBossiere on March 25, 2015
Crime and Punishment (Photo credit: Wikipedia)

While the notion of punishing machines for misdeeds has received some attention in science fiction, it seems worthwhile to take a brief philosophical look at this matter. This is because the future, or so some rather smart people claim, will see the rise of intelligent machines—machines that might take actions that would be considered misdeeds or crimes if committed by a human (such as the oft-predicted genocide).

In general, punishment is aimed at one or more of the following goals: retribution, rehabilitation, or deterrence. Each of these goals will be considered in turn in the context of machines.

Roughly put, punishment for the purpose of retribution is aimed at paying an agent back for wrongdoing. This can be seen as a form of balancing the books: the punishment inflicted on the agent is supposed to pay the debt it has incurred by its misdeed. Reparation can, to be a bit sloppy, be included under retribution—at least in the sense of the repayment of a debt incurred by the commission of a misdeed.

While a machine can be damaged or destroyed, there is clearly the question about whether it can be the target of retribution. After all, while a human might kick her car for breaking down on her or smash his can opener for cutting his finger, it would be odd to consider this retributive punishment. This is because retribution would seem to require that a wrong has been done by an agent, which is different from the mere infliction of harm. Intuitively, a piece of glass can cut my foot, but it cannot wrong me.

If a machine can be an agent, which was discussed in an earlier essay, then it would seem to be able to do wrongful deeds and thus be a potential candidate for retribution. However, even if a machine had agency, there is still the question of whether or not retribution would really apply. After all, retribution requires more than just agency on the part of the target. It also seems to require that the target can suffer from the payback. On the face of it, a machine that could not suffer would not be subject to retribution—since retribution seems to be based on doing a “righteous wrong” to the target. To illustrate, suppose that an android injured a human, costing him his left eye. In retribution, the android’s left eye is removed. But, the android does not suffer—it does not feel any pain and is not bothered by the removal of its eye. As such, the retribution would be pointless—the books would not be balanced.

This could be countered by arguing that the target of the retribution need not suffer—what is required is merely the right sort of balancing of the books, so to speak. So, in the android case, removal of the android’s eye would suffice, even if the android did not suffer. This does have some appeal since retribution against humans does not always require that the human suffer. For example, a human might break another human’s iPad and have her iPad broken in turn, but not care at all. The requirements of retribution would seem to have been met, despite the lack of suffering.

Punishment for rehabilitation is intended to transform wrongdoers so that they will no longer be inclined to engage in the wrongful behavior that incurred the punishment. This differs from punishment aimed at deterrence, which aims at providing the target with a reason to not engage in the misdeed in the future. Rehabilitation is also aimed at the agent who did the misdeed, whereas punishment for the sake of deterrence often aims at affecting others as well.

Obviously enough, a machine that lacks agency cannot be subject to rehabilitative punishment—it cannot “earn” such punishment by its misdeeds and, presumably, cannot have its behavioral inclinations corrected by such punishment.

To use an obvious example, if a computer crashes and destroys a file that a person had been working on for hours, punishing the computer in an attempt to rehabilitate it would be pointless. Not being an agent, it did not “earn” the punishment and punishment will not incline it to crash less in the future.

A machine that possesses agency could “earn” punishment by its misdeeds. It also seems possible to imagine a machine that could be rehabilitated by punishment. For example, one could imagine a robot dog that could be trained in the same way as a real dog—after leaking oil in the house or biting the robo-cat and being scolded, it would learn not to do those misdeeds again.

It could be argued that it would be better, both morally and practically, to build machines that would learn without punishment or to teach them without punishing them. After all, though organic beings seem to be wired in a way that requires that we be trained with pleasure and pain (as Aristotle would argue), there might be no reason that our machine creations would need to be the same way. But, perhaps, it is not just a matter of the organic—perhaps intelligence and agency require the capacity for pleasure and pain. Or perhaps not. Or it might simply be the only way that we know how to teach—we will be, by our nature, cruel teachers of our machine children.

Then again, we might be inclined to regard a machine that does misdeeds as being defective and in need of repair rather than punishment. If so, such machines would be “refurbished” or reprogrammed rather than rehabilitated by punishment. There are those who think the same of human beings—and this would raise the same sort of issues about how agents should be treated.

The purpose of deterrence is to motivate the agent who did the misdeed and/or other agents not to commit that deed. In the case of humans, people argue in favor of capital punishment because of its alleged deterrence value: if the state kills people for certain crimes, people are less likely to commit those crimes.

As with other forms of punishment, deterrence requires agency: the punished target must merit the punishment and the other targets must be capable of changing their actions in response to that punishment.

Deterrence, obviously enough, does not work in regards to non-agents. For example, if a computer crashes and wipes out a file a person has been laboring on for hours, punishing it will not deter it. Smashing it in front of other computers will not deter them.

A machine that had agency could “earn” such punishment by its misdeeds and could, in theory, be deterred. The punishment could also deter other machines. For example, imagine a combat robot that performed poorly in its mission (or showed robo-cowardice). Punishing it could deter it from doing that again, and it could serve as a warning, and thus a deterrent, to other combat robots.

Punishment for the sake of deterrence raises the same sort of issues as punishment aimed at rehabilitation, such as the notion that it might be preferable to repair machines that engage in misdeeds rather than punishing them. The main differences are, of course, that deterrence is not aimed at making the target inclined to behave well, just at disinclining it from behaving badly, and that deterrence is also aimed at those who have not committed the misdeed.

 


Florida’s Bathroom Law

Posted in Ethics, Philosophy by Michael LaBossiere on March 23, 2015

Being from Maine, I got accustomed to being asked about the cold, lobsters, moose and Stephen King. Living in Florida, I have become accustomed to being asked about why my adopted state is so insane. Most recently, I was asked about the bathroom bill making its way through the House.

The bathroom bill, officially known as HB 583, proposes that it should be a second-degree misdemeanor to “knowingly and willfully” enter a public facility restricted to members “of the other biological sex.” The bill proposes a maximum penalty of 60 days in jail and a $500 fine.

Some opponents of the bill contend that it is aimed at discriminating against transgender people. Some parts of Florida have laws permitting people to use public facilities based on the gender they identify with rather than their biological sex.

Obviously enough, proponents of the bill are not claiming that they are motivated by a dislike of transgender people. Rather, the main argument used to support the bill centers on the claim that it is necessary to protect women and girls. The idea seems to be that women and girls will be assaulted or raped by males who will gain access to locker rooms and bathrooms by claiming they have a right to enter such places because they are transgender.

Opponents of the bill have pointed out the obvious reply to this argument: there are already laws against assault and rape. There are also laws against lewd and lascivious behavior. As such, there does not seem to be a need for this proposed law if its purpose is to protect women and girls from such misdeeds. To use an analogy, there is no need to pass a law making it a crime for a man to commit murder while dressed as a woman—murder is already illegal.

It could be countered that the bill is still useful because it would add yet another offense that a perpetrator could be charged with. While this does have a certain appeal, the idea of creating laws just to stack offenses seems morally problematic—it seems that a better policy would be to craft laws that adequately handle the “base” offenses.

It could also be claimed that the bill is needed in order to provide an initial line of defense. After all, one might argue, it would be better that a male never got into the bathroom or locker room to commit his misdeeds and this bill will prevent this from occurring.

The obvious reply is that the bill would only work in this manner if the facilities are guarded by people capable of turning such masquerading males away at the door. These guards would presumably need to have the authority to check the “plumbing” of anyone desiring entry to the facility. After all, it is not always easy to discern between a male and a female by mere outward appearance. Of course, if such guards are going to be posted, then they might as well be posted inside the facilities themselves, thus providing much better protection. As such, if the goal is to make such facilities safe, then a better bill would mandate guards for such facilities.

Opponents of the bill do consider the dangers of assault. However, they contend that it is transgender people who are most likely to be harmed if they are compelled to use facilities for their biological sex. It would certainly be ironic if a bill (allegedly) aimed at protecting people turned out to lead to more harm.

A second line of argumentation focuses on the privacy rights of biological women. “Women have an expectation of privacy,” said Anthony Verdugo of Christian Family Coalition Florida. “My wife does not want to be in a public facility with a man, and that is her right. … No statute in Florida right now specifically prohibits a person of one sex from entering a facility intended for use by a person of another sex.”

This does have a certain appeal. When I was in high school, I and some other runners were changing after a late practice and someone had “neglected” to tell us that basketball cheerleaders from another school would be coming through the corridor directly off the locker room. Being a typical immature nerd, I was rather embarrassed by this exposure. I do recall that one of my more “outgoing” fellow runners offered up a “free show” before being subdued with a rattail to the groin. As such, I do get that women and girls would not want males in their bathrooms or locker rooms “inspecting their goods.” That said, there are some rather obvious replies to this concern.

The first reply is that it seems likely that transgender biological males that identify as female would not be any more interested in checking out the “goods” of biological females than would biological females. But, obviously, there is the concern that such biological males might be bi-sexual or interested only in females. This leads to the second reply.

The second reply is that the law obviously does not protect females from biological females that are bi-sexual or homosexual. After all, a lesbian can openly go into the women’s locker room or bathroom. As such, the privacy of women (if privacy is taken to include the right to not be seen while naked by people who might be sexually attracted to one) is always potentially threatened.

Though some might now be considering bills aimed at lesbians and bi-sexuals in order to protect the privacy of straight women, there is really no need of these bills—or HB 583. After all, there are already laws against harassment and other such bad behavior.

It might be countered that merely being seen by a biological male in such places is sufficient to count as a violation of privacy, even if the male is well-behaved and not sexually interested. There are, after all, laws (allegedly) designed to protect women from the prying eyes of men, such as some parts of Sharia law. However, it would seem odd to say that a woman should be protected by law merely from the eyes of a male when the male identifies as a woman and is not engaged in what would be reasonably regarded as bad behavior (like staring through the gaps in a stall to check out a woman).

Switching gears a bit, in an interesting coincidence I was thinking about this essay when I found that the men’s bathroom at the FSU track was locked, but the women’s bathroom was open. The people in ROTC were doing their track workout at the same time and the male cadets were using the women’s bathroom—since the alternative was public urination. If this bill passed, the cadets would have been subject to arrest, jail and a fine for their crime.

For athletes, this sort of bathroom switching is not at all unusual. While training or at competitions, people often find the facilities closed or overburdened, so it is common for people to use whatever facilities are available—almost always with no problems or issues. For example, the Women’s Distance Festival is a classic race in Tallahassee that is open to men and women, but has a very large female turnout. On that day, the men get a porta-pottie and the men’s room is used by the women—which would be illegal if this bill passed. I have also lost count of the times that female runners have used the men’s room because the line to the women’s facilities was way too long. No one cared, no one was assaulted and no one was arrested. But if this bill became law, that sort of thing would be a crime.

My considered view of this bill is that there is no need for it. The sort of bad behavior that it is aimed to counter is already illegal and it would criminalize behavior that is not actually harmful (like the male ROTC cadets using the only open bathroom at the track).

 


Androids, Autonomy & Agency

Posted in Ethics, Metaphysics, Philosophy, Technology by Michael LaBossiere on March 18, 2015
Blade Runner (Photo credit: Wikipedia)

Philosophers have long speculated about the subjects of autonomy and agency, but the rise of autonomous systems has made these speculations ever more important. Keeping things fairly simple, an autonomous system is one that is capable of operating independently of direct control. Autonomy comes in degrees in terms of the extent of the independence and the complexity of the operations. It is, obviously, the capacity for independent operation that distinguishes autonomous systems from those controlled externally.

Simple toys provide basic examples of the distinction. A wind-up mouse toy has a degree of autonomy: once wound and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy—a puppeteer must control it. Robots provide examples of rather more complex autonomous systems. Google’s driverless car is an example of a relatively advanced autonomous machine—once programmed and deployed, it will be able to drive itself to its destination. A normal car is an example of a non-autonomous system—the driver controls it directly. Some machines allow for both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target and then an operator can take direct control.

Autonomy, at least in this context, is quite distinct from agency. Autonomy is the capacity to operate (in some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one’s actions. There is clearly a connection between autonomy and moral agency: moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. A puppet is, obviously, not accountable for what the puppeteer makes it do.

While autonomy seems necessary for agency, it is clearly not sufficient—while all agents have some autonomy, not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy, but has no agency. A robot drone following a pre-programmed flight-plan has a degree of autonomy, but would lack agency—if it collided with a plane it would not be morally responsible. The usual reason why such a machine would not be an agent is that it lacks the capacity to decide. Or, put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.

One obvious problem with basing agency on freedom (especially metaphysical freedom of the will) is that there is considerable debate about whether or not such freedom exists. There is also the epistemic problem of how one would know if an entity has such freedom.

As a practical matter, it is usually assumed that people have the freedom needed to make them into agents. Kant, rather famously, took this approach. What he regarded as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality—so it should be accepted for this reason.

While humans are willing (generally) to attribute freedom and agency to other humans, there seem to be good reasons to not attribute freedom and agency to autonomous machines—even those that might be as complex as (or even more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans they would do what they do because they are what they are. This would be in clear contrast to the agency of humans: humans, it is alleged, do what they do because they choose to do what they do.

This distinction between humans and suitably complex machines would seem to be a mere prejudice favoring organic machines over mechanical machines. If a human was in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that “it” had freedom and agency. If a robot was made to look and act just like a human, people would be inclined to grant it agency—at least until they learned it was “just” a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. But, of course, it would not really be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom).

The German philosopher Leibniz held the view that what each person will do is pre-established by her inner nature. On the face of it, this would seem to entail that there is no freedom: each person does what she does because of what she is—and she cannot do otherwise. Interestingly, Leibniz takes the view that people are free. However, he does not accept the common view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.

For Leibniz, being metaphysically without freedom would involve being controlled from the outside—like a puppet controlled by a puppeteer or a vehicle being operated by remote control. In contrast, freedom is acting from one’s values and character (what Leibniz and Taoists call “inner nature”). If a person is acting from this inner nature and not from external coercion—that is, if the actions are the result of character—then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this sort of view, humans do have agency because they have the needed degree of freedom and autonomy.

If this model works for humans, it could also be applied to autonomous machines. To the degree that a machine is operating in accord with its “inner nature” and is not operating under the control of outside factors, it would have agency.

An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then it would seem that a machine could also be a moral agent.

From a moral standpoint, I would suggest a Moral Descartes’ Test (or, for those who prefer, a Moral Turing Test). Descartes argued that the sure proof of a being having a mind is its capacity to use true language. Turing later proposed a similar sort of test involving the ability of a computer to pass as human via text communication. In the moral test, the test would be a judgment of moral agency—can the machine be as convincing as a human in regards to its possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed in order to prevent mere prejudice from infecting the judgment. The movie Blade Runner featured something similar, the Voight-Kampff test aimed at determining if the subject was a replicant or human. This test was based on the differences between humans and replicants in regards to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A moral agent might have rather different emotions, etc. than a human. The challenge is, obviously enough, developing a proper test for moral agency. It would, of course, be rather interesting if humans could not pass it.

 


Guns on Campus

Posted in Ethics, Law, Philosophy, Universities & Colleges by Michael LaBossiere on March 6, 2015

As I write this, the Florida state legislature is considering a law that will allow concealed carry permit holders to bring their guns to college campuses. As is to be expected, some opponents and some proponents are engaging in poor reasoning, hyperbole and other such unhelpful means of addressing the issue. As a professor and a generally pro-gun person, I have more than academic interest in this matter. My goal, as always, is to consider this issue rationally, although I do recognize the role of emotions in this matter.

From an emotional standpoint, I am divided in my heart. On the pro-gun feeling side, all of my gun experiences have been positive. I learned to shoot as a young man and have many fond memories of shooting and hunting with my father. Though I now live in Florida, we still talk about guns from time to time. As a graduate student, I had little time outside of school, but once I was a professor I was able to get in the occasional trip to the range. I have, perhaps, been very lucky: the people I have been shooting with and hunting with have all been competent and responsible people. No one ever got hurt. I have never been a victim of gun crime.

On the anti-gun side, like any sane human I am deeply saddened when I hear of people being shot down. While I have not seen gun violence in person, Florida State University (which is just across the tracks from my university) recently had a shooter on campus. I have spoken with people who have experienced gun violence and, not being callous, I can understand their pain. Roughly put, I can feel the two main sides in the debate. But, feeling is not a rational way to settle a legal and moral issue.

Those opposed to guns on campus are concerned that the presence of guns carried by permit holders would result in an increase in injuries and deaths. Some of these injuries and deaths would be intentional, such as suicide, fights escalating to the use of guns, and so on. Some of these injuries and deaths, it is claimed, would be the result of an accidental discharge. From a moral standpoint, this is obviously a legitimate concern. However, it is also a matter for empirical investigation: would allowing concealed carry on campus increase the likelihood of death or injury to a degree that would justify banning guns?

Some states already allow licensed concealed carry on campus and there is, of course, considerable data available about concealed carry in general. The statistical data would seem to indicate that allowing concealed carry on campus would not result in an increase in injuries and death on campus. This is hardly surprising: getting a permit requires providing proof of competence with a firearm as well as a thorough background check—considerably more thorough than the background check to purchase a firearm. Such permits are also issued at the discretion of the state. As such, people who have such licenses are not likely to engage in random violence on campus.

This is, of course, an empirical matter. If it could be shown that allowing licensed conceal carry on campus would result in an increase in deaths and injuries, then this would certainly impact the ethics of allowing concealed carry.

Those who are opposed to guns on campus are also rightfully concerned that someone other than the license holder will get the gun and use it. After all, theft is not uncommon on college campuses and someone could grab a gun from a licensed holder.

While these concerns are not unreasonable, someone interested in engaging in gun violence can easily acquire a gun without stealing it from a permit holder on campus. She could buy one or steal one from somewhere else. As far as grabbing a gun from a person carrying it legally, attacking an armed person is generally not a good idea—and, of course, someone who is prone to gun grabbing would presumably also try to grab a gun from a police officer. In general, these do not seem to be compelling reasons to ban concealed carry on campus.

Opponents of allowing guns on campus also point to psychological concerns: people will feel unsafe knowing that people around them might be legally carrying guns. This might, it is sometimes claimed, result in a suppression of discussion in classes and cause professors to hand out better grades—all from fear that a student is legally carrying a gun.

I do know people who are actually very afraid of this—they are staunchly anti-gun and are very worried that students and other faculty will be “armed to the teeth” on campus and “ready to shoot at the least provocation.” The obvious reply is that someone who is dangerously unstable enough to shoot students and faculty over such disagreements would certainly not balk at illegally bringing a gun to campus. Allowing legal concealed carry by permit holders would, I suspect, not increase the odds of such incidents. But, of course, this is a matter of emotions and fear is rarely, if ever, held at bay by reason.

Opponents of legal carry on campus also advance a reasonable argument: there is really no reason for people to be carrying guns on campus. After all, campuses are generally safe, typically have their own police forces and are places of learning and not shooting ranges.

This does have considerable appeal. When I lived in Maine, I had a concealed weapon permit but generally did not go around armed. My main reason for having it was convenience—I could wear my gun under my jacket when going someplace to shoot. I must admit, of course, that as a young man there was an appeal in being able to go around armed like James Bond—but that wore off quickly and I never succumbed to gun machismo. I did not wear a gun while running (too cumbersome) or while socializing (too…weird). I have never felt the need to be armed with a gun on campus in all the years I have been a student and professor. So, I certainly get this view.

The obvious weak point for this argument is that the lack of a reason to have a gun on campus (granting this for the sake of argument) is not a reason to ban people with permits from legally carrying on campus. After all, the permit grants the person the right to carry the weapon legally and more is needed to deny the exercise of that right than just the lack of need.

Another obvious weak point is that a person might need a gun on campus for legitimate self-defense. While this is not likely, that is true in most places. After all, a person going to work or out for a walk in the woods is not likely to need her gun. I have, for example, never needed one for self-defense. As such, there would seem to be as much need to have a gun on campus as many other places where it is legal to carry. Of course, this argument could be turned around to argue that there is no reason to allow concealed carry at all.

Proponents of legal concealed carry on campus often argue that “criminals and terrorists” go to college campuses in order to commit their crimes, since they know no one will be armed. There are two main problems with this. The first is that college campuses are, relative to most areas, very safe. So, criminals and terrorists do not seem to be going to them that often. As opponents of legal carry on campus note, while campus shootings make the news, they are actually very rare.

The second is that large campuses have their own police forces—in the shooting incident at FSU, the police arrived rapidly and shot the shooter. As such, I do not think that allowing concealed carry will scare away criminals and terrorists, especially since they do not visit campuses that often already.

Proponents of concealed carry also sometimes claim that the people carrying legally on campus will serve as the “good guy with guns” to shoot the “bad guys with guns.” While there is a chance that a good guy will be able to shoot a bad guy, there is the obvious concern that the police will not be able to tell the good guy from the bad guy and the good guy will be shot. In general, the claims that concealed carry permit holders will be righteous and effective vigilantes on campus are more ideology and hyperbole than fact. Not surprisingly, most reasonable pro-gun people do not use that line of argumentation. Rather, they focus on more plausible scenarios of self-defense and not wild-west vigilante style shoot-outs.

My conclusion is that there is not a sufficiently compelling reason to ban permit holders from carrying their guns on campus. But, there does not seem to be a very compelling reason to carry a gun on campus.

 


Augmented Soldier Ethics IV: Cybernetics

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on March 4, 2015

Human flesh is weak and metal is strong. So, it is no surprise that military science fiction has often featured soldiers enhanced by cybernetics ranging from the minor to the extreme. An example of a minor cybernetic is an implanted radio. The most extreme example would be a full body conversion: the brain is removed from the original body and placed within a mechanical body. This body might look like a human (known as a Gemini full conversion in Cyberpunk) or be a vehicle such as a tank, as in Keith Laumer’s A Plague of Demons.

One obvious point of moral concern with cybernetics is the involuntary “upgrading” of soldiers, such as the sort practiced by the Cybermen of Doctor Who. While important, the issue of involuntary augmentation is not unique to cybernetics and was addressed in the second essay in this series. For the sake of this essay, it will be assumed that the soldiers volunteer for their cybernetics and are not coerced or deceived. This then shifts the moral concern to the ethics of the cybernetics themselves.

While the ethics of cybernetics is complicated, one way to handle matters is to split cybernetics into two broad categories. The first category consists of restorative cybernetics. The second consists of enhancement cybernetics.

Restorative cybernetics are devices used to restore (hopefully) normal functions to a wounded soldier. Examples would include cyberoptics (replacement eyes), cyberlimbs (replacement legs and arms), and cyberorgans (such as an artificial heart). Soldiers are already being fitted with such devices, although by the standards of science fiction they are still primitive. Given that these devices merely restore functionality and the ethics of prosthetics and similar replacements is well established, there seems to be no moral concern about using such technology in what is essentially a medical role. In fact, it could be argued that nations have a moral obligation to use such technology to restore their wounded soldiers.

While enhancement cybernetics might be used to restore functionality to a wounded soldier, enhancement cybernetics go beyond mere restoration. By definition, they are intended to improve on the original. These enhancements break down into two main classes. The first class consists of replacement cybernetics—these devices require the removal of the original part (be it an eye, limb or organ) and serve as replacements that improve on the original in some manner. For example, cyberoptics could provide a soldier with night vision, telescopic vision and immunity to being blinded by flares and flashes. As another example, cybernetic limbs could provide greater speed, strength and endurance. And, of course, a full conversion could provide a soldier with a vast array of superhuman abilities.

The obvious moral concern with these devices is that they require the removal of the original organic parts—something that certainly seems problematic, even if they do offer enhanced abilities. This could, of course, be offset if the original parts were preserved and restored when the soldier left the service. There is also the concern raised in science fiction about the mental effects of such removals and replacements—the Cyberpunk role playing game developed the notion of cyberpsychosis, a form of insanity caused by having flesh replaced by machines. Obviously, it is not yet known what negative effects (if any) such enhancements will have on people. As in any case of weighing harms and benefits, the likely approach would be utilitarian: are the advantages of the technology worth the cost to the soldier?

A second type of enhancement is an add-on which does not replace existing organic parts. Instead, as the name implies, an add-on involves the addition of a device to the body of the soldier. Add-on cybernetics differ from wearables and standard gear in that they are actually implanted in or attached to the soldier’s body. As such, removal can be rather problematic.

A fairly minor example would be something like an implanted radio. A rather extreme example would be the case of the comic book villain Doctor Octopus—his mechanical limbs are add-ons.  Other examples of add-ons include such things as implanted sensors, implanted armor, implanted weapons (such as in the comic book hero Wolverine), and other such augmentations.

Since these devices do not involve removal of healthy parts, they do avoid that moral concern. However, there are still legitimate concerns about the physical and mental harms that might be caused by such devices. It is easy enough to imagine implanted devices having serious side effects on soldiers. As noted above, these matters would probably be best addressed by utilitarian ethics—weighing the harms against the benefits.

Both types of enhancements also raise a moral concern about returning the soldier to the civilian population after her term of service. In the case of restorative grade devices, there is not as much concern—these soldiers would, ideally, function as they did before their injuries. However, the enhancements do present a potential problem since they, by definition, give the soldier capabilities that exceed those of normal humans. In some cases, re-integration would probably not be a problem. For example, a soldier with enhanced cyberoptics would presumably present no special problems. However, certain augmentations would present serious problems, such as implanted weapons or full conversions. Ideally, augmented soldiers could be restored to normal after their service has ended, but there could obviously be cases in which this was not done—either because of the cost or because the augmentation could not be reversed. This has been explored in science fiction—soldiers that can never stop being soldiers because they are machines of war. While this could be justified on utilitarian grounds (after all, war itself is often justified on such grounds), it is certainly a matter of concern—or will be.

 


Robo Responsibility

Posted in Ethics, Law, Philosophy, Science, Technology by Michael LaBossiere on March 2, 2015

It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own and these fine lawmakers will make them into laws.

If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.

While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honore.

Roughly put, this is the “but for” view of causation. X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive) then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.

While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.
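To make the proportionality idea concrete, here is a minimal sketch in Python of how shared responsibility might be apportioned in a case like the defective mower. It is only a toy model: the function and the weights given to each factor are my own inventions for illustration, not anything drawn from Hart and Honore.

# A toy model of proportional moral responsibility: each "but for"
# factor is blamed in proportion to its causal involvement.
def assign_responsibility(causal_weights: dict[str, float]) -> dict[str, float]:
    """Normalize causal involvement into shares of responsibility."""
    total = sum(causal_weights.values())
    return {factor: weight / total for factor, weight in causal_weights.items()}

# The shared-responsibility mower case above, with made-up weights.
shares = assign_responsibility({
    "manufacturing defect": 0.5,   # the blade sheds too easily when kicked
    "neglected maintenance": 0.3,  # the owner failed to maintain the mower
    "kick at the mower": 0.2,      # the victim's own contribution
})
for factor, share in shares.items():
    print(f"{factor}: {share:.0%} of the responsibility")

Note what the sketch does and does not capture: it models the proportional division of blame among factors that have already passed the “but for” test, and it says nothing about how the weights themselves should be determined, which is where the real philosophical work lies.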

The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.

It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.

To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person who lost the foot stupidly kicked the lawnmower and lost a foot, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for his bad code, the person would still have his foot.

As with issues involving non-autonomous machines there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.

Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.

Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person from its own volition. That is, the lawnmower would need to possess a degree of free will.

Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.

Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes (the 1964 original and the 1995 remake) of the Outer Limits, which were based on Eando Binder’s short story of the same name.

 


Ransoms & Hostages

Posted in Ethics, Law, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on February 20, 2015

1979 Associated Press photograph showing hosta...

While some countries will pay ransoms to free hostages, the United States has a public policy of not doing this. Thanks to ISIS, the issue of whether ransoms should be paid to terrorist groups or not has returned to the spotlight.

One reason to not pay a ransom for hostages is a matter of principle. This principle could be that bad behavior should not be rewarded or that hostage taking should be punished (or both).

One of the best arguments against paying ransoms for hostages is both a practical and a utilitarian moral argument. The gist of the argument is that paying ransoms gives hostage takers an incentive to take hostages. This incentive will mean that more people will be taken hostage. The cost of not paying is, of course, the possibility that the hostage takers will harm or kill their initial hostages. However, the argument goes, if hostage takers realize that they will not be paid a ransom, they will not have an incentive to take more hostages. This will, presumably, reduce the chances that the hostage takers will take hostages. The calculation is, of course, that the harm done to the existing hostages will be outweighed by the benefits of not having people taken hostage in the future.
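The structure of this utilitarian calculation can be made explicit with a toy expected-harm comparison. The following Python sketch is purely illustrative: every probability and count in it is invented, and the argument itself supplies no such numbers.

# A toy expected-harm comparison of the two ransom policies. All
# numbers are hypothetical; only the structure of the argument matters.
def expected_harm(p_current_hostages_killed: float,
                  expected_future_hostages: float) -> float:
    """Expected harm, counting each hostage harmed as one unit."""
    return p_current_hostages_killed + expected_future_hostages

# Paying: the current hostages are likely freed, but the incentive
# produces more hostage-taking later.
harm_if_pay = expected_harm(0.1, 5.0)

# Refusing: the current hostages face a high risk, but the monetary
# incentive to take further hostages is removed.
harm_if_refuse = expected_harm(0.9, 1.0)

print(f"Expected harm if ransoms are paid:    {harm_if_pay:.1f}")
print(f"Expected harm if ransoms are refused: {harm_if_refuse:.1f}")

On these made-up numbers the refusal policy minimizes expected harm, which is what the argument claims; different estimates can flip the conclusion, which is why the empirical assumptions matter.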

This argument assumes, obviously enough, that the hostage takers are primarily motivated by the ransom payment. If they are taking hostages primarily for other reasons, such as for status, to make a statement or to get media attention, then not paying them a ransom will not significantly reduce their incentive to take hostages. This leads to a second reason to not pay ransoms.

In addition to the incentive argument, there is also the funding argument. While a terrorist group might have reasons other than money to take hostages, they certainly benefit from getting such ransoms. The money they receive can be used to fund additional operations, such as taking more hostages. Obviously enough, if ransoms are not paid, then such groups do lose this avenue of funding, which can impact their operations. Since paying a ransom would be funding terrorism, this provides both a moral and a practical reason not to pay ransoms.

While these arguments have a rational appeal, they are typically countered by a more emotional appeal. A stock approach to arguing that ransoms should be paid is the “in their shoes” appeal. The method is very straightforward and simply involves asking a person whether or not she would want a ransom to be paid for her (or a loved one). Not surprisingly, most people would want the ransom to be paid, assuming doing so would save her (or her loved one). Sometimes the appeal is made explicitly in terms of emotions: “how would you feel if your loved one died because the government refuses to pay ransoms?” Obviously, any person would feel awful.

This method does have considerable appeal. The “in their shoes” appeal can be seen as similar to the golden rule approach (do unto others as you would have them do unto you). To be specific, the appeal is not to do unto others, but to base a policy on how one would want to be treated in that situation. If I would not want the policy applied to me (that is, I would want to be ransomed or have my loved one ransomed), then I should be morally opposed to the policy as a matter of consistency. This certainly makes sense: if I would not want a policy applied in my case, then I should (in general) not support that policy.

One obvious counter is that there seems to be a distinction between what a policy should be and whether or not a person would want that policy applied to herself. For example, some universities have a policy that if a student misses more than three classes, the student fails the course. Naturally, no student wants that policy to be applied to her (and most professors would not have wanted it applied to them when they were students), but this hardly suffices to show that the policy is wrong. As another example, a company might have a policy of not providing health insurance to part time employees. While the CEO would certainly not like the policy if she were part time, it does not follow that the policy must be a bad one. As such, policies need to be assessed not just in terms of how a person feels about them, but in terms of their merit or lack thereof.

Another obvious counter is to use the same approach, only with a modification. In response to the question “how would you feel if you were the hostage or she were a loved one?” one could ask “how would you feel if you or a loved one were taken hostage in an operation funded by ransom money?” Or “how would you feel if you or a loved one were taken hostage because the hostage takers learned that people would pay ransoms for hostages?” The answer would be, of course, that one would feel bad about that. However, while how one would feel about this can be useful in discussing the matter, it is not decisive. Settling the matter rationally does require considering more than just how people would feel—it requires looking at the matter with a degree of objectivity. That is, not just asking how people would feel, but what would be right and what would yield the best results in the practical sense.

 


Should You Attend a For-Profit College?

Posted in Business, Ethics, Law, Philosophy, Universities & Colleges by Michael LaBossiere on February 16, 2015

The rise of for-profit universities has given students increased choices when it comes to picking schools. Since college is rather expensive and schools vary in regards to the success of their graduates, it is wise to carefully consider the options before writing those checks. Or, more likely these days, going into debt.

While there is a popular view that the for-profit free-market will consistently create better goods and services at ever lower prices, it is wisest to accept facts over ideological theory. As such, when picking between public, non-profit, and for-profit schools one should look at the numbers. Fortunately, ProPublica has been engaged in crunching the numbers.

Today most people go to college in order to have better job prospects. As such, one rather important consideration is the likelihood of getting a job after graduation and the likely salary. While for-profit schools spent about $4.2 billion on recruiting and marketing in 2009 and paid their college presidents an average of $7.3 million per year, the typical graduate does rather poorly. According to the U.S. Department of Education, 74% of the programs at for-profit colleges produced graduates whose average pay is less than that of high-school dropouts. In contrast, graduates of non-profit and public colleges do better financially than high school graduates.

Another important consideration is the cost of education. While the free-market is supposed to result in higher quality services at lower prices and the myth of public education is that it creates low quality services at high prices, the for-profit schools are considerably more expensive than their non-profit and public competition. A two-year degree costs, on average, $35,000 at a for-profit school. The average community college offers that degree at a mere $8,300. In the case of four year degrees, the average is $63,000 at a for-profit and $52,000 for a “flagship” state college. For certificate programs, public colleges will set a student back $4,250 while a for-profit school will cost the student $19,806 on average. By these numbers, the public schools offer a better “product” at a much lower price—thus making public education the rational choice over the for-profit option.

Student debt and loans, which have been getting considerable attention in the media, are also a matter of consideration. The median debt of students at for-profit colleges is $32,700, and 96% of the students at such schools take out loans. At non-profit private colleges, the figures are $24,600 and 57%. For public colleges, the median debt is $20,000 and 48% of students take out loans. Only 13% of community college students take out loans (thanks, no doubt, to the relatively low cost of community college).

For those who are taxpayers, another point of concern is how much taxpayer money gets funneled into for-profit schools. In a typical year, the federal government provides $6 billion in Pell Grants and $16 billion in student loans to students attending for-profit colleges. In 2010 there were 2.4 million students enrolled in these schools. It is instructive to look at the breakdown of how the for-profits expend their money.

As noted above, the average salary of the president of a for-profit college was $7.3 million in 2009. The five highest paid presidents of non-profit colleges averaged $3 million and the five highest paid presidents at public colleges were paid $1 million.

The for-profit colleges also spent heavily on marketing: $4.2 billion on recruiting, marketing and admissions staffing in 2009. That year, thirty for-profit colleges employed 35,202 recruiters, which is about 1 recruiter per 49 students. As might be suspected, public schools do not spend that sort of money. My experience with recruiting at public schools is that a common approach is for a considerable amount of recruiting to fall to faculty—who do not, in general, get extra compensation for this extra work.

In terms of what is spent per student, for-profit schools average $2,050 per student per year. Public colleges spend, on average, $7,239 per student per year. Private non-profit schools spend the most, averaging $15,321 per student per year. This spending does seem to yield results: at for-profit schools only 20% of students complete the bachelor’s degree within four years. Public schools do somewhat better with 31% and private non-profits do best at 52%. As such, a public or non-profit school would be the better choice over the for-profit school.

Because so much public money gets funneled into for-profit, public and private schools, there has been a push for “gainful employment” regulation. The gist of this regulation is that schools will be graded based on the annual student loan payments of their graduates relative to their earnings. A school will be graded as failing if its graduates have annual student loan payments that exceed 12% of total earnings or 30% of discretionary earnings. The “danger zone” is 8-12% of total earnings or 20-30% of discretionary earnings. Currently, there are about 1,400 programs with about 840,000 enrolled students in the “danger zone” or worse. 99% of them are, shockingly enough, at for-profit schools.
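As a sketch of how such a grading rule would classify a program, here is a small Python function using just the thresholds stated above. The function is my own illustration, on one reading of how the two measures combine; it is not the regulation’s actual formula.

# A minimal sketch of the "gainful employment" grading rule. The
# thresholds come from the text; everything else is illustrative.
def grade_program(annual_loan_payment: float, total_earnings: float,
                  discretionary_earnings: float) -> str:
    """Grade a program by graduates' loan payments relative to earnings."""
    total_ratio = annual_loan_payment / total_earnings
    disc_ratio = annual_loan_payment / discretionary_earnings
    if total_ratio > 0.12 or disc_ratio > 0.30:
        return "failing"
    if total_ratio >= 0.08 or disc_ratio >= 0.20:
        return "danger zone"
    return "passing"

# Hypothetical program: graduates earn $25,000 ($10,000 discretionary)
# and pay $2,400 a year on loans. 9.6% of total earnings lands the
# program in the danger zone.
print(grade_program(2400, 25000, 10000))  # -> danger zone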

For those who speak of accountability, these regulations should seem quite reasonable. For those who like the free market, note that the regulation’s target is the federal government: the goal is to prevent the government from dumping more taxpayer money into failing programs. Schools will need to earn this money by succeeding.

However, this is not the first attempt to link federal money to success. In 2010 regulations were put in place that included a requirement that a school have at least 35% of its students actively repaying their student loans. As might be guessed, for-profit schools are the leaders in loan defaults. In 2012 lobbyists for the for-profit schools brought a lawsuit in federal court, and the judge agreed with them and struck down the requirement.

In November of 2014 an association of for-profit colleges brought a lawsuit against the current gainful employment requirements, presumably on the principle that it is better to pay lawyers and lobbyists than to address the problems with their educational model. If this lawsuit succeeds, which seems likely, for-profits will be rather less accountable, and this will make things worse for their students.

Based on the numbers, you should definitely not attend the typical for-profit college. On average, it will cost you more, you will take on more debt, and you will make less money. For the most education at the least cost, the two-year community college is the best deal. For a four-year degree, the public school will cost less, but private non-profits generally have better results. But, of course, much depends on you.

 


Augmented Soldier Ethics III: Pharmaceuticals

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on February 13, 2015

Steve Rogers’ physical transformation, from a reprint of Captain America Comics #1 (May 1941). Art by Joe Simon and Jack Kirby. (Photo credit: Wikipedia)

Humans have many limitations that make them less than ideal as weapons of war. For example, we get tired and need sleep. As such, it is no surprise that militaries have sought various ways to augment humans to counter these weaknesses. Most obviously, militaries routinely make use of caffeine and amphetamines to keep their soldiers awake and alert. There have also been experiments with other performance-enhancing drugs.

In science fiction, militaries go far beyond these sorts of drugs and develop far more potent pharmaceuticals. These chemicals tend to split into two broad categories. The first consists of short-term enhancements (what gamers refer to as “buffs”) that address a human weakness or provide augmented abilities. In the real world, the above-mentioned caffeine and amphetamines are short-term drugs. In fiction, the classic sci-fi role-playing game Traveller featured the aptly (though generically) named combat drug. This drug would boost the user’s strength and endurance for about ten minutes. Other fictional drugs have far more dramatic effects, such as the Venom drug used by the super villain Bane. Given that militaries already use short-term enhancers, it is certainly reasonable to think that they are, and will continue to be, interested in more advanced enhancers of the sort considered in science fiction.

The second category is that of the long-term enhancers. These are chemicals that enable or provide long-lasting effects. An obvious real-world example is steroids: these allow the user to develop greater muscle mass and increased strength. In fiction, the most famous example is probably the super-soldier serum that was used to transform Steve Rogers into Captain America.

Since the advantages of improved soldiers are obvious, it seems reasonable to think that militaries would be rather interested in the development of effective (and safe) long-term enhancers. It does, of course, seem unlikely that there will be a super-soldier serum in the near future, but chemicals aimed at improving attention span, alertness, memory, intelligence, endurance, pain tolerance and such would be of great interest to militaries.

As might be suspected, these chemical enhancers do raise moral concerns that are certainly worth considering. While some might see discussing enhancers that do not yet (as far as we know) exist as a waste of time, there does seem to be a real advantage in considering ethical issues in advance—this is analogous to planning for a problem before it happens rather than waiting for it to occur and then dealing with it.

One obvious point of concern, especially given the record of unethical experimentation, is that enhancers will be used on soldiers without their informed consent. Since this is a general issue, I addressed it in its own essay and reached the obvious conclusion: in general, informed consent is morally required. As such, the following discussion assumes that the soldiers using the enhancers have been honestly informed of the nature of the enhancers and have given their consent.

When discussing the ethics of enhancers, it might be useful to consider real world cases in which enhancers are used. One obvious example is that of professional sports. While Major League Baseball has seen many cases of athletes using such enhancers, they are used worldwide and in many sports, from running to gymnastics. In the case of sports, one of the main reasons certain enhancers, such as steroids, are considered unethical is that they provide the athlete with an unfair advantage.

While this is a legitimate concern in sports, it does not apply to war. After all, there is no moral requirement for a fair competition in battle. Rather, one important goal is to gain every advantage over the enemy in order to win. As such, the fact that enhancers would provide an “unfair” advantage in war does not make them immoral. One can, of course, discuss the relative morality of the sides involved in the war, but this is another matter.

A second reason why the use of enhancers is regarded as wrong in sports is that they typically have rather harmful side effects. Steroids, for example, do rather awful things to the human body and brain. Given that even aspirin has potentially harmful side effects, it seems rather likely that military-grade enhancers will have various harmful side effects. These might include addiction, psychological issues, organ damage, death, and perhaps even new side effects yet to be observed in medicine. Given the potential for harm, a rather obvious way to approach the ethics of this matter is utilitarianism. That is, the benefits of the enhancers would need to be weighed against the harm caused by their use.

This assessment could be done with a narrow limit: the harms of the enhancer could be weighed against the benefits provided to the soldier. For example, an enhancer that boosted a combat pilot’s alertness and significantly increased her reaction speed while having the potential to cause short-term insomnia and diarrhea would seem to be morally (and pragmatically) fine given the relatively low harms for significant gains. As another example, a drug that greatly boosted a soldier’s long-term endurance while creating a significant risk of a stroke or heart attack would seem to be morally and pragmatically problematic.

The assessment could also be done more broadly by taking into account ever-wider considerations. For example, the harms of an enhancer could be weighed against the importance of a specific mission and the contribution the enhancer would make to the success of the mission. So, if a powerful drug with terrible side-effects was critical to an important mission, its use could be morally justified in the same way that taking any risk for such an objective can be justified. As another example, the harms of an enhancer could be weighed against the contribution its general use would make to the war. So, a drug that increased the effectiveness of soldiers, yet cut their life expectancy, could be justified by its ability to shorten a war. As a final example, there is also the broader moral concern about the ethics of the conflict itself. So, the use of a dangerous enhancer by soldiers fighting for a morally good cause could be justified by that cause (using the notion that the consequences justify the means).
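
Since the narrow and broad assessments are the same weighing done over ever-wider scopes, a toy tally may help make the structure plain. Everything in this Python sketch is invented for illustration: the scores, the probabilities, and the “mission value” term are mine, and real moral assessment is, of course, nothing like this tidy.

def net_value(benefits, harms):
    # Benefits are flat scores; harms are (probability, severity) pairs.
    return sum(benefits) - sum(p * s for p, s in harms)

# Narrow scope: one pilot's gains weighed against her side effects.
pilot = net_value(
    benefits=[8, 6],             # alertness boost, faster reaction speed
    harms=[(0.3, 2), (0.2, 1)],  # short-term insomnia, diarrhea
)

# Broader scope: the same tally plus an invented "mission value" term.
mission = pilot + 10
print(pilot, mission)  # 13.2 and 23.2: positive on both scopes

The point of the sketch is not the numbers but the shape: broadening the assessment just adds terms to the same calculation, which is why the same enhancer can fail the narrow test yet pass a broader one (or vice versa).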

There are, of course, those who reject using utilitarian calculations as the basis for moral assessment. For example, there are those who believe (often on religious grounds) that the use of pharmaceuticals is always wrong (be they used for enhancement, recreation or treatment). Obviously enough, if the use of pharmaceuticals is wrong in general, then their specific application in the military context would also be wrong. The challenge is, of course, to show that the use of pharmaceuticals is simply wrong, regardless of the consequences.

In general, it would seem that the military use of enhancers should be assessed morally on utilitarian grounds, weighing the benefits of the enhancers against the harm done to the soldiers.

 


Obesity, Disability, & Accommodation

Posted in Ethics, Philosophy, Politics by Michael LaBossiere on February 11, 2015

It is estimated that almost 30% of humans are overweight or obese, and this percentage seems likely to increase. Given this large number of large people, it is not surprising that various moral and legal issues have arisen regarding the accommodation of the obese. It is also not surprising that people arguing in favor of accommodating the obese contend that obesity is a disability. The legal issues are, of course, simply a matter of law and are settled by lawsuits. Since I am not a lawyer, I will focus on the ethics of the matter and will address two main issues. The first is whether or not obesity is a disability. The second is whether or not obesity is a disability that morally justifies making accommodations.

On the face of it, obesity is disabling. That is, a person who is obese will have reduced capabilities relative to a person who is not. Compared to a non-obese person, an obese person will tend to have much lower endurance, less speed, less mobility, less flexibility, and so on. An obese person will also tend to suffer from more health issues and be at greater risk for various illnesses. Because of this, an obese person might find it difficult or impossible to perform certain job tasks, such as those involving strenuous physical activity or walking moderate distances.

The larger size and weight of obese individuals also present challenges regarding such things as standard-sized chairs, doors, equipment, clothing and vehicles. For example, an obese person might be unable to operate a forklift with the standard seating and safety belt. As another example, an obese person might not be able to fit in one airline seat and instead require two (or more). As a third example, an obese student might not be able to fit into a standard classroom desk. As such, obesity could make it difficult or impossible for a person to work or to make use of certain goods and services.

Obviously enough, obese people are not the only ones who are disabled. There are people with short term disabilities due to illness or injury. I experienced this myself when I had a complete quadriceps tendon tear—my left leg was locked in an immobilizer for weeks, then all but useless for months. With this injury, I was considerably slower, had difficulty with stairs, could not carry heavy loads, and could not drive. There are also people who have long term or permanent disabilities, such as people who are paralyzed, blind, or are missing limbs due to accidents or war. These people can face considerable challenges in performing tasks at work and in life in general. For example, a person who is permanently confined to a wheelchair due to a spinal injury will find navigating stairs or working in the woods or working at muddy construction sites rather challenging.

In general, there seems to be no moral problem with requiring employers, businesses, schools and so on to make reasonable accommodations for people who are disabled. The basic principle that justifies this is the principle of equal treatment: people should be afforded equal access, even when doing so requires some additional accommodation. As such, while having ramps in addition to stairs costs more, it is a reasonable requirement given that some people cannot fully use their legs. Given that the obese are disabled, it seems easy enough to conclude that they should be accommodated just as the blind and the paralyzed are accommodated.

Naturally, it could be argued that there is no moral obligation to provide accommodations for anyone. If this is the case, then there would be no obligation to accommodate the obese. However, it would seem to be rather difficult to prove, for example, that disabled veterans returning to school should just have to work their way up the steps in their wheelchairs. For the sake of the discussion to follow I will assume that there is a moral obligation to accommodate the disabled. However, there is still the question of whether or not this should apply to the obese.

One obvious way to argue against accommodations for the obese is to argue that there is a morally relevant difference between those disabled by obesity and those disabled by injury, birth defects, etc. One difference that people often point to is that obesity is a matter of choice and other disabilities are not. That is, a person’s decisions resulted in her being fat and hence she is responsible in a way a person crippled in an accident is not.

It could be pointed out that some people who are disabled by injury were disabled as the result of their own decisions. For example, a person might have driven while drunk and ended up paralyzed. But, of course, the person would not be denied access to handicapped parking or the use of automatic doors because his disability was self-inflicted. The same reasoning could be used for the obese: though their disability is self-inflicted, it is still a disability and thus should be accommodated.

The easy and obvious reply to this is that there is still a relevant difference. While a person crippled in a self-inflicted drunken crash caused his own disability, there is little he can do about that disability. He can change his diet and exercise, but this will not restore functionality to his legs. That is, he is permanently stuck with the results of that decision. In contrast, an obese person has to maintain her obesity. While some people are genetically predisposed to being obese, how much a person eats and how much she exercises is a matter of choice. Since the obese could reduce their weight, the rest of us are under no obligation to provide special accommodations for them, because they could take reasonable steps to remove the need for such accommodations. To use an analogy, imagine someone who insisted that she be provided with a Seeing Eye dog because she wants to wear opaque glasses all the time. These glasses would render her disabled, since she would be blind. However, since she does not need to wear such glasses and could easily do without them, there is no obligation to provide her with the dog. In contrast, a person who is actually blind cannot just get new eyes, and hence it is reasonable for society to accommodate her.

It can be replied that obesity is not a matter of choice. One approach would be to argue for metaphysical determinism—the obese are obese by necessity and could not be otherwise. The easy reply here would be to say that we are, sadly enough, metaphysically determined not to provide accommodations.

A more sensible approach would be to argue that obesity is, in some cases, a medical condition that is beyond a person's ability to control: the person lacks agency in regard to his eating and exercise. The most likely avenue of support for this claim would come from neuroscience. If it can be shown that people are incapable of controlling their weight, then obesity would be a true disability, on par with having one’s arm blasted off by an IED or being born with a degenerative neural disorder. This would, of course, require abandoning agency (at least in this context).

It could also be argued that a person does have some choice, but that acting on the choice would be so difficult that it is more reasonable for society to accommodate the individual than it is for the individual to struggle to not be obese. To use an analogy, a disabled person might be able to regain enough functionality to operate in a “mostly normal” way, but doing so might require agonizing effort that is beyond what could be expected of a person. In such a case, one would surely not begrudge the person the accommodations. So, it could be argued that since it is easier for society to accommodate the obese than it is for the obese to not be obese, society should do so.

There is, however, a legitimate concern here. If the principle is adopted that society must accommodate the obese because they are disabled and cannot help their obesity, then others could appeal to the same sort of principle and perhaps over-extend the realm of disabilities that must be accommodated. For example, people who are addicted to drugs could make a similar argument: they are disabled, yet their addiction is not a matter of choice. As another example, people who are irresponsible or lazy could claim that they, too, are disabled and should be accommodated on the grounds that they cannot be other than they are. But perhaps the line can be drawn in a principled way, so that the obese count as disabled while the others do not.

 
