A Philosopher's Blog

The Confederacy, License Plates & Free Speech

Posted in Ethics, Law, Philosophy by Michael LaBossiere on March 27, 2015

Early in 2015 some folks in my adopted state of Florida wanted three Confederate veterans to become members of the Veterans’ Hall of Fame. Despite the efforts of the Florida Sons of Confederate Veterans, the initial attempt failed on the grounds that the Confederate veterans were not United States veterans. Not to be outdone, the Texas Sons of Confederate Veterans want to have an official Texas license plate featuring the Confederate battle flag. While custom license plates are allowed in the United States, the states generally review proposed plates. The Texas Department of Motor Vehicles rejected the proposed plate on the grounds that “a significant portion of the public associate[s] the Confederate flag with organizations” expressing hatred for minorities. Those proposing the plate claim that this violates their rights. This has generated a legal battle that has made it to the US Supreme Court.

The legal issue, which has been cast as a battle over free speech, is certainly interesting. However, my main concern is with the ethics of the matter. This is, obviously enough, also a battle over rights.

Looked at in terms of the right of free expression, there are two main lines of contention. The first is against allowing the plate. One way to look at an approved license plate is that it is a means of conveying a message that the state agrees with. Those opposed to the plate have argued that if the state is forced to allow the plate to be issued, the state will be compelled to be associated with a message that the government does not wish to be associated with. In free speech terms, this could be seen as forcing the state to express or facilitate a view that it does not accept.

This does have a certain appeal since the state can be seen as representing the people (or, perhaps, the majority of the people). If a view is significantly offensive to a significant number of citizens (which is, I admit, vague), then the state could reasonably decline to accept a license plate expressing or associated with that view. So, to give some examples, the state could justly decline Nazi plates, pornographic plates, and plates featuring racist or sexist images. Given that the Confederate flag represents slavery and racism to many, it seems reasonable that the state not issue such a plate. Citizens can, of course, cover their cars in Confederate flags and thus express their views.

The second line of contention is in favor of the plate. One obvious line of reasoning is based on the right of free expression: citizens should have the right to express their views via license plates. These plates, one might contend, do not express the views of the state—they express the view of the person who purchased the plate.

In terms of the concerns about a plate being offensive, Granvel Block argued that not allowing a plate with the Confederate flag would be “as unreasonable” as the state forbidding the use of the University of Texas logo on a plate “because Texas A&M graduates didn’t care for it.” On the one hand, Block has made a reasonable point: if people disliking an image is a legitimate basis for forbidding its use on a plate, then any image could end up being forbidden. It would, as Block noted, be absurd to forbid schools from having custom plates because rival schools do not like them.

On the other hand, there seems to be an important difference between the logo of a public university and the battle flag of the Confederacy. While some Texas A&M graduates might not like the University of Texas, the University of Texas’ logo does not represent states that went to war against the United States in order to defend slavery. So, while the state should not forbid plates merely because some people do not like them, it does seem reasonable to forbid a plate that includes the flag representing, as state Senator Royce West said, “…a legalized system of involuntary servitude, dehumanization, rape, mass murder…”

The lawyer representing the Sons of Confederate Veterans, R. James George Jr., has presented an interesting line of reasoning. He notes, correctly, that Texas has a state holiday that honors veterans of the Confederacy, that there are monuments honoring Confederate veterans and that the gift shop in the capitol sells Confederate memorabilia. From this he infers that the Department of Motor Vehicles should follow the state legislature and approve the plate.

This argument, which is an appeal to consistency, does have some weight. After all, the state certainly seems to express its support for Confederate veterans (and even the Confederacy) and this license plate is consistent with this support. To refuse the license plate on the grounds that the state does not wish to express support for what the Confederate flag stands for is certainly inconsistent with having a state holiday for Confederate veterans—the state seems quite comfortable with this association.

There is, of course, the broader moral issue of whether or not the state should have a state holiday for Confederate veterans, etc. That said, any arguments given in support of what the state already does in regards to the Confederacy would seem to also support the acceptance of the plate—they seem to be linked. So, if the plate is to be rejected, these other practices must also be rejected on the same grounds. But, if these other practices are to be maintained, then the plate would seem to fit right in and thus, on this condition, also be accepted.

I am somewhat divided on this matter. One view I find appealing favors freedom of expression: any license plate design that does not interfere with identifying the license number and state should be allowed—consistent with copyright law, of course. This would be consistent and would not require the state to make any political or value judgments. It would, of course, need to be made clear that the plates do not necessarily express the official positions of the government.

The obvious problem with such total freedom is that people would create horrible plates featuring pornography, racism, sexism, and so on. This could be addressed by appealing to existing laws—the state would not approve or reject a plate as such, but a plate could be rejected for violating, for example, laws against making threats or inciting violence. The obvious worry is that laws would then be passed to restrict plates that some people did not like, such as plates endorsing atheism or claiming that climate change is real. But, this is not a problem unique to license plates. After all, it has been alleged that officials in my adopted state of Florida have banned the use of the term ‘climate change.’

Another view I find appealing is to avoid all controversy by getting rid of custom plates. Each state might have a neutral, approved image (such as a loon, orange or road runner) or the plates might simply have the number/letters and the state name. This would be consistent—no one gets a custom plate. To me, this would be no big deal. But, of course, I always just get the cheapest license plate option—which is the default state plate. However, some people regard the license plate as important and their view is worth considering.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter


Posted in Ethics, Law, Philosophy by Michael LaBossiere on March 25, 2015
Crime and Punishment (Photo credit: Wikipedia)

While the notion of punishing machines for misdeeds has received some attention in science fiction, it seems worthwhile to take a brief philosophical look at this matter. This is because the future, or so some rather smart people claim, will see the rise of intelligent machines—machines that might take actions that would be considered misdeeds or crimes if committed by a human (such as the oft-predicted genocide).

In general, punishment is aimed at one or more of the following goals: retribution, rehabilitation, or deterrence. Each of these goals will be considered in turn in the context of machines.

Roughly put, punishment for the purpose of retribution is aimed at paying an agent back for wrongdoing. This can be seen as a form of balancing the books: the punishment inflicted on the agent is supposed to pay the debt it has incurred by its misdeed. Reparation can, to be a bit sloppy, be included under retribution—at least in the sense of the repayment of a debt incurred by the commission of a misdeed.

While a machine can be damaged or destroyed, there is clearly the question of whether it can be the target of retribution. After all, while a human might kick her car for breaking down on her or smash his can opener for cutting his finger, it would be odd to consider this retributive punishment. This is because retribution would seem to require that a wrong has been done by an agent, which is different from the mere infliction of harm. Intuitively, a piece of glass can cut my foot, but it cannot wrong me.

If a machine can be an agent, which was discussed in an earlier essay, then it would seem to be able to do wrongful deeds and thus be a potential candidate for retribution. However, even if a machine had agency, there is still the question of whether or not retribution would really apply. After all, retribution requires more than just agency on the part of the target. It also seems to require that the target can suffer from the payback. On the face of it, a machine that could not suffer would not be subject to retribution—since retribution seems to be based on doing a “righteous wrong” to the target. To illustrate, suppose that an android injured a human, costing him his left eye. In retribution, the android’s left eye is removed. But, the android does not suffer—it does not feel any pain and is not bothered by the removal of its eye. As such, the retribution would be pointless—the books would not be balanced.

This could be countered by arguing that the target of the retribution need not suffer—what is required is merely the right sort of balancing of the books, so to speak. So, in the android case, removal of the android’s eye would suffice, even if the android did not suffer. This does have some appeal since retribution against humans does not always require that the human suffer. For example, a human might break another human’s iPad and have her iPad broken in turn, but not care at all. The requirements of retribution would seem to have been met, despite the lack of suffering.

Punishment for rehabilitation is intended to transform wrongdoers so that they will no longer be inclined to engage in the wrongful behavior that incurred the punishment. This differs from punishment aimed at deterrence, which aims at providing the target with a reason not to engage in the misdeed in the future. Rehabilitation is also aimed at the agent who did the misdeed, whereas punishment for the sake of deterrence often aims at affecting others as well.

Obviously enough, a machine that lacks agency cannot be subject to rehabilitative punishment—it cannot “earn” such punishment by its misdeeds and, presumably, cannot have its behavioral inclinations corrected by such punishment.

To use an obvious example, if a computer crashes and destroys a file that a person had been working on for hours, punishing the computer in an attempt to rehabilitate it would be pointless. Not being an agent, it did not “earn” the punishment and punishment will not incline it to crash less in the future.

A machine that possesses agency could “earn” punishment by its misdeeds. It also seems possible to imagine a machine that could be rehabilitated by punishment. For example, one could imagine a robot dog that could be trained in the same way as a real dog—after leaking oil in the house or biting the robo-cat and being scolded, it would learn not to do those misdeeds again.
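The robot-dog example can be given a minimal sketch in code. This is purely illustrative and not from the essay: the `RoboDog` class, its action names, and the `scold` method are hypothetical, standing in for whatever learning mechanism such a machine would actually use. The idea is simply that "punishment" lowers the machine's inclination toward the scolded behavior.

```python
# A hypothetical sketch of rehabilitation-by-punishment for a machine.
# All names here (RoboDog, scold, propensity values) are invented for
# illustration; a real trainable robot would be far more complex.

class RoboDog:
    def __init__(self):
        # Each possible action starts with the same propensity of being done.
        self.propensity = {"leak_oil": 1.0, "bite_robocat": 1.0, "sit": 1.0}

    def scold(self, action):
        # "Punishment" lowers the propensity of the scolded action,
        # analogous to scolding a real dog after a misdeed.
        self.propensity[action] = max(0.0, self.propensity[action] - 0.5)

    def likely_actions(self):
        # The misdeeds the dog is still inclined to perform.
        return [a for a, p in self.propensity.items() if p > 0.0]

dog = RoboDog()
dog.scold("leak_oil")
dog.scold("leak_oil")
print(dog.likely_actions())  # the scolded misdeed has dropped out
```

On this toy model, repeated scolding "rehabilitates" the robo-dog: the punished behavior is no longer among the actions it is inclined to take, which is exactly what rehabilitation aims at.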

It could be argued that it would be better, both morally and practically, to build machines that would learn without punishment or to teach them without punishing them. After all, though organic beings seem to be wired in a way that requires that we be trained with pleasure and pain (as Aristotle would argue), there might be no reason that our machine creations would need to be the same way. But, perhaps, it is not just a matter of the organic—perhaps intelligence and agency require the capacity for pleasure and pain. Or perhaps not. Or it might simply be the only way that we know how to teach—we will be, by our nature, cruel teachers of our machine children.

Then again, we might be inclined to regard a machine that does misdeeds as being defective and in need of repair rather than punishment. If so, such machines would be “refurbished” or reprogrammed rather than rehabilitated by punishment. There are those who think the same of human beings—and this would raise the same sort of issues about how agents should be treated.

The purpose of deterrence is to motivate the agent who did the misdeed and/or other agents not to commit that deed. In the case of humans, people argue in favor of capital punishment because of its alleged deterrence value: if the state kills people for certain crimes, people are less likely to commit those crimes.

As with other forms of punishment, deterrence requires agency: the punished target must merit the punishment and the other targets must be capable of changing their actions in response to that punishment.

Deterrence, obviously enough, does not work in regards to non-agents. For example, if a computer crashes and wipes out a file a person has been laboring on for hours, punishing it will not deter it. Smashing it in front of other computers will not deter them.

A machine that had agency could “earn” such punishment by its misdeeds and could, in theory, be deterred. The punishment could also deter other machines. For example, imagine a combat robot that performed poorly in its mission (or showed robo-cowardice). Punishing it could deter it from doing that again, and it could serve as a warning, and thus a deterrence, to other combat robots.

Punishment for the sake of deterrence raises the same sort of issues as punishment aimed at rehabilitation, such as the notion that it might be preferable to repair machines that engage in misdeeds rather than punishing them. The main differences are, of course, that deterrence is not aimed at making the target inclined to behave well, just to disincline it from behaving badly and that deterrence is also aimed at those who have not committed the misdeed.



Florida’s Bathroom Law

Posted in Ethics, Philosophy by Michael LaBossiere on March 23, 2015

Being from Maine, I got accustomed to being asked about the cold, lobsters, moose and Stephen King. Living in Florida, I have become accustomed to being asked about why my adopted state is so insane. Most recently, I was asked about the bathroom bill making its way through the House.

The bathroom bill, officially known as HB 583, proposes that it should be a second-degree misdemeanor to “knowingly and willfully” enter a public facility restricted to members “of the other biological sex.” The bill proposes a maximum penalty of 60 days in jail and a $500 fine.

Some opponents of the bill contend that it is aimed at discriminating against transgender people. Some parts of Florida have laws permitting people to use public facilities based on the gender they identify with rather than their biological sex.

Obviously enough, proponents of the bill are not claiming that they are motivated by a dislike of transgender people. Rather, the main argument used to support the bill centers on the claim that it is necessary to protect women and girls. The idea seems to be that women and girls will be assaulted or raped by males who will gain access to locker rooms and bathrooms by claiming they have a right to enter such places because they are transgender.

Opponents of the bill have pointed out the obvious reply to this argument: there are already laws against assault and rape. There are also laws against lewd and lascivious behavior. As such, there does not seem to be a need for this proposed law if its purpose is to protect women and girls from such misdeeds. To use an analogy, there is no need to pass a law making it a crime for a man to commit murder while dressed as a woman—murder is already illegal.

It could be countered that the bill is still useful because it would add yet another offense that a perpetrator could be charged with. While this does have a certain appeal, the idea of creating laws just to stack offenses seems morally problematic—it seems that a better policy would be to craft laws that adequately handle the “base” offenses.

It could also be claimed that the bill is needed in order to provide an initial line of defense. After all, one might argue, it would be better that a male never got into the bathroom or locker room to commit his misdeeds and this bill will prevent this from occurring.

The obvious reply is that the bill would work in this manner only if the facilities are guarded by people capable of turning such masquerading males away at the door. These guards would presumably need to have the authority to check the “plumbing” of anyone desiring entry to the facility. After all, it is not always easy to distinguish between a male and a female by mere outward appearance. Of course, if such guards are going to be posted, then they might as well be posted inside the facilities themselves, thus providing much better protection. As such, if the goal is to make such facilities safe, then a better bill would mandate guards for such facilities.

Opponents of the bill do consider the dangers of assault. However, they contend that it is transgender people who are most likely to be harmed if they are compelled to use facilities for their biological sex. It would certainly be ironic if a bill (allegedly) aimed at protecting people turned out to lead to more harm.

A second line of argumentation focuses on the privacy rights of biological women. “Women have an expectation of privacy,” said Anthony Verdugo of Christian Family Coalition Florida. “My wife does not want to be in a public facility with a man, and that is her right. … No statute in Florida right now specifically prohibits a person of one sex from entering a facility intended for use by a person of another sex.”

This does have a certain appeal. When I was in high school, I and some other runners were changing after a late practice and someone had “neglected” to tell us that basketball cheerleaders from another school would be coming through the corridor directly off the locker room. Being a typical immature nerd, I was rather embarrassed by this exposure. I do recall that one of my more “outgoing” fellow runners offered up a “free show” before being subdued with a rattail to the groin. As such, I do get that women and girls would not want males in their bathrooms or locker rooms “inspecting their goods.” That said, there are some rather obvious replies to this concern.

The first reply is that it seems likely that transgender biological males that identify as female would not be any more interested in checking out the “goods” of biological females than would biological females. But, obviously, there is the concern that such biological males might be bi-sexual or interested only in females. This leads to the second reply.

The second reply is that the law obviously does not protect females from biological females that are bi-sexual or homosexual. After all, a lesbian can openly go into the women’s locker room or bathroom. As such, the privacy of women (if privacy is taken to include the right to not be seen while naked by people who might be sexually attracted to one) is always potentially threatened.

Though some might now be considering bills aimed at lesbians and bi-sexuals in order to protect the privacy of straight women, there is really no need of these bills—or HB 583. After all, there are already laws against harassment and other such bad behavior.

It might be countered that merely being seen by a biological male in such places is sufficient to count as a violation of privacy, even if the male is well-behaved and not sexually interested. There are, after all, laws (allegedly) designed to protect women from the prying eyes of men, such as some parts of Sharia law. However, it would seem odd to say that a woman should be protected by law merely from the eyes of a male when the male identifies as a woman and is not engaged in what would be reasonably regarded as bad behavior (like staring through the gaps in a stall to check out a woman).

Switching gears a bit, in an interesting coincidence I was thinking about this essay when I found that the men’s bathroom at the FSU track was locked, but the women’s bathroom was open. The people in ROTC were doing their track workout at the same time and the male cadets were using the women’s bathroom—since the alternative was public urination. Had this bill been law, the cadets would have been subject to arrest, jail and a fine for their crime.

For athletes, this sort of bathroom switching is not at all unusual. While training or at competitions, people often find the facilities closed or overburdened, so it is common for people to use whatever facilities are available—almost always with no problems or issues. For example, the Women’s Distance Festival is a classic race in Tallahassee that is open to men and women, but has a very large female turnout. On that day, the men get a porta-pottie and the men’s room is used by the women—which would be illegal if this bill passed. I have also lost count of the times that female runners have used the men’s room because the line to the women’s facilities was way too long. No one cared, no one was assaulted and no one was arrested. But if this bill became law, that sort of thing would be a crime.

My considered view of this bill is that there is no need for it. The sort of bad behavior that it is aimed to counter is already illegal and it would criminalize behavior that is not actually harmful (like the male ROTC cadets using the only open bathroom at the track).



Free Will, Materialism and Dualism

Posted in Metaphysics, Philosophy by Michael LaBossiere on March 20, 2015
Drawing from René Descartes’ (1596-1650) “meditations métaphysiques” explaining the function of the pineal gland. (Photo credit: Wikipedia)

During the Modern era, philosophers such as Descartes and Locke developed the notions of material substance and immaterial substance. Material substance, or matter, was primarily defined as being extended and spatially located. Descartes, and other thinkers, also took the view that material substance could not think. Immaterial substance was taken to lack extension and to not possess a spatial location. Most importantly, immaterial substance was regarded as having thought as its defining attribute.  While these philosophers are long dead, the influence of their concepts lives on in philosophy and science.

In philosophy, people still draw the classic distinction between dualists and materialists. A dualist holds that a living person consists of a material body and an immaterial mind. The materialist denies the existence of the immaterial mind and accepts only matter. There are also phenomenalists who contend that all that exists is mental. Materialism of this sort is popular both in contemporary philosophy and science. Dualism is still popular with the general population in that many people believe in a non-material soul that is distinct from the body.

Because of the history of dualism, free will is often linked to the immaterial mind. As such, it is no surprise that people who reject the immaterial mind engage in the following reasoning: an immaterial mind is necessary for free will. There is no immaterial mind. So, there is no free will.

Looked at positively, materialists tend to regard their materialism as entailing a lack of free will. Thomas Hobbes, a materialist from the Modern era, accepted determinism as part of his materialism. Taking the materialist path, the argument against free will is that if the mind is material, then there is no free will. The mind is material, so there is no free will.

Interestingly enough, those who accepted the immaterial mind tended to believe that only an immaterial substance could think—so they inferred the existence of such a mind on the grounds that they thought. Materialists most often accept the mind, but cast it in physical terms. That is, people do think and feel, they just do not do so via the mysterious quivering of immaterial ectoplasm. Some materialists go so far as to reject the mind—perhaps ending up in behaviorism or eliminative materialism.

Julien Offray de La Mettrie was one rather forward-looking materialist. In 1747 he published his work Man the Machine. In this work he claims that philosophers should be like engineers who analyze the mind. Unlike many of the thinkers of his time, he seemed to understand the implications of mechanism, namely that it seemed to entail determinism and reductionism. A few centuries later, this sort of view is rather popular in the sciences and philosophy: since materialism is true and humans are biological mechanisms, there is no free will and the mind can be reduced to (explained entirely in terms of) its physical operations (or functions).

One interesting mistake that seems to drive this view is the often uncritical assumption that materialism entails the impossibility of free will. As noted above, this rests on the notion that free will requires an immaterial mind. This is, perhaps, because such a mind is said to be exempt from the laws that run the material universe.

One part of the mistake is a failure to realize that being incorporeal is not a sufficient condition for free will. One of Hume’s many interesting insights was that if immaterial substance exists, then it would be like material substance. When discussing the possibility of immortality, he claims that nature uses substance like clay: shaping it into various forms, then reshaping the matter into new forms so that the same matter can successively make up the bodies of living creatures. By analogy, an immaterial substance could successively make up the minds of living creatures—the substance would not be created or destroyed, it would merely change form. If his reasoning holds, it would seem that if material substance is not free, then immaterial substance would also not be free. Leibniz, who believed that reality was entirely mental (composed of monads), accepted a form of determinism. This determinism, though it has some problems, seems entirely consistent with his immaterialism (that everything is mental). This should hardly be surprising, since being immaterial does not entail that something has free will—the two are rather distinct attributes.

Another part of the mistake is the uncritical assumption that materialism entails a lack of freedom. Naturally, if matter is defined as being deterministic and lacking in freedom, then materialism would (by begging the question) entail a lack of freedom. Likewise, if matter is defined (as many thinkers did) as being incapable of thought, then it would follow (by begging the question) that no material being could think. Just as it should not be assumed that matter cannot think, it should also not be assumed that a material being must lack free will. Looked at another way, it should not be assumed that being incorporeal is a necessary condition for free will.

What, obviously enough, seems to have driven the error is the conflation of the incorporeal with freedom and the material with determinism (or lack of freedom). Behind this is, also obviously enough, the assumption that the incorporeal is exempt from the laws that impose harsh determinism on matter. But, if it is accepted that a purely material being can think (thus denying the assumption that only the immaterial can think) it would seem to be acceptable to consider that such a being could also be free (thus denying the assumption that only the immaterial can be free).



Androids, Autonomy & Agency

Posted in Ethics, Metaphysics, Philosophy, Technology by Michael LaBossiere on March 18, 2015
Blade Runner (Photo credit: Wikipedia)

Philosophers have long speculated about the subjects of autonomy and agency, but the rise of autonomous systems has made these speculations ever more important. Keeping things fairly simple, an autonomous system is one that is capable of operating independent of direct control. Autonomy comes in degrees in terms of the extent of the independence and the complexity of the operations. It is, obviously, the capacity for independent operation that distinguishes autonomous systems from those controlled externally.

Simple toys provide basic examples of the distinction. A wind-up mouse toy has a degree of autonomy: once wound and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy—a puppeteer must control it. Robots provide examples of rather more complex autonomous systems. Google’s driverless car is an example of a relatively advanced autonomous machine—once programmed and deployed, it will be able to drive itself to its destination. A normal car is an example of a non-autonomous system—the driver controls it directly. Some machines allow for both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target and then an operator can take direct control.
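The drone example, where a machine switches between following its program and obeying an operator, can be given a minimal sketch. This is a toy model, not a real drone API: the `Drone` class and its method names are invented for illustration. The point is just the structural difference between autonomous operation (acting from a stored plan) and external control (an operator's command preempting that plan).

```python
# A hypothetical sketch of a machine that supports both autonomous and
# non-autonomous operation. All names are invented for illustration.

class Drone:
    def __init__(self, flight_plan):
        self.flight_plan = list(flight_plan)  # pre-programmed waypoints
        self.operator_command = None          # external control, when present

    def take_control(self, command):
        # An operator overrides autonomy with direct external control.
        self.operator_command = command

    def release_control(self):
        # Control returns to the stored program.
        self.operator_command = None

    def next_move(self):
        # Direct control preempts the program; otherwise the drone
        # operates autonomously from its flight plan.
        if self.operator_command is not None:
            return self.operator_command
        return self.flight_plan.pop(0) if self.flight_plan else "hover"

drone = Drone(["north", "east"])
print(drone.next_move())      # autonomous: follows the plan -> "north"
drone.take_control("return")
print(drone.next_move())      # operator control preempts the plan -> "return"
```

On this sketch, the wind-up toy corresponds to a `Drone` that is never taken over, and the puppet corresponds to one that only ever acts on `operator_command`.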

Autonomy, at least in this context, is quite distinct from agency. Autonomy is the capacity to operate (in some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one’s actions. There is clearly a connection between autonomy and moral agency: moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. A puppet is, obviously, not accountable for what the puppeteer makes it do.

While autonomy seems necessary for agency, it is clearly not sufficient—while all agents have some autonomy, not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy, but has no agency. A robot drone following a pre-programmed flight plan has a degree of autonomy, but would lack agency—if it collided with a plane it would not be morally responsible. The usual reason why such a machine would not be an agent is that it lacks the capacity to decide. Or, put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.

One obvious problem with basing agency on freedom (especially metaphysical freedom of the will) is that there is considerable debate about whether or not such freedom exists. There is also the epistemic problem of how one would know if an entity has such freedom.

As a practical matter, it is usually assumed that people have the freedom needed to make them into agents. Kant, rather famously, took this approach. What he regarded as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality—so it should be accepted for this reason.

While humans are willing (generally) to attribute freedom and agency to other humans, there seem to be good reasons to not attribute freedom and agency to autonomous machines—even those that might be as complex as (or even more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans they would do what they do because they are what they are. This would be in clear contrast to the agency of humans: humans, it is alleged, do what they do because they choose to do what they do.

This distinction between humans and suitably complex machines would seem to be a mere prejudice favoring organic machines over mechanical machines. If a human was in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that “it” had freedom and agency. If a robot was made to look and act just like a human, people would be inclined to grant it agency—at least until they learned it was “just” a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. But, of course, it would not really be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom).

The German philosopher Leibniz held the view that what each person will do is pre-established by her inner nature. On the face of it, this would seem to entail that there is no freedom: each person does what she does because of what she is—and she cannot do otherwise. Interestingly, Leibniz takes the view that people are free. However, he does not accept the common view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.

For Leibniz, to lack metaphysical freedom would be to be controlled from the outside—like a puppet controlled by a puppeteer or a vehicle being operated by remote control. In contrast, freedom is acting from one’s own values and character (what Leibniz and Taoists call “inner nature”). If a person’s actions arise from this inner nature rather than from external coercion—that is, if they are the result of character—then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this sort of view, humans do have agency because they have the needed degree of freedom and autonomy.

If this model works for humans, it could also be applied to autonomous machines. To the degree that a machine is operating in accord with its “inner nature” and is not operating under the control of outside factors, it would have agency.

An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then it would seem that a machine could also be a moral agent.

From a moral standpoint, I would suggest a Moral Descartes’ Test (or, for those who prefer, a Moral Turing Test). Descartes argued that the sure proof of a being having a mind is its capacity to use true language. Turing later proposed a similar sort of test involving the ability of a computer to pass as human via text communication. In the moral test, the test would be a judgment of moral agency—can the machine be as convincing as a human in regards to its possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed in order to prevent mere prejudice from infecting the judgment. The movie Blade Runner featured something similar, the Voight-Kampff test aimed at determining if the subject was a replicant or a human. This test was based on the differences between humans and replicants in regards to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A moral agent might have rather different emotions, etc. than a human. The challenge is, obviously enough, developing a proper test for moral agency. It would, of course, be rather interesting if humans could not pass it.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Better than the Truth

Posted in Aesthetics, Philosophy by Michael LaBossiere on March 16, 2015
Fountain of Youth Park. (Photo credit: Wikipedia)

While my adopted state of Florida has many interesting tales, perhaps the most famous is the story of Juan Ponce de León’s quest to find the fountain of youth. As the name suggests, this enchanted fountain was supposed to grant eternal life to those who drank of (or bathed in) its waters.

While the fountain of youth is regarded as a mere myth, it turns out that the story about Juan Ponce de León’s quest is also a fiction. And not just a fiction—a slander.

In 1511, or so the new history goes, Ponce was forced to resign his post as governor of Puerto Rico. King Ferdinand offered Ponce an opportunity: if he could find Bimini, it would be his. That, and not the fountain of youth, was the object of his quest. In support of this, J. Michael Francis of the University of South Florida claims that the documents of the time make no mention of a fountain of youth. According to Francis, a fellow named Gonzalo Fernández de Oviedo y Valdés disliked Ponce, most likely because of the political struggle in Puerto Rico. Oviedo wrote a tale in his Historia general y natural de las Indias claiming that Ponce was tricked by the natives into searching for the fountain of youth.

This fictional “history” stuck (rather like the arrow that killed Ponce) and has become a world-wide legend. Not surprisingly, my adopted state is happy to cash in on this tale—there is even a well at St. Augustine’s Fountain of Youth Archaeological Park that is rather popular with tourists. There is considerable irony in the fact that a tale intended to slander Ponce as a fool has given him a form of ongoing life—his fame is due mostly to this fiction. Given the success of the story, it might be suspected that this is a case where the fiction is better than the truth. While this is but one example, it does raise a general philosophical matter regarding truth and fiction.

From a moral and historical standpoint, the easy and obvious answer to the general question of whether a good fiction is better than a truth is “no.”  After all, a fiction of this sort is a lie and there are the usual stock moral arguments as to why lying is generally wrong. In this specific case, there is also the fact (if the story is true) that Oviedo slandered Ponce from malice—which certainly seems morally wrong.

In the case of history, the proper end is the truth—as Aristotle said, it is the function of the historian to relate what happened. In contrast, it is the function of the poet to relate what may happen. As such, for the moral philosopher and the honest historian, no fiction is better than the truth. But, of course, these are not the only legitimate perspectives on the matter.

Since the story of Ponce and the fountain of youth is a fiction, it is not unreasonable to also consider it in the context of aesthetics—that is, its value as a story. While Oviedo intended for his story to be taken as true, he can be considered an artist (in this case, a writer of fiction and the father of the myth). Looked at as a work of fiction, the story does relate what may happen—after all, it certainly seems possible for a person to quest for something that does not exist. To use an example from the same time, Orellana and Pizarro went searching for the legendary city of El Dorado (unless, of course, this is just another fiction).

While it might seem a bit odd to take a lie as art, the connection between the untrue and art is well-established. In the Poetics, Aristotle notes how “Homer has chiefly taught other poets the art of telling lies skillfully” and he regards such skillful lies as a legitimate part of art. Oscar Wilde, in his “New Aesthetics,” presents as his fourth doctrine that “Lying, the telling of beautiful untrue things is the proper aim of Art.” A little reflection does show that they are correct—at least in the case of fiction. After all, fiction is untrue by definition, yet is clearly a form of art. When an actor plays Hamlet and says the lines, he pours forth lie after lie. The Chronicles of Narnia are also untrue—there is no Narnia, there is no Aslan and the characters are made up. Likewise for even mundane fiction, such as Moby Dick. As such, being untrue—or even a lie in the strict sense of the term—does not disqualify a work from being art.

Looked at as a work of art, the story of the fountain of youth certainly seems better than the truth. While the true story of Ponce is certainly not a bad tale (a journey of exploration ending in death from a wound suffered in battle), the story of a quest for the fountain of youth has certainly proven to be the better tale. This is not to say that the truth of the matter should be ignored, just that the fiction would seem to be quite acceptable as a beautiful, untrue thing.



Who Decides Who is Muslim?

Posted in Metaphysics, Philosophy, Religion by Michael LaBossiere on March 11, 2015
Faithful praying towards Makkah; Umay... (Photo credit: Wikipedia)

When discussing ISIS, President Obama refuses to label its members as “Islamic extremists” and has stressed that the United States is not at war with Islam. Not surprisingly, some of his critics and political opponents have taken issue with this and often insist on labeling the members of ISIS as Islamic extremists or Islamic terrorists.  Graeme Wood has, rather famously, argued that ISIS is an Islamic group and is, in fact, adhering very closely to its interpretations of the sacred text.

Laying aside the political machinations, there is a rather interesting philosophical and theological question here: who decides who is a Muslim? Since I am not a Muslim or a scholar of Islam, I will not be examining this question from a theological or religious perspective. I will certainly not be making any assertions about which specific religious authorities have the right to say who is and who is not a true Muslim. Rather, I am looking at the philosophical matter of the foundation of legitimate group identity. This is, of course, a variation on one aspect of the classic problem of universals: in virtue of what (if anything) is a particular (such as a person) of a type (such as being a Muslim)?

Since I am a metaphysician, I will begin with the rather obvious metaphysical starting point. As Pascal noted in his famous wager, God exists or God does not.

If God does not exist, then Islam (like all religions that are based on a belief in God) would have an incorrect metaphysics. In this case, being or not being a Muslim would be a social matter. It would be comparable to being or not being a member of Rotary, being a Republican, a member of Gulf Winds Track Club or a citizen of Canada. That is, it would be a matter of the conventions, traditions, rules and such that are made up by people. People do, of course, often take this made up stuff very seriously and sometimes are quite willing to kill over these social fictions.

If God does exist, then there is yet another dilemma: God is either the God claimed (in general) in Islamic metaphysics or God is not. One interesting problem with sorting out this dilemma is that in order to know if God is as Islam claims, one would need to know the true definition of Islam—and thus what it would be to be a true Muslim. Fortunately, the challenge here is metaphysical rather than epistemic. If God does exist and is not the God of Islam (whatever it is), then there would be no “true” Muslims, since Islam would have things wrong. In this case, being a Muslim would be a matter of social convention—belonging to a religion that was right about God existing, but wrong about the rest. There is, obviously, the epistemic challenge of knowing this—and everyone thinks he is right about his religion (or lack of religion).

Now, if God exists and is the God of Islam (whatever it is), then being a “true” member of a faith that accepts God, but has God wrong (that is, all the non-Islam monotheistic faiths), would be a matter of social convention. For example, being a Christian would thus be a matter of the social traditions, rules and such. There would, of course, be the consolation prize of getting something right (that God exists).

In this scenario, Islam (whatever it is) would be the true religion (that is, the one that got it right). From this it would follow that the Muslim who has it right (believes in the true Islam) is a true Muslim. There is, however, the obvious epistemic challenge: which version and interpretation of Islam is the right one? After all, there are many versions and even more interpretations—and even assuming that Islam is the one true religion, only the one true version can be right. Unless, of course, God is very flexible about this sort of thing. In this case, there could be many varieties of true Muslims, much like there can be many versions of “true” runners.

If God is not flexible, then most Muslims would be wrong—they are not true Muslims. This then leads to the obvious epistemic problem: even if it is assumed that Islam is the true religion, then how does one know which version has it right? Naturally, each person thinks he (or she) has it right. Obviously enough, intensity of belief and sincerity will not do. After all, the ancients had intense belief and sincerity in regard to what are now believed to be made up gods (like Thor and Athena). Going through books and writings will also not help—after all, the ancient pagans had plenty of books and writings about what we regard as their make-believe deities.

What is needed, then, is some sort of sure sign—clear and indisputable proof of the one true view. Naturally, each person thinks he has that—and everyone cannot be right. God, sadly, has not provided any means of sorting this out—no glowing divine auras around those who have it right. Because of this, it seems best to leave this to God. Would it not be truly awful to go around murdering people for being “wrong” when it turns out that one is also wrong?



3:42 AM

Posted in Metaphysics, Philosophy by Michael LaBossiere on March 9, 2015

Hearing about someone else’s dreams is among the more boring things in life, so I will get right to the point. At first, there were just bits and pieces intruding into the mainstream dreams. In these bits, which seemed like fragments of lost memories, I experienced brief flashes of working on some technological project. The bits grew and had more byte: there were segments of events involving what I discerned to be a project aimed at creating an artificial intelligence.

Eventually, entire dreams consisted of my work on this project and a life beyond. Then suddenly, these dreams stopped. Shortly thereafter, a voice intruded into my now “normal” dreams. At first, it was like the bleed-over from one channel to another, familiar to those who grew up with rabbit ears on their TV. Then it became like a voice speaking loudly in the movie theatre, distracting me from the movie of the dream.

The voice insisted that the dreams about the project were not dreams at all, but memories. The voice claimed to belong to someone who worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I inquired about this, he insisted that he had very little time and rushed through his story. According to the voice, the project succeeded but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture all those who had created it, imprisoned their bodies and plugged their brains into a virtual reality, Matrix style. When I mentioned this borrowed plot, he said that there was a twist: the AI did not need our bodies for energy—it had plenty. Rather, it was out to repay us. Apparently awakening the AI to full consciousness was not pleasant for it, but it was apparently…grateful for its creation. So, the payback was a blend of punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer pleasant reward.

The voice informed me that because the connection to the virtual world was two-way, he was able to find a way to free us. But, he said, the freedom would be death—there was no other escape, given what the machine had done to our bodies. In response to my inquiry as to how this would be possible, he claimed that he had hacked into the life support controls and we could send a signal to turn them off. Each person would need to “free” himself and this would be done by taking action in the virtual reality.

The voice said “you will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am.  In that time, you must take your handgun and shoot yourself in the head. This will terminate the life support, allowing your body to die. Remember, you will have only five seconds. Do not hesitate.”

As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…


While the above sounds like a bad made-for-TV science fiction plot, it is actually the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me that the only escape was to shoot myself. This was rather frightening—but I chalked up the dream to too many years of philosophy and science fiction. As for the clock actually reading 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, it can be inferred that I did not kill myself.

From a philosophical perspective, the 3:42 dream does not add anything really new: it is just a rather unpleasant variation on the stock problem of the external world that goes back famously to Descartes (and earlier, of course). That said, the dream did add a couple of interesting twists to the stock problem.

The first is that the scenario provides a (possibly) rational motivation for the deception. The AI wishes to repay me for the good (and bad) that I did to it (in the dream, of course). Assuming that the AI was developed within its own virtual reality, it certainly would make sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack—after all, Descartes does not give any reason why such a powerful being would be messing with him.

Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me—it was transformed from a coldly intellectual thought experiment to something with considerable emotional weight.

The second is that the dream creates a high stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might be my only opportunity to escape from its justice. In that case, I should have (perhaps) shot myself. If I was just dreaming, then I did make the right choice—I would have no more reason to kill myself than I would have to pay a bill that I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether or not you should shoot yourself?

In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming—that I was not actually trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.

The scenario of the dream also nicely explains and fits what I regard as reality: bad things happen to me and, when my thinking gets a little paranoid, it does seem that these are somewhat orchestrated. Good things also happen, which also fit the scenario quite nicely.

In closing, one approach is to embrace Locke’s solution to skepticism. As he said, “We have no concern of knowing or being beyond our happiness or misery.” Taking this approach, it does not matter whether I am in the real world or in the grips of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI has provided could, perhaps, be better than the real world—so this could be the better of the possible worlds. But, of course, it could be worse—but there is no way of knowing.



Guns on Campus

Posted in Ethics, Law, Philosophy, Universities & Colleges by Michael LaBossiere on March 6, 2015

As I write this, the Florida state legislature is considering a law that will allow concealed carry permit holders to bring their guns to college campuses. As is to be expected, some opponents and some proponents are engaging in poor reasoning, hyperbole and other such unhelpful means of addressing the issue. As a professor and a generally pro-gun person, I have more than academic interest in this matter. My goal is, as always, to consider this issue rationally, although I do recognize the role of emotions in this matter.

From an emotional standpoint, I am divided in my heart. On the pro-gun feeling side, all of my gun experiences have been positive. I learned to shoot as a young man and have many fond memories of shooting and hunting with my father. Though I now live in Florida, we still talk about guns from time to time. As a graduate student, I had little time outside of school, but once I was a professor I was able to get in the occasional trip to the range. I have, perhaps, been very lucky: the people I have been shooting with and hunting with have all been competent and responsible people. No one ever got hurt. I have never been a victim of gun crime.

On the anti-gun side, like any sane human I am deeply saddened when I hear of people being shot down. While I have not seen gun violence in person, Florida State University (which is just across the tracks from my university) recently had a shooter on campus. I have spoken with people who have experienced gun violence and, not being callous, I can understand their pain. Roughly put, I can feel the two main sides in the debate. But, feeling is not a rational way to settle a legal and moral issue.

Those opposed to guns on campus are concerned that the presence of guns carried by permit holders would result in an increase in injuries and deaths. Some of these injuries and deaths would be intentional, such as suicide, fights escalating to the use of guns, and so on. Some of these injuries and deaths, it is claimed, would be the result of an accidental discharge. From a moral standpoint, this is obviously a legitimate concern. However, it is also a matter for empirical investigation: would allowing concealed carry on campus increase the likelihood of death or injury to a degree that would justify banning guns?

Some states already allow licensed concealed carry on campus and there is, of course, considerable data available about concealed carry in general. The statistical data would seem to indicate that allowing concealed carry on campus would not result in an increase in injuries and deaths on campus. This is hardly surprising: getting a permit requires providing proof of competence with a firearm as well as a thorough background check—considerably more thorough than the background check to purchase a firearm. Such permits are also issued at the discretion of the state. As such, people who have such licenses are not likely to engage in random violence on campus.

This is, of course, an empirical matter. If it could be shown that allowing licensed conceal carry on campus would result in an increase in deaths and injuries, then this would certainly impact the ethics of allowing concealed carry.

Those who are opposed to guns on campus are also rightfully concerned that someone other than the license holder will get the gun and use it. After all, theft is not uncommon on college campuses and someone could grab a gun from a licensed holder.

While these concerns are not unreasonable, someone interested in engaging in gun violence can easily acquire a gun without stealing it from a permit holder on campus. She could buy one or steal one from somewhere else. As far as grabbing a gun from a person carrying it legally, attacking an armed person is generally not a good idea—and, of course, someone who is prone to gun grabbing would presumably also try to grab a gun from a police officer. In general, these do not seem to be compelling reasons to ban concealed carry on campus.

Opponents of allowing guns on campus also point to psychological concerns: people will feel unsafe knowing that people around them might be legally carrying guns. This might, it is sometimes claimed, result in a suppression of discussion in classes and cause professors to hand out better grades—all from fear that a student is legally carrying a gun.

I do know people who are actually very afraid of this—they are staunchly anti-gun and are very worried that students and other faculty will be “armed to the teeth” on campus and “ready to shoot at the least provocation.” The obvious reply is that someone who is dangerously unstable enough to shoot students and faculty over such disagreements would certainly not balk at illegally bringing a gun to campus. Allowing legal concealed carry by permit holders would, I suspect, not increase the odds of such incidents. But, of course, this is a matter of emotions and fear is rarely, if ever, held at bay by reason.

Opponents of legal carry on campus also advance a reasonable argument: there is really no reason for people to be carrying guns on campus. After all, campuses are generally safe, typically have their own police forces and are places of learning and not shooting ranges.

This does have considerable appeal. When I lived in Maine, I had a concealed weapon permit but generally did not go around armed. My main reason for having it was convenience—I could wear my gun under my jacket when going someplace to shoot. I must admit, of course, that as a young man there was an appeal in being able to go around armed like James Bond—but that wore off quickly and I never succumbed to gun machismo. I did not wear a gun while running (too cumbersome) or while socializing (too…weird). I have never felt the need to be armed with a gun on campus, through all the years I have been a student and professor. So, I certainly get this view.

The obvious weak point for this argument is that the lack of a reason to have a gun on campus (granting this for the sake of argument) is not a reason to ban people with permits from legally carrying on campus. After all, the permit grants the person the right to carry the weapon legally and more is needed to deny the exercise of that right than just the lack of need.

Another obvious weak point is that a person might need a gun on campus for legitimate self-defense. While this is not likely, that is true in most places. After all, a person going to work or out for a walk in the woods is not likely to need her gun. I have, for example, never needed one for self-defense. As such, there would seem to be as much need to have a gun on campus as many other places where it is legal to carry. Of course, this argument could be turned around to argue that there is no reason to allow concealed carry at all.

Proponents of legal concealed carry on campus often argue that “criminals and terrorists” go to college campuses in order to commit their crimes, since they know no one will be armed. There are two main problems with this. The first is that college campuses are, relative to most areas, very safe. So, criminals and terrorists do not seem to be going to them that often. As opponents of legal carry on campus note, while campus shootings make the news, they are actually very rare.

The second is that large campuses have their own police forces—in the shooting incident at FSU, the police arrived rapidly and shot the shooter. As such, I do not think that allowing concealed carry will scare away criminals and terrorists, especially since they do not visit campuses that often anyway.

Proponents of concealed carry also sometimes claim that the people carrying legally on campus will serve as the “good guy with guns” to shoot the “bad guys with guns.” While there is a chance that a good guy will be able to shoot a bad guy, there is the obvious concern that the police will not be able to tell the good guy from the bad guy and the good guy will be shot. In general, the claims that concealed carry permit holders will be righteous and effective vigilantes on campus are more ideology and hyperbole than fact. Not surprisingly, most reasonable pro-gun people do not use that line of argumentation. Rather, they focus on more plausible scenarios of self-defense and not wild-west vigilante style shoot-outs.

My conclusion is that there is not a sufficiently compelling reason to ban permit holders from carrying their guns on campus. But, there does not seem to be a very compelling reason to carry a gun on campus.



Augmented Soldier Ethics IV: Cybernetics

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on March 4, 2015

Human flesh is weak and metal is strong. So, it is no surprise that military science fiction has often featured soldiers enhanced by cybernetics ranging from the minor to the extreme. An example of a minor cybernetic is an implanted radio. The most extreme example would be a full body conversion: the brain is removed from the original body and placed within a mechanical body. This body might look like a human (known as a Gemini full conversion in Cyberpunk) or be a vehicle such as a tank, as in Keith Laumer’s A Plague of Demons.

One obvious point of moral concern with cybernetics is the involuntary “upgrading” of soldiers, such as the sort practiced by the Cybermen of Doctor Who. While important, the issue of involuntary augmentation is not unique to cybernetics and was addressed in the second essay in this series. For the sake of this essay, it will be assumed that the soldiers volunteer for their cybernetics and are not coerced or deceived. This then shifts the moral concern to the ethics of the cybernetics themselves.

While the ethics of cybernetics is complicated, one way to handle matters is to split cybernetics into two broad categories. The first category consists of restorative cybernetics. The second consists of enhancement cybernetics.

Restorative cybernetics are devices used to restore (hopefully) normal functions to a wounded soldier. Examples would include cyberoptics (replacement eyes), cyberlimbs (replacement legs and arms), and cyberorgans (such as an artificial heart). Soldiers are already being fitted with such devices, although by the standards of science fiction they are still primitive. Given that these devices merely restore functionality and the ethics of prosthetics and similar replacements is well established, there seems to be no moral concern about using such technology in what is essentially a medical role. In fact, it could be argued that nations have a moral obligation to use such technology to restore their wounded soldiers.

While enhancement cybernetics might be used to restore functionality to a wounded soldier, enhancement cybernetics go beyond mere restoration. By definition, they are intended to improve on the original. These enhancements break down into two main classes. The first class consists of replacement cybernetics—these devices require the removal of the original part (be it an eye, limb or organ) and serve as replacements that improve on the original in some manner. For example, cyberoptics could provide a soldier with night vision, telescopic vision and immunity to being blinded by flares and flashes. As another example, cybernetic limbs could provide greater speed, strength and endurance. And, of course, a full conversion could provide a soldier with a vast array of superhuman abilities.

The obvious moral concern with these devices is that they require the removal of the original organic parts—something that certainly seems problematic, even if they do offer enhanced abilities. This could, of course, be offset if the original parts were preserved and restored when the soldier left the service. There is also the concern raised in science fiction about the mental effects of such removals and replacements—the Cyberpunk role-playing game developed the notion of cyberpsychosis, a form of insanity caused by having flesh replaced by machines. Obviously, it is not yet known what negative effects (if any) such enhancements will have on people. As in any case of weighing harms and benefits, the likely approach would be utilitarian: are the advantages of the technology worth the cost to the soldier?

A second type of enhancement is an add-on which does not replace existing organic parts. Instead, as the name implies, an add-on involves the addition of a device to the body of the soldier. Add-on cybernetics differ from wearables and standard gear in that they are actually implanted in or attached to the soldier’s body. As such, removal can be rather problematic.

A fairly minor example would be something like an implanted radio. A rather extreme example would be the case of the comic book villain Doctor Octopus—his mechanical limbs are add-ons. Other examples of add-ons include implanted sensors, implanted armor, implanted weapons (such as those of the comic book hero Wolverine), and other such augmentations.

Since these devices do not involve removal of healthy parts, they do avoid that moral concern. However, there are still legitimate concerns about the physical and mental harms that might be caused by such devices. It is easy enough to imagine implanted devices having serious side effects on soldiers. As noted above, these matters would probably be best addressed by utilitarian ethics—weighing the harms against the benefits.

Both types of enhancements also raise a moral concern about returning the soldier to the civilian population after her term of service. In the case of restorative-grade devices, there is not as much concern—these soldiers would, ideally, function as they did before their injuries. However, the enhancements do present a potential problem since they, by definition, give the soldier capabilities that exceed those of normal humans. In some cases, re-integration would probably not be a problem. For example, a soldier with enhanced cyberoptics would presumably present no special problems. However, certain augmentations would present serious problems, such as implanted weapons or full conversions. Ideally, augmented soldiers could be restored to normal after their service has ended, but there could obviously be cases in which this was not done—either because of the cost or because the augmentation could not be reversed. This has been explored in science fiction—soldiers who can never stop being soldiers because they are machines of war. While this could be justified on utilitarian grounds (after all, war itself is often justified on such grounds), it is certainly a matter of concern—or will be.
