A Philosopher's Blog

Ransoms & Hostages

Posted in Ethics, Law, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on February 20, 2015


While some countries will pay ransoms to free hostages, the United States has a public policy of not doing this. Thanks to ISIS, the issue of whether ransoms should be paid to terrorist groups has returned to the spotlight.

One reason to not pay a ransom for hostages is a matter of principle. This principle could be that bad behavior should not be rewarded or that hostage taking should be punished (or both).

One of the best arguments against paying ransoms for hostages is both a practical and a utilitarian moral argument. The gist of the argument is that paying ransoms gives hostage takers an incentive to take hostages. This incentive will mean that more people will be taken hostage. The cost of not paying is, of course, the possibility that the hostage takers will harm or kill their initial hostages. However, the argument goes, if hostage takers realize that they will not be paid a ransom, they will not have an incentive to take more hostages. This will, presumably, reduce the chances that the hostage takers will take hostages. The calculation is, of course, that the harm done to the existing hostages will be outweighed by the benefits of not having people taken hostage in the future.

This argument assumes, obviously enough, that the hostage takers are primarily motivated by the ransom payment. If they are taking hostages primarily for other reasons, such as for status, to make a statement or to get media attention, then not paying them a ransom will not significantly reduce their incentive to take hostages. This leads to a second reason to not pay ransoms.

In addition to the incentive argument, there is also the funding argument. While a terrorist group might have reasons other than money to take hostages, they certainly benefit from getting such ransoms. The money they receive can be used to fund additional operations, such as taking more hostages. Obviously enough, if ransoms are not paid, then such groups lose this avenue of funding, which can impact their operations. Since paying a ransom would be funding terrorism, this provides both a moral and a practical reason not to pay ransoms.

While these arguments have a rational appeal, they are typically countered by a more emotional appeal. A stock approach to arguing that ransoms should be paid is the “in their shoes” appeal. The method is very straightforward and simply involves asking a person whether or not she would want a ransom to be paid for her (or a loved one). Not surprisingly, most people would want the ransom to be paid, assuming doing so would save her (or her loved one). Sometimes the appeal is made explicitly in terms of emotions: “how would you feel if your loved one died because the government refused to pay ransoms?” Obviously, any person would feel awful.

This method does have considerable appeal. The “in their shoes” appeal can seem similar to the golden rule approach (do unto others as you would have them do unto you). To be specific, the appeal is not to do unto others, but to base a policy on how one would want to be treated in that situation. If I would not want the policy applied to me (that is, I would want to be ransomed or have my loved one ransomed), then I should be morally opposed to the policy as a matter of consistency. This certainly makes sense: if I would not want a policy applied in my case, then I should (in general) not support that policy.

One obvious counter is that there seems to be a distinction between what a policy should be and whether or not a person would want that policy applied to herself. For example, some universities have a policy that if a student misses more than three classes, the student fails the course. Naturally, no student wants that policy to be applied to her (and most professors would not have wanted it applied to them when they were students), but this hardly suffices to show that the policy is wrong. As another example, a company might have a policy of not providing health insurance to part time employees. While the CEO would certainly not like the policy if she were part time, it does not follow that the policy must be a bad one. As such, policies need to be assessed not just in terms of how a person feels about them, but in terms of their merit or lack thereof.

Another obvious counter is to use the same approach, only with a modification. In response to the question “how would you feel if you or a loved one were the hostage?” one could ask “how would you feel if you or a loved one were taken hostage in an operation funded by ransom money?” Or “how would you feel if you or a loved one were taken hostage because the hostage takers learned that people would pay ransoms for hostages?” The answer would be, of course, that one would feel bad about that. However, while how one would feel about this can be useful in discussing the matter, it is not decisive. Settling the matter rationally does require considering more than just how people would feel—it requires looking at the matter with a degree of objectivity. That is, not just asking how people would feel, but what would be right and what would yield the best results in the practical sense.

 


Should You Attend a For-Profit College?

Posted in Business, Ethics, Law, Philosophy, Universities & Colleges by Michael LaBossiere on February 16, 2015

The rise of for-profit universities has given students increased choices when it comes to picking schools. Since college is rather expensive and schools vary in regards to the success of their graduates, it is wise to carefully consider the options before writing those checks. Or, more likely these days, going into debt.

While there is a popular view that the for-profit free-market will consistently create better goods and services at ever lower prices, it is wisest to accept facts over ideological theory. As such, when picking between public, non-profit, and for-profit schools one should look at the numbers. Fortunately, ProPublica has been engaged in crunching the numbers.

Today most people go to college in order to have better job prospects. As such, one rather important consideration is the likelihood of getting a job after graduation and the likely salary. While for-profit schools spent about $4.2 billion in 2009 on recruiting and marketing and paid their own college presidents an average of $7.3 million per year, the typical graduate does rather poorly. According to the U.S. Department of Education, 74% of the programs at for-profit colleges produced graduates whose average pay is less than that of high-school dropouts. In contrast, graduates of non-profit and public colleges do better financially than high school graduates.

Another important consideration is the cost of education. While the free-market is supposed to result in higher quality services at lower prices and the myth of public education is that it creates low quality services at high prices, the for-profit schools are considerably more expensive than their non-profit and public competition. A two-year degree costs, on average, $35,000 at a for-profit school. The average community college offers that degree at a mere $8,300. In the case of four year degrees, the average is $63,000 at a for-profit and $52,000 for a “flagship” state college. For certificate programs, public colleges will set a student back $4,250 while a for-profit school will cost the student $19,806 on average. By these numbers, the public schools offer a better “product” at a much lower price—thus making public education the rational choice over the for-profit option.

Student debt and loans, which have been getting considerable attention in the media, are also worth considering. The median debt for students at a for-profit college is $32,700, and 96% of the students at such schools take out loans. At non-profit private colleges, the figures are $24,600 and 57%. For public colleges, the median debt is $20,000 and 48% of students take out loans. Only 13% of community college students take out loans (thanks, no doubt, to the relatively low cost of community college).

For those who are taxpayers, another point of concern is how much taxpayer money gets funneled into for-profit schools. In a typical year, the federal government provides $6 billion in Pell Grants and $16 billion in student loans to students attending for-profit colleges. In 2010 there were 2.4 million students enrolled in these schools. It is instructive to look at the breakdown of how the for-profits expend their money.

As noted above, the average salary of the president of a for-profit college was $7.3 million in 2009. The five highest paid presidents of non-profit colleges averaged $3 million and the five highest paid presidents at public colleges were paid $1 million.

The for-profit colleges also spent heavily on marketing, spending $4.2 billion on recruiting, marketing and admissions staffing in 2009. In 2009 thirty for-profit colleges hired 35,202 recruiters, which is about 1 recruiter per 49 students. As might be suspected, public schools do not spend that sort of money. In my experience at public schools, a considerable amount of recruiting commonly falls to faculty—who do not, in general, get extra compensation for this extra work.

In terms of what is spent per student, for-profit schools average $2,050 per student per year. Public colleges spend, on average, $7,239 per student per year. Private non-profit schools spend the most, averaging $15,321 per student per year. This spending does seem to yield results: at for-profit schools only 20% of students complete a bachelor’s degree within four years. Public schools do somewhat better with 31% and private non-profits do best at 52%. As such, a public or non-profit school would be the better choice over the for-profit school.

Because so much public money gets funneled into for-profit, public and private schools, there has been a push for “gainful employment” regulation. The gist of this regulation is that schools will be graded based on the annual student loan payments of their graduates relative to their earnings. A school will be graded as failing if its graduates have annual student loan payments that exceed 12% of total earnings or 30% of discretionary earnings. The “danger zone” is 8-12% of total earnings or 20-30% of discretionary earnings. Currently, there are about 1,400 programs with about 840,000 enrolled students in the “danger zone” or worse. 99% of them are, shockingly enough, at for-profit schools.
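To make those thresholds concrete, here is a minimal sketch (in Python, using made-up example figures) of how a program might be graded under the rule as described above. It simply follows the percentages quoted in the previous paragraph; the actual regulation spells out how loan payments and discretionary earnings are calculated, so treat this purely as an illustration.

```python
def grade_program(annual_loan_payment, total_earnings, discretionary_earnings):
    """Grade a program using the gainful employment thresholds described above.

    This follows the article's summary of the thresholds; the real regulation
    defines the inputs (and how the measures combine) in more detail.
    """
    total_ratio = annual_loan_payment / total_earnings
    discretionary_ratio = annual_loan_payment / discretionary_earnings

    # Failing: payments exceed 12% of total earnings or 30% of discretionary earnings.
    if total_ratio > 0.12 or discretionary_ratio > 0.30:
        return "failing"
    # Danger zone: 8-12% of total earnings or 20-30% of discretionary earnings.
    if total_ratio >= 0.08 or discretionary_ratio >= 0.20:
        return "danger zone"
    return "passing"


# Hypothetical example: $3,600 a year in loan payments against $28,000 in total
# earnings and $9,000 in discretionary earnings (about 12.9% and 40%).
print(grade_program(3600, 28000, 9000))  # prints "failing"
```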

For those who speak of accountability, these regulations should seem quite reasonable. For those who like the free-market, the regulation’s target is the federal government: the goal is to prevent the government from dumping more taxpayer money into failing programs. Schools will need to earn this money by success.

However, this is not the first time that there has been an attempt to link federal money to success. In 2010 regulations were put in place that included a requirement that a school have at least 35% of its students actively repaying student loans. As might be guessed, for-profit schools are the leaders in loan defaults. In 2012 lobbyists for the for-profit schools brought a lawsuit in federal court. The judge agreed with them and struck down the requirement.

In November of 2014 an association of for-profit colleges brought a lawsuit against the current gainful employment requirements, presumably on the principle that it is better to pay lawyers and lobbyists than to address the problems with their educational model. If this lawsuit succeeds, which is likely, for-profits will be rather less accountable and this will serve to make things worse for their students.

Based on the numbers, you should definitely not attend the typical for-profit college. On average, it will cost you more, you will have more debt, and you will make less money. For the most value at the least cost, the two-year community college is the best deal. For a four-year degree, the public school will cost less, but private non-profits generally have more successful results. But, of course, much depends on you.

 


Augmented Soldier Ethics III: Pharmaceuticals

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on February 13, 2015

Steve Rogers’ physical transformation, from a reprint of Captain America Comics #1 (May 1941). Art by Joe Simon and Jack Kirby. (Photo credit: Wikipedia)

Humans have many limitations that make them less than ideal as weapons of war. For example, we get tired and need sleep. As such, it is no surprise that militaries have sought various ways to augment humans to counter these weaknesses. For example, militaries routinely make use of caffeine and amphetamines to keep their soldiers awake and alert.

In science fiction, militaries go far beyond these sorts of drugs and develop far more potent pharmaceuticals. These chemicals tend to split into two broad categories. The first consists of short-term enhancements (what gamers refer to as “buffs”) that address a human weakness or provide augmented abilities. In the real world, the above-mentioned caffeine and amphetamines are short-term drugs. In fiction, the classic sci-fi role-playing game Traveller featured the aptly (though generically) named combat drug. This drug would boost the user’s strength and endurance for about ten minutes. Other fictional drugs have far more dramatic effects, such as the Venom drug used by the super villain Bane. Given that militaries already use short-term enhancers, it is certainly reasonable to think they are and will be interested in more advanced enhancers of the sort considered in science fiction.

The second category is that of the long-term enhancers. These are chemicals that enable or provide long-lasting effects. An obvious real-world example is steroids: these allow the user to develop greater muscle mass and increased strength. In fiction, the most famous example is probably the super-soldier serum that was used to transform Steve Rogers into Captain America.

Since the advantages of improved soldiers are obvious, it seems reasonable to think that militaries would be rather interested in the development of effective (and safe) long-term enhancers. It does, of course, seem unlikely that there will be a super-soldier serum in the near future, but chemicals aimed at improving attention span, alertness, memory, intelligence, endurance, pain tolerance and such would be of great interest to militaries.

As might be suspected, these chemical enhancers do raise moral concerns that are certainly worth considering. While some might see discussing enhancers that do not yet (as far as we know) exist as a waste of time, there does seem to be a real advantage in considering ethical issues in advance—this is analogous to planning for a problem before it happens rather than waiting for it to occur and then dealing with it.

One obvious point of concern, especially given the record of unethical experimentation, is that enhancers will be used on soldiers without their informed consent. Since this is a general issue, I addressed it in its own essay and reached the obvious conclusion: in general, informed consent is morally required. As such, the following discussion assumes that the soldiers using the enhancers have been honestly informed of the nature of the enhancers and have given their consent.

When discussing the ethics of enhancers, it might be useful to consider real world cases in which enhancers are used. One obvious example is that of professional sports. While Major League Baseball has seen many cases of athletes using such enhancers, they are used worldwide and in many sports, from running to gymnastics. In the case of sports, one of the main reasons certain enhancers, such as steroids, are considered unethical is that they provide the athlete with an unfair advantage.

While this is a legitimate concern in sports, it does not apply to war. After all, there is no moral requirement for a fair competition in battle. Rather, one important goal is to gain every advantage over the enemy in order to win. As such, the fact that enhancers would provide an “unfair” advantage in war does not make them immoral. One can, of course, discuss the relative morality of the sides involved in the war, but this is another matter.

A second reason why the use of enhancers is regarded as wrong in sports is that they typically have rather harmful side effects. Steroids, for example, do rather awful things to the human body and brain. Given that even aspirin has potentially harmful side effects, it seems rather likely that military-grade enhancers will have various harmful side effects. These might include addiction, psychological issues, organ damage, death, and perhaps even new side effects yet to be observed in medicine. Given the potential for harm, a rather obvious way to approach the ethics of this matter is utilitarianism. That is, the benefits of the enhancers would need to be weighed against the harm caused by their use.

This assessment could be done with a narrow limit: the harms of the enhancer could be weighed against the benefits provided to the soldier. For example, an enhancer that boosted a combat pilot’s alertness and significantly increased her reaction speed while having the potential to cause short-term insomnia and diarrhea would seem to be morally (and pragmatically) fine given the relatively low harms for significant gains. As another example, a drug that greatly boosted a soldier’s long-term endurance while creating a significant risk of a stroke or heart attack would seem to be morally and pragmatically problematic.

The assessment could also be done more broadly by taking into account ever-wider considerations. For example, the harms of an enhancer could be weighed against the importance of a specific mission and the contribution the enhancer would make to the success of the mission. So, if a powerful drug with terrible side-effects was critical to an important mission, its use could be morally justified in the same way that taking any risk for such an objective can be justified. As another example, the harms of an enhancer could be weighed against the contribution its general use would make to the war. So, a drug that increased the effectiveness of soldiers, yet cut their life expectancy, could be justified by its ability to shorten a war. As a final example, there is also the broader moral concern about the ethics of the conflict itself. So, the use of a dangerous enhancer by soldiers fighting for a morally good cause could be justified by that cause (using the notion that the consequences justify the means).

There are, of course, those who reject using utilitarian calculations as the basis for moral assessment. For example, there are those who believe (often on religious grounds) that the use of pharmaceuticals is always wrong (be they used for enhancement, recreation or treatment). Obviously enough, if the use of pharmaceuticals is wrong in general, then their specific application in the military context would also be wrong. The challenge is, of course, to show that the use of pharmaceuticals is simply wrong, regardless of the consequences.

In general, it would seem that the military use of enhancers should be assessed morally on utilitarian grounds, weighing the benefits of the enhancers against the harm done to the soldiers.

 


Obesity, Disability, & Accommodation

Posted in Ethics, Philosophy, Politics by Michael LaBossiere on February 11, 2015

It is estimated that almost 30% of humans are overweight or obese and this percentage seems likely to increase. Given this large number of large people, it is not surprising that various moral and legal issues have arisen regarding the accommodation of the obese. It is also not surprising that people arguing in favor of accommodating the obese contend that obesity is a disability. The legal issues are, of course, simply a matter of law and are settled by lawsuits. Since I am not a lawyer, I will focus on the ethics of the matter and will address two main issues. The first is whether or not obesity is a disability. The second is whether or not obesity is a disability that morally justifies making accommodations.

On the face of it, obesity is disabling. That is, a person who is obese will have reduced capabilities relative to a person who is not obese. An obese person will tend to have much lower endurance than a non-obese person, less speed, less mobility, less flexibility and so on. An obese person will also tend to suffer from more health issues and be at greater risk for various illnesses. Because of this, an obese person might find it difficult or impossible to perform certain job tasks, such as those involving strenuous physical activity or walking moderate distances.

The larger size and weight of obese individuals also presents challenges regarding such things as standard sized chairs, doors, equipment, clothing and vehicles. For example, an obese person might be unable to operate a forklift with the standard seating and safety belt. As another example, an obese person might not be able to fit in one airline seat and instead require two (or more).  As a third example, an obese student might not be able to fit into a standard classroom desk. As such, obesity could make it difficult or impossible for a person to work or make use of certain goods and services.

Obviously enough, obese people are not the only ones who are disabled. There are people with short term disabilities due to illness or injury. I experienced this myself when I had a complete quadriceps tendon tear—my left leg was locked in an immobilizer for weeks, then all but useless for months. With this injury, I was considerably slower, had difficulty with stairs, could not carry heavy loads, and could not drive. There are also people who have long term or permanent disabilities, such as people who are paralyzed, blind, or are missing limbs due to accidents or war. These people can face considerable challenges in performing tasks at work and in life in general. For example, a person who is permanently confined to a wheelchair due to a spinal injury will find navigating stairs or working in the woods or working at muddy construction sites rather challenging.

In general, there seems to be no moral problem with requiring employees, businesses, schools and so on to make reasonable accommodations for people who are disabled. The basic principle that justifies that is the principle of equal treatment: people should be afforded equal access, even when doing so requires some additional accommodation. As such, while having ramps in addition to stairs costs more, it is a reasonable requirement given that some people cannot fully use their legs. Given that the obese are disabled, it seems easy enough to conclude that they should be accommodated just as the blind and paralyzed are accommodated.

Naturally, it could be argued that there is no moral obligation to provide accommodations for anyone. If this is the case, then there would be no obligation to accommodate the obese. However, it would seem to be rather difficult to prove, for example, that disabled veterans returning to school should just have to work their way up the steps in their wheelchairs. For the sake of the discussion to follow I will assume that there is a moral obligation to accommodate the disabled. However, there is still the question of whether or not this should apply to the obese.

One obvious way to argue against accommodations for the obese is to argue that there is a morally relevant difference between those disabled by obesity and those disabled by injury, birth defects, etc. One difference that people often point to is that obesity is a matter of choice and other disabilities are not. That is, a person’s decisions resulted in her being fat and hence she is responsible in a way a person crippled in an accident is not.

It could be pointed out that some people who are disabled by injury were disabled as the result of their own decisions. For example, a person might have driven while drunk and ended up paralyzed. But, of course, the person would not be denied access to handicapped parking or the use of automatic doors because his disability was self-inflicted. The same reasoning could be used for the obese: though their disability is self-inflicted, it is still a disability and thus should be accommodated.

The easy and obvious reply to this is that there is still a relevant difference. While a person crippled in a self-inflicted drunken crash caused his own disability, there is little he can do about that disability. He can change his diet and exercise, but this will not restore functionality to his legs. That is, he is permanently stuck with the results of that decision. In contrast, an obese person has to maintain her obesity. While some people are genetically predisposed to being obese, how much a person eats and how much she exercises is a matter of choice. Since the obese could reduce their weight, the rest of us are under no obligation to provide special accommodations for them. This is because they could take reasonable steps to remove the need for such accommodations. To use an analogy, imagine someone who insisted that she be provided with a Seeing Eye dog because she wants to wear opaque glasses all the time. These glasses would result in her being disabled, since she would be blind. However, since she does not need to wear such glasses and could easily do without them, there is no obligation to provide her with the dog. In contrast, a person who is actually blind cannot just get new eyes and hence it is reasonable for society to accommodate her.

It can be replied that obesity is not a matter of choice. One approach would be to argue for metaphysical determinism—the obese are obese by necessity and could not be otherwise. The easy reply here would be to say that we are, sadly enough, metaphysically determined not to provide accommodations.

A more sensible approach would be to argue that obesity is, in some cases, a medical condition that is beyond the ability of a person to control—that is, the person lacks agency in regards to his eating and exercise. The most likely avenue of support for this claim would come from neuroscience. If it can be shown that people are incapable of controlling their weight, then obesity would be a true disability, on par with having one’s arm blasted off by an IED or being born with a degenerative neural disorder. This would, of course, require abandoning agency (at least in this context).

It could also be argued that a person does have some choice, but that acting on the choice would be so difficult that it is more reasonable for society to accommodate the individual than it is for the individual to struggle to not be obese. To use an analogy, a disabled person might be able to regain enough functionality to operate in a “mostly normal” way, but doing so might require agonizing effort that is beyond what could be expected of a person. In such a case, one would surely not begrudge the person the accommodations. So, it could be argued that since it is easier for society to accommodate the obese than it is for the obese to not be obese, society should do so.

There is, however, a legitimate concern here. If the principle is adopted that society must accommodate the obese because they are disabled and they cannot help their obesity, then others could appeal to that same sort of principle and perhaps over-extend the realm of disabilities that must be accommodated. For example, people who are addicted to drugs could make a similar argument: they are disabled, yet their addiction is not a matter of choice. As another example, people who are irresponsible or lazy could claim that they are disabled as well and should be accommodated on the grounds that they cannot be other than they are. But, perhaps the line can be drawn in a principled way so that the obese count as disabled while such others do not.

 


Should Confederate Veterans be Honored as Veterans?

Posted in Ethics, Law, Philosophy, Politics by Michael LaBossiere on February 9, 2015

Yet another interesting controversy has arisen in my adopted state of Florida. Three Confederate veterans, who fought against the United States of America, have been nominated for admission to Florida’s Veterans’ Hall of Fame. The purpose of the hall is to honor “those military veterans who, through their works and lives during or after military service, have made a significant contribution to the State of Florida.”

The three nominees are David Lang, Samuel Pasco and Edward A. Perry. Perry was Florida’s governor from 1885 to 1889; Pasco was a U.S. senator. Lang assisted in creating what became the Florida National Guard. As such, they did make significant contributions to Florida. The main legal question is whether or not they qualify as veterans. Since Florida was in rebellion (in defense of slavery) against the United States, there is also a moral question of whether or not they should be considered veterans.

The state of Florida and the US federal government have very similar definitions of “veteran.” For Florida, a veteran is a person who served in the active military and received an honorable discharge. The federal definition states that “The term ‘veteran’ means a person who served in the active military, naval, or air service, and who was discharged or released therefrom under conditions other than dishonorable.” The law also defines “Armed Forces” as the “United States Army, Navy, Marine Corps, Air Force and Coast Guard.” The reserves are also included as being in the armed forces.

According to Mike Prendergast, the executive director of the Department of Veterans Affairs, the three nominees in question do not qualify because the applications to the hall did not indicate that the men served in the armed forces of the United States of America. Interestingly, Agricultural Commissioner Adam Putnam takes the view that “If you’re throwing these guys out on a technicality, that’s just dumb.”

Presumably, Putnam regards the fact that the men served in the Confederate army and took up arms against the United States as a technicality. This seems to be rather more than a mere technicality. After all, the honor seems to be reserved for veterans as defined by the relevant laws. As such, being Confederate veterans would seem to no more qualify the men for the hall than being a veteran of the German or Japanese army in WWII would qualify someone who moved to Florida and did great things for the state. There is also the moral argument about enrolling people who fought against the United States into this hall. Fighting in defense of slavery and against the lawful government of the United States would seem to be morally problematic in regards to the veteran part of the honor.

One counter to the legal argument is that Confederate soldiers were granted (mostly symbolic) pensions about 100 years after the end of the Civil War. Confederate veterans can also be buried in a special Confederate section of Arlington National Cemetery. These facts do push the door open a crack for a legal and a moral argument. In regards to the legal argument, it could be contended that Confederate veterans have been treated, in some ways, as veterans. As such, one might argue, this treatment should be extended to the Veterans’ Hall of Fame.

The obvious response is that these concessions to the Confederate veterans do not suffice to classify Confederate veterans as veterans of the United States. As such, they would not be qualified for the hall. There is also the moral counter that soldiers who fought against the United States should not be honored as veterans of the United States. After all, one would not honor veterans of other militaries that have fought against the United States.

It could also be argued that since the states that made up the Confederacy joined the United States, the veterans of the Confederacy would, as citizens, become United States veterans. Of course, the same logic would seem to apply to parts of the United States that were assimilated from other nations, such as Mexico, the lands of the Iroquois, the lands of the Apache and so on. As such, perhaps Sitting Bull would qualify as a veteran under this sort of reasoning. Perhaps this could be countered by contending that the south left and then rejoined, so it is not becoming part of the United States that has the desired effect but rejoining after a rebellion.

Another possible argument is to contend that the Veterans’ Hall of Fame is a Florida hall and, as such, just requires that the nominees be Florida veterans. In the Civil War, units were, in general, connected to a specific state (such as the 1st Maine). As such, if the men in question served in a Florida unit that fought against the United States, they would be Florida veterans but not United States veterans. Using this option would, of course, require that the requirements for the hall not include being a veteran of the United States military. Presumably the hall also could not be connected to the United States VA, since that agency is responsible only for veterans of the United States armed forces and not for veterans who served other nations.

In regards to the moral concerns of honoring, as veterans, men who fought against the United States and in defense of slavery, it could be claimed that the war was not about slavery. The obvious problem with this is that the war was, in fact, fought to preserve slavery. The southern states made this abundantly clear. Alexander Stephens, vice president of the Confederacy, gave his infamous Cornerstone Speech and made this quite clear when he said “Our new Government is founded upon exactly the opposite ideas; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition.”

It could, of course, be argued that not every soldier fighting for the South was fighting to defend slavery. After all, just like today, most of the people fighting in wars are not the people who set policy or benefit from these policies. These men could have gone to war not to protect the institution of slavery, but because they were duped by the slave holders. Or because they wanted to defend their state from “northern aggression.” Or some other morally acceptable reason. That is, it could be claimed that these men were fighting for something other than the explicit purpose of the Confederacy, namely the preservation of slavery. Since this is not impossible, it could be claimed that the men should be given the benefit of the doubt and be honored for fighting against the United States and then doing significant things for Florida.

In any case, this matter is rather interesting and I am looking forward to seeing my adopted state mocked once again on the Daily Show. And, just maybe, Al Sharpton will show up to say some things.

 


Augmented Soldier Ethics II: Informed Consent

Posted in Ethics, Law, Philosophy, Technology by Michael LaBossiere on February 6, 2015

One general moral subject that is relevant to the augmentation of soldiers by such things as pharmaceuticals, biologicals or cybernetics is the matter of informed consent. While fiction abounds with tales of involuntary augmentation, real soldiers and citizens of the United States have been coerced or deceived into participating in experiments. As such, there do seem to be legitimate grounds for being concerned that soldiers and citizens could be involuntarily augmented as part of experiments or actual “weapon deployment.”

Assuming the context of a Western democratic state, it seems reasonable to hold that augmenting a soldier without her informed consent would be immoral. After all, the individual has rights against the democratic state and these include the right not to be unjustly coerced or deceived. Socrates, in the Crito, also advanced reasonable arguments that the obedience of a citizen required that the state not coerce or deceive the citizen into the social contract and this would certainly apply to soldiers in a democratic state.

It is certainly tempting to rush to the position that informed consent would make the augmentation of soldiers morally acceptable. After all, the soldier would know what she was getting into and would volunteer to undergo the process in question. In popular fiction, one example of this would be Steve Rogers volunteering for the super soldier conversion. Given his consent, such an augmentation would seem morally acceptable.

There are, of course, some cases where informed consent makes a critical difference in ethics. One obvious example is the moral difference between sex and rape—the difference is a matter of informed and competent consent. If Sam agrees to have sex with Sally, then Sally is not raping Sam. But if Sally drugs Sam and has her way, then that would be rape. Another obvious example is the difference between theft and receiving a gift—this is also a matter of informed consent. If Sam gives Sally a diamond ring, that is not theft. If Sally takes the ring by force or coercion, then that is theft—and presumably wrong.

Even when informed consent is rather important, there are still cases in which the consent does not make the action morally acceptable. For example, Sam and Sally might engage in consensual sex, but if they are siblings or one is the parent of the other, the activity could still be immoral. As another example, Sam might consent to give Sally an heirloom ring that has been in the family for untold generations, but it might still be the wrong thing to do—especially when Sally hocks the ring to buy heroin.

There are also cases in which informed consent is not relevant because of the morality of the action itself. For example, Sam might consent to join in Sally’s plot to murder Ashley (rather than being coerced or tricked) but this would not be relevant to the ethics of the murder. At best it could be said that Sally did not add to her misdeed by coercing or tricking her accomplices, but this would not make the murder itself less bad.

Turning back to the main subject of augmentation, even if the soldiers gave their informed consent, the above considerations show that there would still be the question of whether or not the augmentation itself is moral. For example, there are reasonable moral arguments against genetically modifying human beings. If these arguments hold up, then even if a soldier consented to genetic modification, the modification itself would be immoral. I will be addressing the ethics of pharmaceutical, biological and cybernetic augmentation in later essays.

While informed consent does seem to be a moral necessity, this position can be countered. One stock way to do this is to make use of a utilitarian argument: if the benefits gained from augmenting soldiers without their informed consent outweighed the harms, then the augmentation would be morally acceptable. For example, imagine that a war against a wicked enemy is going rather badly and that an augmentation method has been developed that could turn the war around. The augmentation is dangerous and has awful long term side-effects that would deter most soldiers from volunteering. However, losing to the wicked enemy would be worse—so it could thus be argued that the soldiers should be deceived so that the war could be won. As another example, a wicked enemy is not needed—it could simply be argued that the use of augmented soldiers would end the war faster, thus saving lives, albeit at the cost of those terrible side-effects.

Another stock approach is to appeal to the arguments used by democracies to justify conscription in time of war. If the state (or, rather, those who expect people to do what they say) can coerce citizens into killing and dying in war, then the state can surely coerce citizens to undergo augmentation. It is easy to imagine a legislature passing something called “the conscription and augmentation act” that legalizes coercing citizens into being augmented to serve in the military. Of course, there are those who are suspicious of democratic states so blatantly violating the rights of life and liberty. However, not all states are democratic.

While democratic states would seem to face some moral limits when it comes to involuntary augmentation, non-democratic states appear to have more options. For example, under fascism the individual exists to serve the state (that is, the bastards that think everyone else should do what they say). If this political system is morally correct, then the state would have every right to coerce or deceive the citizens for the good of the state. In fiction, these states tend to be the ones to crank out involuntary augmented soldiers (that still manage to lose to the good guys).

Naturally, even if the state has the right to coerce or deceive soldiers into becoming augmented, it does not automatically follow that the augmentation itself is morally acceptable—this would depend on the specific augmentations. These matters will be addressed in upcoming essays.

 

 


Augmented Soldier Ethics I: Exoskeletons

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on February 4, 2015

US-Army exoskeleton (Photo credit: Wikipedia)

One common element of military science fiction is the powered exoskeleton, also known as an exoframe, exosuit or powered armor. The basic exoskeleton is a powered framework that serves to provide the wearer with enhanced strength. In movies such as Edge of Tomorrow and video games such as Call of Duty Advanced Warfare the exoskeletons provide improved mobility and carrying capacity (which can include the ability to carry heavier weapons) but do not provide much in the way of armor. In contrast, the powered armor of science fiction provides the benefits of an exoskeleton while also providing a degree of protection. The powered armor of Starship Troopers, The Forever War, Armor and Iron Man all serve as classic examples of this sort of gear.

Because the exoskeletons of fiction provide soldiers with enhanced strength, mobility and carrying capacity, it is no surprise that militaries are very interested in exoskeletons in the real world. While exoskeletons have yet to be deployed, there are some ethical concerns about the augmentation of soldiers.

On the face of it, the use of exoskeletons in warfare seems to be morally unproblematic. The main reason is that an exoskeleton is analogous to any other vehicle, with the exception that it is worn rather than driven. A normal car provides the driver with enhanced mobility and carrying capacity and this is presumably not immoral. In terms of the military context, the exoskeleton would be comparable to a Humvee or a tank, both of which seem morally unproblematic as well.

It might be objected that the use of exoskeletons would give wealthier nations an unfair advantage in war. The easy and obvious response to this is that, unlike in sports and games, gaining an “unfair” advantage in war is not immoral. After all, there is not a moral expectation that combatants will engage in a fair fight rather than making use of advantages in such things as technology and numbers.

It might be objected that the advantage provided by exoskeletons would encourage countries that had them to engage in aggressions that they would not otherwise engage in. The easy reply to this is that despite the hype of video games and movies, any exoskeleton available in the near future would most likely not provide a truly spectacular advantage to infantry. This advantage would, presumably, be on par with existing advantages such as those the United States enjoys over almost everyone else in the world. As such, the use of exoskeletons would not seem morally problematic in this regard.

One point of possible concern is what might be called the “Iron Man Syndrome” (to totally make something up). The idea is that soldiers equipped with exoskeletons might become overconfident (seeing themselves as being like the superhero Iron Man) and thus put themselves and others at risk. After all, unless there are some amazing advances in armor technology that are unmatched by weapon technology, soldiers in powered armor will still be vulnerable to weapons capable of taking on light vehicle armor (which exist in abundance). However, this could be easily addressed by training. And experience.

A second point of possible concern is what could be called the “ogre complex” (also totally made up). An exoskeleton that dramatically boosts a soldier’s strength might encourage some people to act as bullies and abuse civilians or prisoners. While this might be a legitimate concern, it can easily be addressed by proper training and discipline.

There are, of course, the usual peripheral issues associated with new weapons technology that could have moral relevance. For example, it is easy to imagine a nation wastefully spending money on exoskeletons, perhaps due to corruption. However, such matters are not specific to exoskeletons and would not be moral problems for the technology as such.

Given the above, it would seem that augmenting soldiers with exoskeletons poses no new moral concerns and is morally comparable to providing soldiers with Humvees, tanks and planes.


Ladies & Swearing

Posted in Aesthetics, Ethics, Philosophy by Michael LaBossiere on February 2, 2015
Swearing comic (Photo credit: Wikipedia)

Once and future presidential candidate Mike Huckabee recently expressed his concern about the profanity flowing from the mouths of New York Fox News ladies: “In Iowa, you would not have people who would just throw the f-bomb and use gratuitous profanity in a professional setting. In New York, not only do the men do it, but the women do it! This would be considered totally inappropriate to say these things in front of a woman. For a woman to say them in a professional setting that’s just trashy!”

In response, Erin Gloria Ryan posted a piece on Jezebel.com. As might be suspected, the piece utilized the sort of language that Mike dislikes and she started off with “listen up, cunts: folksy as balls probable 2016 Presidential candidate Mike Huckabee has some goddamn opinions about what sort of language women should use. And guess the fuck what? You bitches need to stop with this swearing shit.” While the short article did not set a record for OD (Obscenity Density), the author did make a good go at it.

I am not much for swearing. In fact, I used to say “swearing is for people who don’t know how to use words.” That said, I do recognize that there are proper uses of swearing.

While I generally do not favor swearing, there are exceptions in which swearing was not only permissible, but necessary. For example, when I was running cross country, one of the other runners was looking super rough. The coach asked him how he felt and he said “I feel like shit coach.” The coach corrected him by saying “no, you feel like crap.” He replied, “No, coach, I feel like shit.” And he was completely right. Inspired by the memory of this exchange, I will endeavor to discuss proper swearing. I am, of course, not developing a full theory of swearing—just a brief exploration of the matter.

I do agree with some of what Huckabee said, namely the criticism of swearing in a professional context. However, my professional context is academics and I am doing my professional thing in front of students and other faculty—not exactly a place where gratuitous f-bombing would be appropriate or even useful. It would also make me appear sloppy and stupid—as if I could not express ideas or keep the attention of the class or colleagues without the cheap shock theatrics of swearing.

I am certainly open to the idea that such swearing could be appropriate in certain professional contexts. That is, that the vocabulary of swearing would be necessary to describe professional matters accurately and doing so would not make a person seem sloppy, disrespectful or stupid. Perhaps Fox News and Jezebel.com are such places.

While I was raised with certain patriarchal views, I have shed all but their psychological residue. Hearing a woman swear “feels” worse than hearing a man swear, but I know this is just the dregs of the past. If it is appropriate for a man to swear, the same right of swearing applies to a woman equally. I’m gender neutral, at least in principle.

Outside of the professional setting, I still have a general opposition to casual and repetitive swearing. The main reason is that I look at words and phrases as tools. As with any tool, they have their suitable and proper uses. While a screwdriver could be used to pound in nails, that is a poor use. While a shotgun could be used to kill a fly, that is excessive and will cause needless collateral damage. Likewise, swear words have specific functions and using them poorly can show not only a lack of manners and respect, but a lack of artistry.

In general, the function of swear words is to serve as dramatic tools—that is, they are intended to shock and to convey something rather strong, such as great anger. To use them casually and constantly is rather like using a scalpel for every casual cutting task—while it will work, the blade will grow dull from repeated use and will no longer function well when it is needed for its proper task. So, I reserve my swear words not because I am prudish, but because if I wear them out, they will not serve me when I really need them most. For example, if I say “we are fucked” all the time for any minor problem, then when a situation in which we are well and truly fucked arrives, I will not be able to use that phrase effectively. But, if I save it for when the fuck hits the fan, then people who know me will know that it has gotten truly serious—I have broken out the “it is serious” words.

As another example, swear words should be saved for when a powerful insult or judgment is needed. If I were to constantly call normal people “fuckers” or describe not-so-bad things as being “shit”, then I would have little means of describing truly bad people and truly bad things. While I generally avoid swearing, I do need those words from time to time, such as when someone really is a fucker or something truly is shit.

Of course, swear words can also be used for humorous purposes. This is not really my sort of thing, but their shock value can serve well here—to make a strong point or to shock. However, if the words are too worn by constant use, then they can no longer serve this purpose. And, of course, it can be all too easy and inartistic to get a laugh simply by being crude—true artistry involves being able to get laughs using the same language one would use in front of grandpa in church. Of course, there is also an artistry to swearing—but that is more than just doing it all the time.

I would not dream of imposing on others—folks who wish to communicate normally using swear words have every right to do so, just as someone is free to pound nails with a screwdriver or whittle with a scalpel. However, it does bother me a bit that these words are being dulled and weakened by excessive use. If this keeps up, we will need to make new words and phrases to replace them—and then, no doubt, new words to replace those.

 


Why Republicans Should Support Legalizing Marijuana

Posted in Ethics, Law, Philosophy, Politics by Michael LaBossiere on January 28, 2015

NORML members protest in Lafayette Park during the annual July 4th “Smoke-In.” (Photo credit: Wikipedia)

While I believe that people should not use marijuana, I believe that the sale and consumption of the drug should be legal. Given the espoused principles of the Republicans, they should agree with me. To make the case for this, I will consider some of the core espoused principles of the Republicans.

First, Republicans employ the usual rhetoric of freedom (in early 2015 they had a Freedom Summit in Iowa) and allowing people the freedom to grow, sell and use marijuana would be consistent with the notion of freedom. But, of course, the vague rhetoric of freedom is just that—vague rhetoric. So I will turn to more specific principles.

Second, there is the standard Republican claim that they prefer to have matters handled locally rather than by the power of the federal government. Some states and the District of Columbia have legalized marijuana at the local level. To be consistent, the Republicans should accept the local decisions and allow the citizens to exercise the freedom they voted for. To impose on the local governments and the citizens would be contrary to this espoused principle.

Third, Republicans often speak of “getting government off our back” and in favor of small government. The laws regarding marijuana and their enforcement certainly put the government on the back of citizens. As the Republicans like to say, why should the state be telling people what they can and cannot do? These laws have also led to an increase in the size of government, which is contrary to the small government ideal.

Fourth, Republicans are typically eager to oppose regulations and want to set the market free. Legalizing marijuana by removing the existing laws would reduce regulations, thus being in accord with this ideological point. The free market has clearly spoken in regards to marijuana: people want to buy and sell it. To impose harsh laws and regulations on these transactions is to impede the free market and to have the government pick winners and losers. The Republicans should be in favor of this freeing of the market from burdensome regulation.

Fifth, Republicans speak lovingly of job creators and job creation. The marijuana industry is run by job creators who create many jobs in growing and distributing the crops. They also create jobs in the snack and fast food industries as well as in the paraphernalia business. Legalizing marijuana would help grow the economy and create jobs, so the Republicans should support this.

Finally, the Republicans express a devotion to lowering government spending. Enforcing the marijuana laws is rather costly and legalizing marijuana would help reduce government spending. This would allow more tax cuts. Given these key Republican principles, they should eagerly embrace the legalization of marijuana.

It might be objected that Republicans, despite these espoused principles, should oppose legalizing marijuana. One reason that has been given is that marijuana is harmful, and specifically harmful to children.

I, of course, agree that marijuana is harmful and certainly agree that children should not use it. However, there is the matter of consistency. Obviously enough, harmful things such as alcohol, automobiles, tobacco, junk food and guns are legal in the United States and Republicans are staunch supporters of these things—despite the harm they do. As such, Republican support of marijuana would be consistent with their support of such things as guns, fossil fuels and tobacco. As far as the matter of children, marijuana can be handled in the same way as cars, guns, tobacco and alcohol. That is, marijuana can be illegal for children.

There is also the fact that while marijuana is harmful, it does not seem to be significantly more harmful than tobacco and alcohol. Its use also kills far fewer people than do cars and guns. Naturally, I do agree that it should be illegal to drive, etc. while high—just as it is illegal to drive when drunk. As such, the harmful nature of marijuana does not seem to provide a consistent reason to keep it illegal while these other harmful things remain legal.

It might be objected that marijuana is simply immoral and thus must be kept illegal. The obvious challenge is showing why it is simply immoral and then showing why immoral things should be made illegal. This can be done—but the adoption of the principle that the immoral must be illegal would probably not appeal to Republicans if it were consistently applied.

 


What is the Worst Thing You Should (Be Allowed to) Say?

Posted in Ethics, Law, Philosophy, Politics by Michael LaBossiere on January 26, 2015

Members of Westboro Baptist Church have been specifically banned from entering Canada for hate speech. Church members enter Canada, aiming to picket bus victim’s funeral (Photo credit: Wikipedia)

The murders at Charlie Hebdo and their aftermath raised the issue of freedom of expression in a dramatic and terrible manner. In response to these deaths, there was an outpouring of support for this basic freedom and, somewhat ironically, a crackdown on some people expressing their views.

This situation raises two rather important issues. The first is the matter of determining the worst thing that a person should express. The second is the matter of determining the worst thing that a person should be allowed to express. While these might seem to be the same issue, they are not. The reason for this is that there is a distinction between what a person should do and what is morally permissible to prevent a person from doing. The main focus will be on using the coercive power of the state in this role.

As an illustration of the distinction, consider the example of a person lying to his girlfriend about running strikes all day in the video game Destiny when he was supposed to be doing yard work. It seems reasonable to think that he should not lie to her (although exceptions are easy to imagine). However, it also seems reasonable to think that the police should not be sent to coerce him into telling her the truth. So, he should not lie to her about playing the game but he should be allowed to do so by the state (that is, it should not use its police powers to stop him).

This view can be disputed and there are those who argue in favor of complete freedom from the state (anarchists) and those who argue that the state should control every aspect of life (totalitarians). However, the idea that there are some matters that are not the business of the state seems to be an intuitively plausible position—at least in democratic states such as the United States. What follows will rest on this assumption and the challenge will be to sort out these two issues.

One rather plausible and appealing approach is to take a utilitarian stance on the matter and accept the principle of harm as the foundation for determining the worst thing that a person should express and also the worst thing that a person should be allowed to express. The basic idea behind this is that the right of free expression is bounded by the stock liberal right of others not to be harmed in their life, liberty and property without due justification.

In the case of the worst thing that a person should express, I am speaking in the context of morality. There are, of course, non-moral meanings of “should.” To use the most obvious example, there is the “pragmatic should”: what a person should or should not do in regards to advancing his practical self-interest. For example, a person should not tell her boss what she really thinks of him if doing so would cost her the job she desperately needs. To use another example, there is also the “should of etiquette”: what a person should do or not do in order to follow the social norms. For example, a person should not go without pants at a formal wedding, even to express his opposition to the tyranny of pants.

Returning to the matter of morality, it seems reasonable to go with the stock approach of weighing the harm the expression generates against the right of free expression (assuming there is such a right). Obviously enough, there is not an exact formula for calculating the worst thing a person should express and this will vary according to the circumstances. For example, the worst thing one should express to a young child would presumably be different from the worst thing one should express to an adult. In terms of the harms, these would include the obvious things such as offending the person, scaring her, insulting her, and so on for the various harms that can be inflicted by mere expression.

While I do not believe that people have a right not to be offended, people do seem to have a right not to be unjustly harmed by other people expressing themselves. To use an obvious example, men should not catcall women who do not want to be subject to this verbal harassment. This sort of behavior certainly offends, upsets and even scares many women and the men’s right to free expression does not give them a moral pass that exempts them from what they should or should not do.

To use another example, people should not intentionally and willfully insult another person’s deeply held beliefs simply for the sake of insulting or provoking the person. While the person does have the right to mock the belief of another, his right of expression is not a moral free pass to be abusive.

As a final example, people should not engage in trolling. While a person does have the right to express his views so as to troll others, this is clearly wrong. Trolling is, by definition, done with malice and contributes nothing of value to the conversation. As such, it should not be done.

It is rather important to note that while I have claimed that people should not unjustly harm others by expressing themselves, I have not made any claims about whether or not people should or should not be allowed to express themselves in these ways. It is to this that I now turn.

If the principle of harm is a reasonable principle (which can be debated), then a plausible approach would be to use it to sketch out some boundaries. The first rough boundary was just discussed: this is the boundary between what people should express and what people should (morally) not. The second rough boundary begins at the point where other people should be allowed to prevent a person from expressing himself and ends just before the point at which the state has the moral right to use its coercive power to prevent expression.

This area is the domain of interactions between people that does not fall under the authority of the state, yet still permits people to be prevented from expressing their views. To use an obvious example, the workplace is such a domain in which people can be justly prevented from expressing their views without the state being involved. To use a specific example, the administrators of my university have the right to prevent me from expressing certain things—even if doing so would not fall under the domain of the state. To use another example, a group of friends would have the right, among themselves, to ban someone from their group for saying racist, mean and spiteful things to one of their number. As a final example, a blog administrator would have the right to ban a troll from her site, even though the troll should not be subject to the coercive power of the state.

The third boundary is the point at which the state can justly use its coercive power to prevent a person from engaging in expression. As with the other boundaries, this would be set (roughly) by the degree of harm that the expression would cause others. There are many easy and obvious examples where the state would act rightly in imposing on a person: threats of murder, damaging slander, incitements to violence against the innocent, and similar such unquestionably harmful expressions.

Matters do, of course, get complicated rather quickly. Consider, for example, a person who does not call for the murder of cartoonists who mock Muhammad but tweets his approval when they are killed. While this would certainly seem to be something a person should not do (though this could be debated), it is not clear that it crosses the boundary that would allow the state to justly prevent the person from expressing this view. If the approval does not create sufficient harm, then it would seem to not warrant coercive action against the person by the state.

As another example, consider the expression of racist views via social media. While people should not say such things (and would be justly subject to the consequences), as long as they do not engage in actual threats, then it would seem that the state does not have the right to silence the person. This is because the expression of racist views (without threats) would not seem to generate enough harm to warrant state coercion. Naturally, it could justify action on the part of the person’s employer, friends and associates: he might be fired and shunned.

As a third example, consider a person who mocks the dominant or even official religion of the state. While the rulers of such states usually think they have the right to silence such an infidel, it is not clear that this would create enough unjust harm to warrant silencing the person. Being an American, I think that it would not—but I believe in both freedom of religion and the freedom to mock religion. There is, of course, the concern that such mockery would provoke others to harm the mocker, thus warranting the state to stop the person—for her own protection. However, the fact that people will act wrongly in response to expressions would not seem to warrant coercing the person into silence.

In general, I favor erring on the side of freedom: unless the state can show that silencing expression is needed to prevent a real and unjust harm, the state does not have the moral right to silence expression.

I have merely sketched out a general outline of this matter and have presented three rough boundaries in regards to what people should say and what they should be allowed to say. Much more work would be needed to develop a full and proper account.

 

