A Philosopher's Blog

Confederates & Nazis

Posted in Ethics, Philosophy, Politics, Race, Uncategorized by Michael LaBossiere on August 18, 2017

While there has been an attempt to revise the narrative of the Confederate States of America into a story of states’ rights, the fact of the matter is that secession from the Union was driven by slavery. At the time of secession, those leading it made no bones about this fact—they explicitly presented slavery as their prime motivation. This is not to deny that there were other motivations, such as concerns about states’ rights and economic factors. Even so, the Confederacy’s moral foundation was slavery. This entails a rejection of the principle that all men are created equal, a rejection of the notion of liberty, and an abandonment of the idea that the legitimacy of government rests on the consent of the governed. In short, the Confederacy was an explicit rejection of the core stated values of the United States.

While the Confederacy lost the war and the Union was restored, its values survived and are now explicitly manifested in the alt-right. After all, it is no coincidence that the alt-right has been marching in defense of Confederate monuments and often makes use of Confederate flags. They are, after all, aware of the moral foundations of their movement. Or, rather, immoral foundations.

While the value system of the Confederacy embraced white supremacy and accepted slavery as a moral good, it did not accept genocide. That is, the Confederacy advocated enslaving blacks rather than exterminating them. Extermination was, of course, something the Nazis embraced.

As is well known, the Nazis took over the German state and plunged the world into war. Like the Confederate states, the Nazis embraced the idea of white supremacy and rejected equality and liberty. The Nazis also made extensive use of slave labor. Unlike the Confederate states, the Nazis infamously engaged in a systematic effort to exterminate those they regarded as inferior. This does mark a moral distinction between the Confederate States of America and Nazi Germany. This is, however, a distinction between degrees of evil.

While the Nazis are generally regarded by most Americans as the paradigm of evil, many in the alt-right embrace their values and some do so explicitly and openly, identifying as neo-Nazis. Some do make the claim that they do not want to exterminate what they regard as other races; they profess a desire to have racially pure states. So, for example, some in the alt-right support Israel on the grounds that they see it as a Jewish state. In their ideal world, each state would be racially pure. This is why the alt-right is sometimes also referred to as white nationalists. The desire to have pure states can be seen as morally better than the desire to exterminate others, but this is also a distinction in evils rather than a distinction between good and bad.

Based on the above, the modern alt-right is the inheritor of both the Confederate States of America and Nazi Germany. While this might seem to be merely a matter of historical interest, it does have some important implications. One of these is that it provides grounds for regarding members of the alt-right as on par with members or supporters of ISIS or other such enemy foreign terrorist groups. This is in contrast with regarding the alt-right as entirely domestic.

Those who join or support ISIS (and other such groups) are regarded as different from domestic hate groups. This is because ISIS (and other such groups) are foreign and are effectively at war with the United States. This applies even when the ISIS supporter is an American who lives in America. This perceived difference has numerous consequences, including legal ones. It also has consequences for free speech—while advocating the goals and values of ISIS in the United States would be regarded as a threat worthy of a response from the state, the alt-right is generally seen as being protected by the right to free speech. This is nicely illustrated by the fact that the alt-right can get permits to march in the United States, while ISIS supporters cannot. One can imagine the response if ISIS supporters applied for a permit or engaged in a march.

While some hate groups can be regarded as truly domestic in that they are not associated with foreign organizations engaged in war with the United States, the alt-right cannot make this claim. At least they cannot to the degree they are connected to the Confederate States of America and the Nazis. Both are foreign powers at war with the United States. As such, the alt-right should be regarded as on par with other groups that affiliate themselves with foreign groups engaged in war with the United States.

The easy and obvious reply is that both the Confederacy and the Nazis were defeated and no longer exist. On the one hand, this is true. The Confederacy was destroyed and the seceding states rejoined the United States. The Nazis were defeated and while Germany still exists, it is not controlled by the Nazis. On the other hand, the Confederacy and the Nazis do persist in the form of various groups that preserve their values and ideology—including the alt-right. To use the obvious analogy, even if all territory is reclaimed from ISIS and it is effectively defeated as a state, this does not entail that ISIS will be gone. It will persist as long as it has supporters, and presumably the United States would not switch to a policy of tolerating ISIS members and supporters simply because ISIS no longer holds territory.

The same should hold true for those supporting or claiming membership in the Confederacy or the Nazis—they are supporters of foreign powers that are enemies of the United States and are thus on par with ISIS supporters and members in terms of being agents of the enemy. This is not to say that the alt-right is morally equivalent to ISIS in terms of its actions. On the whole, ISIS is indisputably worse. But what matters in this context is the expression of allegiance to the values and goals of a foreign enemy—something ISIS supporters and alt-right members who embrace the Confederacy or Nazis have in common.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Trump’s White Nationalists, Again

Posted in Ethics, Philosophy, Politics, Race, Uncategorized by Michael LaBossiere on August 16, 2017

On the face of it, condemning white supremacists and neo-Nazis is one of the politically easiest things to do. Trump, however, seems incapable of engaging in this simple task. Instead, he has continued to act in ways that lend support to the alt-right. After a delayed and reluctant condemnation of the alt-right, Trump returned to his lane by making two claims. The first is the claim that “there is blame on both sides.” The second is the claim that there are good people on both sides. On the face of it, both claims are false. That said, these claims will be given more consideration than they deserve.

If one accepts a very broad concept of blame, then it would be possible to claim that there is blame on both sides. This could be done in the following way. The first step is asserting that a side is responsible if an event would not have taken place without its involvement. This is based, of course, on the notion that accountability is a matter of “but for.” In the case at hand, the relevant claim would be that but for the presence of the counter-protestors, there would have been no violence against them and Heather Heyer would not have been murdered. On this notion of responsibility, both sides are to blame.

While this concept of blame might have some appeal, it is obviously flawed. This is because the application of the principle would entail that any victim or target of a crime or misdeed would share some of the blame for the crime or misdeed. For example, but for a person having property, they would not have been robbed. As another example, but for being present during a terrorist attack, the person would not have been killed. As such, meriting blame would require more than such a broad “but for” condition.

A possible reply to this counter is to argue that the counter-protestors were not mere targets, but were active participants. That is, co-belligerents and co-instigators. To use an analogy, if a bar fight breaks out because two people start insulting each other and then start swinging, then both parties do share the blame. Trump seems to regard what happened in Virginia as analogous to this sort of a bar fight. If this is true, then both sides would bear some of the blame.

Of course, even if both parties were belligerent, there are still grounds for assigning blame to one side rather than the other. For example, if someone goes to a party intending to misbehave and another person who steps up to counter this is attacked, then the attacker would be to blame. This is because of the moral difference between the two parties: one is acting to commit a misdeed, the other is trying to counter this. In the case of Virginia, the alt-right is in the wrong. They are, after all, endorsing morally wicked views that should be countered.

There is, of course, also the obvious fact that it was a member of the alt-right who is alleged to have driven a car into the crowd, killing one person and injuring others. As such, if any blame is to be placed on a side, it is to be placed on the alt-right.

It could be argued that the action of one person in the alt-right does not make the entire group guilty of the crime. This is certainly a reasonable claim—a group is not automatically responsible for the actions of its worst members, whether the group is made up of Muslims, Christians, whites, blacks, conservatives or liberals. That said, the principles used to assign collective responsibility need to be applied consistently—people have an unfortunate tendency to use different standards for groups they like and groups they dislike. I would certainly agree that the alt-right members who did not engage in violence or instigate it are not responsible for the violence. However, it could be argued that the rhetoric and ideology of the alt-right inherently instigates and urges violence and evil behavior. If so, then all members who accept the ideology of the alt-right are accountable for being part of a group that is dedicated to doing evil. I now turn to Trump’s second claim.

Trump also made the claim that there are good people on both sides. As others have noted, this seems similar to his remarks about Mexicans being rapists and such, but also there being some good Mexicans. As such, Trump’s remark might simply be a Trumpism—something that just pops out of his mouth with no meaning or significance, like a burp. But, let it be assumed for the sake of discussion that Trump was trying to say something meaningful.

Trump is certainly right that there are good people on the side opposed to the alt-right. After all, the alt-right endorses a variety of evil positions and good people oppose evil. As for good people being in the alt-right, that is not as clear. After all, as was just noted, the values expressed by the alt-right include evil views and it would be unusual for good people to endorse such views. This can, of course, be countered by arguing that the alt-right is not actually evil (which is presumably what many members believe—few people think of themselves as the villains). It can also be countered by asserting that there are good people who are in the alt-right out of error (they are good people, but err in some of their beliefs) or who hope to guide the movement to better goals. It could also be claimed that any group that is large enough will contain at least some good people (just as it will also contain bad people). For example, people often point to General Robert E. Lee as a good person serving an evil cause.

Given these considerations, it does seem possible that there is at least one good person in the alt-right and hence Trump could be right in the strict logical sense that there is at least one good person in the group. But Trump’s purpose is almost certainly not to make a claim that is trivial in its possible truth. Rather, he seems to be engaged in another false equivalence: that the alt-right and their opponents are morally equivalent because both groups have some good people. Given the evil of the alt-right’s views (which are fundamentally opposed to the expressed values of the United States), saying that both sides are morally the same is obviously to claim what is false. The alt-right is the worse side, and objectively so.


Trump’s White Nationalists

Posted in Philosophy, Politics, Race by Michael LaBossiere on August 14, 2017


While the election of Obama led some to believe that racism had been exorcized, the triumph of Trump caused speculation that the demon had merely retreated to the shadows of the internet. In August of 2017, the city of Charlottesville, VA served as the location of a “Unite the Right” march. This march, which seems to have been a blend of neo-Nazis, white supremacists and others of the alt-right, erupted in violence. One woman who was engaged in a counter-protest against the alt-right, Heather Heyer, was murdered. Officers Cullen and Bates were also killed when their helicopter crashed, although this appears to have been an accident.

While Trump strikes like an enraged wolverine at real or imagined slights against himself, his initial reply to the events in Charlottesville was tepid. As has been his habit, Trump initially resisted being critical of white supremacists and garnered positive remarks from the alt-right for their perception that he has created a safe space for their racism. This weak response has, as would be expected, been the target of criticism from both the left and the more mainstream right.

Since the Second World War, condemning Nazis and neo-Nazis has been extremely easy and safe for American politicians. Perhaps the only thing easier is endorsing apple pie. Denouncing white supremacists can be more difficult, but since the 1970s this has also been an easy move, on par with expressing a positive view of puppies in terms of the level of difficulty. This leads to the question of why Trump and the White House responded with “We condemn in the strongest possible terms this egregious display of hatred, bigotry and violence on many sides, on many sides” rather than explicitly condemning the alt-right. After all, Trump pushes hard to identify acts of terror by Muslims as Islamic terror and accepts the idea that this sort of identification is critical to fighting such terror. Consistency would seem to require that Trump identify terror committed by the alt-right as “alt-right terror”, “white-supremacist terror”, “neo-Nazi terror” or whatever would be appropriate. Trump, as noted above, delayed making specific remarks about white supremacists.

Some have speculated that Trump is a racist. Trump denies this, pointing to the fact that his beloved daughter married a Jew and converted to Judaism. While Trump does certainly make racist remarks, it is not clear if he embraces an ideology of racism or any ideology at all beyond egoism and self-interest. While the question of whether he is a racist is certainly important, there is no need to speculate on the matter when addressing his response (or lack of response). What matters is that the weakness of his initial response and his delay in making a stronger response sends a clear message to the alt-right that Trump is on their side, or at least is very tolerant of their behavior. It could be claimed that the alt-right is like a deluded suitor who thinks someone is really into them when they are not, but this seems implausible. After all, Trump is very easy on the alt-right and must be pushed, reluctantly, into being critical. If he truly condemned them, he would have reacted as he always does against things he does not like: immediately, angrily, repeatedly and incoherently. Trump, by not doing this, sends a clear message and allows the alt-right to believe that Trump does not really mean it when he condemns them days after the fact. As such, while Trump might not be a racist, he does create a safe space for racists. As Charlottesville and other incidents show, the alt-right presents a more serious threat to American lives than does terror perpetrated by Muslims. As such, Trump is not only abetting the evil of racism, he could be regarded as an accessory to murder.

It could be countered that Trump did condemn the bigotry, violence and hatred and thus his critics are in error. One easy and obvious reply is that although Trump did say he condemns these things, his condemnation was not directed at the perpetrators of the violence. After seeming to be on the right track towards condemning the wrongdoers, Trump engaged in a Trump detour by condemning the bigotry and such “on many sides.” This could, of course, be explained away: perhaps Trump lost his train of thought, perhaps Trump had no idea what was going on and decided to try to cover his ignorance, or perhaps Trump was just being Trump. While these explanations are tempting, it is also worth considering that Trump was using the classic rhetorical tactic of false equivalence—treating things that are not equal as being equal. In the case at hand, Trump can be seen as regarding those opposing the alt-right as being just as bigoted, hateful and violent as the alt-right’s worst members. While there are hateful bigots who want to do violence to whites, the real and significant threat is not from those who oppose the alt-right, but from the alt-right. After all, the foundation of the alt-right is bigotry and hatred. Hating Neo-Nazis and white supremacists is the morally correct response and does not make one equivalent or even close to being the same as them.

One problem with Trump’s false equivalence is that it helps feed the narrative that those who actively oppose the alt-right are bad people—evil social justice warriors and wicked special snowflakes. This encourages people who do not agree with the alt-right but do not like the left to focus on criticizing the left rather than the alt-right.  However, opposing the alt-right is the right thing to do.  Another problem with Trump’s false equivalence is that it encourages the alt-right by allowing them to see such remarks as condemning their opponents—they can tell themselves that Trump does not really want to condemn his alt-right base but must be a little critical because of politics.  While Trump might merely be pragmatically appealing to his base and selfishly serving his ego, his tolerance for the alt-right is damaging to the country and will certainly contribute to more murders.

 


Work & Vacation

Posted in Business, Law, Philosophy, Uncategorized by Michael LaBossiere on August 11, 2017

Most Americans do not use their vacation days, despite the fact that they tend to get fewer of them than their European counterparts. A variety of plausible reasons have been advanced for this, most of which reveal interesting facts about working in the United States.

As would be expected, fear is a major factor. Even when a worker is guaranteed paid vacation time as part of their compensation for work, many workers are afraid that using this vacation time will harm them. One worry is that by using this time, they will show that they are not needed or are inferior to workers who do not take as much (or any) time, and hence will be passed over for advancement or even fired. On this view, vacation days are a trap—while they are offered and the worker has earned them, to use them all would sabotage or end the person’s employment. This is not to say that all or even many employers intentionally set a vacation day trap—in fact, many employers seem to have to make a special effort to get their employees to use their vacation days. However, this fear is real and does indicate a problem with working in America.

Another fear that keeps workers from using all their days is the fear that they will fall behind in their work, thus requiring them to work extra hard before or after their vacation. On this view, there is little point in taking a vacation if one will just need to do the missed work and do it in less time than if one had simply stayed at work. The practical challenge here is working out ways for employees to take vacation without falling behind (or thinking they will fall behind). After all, if an employee is needed at a business, then their absence will mean that things that need to get done will not get done. This can be addressed in various ways, such as sharing workloads or hiring temporary workers. However, an employee can then be afraid that the business will simply fire them in favor of permanently sharing the workload or by replacing them with a series of lower-paid temporary workers.

Interestingly enough, workers often decline to use all their vacation days because of pride. The idea is that by not using their vacation time, a person can create the impression that they are too busy and too important to take time off from work. In this case, the worker is not afraid of being fired; rather, they are worried that they will lose status and damage their reputation. This is not to say that being busy is always a status symbol—there is, of course, also status attached to being so well off that one can be idle. This fits nicely into Hobbes’ view of human motivation: everything we do, we do for gain or glory. As such, if not taking vacation time increases one’s glory (status and reputation), then people will do that.

On the one hand, people who do work hard (and effectively) do deserve a positive reputation for these efforts and earn a relevant status. On the other hand, the idea that reputation and status are dependent on not using all one’s vacation time can clearly be damaging to a person. Humans do, after all, need to relax and recover. This view also, one might argue, puts too much value on the work aspect of a person’s life at the expense of their full humanity. Then again, for the working class in America, to be is to work (for the greater enrichment of the rich).

Workers who do not get paid vacations tend to not use all (or any) of their vacation days for the obvious reason that their vacations are unpaid. Since a vacation tends to cost money, workers without paid vacations can take a double hit if they take a vacation: they are getting no income while spending money. Since people do need time off from work, there have been some attempts to require that workers get paid vacation time. As would be imagined, this proposal tends to be resisted by businesses. In part it is because they do not like being told what they must do and in part it is because of concerns over costs. While moral arguments about how people should be treated tend to fail, there is some hope that practical arguments about improved productivity and other benefits could succeed. However, as workers have less and less power in the United States (in part because workers have been deluded into embracing ideologies and policies contrary to their own interests), it seems less and less likely that paid vacation time will increase or be offered to more workers.

Some workers also do not use all their vacation days for vacation because they need to use them for other purposes, such as sick days. It is not uncommon for working mothers to save their vacation days to use for when they need to take care of the kids. It is also not uncommon for workers to use their vacation days for sick days, when they need to be at home for a service visit, when they need to go to the doctors or for other similar things. If it is believed that vacation time is something that people need, then forcing workers to use up their vacation time for such things would seem to be wrong. The obvious solution, which is used by some businesses, is to offer such things as personal days, sick leave, and parental leave. While elite employers offer elite employees such benefits, they tend to be less available to workers of lower social and economic classes. So, for example, Sheryl Sandberg gets excellent benefits, while the typical worker does not. This is, of course, a matter of values and not just economic ones. That is, while there is the matter of the bottom line, there is also the question of how people should be treated. Unfortunately, the rigid and punitive class system in the United States ensures that the well-off are treated well, while the little people face a much different sort of life.

 


Of Dice & Chance

Posted in Metaphysics, Philosophy, Uncategorized by Michael LaBossiere on August 9, 2017

Imagine, if you will, a twenty-sided die (or a d20, as it is known to gamers) being rolled. Ideally, the die has a 1 in 20 chance of rolling a 20 (or any other particular number). It is natural to think of the die as being a sort of locus of chance, a random number generator whose roll cannot be predicted. While this is an appealing view of dice, there is a rather interesting question about what such random chance amounts to.

One way to look at the matter, using the example of a d20, is that if the die is rolled 20 times, then one of those rolls will be a 20. Obviously enough, this is not true—as any gamer will tell you, the number of 20s rolled while rolling 20 times varies a great deal. This can, of course, be explained by the fact that d20s are imperfect and hence tend to roll some numbers more than others. There are also the influences of the roller, the surface on which the d20 lands and so on. As such, a d20 will not be a perfect random number generator. But, imagine if there could be a perfect d20 rolled under perfect conditions. What would occur?

One possibility is that each number would come up within the 20 rolls, albeit at random. As such, every 20 rolls would guarantee a 20 (and only one 20), thus accounting for the 1 in 20 chance of rolling a 20. This, however, seems problematic. There is the obvious question of what would ensure that each of the twenty numbers were rolled once (and only once). Then again, that this would occur is only marginally weirder than the idea of chance itself.

It is, of course, well-established that a small number of random events (such as rolling a d20 only twenty times) will deviate from what probability dictates. It is also well-established that as the number of rolls increases, the closer the outcomes will match the expected results (assuming the d20 is not loaded). This general principle is known as the law of large numbers. As such, getting three 20s or no 20s in a series of 20 rolls would not be surprising, but as the number of rolls increases, the closer the results will be to the expected 1 in 20 outcome for each number. As such, the 1 in 20 odds of getting a 20 with a d20 does not mean that 20 rolls will ensure one and only one 20; it means that with enough rolls about 1 in 20 of all the rolls will be 20s. This does not, of course, really say much about how chance works—beyond noting that chance seems to play out “properly” over large numbers.
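To make the law of large numbers concrete, here is a minimal simulation sketch in Python (assuming an idealized fair d20 modeled with a pseudorandom generator, so it illustrates rather than proves the point). It shows the fraction of 20s drifting toward 5% as the number of rolls grows, and also that a block of twenty rolls contains exactly one 20 only about 38% of the time.

```python
import random

def roll_d20():
    """Simulate one roll of an idealized, perfectly fair d20."""
    return random.randint(1, 20)

# Law of large numbers: the fraction of 20s drifts toward 1/20 (5%)
# as the number of rolls grows, even though short runs can stray far from it.
for n in (20, 200, 2_000, 200_000):
    twenties = sum(1 for _ in range(n) if roll_d20() == 20)
    print(f"{n:>7} rolls: {twenties / n:.4f} of the rolls were 20s")

# A block of twenty rolls is not guaranteed to contain exactly one 20;
# under fair, independent rolls this happens only about 37.7% of the time.
blocks = 100_000
exactly_one = sum(
    1 for _ in range(blocks)
    if sum(1 for _ in range(20) if roll_d20() == 20) == 1
)
print(f"Blocks of 20 rolls containing exactly one 20: {exactly_one / blocks:.3f}")
```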

One interesting way to look at this is to say that if there were an infinite number of d20 rolls, then 5% of the infinite number of rolls would be 20s. One might, of course, wonder what 5% of infinity would be—would it not be infinite as well? Since infinity is such a mess, a rather more manageable approach would be to use the largest finite number (which presumably has its own problems) and note that 5% of that number of d20 rolls would be 20s.

Another approach would be to say that the 1 in 20 chance means that if all the 1 in 20 chance events in the universe were gathered into sets of twenty, the sets could be formed so that each contained one occurrence of each of the twenty outcomes. Using dice as the example, if all the d20 rolls in the universe were known and collected into sets of numbers, they could be divided up into sets of twenty with each number appearing in each set. So, while my 20 rolls would not guarantee a 20, there would be one 20 out of every 20 rolls in the universe. There is still, of course, the question of how this would work. One possibility is that random events are not truly random and this ensures the proper distribution of events—in this case, dice rolls.

It could also be claimed that chance is a bare fact, that a perfect d20 rolled in perfect conditions would have a 1 in 20 chance of producing a specific number. On this view, the law of large numbers might fail—while unlikely, if chance were a real random thing, it would not be impossible for results to be radically different than predicted. That is, there could be an infinite number of rolls of a perfect d20 with no 20 being rolled. One could even imagine that since a 1 can be rolled on any roll, someone could roll an infinite number of consecutive 1s. Intuitively this seems impossible—it is natural to think that in an infinity every possibility must occur (and perhaps do so perfectly in accord with the probability). But, this would only be a necessity if chance worked a certain way, perhaps that for every 20 rolls in the universe there must be one of each result. Then again, infinity is a magical number, so perhaps this guarantee is part of the magic.
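One standard way to make this tension precise, assuming independent rolls of a fair d20, is to note that the probability of seeing no 20 shrinks toward zero as the number of rolls grows:

$$P(\text{no 20 in } n \text{ rolls}) = \left(\tfrac{19}{20}\right)^n \longrightarrow 0 \quad \text{as } n \to \infty.$$

On this reading, an endless run of non-20s has probability zero yet is not logically impossible; standard probability theory distinguishes "almost never" from "never," which is the gap the intuition above is tracking.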


Experience Machines

Posted in Ethics, Metaphysics, Philosophy, Uncategorized by Michael LaBossiere on August 8, 2017

Experience Machines, edited by Mark Silcox (and including a chapter by me) is now available where fine books are sold, such as Amazon.

In his classic work Anarchy, State and Utopia, Robert Nozick asked his readers to imagine being permanently plugged into a ‘machine that would give you any experience you desired’. He speculated that, in spite of the many obvious attractions of such a prospect, most people would choose against passing the rest of their lives under the influence of this type of invention. Nozick thought (and many have since agreed) that this simple thought experiment had profound implications for how we think about ethics, political justice, and the significance of technology in our everyday lives.

Nozick’s argument was made in 1974, about a decade before the personal computer revolution in Europe and North America. Since then, opportunities for the citizens of industrialized societies to experience virtual worlds and simulated environments have multiplied to an extent that no philosopher could have predicted. The authors in this volume re-evaluate the merits of Nozick’s argument, and use it as a jumping–off point for the philosophical examination of subsequent developments in culture and technology, including a variety of experience-altering cybernetic technologies such as computer games, social media networks, HCI devices, and neuro-prostheses.


Right-to-Try

Posted in Business, Ethics, Law, Medicine/Health, Philosophy by Michael LaBossiere on August 7, 2017

There has been a surge of support for right-to-try bills and many states have passed these into law. Congress, eager to do something politically easy and popular, has also jumped on this bandwagon.

Briefly put, the right-to-try laws give terminally ill patients the right to try experimental treatments that have completed Phase 1 testing but have yet to be approved by the FDA. Phase 1 testing involves assessing the immediate toxicity of the treatment. This does not include testing its efficacy or its longer-term safety. Crudely put, passing Phase 1 just means that the treatment does not immediately kill or significantly harm patients.

On the face of it, the right-to-try is something that no sensible person would oppose. After all, the gist of this right is that people who have “nothing to lose” are given the right to try treatments that might help them. The bills that propose to codify the right into law make use of the rhetorical narrative that the right-to-try laws would give desperate patients the freedom to seek medical treatment that might save them and this would be done by getting the FDA and the state out of their way. This is a powerful rhetorical narrative since it appeals to compassion, freedom and a dislike of the government. As such, it is not surprising that few people dare argue against such proposals. However, the matter does deserve proper critical consideration.

One interesting way to look at the matter is to consider an alternative reality in which the narrative of these laws was spun with a different rhetorical charge—negative rather than positive. Imagine, for a moment, if the rhetorical engines had cranked out a tale of how the bills would strip away the protection of the desperate and dying to allow predatory companies to use them as guinea pigs for their untested treatments. If that narrative had been sold, people would be howling against such proposals rather than lovingly embracing them. Rhetorical narratives, be they positive or negative, are logically inert. As such, they are irrelevant to the merits of the right-to-try proposals. How people feel about the proposals is likewise logically irrelevant. What is wanted is a cool examination of the matter.

On the positive side, the right-to-try does offer people the chance to try treatments that might help them. It is, obviously enough, hard to argue that people do not have a right to take such risks when they are terminally ill. That said, there are still some points that need to be addressed.

One important point is that there is already a well-established mechanism in place to allow patients access to experimental treatments. The FDA already has a system of expanded access that apparently approves the overwhelming majority of requests. Somewhat ironically, when people argue for the right-to-try by using examples of people successfully treated by experimental methods, they are showing that the existing system already allows people access to such treatments. This raises the question of why the laws are needed and what they change.

The main change in such laws tends to be to reduce the role of the FDA in the process. Without such laws, requests to use such experimental methods typically have to go through the FDA (which seems to approve most requests). If the FDA were denying people treatment that might help them, then such laws would seem to be justified. However, the FDA does not seem to be the problem here—they generally do not roadblock the use of experimental methods for people who are terminally ill. This leads to the question of what factors are limiting patient access.

As would be expected, the main limiting factors are those that impact almost all treatment access: costs and availability. While the proposed bills grant the negative right to choose experimental methods, they do not grant the positive right to be provided with those methods. A negative right is a liberty—one is free to act upon it but is not provided with the means to do so. The means must be acquired by the person. A positive right is an entitlement—the person is free to act and is provided with the means of doing so. In general, the right-to-try proposals do little or nothing to ensure that such treatments are provided. For example, public money is not allocated to pay for such treatments. As such, the right-to-try is much like the right-to-healthcare for most people: you are free to get it provided you can get it yourself. Since the FDA generally does not roadblock access to experimental treatments, the bills and laws would seem to do little or nothing new to benefit patients. That said, the general idea of right-to-try seems reasonable—and is already practiced. While few are willing to bring them up in public discussions, there are some negative aspects to the right-to-try. I will turn to some of those now.

One obvious concern is that terminally ill patients do have something to lose. Experimental treatments could kill them significantly earlier than their terminal condition or they could cause suffering that makes their remaining time even worse. As such, it does make sense to have some limit on the freedom to try. After all, it is the job of the FDA and medical professionals to protect patients from such harms—even if the patients want to roll the dice.

This concern can be addressed by appealing to freedom of choice—provided that the patients are able to provide informed consent and have an honest assessment of the treatment. This does create something of a problem: since little is known about the treatment, the patient cannot be well informed about the risks and benefits. But, as I have argued in many other posts, I accept that people have a right to make such choices, even if these choices are self-damaging. I apply this principle consistently, so I accept that it grants the right-to-try, the right to same-sex marriage, the right to eat poorly, the right to use drugs, and so on.

The usual counters to such arguments from freedom involve arguments about how people must be protected from themselves, arguments that such freedoms are “just wrong” or arguments about how such freedoms harm others. The idea is that moral or practical considerations override the freedom of the individual. This is a reasonable counter and a strong case can be made against allowing people the right to engage in a freedom that could harm or kill them. However, my position on such freedoms requires me to accept that a person has the right-to-try, even if it is a bad idea. That said, others have an equally valid right to try to convince them otherwise and the FDA and medical professionals have an obligation to protect people, even from themselves.

 


What Can be Owned?

Posted in Business, Ethics, Law, Philosophy, Politics by Michael LaBossiere on August 4, 2017

One rather interesting philosophical question is that of what can, and perhaps more importantly cannot, be owned. There is, as one might imagine, considerable dispute over this matter. One major historical example of such a dispute is the debate over whether people can be owned. A more recent example is the debate over the ownership of genes. While each specific dispute needs to be addressed on its own merits, it is certainly worth considering the broader question of what can and what cannot be property.

Addressing this matter begins with the foundation of ownership—that is, what justifies the claim that one owns something, whatever that something might be. This is, of course, the philosophical problem of property. Many are not even aware there is such a philosophical problem—they uncritically accept the current system, though they might have some complaints about its particulars. But, to simply assume that the existing system of property is correct (or incorrect) is to beg the question. As such, the problem of property needs to be addressed without simply assuming it has been solved.

One practical solution to the problem of property is to contend that property is a matter of convention. This can be a formalized convention (such as laws), an informal convention (such as traditions), or a combination of both. One reasonable view is property legalism—that ownership is defined by the law. On this view, whatever the law defines as property is property. Another reasonable view is that of property relativism—that ownership is defined by cultural practices (which can include the laws). Roughly put, whatever the culture accepts as property is property. These approaches, obviously enough, correspond to the moral theories of legalism (that the law determines morality) and ethical relativism (that culture determines morality).

The conventionalist approach to property does seem to have the virtue of being practical and of avoiding mucking about in philosophical disputes. If there is a dispute about what (or who) can be owned, the matter is settled by the courts, by force of arms or by force of persuasion. There is no question of what view is right—winning makes the view right. While this approach does have its appeal, it is not without its problems.

Trying to solve the problem of property with the conventionalist approach does lead to a dilemma: the conventions are either based on some foundation or they are not. If the conventions are not based on a foundation other than force (of arms or persuasion), then they would seem to be utterly arbitrary. In such a case, the only reasons to accept such conventions would be practical—to avoid trouble with armed people (typically the police) or to gain in some manner.

If the conventions have some foundation, then the problem is determining what it (or they) might be. One easy and obvious approach is to argue that people have a moral obligation to obey the law or follow cultural conventions. While this would provide a basis for a moral obligation to accept the property conventions of a society, these conventions would still be arbitrary. Roughly put, those under the conventions would have a reason to accept whatever conventions were accepted, but no reason to accept one specific convention over another. This is analogous to the ethics of divine command theory, the view that what God commands is good because He commands it and what He forbids is evil because He forbids it. As should be expected, the “convention command” view of property suffers from problems analogous to those suffered by divine command theory, such as the arbitrariness of the commands and the lack of justification beyond obedience to authority.

One classic moral solution to the problem of property is that offered by utilitarianism. On this view, the practice of property that creates more positive value than negative value for the morally relevant beings would be the morally correct practice. It does make property a contingent matter—as the balance of positive against negative shifts, radically different conceptions of property can thus be justified. So, for example, while a capitalistic conception of property might be justified at a certain place and time, that might shift in favor of state ownership of the means of production. As always, utilitarianism leaves the door open for intuitively horrifying practices that manage to fulfill that condition. However, this approach also has an intuitive appeal in that the view of property that creates the greatest good would be the morally correct view of property.

One very interesting attempt to solve the problem of property is offered by John Locke. He begins with the view that God created everyone and gave everyone the earth in common. While God does own us, He is cool about it and effectively lets each person own themselves. As such, I own myself and you own yourself. From this, as Locke sees it, it follows that each of us owns our labor.

For Locke, property is created by mixing one’s labor with the common goods of the earth. To illustrate, suppose we are washed up on an island owned by no one. If I collect wood and make a shelter, I have mixed my labor with the wood that can be used by any of us, thus making the shelter my own. If you make a shelter with your labor, it is thus yours. On Locke’s view, it would be theft for me to take your shelter and theft for you to take mine.

As would be imagined, the labor theory of ownership quickly runs into problems, such as working out a proper account of mixing of labor and what to do when people are born on a planet on which everything is already claimed and owned. However, the idea that the foundation of property is that each person owns themselves is an intriguing one and does have some interesting implications about what can (and cannot) be owned. One implication would seem to be that people are owners and cannot be owned. For Locke, this would be because each person is owned by themselves and ownership of other things is conferred by mixing one’s labor with what is common to all.

It could be contended that people create other people by their labor (literally in the case of the mother) and thus parents own their children. A counter to this is that although people do engage in sexual activity that results in the production of other people, this should not be considered labor in the sense required for ownership. After all, the parents just have sex and then the biological processes do all the work of constructing the new person. One might also play the metaphysical card and contend that what makes the person a person is not manufactured by the parents, but is something metaphysical like the soul or consciousness (for Locke, a person is their consciousness and the consciousness is within a soul).

Even if it is accepted that parents do not own their children, there is the obvious question about manufactured beings that are like people such as intelligent robots or biological constructs. These beings would be created by mixing labor with other property (or unowned materials) and thus would seem to be things that could be owned. Unless, of course, they are owners.

One approach is to consider them analogous to children—it is not how children are made that makes them unsuitable for ownership, it is what they are. On this view, people-like constructs would be owners rather than things to be owned. The intuitive counter is that people-like manufactured beings would be property like anything else that is manufactured. The challenge is, of course, to show that this would not entail that children are property—after all, considerable resources and work can be expended to create a child (such as IVF, surrogacy, and perhaps someday artificial wombs), yet intuitively they would not be property. This does point to a rather important question: is it what something is that makes it unsuitable to be owned or how it is created?

 


Weight Loss, Philosophy & Science

Posted in Philosophy, Reasoning/Logic, Running, Science, Sports/Athletics by Michael LaBossiere on August 2, 2017

When I was young and running 90-100 miles a week, I could eat all the things without gaining weight. Time is doubly cruel in that it slowed my metabolism and reduced my ability to endure high mileage. Inundated with the usual abundance of high calorie foods, I found I was building an unsightly pudge band around my middle. My first reaction was to try to get back to my old mileage, but I found that I now top out at 70 miles a week and anything more starts breaking me down. Since I could not exercise more, I was faced with the terrible option of eating less. Being something of an expert on critical thinking, I dismissed all the fad diets and turned to science to glean the best way to beat the bulge. Being a philosopher, I naturally misapplied the philosophy of science to this problem with some interesting results.

Before getting into the discussion, I am morally obligated to point out that I am not a medical professional. As such, what follows should be regarded with due criticism and you should consult a properly credentialed expert before embarking on changes to your exercise or nutrition practices. Or you might die. Probably not; but maybe.

As any philosopher will tell you, while the math used in science is deductive (the premises are supposed to guarantee the conclusion with certainty) scientific reasoning is inductive (the premises provide some degree of support for the conclusion that is less than complete). Because of this, science suffers from the problem of induction. In practical terms, this means that no matter how carefully the reasoning is conducted and no matter how good the evidence is, the conclusion drawn from the evidence can still be false. The basis for this problem is the fact that inductive reasoning involves a “leap” from the evidence/premises (what has been observed) to the conclusion (what has not been observed). Put bluntly, inductive reasoning can always lead to a false conclusion.

Scientists and philosophers have long endeavored to make science a deductive matter. For example, Descartes believed that he could find truths that he could know with certainty and then use valid deductive reasoning to generate a true conclusion with absolute certainty. Unfortunately, this science of certainty is the science of the future and always will be. So, we are stuck with induction.

The problem of induction obviously applies to the sciences that study nutrition, exercise and weight loss and, as such, the conclusions made in these sciences can always be wrong. This helps explain why the recommendations about these matters change relentlessly.

While there are philosophers of science who would disagree, science is mostly a matter of trying to figure things out by doing the best that can be done at the time. This is limited by the resources (such as technology) available at the time and by human epistemic capabilities. As such, whatever science is presenting at the moment is almost certainly at least partially wrong; but the wrongs get reduced over time. Or increase sometimes. This is true of all the sciences—consider, for example, the changes in physics since Thales began it. This also helps explain why the recommendations about diet and exercise change constantly.

While science is sometimes presented as a field of pure reason outside of social influences, science is obviously a social activity conducted by humans. Because of this, science is influenced by the usual social factors and human flaws. For example, scientists need money to fund their research and can thus be vulnerable to corporations looking to “prove” various claims that are in their interest. As another example, scientific matters can become issues of political controversy, such as evolution and climate change. This politicization tends to derange science. As a final example, scientists can be motivated by pride and ambition to fudge or fake results. Because of these factors, the sciences dealing with nutrition and exercise are significantly corrupted and this makes it difficult to make a rational judgment about which claims are true. One excellent example is how the sugar industry paid scientists at Harvard to downplay the health risks presented by sugar and play up those presented by fat. Another illustration is the fact that the food pyramid endorsed by the US government has been shaped by the food industries rather than being based entirely on good science.

Given these problems it might be tempting to abandon mainstream science and go with whatever fad or food ideology one finds appealing. That would be a bad idea. While science suffers from these problems, mainstream science is vastly better than the nonscientific alternatives—they tend to have all of the problems of science without having its strengths. So, what should one do? The rational approach is to accept the majority opinion of the qualified and credible experts. One should also keep in mind the above problems and approach the science with due skepticism.

So, what are some of the things the best science of today say about weight loss? First, humans evolved as hunter-gatherers and getting enough calories was a challenge. As such, humans tend to be very good at storing energy in the form of fat which is one reason the calorie rich environment of modern society contributes to obesity. Crudely put, it is in our nature to overeat—because that once meant the difference between life and death.

Second, while exercise does burn calories, it burns far less than many imagine. For most people, the majority of calorie burning is a result of the body staying alive. As an example, I burn about 4,000 calories on my major workout days (estimated based on my Fitbit and activity calculations). But, about 2,500 of those calories are burned just staying alive. On those days I work out about four hours and I am fairly active the rest of the day. As such, while exercising more will help a person lose weight, the calorie impact of exercise is surprisingly low—unless you are willing to commit considerable time to exercise. That said, you should exercise—in addition to burning calories it has a wide range of health benefits.
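As a rough back-of-the-envelope sketch in Python (using the approximate figures from this post; the 4,000 and 2,500 calorie numbers are the author's estimates, not measured values), the arithmetic looks like this:

```python
# Approximate figures from the post; treat them as illustrative estimates only.
total_burn_workout_day = 4000   # calories burned on a major workout day
basal_burn = 2500               # calories burned just by staying alive

activity_burn = total_burn_workout_day - basal_burn        # roughly 1,500 calories
activity_share = activity_burn / total_burn_workout_day    # fraction due to activity

print(f"Calories attributable to activity: {activity_burn}")
print(f"Share of the day's burn from activity: {activity_share:.0%}")
# Even with about four hours of exercise, activity accounts for well under
# half of the day's total burn; on a rest day the share would be far smaller.
```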

Third, hunger is a function of the brain and the brain responds differently to different foods. Foods high in protein and fiber create a feeling of fullness that tends to turn off the hunger signal. Foods with a high glycemic index (like cake) tend to stimulate the brain to cause people to consume more calories. As such, manipulating your brain is an effective way to increase the chance of losing weight. Interestingly, as Aristotle argued, habituation to foods can train the brain to prefer foods that are healthier—that is, you can train yourself to prefer things like nuts, broccoli and oatmeal over cookies, cake, and soda. This takes time and effort, but can obviously be done.

Fourth, weight loss has diminishing returns: as one loses weight, one’s metabolism slows and less energy is needed. As such, losing weight makes it harder to lose weight, which is something to keep in mind.  Naturally, all of these claims could be disproven in the next round of scientific investigation—but they seem quite reasonable now.

 
