A Philosopher's Blog

Work & Vacation

Posted in Business, Law, Philosophy, Uncategorized by Michael LaBossiere on August 11, 2017

Most Americans do not use their vacation days, despite the fact that they tend to get fewer of them than their European counterparts. A variety of plausible reasons have been advanced for this, most of which reveal interesting facts about working in the United States.

As would be expected, fear is a major factor. Even when a worker is guaranteed paid vacation time as part of their compensation, many workers are afraid that using this vacation time will harm them. One worry is that by using this time, they will show that they are not needed or are inferior to workers who do not take as much (or any) time and hence will be passed over for advancement or even fired. On this view, vacation days are a trap—while they are offered and the worker has earned them, to use them all would sabotage or end the person’s employment. This is not to say that all or even many employers intentionally set a vacation day trap—in fact, many employers seem to have to make a special effort to get their employees to use their vacation days. However, this fear is real and does indicate a problem with working in America.

Another fear that keeps workers from using all their days is the fear that they will fall behind in their work, thus requiring them to work extra hard before or after their vacation. On this view, there is little point in taking a vacation if one will just need to do the missed work and do it in less time than if one simply stayed at work. The practical challenge here is working out ways for employees to vacation without getting behind (or thinking they will get behind). After all, if an employee is needed at a business, then their absence will mean that things that need to get done will not get done. This can be addressed in various ways, such as sharing workloads or hiring temporary workers. However, an employee can then be afraid that the business will simply fire them in favor of permanently sharing the workload or replacing them with a series of lower-paid temporary workers.

Interestingly enough, workers often decline to use all their vacation days because of pride. The idea is that by not using their vacation time, a person can create the impression that they are too busy and too important to take time off from work. In this case, the worker is not afraid of being fired; rather, they are worried that they will lose status and damage their reputation. This is not to say that being busy is always a status symbol—there is, of course, also status attached to being so well off that one can be idle. This fits nicely into Hobbes’ view of human motivation: everything we do, we do for gain or glory. As such, if not taking vacation time increases one’s glory (status and reputation), then people will do that.

On the one hand, people who do work hard (and effectively) do deserve a positive reputation for these efforts and earn a relevant status. On the other hand, the idea that reputation and status are dependent on not using all one’s vacation time can clearly be damaging to a person. Humans do, after all, need to relax and recover. This view also, one might argue, puts too much value on the work aspect of a person’s life at the expense of their full humanity. Then again, for the working class in America, to be is to work (for the greater enrichment of the rich).

Workers who do not get paid vacations tend to not use all (or any) of their vacation days for the obvious reason that their vacations are unpaid. Since a vacation tends to cost money, workers without paid vacations can take a double hit if they take a vacation: they are getting no income while spending money. Since people do need time off from work, there have been some attempts to require that workers get paid vacation time. As would be imagined, this proposal tends to be resisted by businesses. In part it is because they do not like being told what they must do and in part it is because of concerns over costs. While moral arguments about how people should be treated tend to fail, there is some hope that practical arguments about improved productivity and other benefits could succeed. However, as workers have less and less power in the United States (in part because workers have been deluded into embracing ideologies and policies contrary to their own interests), it seems less and less likely that paid vacation time will increase or be offered to more workers.

Some workers also do not use all their vacation days for vacation because they need to use them for other purposes, such as sick days. It is not uncommon for working mothers to save their vacation days to use when they need to take care of the kids. It is also not uncommon for workers to use their vacation days as sick days, for when they need to be at home for a service visit, when they need to go to the doctor, or for other similar things. If it is believed that vacation time is something that people need, then forcing workers to use up their vacation time for such things would seem to be wrong. The obvious solution, which is used by some businesses, is to offer such things as personal days, sick leave, and parental leave. While elite employers offer elite employees such benefits, they tend to be less available to workers of lower social and economic classes. So, for example, Sheryl Sandberg gets excellent benefits, while the typical worker does not. This is, of course, a matter of values and not just economic ones. That is, while there is the matter of the bottom line, there is also the question of how people should be treated. Unfortunately, the rigid and punitive class system in the United States ensures that the well-off are treated well, while the little people face a much different sort of life.

 


Right-to-Try

Posted in Business, Ethics, Law, Medicine/Health, Philosophy by Michael LaBossiere on August 7, 2017

There has been a surge of support for right-to-try bills and many states have passed these into law. Congress, eager to do something politically easy and popular, has also jumped on this bandwagon.

Briefly put, the right-to-try laws give terminally ill patients the right to try experimental treatments that have completed Phase 1 testing but have yet to be approved by the FDA. Phase 1 testing involves assessing the immediate toxicity of the treatment. This does not include testing its efficacy or its longer-term safety. Crudely put, passing Phase 1 just means that the treatment does not immediately kill or significantly harm patients.

On the face of it, the right-to-try is something that no sensible person would oppose. After all, the gist of this right is that people who have “nothing to lose” are given the right to try treatments that might help them. The bills that propose to codify the right into law make use of the rhetorical narrative that the right-to-try laws would give desperate patients the freedom to seek medical treatment that might save them and this would be done by getting the FDA and the state out of their way. This is a powerful rhetorical narrative since it appeals to compassion, freedom and a dislike of the government. As such, it is not surprising that few people dare argue against such proposals. However, the matter does deserve proper critical consideration.

One interesting way to look at the matter is to consider an alternative reality in which the narrative of these laws was spun with a different rhetorical charge—negative rather than positive. Imagine, for a moment, if the rhetorical engines had cranked out a tale of how the bills would strip away the protection of the desperate and dying to allow predatory companies to use them as guinea pigs for their untested treatments. If that narrative had been sold, people would be howling against such proposals rather than lovingly embracing them. Rhetorical narratives, be they positive or negative, are logically inert. As such, they are irrelevant to the merits of the right-to-try proposals. How people feel about the proposals is likewise logically irrelevant. What is wanted is a cool examination of the matter.

On the positive side, the right-to-try does offer people the chance to try treatments that might help them. It is, obviously enough, hard to argue that people do not have a right to take such risks when they are terminally ill. That said, there are still some points that need to be addressed.

One important point is that there is already a well-established mechanism in place to allow patients access to experimental treatments. The FDA already has a system of expanded access that apparently approves the overwhelming majority of requests. Somewhat ironically, when people argue for the right-to-try by using examples of people successfully treated by experimental methods, they are showing that the existing system already allows people access to such treatments. This raises the question of why the laws are needed and what they would change.

The main change in such laws tends to be to reduce the role of the FDA in the process. Without such laws, requests to use such experimental methods typically have to go through the FDA (which seems to approve most requests).  If the FDA was denying people treatment that might help them, then such laws would seem to be justified. However, the FDA does not seem to be the problem here—they generally do not roadblock the use of experimental methods for people who are terminally ill. This leads to the question of what factors are limiting patient access.

As would be expected, the main limiting factors are those that impact almost all treatment access: costs and availability. While the proposed bills grant the negative right to choose experimental methods, they do not grant the positive right to be provided with those methods. A negative right is a liberty—one is free to act upon it but is not provided with the means to do so. The means must be acquired by the person. A positive right is an entitlement—the person is free to act and is provided with the means of doing so. In general, the right-to-try proposals do little or nothing to ensure that such treatments are provided. For example, public money is not allocated to pay for such treatments. As such, the right-to-try is much like the right-to-healthcare for most people: you are free to get it provided you can get it yourself. Since the FDA generally does not roadblock access to experimental treatments, the bills and laws would seem to do little or nothing new to benefit patients. That said, the general idea of right-to-try seems reasonable—and is already practiced. While few are willing to bring them up in public discussions, there are some negative aspects to the right-to-try. I will turn to some of those now.

One obvious concern is that terminally ill patients do have something to lose. Experimental treatments could kill them significantly earlier than their terminal condition would, or they could cause suffering that makes their remaining time even worse. As such, it does make sense to have some limit on the freedom to try. After all, it is the job of the FDA and medical professionals to protect patients from such harms—even if the patients want to roll the dice.

This concern can be addressed by appealing to freedom of choice—provided that the patients are able to give informed consent and have an honest assessment of the treatment. This does create something of a problem: since little is known about the treatment, the patient cannot be well informed about the risks and benefits. But, as I have argued in many other posts, I accept that people have a right to make such choices, even if these choices are self-damaging. I apply this principle consistently, so I accept that it grants the right-to-try, the right to same-sex marriage, the right to eat poorly, the right to use drugs, and so on.

The usual counters to such arguments from freedom involve arguments about how people must be protected from themselves, arguments that such freedoms are “just wrong” or arguments about how such freedoms harm others. The idea is that moral or practical considerations override the freedom of the individual. This is a reasonable counter and a strong case can be made against allowing people the right to engage in a freedom that could harm or kill them. However, my position on such freedoms requires me to accept that a person has the right-to-try, even if it is a bad idea. That said, others have an equally valid right to try to convince them otherwise and the FDA and medical professionals have an obligation to protect people, even from themselves.

 


What Can be Owned?

Posted in Business, Ethics, Law, Philosophy, Politics by Michael LaBossiere on August 4, 2017

One rather interesting philosophical question is that of what can, and perhaps more importantly cannot, be owned. There is, as one might imagine, considerable dispute over this matter. One major historical example of such a dispute is the debate over whether people can be owned. A more recent example is the debate over the ownership of genes. While each specific dispute needs to be addressed on its own merits, it is certainly worth considering the broader question of what can and what cannot be property.

Addressing this matter begins with the foundation of ownership—that is, what justifies the claim that one owns something, whatever that something might be. This is, of course, the philosophical problem of property. Many are not even aware there is such a philosophical problem—they uncritically accept the current system, though they might have some complaints about its particulars. But, to simply assume that the existing system of property is correct (or incorrect) is to beg the question. As such, the problem of property needs to be addressed without simply assuming it has been solved.

One practical solution to the problem of property is to contend that property is a matter of convention. This can be formalized convention (such as laws) or informal convention (such as traditions) or a combination of both. One reasonable view is property legalism—that ownership is defined by the law. On this view, whatever the law defines as property is property. Another reasonable view is that of property relativism—that ownership is defined by the cultural practices (which can include the laws). Roughly put, whatever the culture accepts as property is property. These approaches, obviously enough, correspond to the moral theories of legalism (that the law determines morality) and ethical relativism (that culture determines morality).

The conventionalist approach to property does seem to have the virtue of being practical and of avoiding mucking about in philosophical disputes. If there is a dispute about what (or who) can be owned, the matter is settled by the courts, by force of arms or by force of persuasion. There is no question of what view is right—winning makes the view right. While this approach does have its appeal, it is not without its problems.

Trying to solve the problem of property with the conventionalist approach does lead to a dilemma: the conventions are either based on some foundation or they are not. If the conventions are not based on a foundation other than force (of arms or persuasion), then they would seem to be utterly arbitrary. In such a case, the only reasons to accept such conventions would be practical—to avoid trouble with armed people (typically the police) or to gain in some manner.

If the conventions have some foundation, then the problem is determining what it (or they) might be. One easy and obvious approach is to argue that people have a moral obligation to obey the law or follow cultural conventions. While this would provide a basis for a moral obligation to accept the property conventions of a society, these conventions would still be arbitrary. Roughly put, those under the conventions would have a reason to accept whatever conventions were accepted, but no reason to accept one specific convention over another. This is analogous to the ethics of divine command theory, the view that what God commands is good because He commands it and what He forbids is evil because He forbids it. As should be expected, the “convention command” view of property suffers from problems analogous to those suffered by divine command theory, such as the arbitrariness of the commands and the lack of justification beyond obedience to authority.

One classic moral solution to the problem of property is that offered by utilitarianism. On this view, the practice of property that creates more positive value than negative value for the morally relevant beings would be the morally correct practice. It does make property a contingent matter—as the balance of positive against negative shifts, radically different conceptions of property can thus be justified. So, for example, while a capitalistic conception of property might be justified at a certain place and time, that might shift in favor of state ownership of the means of production. As always, utilitarianism leaves the door open for intuitively horrifying practices that manage to fulfill that condition. However, this approach also has an intuitive appeal in that the view of property that creates the greatest good would be the morally correct view of property.

One very interesting attempt to solve the problem of property is offered by John Locke. He begins with the view that God created everyone and gave everyone the earth in common. While God does own us, He is cool about it and effectively lets each person own themselves. As such, I own myself and you own yourself. From this, as Locke sees it, it follows that each of us owns our labor.

For Locke, property is created by mixing one’s labor with the common goods of the earth. To illustrate, suppose we are washed up on an island owned by no one. If I collect wood and make a shelter, I have mixed my labor with the wood that can be used by any of us, thus making the shelter my own. If you make a shelter with your labor, it is thus yours. On Locke’s view, it would be theft for me to take your shelter and theft for you to take mine.

As would be imagined, the labor theory of ownership quickly runs into problems, such as working out a proper account of mixing of labor and what to do when people are born on a planet on which everything is already claimed and owned. However, the idea that the foundation of property is that each person owns themselves is an intriguing one and does have some interesting implications about what can (and cannot) be owned. One implication would seem to be that people are owners and cannot be owned. For Locke, this would be because each person is owned by themselves and ownership of other things is conferred by mixing one’s labor with what is common to all.

It could be contended that people create other people by their labor (literally in the case of the mother) and thus parents own their children. A counter to this is that although people do engage in sexual activity that results in the production of other people, this should not be considered labor in the sense required for ownership. After all, the parents just have sex and then the biological processes do all the work of constructing the new person. One might also play the metaphysical card and contend that what makes the person a person is not manufactured by the parents, but is something metaphysical like the soul or consciousness (for Locke, a person is their consciousness and the consciousness is within a soul).

Even if it is accepted that parents do not own their children, there is the obvious question about manufactured beings that are like people such as intelligent robots or biological constructs. These beings would be created by mixing labor with other property (or unowned materials) and thus would seem to be things that could be owned. Unless, of course, they are owners.

One approach is to consider them analogous to children—it is not how children are made that makes them unsuitable for ownership, it is what they are. On this view, people-like constructs would be owners rather than things to be owned. The intuitive counter is that people-like manufactured beings would be property like anything else that is manufactured. The challenge is, of course, to show that this would not entail that children are property—after all, considerable resources and work can be expended to create a child (such as IVF, surrogacy, and perhaps someday artificial wombs), yet intuitively they would not be property. This does point to a rather important question: is it what something is that makes it unsuitable to be owned or how it is created?

 


Trump & Mercenaries: Arguments Against

Posted in Business, Law, Philosophy by Michael LaBossiere on July 28, 2017

While there are some appealing arguments in favor of the United States employing mercenaries, there are also arguments against this position. One obvious set of arguments is composed of those that focus on the practical problems of employing mercenaries. These problems include broad concerns about the competence of the mercenaries (such as worries about their combat effectiveness and discipline) as well as worries about the quality of their equipment. These concerns can, of course, be addressed on a case-by-case basis. Some mercenary operations are composed of well-trained, well-equipped ex-soldiers who are every bit as capable as professional soldiers serving their countries. If competent and properly equipped mercenaries are hired, there will obviously not be problems in these areas.

There are also obvious practical concerns about the loyalty and reliability of mercenaries—they are, after all, fighting for money rather than from duty or commitment to principles. This is not to disparage mercenaries. After all, working for money is what professionals do, whether they are mercenary soldiers, surgeons, electricians or professors. A surgeon who is motivated by money need not be less reliable than a colleague who is driven by a moral commitment to heal the sick and injured. Likewise, a soldier who fights for a paycheck need not be less dependable than a patriotic soldier.

That said, a person who is motivated primarily by money will act in accord with that value and this can make them considerably less loyal and reliable than someone motivated by higher principles. This is not to say that a mercenary cannot have higher principles, but a mercenary, by definition, sells their loyalty (such as it is) to the highest bidder. As such, this is a reasonable concern.

This concern can be addressed by paying mercenaries well enough to defend against bribery and by assigning tasks to mercenaries that require loyalty and reliability proportional to what the mercenaries can realistically offer. This, of course, can severely limit how mercenaries can be deployed and could make hiring them pointless—unless a nation has an abundance of money and a shortage of troops.

A concern that is both practical and moral is that mercenaries tend to operate outside of the usual chain of command of the military and are often exempt from many of the laws and rules that govern the operation of national forces. In many cases, mercenaries are intentionally granted special exemptions. An excellent illustration of how this can be disastrous is Blackwater, which was a major security contractor operating mercenary forces in Iraq.

In September of 2007 employees of Blackwater were involved in an incident resulting in 11 deaths. This was not the first such incident. Although many believe Blackwater acted incorrectly, the company was well protected against accountability because of the legal situation created by the United States. In 2004 the Coalition Provisional Authority administrator signed an order making all Americans in Iraq immune to Iraqi law. Security contractors enjoyed even greater protection. The Military Extraterritorial Jurisdiction Act of 2000, which allows charges to be brought in American courts for crimes committed in foreign countries, applies only to those contracting with the Department of Defense. Companies employed by the State Department, as was the case with Blackwater, are not covered by the law. Blackwater went even further and claimed exemption from all lawsuits and criminal prosecution. This defense was also used against a suit brought by the families of four Blackwater employees killed in Iraq.

While there are advantages to granting mercenary forces exemptions from the law, Machiavelli warned against this because they might start “oppressing others quite contrary to your intentions.” His solution was to “keep him within the laws so that he does not overstep the mark.” This is excellent advice that should have been heeded. Instead, employing and placing such mercenaries beyond the law has led to serious problems.

The concern about mercenaries being exempt from the usual laws can be addressed simply enough: these exemptions can either be removed or not granted in the first place. While this will not guarantee good behavior, it can help encourage it.

The concern about mercenaries being outside the usual command structure can be harder to address. On the one hand, mercenary forces could simply be placed within the chain of command like any other unit. On the other hand, mercenary units are, by their very nature, outside of the usual command and organization structure and integrating them could prove problematic. Also, if the mercenaries are simply integrated as if they are normal units, then the obvious question arises as to why mercenaries would be needed in place of regular forces.

Yet another practical concern is that the employment of mercenaries can create public relations problems. While sending regular troops to foreign lands is always problematic, the use of mercenary forces can be more problematic. One reason is that the hiring of mercenaries is often looked down upon, in part because of the checkered history of mercenary forces. There is also the concern of how the local populations will perceive hired guns—especially given the above concerns about mercenaries operating outside of the boundaries that restrict regular forces. Finally, there is also the concern that the hiring of mercenaries can make the hiring country seem weak—the need to hire mercenaries would seem to suggest that the country has a shortage of competent regular forces.

A somewhat abstract argument against the United States employing mercenaries is based on the notion that nation states are supposed to be the sole operators of military forces. This, of course, assumes a specific view of the state and the moral right to operate military forces. If this conception of the state is correct, then hiring mercenaries would be to cede this responsibility (and right) to private companies, which would be unacceptable. The United States does allow private armies to exist within the country, if they have the proper connections to those in power. Blackwater, for example, was one such company. This seems to be problematic.

This concern can be countered with an alternative view of the state in which private armies are acceptable. In the case of private armies within a country, it could be argued that they are acceptable as long as they acknowledge the supremacy of the state. So, for example, an American mercenary company would be acceptable as long as it operated under conditions set by the United States government and served only in approved ways. To use an obvious analogy, there are “rent-a-cops” that operate somewhat like police. These are acceptable provided that they operate under the rules of the state and do not create a challenge to the police powers of the state.

While this counter is appealing, there do not seem to be any compelling reasons for the United States to cede its monopoly on military force and hire mercenaries. Other than to profit the executives and shareholders of these mercenary companies, of course.


Trump & Mercenaries: Arguments For

Posted in Business, Ethics, Philosophy, Uncategorized by Michael LaBossiere on July 24, 2017

The Trump regime seems to be seriously considering outsourcing the war in Afghanistan to mercenaries.  The use of mercenaries, or contractors (as they might prefer to be called), is a time-honored practice. While the United States leads the world in military spending and has a fine military, it is no stranger to employing mercenaries. For example, the security contractor Blackwater became rather infamous for its actions in Iraq.

While many might regard the employment of mercenaries as repugnant, the proposal to outsource military operations to corporations should not be dismissed out of hand. Arguments for and against it should be given their due consideration. Mere prejudices against mercenaries should not be taken as arguments, nor should the worst deeds committed by some mercenaries be taken as damning them all.

As with almost every attempt at privatizing a state function, one of the stock arguments is based on the claim that privatization will save money. In some cases, this is an excellent argument. For example, it is cheaper for state employees to fly on commercial airlines than for a state to maintain a fleet of planes to send employees around on state business. In other cases, this argument falls apart. The stock problem is that a for-profit company must make a profit and this means it must have that profit margin over and above what it costs to provide the product or service. So, for a mercenary company to make money, it would need to pay all the costs that government forces would incur for the same operation and would need to charge extra to make a profit. As such, using mercenaries would not seem to be a money-saver.

It could be countered that mercenaries can have significantly lower operating costs than normal troops. There are various ways that costs could be cut relative to the costs of operating the government military forces: mercenaries could have cheaper or less equipment, they could be paid less, they could be provided less (or no) benefits, and mercenaries could engage in looting to offset their costs (and pass the savings on to their employer).

The cost cutting approach does raise some concerns about the ability of the mercenaries to conduct operations effectively: underpaid and underequipped troops would tend to do worse than better paid and better equipped troops. There are also obvious moral concerns about letting mercenaries loot.

However, there are savings that could prove quite significant: while the United States Department of Veterans Affairs has faced considerable criticism, veterans can get considerable benefits. For example, there is the GI Bill. Assuming mercenaries did not get such benefits, this would result in meaningful cost savings. In sum, if a mercenary company operated using common business practices of cost-cutting, then it could certainly run operations more cheaply than the state. But, of course, if saving money is the prime concern, the state could engage in the same practices and save even more money by not providing a private contractor with the money needed to make a profit. Naturally, there might be good reasons why the state could not engage in these money-saving practices. In that case, the savings offered by mercenaries could justify their employment.

A second argument in favor of using mercenaries is based on the fact that those doing the killing and dying will not be government forces. While the death of a mercenary is as much the death of a person as the death of a government soldier, the mercenary’s death would tend to have far less impact on political opinion back home. The death of an American soldier in combat is meaningful to Americans in the way that the death of a mercenary would not.

While the state employing mercenaries is accountable for what they do, there is a distance between the misdeeds of mercenaries and the state that does not exist between the misdeeds of regular troops and the state. In practical terms, there is less accountability. It is, after all, much easier to disavow and throw mercenaries under the tank than it is to do the same with government troops.

This is not to say mercenaries provide a “get out of trouble” card to their employer—as the incidents in Iraq involving Blackwater showed, employers still get caught in the fallout from the actions of the mercenaries they hire. However, having such a force can be useful, especially when one wants to do things that would get regular troops into considerable trouble.

A final argument in favor of mercenaries is from the standpoint of the owners of mercenary companies. Most forms of privatization are a means of funneling public money into the pockets of executives and shareholders. Privatizing operations in Afghanistan could be incredibly profitable (or, rather, even more profitable) for contractors.

While receiving a tide of public money would be good for the companies, the profit argument runs directly up against the first argument for using mercenaries—that doing so would save money. This sort of “double vision” is common in privatization: those who want to make massive profits make the ironic argument that privatization is a good idea because it will save money.


The Hands that Serve

Posted in Business, Philosophy, Universities & Colleges by Michael LaBossiere on July 21, 2017

My grandparents made shoes, but I was guided on a path towards college that ultimately ended up with me being a philosophy professor—an abstract profession that is, perhaps, as far from shoe making as one can get. While most are not destined to become philosophers, the push towards college education persists to this day. In contrast, skilled trades and manual labor are typically looked down upon—even though a skilled trade can be very financially rewarding.

Looking down on skilled trades might seem unusual for the United States, a country that arose out of skilled trades and one that still purports to value an honest day’s work for an honest day’s pay. However, as noted above, there has been a shift away from valuing skilled trades in favor of college education and the associated jobs. Oddly, skilled trades are even considered by some to be, if not exactly shameful, nothing to be proud of. Instead, the respected professions typically require a college degree. Although, since inconsistency is the way of humanity, financial success without a degree is often lauded.

At this point one must be careful not to take the obsession with college degrees and associated jobs as a sign that Americans value intellectualism. While there are cultural icons such as Einstein, the United States has a strong anti-intellectual streak. Some of this is fueled by religion, some by the remnants of blue-collar practicality, and some by the knowledge of the elites that intellectuals can be a danger to the established order. What is at play here could be called “educationalism” to contrast it with “intellectualism.” In neutral terms, this can be taken as valuing education for its financial payoff in the workplace. In more negative terms, it can be taken as a prejudice or bias in favor of those with formal education. Because of the success of this sort of educationalism, people are encouraged to get an education primarily based on the financial returns to themselves and those who will exploit their labors. And part of the motivation is to avoid the stigma of not being in a profession that requires a degree.

While education can be valuable, this sort of educationalism is not without its negative consequences. As many have noted, one result has been an increase in those seeking college degrees. Since college degrees are now often absurdly expensive (thanks, in large part, to the adoption of the business model of exorbitant administrative salaries), this has resulted in a significant surge in college debt. There are also the predatory approaches of the for-profit colleges, which exist primarily to funnel public money to the executives and shareholders.

Another impact of this form of educationalism is that professions that do not require college degrees are cast as inferior to those that do require degrees. In some cases, this characterization is correct: for example, assembling burgers for a fast food chain is certainly inferior to nearly all jobs that require a college degree. However, this contempt for non-degree jobs often extends to skilled trades, such as those of electrician, plumber and carpenter.

In some cases, the looking down is based on the perception that skilled trades pay less than degree trades. While this can be the case, skilled trades can pay very well indeed—you can check this yourself by calling a plumber or electrician and inquiring how much they will charge for various tasks.

In other cases, people look down on the skilled trades because they think that, because these trades do not require a college degree, those who practice them must be less intelligent or less capable. That is, a common assumption is that people go into these trades because they lack the ability to navigate the rigors of a philosophy, art history, or communications degree. Crudely put, the prejudice is that smart people get degrees, while stupid people work in skilled trades or manual labor.

While completing college does require some minimal level of ability, as a professor with decades of experience I can attest to the fact that this ability can be very minimal indeed. Put crudely, stupid people can and do graduate with degrees—and some go on to considerable success. My point here is not, however, to say that college graduates can be just as stupid as those in the skilled trades. Rather, my point is that a college degree is not a reliable indicator of greater ability or intelligence.

Switching to a more positive approach, skilled trades can be just as challenging as professions that require college degrees. While the skilled trades obviously place more emphasis on manual work, such as wiring houses or rebuilding engines, this does not entail that they require less intelligence or ability.

I am in a somewhat uncommon position of holding a doctorate while also having some meaningful experience with various skilled trades. Part of this is because my background is such that to be a man required having a skill set that includes the basics of a variety of trades. To illustrate, I was expected to know how to build a camp, rewire outlets, service firearms, repair simple engines, and not die in the wilds. I used some of these skills to make money to pay for school and still use them today to save money. And not die. While I am obviously not a skilled professional, I have a reasonably good grasp of the skills and abilities needed to work in many skilled professions, and I understand they typically require intelligence, critical thinking and creative thinking. Based on my own experience, I can say that addressing a technical problem with wiring or an engine can be just as mentally challenging as addressing a philosophical conundrum about the ethics of driverless cars. As such, it is mere prejudice to look down upon people in the skilled professions. Interestingly, some who would be horrified at being accused of the prejudices of racism or sexism routinely look down their noses at those in skilled professions.

Since I will occasionally do repairs or projects for people, I do get a chance to see the prejudice—I sometimes feel that I am operating “undercover” in such situations. This is analogous to how I feel when, as a white person who teaches at an HBCU, I hear people expressing racist views because they think I am “one of them” because I am white. For example, on one occasion I was changing the locks for a grad school friend of mine who did not know a screwdriver from an instantiated universal. While I was doing this, some of her other friends stopped by. Not knowing who I was, they simply walked past, perhaps assuming I was some sort of peasant laborer. I overheard one of them whispering how glad he was to be in grad school, so he would not have to do such mundane and mindless work. Another whispered, with an odd pride, that she would have no idea how to do such work—presumably because her brain was far too advanced to guide her hands in the operation of a screwdriver. This odd combination is not uncommon: people often hold to the view that skilled labor is beneath them while also believing that they simply cannot do such work. As in the incident just mentioned, it seems common for people to rationalize their lack of ability by telling themselves they are too smart to waste their precious brain space on such abilities. Presumably if one learns to replace a light switch, one must lose the ability to grasp the fundamentals of deconstruction.

When my friend realized what was going on, she hastened to introduce me as a grad student and everyone apologized because they first thought I was “just some maintenance worker” and not “one of them.” Needless to say, their attitude towards me changed dramatically, as did their behavior. As one might suspect, these were the same sort of people who would rail against the patriarchy and racism for their cruel prejudices and biases. And yet they fully embraced the biases of “educationalism” and held me in contempt until they learned I was as educated as they.

I must admit that I also have prejudices and biases. When an adult cannot do basic tasks like replacing a fill valve in a toilet or replacing a simple door lock, I do judge them. However, I try not to do this—after all, not everyone has a background in which they could learn such basic skills. But, of course, I expect people to reciprocate: in return they need to not be prejudiced against people who pursue skilled trades instead of college degrees. And, of course, since a person cannot learn everything, everyone has massive gaps and voids in their skill sets.

While those who pursue careers in which they create ever more elaborate financial instruments to ruin the economy are rewarded with great wealth and those who create new frivolous apps are praised, it should be remembered that the infrastructure of civilization that makes all these things possible depends largely on the skilled trades. Someone must wire the towers that make mobile phones possible so that people can Tweet their witty remarks, someone has to put in the plumbing and HVAC systems that make buildings livable so that the weasels of Wall Street have a proper place to pee, and so on for the foundation of civilization. As Jean le Rond d’Alembert so wisely said in 1751, “But while justly respecting great geniuses for their enlightenment, society ought not to degrade the hands by which it is served.” Excellent advice then, excellent advice now.

 


Poverty & the Brain

Posted in Business, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on July 14, 2017

A key part of the American mythology is the belief that a person can rise to the pinnacle of success from the depths of poverty. While this does occur, most understand that poverty presents a considerable obstacle to success. In fact, the legendary tales that tell of such success typically embrace an interesting double vision of poverty: they praise the hero for overcoming the incredible obstacle of poverty while also asserting that anyone with gumption should be able to achieve this success.

Outside of myths and legends, it is a fact that poverty is difficult to overcome. There are, of course, the obvious challenges of poverty. For example, a person born into poverty will not have the same educational opportunities as the affluent. As another example, they will have less access to technology such as computers and high-speed internet. As a third example, there are the impacts of diet and health care—both necessities are expensive and the poor typically have less access to good food and good care. There is also recent research by scientists such as Kimberly G. Noble  that suggests a link between poverty and brain development.

While the most direct way to study the impact of poverty on the brain is by imaging the brain, this (as researchers have noted) is expensive. However, the research that has been conducted shows a correlation between family income and the size of some surface areas of the cortex. For children whose families make under $50,000 per year, there is a strong correlation between income and the surface area of the cortex. While greater income is correlated with greater cortical surface area, the apparent impact is reduced once the income exceeds $50,000 a year. This suggests, but does not prove, that poverty has a negative impact on the development of the cortex and that this impact is proportional to the degree of poverty.

Because of the cost of direct research on the brain, most research focuses on cognitive tests that indirectly test for the functionality of the brain. As might be expected, children from lower income families perform worse than their more affluent peers in their language skills, memory, self-control and focus. This performance disparity cuts across ethnicity and gender.

As would be expected, there are individuals who do not conform to the general correlation. That is, there are children from disadvantaged families who perform well on the tests and children from advantaged families who do poorly. As such, knowing the economic class of a child does not tell one what their individual capabilities are. However, there is a clear correlation when the matter is considered in terms of populations rather than single individuals. This is important to consider when assessing the impact of anecdotes of people successfully rising from poverty—as with all appeals to anecdotal evidence, they do not outweigh the bulk of statistical evidence.

To use an analogy, boys tend to be stronger than girls but knowing that Sally is a girl does not entail that one knows that Sally is weaker than Bob the boy. Sally might be much stronger than Bob. An anecdote about how Sally is stronger than Bob also does not show that girls are stronger than boys; it just shows that Sally is unusual in her strength. Likewise, if Sally lives in poverty but does exceptionally well on the cognitive tests and has a normal cortex, this does not prove that poverty does not have a negative impact on the brain. This leads to the obvious question about whether poverty is a causal factor in brain development.

Those with even a passing familiarity with causal reasoning know that correlation is not causation. To infer that, because there is a correlation between poverty and cognitive abilities, there must be a causal connection would be to fall victim to the most basic of causal fallacies. One possibility is that the correlation is a mere coincidence and there is no causal connection. Another possibility is that there is a third factor that is causing both—that is, poverty and the cognitive abilities are both effects.

There is also the possibility that the causal connection has been reversed. That is, it is not poverty that increases the chances a person has less cortical surface (and corresponding capabilities). Rather, it is having less cortical surface area that is a causal factor in poverty.

This view does have considerable appeal. As noted above, children in poverty tend to do worse on tests for language skills, memory, self-control and focus. These are the capabilities that are needed for success and it seems reasonable to think that people who were less capable would thus be less successful. To use an analogy, there is a clear correlation between running speed and success in track races. It is not, of course, losing races that makes a person slow. It is being slow that causes a person to lose races.

Despite the appeal of this interpretation of the data, to rush to the conclusion that it is the cognitive abilities that cause poverty would be as much a fallacy as rushing to the conclusion that poverty influences brain development. Both views do seem plausible and it is certainly possible that there is causation going in both directions. The challenge, then, is to sort the causation. The obvious approach is to conduct the controlled experiment suggested by Noble—providing the experimental group of low income families with an income supplement and providing the control group with a relatively tiny supplement. If the experiment is conducted properly and the sample size is large enough, the results would be statistically significant and provide an answer to the question of the causal connection.
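To make the statistical logic of such an experiment concrete, here is a minimal sketch in Python of how its results might be analyzed. Every number in it (group sizes, test scores, the size of the effect) is hypothetical and invented purely for illustration; none of it comes from Noble's proposed study.

```python
# A minimal sketch of analyzing the randomized experiment described above.
# All numbers (sample sizes, means, effect size) are hypothetical and
# invented for illustration; none come from the actual study design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # families per group; a large sample gives statistical power

# Simulated cognitive test scores some years after the supplements begin.
# Assume, purely for illustration, that the full supplement raises mean
# scores by 3 points (0.2 standard deviations).
control = rng.normal(loc=100, scale=15, size=n)    # tiny supplement
treatment = rng.normal(loc=103, scale=15, size=n)  # full income supplement

# Because families were assigned to groups at random, a statistically
# significant difference in means supports a causal effect of income,
# rather than a mere correlation or a reversed causal direction.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"mean difference: {treatment.mean() - control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The design choice doing the work here is randomization: since chance, rather than pre-existing traits, determines who gets the supplement, factors such as cortical surface area cannot explain away a significant result.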

Intuitively, it makes sense that an adequate family income would generally have a positive impact on the development of children. After all, this income would allow access to adequate food, care and education. It would also tend to have a positive impact on family conditions, such as emotional stress. This is not to say that throwing money at poverty is the cure; but reducing poverty is certainly a worthwhile goal regardless of its connection to brain development. If it does turn out that poverty does have a negative impact on development, then those who are concerned with the well-being of children should be motivated to combat poverty. It would also serve to undercut another American myth, that the poor are stuck in poverty simply because they are lazy. If poverty has the damaging impact on the brain it seems to have, then this would help explain why poverty is such a trap.

 


Enslaved by the Machine

Posted in Business, Philosophy, Technology by Michael LaBossiere on July 7, 2017

A common theme of dystopian science fiction is the enslavement of humanity by machines. The creation of such a dystopia was also a fear of Emma Goldman. In one of her essays on anarchism, she asserted that

Strange to say, there are people who extol this deadening method of centralized production as the proudest achievement of our age. They fail utterly to realize that if we are to continue in machine subserviency, our slavery is more complete than was our bondage to the King. They do not want to know that centralization is not only the death-knell of liberty, but also of health and beauty, of art and science, all these being impossible in a clock-like, mechanical atmosphere.

When Goldman was writing in the 1900s, the world had just recently entered the age of industrial machinery and the technology of today was at most a dream of visionary writers. As such, the slavery she envisioned was not of robot masters ruling over humanity, but humans compelled to work long hours in factories, serving the machines to serve the human owners of these machines.

The labor movements of the 1900s did much to offset the extent of the servitude workers were forced to endure, at least in the West. As the rest of the world industrialized, the story of servitude to the factory machine played out once again. While the whole point of factory machines was to automate the work as much as possible so that few could do the work that once required many, it is only in relatively recent years that what many would consider “true” automation has taken place. That is, having machines automatically do the work instead of humans. For example, the robots used to assemble cars do what humans used to do. As another example, computers instead of human operators now handle phone calls.

In the eyes of utopians, this sort of progress was supposed to free humans from tedious and dangerous work, allowing them to, at worst, be free to engage in creative and rewarding labor. The reality, of course, turned out to not be this utopia. While automation has replaced humans in some tedious, low-paying and dangerous jobs, automation has also replaced humans in what were once considered good jobs. Humans also continue to work in tedious, low-paying and dangerous jobs—mainly because human labor is still cheaper or more effective than automation in those areas. For example, fast food restaurants do not have burgerbots to prepare the food. This is because cheap human labor is readily available and creating a cost-effective robot that can make a hamburger as well as a human has proven difficult. As such, the dream that automation would free humanity has so far proven to be just that, a dream. Machines have mainly been pushing humans out of jobs, sometimes into jobs that would seem better suited for machines than humans, at least if human wellbeing were considered important. However, there remains the question of human subservience to the machine.

Humans do, obviously enough, still work jobs like those condemned by Goldman. But, thanks to technology, humans are now even more closely supervised and regulated by machines. For example, there is software designed to monitor employee productivity. As another example, some businesses use workplace cameras to watch employees. Obviously enough, these practices can be dismissed as something other than enslavement by machines—rather, they can be regarded as good human resource management, ensuring that the human workers operate as close to clockwork efficiency as possible. At the command of other humans, of course.

One rather interesting technology that looks rather like servitude to the machine is warehouse picking of the sort done by Amazon. Amazon and other companies have automated some of the picking process, making use of robots in various tasks. But, while a robot might bring shelves to human workers, the humans are the ones picking the products for shipping. Since humans tend to have poor memories and get bored with picking, human pickers have been automated—they wear headsets connected to computers that tell them what to do, then they tell the computers what they have done. That is, the machines are the masters and the humans are doing their bidding.

It is easy enough to argue that this sort of thing is not enslavement by machines. First, the computers controlling the humans are operating at the behest of the owners of Amazon who are presumably humans. Second, the humans are being paid for their labors and are not owned by the machines (or Amazon). As such, any enslavement of humans by machines would be purely metaphorical.

Interestingly, the best case for human enslavement by machines can be made outside of the workplace. Many humans are now ruled by their smartphones and tablets—responding to every beep and buzz of their masters, ignoring those around them to attend to the demands of the device, and living lives revolving around the machine.

This can be easily dismissed as a metaphor—while humans are addicted to their devices, they do not actually meet the definition of slaves. They willingly “obey” their devices and are not coerced by force or fraud—they could simply turn them off. That is, they are free to do as they want; they just do not want to disobey their devices. Humans are also not owned by their devices; rather, they own their devices. But it is reasonable to consider that humans are in a form of bondage—their devices have seduced them into making the devices the focus of their attention and have thus become their masters. Albeit mindless masters with no agenda of their own. Yet.

 

 


Adult ADHD & Ethics

Posted in Business, Ethics, Medicine/Health, Philosophy, Politics by Michael LaBossiere on June 2, 2017

In 2017, the World Health Organization released a six-question “test” for adult attention deficit hyperactivity disorder (ADHD). While even proponents of the questions warn that people should not self-diagnose with the “test,” there is the obvious question of the effectiveness of such a diagnostic method. After all, as others have noted, almost every adult seems to exhibit the symptoms that the questions ask about. For example, difficulty in concentrating, unwinding and relaxing seems to be the plight of most people. I first learned of a similar sort of diagnostic tool at a mandatory training session on learning disabilities, where another faculty member commented on this tool by saying “by those standards, I think we all have ADHD.” Everyone agreed. Because of these concerns, doctors tend to agree that the simple screening test is not sufficient to diagnose adult ADHD. While using an unreliable method of diagnosing adult ADHD would be problematic, there are also important moral concerns about this matter.

Coincidentally enough, many of the doctors who served on the advisory panel for developing the screening method have enjoyed the financial support of the pharmaceutical companies that produce the drugs used to “treat” adult ADHD. Such payments to doctors by pharmaceutical companies are standard practice and drive much of how treatment works in the United States. Doctors who are not influenced by pharmaceutical companies are less inclined to prescribe the brand-name products of those companies, which is hardly surprising.

It is important to note that the fact that doctors are enriched financially by pharmaceutical companies that profit from ADHD drugs does not prove that the questions are not useful, nor does it prove that the doctors are wrong in expanding the number of people on ADHD drugs. After all, the possibility that a person making a claim is biased does not entail that the claim is false; to think otherwise would be an error of logic. That said, if a person is an interested party and stands to gain, then the relevant claims should be considered with due skepticism. As such, the doctors who are pushing the agenda of the pharmaceutical companies that enrich them should be regarded as lacking in credibility to the degree they are likely to be influenced by this enrichment. Which, one would infer, would be significant. As is always the case in such situations, what is needed are more objective sources of information about ADHD. As should not be surprising, those who are not being enriched by the industry are not as enthusiastic about expanding the ADHD drug market. This raises reasonable ethical concerns about whether the industry is profiting at the expense of people who are being pushed to use drugs they do not actually need. Given the general track record of these companies, this sort of unethical behavior does seem likely.

Since I am not a medical doctor specializing in ADHD, I lack the expertise to properly assess the matter. However, I can offer some rational consideration of adult ADHD and its treatment with pharmaceuticals. The diagnostic questions focus on such factors as concentration, the ability to remain seated, the ability to relax, the ability to let people finish sentences, the ability to avoid procrastination, and independence in ordering one’s life. As noted above, these are all things that all humans have difficulty with at one time or another. Of course, even the proponents of medicating people note that it takes more than the usual problems to make a person a candidate for medication. But, of course, these proponents do have a fairly generous view of who should be medicated.

One reasonable concern is that there are non-pharmaceutical methods of addressing problematic behaviors of this sort. While, as noted above, I am not an ADHD specialist, I do have extensive training in methods of concentration (thanks to running, martial arts and academics). As such, I know that people can be trained to have better focus without the use of profitable chemicals. Since these drugs have side effects and cost money, it would be morally and practically preferable to focus on non-chemical methods of developing positive traits. Aristotle developed just such a method long ago: training in virtue by habituation. But, it can be objected, there are people who cannot or will not use such non-pharmaceutical methods.

This is a reasonable reply. After all, while many medical conditions can be addressed without drugs, there are times when drugs are the only viable option, such as in cases of severe bacterial infections. However, there is still an important concern: are the drugs merely masking the symptoms of an underlying problem?

In the United States, most adults do not get enough sleep and are under considerable stress. This is due, largely, to the economic system that we accept and tolerate. It is well known that lack of sleep and stress cause exactly the sort of woes that are seen as symptoms of adult ADHD. As such, it seems reasonable to think that problematic adult ADHD is largely the result of the American way of life. While the drugs mask the real problems, they do not solve them. In fact, these drugs can be seen as analogous to the soma of Aldous Huxley’s Brave New World. If this is true, then the treatment of ADHD with drugs is morally problematic in at least two ways. First, it does not really treat the problems; it merely masks them and leaves the real causes in place. Second, drugging people in this manner makes it easier for them to tolerate a political, social and economic system that is destroying them, which is morally wrong. In light of the above discussion, the pushing of ADHD drugs on adults is morally wrong.


Panhandling & Free Expression

Posted in Business, Ethics, Philosophy, Politics by Michael LaBossiere on May 24, 2017

Many local officials tend to believe that panhandlers are detrimental to local businesses and tourism and, as such, it is no surprise that there have been many efforts to ban begging. While local governments keep trying to craft laws to pass constitutional muster, their efforts have generally proven futile in the face of the First Amendment. While the legal questions are addressed by courts, there remains the moral question of whether the banning of panhandling can be morally justified.

The obvious starting point for a moral argument for banning panhandling is a utilitarian approach. As noted above, local officials generally want to have such bans because they believe panhandlers can be bad for local businesses and tourism in general. For example, if potential customers are accosted by scruffy and unwashed panhandlers on the streets around businesses, then they are less likely to patronize those businesses. As another example, if a city gets a reputation for being awash in beggars who annoy tourists with their pleas for cash, then tourism is likely to decline. From the perspective of the business owners and the local officials, these effects would have negative value that would outweigh the benefits to the panhandlers of being able to ask for money. There is presumably also utility in encouraging panhandlers to move away to other locations, thus removing the financial and social cost of having panhandlers. If this utilitarian calculation is accurate, then banning panhandling would be morally acceptable. Of course, if the calculation is not correct and such a ban would do more harm than good, then the ban would be morally wrong.

A second utilitarian argument is the safety argument. While panhandlers generally do not engage in violence (they, after all, are asking for money and not trying to rob people), it has been claimed that they do present a safety risk. The standard concern is that by panhandling in or near traffic, they put themselves and others in danger. If this is true, then banning panhandling would be the right thing to do.  If, however, the alleged harm does not justify the ban, then it would be morally unacceptable.

There is also the obvious reply that any safety concerns could be addressed by having laws that forbid people from obstructing the flow of traffic and being a danger to themselves and others. Presumably many such laws exist in various localities. There is also the concern that the safety argument would need to be applied consistently to all such allegedly risky behavior around traffic, such as people engaging in political campaigns or street side advertising.

It is also easy enough to advance a utilitarian argument in favor of panhandling that is based on the harm that could be done by restricting the panhandlers’ freedom of expression and activity. Following Mill’s classic argument, as long as panhandlers are not harming people with their panhandling, then it would be wrong to limit their freedom to engage in this behavior. This is on the condition that the panhandling is, at worst, merely annoying and does not involve threatening behavior or harassment.

It could be objected that panhandling does cause harm; as noted above, the presence of panhandlers could harm local businesses. People can also regard panhandling as an infringement on their freedom to not be bothered in public. While this does have some appeal, this justification of a panhandling ban would also justify banning any public behavior people found annoying or that had some perceived impact on local businesses. This could include public displays of expression, political campaigning, preaching in public and many other behaviors that should not be banned. In short, the problem is that there is nothing distinct enough about panhandling that would allow it to be banned without also justifying the ban of other activities. Simply banning it because it is panhandling might seem to solve this problem, but it would not: if an activity can be justly banned merely because it is that activity, then this would apply to any activity, since every activity is the activity it is.

Those who prefer an alternative to utilitarian calculations can easily defend panhandling against proposed bans by appealing to a right of free expression and behavior that is not based on utility. If people do have the moral right to free expression, then reasons would need to be advanced that are strong enough to warrant violating this right. As noted above, an appeal could be made to the rights of businesses and the rights of other people to avoid being annoyed. However, the right to not be annoyed does not seem to trump the right of expression until the annoyance becomes significant. As such, a panhandler does have the right to annoy a person by asking for money, but if the behavior crosses over into actual harassment, then it would be handled by the fact that people do not have a right to harass others.

In the case of businesses, while they do have a right to engage in free commerce, they do not have a right to expect people to behave in ways that are conducive to their business. If, for example, people found it offensive to have runners running downtown and decided to take their business elsewhere, this would not warrant a ban on runners. But, if runners were blocking access to the businesses by running around the entrances, then the owners’ rights would be violated. Likewise, if panhandlers are disliked by people who then decide to take their business elsewhere, this does not violate the rights of the businesses. But, if panhandlers started harassing people and blocking access to the businesses, then this would violate the rights of the owners.
