Capitalism & Racing
While analogies, like cars, always break down in the end, they can be useful discussion devices. While on a run, I was thinking about my recent injury-induced lack of racing trophies and this topic kept blending in with that of the division of goods in a competitive capitalist society. This led to an obvious analogy between road races and capitalism.
Both racing and capitalism involve competition and this generates winners and losers. Winners are, of course, supposed to be rewarded for their victories while losers are expected to reflect on their defeat and try harder next time. When planning a road race or managing capitalism, those in charge must address the nature and division of the rewards for the competition. In the case of a road race, this requires the race director to work out the prizes and decide such things as whether there will be age group awards and, if so, how deep the awards will go. In the case of capitalism, those in charge decide how the laws and policies will divide up the rewards.
While there are many ways to approach the division of the rewards, there are two broad approaches. One is to have a top-heavy reward system that concentrates the rewards among a few winners. In the case of a road race, this will typically involve all the prizes going to the top three runners or even just the first-place finisher. In the case of capitalism, this will typically involve most of the rewards going to the very top winners with the leftovers divided up among the many losers.
Another approach, broadly speaking, is to spread the rewards among a larger base of winners. For example, many races have age group awards in addition to the overall awards. Most races also have separate male and female divisions, which further divides the prizes of victory. In the case of capitalism, this approach would give less to the top winners and divide more among the lesser winners. For example, under such an approach successful small businesses and successful middle and lower class individuals would get more of the rewards. This would, of course, mean less for those at the top of the pyramid, such as the biggest corporations and the individual billionaires.
One argument often advanced in favor of the top-heavy systems of capitalism is to contend that a broader division of the rewards would be some sort of socialism that would destroy competition. But, this is not the case. A broader reward system would still be competitive capitalism, it would just have a broader division of the rewards. Returning to the race analogy, a race that has a broader division of prizes is no less a race than one that offers prizes only to the first-place finishers. Competition remains, the difference is that there will be more winners and fewer losers.
It could, of course, still be argued that having a broader division of rewards would reduce competition and make things worse. In the case of a race, the idea is that runners would think “why should I train or race as hard as before to try to win the whole race when I can now get a prize for being third in my age group?” In the case of capitalism, people would presumably say “why should I work as hard as before to try to be the biggest winner when I can now get decent rewards for just being moderately successful?”
While I will not claim that no one thinks that way, most runners still train hard and race hard regardless of what sort of division of prizes the race offers. The same would seem to hold true of capitalism—people would still work hard and compete even when there was no massive prize for a few and little for everyone else. In fact, people who know they have little or no chance at the biggest prize would presumably compete somewhat harder if they knew that there was a broader division of the rewards and their efforts could pay off with prizes. Also, in the case of capitalism, people already work hard for small prizes when they know they have no chance of ever getting the biggest prize. As such, unless they are delusional or irrational, they are not motivated by the top-heavy reward system. Survival provides an adequate motivation.
At this point, one might want to bring up the example of races that have participation awards—that everyone gets a medal just for showing up. The economic analog would be a form of socialism or communism in which everyone gets the same reward regardless of effort. This, many would argue, would be terrible and unfair.
In the case of races, runners still compete even if everyone gets the same prize (be it the same medal or nothing at all). This is because many people simply love to compete for the sake of competition or for reasons that have nothing to do with prizes. It would hardly be a stretch to think that this view also extends into the economic realm—especially since there are people, such as open source developers and community volunteers, who work hard for no prizes. But there is certainly a reasonable case to be made that people need to win prizes to be really motivated to do anything.
I must admit that while I will run hard in such a race, I still love competing for prizes. As such, I prefer races that offer competition-based rewards. I am, however, grudgingly tolerant of participation medals—after all, someone who shows up and does the whole race has accomplished something meaningful even if they did not win. Naturally enough, a race can have both participation medals and prizes for winning. In the case of an economy, this would be a competitive system that offered better rewards to the winners, but also provided those who are actively participating in the economy with at least a minimal reward. One area in which this analogy breaks down is that the economy has people who cannot participate (the very old, the very young, the ill and so on) and it would be a far more serious matter for these people to get nothing than it is for people who do not finish the race to not get their participation medals.
Medicaid Expansion & Hospital Closures
One aspect of Obamacare was the expansion of Medicaid in states that agreed to accept this expansion. Some states, such as my adopted state of Florida, declined the expansion. This provided researchers with an opportunity to study the effects of accepting or rejecting the expansion.
One study, conducted by researchers at the University of Colorado Anschutz Medical Campus, found that hospitals in states that expanded Medicaid were six times less likely to close than hospitals in states that declined the expansion. Hospitals in rural areas, which tend to rely more heavily on Medicaid and generally have less income relative to urban hospitals, were the hardest hit.
These results are hardly surprising. Hospitals are required by the 1986 Emergency Medical Treatment and Labor Act (EMTALA) “to ensure public access to emergency services regardless of ability to pay.” As such, unlike other businesses, they cannot turn away people who cannot pay for the services they provide. While Medicaid payments to hospitals are notoriously low, some payment is better than no payment. Because of this, hospitals in states that expanded Medicaid are less likely to need to provide unpaid services and this makes it more likely that they can remain profitable and stay open.
It is, of course, reasonable to consider alternative explanations. After all, mere correlation is not causation and it would be fallacious post hoc reasoning (to infer that because A happened after B, B must have caused A) to simply conclude that Medicaid is the cause. The states that expanded Medicaid might differ in other ways from states that did not—for example, they might have more robust economies or larger percentages of privately insured patients. That said, the study does seem to support the connection between Medicaid and hospitals remaining open.
One moral and practical concern about hospital closings is that people who need care will be less able to receive it. While it would be hyperbole to claim that a hospital closing leaves people in the area with no care, it does reduce their access to care. This is especially of concern in rural areas that already have few hospitals. While people can, of course, travel to get medical care, increased travel times would reduce the likelihood that people will seek care and would also impact outcomes. For example, rapid treatment is critical for stroke victims. Even if patients still have access to a local hospital, closures will increase the time patients must wait for treatment (the remaining hospitals must absorb the displaced patients) and this can have a negative impact on medical outcomes.
While health care does not operate within a free market of informed consumers and competitive prices, the closing of hospitals can result in increased costs for medical care. After all, the scarcer a commodity is, the more people tend to charge for it. Since medical care is already extremely expensive, an increase in costs would be even more of a burden on patients, especially those who are not affluent.
Because of the negative impact of not expanding Medicaid, states that have not expanded it should do so. This will decrease hospital closures and thus have a generally positive impact. From a moral standpoint, this would be the right thing to do—assuming that the state has an obligation to the well-being of its citizens.
One obvious counter to this view is to argue against such an obligation. This position is often taken by conservatives who favor limited government and oppose entitlements. There is also the obvious market-based argument here (although medical care is clearly not operating as a free market). The gist of this argument is that medical services are a business and that if a business cannot stay open on its own, then the state has no obligation to intervene. As such, Medicaid should not be expanded to address this problem: if the hospitals cannot stay open on their own, then the market should close them.
The easy and obvious reply to this is that, as noted above, the law requires hospitals to provide medical services even when patients cannot pay. By imposing this restriction, the state has taken a strong role in the market. Since the state imposes this requirement on hospitals, it seems reasonable that the state should take steps to offset this burden—in this case, by expanding Medicaid.
Alternatively, EMTALA could be repealed and hospitals could operate like other businesses in terms of being able to refuse services for those who cannot pay. In this case, there would not be a need to expand Medicaid to assist hospitals in remaining open—they would not lose money providing services to those who cannot pay. But, there would be a high cost in terms of sickness and death among those unable to afford medical care. There is also the possibility that even without the burden of EMTALA hospitals would still be more likely to close without a Medicaid expansion. After all, while hospitals would not be losing money on patients who cannot pay, they would also not have the financial benefit of the Medicaid expansion. As such, their closure rate would presumably be higher than hospitals in states that have expanded Medicaid.
Work & Vacation
Most Americans do not use all their vacation days, despite the fact that they tend to get fewer of them than their European counterparts. A variety of plausible reasons have been advanced for this, most of which reveal interesting facts about working in the United States.
As would be expected, fear is a major factor. Even when a worker is guaranteed paid vacation time as part of their compensation for work, many workers are afraid that using this vacation time will harm them. One worry is that by using this time, they will show that they are not needed or are inferior to workers who do not take as much (or any) time and hence will be passed over for advancement or even fired. On this view, vacation days are a trap—while they are offered and the worker has earned them, to use them all would sabotage or end the person’s employment. This is not to say that all or even many employers intentionally set a vacation day trap—in fact, many employers seem to need to make a special effort to get their employees to use their vacation days. However, this fear is real and does indicate a problem with working in America.
Another fear that keeps workers from using all their days is the fear that they will fall behind in their work, thus requiring them to work extra hard before or after their vacation. On this view, there is little point in taking a vacation if one will just need to do the missed work and do it in less time than if one simply stayed at work. The practical challenge here is working out ways for employees to vacation without getting behind (or thinking they will get behind). After all, if an employee is needed at a business, then their absence will mean that things that need to get done will not get done. This can be addressed in various ways, such as sharing workloads or hiring temporary workers. However, an employee can then be afraid that the business will simply fire them in favor of permanently sharing the workload or replacing them with a series of lower-paid temporary workers.
Interestingly enough, workers often decline to use all their vacation days because of pride. The idea is that by not using their vacation time, a person can create the impression that they are too busy and too important to take time off from work. In this case, the worker is not afraid of being fired; rather, they are worried that they will lose status and damage their reputation. This is not to say that being busy is always a status symbol—there is, of course, also status attached to being so well off that one can be idle. This fits nicely into Hobbes’ view of human motivation: everything we do, we do for gain or glory. As such, if not taking vacation time increases one’s glory (status and reputation), then people will do that.
On the one hand, people who do work hard (and effectively) do deserve a positive reputation for these efforts and earn a relevant status. On the other hand, the idea that reputation and status are dependent on not using all one’s vacation time can clearly be damaging to a person. Humans do, after all, need to relax and recover. This view also, one might argue, puts too much value on the work aspect of a person’s life at the expense of their full humanity. Then again, for the working class in America, to be is to work (for the greater enrichment of the rich).
Workers who do not get paid vacations tend to not use all (or any) of their vacation days for the obvious reason that their vacations are unpaid. Since a vacation tends to cost money, workers without paid vacations can take a double hit if they take a vacation: they are getting no income while spending money. Since people do need time off from work, there have been some attempts to require that workers get paid vacation time. As would be imagined, this proposal tends to be resisted by businesses. In part it is because they do not like being told what they must do and in part it is because of concerns over costs. While moral arguments about how people should be treated tend to fail, there is some hope that practical arguments about improved productivity and other benefits could succeed. However, as workers have less and less power in the United States (in part because workers have been deluded into embracing ideologies and policies contrary to their own interests), it seems less and less likely that paid vacation time will increase or be offered to more workers.
Some workers also do not use all their vacation days for vacation because they need to use them for other purposes, such as sick days. It is not uncommon for working mothers to save their vacation days to use for when they need to take care of the kids. It is also not uncommon for workers to use their vacation days as sick days, when they need to be at home for a service visit, when they need to go to the doctor or for other similar things. If it is believed that vacation time is something that people need, then forcing workers to use up their vacation time for such things would seem to be wrong. The obvious solution, which is used by some businesses, is to offer such things as personal days, sick leave, and parental leave. While elite employers offer elite employees such benefits, they tend to be less available to workers of lower social and economic classes. So, for example, Sheryl Sandberg gets excellent benefits, while the typical worker does not. This is, of course, a matter of values and not just economic ones. That is, while there is the matter of the bottom line, there is also the question of how people should be treated. Unfortunately, the rigid and punitive class system in the United States ensures that the well-off are treated well, while the little people face a much different sort of life.
Right-to-Try
There has been a surge of support for right-to-try bills and many states have passed these into law. Congress, eager to do something politically easy and popular, has also jumped on this bandwagon.
Briefly put, the right-to-try laws give terminally ill patients the right to try experimental treatments that have completed Phase 1 testing but have yet to be approved by the FDA. Phase 1 testing involves assessing the immediate toxicity of the treatment. This does not include testing its efficacy or its longer-term safety. Crudely put, passing Phase 1 just means that the treatment does not immediately kill or significantly harm patients.
On the face of it, the right-to-try is something that no sensible person would oppose. After all, the gist of this right is that people who have “nothing to lose” are given the right to try treatments that might help them. The bills that propose to codify the right into law make use of the rhetorical narrative that the right-to-try laws would give desperate patients the freedom to seek medical treatment that might save them and this would be done by getting the FDA and the state out of their way. This is a powerful rhetorical narrative since it appeals to compassion, freedom and a dislike of the government. As such, it is not surprising that few people dare argue against such proposals. However, the matter does deserve proper critical consideration.
One interesting way to look at the matter is to consider an alternative reality in which the narrative of these laws was spun with a different rhetorical charge—negative rather than positive. Imagine, for a moment, if the rhetorical engines had cranked out a tale of how the bills would strip away the protection of the desperate and dying to allow predatory companies to use them as guinea pigs for their untested treatments. If that narrative had been sold, people would be howling against such proposals rather than lovingly embracing them. Rhetorical narratives, be they positive or negative, are logically inert. As such, they are irrelevant to the merits of the right-to-try proposals. How people feel about the proposals is logically irrelevant as well. What is wanted is a cool examination of the matter.
On the positive side, the right-to-try does offer people the chance to try treatments that might help them. It is, obviously enough, hard to argue that people do not have a right to take such risks when they are terminally ill. That said, there are still some points that need to be addressed.
One important point is that there is already a well-established mechanism in place to allow patients access to experimental treatments. The FDA already has a system of expanded access that apparently approves the overwhelming majority of requests. Somewhat ironically, when people argue for the right-to-try by using examples of people successfully treated by experimental methods, they are showing that the existing system already allows people access to such treatments. This raises the question of why the laws are needed and what they change.
The main change in such laws tends to be to reduce the role of the FDA in the process. Without such laws, requests to use such experimental methods typically have to go through the FDA (which seems to approve most requests). If the FDA was denying people treatment that might help them, then such laws would seem to be justified. However, the FDA does not seem to be the problem here—they generally do not roadblock the use of experimental methods for people who are terminally ill. This leads to the question of what factors are limiting patient access.
As would be expected, the main limiting factors are those that impact almost all treatment access: costs and availability. While the proposed bills grant the negative right to choose experimental methods, they do not grant the positive right to be provided with those methods. A negative right is a liberty—one is free to act upon it but is not provided with the means to do so. The means must be acquired by the person. A positive right is an entitlement—the person is free to act and is provided with the means of doing so. In general, the right-to-try proposals do little or nothing to ensure that such treatments are provided. For example, public money is not allocated to pay for such treatments. As such, the right-to-try is much like the right-to-healthcare for most people: you are free to get it provided you can get it yourself. Since the FDA generally does not roadblock access to experimental treatments, the bills and laws would seem to do little or nothing new to benefit patients. That said, the general idea of right-to-try seems reasonable—and is already practiced. While few are willing to bring them up in public discussions, there are some negative aspects to the right-to-try. I will turn to some of those now.
One obvious concern is that terminally ill patients do have something to lose. Experimental treatments could kill them significantly earlier than their terminal condition would, or they could cause suffering that makes their remaining time even worse. As such, it does make sense to have some limit on the freedom to try. After all, it is the job of the FDA and medical professionals to protect patients from such harms—even if the patients want to roll the dice.
This concern can be addressed by appealing to freedom of choice—provided that the patients are able to provide informed consent and have an honest assessment of the treatment. This does create something of a problem: since little is known about the treatment, the patient cannot be well informed about the risks and benefits. But, as I have argued in many other posts, I accept that people have a right to make such choices, even if these choices are self-damaging. I apply this principle consistently, so I accept that it grants the right-to-try, the right to same-sex marriage, the right to eat poorly, the right to use drugs, and so on.
The usual counters to such arguments from freedom involve arguments about how people must be protected from themselves, arguments that such freedoms are “just wrong” or arguments about how such freedoms harm others. The idea is that moral or practical considerations override the freedom of the individual. This is a reasonable counter and a strong case can be made against allowing people the right to engage in a freedom that could harm or kill them. However, my position on such freedoms requires me to accept that a person has the right-to-try, even if it is a bad idea. That said, others have an equally valid right to try to convince them otherwise and the FDA and medical professionals have an obligation to protect people, even from themselves.
What Can be Owned?
One rather interesting philosophical question is that of what can, and perhaps more importantly cannot, be owned. There is, as one might imagine, considerable dispute over this matter. One major historical example of such a dispute is the debate over whether people can be owned. A more recent example is the debate over the ownership of genes. While each specific dispute needs to be addressed on its own merits, it is certainly worth considering the broader question of what can and what cannot be property.
Addressing this matter begins with the foundation of ownership—that is, what justifies the claim that one owns something, whatever that something might be. This is, of course, the philosophical problem of property. Many are not even aware there is such a philosophical problem—they uncritically accept the current system, though they might have some complaints about its particulars. But, to simply assume that the existing system of property is correct (or incorrect) is to beg the question. As such, the problem of property needs to be addressed without simply assuming it has been solved.
One practical solution to the problem of property is to contend that property is a matter of convention. This can be formal convention (such as laws) or informal convention (such as traditions) or a combination of both. One reasonable view is property legalism—that ownership is defined by the law. On this view, whatever the law defines as property is property. Another reasonable view is that of property relativism—that ownership is defined by cultural practices (which can include the laws). Roughly put, whatever the culture accepts as property is property. These approaches, obviously enough, correspond to the moral theories of legalism (that the law determines morality) and ethical relativism (that culture determines morality).
The conventionalist approach to property does seem to have the virtue of being practical and of avoiding mucking about in philosophical disputes. If there is a dispute about what (or who) can be owned, the matter is settled by the courts, by force of arms or by force of persuasion. There is no question of what view is right—winning makes the view right. While this approach does have its appeal, it is not without its problems.
Trying to solve the problem of property with the conventionalist approach does lead to a dilemma: the conventions are either based on some foundation or they are not. If the conventions are not based on a foundation other than force (of arms or persuasion), then they would seem to be utterly arbitrary. In such a case, the only reasons to accept such conventions would be practical—to avoid trouble with armed people (typically the police) or to gain in some manner.
If the conventions have some foundation, then the problem is determining what it (or they) might be. One easy and obvious approach is to argue that people have a moral obligation to obey the law or follow cultural conventions. While this would provide a basis for a moral obligation to accept the property conventions of a society, these conventions would still be arbitrary. Roughly put, those under the conventions would have a reason to accept whatever conventions were accepted, but no reason to accept one specific convention over another. This is analogous to the ethics of divine command theory, the view that what God commands is good because He commands it and what He forbids is evil because He forbids it. As should be expected, the “convention command” view of property suffers from problems analogous to those suffered by divine command theory, such as the arbitrariness of the commands and the lack of justification beyond obedience to authority.
One classic moral solution to the problem of property is that offered by utilitarianism. On this view, the practice of property that creates more positive value than negative value for the morally relevant beings would be the morally correct practice. This does make property a contingent matter—as the balance of positive against negative shifts, radically different conceptions of property can thus be justified. So, for example, while a capitalistic conception of property might be justified at a certain place and time, that might shift in favor of state ownership of the means of production. As always, utilitarianism leaves the door open for intuitively horrifying practices that manage to fulfill that condition. However, this approach also has an intuitive appeal in that the view of property that creates the greatest good would be the morally correct view of property.
One very interesting attempt to solve the problem of property is offered by John Locke. He begins with the view that God created everyone and gave everyone the earth in common. While God does own us, He is cool about it and effectively lets each person own themselves. As such, I own myself and you own yourself. From this, as Locke sees it, it follows that each of us owns our labor.
For Locke, property is created by mixing one’s labor with the common goods of the earth. To illustrate, suppose we are washed up on an island owned by no one. If I collect wood and make a shelter, I have mixed my labor with the wood that can be used by any of us, thus making the shelter my own. If you make a shelter with your labor, it is thus yours. On Locke’s view, it would be theft for me to take your shelter and theft for you to take mine.
As would be imagined, the labor theory of ownership quickly runs into problems, such as working out a proper account of mixing of labor and what to do when people are born on a planet on which everything is already claimed and owned. However, the idea that the foundation of property is that each person owns themselves is an intriguing one and does have some interesting implications about what can (and cannot) be owned. One implication would seem to be that people are owners and cannot be owned. For Locke, this would be because each person is owned by themselves and ownership of other things is conferred by mixing one’s labor with what is common to all.
It could be contended that people create other people by their labor (literally in the case of the mother) and thus parents own their children. A counter to this is that although people do engage in sexual activity that results in the production of other people, this should not be considered labor in the sense required for ownership. After all, the parents just have sex and then the biological processes do all the work of constructing the new person. One might also play the metaphysical card and contend that what makes the person a person is not manufactured by the parents, but is something metaphysical like the soul or consciousness (for Locke, a person is their consciousness and the consciousness is within a soul).
Even if it is accepted that parents do not own their children, there is the obvious question about manufactured beings that are like people such as intelligent robots or biological constructs. These beings would be created by mixing labor with other property (or unowned materials) and thus would seem to be things that could be owned. Unless, of course, they are owners.
One approach is to consider them analogous to children—it is not how children are made that makes them unsuitable for ownership, it is what they are. On this view, people-like constructs would be owners rather than things to be owned. The intuitive counter is that people-like manufactured beings would be property like anything else that is manufactured. The challenge is, of course, to show that this would not entail that children are property—after all, considerable resources and work can be expended to create a child (such as IVF, surrogacy, and perhaps someday artificial wombs), yet intuitively they would not be property. This does point to a rather important question: is it what something is that makes it unsuitable to be owned or how it is created?
Trump & Mercenaries: Arguments Against
While there are some appealing arguments in favor of the United States employing mercenaries, there are also arguments against this position. One obvious set of arguments is composed of those that focus on the practical problems of employing mercenaries. These problems include broad concerns about the competence of the mercenaries (such as worries about their combat effectiveness and discipline) as well as worries about the quality of their equipment. These concerns can, of course, be addressed on a case by case basis. Some mercenary operations are composed of well-trained, well-equipped ex-soldiers who are every bit as capable as professional soldiers serving their countries. If competent and properly equipped mercenaries are hired, there will obviously not be problems in these areas.
There are also obvious practical concerns about the loyalty and reliability of mercenaries—they are, after all, fighting for money rather than from duty or commitment to principles. This is not to disparage mercenaries. After all, working for money is what professionals do, whether they are mercenary soldiers, surgeons, electricians or professors. A surgeon who is motivated by money need not be less reliable than a colleague who is driven by a moral commitment to heal the sick and injured. Likewise, a soldier who fights for a paycheck need not be less dependable than a patriotic soldier.
That said, a person who is motivated primarily by money will act in accord with that value and this can make them considerably less loyal and reliable than someone motivated by higher principles. This is not to say that a mercenary cannot have higher principles, but a mercenary, by definition, sells their loyalty (such as it is) to the highest bidder. As such, this is a reasonable concern.
This concern can be addressed by paying mercenaries well enough to defend against bribery and by assigning tasks to mercenaries that require loyalty and reliability proportional to what the mercenaries can realistically offer. This, of course, can severely limit how mercenaries can be deployed and could make hiring them pointless—unless a nation has an abundance of money and a shortage of troops.
A concern that is both practical and moral is that mercenaries tend to operate outside of the usual chain of command of the military and are often exempt from many of the laws and rules that govern the operation of national forces. In many cases, mercenaries are intentionally granted special exemptions. An excellent illustration of how this can be disastrous is Blackwater, which was a major security contractor operating mercenary forces in Iraq.
In September of 2007 employees of Blackwater were involved in an incident resulting in 11 deaths. This was not the first such incident. Although many believe Blackwater acted incorrectly, the company was well protected against accountability because of the legal situation created by the United States. In 2004 the Coalition Provisional Authority administrator signed an order making all Americans in Iraq immune to Iraqi law. Security contractors enjoyed even greater protection. The Military Extraterritorial Jurisdiction Act of 2000, which allows charges to be brought in American courts for crimes committed in foreign countries, applies only to those contracting with the Department of Defense. Companies employed by the State Department, as was the case with Blackwater, are not covered by the law. Blackwater went even further and claimed exemption from all lawsuits and criminal prosecution. This defense was also used against a suit brought by families of four Blackwater employees killed in Iraq.
While there are advantages to granting mercenary forces exemptions from the law, Machiavelli warned against this because they might start “oppressing others quite contrary to your intentions.” His solution was to “keep him within the laws so that he does not overstep the mark.” This is excellent advice that should have been heeded. Instead, employing and placing such mercenaries beyond the law has led to serious problems.
The concern about mercenaries being exempt from the usual laws can be addressed simply enough: these exemptions can either be removed or not granted in the first place. While this will not guarantee good behavior, it can help encourage it.
The concern about mercenaries being outside the usual command structure can be harder to address. On the one hand, mercenary forces could simply be placed within the chain of command like any other unit. On the other hand, mercenary units are, by their very nature, outside of the usual command and organization structure and integrating them could prove problematic. Also, if the mercenaries are simply integrated as if they are normal units, then the obvious question arises as to why mercenaries would be needed in place of regular forces.
Yet another practical concern is that the employment of mercenaries can create public relations problems. While sending regular troops to foreign lands is always problematic, the use of mercenary forces can be more problematic. One reason is that the hiring of mercenaries is often looked down upon, in part because of the checkered history of mercenary forces. There is also the concern of how the local populations will perceive hired guns—especially given the above concerns about mercenaries operating outside of the boundaries that restrict regular forces. Finally, there is also the concern that the hiring of mercenaries can make the hiring country seem weak—the need to hire mercenaries would seem to suggest that the country has a shortage of competent regular forces.
A somewhat abstract argument against the United States employing mercenaries is based on the notion that nation states are supposed to be the sole operators of military forces. This, of course, assumes a specific view of the state and the moral right to operate military forces. If this conception of the state is correct, then hiring mercenaries would be to cede this responsibility (and right) to private companies, which would be unacceptable. The United States does allow private armies to exist within the country, if they have the proper connections to those in power. Blackwater, for example, was one such company. This seems to be problematic.
This concern can be countered with an alternative view of the state in which private armies are acceptable. In the case of private armies within a country, it could be argued that they are acceptable as long as they acknowledge the supremacy of the state. So, for example, an American mercenary company would be acceptable as long as it operated under conditions set by the United States government and served only in approved ways. To use an obvious analogy, there are “rent-a-cops” that operate somewhat like police. These are acceptable provided that they operate under the rules of the state and do not create a challenge to the police powers of the state.
While this counter is appealing, there do not seem to be any compelling reasons for the United States to cede its monopoly on military force and hire mercenaries. Other than to profit the executives and shareholders of these mercenary companies, of course.
Trump & Mercenaries: Arguments For
The Trump regime seems to be seriously considering outsourcing the war in Afghanistan to mercenaries. The use of mercenaries, or contractors (as they might prefer to be called), is a time-honored practice. While the United States leads the world in military spending and has a fine military, it is no stranger to employing mercenaries. For example, the security contractor Blackwater became rather infamous for its actions in Iraq.
While many might regard the employment of mercenaries as repugnant, the proposal to outsource military operations to corporations should not be dismissed out of hand. Arguments for and against it should be given their due consideration. Mere prejudices against mercenaries should not be taken as arguments, nor should the worst deeds committed by some mercenaries be taken as damning them all.
As with almost every attempt at privatizing a state function, one of the stock arguments is based on the claim that privatization will save money. In some cases, this is an excellent argument. For example, it is cheaper for state employees to fly on commercial airlines than for a state to maintain a fleet of planes to send employees around on state business. In other cases, this argument falls apart. The stock problem is that a for-profit company must make a profit, which means it must charge a margin over and above what it costs to provide the product or service. So, for a mercenary company to make money, it would need to pay all the costs that government forces would incur for the same operation and would need to charge extra to make a profit. As such, using mercenaries would not seem to be a money-saver.
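To make the arithmetic concrete, here is a minimal sketch in Python; the dollar figures and the margin are invented, and the assumption that the contractor’s underlying costs match the government’s comes from the argument above.

```python
# Invented numbers, purely to make the profit-margin point concrete.
government_cost = 100_000_000  # what the state would spend running the operation itself
contractor_cost = 100_000_000  # the same underlying costs, per the argument above
profit_margin = 0.10           # the contractor must also turn a profit

contract_price = contractor_cost * (1 + profit_margin)
print(contract_price > government_cost)  # True: hiring mercenaries costs more, not less
```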
It could be countered that mercenaries can have significantly lower operating costs than normal troops. There are various ways that costs could be cut relative to the costs of operating the government military forces: mercenaries could have cheaper or less equipment, they could be paid less, they could be provided less (or no) benefits, and mercenaries could engage in looting to offset their costs (and pass the savings on to their employer).
The cost cutting approach does raise some concerns about the ability of the mercenaries to conduct operations effectively: underpaid and underequipped troops would tend to do worse than better paid and better equipped troops. There are also obvious moral concerns about letting mercenaries loot.
However, there are savings that could prove quite significant: while the United States Department of Veterans Affairs has faced considerable criticism, veterans can get considerable benefits. For example, there is the GI Bill. Assuming mercenaries did not get such benefits, this would result in meaningful cost savings. In sum, if a mercenary company operated using common business practices of cost-cutting, then they could certainly run operations cheaper than the state. But, of course, if saving money is the prime concern, the state could engage in the same practices and save even more money by not providing a private contractor with the money needed to make a profit. Naturally, there might be good reasons why the state could not engage in these money-saving practices. In that case, the savings offered by mercenaries could justify their employment.
A second argument in favor of using mercenaries is based on the fact that those doing the killing and dying will not be government forces. While the death of a mercenary is as much the death of a person as the death of a government soldier, the mercenary’s death would tend to have far less impact on political opinion back home. The death of an American soldier in combat is meaningful to Americans in the way that the death of a mercenary would not.
While the state employing mercenaries is accountable for what they do, there is a distance between the misdeeds of mercenaries and the state that does not exist between the misdeeds of regular troops and the state. In practical terms, there is less accountability. It is, after all, much easier to disavow and throw mercenaries under the tank than it is to do the same with government troops.
This is not to say mercenaries provide a “get out of trouble” card to their employer—as the incidents in Iraq involving Blackwater showed, employers still get caught in the fallout from the actions of the mercenaries they hire. However, having such a force can be useful, especially when one wants to do things that would get regular troops into considerable trouble.
A final argument in favor of mercenaries is from the standpoint of the owners of mercenary companies. Most forms of privatization are a means of funneling public money into the pockets of executives and shareholders. Privatizing operations in Afghanistan could be incredibly profitable (or, rather, even more profitable) for contractors.
While receiving a tide of public money would be good for the companies, the profit argument runs directly up against the first argument for using mercenaries—that doing so would save money. This sort of “double vision” is common in privatization: those who want to make massive profits make the ironic argument that privatization is a good idea because it will save money.
The Hands that Serve
My grandparents made shoes, but I was guided on a path towards college that ultimately ended up with me being a philosophy professor—an abstract profession that is, perhaps, as far from shoe making as one can get. While most are not destined to become philosophers, the push towards college education persists to this day. In contrast, skilled trades and manual labor are typically looked down upon—even though a skilled trade can be very financially rewarding.
Looking down on skilled trades might seem unusual for the United States, a country that arose out of skilled trades and one that still purports to value an honest day’s work for an honest day’s pay. However, as noted above, there has been a shift away from valuing skilled trades in favor of college education and the associated jobs. Oddly, skilled trades are even considered by some to be, if not exactly shameful, nothing to be proud of. Instead, the respected professions typically require a college degree. Although, since inconsistency is the way of humanity, financial success without a degree is often lauded.
At this point one must be careful not to mistake the obsession with college degrees and associated jobs for a sign that Americans value intellectualism. While there are cultural icons such as Einstein, the United States has a strong anti-intellectual streak. Some of this is fueled by religion, some by the remnants of blue-collar practicality, and some by the knowledge of the elites that intellectuals can be a danger to the established order. What is at play here could be called “educationalism” to contrast it with “intellectualism.” In neutral terms, this can be taken as the valuing of education for its financial value in terms of the payoff in the workplace. In more negative terms, it can be taken as a prejudice or bias in favor of those with formal education. Because of the success of this sort of educationalism, people are encouraged to get an education primarily based on the financial returns to themselves and those who will exploit their labors. And part of the motivation is to avoid the stigma of not being in a profession that requires a degree.
While education can be valuable, this sort of educationalism is not without its negative consequences. As many have noted, one result has been an increase in those seeking college degrees. Since college degrees are now often absurdly expensive (thanks, in large part, to the adoption of the business model of exorbitant administrative salaries), this has resulted in a significant surge in college debt. There are also the predatory approaches of the for-profit colleges, which exist primarily to funnel public money to their executives and shareholders.
Another impact of this form of educationalism is that professions that do not require college degrees are cast as inferior to those that do require degrees. In some cases, this characterization is correct: for example, assembling burgers for a fast food chain is certainly inferior to nearly all jobs that require a college degree. However, this contempt for non-degree jobs often extends to skilled trades, such as those of electrician, plumber and carpenter.
In some cases, the looking down is based on the perception that skilled trades pay less than degree trades. While this can be the case, skilled trades can pay very well indeed—you can check this yourself by calling a plumber or electrician and inquiring how much they will charge for various tasks.
In other cases, people look down on the skilled trades because they think that, since these trades do not require a college degree, those who practice them must be less intelligent or less capable. That is, a common assumption is that people go into these trades because they lack the ability to navigate the rigors of a philosophy, art history or communications degree. Crudely put, the prejudice is that smart people get degrees, stupid people work in skilled trades or manual labor.
While completing college does require some minimal level of ability, as a professor with decades of experience I can attest to the fact that this ability can be very minimal indeed. Put crudely, stupid people can and do graduate with degrees—and some go on to considerable success. My point here is not, however, to say that college graduates can be just as stupid as those in the skilled trades. Rather, my point is that a college degree is not a reliable indicator of greater ability or intelligence.
Switching to a more positive approach, skilled trades can be just as challenging as professions that require college degrees. While the skilled trades obviously place more emphasis on manual work, such as wiring houses or rebuilding engines, this does not entail that they require less intelligence or ability.
I am in a somewhat uncommon position of holding a doctorate while also having some meaningful experience with various skilled trades. Part of this is because my background is such that to be a man required having a skill set that includes the basics of a variety of trades. To illustrate, I was expected to know how to build a camp, rewire outlets, service firearms, repair simple engines, and not die in the wilds. I used some of these skills to make money to pay for school and still use them today to save money. And not die. While I am obviously not a skilled professional, I have a reasonably good grasp of the skills and abilities needed to work in many skilled professions and I understand they typically require intelligence, critical thinking and creative thinking. Based on my own experience, I can say that addressing a technical problem with wiring or an engine can be just as mentally challenging as addressing a philosophical conundrum about the ethics of driverless cars. As such, it is mere prejudice to look down upon people in the skilled professions. Interestingly, some who would be horrified at being accused of the prejudices of racism or sexism routinely look down their noses at those in skilled professions.
Since I will occasionally do repairs or projects for people, I do get a chance to see the prejudice—I sometimes feel that I am operating “undercover” in such situations. This is analogous to how I feel when, as a white person who teaches at an HBCU, I hear people expressing racist views because, since I am white, they assume I am “one of them.” For example, on one occasion I was changing the locks for a grad school friend of mine who did not know a screwdriver from an instantiated universal. While I was doing this, some of her other friends stopped by. Not knowing who I was, they simply walked past, perhaps assuming I was some sort of peasant laborer. I overheard one of them whispering how glad he was to be in grad school, so he would not have to do such mundane and mindless work. Another whispered, with an odd pride, that she would have no idea how to do such work—presumably because her brain was far too advanced to guide her hands in the operation of a screwdriver. This odd combination is not uncommon: people often hold the view that skilled labor is beneath them while also believing that they simply cannot do such work. As in the incident just mentioned, it seems common for people to rationalize their lack of ability by telling themselves they are too smart to waste their precious brain space on such abilities. Presumably if one learns to replace a light switch, one must lose the ability to grasp the fundamentals of deconstruction.
When my friend realized what was going on, she hastened to introduce me as a grad student and everyone apologized because they first thought I was “just some maintenance worker” and not “one of them.” Needless to say, their attitude towards me changed dramatically, as did their behavior. As one might suspect, these were the same sort of people who would rail against the patriarchy and racism for their cruel prejudices and biases. And yet they fully embraced the biases of “educationalism” and held me in contempt until they learned I was as educated as they.
I must admit that I also have prejudices and biases. When an adult cannot do basic tasks like replacing a fill valve in a toilet or replacing a simple door lock, I do judge them. However, I try not to do this—after all, not everyone has a background in which they could learn such basic skills. But, of course, I expect people to reciprocate: in return they should not be prejudiced against people who pursue skilled trades instead of college degrees. And, of course, since a person cannot learn everything, everyone has massive gaps and voids in their skill sets.
While those who pursue careers in which they create ever more elaborate financial instruments to ruin the economy are rewarded with great wealth and those who create new frivolous apps are praised, it should be remembered that the infrastructure of civilization that makes all these things possible depends largely on the skilled trades. Someone must wire the towers that make mobile phones possible so that people can tweet their witty remarks, someone has to put in the plumbing and HVAC systems that make buildings livable so that the weasels of Wall Street have a proper place to pee, and so on for the foundation of civilization. As Jean Le Rond d’Alembert so wisely said in 1751, “But while justly respecting great geniuses for their enlightenment, society ought not to degrade the hands by which it is served.” Excellent advice then, excellent advice now.
Poverty & the Brain
A key part of the American mythology is the belief that a person can rise to the pinnacle of success from the depths of poverty. While this does occur, most understand that poverty presents a considerable obstacle to success. In fact, the legendary tales that tell of such success typically embrace an interesting double vision of poverty: they praise the hero for overcoming the incredible obstacle of poverty while also asserting that anyone with gumption should be able to achieve this success.
Outside of myths and legends, it is a fact that poverty is difficult to overcome. There are, of course, the obvious challenges of poverty. For example, a person born into poverty will not have the same educational opportunities as the affluent. As another example, they will have less access to technology such as computers and high-speed internet. As a third example, there are the impacts of diet and health care—both necessities are expensive and the poor typically have less access to good food and good care. There is also recent research by scientists such as Kimberly G. Noble that suggests a link between poverty and brain development.
While the most direct way to study the impact of poverty on the brain is by imaging the brain, this (as researchers have noted) is expensive. However, the research that has been conducted shows a correlation between family income and the size of some surface areas of the cortex. For children whose families make under $50,000 per year, there is a strong correlation between income and the surface area of the cortex. While greater income is correlated with greater cortical surface area, the apparent impact is reduced once the income exceeds $50,000 a year. This suggests, but does not prove, that poverty has a negative impact on the development of the cortex and this impact is proportional to the degree of poverty.
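The pattern described above can be illustrated with a small simulation: a relationship that rises steeply with income below a threshold and flattens above it. This is only a sketch using synthetic data; the $50,000 threshold comes from the description above, while the sample size, slopes and noise level are invented for illustration and are not the study’s numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic family incomes in thousands of dollars -- invented for illustration.
income = rng.uniform(10, 150, size=1_000)

# Hypothetical cortical surface area (arbitrary units): rises steeply with
# income below the $50k threshold and much more gently above it, plus noise.
# None of these numbers come from the actual study.
surface = (1000
           + 4.0 * np.minimum(income, 50)
           + 0.3 * np.maximum(income - 50, 0)
           + rng.normal(0, 40, size=1_000))

# The income/surface-area correlation is much stronger below $50k.
below, above = income < 50, income >= 50
r_below = np.corrcoef(income[below], surface[below])[0, 1]
r_above = np.corrcoef(income[above], surface[above])[0, 1]
print(f"correlation below $50k: {r_below:.2f}")  # roughly 0.7-0.8
print(f"correlation above $50k: {r_above:.2f}")  # much weaker
```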
Because of the cost of direct research on the brain, most research focuses on cognitive tests that indirectly test for the functionality of the brain. As might be expected, children from lower income families perform worse than their more affluent peers in their language skills, memory, self-control and focus. This performance disparity cuts across ethnicity and gender.
As would be expected, there are individuals who do not conform to the general correlation. That is, there are children from disadvantaged families who perform well on the tests and children from advantaged families who do poorly. As such, knowing the economic class of a child does not tell one what their individual capabilities are. However, there is a clear correlation when the matter is considered in terms of populations rather than single individuals. This is important to consider when assessing the impact of anecdotes about people successfully rising from poverty—as with all appeals to anecdotal evidence, they do not outweigh the bulk of statistical evidence.
To use an analogy, boys tend to be stronger than girls but knowing that Sally is a girl does not entail that one knows that Sally is weaker than Bob the boy. Sally might be much stronger than Bob. An anecdote about how Sally is stronger than Bob also does not show that girls are stronger than boys; it just shows that Sally is unusual in her strength. Likewise, if Sally lives in poverty but does exceptionally well on the cognitive tests and has a normal cortex, this does not prove that poverty does not have a negative impact on the brain. This leads to the obvious question about whether poverty is a causal factor in brain development.
Those with even a passing familiarity with causal reasoning know that correlation is not causation. To infer that, because there is a correlation between poverty and cognitive abilities, there must be a causal connection would be to fall victim to the most basic of causal fallacies. One possibility is that the correlation is a mere coincidence and there is no causal connection. Another possibility is that there is a third factor that is causing both—that is, poverty and the cognitive abilities are both effects.
There is also the possibility that the causal connection has been reversed. That is, it is not poverty that increases the chances a person has less cortical surface (and corresponding capabilities). Rather, it is having less cortical surface area that is a causal factor in poverty.
This view does have considerable appeal. As noted above, children in poverty tend to do worse on tests for language skills, memory, self-control and focus. These are the capabilities that are needed for success and it seems reasonable to think that people who were less capable would thus be less successful. To use an analogy, there is a clear correlation between running speed and success in track races. It is not, of course, losing races that makes a person slow. It is being slow that causes a person to lose races.
Despite the appeal of this interpretation of the data, to rush to the conclusion that it is the cognitive abilities that cause poverty would be as much a fallacy as rushing to the conclusion that poverty influences brain development. Both views do seem plausible and it is certainly possible that there is causation going in both directions. The challenge, then, is to sort out the causation. The obvious approach is to conduct the controlled experiment suggested by Noble—providing the experimental group of low-income families with an income supplement and providing the control group with a relatively tiny supplement. If the experiment is conducted properly and the sample size is large enough, the results should provide a statistically meaningful answer to the question of the causal connection.
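As a sketch of how such an experiment’s data might be analyzed, assume (hypothetically) that the outcome measure is a cognitive test score and that the two groups are compared with a standard two-sample t-test; the score scale, effect size and sample size below are all invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical cognitive test scores for children in families receiving a
# substantial income supplement vs. a token one. The score scale, effect
# size, and sample size are invented purely for illustration.
supplement = rng.normal(102, 15, size=2_000)  # assumes a modest real benefit
control = rng.normal(100, 15, size=2_000)

# A standard two-sample t-test: with a large enough sample, even a modest
# real difference shows up as statistically significant.
t_stat, p_value = stats.ttest_ind(supplement, control)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```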
Intuitively, it makes sense that an adequate family income would generally have a positive impact on the development of children. After all, this income would allow access to adequate food, care and education. It would also tend to have a positive impact on family conditions, such as reducing emotional stress. This is not to say that throwing money at poverty is the cure, but reducing poverty is certainly a worthwhile goal regardless of its connection to brain development. If it does turn out that poverty has a negative impact on development, then those who are concerned with the well-being of children should be motivated to combat poverty. It would also serve to undercut another American myth: that the poor are stuck in poverty simply because they are lazy. If poverty has the damaging impact on the brain it seems to have, then this would help explain why poverty is such a trap.