The United States has had a libertarian and anarchist thread since the beginning, which is certainly appropriate for a nation that espouses individual liberty and expresses distrust of the state. While there are many versions of libertarianism and these range across the political spectrum, I will focus on one key aspect of libertarianism. To be specific, I will focus on the idea that the government should impose minimal limits on individual liberty and that there should be little, if any, state regulation of business. These principles were laid out fairly clearly by the American anarchist Henry David Thoreau in his claims that the best government governs least (or not at all) and that government only advances business by getting out of its way.
I must admit that I find the libertarian-anarchist approach very appealing. Like many politically minded young folks, I experimented with a variety of political theories in college. I found Marxism unappealing—as a metaphysical dualist, I must reject materialism. I was also well aware of the brutally oppressive and murderous nature of the Marxist states, which was in direct opposition to both my ethics and my view of liberty. Fascism was certainly right out—the idea of the total state ran against my views of liberty. Since, like many young folks, I thought I knew everything and did not want anyone to tell me what to do, I picked anarchism as my theory of choice. Since I am morally opposed to murdering people, even for a cause, I sided with the non-murderous anarchists, such as Thoreau. I eventually outgrew anarchism, but I still have many fond memories of my halcyon days of naïve political views. As such, I do really like libertarian-anarchism and really want it to be viable. But, I know that liking something does not entail that it is viable (or a good idea).
Put in extremely general terms, a libertarian system would have a minimal state with extremely limited government impositions on personal liberty. The same minimalism would also extend to the realm of business—businesses would operate with little or no state control. Since such a system seems to maximize liberty and freedom, it seems to be initially very appealing. After all, freedom and liberty are good and more of a good thing is better than less. Except when it is not.
It might be wondered how more liberty and freedom is not always better than less. I find two of the stock answers both appealing and plausible. One was laid out by Thomas Hobbes. In discussing the state of nature (which is a form of anarchism—there is no state) he notes that total liberty (the right to everything) amounts to no right at all. This is because everyone is free to do anything and everyone has the right to claim (and take) anything. This leads to his infamous war of all against all, making life “nasty, brutish and short.” Like too much oxygen, too much liberty can be fatal. Hobbes’ solution is the social contract and the sovereign: the state.
A second answer was presented by J.S. Mill. In his discussion of liberty he argued that liberty requires limitations on liberty. While this might seem like a paradox or a slogan from Big Brother, Mill is actually quite right in a straightforward way. For example, your right to free expression requires that my right to silence you be limited. As another example, your right to life requires limits on my right to kill. As such, liberty does require restrictions on liberty. Mill does not limit the limiting of liberty to the state—society can impose such limits as well.
Given the plausibility of the arguments of Hobbes and Mill, it seems reasonable to accept that there must be limits on liberty in order for there to be liberty. Libertarians, who usually fall short of being true anarchists, do accept this. However, they do want the broadest possible liberties and the least possible restrictions on business.
In theory, this would appear to show that libertarianism provides the basis for a viable political system. After all, if libertarianism is the view that the state should impose the minimal restrictions needed to have a viable society, then it would be (by definition) a viable system. However, there is the matter of libertarianism in practice and also the question of what counts as a viable political system.
Looked at in a minimal sense, a viable political system would seem to be one that can maintain its borders and internal order. Meeting these two minimal objectives would seem to be possible for a libertarian state, at least for a while. That said, the standards for a viable state might be taken to be somewhat higher, such as the state being able to (as per Locke) protect rights and provide for the good of the people. It can be (and has been) argued that such a state would need to be more robust than the libertarian state. It can also be argued that a true libertarian state would either devolve into chaos or be forced into abandoning libertarianism.
In any case, the viability of a libertarian state would seem to depend on two main factors. The first is the ethics of the individuals composing the state. The second is the relative power of the individuals. This is because the state is supposed to be minimal, so that limits on behavior must be set largely by other factors.
In regards to ethics, people who are moral can be relied on to self-regulate their behavior to the degree they are moral. To the degree that the population is moral, the state does not need to impose limitations on behavior, since the citizens will generally not behave in ways that require the imposition of the compulsive power of the state. As such, liberty would seem to require a degree of morality on the part of the citizens that is inversely proportional to the limitations imposed by the state. Put roughly, good people do not need to be coerced by the state into being good. As such, a libertarian state can be viable to the degree that people are morally good. While some thinkers have faith in the basic decency of people, many (such as Hobbes) regard humans as lacking in what others would call goodness. Hence, the usual arguments about how the moral failings of humans require the existence of the coercive state.
In regards to the second factor, having liberty without an external coercive force maintaining the liberty would require that the citizens be comparable in political, social and economic power. If some people have greater power they can easily use this power to impose on their fellow citizens. While the freedom to act with few (or no) limits is certainly a great deal for those with greater power, it certainly is not very good for those who have less power. In such a system, the powerful are free to do as they will, while the weaker people are denied their liberties. While such a system might be libertarian in name, freedom and liberty would belong to the powerful and the weaker would be denied. That is, it would be a despotism or tyranny.
If people are comparable in power or can form social, political and economic groups that are comparable in power, then liberty for all would be possible—individuals and groups would be able to resist the encroachments of others. Unions, for example, could be formed to offset the power of corporations. Not surprisingly, stable societies are able to build such balances of power to avoid the slide into despotism and then to chaos. Stable societies also have governments that endeavor to protect the liberties of everyone by placing limits on how much people can inflict their liberties on other people. As noted above, people can also be restrained by their ethics. If people and groups varied in power, yet abided by the limits of ethical behavior, then things could still go well for even the weak.
Interestingly, a balance of power might actually be disastrous. Hobbes argued that it is because people are equal in power that the state of nature is a state of war. This rests on his view that people are hedonistic egoists—that is, people are basically selfish and care not about other people.
Obviously enough, in the actual world people and groups vary greatly in power. Not surprisingly, many of the main advocates of libertarianism enjoy considerable political and economic power—they would presumably do very well in a system that removed many of the limitations upon them since they would be freer to do as they wished and the weaker people and groups would be unable to stop them.
At this point, one might insist on a third factor that is beloved by the Adam Smith crowd: rational self-interest. The usual claim is that people would limit their behavior because of the consequences arising from their actions. For example, a business that served contaminated meat would soon find itself out of business because the survivors would stop buying the meat and spread the word. As another example, an employer who used his power to compel his workers to work long hours in dangerous conditions for low pay would find that no one would be willing to work for him and would be forced to improve things to retain workers. As a third example, people would not commit misdeeds because they would be condemned or punished by vigilante justice. The invisible hand would sort things out, even if people are not good and there is a great disparity in power.
The easy and obvious reply is that this sort of system generally does not work very well—as shown by history. If there is a disparity in power, that power will be used to prevent negative consequences. For example, those who have economic power can use that power to coerce people into working for low pay and can also use that power to try to keep them from organizing to create a power that can resist this economic power. This is why, obviously enough, people like the Koch brothers oppose unions.
Interestingly, most people get that rational self-interest does not suffice to keep people from acting badly in regards to crimes such as murder, theft, extortion, assault and rape. However, there is the odd view that rational self-interest will somehow work to keep people from acting badly in other areas. This, as Hobbes would say, arises from an insufficient understanding of humans. Or it is a deceit on the part of people who have the power to do wrong and get away with it.
While I do like the idea of libertarianism, a viable libertarian society would seem to require people who are predominantly ethical (and thus self-regulating) or a careful balance of power. Or, alternatively, a world in which people are rational and act from self-interest in ways that would maintain social order. This is clearly not our world.
My friend Ron claims that “Mike does not drive.” This is not true—I do drive, but I do so as little as possible. Part of it is frugality—I don’t want to spend more than I need to on gas and maintenance. Most of it is that I hate to drive: driving time is mostly wasted time—I would rather be doing something else—and I find driving an awful blend of boredom and stress. As such, I am completely in favor of driverless cars and want Google to take my money. That said, it is certainly worth considering some of the implications of the widespread adoption of driverless cars.
One of the main selling points of driverless cars is that they are supposed to be significantly safer than humans. This is for a variety of reasons, many of which involve the fact that machines do not (yet) get sleepy, bored, angry, distracted or drunk. Assuming that the significant increase in safety pans out, this means that there will be significantly fewer accidents and this will have a variety of effects.
Since insurance rates are (supposed to be) linked to accident rates, one might expect that insurance rates will go down. In any case, insurance companies will presumably be paying out less, potentially making them even more profitable.
Lower accident rates also entail fewer injuries, which will presumably be good for people who would have otherwise been injured in a car crash. It would also be good for those depending on these people, such as employers and family members. Fewer injuries also means less use of medical resources, ranging from ambulances to emergency rooms. On the plus side, this could result in some decrease in medical costs and perhaps insurance rates (or merely mean more profits for insurance companies, since they would be paying out less often). On the minus side, this would mean less business for hospitals, therapists and other medical personnel, which might have a negative impact on their income. On the whole, though, reducing the number of injuries seems to be a moral good on utilitarian grounds.
A reduction in the number and severity of accidents would also mean fewer traffic fatalities. On the plus side, having fewer deaths seems to be a good thing—on the assumption that death is bad. On the minus side, funeral homes will see their business postponed and the reduction in deaths could have other impacts on such things as the employment rate (more living people means more competition for jobs). However, I will take the controversial position that fewer deaths is probably good.
While a reduction in the number and severity of accidents would mean fewer and lower repair bills for vehicle owners, this also entails reduced business for vehicle repair businesses. Roughly put, every dollar saved in repairs (and replacement vehicles) by self-driving cars is a dollar lost by the people whose business it is to fix (and replace) damaged vehicles. Of course, the impact depends on how much a business depends on accidents—vehicles will still need regular maintenance and repairs. People will presumably still spend the money that they would have spent on repairs and replacements, and this would shift the money to other areas of the economy. The significance of this would depend on the amount of savings resulting from the self-driving vehicles.
Another economic impact of self-driving vehicles will be in the area of those who make money driving other people. If my truck is fully autonomous, rather than take a cab to the airport, I can simply have my own truck drop me off and drive home. It can then come get me at the airport. People who like to drink to the point of impairment will also not need cabs or services like Uber—their own vehicle can be their designated driver. A new sharing economy might arise, one in which your vehicle is out making money while you do not need it. People might also be less inclined to use airlines or buses—after all, your car can safely drive you to your destination while you sleep, play video games, read or even exercise (why not have exercise equipment in a vehicle for those long trips?). No more annoying pat downs, cramped seating, delays or cancellations.
As a final point, if self-driving vehicles operate within the traffic laws (such as speed limits and red lights) automatically, then the revenue from tickets and traffic violations will be reduced significantly. Since vehicles will be loaded with sensors and cameras, passengers (one can no longer describe them as drivers) will have considerable data with which to dispute any tickets. Parking revenue (fees and tickets) might also be reduced—it might be cheaper for a vehicle to just circle around or drive home than to park. This reduction in revenue could have a significant impact on municipalities—they would need to find alternative sources of revenue (or come up with new violations that self-driving cars cannot counter). Alternatively, the policing of roads might be significantly reduced—after all, if there are far fewer accidents and few violations, then fewer police would be needed on traffic patrol. This would allow officers to engage in other activities or allow a reduction of the size of the force. The downside of force reduction would be that the former police officers would be out of a job.
If all vehicles become fully self-driving, there might no longer be a need for traffic lights, painted lane lines or signs in the usual sense. Perhaps cars would be pre-loaded with driving data or there would be “broadcast pods” providing data to them as needed. This could result in considerable savings, although there would be the corresponding loss to those who sell, install and maintain these things.
The murder of nine people in the Emanuel AME Church in South Carolina ignited an intense discussion of race and violence. While there has been near-universal condemnation of the murders, some people have taken pains to argue that these killings are part of a broader problem of racism in America. This claim is supported by reference to the well-known history of systematic violence against blacks in America as well as consideration of data from today. Interestingly, some people respond to this approach by asserting that more blacks are killed by blacks than by whites. Some even seem obligated to add the extra fact that more whites are killed by blacks than blacks are killed by whites.
While these points are often just “thrown out there” without being forged into part of a coherent argument, presumably the intent of such claims is to somehow disprove or at least diminish the significance of claims regarding violence against blacks by whites. To be fair, there might be other reasons for bringing up such claims—perhaps the person is engaged in an effort to broaden the discussion to all violence out of a genuine concern for the well-being of all people.
In cases in which the claims about the number of blacks killed by blacks are brought forth in response to incidents such as the church shooting, this tactic appears to be a specific form of a red herring. This is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue. The basic idea is to “win” an argument by leading attention away from the argument and to another topic.
This sort of “reasoning” has the following form:
- Topic A is under discussion.
- Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).
- Topic A is abandoned.
In the case of the church shooting, the pattern would be as follows:
- The topic of racist violence against blacks is being discussed, specifically the church shooting.
- The topic of blacks killing other blacks is brought up.
- The topic of racist violence against blacks is abandoned in favor of focusing on blacks killing other blacks.
This sort of “reasoning” is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim. In the specific case at hand, switching the topic to black on black violence does nothing to address the topic of racist violence against blacks.
While the red herring label would certainly suffice for these cases, it is certainly appealing to craft a more specific sort of fallacy for cases in which something bad is “countered” by bringing up another bad. The obvious name for this fallacy is the “two bads fallacy.” This is a fallacy in which a second bad thing is presented in response to a bad thing with the intent of distracting attention from the first bad thing (or with the intent of diminishing the badness of the first bad thing).
This fallacy has the following pattern:
- Bad thing A is under discussion.
- Bad thing B is introduced under the guise of being relevant to A (when B is actually not relevant to A in this context).
- Bad thing A is ignored, or the badness of A is regarded as diminished or refuted.
In the case of the church shooting, the pattern would be as follows:
- The murder of nine people in the AME church, which is bad, is being discussed.
- Blacks killing other blacks, which is bad, is brought up.
- The badness of the murder of the nine people is abandoned, or its badness is regarded as diminished or refuted.
This sort of “reasoning” is fallacious because the mere fact that something else is bad does not entail that another bad thing thus has its badness lessened or refuted. After all, the fact that there are worse things than something does not entail that it is not bad. In cases in which there is not an emotional or ideological factor, the poorness of this reasoning is usually evident:
Sam: “I broke my arm, which is bad.”
Bill: “Well, some people have two broken arms and two broken legs.”
Joe: “Yeah, so much for your broken arm being bad. You are just fine. Get back to work.”
What seems to lend this sort of “reasoning” some legitimacy is that comparing two things that are bad is relevant to determining relative badness. If a person is arguing about how bad something is, it is certainly reasonable to consider it in the context of other bad things. For example, the following would not be fallacious reasoning:
Sam: “I broke my arm, which is bad.”
Bill: “Some people have two broken arms and two broken legs.”
Joe: “That is worse than one broken arm.”
Sam: “Indeed it is.”
Joe: “But having a broken arm must still suck.”
Sam: “Indeed it does.”
Because of this, it is important to distinguish between cases of the fallacy (X is bad, but Y is also bad, so X is not bad) and cases in which a legitimate comparison is being made (X is bad, but Y is worse, so X is less bad than Y, but still bad).
After the terrorist attack on the Emanuel African Methodist Episcopal Church in Charleston, commentators hastened to weave a narrative about the murders. Some, such as folks at Fox News, Lindsey Graham and Rick Santorum, endeavored to present the attack as an assault on religious liberty. This does fit the bizarre narrative that Christians are being persecuted in a country whose population and holders of power are predominantly Christian. While the attack did take place in a church, it was a very specific church with a history connected to the struggle against slavery and racism in America. If the intended target was just a church, presumably any church would have sufficed. Naturally, it could be claimed that it just so happened that this church was selected.
The alleged killer’s own words make his motivation clear. He said that he was killing people because blacks were “raping our women” and “taking over our country.” As far as currently known, he made no remarks about being motivated by hate of religion in general or Christianity in particular. Those investigating his background found considerable evidence of racism and hatred of blacks, but evidence of hatred against Christianity seems to be absent. Given this evidence, it seems reasonable to accept that the alleged killer was there to specifically kill black people and not to kill Christians.
Some commentators also put forth the stock narrative that the alleged killer suffered from mental illness, despite there being no actual evidence of this. This, as critics have noted, is the go-to explanation when a white person engages in a mass shooting. This explanation is given some credibility because some shooters have, in fact, suffered from mental illness. However, people with mental illness (an incredibly broad and diverse population) are far more often the victims of violence than its perpetrators.
It is certainly tempting to believe that a person who could murder nine people in a church must be mentally ill. After all, one might argue, no sane person would commit such a heinous deed. An easy and obvious reply is that if mental illness is a necessary condition for committing wicked deeds, then such illness must be very common in the human population. Accepting this explanation would, on the face of it, seem to require accepting that the Nazis were all mentally ill. Moving away from the obligatory reference to Nazis, it would also entail that all violent criminals are mentally ill.
One possible counter is to simply accept that there is no evil, merely mental illness. Some do accept this option, and some even realize and embrace its implications. Accepting this view does require its consistent application: if a white man who murders nine people must be mentally ill, then an ISIS terrorist who beheads a person must also be mentally ill rather than evil. As might be suspected, the narrative of mental illness is not, in practice, consistently applied.
This view does have some potential problems. Accepting this view would seem to deny the existence of evil (or at least the sort involved with violent acts) in favor of people being mentally defective. This would also be to deny people moral agency, making humans things rather than people. However, the fact that something might appear undesirable does not make it untrue. Perhaps the world is, after all, brutalized by the mad rather than the evil.
An unsurprising narrative, put forth by Charles L. Cotton of the NRA, is that the Reverend Clementa Pinckney was to blame for the deaths because he was also a state legislator “And he voted against concealed-carry. Eight of his church members who might be alive if he had expressly allowed members to carry handguns in church are dead. Innocent people died because of his position on a political issue.” While it is true that Rev. Pinckney voted against a 2011 bill allowing guns to be brought into churches and day care centers, it is not true that Rev. Pinckney is responsible for the deaths. The reasoning in Cotton’s claim is that if Rev. Pinckney had not voted against the bill, then an armed “good guy” might have been in the church and might have been able to stop the shooter. From a moral and causal standpoint, this seems to be quite a stretch. When looking at the moral responsibility, it primarily falls on the killer. The blame can be extended beyond the killer, but the moral and causal analysis would certainly place blame on such factors as the influence of racism, the easy availability of weapons, and so on. If Cotton’s approach is accepted and broad counterfactual “what if” scenarios are considered, then the blame would seem to spread far and wide. For example, if the killer had been called out on his racism early on and corrected by his friends or relatives, then those people might still be alive. As another example, if the state had taken a firm stand against racism by removing the Confederate flag and boldly denouncing the evils of slavery while acknowledging its legacy, perhaps those people would still be alive.
It could be countered that the only thing that will stop a bad guy with a gun is a good guy with a gun and that it is not possible to address social problems except via the application of firepower. However, this seems to be untrue.
One intriguing narrative, most recently put forth by Jeb Bush, is the idea of an unknown (or even unknowable) motivation. Speaking after the alleged killer’s expressed motivations were known (he has apparently asserted that he wanted to start a race war), Bush claimed that he did not “know what was on the mind or the heart of the man who committed these atrocious crimes.” While philosophers do recognize the problem of other minds in particular and epistemic skepticism in general, it seems unlikely that Bush has embraced philosophical skepticism. While it is true that one can never know the mind or heart of another with certainty, the evidence regarding the alleged shooter’s motivations seems to be clear—racism. To claim that it is unknown, one might think, is to deny what is obvious in the hopes of denying the broader reality of racism in America. It can be replied that there is no such broader reality of racism in America, which leads to the last narrative I will consider.
The final narrative under consideration is that such an attack is an “isolated incident” conducted by a “lone wolf.” This narrative does allow that the “lone wolf” be motivated by racism (though, of course, one need not accept that motivation). However, it denies the existence of a broader context of racism in America—such as the Confederate flag flying proudly on public land near the capital of South Carolina. Instead, the shooter is cast as an isolated hater, acting solely from his own motives and ideology. This approach allows one to avoid the absurdity of denying that the alleged shooter was motivated by racism while denying that racism is a broader problem. One obvious problem with the “isolated incident” explanation is that incidents of violence against African Americans are more systematic than isolated—as anyone who actually knows American history will attest. In regards to the “lone wolf” explanation, while it is true that the alleged shooter seems to have acted alone, he did not create the ideology that seems to have motivated the attack. While acting alone, he certainly seems to be a member of a substantial pack, and that pack is still in the wild.
It can be replied that the alleged shooter was, by definition, a lone wolf (since he acted alone) and that the incident was isolated because there has not been a systematic series of attacks across the country. The lone wolf claim does certainly have appeal—the alleged shooter seems to have acted alone. However, when other terrorists attempt attacks in the United States, the narrative is that each act is part of a larger whole and not an isolated incident. In fact, some extend the blame to the religion and ethnic background of the terrorist, blaming all of Islam or all Arabs for an attack.
In the past, I have argued that the acts of terrorists should not confer blame on their professed religion or ethnicity. However, I do accept that a terrorist group (such as ISIS) merits some of the blame for the acts of its members. I also accept that groups that actively try to radicalize people and motivate them to acts of terror deserve some blame for these acts. Being consistent, I certainly will not claim that all or even many white people are racists or terrorists just because the alleged shooter is white. That would be absurd. However, I do accept that some of the responsibility rests with the racist community that helped radicalize the alleged shooter to engage in his act of terror.
Donald Trump declared his candidacy for president. So, what are his chances?
While the majority of scientists believe that genetically modified foods (or, more accurately, crops and animals) are safe for human consumption, there is considerable opposition to these genetically modified organisms. As might be suspected, this matter is philosophically interesting.
There are two stock moral arguments against such “tampering.” One is the playing God argument in which it is claimed that such modification is playing God and it is then argued (or simply asserted) that humans should not play God. A closely related argument is the unnatural argument. This argument works somewhat like the playing God argument, but involves arguing that because such modifications are unnatural, they are morally wrong. Rousseau famously lamented the horrible impact of advances in the arts and sciences—and he was writing when the height of technology included the musket.
One stock reply to the playing God argument is to show that people have been “playing God” in a similar manner and that this is morally acceptable. While the ability to directly manipulate genes is relatively new, humans have been engaging in genetic engineering via selective breeding since the dawn of agriculture. This has been done with plants and animals, both for those raised for food and those kept for other purposes. For example, the various breeds of dogs are the result of human engineering via selective breeding. So, humans have been playing God a very long time and if dogs are morally okay, then genetically modified crops do not seem to be a special moral problem. To use an analogy, if it is okay to make houses and structures by hand, then using power tools and construction equipment would not seem to make modern building methods morally wrong—the technology is just better.
A stock reply to the unnatural argument is to show that what is allegedly unnatural does occur in nature. For those who believe in evolution, the process of natural selection functions as a natural “engineer”, leading to changes in species and the creation of new species. In the case of genetic engineering, humans are doing what nature does—only faster and with a purpose. If this seems to be playing God, this takes the matter back to the playing God argument.
There are those who argue against genetic modification of food sources on the grounds that such foods are dangerous. This can be a reasonable concern and it is certainly wise to confirm that a modified food source is actually safe. As noted above, most scientists regard these modified food sources as safe for human consumption. This seems reasonable, provided that the food sources were tested for potential dangers, such as being toxic. Some people do express the concern that the modified genes will somehow pass from modified food sources into the genes of the people who eat them. Given the way digestion and genes work, this is extremely unlikely. After all, humans eat normal food that contains genetic material all the time, yet do not undergo mutation. For example, eating chicken does not cause a person to gain chicken genes. As such, genetically modified food sources do not seem to present a special danger, provided that they are tested to see whether the modifications had unintended and dangerous results (such as making a previously safe-to-eat plant poisonous to humans).
Some people are not especially worried about the genetic modifications themselves, but are worried about the use to which such modifications will be put by the agricultural corporations. This worry is not (in general) that corporations will make science fiction monsters. Rather, the concern is that the modifications will be used as a means to exploit farmers, especially those in developing countries, and to lock them into having to buy the seeds from the corporations year after year. For example, a company might develop a type of rice that can handle higher levels of salt and drier conditions very well and sell that to farmers who need such a plant because of the impact of climate change. Since the company owns the rights to the seeds, the farmers will need to buy from that company if they wish to keep growing rice.
In defense of the corporations, they could avail themselves of Locke’s argument: they are taking plants and animals from the common and “mixing their labor” with them, making these plants and animals their property. As such, they can insist on ownership rights and bring lawsuits against those who might, for example, try to create similar plants and animals. After all, one might argue, corporations have a right to make a profit and this right must be protected by the laws. It can also be argued that farmers can, in a free market, purchase seeds from another company. Surely, one might argue, farmers can easily find competing products at lower prices that are as good.
In any case, the corporation problem is not a problem inherent to the genetic modification of food sources, but rather a problem with the behavior of people. There are, in fact, researchers who are developing modified plants and animals that will be available to farmers and not owned by corporations.
Those who support genetically modified food sources do have a very good general argument. The argument is that genetic modification allows the creation of food sources that can solve various problems. As an example, a plant might be modified so that it can survive harsher environmental conditions than the original, while also being more resistant to pests and producing a greater crop yield. Since genetic engineering is faster, more reliable and more precise than the old method of selective breeding, it can produce positive results more effectively. Thus, on utilitarian grounds, genetic modification seems morally acceptable.
There are, of course, some potential harms in genetic modifications. While it is very unlikely that any science fiction disaster scenario will arise and play out, there is always the possibility of unintended consequences and these are worth considering—but in terms of their relative likelihood and not on the basis of the plots of bad science fiction.
In the previous essay I discussed gender nominalism—the idea that gender is not a feature of reality, but a social (or individual) construct. As such, a person falling within a gender class is a matter of naming rather than a matter of having objective features. In this essay I will not argue for (or against) gender nominalism. Rather, I will be discussing gender nominalism within the context of competition.
Being a runner, I will start with competitive sports. As anyone who has run competitively knows, males and females generally compete within their own sexes. So, for example, a typical road race will (at least) have awards for the top three males and also for the top three females. While individual males and females vary greatly in their abilities, males have a general physical advantage over females when it comes to running: the best male runner is significantly better than the best female runner and average male runners are also better than average female runners.
Given that males generally have an advantage over females in regards to running (and many other physical sports), it would certainly be advantageous for a male runner if the division was based on gender (rather than biological sex) and people could simply declare their genders. That is, a male could declare himself a woman and thus be more likely to do better relative to the competition. While there are those who do accept that people have the right to gender declare at will and that others are obligated to accept this, it seems clear that this would not be morally acceptable in sports.
The intent of dividing athletes by sex is to allow for fairer competition. This same principle, that of fairer competition, is also used to justify age groups—as any older runner knows, few things slow a person down like dragging many years. Because of this, a runner could, in general, gain an advantage by making a declaration of age identity (typically older). Perhaps the person could claim that he has always been old on the inside and that to refuse to accept his age identification would be oppression. However, this would be absurd: declaring an age does not change the person’s ability to compete and would thus grant an unfair advantage. Likewise, allowing a male to compete as a woman (or girl) in virtue of gender identification would be unfair. The declaration would not, obviously, change the person’s anatomy and physiology.
There are, however, cases that are much more controversial and challenging. These include cases in which a person has undergone a change in anatomy. While these cases are important, they go beyond the intended scope of this essay, which is gender nominalism.
Some competitions do not divide the competitors by sex. These are typically competitions where the physical differences between males and females do not impact the outcome. Some examples include debate, chess, spelling bees and NASCAR. In these cases, males and females compete equally and hence the principle of fairness justifies the lack of sex divisions. Some of these competitions do have other divisions. For example, spelling bees do not normally pit elementary school students against high school students. In such competitions, gender identification would seem to be irrelevant. As such, competitors should be free to gender identify as they wish within the context of the competition.
Interestingly, there are competitions where there appear to be no sex-based advantages (in terms of physical abilities), yet there are gender divisions. There are competitions in literature, music, and acting that are divided by gender (and some are open only to one gender). There are also scholarships, fellowships and other academic awards that are open only to one gender (in the United States, these are often limited to women).
Since being a biological male would seem to yield no advantage in such cases, the principle of fairness would not seem to apply. For example, the fact that males are generally larger and stronger would yield no advantage when it came to writing a novel, acting in a play, or playing a guitar. As such, it would seem that if people should be able to set their own gender identity, they should be able to do so for such competitions, thus enabling them to compete where they wish.
It could be argued that the principle of fairness would still apply—that biological males would still have an advantage even if they elected to identify as women for the competition. This advantage, it might be claimed, would be based in the socially constructed advantages that males possess. Naturally, it would need to be shown that a male who gender identifies as a woman for such competitions, such as receiving a women-only scholarship, would still retain the (alleged) male advantage.
It could also be argued that the divisions are not based on a principle of fairness regarding advantages or disadvantages. Rather, the divisions exist to give more people a chance of winning. This could be justified on the same grounds that justify having many award categories. For example, there is an award for best actor in a supporting role, which exists to create another chance for an actor to win something. If a person could simply declare a gender and become eligible, that would create an “imbalance,” much as it would be unfair to allow non-supporting actors to declare themselves supporting actors to get a shot at that award.
Of course, this seems to assume that there is a justified distinction between the genders that would ground the claims of unfairness. That is, it would be as wrong for a male to win best actress as it would be for a female screenwriter who never acted to win best actress for her screenplay. Or it would be as bad for a male to get a scholarship intended for a woman as it would be for a football player who cannot do math to get a math scholarship. This approach, which involves rejecting one form of gender nominalism (the version in which the individual gets to declare gender), is certainly an option. It would not, however, require accepting that gender is not a social construct—one could still be a gender nominalist of the sort who believes that gender classification is a matter of both individual declaration and acceptance by the “relevant community.” As such, the relevant communities could police their competitions. For example, those who dole out scholarships for women can define what it is to be a woman, so as to prevent non-women from getting those awards. This would, of course, seem to justify similar gender policing by society as a whole, which leads to some interesting problems about who gets to define gender identity. The usual answer people give is, of course, themselves.