Ann Coulter’s appearance at the University of California, Berkeley was cancelled in response to threats made by anarchist groups. While some conservatives argue that concerns about security should often trump concerns about rights (such as infringing on religious liberty or privacy to “make us safer”), two conservative organizations have started a lawsuit against the university. They claim that the school is endeavoring “to restrict conservative speech” on campus. Since Berkeley is a public school, the First Amendment does apply and hence the case can appeal to this constitutional right. While well-paid lawyers will hash out the legal matters, the cancellation does raise an interesting moral concern.
As I have shown in numerous other essays, I hold to a view of freedom of expression that goes far beyond the limited legal protection laid out in the First Amendment. I also hold to the freedom of consumption—that people have a right to, for example, hear whatever views they wish to hear. As such, Coulter has a right to express herself and the student organizations have the right to invite her so they can listen to whatever wicked or foolish things she might elect to spew forth.
Like many classical liberals, my go-to justification of these liberties is based on J.S. Mill’s arguments. The gist is that allowing people the liberty of expression and the liberty of consumption creates more happiness than restricting these liberties. Being a fan of natural rights, I also find appealing the idea that these rights have additional grounding beyond mere utility. I do, however, admit that such rights are certainly metaphysically suspect and difficult to properly ground in reality. In short, while I think that Coulter will say nothing worth hearing, she has every right to speak before the student groups that invited her.
I should note that my view of Coulter is not based on any notion that conservative political theory lacks merit; it is based on my view that she lacks merit. Unfortunately, thoughtful conservative political theorists seem to be out of vogue. This is unfortunate; the past saw many excellent conservative thinkers and they made significant contributions to political and philosophical thought. These days, there seem to be mostly just empty pundits spewing emptiness on Fox News. Or, worse, racists and sexists purporting to represent conservative thought. Then again, perhaps abandoning the intellectual aspects of politics was a smart tactical move: the left might have its intellectuals, but the right holds the power in most states. But, back to the matter at hand.
While I do accept the rights of expression and consumption, these rights are not absolute. If the justification for rights and liberties is taken to be utilitarian, then these rights can be limited on the same grounds. As such, if allowing the freedoms of expression and consumption would create more harm than restricting them, then they can be justly limited. The stock example is, of course, the restriction on people yelling “fire” in a crowded theater when there is no fire.
If a natural rights view is accepted, the restriction of a right can be justified by appealing to other rights. In the case of speech, the right to life would warrant preventing people from yelling “fire” in a crowded theater. The challenge is, of course, working out a hierarchy of rights. However, it does seem reasonable to make the right to life a rather important right, if only because being alive is generally a necessary condition for the other rights.
If having a person speak could put that person and others in danger, then this can justify postponing a speech until proper security arrangements can be made or even cancelling it if such arrangements cannot be made. This can be done by appealing to a utilitarian justification or by arguing that the right not to be harmed trumps the rights of free expression and free consumption. This is analogous to other cases in which liberty must be weighed against safety.
This does lead to the obvious concern that free expression and free consumption could thus be thwarted simply by threatening violence, thus giving individuals and groups willing to make threats considerable powers of censorship. One limiting factor is that making such threats is a crime. Unfortunately, the internet provides so many anonymous ways of making threats that the police face considerable challenges in dealing with them.
Deciding how to respond to credible threats of violence requires weighing the rights of expression and consumption against the harms that are likely to arise. As a general principle, it seems reasonable to accept that a speech should be postponed in the face of a credible threat that cannot be addressed in time. Such a credible threat should be dealt with by law enforcement and then the speech can be made. If the threat can be addressed so that an acceptable level of public safety is possible (within the available budget), then the speech should proceed normally. This approach can be easily justified on utilitarian grounds: people are kept reasonably safe while at the same time threats are prevented from becoming an effective tool of censorship. This does require that the state take such threats seriously and take appropriate action.
There is, of course, also the moral responsibility of those who make such threats: they are wrong to do this. If they do not like, for example, Coulter’s views, they should ask a campus group to invite them to campus so they can speak out against her views.
The question “when was the last battle of the Civil War fought?” is a trick question; the last battle has yet to be fought. One minor skirmish took place recently in New Orleans as the city began its removal of Confederate monuments. Fortunately, this skirmish has yet to result in any injuries or deaths, although the removal of the first monument looked like a covert military operation. Using equipment with hidden company names, the removal crews wore masks and body armor while operating under both the cover of darkness and police sniper protection. These precautions were deemed necessary because of threats made against workers. In addition to being controversial, such removals are philosophically interesting.
One general argument in favor of keeping such Confederate monuments in place is the historical argument: the monuments express and are part of history and their removal is analogous to tearing pages from the history books. This argument does have considerable appeal, at least in cases in which the monuments mark an historical event and stick to the facts. However, monuments tend to be erected to bestow honors and this goes beyond mere noting of historical facts.
One example of such a monument is the Battle of Liberty Place Monument. It was erected in New Orleans in 1891 to honor the 1874 battle between the Crescent City White League and the racially integrated New Orleans Metropolitan police and state militia. The monument was modified by the city in 1932 with a plaque expressing support for white supremacy. The monument was modified again in 1993 when a new plaque was placed over the 1932 plaque, commemorating all those who died in the battle.
From a moral perspective, the problem with this sort of monument is that it does not merely present a neutral historical marker, but endorses white supremacy and praises racism. As such, to keep the memorial in place is to state that the city currently at least tolerates white supremacy and racism. If these values are still endorsed by the city, then the monument should remain as an honest expression of these immoral values. That way people will know what to expect in the city.
However, if the values are no longer endorsed by the city, then it would seem that the monument should be removed. This would express the current views of the people of the city. It could be objected that such removal would be on par with purging historical records. Obviously, the records of the event should not be purged. It is, after all, a duty of history to record what has been and this can be done without praising (or condemning) what has occurred. In contrast, to erect and preserve an honoring monument is to take a stance on the matter—to praise or condemn it.
It could be argued that the 1993 change to the monument “redeems” it from its white supremacist and racist origins and, as such, it should be left in place. This does have some appeal, part of which is that the monument expresses the history of the (allegedly) changed values. To use an analogy, a building that once served an evil purpose can be refurbished and redeemed to serve a good purpose. This, it could be argued, sends a more powerful statement than simply razing the building.
However, the fact remains that the monument was originally created to honor white supremacy and the recent modification seems to be an effort to conceal this fact. As such, the right thing to do would seem to be to remove the monument. Since the monument does have historical significance, it would be reasonable to preserve it as such—historical artifacts can be kept without endorsing any values associated with the artifact. For example, keeping artifacts that belonged to Stalin as historically significant items is not to endorse Stalinism. Keeping a monument in a place of honor, however, does imply endorsement.
The matter can become more complicated in cases involving statues of individuals. In New Orleans, there are statues of General Robert E. Lee, Confederate President Jefferson Davis and General P.G.T. Beauregard. It cannot be denied that these were exceptional men who shaped the history of the United States. It also cannot be denied they possessed personal virtues. Lee, in particular, was by all accounts a man of considerable virtue. P.G.T. Beauregard went on to advocate for civil rights and voting rights for blacks (though some might say this was due to mere political expediency).
Given their historical importance and the roles they played, it can be argued that they were worthy of statues and that these statues should remain to honor them. The easy and obvious counter is that they engaged in treason against the United States and backed the wicked practice of slavery. As such, whatever personal virtues they might have possessed, they should not be honored for their role in the Confederacy. Statues that honor people who were Confederates but who did laudable things after the Civil War should, of course, be evaluated based on the merits of those individuals. But to honor the Confederacy and its support of slavery would be a moral error.
It could also be argued that even though the true cause of the Confederacy (the right of states to allow people to own other people as slaves) is wicked, people like Lee and Beauregard earned their statues and their honor. As such, it would be unjust to remove the statues because of the political sensibilities of today. After all, as it should be pointed out, there are statues that honor the slave owners Washington and Jefferson for their honorable deeds within the context of the dishonor of slavery. If the principle of removing monuments that honored those who supported a rebellion aimed at creating an independent slave-owning nation were strictly followed, then there would need to be a rather extensive purge of American monuments. If honoring supporters of slavery and slave owners is acceptable, then perhaps the removal of the statues of the heroes of the Confederacy could be justified on the grounds of their rebellion against the United States. This would allow for a principled distinction to be made: statues of slavery supporters and slave owners can be acceptable, as long as they were not rebels against the United States. Alternatively, the principle could be that statues of victorious rebel slavery supporters are acceptable, but those of losing rebel slavery supporters are not. Winning, it could be said, makes all the difference.
Dictatorships are built upon the moral defects of citizens. While it can be tempting to think that the citizens who enable dictatorships are morally evil, this need not be the case. Dictatorship does not require an actively evil population, merely a sufficient number who are morally defective in ways that makes them suitably vulnerable to the appeals of dictatorship.
While there are many paths to dictatorship, most would-be dictators make appeals to fear, hatred, willful ignorance, and irresponsibility. For these appeals to succeed, an adequate number of citizens must be morally lacking in ways that make them vulnerable to such appeals. As would be expected, the best defense against dictators is moral virtue—which is why would-be dictators endeavor to destroy such virtue. I will briefly discuss each of these appeals in turn and will do so in the context of an ethics of virtue.
For the typical virtue theorist, virtue is a mean between two extremes. For example, the virtue of courage is a mean between excessive bravery (foolhardiness) and a deficiency of bravery (cowardice). Being virtuous is difficult as it requires both knowledge of morality and the character traits needed to act in accord with that knowledge. For example, to be properly brave involves knowing when to act on that courage and having the character needed to either face danger resolutely or avoid it without shame. As should be expected, dictators aim at eroding both knowledge and character. It is to this that I now turn.
Fear is a very powerful political tool, for when people are afraid they often act stupidly and wickedly. Like all competent politicians and advertisers, would-be dictators are aware of the power of fear and seek to employ it to get people to hand over power. While dictators often have very real enemies and dangers to use to create fear, they typically seek to create fear that is out of proportion to the actual threat. For example, members of a specific religion or ethnicity might be built up to appear to be an existential threat when, in fact, they present little (or even no) actual threat.
Exploiting the fear of citizens requires, obviously enough, that the citizens are afraid. In the case of exaggerated threats, the fear of the citizens must be out of proportion to the threat—that is, they must have an excess of fear. The best defense against the tactic of fear is, obviously enough, courage. To the degree citizens have courage it is harder for a dictator (or would-be dictator) to scare them into handing over power. Even if the citizens are afraid, if their fear is proportionate to the threat, then it is also much harder for dictators to gain the power they desire (which tends to be more power than needed to address the threat).
Some might point to the fact that people can be very violent in service of dictators and thus would seem to be brave. After all, they can engage in battle. However, this is typically either the “courage” of the bully or the result of a greater fear of the dictator. That is, their cowardice in one area makes them “brave” in another. This is not true courage.
Dictators thus endeavor to manufacture fear and to create citizens who are lacking in true courage. Those who oppose dictators need to focus on developing courage in the citizens, for this provides the best defense against fear. Americans pride themselves on living in the land of the brave; if this is true, then it would help explain why America has not fallen into dictatorship. But, should America cease to be brave and submit to fear, then a dictatorship would seem all too likely.
It can be pointed out that some who back dictators seem to be driven by hate rather than fear. While this can be countered by contending that hate is most often based in fear, it can be accepted that hate is also a driving force that leads people to support dictators. Hate, like fear, is a powerful tool and leads people to act both stupidly and wickedly. While it can be argued that hate is always morally defective, it can also be contended that there is morally correct hate. For example, those who engage in terrible evil could be justly hated. Fortunately, I do not need to resolve the question of whether hate is always wrong or not; it suffices to accept that hate can be disproportionate—that is, that the hate can exceed the justification for the hate.
Dictators and would-be dictators, like almost all politicians, exploit this power of hate. As with fear, while there might be legitimate targets for hate, dictators tend to exaggerate hate and target for hate those who do not deserve to be hated. Homosexuals, for example, tend to be a favorite target for unwarranted hate.
The virtue that provides the best defense against excessive or unwarranted hatred is obviously tolerance. As such, it is no surprise that dictators endeavor to breed and strengthen intolerance in their citizens. This is aided by mockery of tolerance as weakness or as being “politically correct.” Racism and sexism are favorites for exploitation and would-be dictators can find these hatreds in abundance. As such, it is no surprise that dictators encourage racism, sexism and other such things while opposing tolerance.
This is not to say that tolerance is always good—there are things, such as dictators, that should not be tolerated. That said, tolerance is certainly a virtue that provides a defense against dictators and as such it should be properly cultivated in citizens. This does not require that people love or even like one another, merely that they be capable of tolerating the tolerable.
One concern about my approach is that I seem to have cast the supporters of would-be dictators as hateful cowards and this could be unfair. After all, it can be argued, some of their supporters might be operating from ignorance rather than malice. This is certainly a reasonable point.
Dictators, like most who love power, know that the ignorance of people is something that can be easily exploited. It is common to exploit such ignorance to generate hate and fear. For example, it is far easier to make people afraid of terrorism in the United States when those people do not know the actual threat posed by terrorism relative to other dangers. As another example, it is easier to get Americans to hate Muslims when they know little or nothing about the faith and its practitioners.
Those who are afraid or hateful because of ignorance can be excused to some degree, provided that they are not responsible for their ignorance. Willful ignorance, however, merely compounds the moral failing of those who hate and fear based on such ignorance.
Most virtue theorists, such as Confucius and Aristotle, regard knowledge as a virtue and hold that people are obligated to acquire knowledge. Knowledge is, obviously enough, the antidote to ignorance. While, as Socrates noted, our knowledge will always be dwarfed by our ignorance, willful ignorance is a vice. If someone is going to act on the basis of fear or hate, then they are morally obligated to determine if their fear or hate is warranted and to do so in a rational manner. To simply embrace a willful ignorance of the facts is to act wrongly and is something that dictators certainly exploit. This is why dictators and would-be dictators attack the free press, engage in systematic deceit, and often oppose education. This also contributes to creating citizens who are irresponsible.
A classic trait of a dictator is to claim that they are “the only one” who can get things done. Examples include claiming that they are the only one who can protect the people, that only they can fix our problems, and that only they know what must be done. In order for citizens to believe this, they must either be willfully ignorant or irresponsible. In the case of willful ignorance, the citizens would need to believe the obviously false claim that the dictator is the only person with the ability to accomplish the relevant goals. While there are some exceptional people and there must be someone who is best, there is no “the one” who is the sole savior of the citizens. In any case, a dictator obviously cannot be the only one who can get things done. If that were true, they would not need any followers, minions or others to do things for them. While this might be true of Superman, it is not true of any mere mortal dictator.
In the case of irresponsibility, the citizens would need to abdicate their responsibilities as citizens and turn over agency to the dictator. They would, in effect, revert back to the status of mere children and set aside the responsibilities of adulthood.
If the citizens were, in fact, incompetent human beings, then (as Mill argued in his work on liberty) a dictator would be needed to rule over them until they either achieved competence or perished. If the dictator took good care of them, this would be morally acceptable. If the citizens were not incompetent, then their abdication would be a failure of the virtue of responsibility. It is no coincidence that dictators typically cast themselves as father figures and the citizens as their children. They certainly hope that the citizens will cease to be proper adults and revert to the moral equivalent of children, thus falling into the vice of irresponsibility.
Thus, one of the best defenses against the rise of dictators is the development of virtue. Dictators are well aware of this and do their best to corrupt the citizens they hope will hand them power. While it is tempting to think that the United States can never fall into dictatorship, this is mere wishful thinking. The founders were well aware of this danger, which explains why they endeavored to make it hard for a dictator to arise. But the laws are only as strong and good as the people, which is why citizens need to be virtuous if tyranny is to be avoided.
There is a reasonable concern that can be raised in response to my view that the red line should be drawn at the murder of civilians rather than at murdering them with chemical weapons. This is the worry that my view would abandon the red line being drawn for using chemical weapons against civilians, thus creating a situation in which there are no red lines. This would be problematic because while the murder of civilians with conventional weapons is tolerable, crossing the red line of murdering civilians with chemical weapons does at least generate a response. Since some response from the world is better than no response, it is clearly better to have some viable red line rather than one everyone will simply ignore.
The tolerance of conventional murder alongside the opposition to chemical murder has resulted in some actions whose impact should be duly assessed, to get some picture of the value of the chemical red line.
One impact that has been touted by some former members of the Obama administration is the fact that Syria got rid of many of its chemical weapons because of pressure from the United States and the world. On the plus side, fewer chemical weapons entail less possible murder with chemical weapons, which is presumably a good thing. One obvious offset to this alleged good is the concern that the Syrian government simply filled the chemical kill gap with conventional killing, thus producing roughly the same number of deaths. If this is the case, then the focus on chemical weapons did not reduce the number of deaths.
This point could be countered by arguing that being murdered with chemical weapons is worse than being murdered with conventional weapons. The usual case for this is based on the claim that chemical weapons cause more suffering than conventional weapons. The usual response to this is that death by conventional weapon can be as or even more awful than death by chemical weapon. For example, a person who slowly dies while buried under the rubble caused by conventional bombs has certainly suffered more than a person who is killed almost immediately by a chemical weapon. As such, if death by chemical weapon is roughly equal to death by conventional weapon, then the difference between the two weapons is a difference that does not make a morally relevant difference. This can be illustrated by the following analogy.
Imagine that Sam is engaged in regularly murdering people in his neighborhood with guns. Those outside the neighborhood do not like this, but do nothing beyond Tweeting and posting about how awful Sam is. Then, one day, Sam tries a new type of murder: he poisons a few people. The neighbors are outraged and, after Tweeting with righteous fury, have their best shooter take a few rifle shots at where they think Sam keeps his poison. Sam goes back to murdering with his guns and the neighbors go back to occasionally tweeting about how bad Sam is. If this at least reduced Sam’s killing rate, this action would have some merit. But, if Sam just goes on killing with his guns, the action would have no meaningful impact and would have been pointless. At least beyond making the neighbors feel righteous for a short while.
The general point of this analogy shows the problem with how the chemical red line is used. To be specific, it allows for occasional and limited action when violations take place, then the offender merely returns to conventional murder. This does not address the real concern, namely that civilians are being murdered. As such, my original point seems to stand: the chemical weapon red line appears to create a moral space in which murder is tolerated while allowing a pretense of having meaningful moral standards. While this is presumably an awful thing to consider, having some red line might be worse than having no red line—after all, the chemical weapon red line enables the wicked self-deception that we are engaged in righteous action when in fact we are not. This allows us to salve our conscience and say “at least they are not being killed with chemical weapons” while we tolerate more murder.
Some years ago, I was firing my .357 magnum at an indoor range. This powerful pistol made a satisfying “bang” and hurled a piece of metal at lethal speeds towards the paper target. Then there was a much louder noise and I felt a “whuummmp” vibrating my ribcage. My friend Ron was firing his .44 magnum nearby, close enough for me to feel the shockwave from the weapon.
While the .44 magnum is a powerful handgun (just ask Dirty Harry), it is a mere peashooter compared to a weapon like the Carl Gustaf M3, a shoulder-fired heavy infantry weapon. When fired, this weapon generates a strong shockwave that might be causing brain injuries to the operators. While a proper scientific study has not been conducted on the effects of operating such weapons, it makes sense that they could cause such injuries. After all, the shockwave from the weapon is certainly analogous to that produced by other explosions, such as the IEDs that have caused terrible injuries. While IEDs certainly inflict wounds via their shrapnel and explosive burst, their shockwaves can also inflict brain damage without otherwise leaving a mark on the target.
The United States military had been gathering data using small blast gauges worn by soldiers. However, the use of the gauges was discontinued when it was claimed they could not consistently indicate when a soldier had been close enough to an explosion to suffer a concussion or mild traumatic brain injury. These gauges did, however, provide a wealth of information—including data that showed infantry operating heavy weapons were being repeatedly exposed to potentially dangerous levels of overpressure. Because such data could be used to link such exposure to long term health issues in soldiers, it might be suspected that the Pentagon stopped collecting data to avoid having to accept fiscal responsibility for such harms. This can, obviously enough, be seen as analogous to the NFL’s approach to concussions. This leads to some clear moral concerns about monitoring the exposure of operators and the use of heavy infantry weapons.
While it might seem awful, a moral argument can be made for not gathering data on soldiers operating heavy weapons. As noted above, if it were shown that being exposed to the overpressure of such weapons can cause brain injuries, then the state could incur the expenses associated with such responsibility. Without such data, the state can maintain that there is no proof of a connection and thus avoid such expenses. From a utilitarian standpoint, if the financial savings outweighed the harms done to the soldiers, then this would be the right thing to do. However, intentionally evading responsibility for harm does seem morally problematic, at best. It can also be countered that the benefits of being aware of the damage being done outweigh the benefits of an intentional ignorance. One obvious benefit is that such data could help mitigate or eliminate such damage and this seems morally superior to the intentional evasion by willful ignorance.
While there do seem to be steps that could be taken to minimize the damage done to troops operating heavy weapons (assuming there is such damage), it is likely that such damage cannot be avoided altogether. That is, there will always be some risk to the operators and those nearby. One technological solution would be to remotely operate heavy weapons (thus allowing the operator to be out of the damage zone). Another would be to automate such heavy weapons, thus taking humans out of the danger zone. Either of these options would increase the cost of the weapon system and would thus require weighing the financial cost against the wellbeing of soldiers. Fortunately, many of those who are fiscal conservatives when it comes to human wellbeing are fiscal liberals when it comes to corporate profits, so one way to sell the idea is to ensure that it would be profitable to corporations. There is also a moral argument that can be made for using the weapons as they are, even if they are harmful to the operators. It is to this that I now turn.
From a utilitarian standpoint, the ethics of exposing operators to damage from their own weapons would be a matter of weighing the harm done to the operators against the benefits of using such heavy weapons in combat. Infantry operated heavy weapons do seem to be very useful in combat. One obvious benefit of such weapons is that they allow infantry to engage vehicles, such as tanks and aircraft, with a reasonable chance of success. Taking on a tank or aircraft with light weapons generally does not turn out in the infantry’s favor. As such, if the choice is between risking some overpressure damage or facing a much greater risk of being killed by enemy vehicles, then the choice is obvious. As such, if the effectiveness of the weapon against the enemy adequately outweighs the risk to the operator, then it would be morally acceptable for the operators to take that risk. There is, however, still the question of the damage suffered during practice with the weapons.
The obvious way to argue that it is acceptable for troops to risk injury when training with heavy weapons is that they will need this practice to use the weapon effectively in combat. If they were to try to operate a heavy weapon without live practice, they would be far less likely to be effective and thus more likely to fail and be injured or killed by the enemy (or their own weapon). As such, the harm of going into battle without proper training morally outweighs the harm suffered by the operators in learning the weapon. This, of course, assumes that they are likely to end up in battle. If the training risks are taken and the training is not used, then the injury would have been for nothing—which takes this into the realm of considering odds in the context of ethics. One approach would be to scale training based on the likelihood of combat, scaling up if action is anticipated and keeping a minimal level when action is unlikely.
Making rational choices about the risks does, obviously enough, require knowing the risks. As such, there must be a proper study done of the risks of operating such weapons. Otherwise the moral and practical calculations would be essentially guessing, which is morally unacceptable.
I was recently interviewed by Jim Brown for The 180, a show on CBC Radio. The subject is the ethics of chemical weapons, specifically their use in Syria.
Here is the link:
When Obama was president, the “red line” he drew for the Syrian regime was the use of weapons of mass destruction, specifically chemical weapons. President Trump has also embraced the red line, asserting that Syria has gone “beyond a red line” with its recent use of chemical weapons. Trump has said that this attack changed his attitude towards Syria and Assad. Presumably the slaughter of civilians with conventional weapons did not cross the red line or impact his attitude very strongly. Those of a cynical bent might contend that the distinction between conventional and chemical weapons is accepted because it grants politicians the space needed to tolerate slaughter while being able to create the appearance of a moral stance. This moral stance is, of course, the condemnation of chemical weapons.
As I wrote in 2013, this red line policy involving chemical weapons seems to amount to saying “we do not like that you are slaughtering people, but as long as you use conventional weapons…well, we will not do much beyond condemning you.” This leads to the question I addressed then, which is whether chemical weapons are morally worse than conventional weapons.
Chemical weapons are clearly perceived as being worse than conventional weapons and their use in Syria has resulted in a level of outrage that the conventional killing has not. Some of the reasons for this perception are rooted in history.
World War I saw the first large-scale deployment of chemical weapons. While conventional artillery and machine guns did the bulk of the killing, gas attacks were regarded with a special horror. One reason was that the effects of gas tended to be rather awful, even compared to the wounds that could be inflicted by conventional weapons. This helped establish the feeling that chemical weapons are especially horrific and worse than conventional weapons.
There is also the ancient view that the use of poison is inherently evil or at least cowardly. After all, poison allows one to kill in secret and without taking the risk of facing an opponent in combat. In historical accounts and in fiction, poisoners are typically cast as villains. One excellent example of this is the use of poison in Shakespeare’s Hamlet. Even in games, such as Dungeons & Dragons, the use of poison is regarded as an inherently evil act. In contrast, killing someone with a sword or gun can be morally acceptable or even heroic. This view of poison as cowardly and evil seems to have infected the view of chemical weapons. This makes sense given that they are poisons.
Finally, there is the association of poison gas with the Nazi concentration camps. This connection has served to cement the association of chemical weapons with evil. While these explanations are psychologically interesting, they do not resolve the question of whether chemical weapons are morally worse than conventional weapons. It is to this issue that I now turn.
One good reason to regard chemical weapons as worse than conventional weapons is that they typically do not merely kill—they inflict terrible suffering. The basis of the difference is the principle that while killing is morally wrong, the method of killing is morally relevant to its wrongness. As such, the greater suffering inflicted by chemical weapons makes them morally worse than conventional weapons.
There are three counters to this. The first is that conventional weapons, such as bombs and artillery, can inflict horrific wounds matching the suffering inflicted by chemical weapons.
The second is that chemical weapons can be designed so that they kill quickly and with minimal suffering. An analogy can be drawn to capital punishment: lethal injection is regarded as morally superior to more conventional modes of execution such as hanging and firing squad. If the moral distinction is based on the suffering of the targets, then these chemical weapons would be morally superior to conventional weapons. Horrific chemical weapons would, of course, be worse than less horrific conventional (or chemical) weapons. As such, being a chemical weapon does not by itself make a weapon worse; the suffering it inflicts is what matters morally.
The third is that wrongfully harming people with conventional weapons is still evil. Even if it is assumed that chemical weapons are worse in terms of the suffering they cause, the moral red line should be the killing of people rather than killing them with chemical weapons. This is because the distinction between not killing people and killing them is greater than the distinction between killing people with conventional weapons and killing them with chemical weapons. For example, having soldiers kill everyone in a village using their rifles seems to be as morally wrong as using poison gas to kill everyone. The result is the same: mass murder.
In addition to supposedly causing more suffering than conventional weapons, chemical weapons are said to be worse because they are often indiscriminate and persistent. For example, a chemical weapon deployed as a gas can easily drift and spread into areas outside of the desired target and remain dangerous for some time after the initial attack. As such, chemical weapons are worse than conventional weapons because they harm and kill those who were not the intended targets.
The obvious counter to this is to note that conventional weapons can also be indiscriminate or persistent. While bombs and artillery shells can be accurate, they do still result in unintended casualties. They can also be used indiscriminately. Land mines present an excellent example of a conventional weapon that is both indiscriminate and persistent. Chemical weapons could be designed to have the same level of discrimination as conventional area-of-effect weapons (like bombs) and to be non-persistent (losing lethality rapidly). As such, it is discrimination and persistence that matter rather than the composition of the weapon.
While specific chemical weapons are worse than specific conventional weapons, chemical weapons are not inherently morally worse than conventional weapons. In fact, the claim of a moral distinction between conventional and chemical weapons can have terrible consequences: it allows a moral space in which to tolerate murder while maintaining the delusion of taking a meaningful moral stance.
As a professional philosopher, I am not inclined to believe in curses. However, my experiences over the years have convinced me that I am the victim of what I call the Curse of Springtime. As far as I know, this curse is limited to me and I do not want anyone to have the impression that I regard Springtime Tallahassee in a negative light. Here is the tale of the curse.
For runners, the most important part of Springtime is the Springtime 10K (and now the 5K). Since I moved to Tallahassee in 1993, I have had something bad happen right before or during the race. Some examples: one year I had a horrible sinus infection. Another year I had my first ever muscle pull. Yet another year I was kicking the kickstand of my Yamaha, slipped and fell, thus injuring my back. 2008 saw the most powerful manifestation of the curse.
On the Thursday before the race, my skylight started leaking. So, I (stupidly) went up to fix it. When I was coming down, the ladder shot out from under me. I landed badly and suffered a full quadriceps tendon tear that took me out of running for months. When Springtime rolled around in 2009, I believed that the curse might kill me and I was extra cautious. The curse seemed to have spent most of its energy on that injury, because although the curse did strike, it was minor. But, the curse continued: I would either get sick or injured soon before the race, or suffer an injury during the race. This year, 2017, was no exception. My knees and right foot started bothering me a week before the race and although I rested up and took care of myself, I was unable to run on Thursday. I hobbled through the 10K on Saturday, cursing the curse.
Since I teach critical thinking, I have carefully considered the Curse of Springtime and have found it makes a good example for applying methods of causal reasoning. I started with the obvious, considering that I was falling victim to the classic post hoc, ergo propter hoc (“after this, therefore because of this”). This fallacy occurs when it is uncritically assumed that because B follows A, A must be the cause of B. To infer that Springtime is causing my misfortunes just because something bad always happens as Springtime arrives would be to fall into this fallacy. To avoid this fallacy, I would need to sort out a possible causal mechanism—mere correlation is not causation.
One thing that might explain some of the injuries and illnesses is the fact that the race occurs at the same time each year. By the time Springtime rolls around, I have been racing hard since January and training hard as well—so it could be that I am always worn out at this time of year. As such, I would be at peak injury and illness vulnerability. On this hypothesis, there is no Curse—I just get worn down at the same time each year because I have the same sort of schedule each year. However, this explanation does not account for all the incidents—as noted above, I have also suffered injuries that had nothing to do with running, such as falls. Also, sometimes I am healthy and injury free before the race, then have something bad happen in the race itself. As such, the challenge is to find an explanation that accounts for all the adverse events.
It is certainly worth considering that while the injuries and illnesses can be explained as noted above, the rest of the incidents are mere coincidences: in the years when I was not otherwise ill or injured, something bad just happened to occur. While improbable, this is not impossible. That is, it is not beyond the realm of possibility for random mishaps to strike the same race year after year.
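The coincidence hypothesis can even be given rough numbers. Assuming, purely for illustration, that some mishap strikes any given race with a fixed independent probability p, the chance of a mishap in every one of n consecutive races is simply p raised to the nth power:

```python
# Rough odds of the "pure coincidence" hypothesis: if a mishap strikes any
# given race with independent probability p, the chance of a mishap in every
# one of n consecutive races is p**n. The value of p is a made-up assumption.

def prob_all_bad(p: float, n: int) -> float:
    return p ** n

years = 2017 - 1994 + 1   # 24 Springtime races, 1994 through 2017
print(prob_all_bad(0.25, years))   # ~3.55e-15
```

Even granting a rather unlucky one-in-four chance of trouble per race, an unbroken run of misfortune over two dozen races is staggeringly improbable, though, as noted, not strictly impossible.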
It is also worth considering that it only seems that there is a curse because I am ignoring the other bad races I have and considering only the bad Springtime races. If I have many bad races each year, it would not be unusual for Springtime to be consistently bad. Fortunately, I have records of all my races and can look at it objectively: while I do have some other bad races, Springtime is unique in that something bad has happened every year. The same is not true of any other races. As such, I do not seem to be falling into a sort of Texas Sharpshooter Fallacy by only considering the Springtime race data and not all my race data.
There is certainly the possibility that the Curse of Springtime is psychological: because I think something bad will happen, it becomes a self-fulfilling prophecy. Alternatively, it could be that because I expect something bad to happen, I carefully search for bad things and overestimate their badness, thus falling into the mistake of confirmation bias: Springtime seems cursed because I am actively searching for evidence of the curse and interpreting events in a way that supports the curse hypothesis. This is certainly a possibility and perhaps any race could appear cursed if one spent enough effort seeking evidence of an alleged curse. That said, there is no such consistent occurrence of unfortunate events for any other race, even those that I have run every year since I moved here. This inclines me to believe that there is some causal mechanism at play here. Or a curse. But, I am aware of the vagaries of chance and it could simply be an unfortunate set of coincidences that every Springtime since 1994 has seemed cursed. But, perhaps in 2018 everything will go well and I can dismiss my belief in the curse as mere superstition. Unless the curse kills me then. You know, because curse.
Showing the extent of their concern for the privacy of Americans, Congress has overturned rules aimed at giving consumers more control over how ISPs use their data. Most importantly, these rules would have required consent from customers before the ISPs could sell sensitive data (such as financial information, health information and browsing history). Assuming the sworn defender of the forgotten, President Donald Trump, signs the bill into law, ISPs will be able to monetize the private data of their customers.
While the ISPs obviously want to make more money, giving that as the justification for stripping away the privacy of customers would not make for effective rhetoric. Instead, proponents make the usual vague and meaningless references to free markets. Since there is no actual substance to these noises, they do not merit a response.
They also advance more substantial reasons, such as the claim that companies such as Facebook monetize private data, the assertion that customers will benefit and the claim that this will fuel innovation. I will consider each in turn.
On the one hand, the claim that other companies already monetize private data could be dismissed as a mere fallacy of appeal to common practice. After all, the fact that others are doing something does not entail that it is a good thing. On the other hand, this line of reasoning can be seen as a legitimate appeal to fairness: it would be unfair that companies like Google and Facebook get to monetize private data while ISPs do not get to do so. The easy and obvious counter to this is that consumers can easily opt out of Google and Facebook by not using their services. While this means forgoing some useful services, it is a viable option. In contrast, going without internet access is extremely problematic and customers have very few (if any) alternatives. Even if a customer can choose between two or more ISPs, it is likely that they will all want to monetize the customers’ private data—it is simply too valuable a commodity to leave on the table. While it is not impossible for an ISP to try to win customers by choosing to forgo selling their data, this seems unlikely—thus customers will generally be stuck with the choice of giving up the internet or giving up their privacy. Given the coercive advantage of the ISPs, it is up to the state to protect the interests of the citizens (just as the state protects ISPs).
The claim that the customers will benefit is hard to evaluate in the abstract. After all, it is not yet known what, if anything, the ISPs will provide in return for the data. Facebook and Google offer valuable services in return for handing over data; but customers already pay ISPs for their services. It might turn out that the ISPs will offer customers deals that make giving up privacy appealing—such as lowered costs. However, anyone familiar with companies such as Comcast will have no faith in this. As such, the overturning of the privacy rules will benefit ISPs but will most likely not benefit consumers.
While the innovation argument is deployed in almost any discussion of technology, allowing ISPs to sell private data does not seem to be an innovation, unless one just means “change” by “innovation.” It also seems unlikely to lead to any innovations for the customers, although the ISPs will presumably work hard to innovate in ways to process and sell data. This innovation would be good for the ISPs, but would not seem to offer anything to the customers, any more than innovations in processing and selling chickens benefit the chickens.
Defenders of the ISPs could make the case that the data belongs to the ISP rather than the customer, so they have the right to sell it. Laying aside the usual arguments about privacy rights and sticking to ownership rights, this claim is easily defeated by the following analogy.
Suppose that I rent an office and use it to conduct my business, such as writing my books. The owner has every right to expect me to pay my rent. However, they have no right to set up cameras to observe my work and interactions with people and then sell the information they gather as their own. That would be theft. In the case of the ISP, I am leasing access to the internet, but what I do in this virtual property belongs to me—they have no right of ownership to what I do. After all, I am doing all the labor. Naturally, I can agree to sell my labor; but this needs to be my choice. As such, when ISPs insist they have the right to sell customers’ private data, they are like landlords claiming they have a right to sell anything valuable they can learn by spying on their tenants. This is clearly wrong. Unfortunately, Congress belongs to the ISPs and not to the people.
President Trump assigned his son-in-law Jared Kushner to head up the effort to make the federal government more like a business. Trump has already been a leader in this effort by engaging in the same sort of nepotism that occurs in business. While it is certainly tempting to dismiss this appointment as more nepotism, it is worth considering whether government should be more like a business.
The idea that government should be more like a business is certainly appealing to those whose education, experience and values relate to business. It is natural for people to see the world through the lens of their experiences and education. It is also natural to want to apply the methods that one is most familiar with to as many areas as possible. For example, my education is in philosophy and I have extensive experience in critical thinking, logic and ethical reasoning. As such, I tend to see the world through the philosophical lens and I want to apply critical thinking, logic and ethical reasoning whenever I can. Likewise, those who are educated and experienced in business see the world through the business lens and wish to broadly apply their business skills and methods.
A reasonable case can be made as to why this business focused approach has some merit. One way to argue for this is to point out that many skills that are developed in the context of business can be applied to government. For example, negotiating and deal making skills can be applied to politics—although there are certainly differences between the specifics of each area. As another example, business leadership and management skills can also be applied in government, although there are clearly relevant differences between the two areas. It would thus be a mistake to claim that government is nothing like a business. That said, those enamored of business often make mistakes in their zeal to “businessform” government (that is, transform it into business).
One basic mistake is to think that just because there are positive qualities of business that are also positive qualities of government, making government more like a business will bring about those positive qualities. Obviously enough, making one thing more like another only results in positive qualities if they are made alike in those positive ways. Merely making them alike in other ways does not do this. To use an analogy, dressing like a runner makes one like a runner, but this does not confer the health benefits of running.
There is also the fact that although two things with similar positive qualities are similar in that respect, it does not follow that they are otherwise alike in relevant ways. For example, efficiency is a positive quality of business and government, but merely making government like business need not make it more efficient. There are, after all, businesses that are very inefficient.
Also, the fact that efficiency can be a positive quality of both business and government does not entail they are thus alike in other ways or that the way business is made more efficient is the way to make government more efficient. To illustrate, a business might be very efficient at exploiting customers and workers while enriching the stockholders, but that is presumably not the sort of efficiency one would aim for in government.
Avoiding this mistake involves resisting the mythology and fetishizing of “businessification” and giving due consideration to which skills, methods and approaches transfer well from business to government and which do not.
A second basic mistake is similar to that made by Ion in Plato’s dialogue Ion. The rhapsode Ion believes, at the start of the dialogue, that poets have knowledge and mastery about almost everything. His reasoning is that because poets write about, for example, military matters, they have an expertise in military matters. As such, poets should be able to teach people about these matters and serve as leaders in all these areas.
Socrates, as would be expected, shows that the poets (as poets) do not have such knowledge. The gist of his argument is that each area is mastered by mastering the subject of that area and all these areas “belong” to others and not to the poets. For example, knowledge of waging war belongs to soldiers. The poets touch but lightly on these other areas and understand only the appearances and not the depth. Socrates does note that a person can have multiple domains of mastery, so a medical doctor could, for example, also be skilled at mathematics or art history.
The error in the case of business is to think that because there are many types of business and almost everything has some connection to business, an alleged mastery of business confers mastery over all these things. However, business skills are rather distinct from the skills that are specific to the various types of businesses. To illustrate, while a manager might believe that their managing skills are universal, managing a software company does not confer software skills nor does managing a hospital confer medical skills. One might pick up skills and knowledge, but this would not be as a businessperson. After all, while a businessperson might be a runner, that does not make running a business skill. The fact that there are businesses associated with running, such as Nike, does not entail that skill in business thus confers skill in running. As such, for someone to think that business skills thus confer mastery over government would be a mistake. They might believe that they have such mastery because government interacts with business and some businesses do things like what government does, but they would be as mistaken as someone who thinks that because they manage a Nike outlet they are thus an athlete.