In February of 2015, Laura Kipnis’ essay “Sexual Paranoia Strikes Academe” was published in the Chronicle of Higher Education. Though potentially controversial in content, the essay was a rational and balanced consideration of campus codes regarding relationships between students and professors. In response to this essay, Kipnis was subjected to what she rightly calls a Title IX Inquisition.
While I will not be addressing the specifics of Kipnis’ essays, reading them caused me to consider the topic of university regulation of relations between professors and students. While the legal issues are certainly interesting, my main concern as a philosopher lies in the domain of ethics.
I will begin by getting the easy stuff out of the way. Since universities have an obligation to provide a safe environment conducive to learning, universities should have rules that forbid professors from sexually harassing students or pressuring them. Since universities also have an obligation to ensure that grades are assigned based on merit, they should also have rules that forbid exchanging goods or services (in this case, sexual services) in return for better grades. Crimes such as sexual assault and rape should be handled by the police—though universities should certainly have rules governing the employment of professors who are convicted of assaulting or raping anyone. Of course, since the professor would most likely be in prison, this would probably make continued employment rather difficult.
Somewhat less easy is the issue of whether universities should forbid consenting relationships between professors and students when the student is enrolled in the professor’s class or otherwise professionally under the professor (such as being an advisee, TA, or RA). There is certainly a legitimate concern about fairness. After all, if a student is sexually involved with a professor, then the student might have an unfair advantage relative to other students. I consider this to be distinct from the exchange of a grade for sexual favors—rather, this is a matter of such things as positive bias in favor of the student that results in special treatment. For example, a professor might grade her boyfriend’s paper much more leniently than those of other students.
While sexual relations can lead to bias, these are not the only relations that can have this effect. A professor who is friends with a student or related to a student can be subject to bias in favor of that student (as distinct from pure nepotism in which grades are simply handed out based on the relationship). So, if the principle justifying forbidding a professor from having in his class a student with whom he has a relationship is based on the potential for bias, then students who are friends, relatives or otherwise comparably connected to the professor would also need to be forbidden.
It can be argued that there is a relevant difference between sexual relations and non-sexual relations that would justify forbidding a professor from dating a student in her class, while still allowing her to have a friend or relative as a student. Alternatively, a university could simply place a general ban on professors having students with whom they have a potentially biasing relationship—be it sexual, platonic, or a family relationship. As a general policy, this does have some appeal on the grounds of fairness. It can, however, be countered on the grounds that a professional should be able to control her bias in regards to friends and family. This, of course, opens the door to the claim that a professional should also be able to control his bias in regards to a sexual relationship. However, many people would certainly be skeptical about that—and I recall from my own graduate school days the comments students would make about students who were sexually involved with their professor or TA. Put in polite terms, they expressed their skepticism about the fairness of the grading.
My considered view is a conditional one: if a professor can maintain her objectivity, then the unfairness argument would have no weight. However, there is the legitimate concern that some (or even many) professors could not maintain such objectivity, thus making a general rule forbidding relationships justifiable. After all, rules limiting behavior are not crafted with the best people in mind, but with those who are less than the best.
The fairness argument could not, of course, be used to justify forbidding professors from dating students who are not and will not be in their classes (or otherwise under them in a professional capacity). So, for example, if an engineering professor were to date an English Literature major who will never take any of the classes she teaches, then there would seem to be no basis in regards to fairness for forbidding this relationship. Since harassment and coercive relationships should be forbidden, there would thus seem to be no grounds for forbidding such a consensual relationship between two adults. However, there are those who argue that there are grounds for a general forbiddance.
There are, of course, practical reasons to have a general forbiddance of relationships between students and professors even when there is no coercion, no harassment, and no unfairness and so on. One reason is that relationships generally fail and often fail in dramatic ways—it could be problematic for a university to have such a dramatic failure play out on campus. Another reason is that such relationships can be a legal powder keg in terms of potential lawsuits against a university—as such, university administrators probably feel that their money and brand should be protected by forbidding any such relationships.
From a moral perspective, the concern is whether there are moral grounds for forbidding such relationships (other than, of course, a utilitarian argument about the potential for brand damage).
One stock argument is that there is always a power disparity between professors and students and this entails that all relationships are potentially coercive. Even if most professors would not consciously coerce a student, rules (as noted above) are not made for the best people. As such, the blanket ban on relationships is necessary to prevent any possibility of coercive relationships between students and professors.
It might be objected that a rule against coercive relationships would suffice and that if the professor has no professional relationship with the student, then they should be treated as adults. After all, the professor would seem to have no power at all over the student and coercion via professional position would not be a possibility. So, they should be free to have a relationship despite the worries of the “nanny” university.
It could be countered that a professor always has power over a student in virtue of being a professor—even when the professor has no professional relationship to the student. However, while a professor might have some “power” in regards to being older (usually), having some status, having more income (usually), and so on, these do not seem to be distinct from the “power” anyone could have over anyone else. That is, there seems to be nothing specific to being a professor that would give the professor power over the student that would make the relationship automatically coercive. As such, there would seem to be no grounds for forbidding the relationship.
It could be objected that students are vulnerable to the power of professors and lack the autonomy needed to resist this power. As such, the university must act in a paternalistic way and forbid all relationships—so as to protect the guileless, naïve and completely powerless students from the cunning, powerful predatory professors. This would be analogous to the laws that protect minors from adults—the minors cannot give informed consent. If college students are similarly vulnerable to professors, then the same sort of rule applies. Of course, if students are so vulnerable, then there should certainly be a reconsideration of the age of consent—increasing it to 23 might suffice. Then again, many students take six years to graduate, so perhaps it should be 24. There are also graduate students, so perhaps it should be extended to 30. Or even more—after all, a student could go to school at almost any age.
Unless it is assumed that students are powerless victims and professors are powerful predators, then a blanket ban on relationships seems morally unwarranted—at least on the grounds of forbidding relationships because of an assumption of coercion. However, there are other moral grounds for such rules—for example, a case can be made that dating students would be a violation of professionalism (on par with dating co-workers or clients). While the effect would be the same, the justification does seem to matter.
As part of my critical thinking class, I cover the usual topics of credibility and experiments/studies. Since people often find critical thinking a dull subject, I regularly look for real-world examples that might be marginally interesting to students. As such, I was intrigued by John Bohannon’s detailed account of how he “fooled millions into thinking chocolate helps weight loss.”
Bohannon’s con provides an excellent cautionary tale for critical thinkers. First, he lays out in detail how easy it is to rig an experiment to get (apparently) significant results. As I point out to my students, a small experiment or study can generate results that seem significant, but really are not. This is why it is important to have an adequate sample size—as a starter. What is also needed is proper control, proper selection of the groups, and so on.
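The statistical point above can be made concrete with a short simulation. It is a sketch of the general trick—measure many outcomes in tiny groups and report whichever one happens to look “significant”—and the group size, outcome count, and significance level used here are illustrative assumptions, not Bohannon’s exact protocol:

```python
import random

def permutation_p(a, b, n_perm=200):
    """Two-sided permutation-test p-value for a difference in group means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            extreme += 1
    return extreme / n_perm

random.seed(0)
n_subjects = 8      # tiny groups (illustrative)
n_outcomes = 18     # measure many things, report whichever looks "significant"
trials = 200
hits = 0
for _ in range(trials):
    # Both groups are drawn from the SAME distribution: there is no real effect.
    if any(permutation_p([random.gauss(0, 1) for _ in range(n_subjects)],
                         [random.gauss(0, 1) for _ in range(n_subjects)]) < 0.05
           for _ in range(n_outcomes)):
        hits += 1
false_positive_rate = hits / trials
print(f"At least one 'significant' outcome in {false_positive_rate:.0%} of studies")
```

With 18 independent outcomes each tested at the 0.05 level, chance alone produces at least one apparent “finding” in roughly 1 − 0.95¹⁸ ≈ 60% of such studies—which is exactly why adequate sample sizes and a single pre-specified outcome matter.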
Second, he provides a clear example of a disgraceful stain on academic publishing, namely “pay to publish” journals that do not engage in legitimate peer review. While some bad science does slip through peer review, these journals apparently publish almost anything—provided that the fee is paid. Since the journals have reputable-sounding names and most people do not know which journals are credible and which are not, it is rather easy to generate a credible-seeming journal publication. This is why I cover the importance of checking sources in my class.
Third, he details how various news outlets published or posted the story without making even perfunctory efforts to check its credibility. Not surprisingly, I also cover the media in my class both from the standpoint of being a journalist and being a consumer of news. I stress the importance of confirming credibility before accepting claims—especially when doing so is one’s job.
While Bohannon’s con does provide clear evidence of problems in regards to corrupt journals, uncritical reporting and consumer credulity, the situation does raise some points worth considering. One is that while he might have “fooled millions” of people, he seems to have fooled relatively few journalists (13 out of about 5,000 reporters who subscribe to the Newswise feed Bohannon used), and these seem to have been outlets like the Huffington Post and Cosmopolitan rather than what might be regarded as more serious health news sources. While it is not known why the other reporters did not run the story, it is worth considering that some of them did look at it critically and rejected it. In any case, the fact that a small number of reporters fell for a dubious story is hardly shocking. It is, in fact, just what would be expected given the long history of journalism.
Another point of concern is the ethics of engaging in such a con. It is possible to argue that Bohannon acted ethically. One way to do this is to note that using deceit to expose a problem can be justified on utilitarian grounds. For example, it seems morally acceptable for a journalist or police officer to use deceit and go undercover to expose criminal activity. As such, Bohannon could contend that his con was effectively an undercover operation—he and his fellows pretended to be the bad guys to expose a problem and thus his deceit was morally justified by the fact that it exposed problems.
One obvious objection to this is that Bohannon’s deceit did not just expose corrupt journals and incautious reporters. It also misinformed the audience who read or saw the stories. To be fair, the harm would certainly be fairly minimal—at worst, people who believed the story would consume dark chocolate and this is not exactly a health hazard. However, intentionally spreading such misinformation seems morally problematic—especially since story retractions or corrections tend to get far less attention than the original story.
One way to counter this objection is to draw an analogy to the exposure of flaws by hackers. These hackers reveal vulnerabilities in software with the stated intent of forcing companies to address the vulnerabilities. Exposing such vulnerabilities can do some harm by informing the bad guys, but the usual argument is that this is outweighed by the good done when the vulnerability is fixed.
While this does have some appeal, there is the concern that the harm done might not outweigh the good done. In Bohannon’s case it could be argued that he has done more harm than good. After all, it is already well-established that the “pay to publish” journals are corrupt, that there are incautious journalists and credulous consumers. As such, Bohannon has not exposed anything new—he has merely added more misinformation to the pile.
It could be countered that although these problems are well known, it does help to continue to bring them to the attention of the public. Going back to the analogy of software vulnerabilities, it could be argued that if a vulnerability is exposed, but nothing is done to patch it, then the problem should be brought up until it is fixed, “for it is the doom of men that they forget.” Bohannon has certainly brought these problems into the spotlight and this might do more good than harm. If so, then this con would be morally acceptable—at least on utilitarian grounds.
While casting Democrats as wanting to impose the power of big government, the Republicans profess to favor small government and local control. However, as J.S. Mill noted, people rarely operate on the basis of consistently applied principles regarding what the state should or should not do. As such, it is hardly surprising that Republicans are for local control, except when the locals are not doing what they want. Then they are often quite willing to use the power of the state against local government. One recent and clear example of this is the passage of laws in states such as Oklahoma and Texas that effectively forbid local governments from passing laws aimed at restricting fracking.
Even in oil industry friendly states such as Oklahoma, there have been attempts by local governments to impose restrictions on fracking. As might be imagined, having a fracking operation right next door tends to be disruptive—the lights, noise, heavy truck traffic and contamination are all concerns. In Oklahoma there is also the added concern of earthquakes that have been causally linked to disposal wells. Since places that did not have earthquakes before the wells were dug generally do not have earthquake resistant structures, these new quakes can pose threats to property and public safety.
In general, local governments have stepped in because the local people believed that the state government was not doing enough to protect the well-being of the local citizens. In general, state legislatures tend to be very friendly with the oil and gas industry—in part because the industry tends to make up a significant proportion of the economy of many states. While lobbying state legislatures is not cheap, it is obviously more cost effective to have the state legislatures pass laws forbidding local governments from acting contrary to the interests of the oil and gas industry. Otherwise, the industry would need to influence (or purchase) all the local governments, and this would be costly and time-consuming.
Since I favor individual autonomy, it is hardly surprising that I also favor local autonomy. As such, I regard these laws to be wrong. However, considering arguments for and against them is certainly worthwhile.
One obvious set of arguments to deploy against these laws are all the general arguments that Republicans advance in favor of local control when the locals are doing what Republicans want them to do. After all, if these arguments adequately show that local control is good and desirable, then these arguments should apply to this situation as well. But, as noted above, the “principle” most follow is that the locals should do what one wants them to do and not do what one does not want them to do. Consistency is thus rather rare—and almost unseen when it comes to politics.
One argument in favor of having the state impose on the local governments is based on the fact that having a patchwork of laws is problematic. The flip side of this is, obviously, that having a consistent set of laws across the state (and presumably the entire country) is generally a good thing.
In the case of the regulation of the oil and gas industry, the argument rests on the claim that having all these different local laws would be confusing and costly—it is better to have laws for the industry that cover the entire state (and, to follow the logic, the entire country…or world). Interestingly, when the EPA advanced a similar argument for regulating water, the Republicans rushed to attack. Once again, this is hardly a shock: the patchwork argument is not applied consistently, just when a party wants to prevent local control.
Applied consistently, the patchwork argument certainly has its appeal. After all, it is true that having laws vary with each locality can be rather confusing and can have some negative consequences. For example, if the color of traffic lights was set by localities and some decided to go with different colors, then there would be problems. As another example, if some local governments refused to recognize same-sex marriage when it is legal in the state, this could lead to various legal problems (such as inheritance issues or hospital visitation rights). As such, there seem to be good reasons to have a unified set of laws rather than a patchwork.
That said, it can be argued that the difficulties of the patchwork can be outweighed by other factors. In general terms, one can always apply a utilitarian argument. If it can be shown that allowing local autonomy on a matter creates more good than the harm created by having a patchwork of laws, then that would be an argument in favor of local autonomy in this matter. In the case of local control of the gas and oil industry, this would be a matter of weighing the harms and the benefits to all those involved (and not just the oil and gas industry shareholders). I am inclined to think that allowing local control would create more good than harm, but I could be wrong about this. Perhaps the benefits to the state as a whole outweigh the damage done locally—that is, the few must sacrifice for the many (albeit against their will). But perhaps the many are suffering for the few stockholders, which would seem to be wrong.
Another moral argument worth considering is the matter of property rights. In the case of fracking, the oil and gas companies do own the mineral rights. As such, they do have legitimate property rights to the resources located under the property in question. However, the people who own the property above the minerals also have rights. These presumably include a right to safety from environmental contamination, a right to not have their property values degraded, a right to a certain quality of life in regards to noise and light, and so on for other rights. The moral challenge is, obviously enough, balancing these rights against each other. Working this out is, in the practical sense, a matter of politics.
Since local governments tend to be more responsive to locals than the state government, it could be argued that they would be biased against the oil and gas industry and hence this matter should be settled by the state to avoid an unfair resolution. However, it can be argued that state governments are often influenced (or owned) by the oil and gas industry. This would seem to point towards the need for federal regulation of the matter (assuming that the federal government is more objective)—which is something that Republicans tend to oppose, despite it being the logical conclusion of their argument against local control. Interestingly, arguments advanced to claim that the federal government should not impose on the local control of the states would seem to apply to the local government. That is, if the federal government should not be imposing on the states, then the states should not be imposing on the local governments.
The police shooting of unarmed black Americans has raised the question of why such shootings occurred. While some have rushed to claim that it is a blend of racism and brutality, the matter deserves careful consideration.
While there are various explanations, the most plausible involves a blend of factors. The first, which does have a connection to racism, is the existence of implicit bias. Studies involving simulators have found that officers are more likely to use force against a black suspect than a white suspect. This has generally been explained in terms of officers having a negative bias in regards to blacks. What is rather interesting is that these studies show that even black and Hispanic officers are more likely to use force against black suspects. Also interesting is that studies have shown that civilians are more likely than officers to use force in the simulators and also show more bias in regards to race.
One reason why an implicit bias can lead to a use of force is that it impacts how a person perceives another’s actions and the perception of objects. When a person knows she is in a potentially dangerous situation, she is hyper vigilant for threats and is anticipating the possibility of attack. As such, a person’s movements and any object he is wielding will be seen through that “threat filter.” So, for example, a person reaching rapidly to grab his wallet can easily be seen as grabbing for a weapon. Perceptual errors, of course, occur quite often—think of how people who are afraid of snakes often see every vine or stick as a snake when walking in the woods. These perceptual errors also help explain shootings—a person can honestly think they saw the suspect reaching for a weapon.
Since the main difference between the officers and the civilians is most likely the training police receive, it seems reasonable to conclude that the training is having a positive effect. However, the existence of a race disparity in the use of force does show that there is still a problem to address. One point of concern is that the bias might be so embedded in American culture that training will not eliminate it. That is, as long as there is racial bias in the society, it will also infect the police. As such, eliminating the bias in police would require eliminating it in society as a whole—which goes far beyond policing.
A second often mentioned factor is what some call the “warrior culture.” Visually, this is exemplified by the use of military equipment, such as armored personnel carriers, by the police. However, the warrior culture is not primarily a matter of equipment, but of attitude. While police training does include conflict resolution skill training, there is a significant emphasis on combat skills, especially firearms. On the one hand, this makes sense—people who are going to be using weapons need to be properly trained in their use. On the other hand, there are grounds for being concerned with the fact that there is more focus on combat training relative to the peaceful resolution of conflicts.
Since I have seen absurd and useless “training” in conflict resolution, I do get that there would be concerns about such training. I also understand that conflict resolution is often cast in terms of “holding hands and drinking chamomile tea together” and hence is not always appealing to people who are interested in police work. However, it does seem to be a critical skill. After all, in a crisis people fall back on habit and training—and if people are trained primarily for combat, they will fall back on that. Naturally, there is the worry that too much emphasis on conflict resolution could put officers in danger—so that they keep talking well past the point at which they should have started shooting. However, this is a practical matter of training that can be addressed.

A critical part of conflict resolution training is also what Aristotle would regard as moral education: developing the character to know when and how to act correctly. As Aristotle said, it is easy to be angry, but it is hard to be angry at the right time, for the right reasons, towards the right people and to the right degree. As Aristotle also said, this is very hard and most people are rather bad at this sort of thing, including conflict resolution. This does present a challenge even for a well-trained officer—the person she is dealing with is probably horrible at conflict resolution. One possible solution is training for citizens—not in terms of just rolling over for the police, but in interacting with the police (and each other). Expecting the full burden of conflict resolution to fall upon the police certainly seems unfair and is also unlikely to be a successful strategy.
The final factor I will consider is the principle of the primacy of officer survival. One of the primary goals of police training and practice is officer survival. It would, obviously, be absurd to claim that police should not be trained in survival or that police practices should not put an emphasis on the survival of officers. However, there are legitimate concerns about ways of training officers, the practice of law enforcement and the attitude that training and practice create.
Part of the problem, as some see it, links to the warrior mentality. The police, it is claimed, are trained to regard their job as incredibly dangerous and policing as a form of combat mission. This, obviously enough, shapes the reaction of officers to situations they encounter, which ties into the matter of perceptual bias. If a person believes that she is going out into a combat zone, she will perceive people and actions through this “combat zone filter.” As such, people will be regarded as more threatening, actions will be more likely to be interpreted as hostile and objects will be more likely to be seen as weapons. As such, it certainly makes sense that approaching officer survival by regarding police work as a combat mission would result in more civilian casualties than would different approaches.
Naturally, it can be argued that officers do not, in general, have this sort of “combat zone” attitude and that academics are presenting the emphasis on survival in the wrong sort of light. It can also be argued that the “combat zone” attitude is real, but is also correct—people do, in fact, target police officers for attack and almost any situation could turn into a battle for survival. As such, it would be morally irresponsible to not train officers for survival, to instill in them a proper sense of fear, and to engage in practices that focus primarily on officers making it home at the end of the shift—even if this approach results in more civilian deaths, including the deaths of unarmed civilians.
This leads to a rather important moral concern, namely the degree of risk a person is obligated to take in order to minimize the harm to another person. This matter is not just connected to the issue of the use of force by police, but also the broader issue of self-defense.
I do assume that there is a moral right to self-defense and that police officers do not lose this right when acting in their professional capacity. That is, a person has a right to harm another person when legitimately defending her life, liberty or property against an unwarranted attack. Even if such a right is accepted, there is still the question of the degree of force a person is justified in using and to what extent a person should limit her response in order to minimize harm to the attacker.
In terms of the degree of force, the easy and obvious answer is that the force should be proportional to the threat but should also suffice to end the threat. For example, when I was a boy I faced the usual attacks of other boys. Since these attacks just involved fists and grappling, a proportional response was to hit back hard enough to make the other boy stop. Grabbing a rock, a bat or pulling a knife would be disproportional. As another example, if someone is shooting at a police officer, then she would certainly be in the right to use her firearm since that would be a proportional response.
One practical and moral concern about the proportional response is that the attacker might escalate. For example, if Bob swings on Mary and she lands a solid punch to his face, he might pull out a knife and stab her. If Mary had simply shot Bob, she would have not been stabbed because Bob would be badly wounded or dead. As such, some would argue, the response to an attack should be disproportional. In terms of the moral justification, this would rest on the fact that the attacker is engaged in an unjust action and the person attacked has reason to think, as Locke argued, that the person might intend to kill her.
Another practical and moral concern is that if the victim “plays fair” by responding in a proportional manner, she risks losing the encounter. For example, if Bob swings on Sally and Sally sticks with her fists, Bob might be able to beat her. Since dealing with an attacker is not a sporting event, the idea of “fair play” seems absurd—hence the victim has the moral right to respond in a disproportional manner.
However, there is also the counter-concern that a disproportional response would be excessive in the sense of being unnecessary. For example, if Bob swings at Sally and Sally shoots him four times with a twelve gauge, Sally is now safe—but if Sally could have used a Taser to stop Bob, then the use of the shotgun would seem to be wrong—after all, she did not need to kill Bob in order to save herself. As such, it would seem reasonable to hold to the moral principle that the force should be sufficient for defense, but not excessive.
The obvious practical challenge is judging what would be sufficient and what would be excessive. Laws that address self-defense issues usually leave this very vague: a person can use deadly force when facing a “reasonable perceived threat.” That is, the person must have a reasonable belief that there is a threat—there is usually no requirement that the threat be real. To use the stock example, if a man points a realistic looking toy gun at an officer and says he is going to kill her, the officer would have a reasonable belief that there is a threat. Of course, there are problems with threat assessment—as noted above, implicit bias, warrior mentality and survival focus can cause a person to greatly overestimate a threat (or see one where it does not exist).
The challenge of judging sufficient force in response to a perceived threat is directly connected with the moral concern about the degree of risk a person is obligated to face in order to avoid (excessively) harming another person. After all, a person could “best” ensure her safety by responding to every perceived threat with maximum lethal force. If she responds with less force or delays her response, then she is at ever increasing risk. If she accepts too little risk, she would be acting wrongly towards the person threatening her. If she accepts too much risk, she would be acting wrongly towards herself and anyone she is protecting.
A general and generic approach would be to model the obligation of risk on the proportional response approach. That is, the risk one is obligated to take is proportional to the situation at hand. This then leads to the problem of working out the details of the specific situation—which is to say that the degree of risk would seem to rest heavily on the circumstances.
However, there are general factors that would impact the degree of obligatory risk. One would be the relation between the people. For example, it seems reasonable to hold that people have greater obligations to accept risk to avoid harming people they love or care about. Another factor that seems relevant is the person’s profession. For example, soldiers are expected to take some risks to avoid killing civilians—even when doing so puts them in some danger. To use a specific example, soldiers on patrol could increase their chance of survival by killing any unidentified person (adult or child) that approaches them. However, being a soldier and not a killer requires the soldiers to accept some risk to avoid murdering innocents.
In the case of police officers it could be argued that their profession obligates them to take greater risks to avoid harming others. Since their professed duty is to serve and protect, it can be argued that the survival of those they are supposed to protect should be given equal weight to the survival of the officer. That is, the focus should be on everyone going home. In terms of how this would be implemented, the usual practice would be training and changes to rules regarding use of force. Limiting officer use of force can be seen as generating greater risk for the officers, but the goal would be to reduce the harm done to civilians. Since the police are supposed to protect people, they are (it might be argued) under greater obligation to accept risk than civilians.
One obvious reply to this is that many officers already have this view—they take considerable risks to avoid harming people, even when they would be justified in using force. These officers save many lives—although sometimes at the cost of their own. Another reply is that this sort of view would get officers killed because they would be too concerned about not harming suspects and not concerned enough about their own survival. That is a reasonable concern—there is the challenge of balancing the safety of the public and the safety of officers.
Spoiler Alert: Details of the Season 1 finale of The Flash are revealed in this post.
Philosophers often make use of fictional examples in order to discuss ethical issues. In some cases, this is because they are discussing hypotheticals and do not have real examples to discuss. For example, discussions of the ethics of utilizing artificial intelligences are currently purely hypothetical (as far as we know). In other cases, this is because a philosopher thinks that a fictional case is especially interesting or simply “cool.” For example, philosophers often enjoy writing about the moral problems in movies, books and TV shows.
The use of fictional examples can, of course, be criticized. One stock criticism is that there are a multitude of real moral examples (and problems) that should be addressed. Putting effort into fictional examples is a waste of time. To use an analogy, it would be like spending time worrying about getting more gold for a World of Warcraft character when one does not have enough real money to pay the bills.
Another standard criticism focuses on the fact that fictional examples are manufactured. Because they are made up rather than “naturally” occurring, there are obvious concerns about the usefulness of such examples and to what extent the scenario is created by fiat. For example, when philosophers create convoluted and bizarre moral puzzles, it is quite reasonable to consider whether or not such a situation is even possible.
Fortunately, a case can be made for the use of fictional examples in discussions about ethics. Examples involving what might be (such as artificial intelligence) can be defended on the practical ground that it is preferable to discuss the matter before the problem arises rather than trying to catch up after the fact. After all, planning ahead is generally a good idea.
The use of fictional examples can also be justified on the same grounds that sports and games are justified—they might not be “useful” in a very limited and joyless sense of the term, but they can be quite fun. If poker, golf, or football can be justified on the basis of enjoyment, then so too can the use of fictional examples.
A third justification for the use of fictional examples is that they can allow the discussion of an issue in a more objective way. Since the example is fictional, it is less likely that a person will have a stake in the made-up example. Fictional examples can also allow the discussion to focus more on the issue as opposed to other factors, such as the emotions associated with an actual event. Of course, people can become emotionally involved in fictional examples. For example, fans of a particular movie character might be quite emotionally attached to that character.
A fourth reason is that a fictional example can be crafted to be an ideal example, to lay out the moral issue (or issues) clearly. Real examples are often less clear (though they do have the advantage of being real).
In light of the above, it seems reasonable to use fictional examples in discussing ethical issues. As such, I will move on to my main focus, which is discussing whether the Flash is morally worse than the Reverse Flash on CW’s show The Flash.
For those not familiar with the characters or the show, the Flash is a superhero whose power is the ability to move incredibly fast. While there have been several versions of the Flash, the Flash on the show is Barry Allen. As a superhero, the Flash has many enemies. One of his classic foes is the Reverse Flash. The Reverse Flash is also a speedster, but he is from the future (relative to the show’s main “present” timeline). Whereas the Flash’s costume is red with yellow markings, the Reverse Flash’s costume is yellow with red markings. While Barry is a good guy, Eobard Thawne (the Reverse Flash) is a super villain.
On the show, the Reverse Flash travels back in time to kill the young Barry before he becomes the Flash—with the intent of winning the battle before it even begins. However, the Flash also travels back in time to thwart the Reverse Flash and saves his past self. Out of anger, the Reverse Flash murders Barry’s mother but finds that he has lost his power. Using some creepy future technology, the Reverse Flash steals the life of the scientist Harrison Wells and takes on his identity. Using this identity, he builds the particle accelerator he needs to get back to the future and ends up, ironically, needing to create the Flash in order to get back home. The early and middle episodes of the show are about how Barry becomes the Flash and his early career in fighting crime and poor decision making.
In the later episodes, the secret of the Reverse Flash is revealed and Barry ends up defeating him in an epic battle. Before the battle, “Wells” makes the point that he has done nothing more and nothing less than what he has needed to do to get home. Interestingly, while the Reverse Flash is ruthless in achieving his goal of returning to his own time and regaining the friends, family and job he has lost, he is generally true to that claim and only harms people when he regards it as truly necessary. He even expresses what seems to be sincere regret when he decides to harm those he has befriended.
While the details are not made clear, he claims that the future Flash has wronged him terribly and he is acting from revenge, to undo the wrong and to return to his own time. While he does have a temper that drives him to senseless murder, when he is acting rationally he acts consistently with his claim: he does whatever it takes to advance his goals, but does not go beyond that.
While the case of the Reverse Flash is fictional, it does raise a real moral issue: is it morally right to harm people in order to achieve one’s goals? The answer depends, obviously, on such factors as the goals and what harms are inflicted on which people. While the wrong allegedly done to the Reverse Flash has not been revealed, he does seem to be acting selfishly. After all, he got stuck in the past because he came back to kill Barry and then murders people when he thinks he needs to do so to advance his plan of return. Kant would, obviously, regard the Reverse Flash as evil—he regularly treats other rational beings solely as means to achieving his ends. He also seems evil on utilitarian grounds—he ends numerous lives and creates considerable suffering so as to achieve his own happiness. But, this is to be expected: he is a supervillain. However, a case can be made that he is morally superior to the Flash.
In the season one finale, the Reverse Flash tells Barry how to travel back in time to save his mother—this involves using the particle accelerator. There are, however, some potential problems with the plan.
One problem is that if Barry does not run fast enough to open the wormhole to the past, he will die. Risking his own life to save his mother is certainly commendable.
A second problem is that if Barry does go back and succeed (or otherwise change things), then the timeline will be altered. The show has established that a change in the past rewrites history (although the time traveler remembers what occurred)—so going back could change the “present” in rather unpredictable ways. Rewriting the lives of people without their consent certainly seems morally problematic, even if it did not result in people being badly harmed or killed. Laying aside the time-travel aspect, the situation is one in which a person is willing to change, perhaps radically, the lives of many people (potentially everyone on the planet) without their consent just to possibly save one life. On the face of it, that seems morally wrong and rather selfish.
A third problem is that Barry has under two minutes to complete his mission and return, or a singularity will form. This singularity will, at the very least, destroy the entire city and could destroy the entire planet. So, while the Reverse Flash was willing to kill a few people to achieve his goal, the Flash is willing to risk killing everyone on earth to save his mother. On utilitarian grounds, that seems clearly wrong. Especially since even if he saved her, the singularity could just end up killing her when the “present” arrives.
Barry decides to go back to try to save his mother, but his future self directs him to not do so. Instead he says good-bye to his dying mother and returns to the “present” to fight the Reverse Flash. Unfortunately, something goes wrong and the city begins to be sucked up into a glowing hole in the sky. Since skyscrapers are being ripped apart and sucked up, presumably a lot of people are dying.
While the episode ends with the Flash trying to close the hole, it should be clear that he is at least as bad as the Reverse Flash, if not worse: he was willing to change, without their consent, the lives of many others and he was willing to risk killing everyone and everything on earth. This is hardly heroic. So, the Flash would seem to be rather evil—or at least horrible at making moral decisions.
“The road to the White House is not just any road. It is longer than you’d think and a special fuel must be burned to ride it. The bones of those who ran out of fuel are scattered along it. What do they call it? They call it ‘money road.’ Only the mad ride that road. The mad or the rich.”
While some countries have limited campaign seasons and restrictions on political spending, the United States follows its usual exceptionalism. That is, the campaign seasons are exceptionally long and exceptional sums of money are required to properly engage in such campaigning. The presidential campaign, not surprisingly, is both the longest and the most costly. The time and money requirements put rather severe restrictions on who can run a viable campaign for the office of President.
While the 2016 Presidential election takes place in November of that year, as of May 2015 a sizable number of candidates have declared that they are running. Campaigning for President is a full-time job and this means that a person who is running must either have no job (or other comparable restrictions on her time) or have a job that permits her to campaign full time.
It is not uncommon for candidates to have no actual job. For example, Mitt Romney did not have a job when he ran in 2012. Hillary Clinton also does not seem to have a job in 2015, aside from running for President. Not having a job does, obviously, provide a person with considerable time in which to run for office. Those people who do have full-time jobs and cannot leave them cannot, obviously enough, make an effective run for President. This certainly restricts who can make an effective run for President.
It is very common for candidates to have a job in politics (such as being in Congress, being a mayor or being a governor) or in punditry. Unlike most jobs, these jobs apparently give a person considerable freedom to run for President. Someone more cynical than I might suspect that such jobs do not require much effort or that the person running is showing he is willing to shirk his responsibilities.
On the face of it, it seems that only those who do not have actual jobs or do not have jobs involving serious time commitments can effectively run for President. Those who have such jobs would have to make a choice—leave the job or not run. If a person did decide to leave her job to run, she would need to have some means of support for the duration of the campaign—which runs over a year. Those who are not independent of job income, unlike Mitt Romney or Hillary Clinton, would have a rather hard time doing this—a year is a long time to go without pay.
As such, the length of the campaign places very clear restrictions on who can make an effective bid for the Presidency. It is hardly surprising, then, that only the wealthy and professional politicians (who are usually also wealthy) can run for office. A shorter campaign period, such as the six weeks some countries have, would certainly open up the campaign to people of far less wealth and who do not belong to the class of professional politicians. It might be suspected that the very long campaign period is quite intentional: it serves to limit the campaign to certain sorts of people. In addition to time, there is also the matter of money.
While running for President has long been rather expensive, it has been estimated that the 2016 campaign will run in the billions of dollars. Hillary Clinton alone is expected to spend at least $1 billion and perhaps go up to $2 billion. Or even more. The Republicans will, of course, need to spend a comparable amount of money.
While some candidates have, in the past, endeavored to use their own money to run a campaign, the number of billionaires is rather limited (although there are, obviously, some people who could fund their own billion dollar run). Candidates who are not billionaires must, obviously, find outside sources of money. Since money is now speech, candidates can avail themselves of big money donations and can be aided by PACs and SuperPACs. There are also various other clever ways of funneling dark money into the election process.
Since people generally do not hand out large sums of money for nothing, it should be evident that a candidate must be sold, to some degree, to those who are making it rain money. While a candidate can seek small donations from large numbers of people, the reality of modern American politics is that it is big money rather than small donors that matters. As such, a candidate must be such that the folks with the big money believe that he is worth bankrolling—and this presumably means that they think he will act in their interest if he is elected. This means that these candidates are sold to those who provide the money. This requires a certain sort of person, namely one who will not refuse to accept such money and thus tacitly agree to act in the interests of those providing the money.
It might be claimed that a person can accept this money and still be her own woman—that is, use the big money to get into office and then act in accord with her true principles and contrary to the interests of those who bankrolled her. While not impossible, this seems unlikely. As such, what should be expected is candidates who are willing to accept such money and repay this support once in office.
The high cost of campaigning seems to be no accident. While I certainly do not want to embrace conspiracy theories, the high cost of campaigning does ensure that only certain types of people can run and that they will need to attract backers. As noted above, the wealthy rarely just hand politicians money as free gifts—unless they are fools, they expect a return on that investment.
In light of the above, it seems that Money Road is well designed in terms of its length and the money required to ride it. These two factors serve to ensure that only certain candidates can run—and it is worth considering that these are not the best candidates.
If you have made a mistake, do not be afraid of admitting the fact and amending your ways.
I never make the same mistake twice. Unfortunately, there are an infinite number of mistakes. So, I keep making new ones. Fortunately, philosophy is rather helpful in minimizing the impact of mistakes and learning that crucial aspect of wisdom: not committing the same error over and over.
One key aspect to avoiding the repetition of errors is skill in critical thinking. While critical thinking has become something of a buzz-word bloated fad, the core of it remains as important as ever. The core is, of course, the methods of rationally deciding whether a claim should be accepted as true, rejected as false or if judgment regarding that claim should be suspended. Learning the basic mechanisms of critical thinking (which include argument assessment, fallacy recognition, credibility evaluation, and causal reasoning) is relatively easy—reading through the readily available quality texts on such matters will provide the basic tools. But, as with carpentry or plumbing, merely having a well-stocked tool kit is not enough. A person must also have the knowledge of when to use a tool and the skill with which to use it properly. Gaining knowledge and skill is usually difficult and, at the very least, takes time and practice. This is why people who merely grind through a class on critical thinking or flip through a book on fallacies do not suddenly become good at thinking. After all, no one would expect a person to become a skilled carpenter merely by reading a DIY book or watching a few hours of videos on YouTube.
Another key factor in avoiding the repetition of mistakes is the ability to admit that one has made a mistake. There are many “pragmatic” reasons to avoid admitting mistakes. Public admission to a mistake can result in liability, criticism, damage to one’s reputation and other such harms. While we have sayings that promise praise for those who admit error, the usual practice is to punish such admissions—and people are often quick to learn from such punishments. While admitting the error only to yourself will avoid the public consequences, people are often reluctant to do this. After all, such an admission can damage a person’s pride and self-image. Denying error and blaming others is usually easier on the ego.
The obvious problem with refusing to admit to errors is that this will tend to keep a person from learning from her mistakes. If a person recognizes an error, she can try to figure out why she made that mistake and consider ways to avoid making the same sort of error in the future. While new errors are inevitable, repeating the same errors over and over due to a willful ignorance is either stupidity or madness. There is also the ethical aspect of the matter—being accountable for one’s actions is a key part of being a moral agent. Saying “mistakes were made” is a denial of agency—to cast oneself as an object swept along by the river of fate rather than an agent rowing upon the river of life.
In many cases, a person cannot avoid the consequences of his mistakes. Those that strike, perhaps literally, like a pile of bricks, are difficult to ignore. Feeling the impact of these errors, a person might be forced to learn—or be brought to ruin. The classic example is the hot stove—a person learns from one touch because the lesson is so clear and painful. However, more complicated matters, such as a failed relationship, allow a person room to deny his errors.
If the negative consequences of his mistakes fall entirely on others and he is never called to task for these mistakes, a person can keep on making the same mistakes over and over. After all, he does not even get the teaching sting of pain trying to drive the lesson home. One good example of this is the political pundit—pundits can be endlessly wrong and still keep on expressing their “expert” opinions in the media. Another good example of this is in politics. Some of the people who brought us the Iraq war are part of Jeb Bush’s presidential team. Jeb, infamously, recently said that he would have gone to war in Iraq even knowing what he knows now. While he endeavored to awkwardly walk that back, it might be suspected that his initial answer was the honest one. Political parties can also embrace “solutions” that have never worked and relentlessly apply them whenever they get into power—other people suffer the consequences while the politicians generally do not directly reap consequences from bad policies. They do, however, routinely get in trouble for mistakes in their personal lives (such as affairs) that have no real consequences outside of the private sphere.
While admitting to an error is an important first step, it is not the end of the process. After all, merely admitting I made a mistake will not do much to help me avoid that mistake in the future. What is needed is an honest examination of the mistake—why and how it occurred. This needs to be followed by an honest consideration of what can be changed to avoid that mistake in the future. For example, a person might realize that his relationships ended badly because he made the mistake of rushing into a relationship too quickly—getting seriously involved without actually developing a real friendship.
To steal from Aristotle, merely knowing the cause of the error and how to avoid it in the future is not enough. A person must have the will and ability to act on that knowledge and this requires the development of character. Fortunately, Aristotle presented a clear guide to developing such character in his Nicomachean Ethics. Put rather simply, a person must act as the person she wishes to be and stick with this until it becomes a matter of habit (and thus character). That is, a person must, as Aristotle argued, become a philosopher. Or be ruled by another who can compel correct behavior, such as the state.
The Trans-Pacific Partnership (TPP) has generated considerable controversy, mostly over what people think it might do. While making predictions about such complex matters is always difficult, there is a somewhat unusual challenge in making such predictions about the TPP. This challenge is that it is being kept secret from the public.
While senators are allowed to read the text of the TPP, it is being treated like an ultra-secret document. To gaze upon it, a senator must go to a secure basement room, hand over all electronics and then leave behind any notes he (or she) has written. An official from the US Trade Representative’s office watches the entire time. After reading the document, the senator is not allowed to discuss the matter with the public, experts or lawyers.
While members of Congress typically do not read the legislation the lobbyists have written for them to pass and the public usually has little interest in the text of bills, there is obviously still the question of justifying such secrecy. After all, the United States is supposed to be a democratic state and President Obama made all the right noises about transparency in government.
Robert Mnookin, of Harvard Law, has put forth stock justifications for such secrecy. The first justification is that having such matters open to the public is damaging to the process: “The representatives of the parties have to be able to explore a variety of options just to see what might be feasible before they ultimately make a deal. That kind of exploration becomes next to impossible if you have to do it in public.”
The second stock justification is that secrecy enables deals to be negotiated. As he says, “In private, people can explore and tentatively make concessions, which if they publicly made, would get shot down before you really had a chance to explore what you might be given in return for some compromise.”
In support of Mnookin, public exposure does have its disadvantages and secrecy does have its advantages. As he noted, if the negotiating parties have to operate in public, this can potentially limit their options. To use the obvious analogy, if a person is negotiating for a raise, then having to do so in front of her colleagues would certainly limit her options. In the case of trade deals, if the public knew about the details of the deals, then there might be backlash for proposals that anger the public.
Secrecy does, of course, confer many advantages. By being able to work out the exploration in secret, the public remains ignorant and thus cannot be upset about specific proposals. Going with the salary analogy, if I can negotiate my salary in complete secrecy, then I can say things I would not say publicly and explore deals that I would not make in public. This is obviously advantageous to the deal makers.
Obviously, the same sort of reasoning can be applied to all aspects of government: if the ruling officials are required to operate in the public eye, then they cannot explore things without fear that the public would be upset by what they are doing. For example, if the local government wanted to install red-light cameras to improve revenues and had to discuss this matter openly, then the public might oppose this. As another example, if the state legislature wanted to cut a special deal for a company, discussing the payoff openly could be problematic.
Secrecy would, in all such cases, allow the ruling officials to work out various compromises without the troubling impact of public scrutiny. The advantages to the ruling officials and their allies are quite evident—so much so, it is no wonder that governments have long pushed for secrecy.
Naturally, there are some minor concerns that need to be addressed. One is that secrecy allows for deals that, while advantageous for those making the deals, are harmful to other members of the population. Those who think that government should consider the general welfare would probably find this sort of thing problematic.
Another trivial point of concern is the possibility of corruption. After all, secrecy certainly serves as an enabler for corruption, while transparency tends to reduce corruption. The easy reply is that corruption is only of concern to those who think that corruption is a bad thing, as opposed to an opportunity for enhanced revenue for select individuals. Put that way, it sounds delightful.
A third matter is that such secrecy bypasses the ideal of the democratic system: that government is open and that matters of state are publicly discussed by the representatives so that the people have an opportunity to be aware of what is occurring and have a role in the process. This is obviously only of concern to those misguided few who value the ideals of such a system. Those realists and pragmatists who know the value of secrecy know that involving the people is a path to trouble. Best to keep such matters away from them, to allow their betters to settle matters behind closed doors.
A fourth minor concern is that making rational decisions about secret deals is rather difficult. When asked what I think about TPP, all I can say is that I am concerned that it is secret, but cannot say anything about the content—because I have no idea what is in it. While those who wrote it know what is in there (as do the few senators who have seen it), discussion of its content is not possible—which makes deciding about the matter problematic. The easy answer is that since we do not matter, we do not need to know.