Police shootings of unarmed black Americans have raised the question of why such shootings occur. While some have rushed to attribute them to a blend of racism and brutality, the matter deserves careful consideration.
While there are various explanations, the most plausible involves a blend of factors. The first, which does have a connection to racism, is the existence of implicit bias. Studies involving simulators have found that officers are more likely to use force against a black suspect than a white suspect. This has generally been explained in terms of officers having a negative bias regarding black people. What is rather interesting is that these studies show that even black and Hispanic officers are more likely to use force against black suspects. Also interesting is that studies have shown that civilians are more likely than officers to use force in the simulators and also show more racial bias.
One reason why an implicit bias can lead to a use of force is that it impacts how a person perceives another’s actions and objects. When a person knows she is in a potentially dangerous situation, she is hypervigilant for threats and is anticipating the possibility of attack. As such, a person’s movements and any object he is wielding will be seen through that “threat filter.” So, for example, a person reaching rapidly to grab his wallet can easily be seen as grabbing for a weapon. Perceptual errors, of course, occur quite often—think of how people who are afraid of snakes often see every vine or stick as a snake when walking in the woods. These perceptual errors also help explain shootings—a person can honestly believe he saw the suspect reaching for a weapon.
Since the main difference between the officers and the civilians is most likely the training police receive, it seems reasonable to conclude that the training is having a positive effect. However, the existence of a race disparity in the use of force does show that there is still a problem to address. One point of concern is that the bias might be so embedded in American culture that training will not eliminate it. That is, as long as there is racial bias in the society, it will also infect the police. As such, eliminating the bias in police would require eliminating it in society as a whole—which goes far beyond policing.
A second often mentioned factor is what some call the “warrior culture.” Visually, this is exemplified by the use of military equipment, such as armored personnel carriers, by the police. However, the warrior culture is not primarily a matter of equipment, but of attitude. While police training does include conflict resolution skill training, there is a significant emphasis on combat skills, especially firearms. On the one hand, this makes sense—people who are going to be using weapons need to be properly trained in their use. On the other hand, there are grounds for concern about the greater focus on combat training relative to the peaceful resolution of conflicts.
Since I have seen absurd and useless “training” in conflict resolution, I do get that there would be concerns about such training. I also understand that conflict resolution is often cast in terms of “holding hands and drinking chamomile tea together” and hence is not always appealing to people who are interested in police work. However, it does seem to be a critical skill. After all, in a crisis people fall back on habit and training—and if people are trained primarily for combat, they will fall back on that. Naturally, there is the worry that too much emphasis on conflict resolution could put officers in danger—so that they keep talking well past the point at which they should have started shooting. However, this is a practical matter of training that can be addressed. A critical part of conflict resolution training is also what Aristotle would regard as moral education: developing the character to know when and how to act correctly. As Aristotle said, it is easy to be angry but it is hard to be angry at the right time, for the right reasons, towards the right people and to the right degree. As Aristotle also said, this is very hard and most people are rather bad at this sort of thing, including conflict resolution. This does present a challenge even for a well-trained officer—the person she is dealing with is probably horrible at conflict resolution. One possible solution is training for citizens—not in terms of just rolling over for the police, but in interacting with the police (and each other). Expecting the full burden of conflict resolution to fall upon the police certainly seems both unfair and unlikely to succeed.
The final factor I will consider is the principle of the primacy of officer survival. One of the primary goals of police training and practice is officer survival. It would, obviously, be absurd to claim that police should not be trained in survival or that police practices should not put an emphasis on the survival of officers. However, there are legitimate concerns about ways of training officers, the practice of law enforcement and the attitude that training and practice create.
Part of the problem, as some see it, links to the warrior mentality. The police, it is claimed, are trained to regard their job as incredibly dangerous and policing as a form of combat mission. This, obviously enough, shapes the reaction of officers to situations they encounter, which ties into the matter of perceptual bias. If a person believes that she is going out into a combat zone, she will perceive people and actions through this “combat zone filter.” As such, people will be regarded as more threatening, actions will be more likely to be interpreted as hostile and objects will be more likely to be seen as weapons. As such, it certainly makes sense that approaching officer survival by regarding police work as a combat mission would result in more civilian casualties than would different approaches.
Naturally, it can be argued that officers do not, in general, have this sort of “combat zone” attitude and that academics are presenting the emphasis on survival in the wrong sort of light. It can also be argued that the “combat zone” attitude is real, but is also correct—people do, in fact, target police officers for attack and almost any situation could turn into a battle for survival. As such, it would be morally irresponsible to not train officers for survival, to instill in them a proper sense of fear, and to engage in practices that focus primarily on officers making it home at the end of the shift—even if this approach results in more civilian deaths, including the deaths of unarmed civilians.
This leads to a rather important moral concern, namely the degree of risk a person is obligated to take in order to minimize the harm to another person. This matter is not just connected to the issue of the use of force by police, but also the broader issue of self-defense.
I do assume that there is a moral right to self-defense and that police officers do not lose this right when acting in their professional capacity. That is, a person has a right to harm another person when legitimately defending her life, liberty or property against an unwarranted attack. Even if such a right is accepted, there is still the question of the degree of force a person is justified in using and to what extent a person should limit her response in order to minimize harm to the attacker.
In terms of the degree of force, the easy and obvious answer is that the force should be proportional to the threat but should also suffice to end the threat. For example, when I was a boy I faced the usual attacks of other boys. Since these attacks just involved fists and grappling, a proportional response was to hit back hard enough to make the other boy stop. Grabbing a rock, a bat or pulling a knife would be disproportional. As another example, if someone is shooting at a police officer, then she would certainly be in the right to use her firearm since that would be a proportional response.
One practical and moral concern about the proportional response is that the attacker might escalate. For example, if Bob swings on Mary and she lands a solid punch to his face, he might pull out a knife and stab her. If Mary had simply shot Bob, she would not have been stabbed because Bob would be badly wounded or dead. As such, some would argue, the response to an attack should be disproportional. In terms of the moral justification, this would rest on the fact that the attacker is engaged in an unjust action and the person attacked has reason to think, as Locke argued, that the person might intend to kill her.
Another practical and moral concern is that if the victim “plays fair” by responding in a proportional manner, she risks losing the encounter. For example, if Bob swings on Sally and Sally sticks with her fists, Bob might be able to beat her. Since dealing with an attacker is not a sporting event, the idea of “fair play” seems absurd—hence the victim has the moral right to respond in a disproportional manner.
However, there is also the counter-concern that a disproportional response would be excessive in the sense of being unnecessary. For example, if Bob swings at Sally and Sally shoots him four times with a twelve-gauge, Sally is now safe—but if Sally could have used a Taser to stop Bob, then the use of the shotgun would seem to be wrong—after all, she did not need to kill Bob in order to save herself. As such, it would seem reasonable to hold to the moral principle that the force should be sufficient for defense, but not excessive.
The obvious practical challenge is judging what would be sufficient and what would be excessive. Laws that address self-defense issues usually leave this very vague: a person can use deadly force when facing a “reasonable perceived threat.” That is, the person must have a reasonable belief that there is a threat—there is usually no requirement that the threat be real. To use the stock example, if a man points a realistic-looking toy gun at an officer and says he is going to kill her, the officer would have a reasonable belief that there is a threat. Of course, there are problems with threat assessment—as noted above, implicit bias, warrior mentality and survival focus can cause a person to greatly overestimate a threat (or see one where it does not exist).
The challenge of judging sufficient force in response to a perceived threat is directly connected with the moral concern about the degree of risk a person is obligated to face in order to avoid (excessively) harming another person. After all, a person could “best” ensure her safety by responding to every perceived threat with maximum lethal force. If she responds with less force or delays her response, then she is at ever increasing risk. If she accepts too little risk, she would be acting wrongly towards the person threatening her. If she accepts too much risk, she would be acting wrongly towards herself and anyone she is protecting.
A general and generic approach would be to model the obligation of risk on the proportional response approach. That is, the risk one is obligated to take is proportional to the situation at hand. This then leads to the problem of working out the details of the specific situation—which is to say that the degree of risk would seem to rest heavily on the circumstances.
However, there are general factors that would impact the degree of obligatory risk. One would be the relation between the people. For example, it seems reasonable to hold that people have greater obligations to accept risk to avoid harming people they love or care about. Another factor that seems relevant is the person’s profession. For example, soldiers are expected to take some risks to avoid killing civilians—even when doing so puts them in some danger. To use a specific example, soldiers on patrol could increase their chance of survival by killing any unidentified person (adult or child) who approaches them. However, being a soldier and not a killer requires the soldiers to accept some risk to avoid murdering innocents.
In the case of police officers it could be argued that their profession obligates them to take greater risks to avoid harming others. Since their professed duty is to serve and protect, it can be argued that the survival of those who they are supposed to protect should be given equal weight to that of the survival of the officer. That is, the focus should be on everyone going home. In terms of how this would be implemented, the usual practice would be training and changes to rules regarding use of force. Limiting officer use of force can be seen as generating greater risk for the officers, but the goal would be to reduce the harm done to civilians. Since the police are supposed to protect people, they are (it might be argued) under greater obligation to accept risk than civilians.
One obvious reply to this is that many officers already have this view—they take considerable risks to avoid harming people, even when they would be justified in using force. These officers save many lives—although sometimes at the cost of their own. Another reply is that this sort of view would get officers killed because they would be too concerned about not harming suspects and not concerned enough about their own survival. That is a reasonable concern—there is the challenge of balancing the safety of the public and the safety of officers.
Spoiler Alert: Details of the Season 1 Finale of The Flash are revealed in this post.
Philosophers often make use of fictional examples in order to discuss ethical issues. In some cases, this is because they are discussing hypotheticals and do not have real examples to discuss. For example, discussions of the ethics of utilizing artificial intelligences are currently purely hypothetical (as far as we know). In other cases, this is because a philosopher thinks that a fictional case is especially interesting or simply “cool.” For example, philosophers often enjoy writing about the moral problems in movies, books and TV shows.
The use of fictional examples can, of course, be criticized. One stock criticism is that there are a multitude of real moral examples (and problems) that should be addressed. Putting effort into fictional examples is a waste of time. To use an analogy, it would be like spending time worrying about getting more gold for a World of Warcraft character when one does not have enough real money to pay the bills.
Another standard criticism focuses on the fact that fictional examples are manufactured. Because they are made up rather than “naturally” occurring, there are obvious concerns about the usefulness of such examples and to what extent the scenario is created by fiat. For example, when philosophers create convoluted and bizarre moral puzzles, it is quite reasonable to consider whether or not such a situation is even possible.
Fortunately, a case can be made for the use of fictional examples in discussions about ethics. Examples involving what might be (such as artificial intelligence) can be defended on the practical ground that it is preferable to discuss the matter before the problem arises rather than trying to catch up after the fact. After all, planning ahead is generally a good idea.
The use of fictional examples can also be justified on the same grounds that sports and games are justified—they might not be “useful” in a very limited and joyless sense of the term, but they can be quite fun. If poker, golf, or football can be justified on the basis of enjoyment, then so too can the use of fictional examples.
A third justification for the use of fictional examples is that they can allow the discussion of an issue in a more objective way. Since the example is fictional, it is less likely that a person will have a stake in the made-up example. Fictional examples can also allow the discussion to focus more on the issue as opposed to other factors, such as the emotions associated with an actual event. Of course, people can become emotionally involved in fictional examples. For example, fans of a particular movie character might be quite emotionally attached to that character.
A fourth reason is that a fictional example can be crafted to be an ideal example, to lay out the moral issue (or issues) clearly. Real examples are often less clear (though they do have the advantage of being real).
In light of the above, it seems reasonable to use fictional examples in discussing ethical issues. As such, I will move on to my main focus, which is discussing whether the Flash is morally worse than the Reverse Flash on CW’s show The Flash.
For those not familiar with the characters or the show, the Flash is a superhero whose power is the ability to move incredibly fast. While there have been several versions of the Flash, the Flash on the show is Barry Allen. As a superhero, the Flash has many enemies. One of his classic foes is the Reverse Flash. The Reverse Flash is also a speedster, but he is from the future (relative to the show’s main “present” timeline). Whereas the Flash’s costume is red with yellow markings, the Reverse Flash’s costume is yellow with red markings. While Barry is a good guy, Eobard Thawne (the Reverse Flash) is a super villain.
On the show, the Reverse Flash travels back in time to kill the young Barry before he becomes the Flash—with the intent of winning the battle before it even begins. However, the Flash also travels back in time to thwart the Reverse Flash and saves his past self. Out of anger, the Reverse Flash murders Barry’s mother but finds that he has lost his power. Using some creepy future technology, the Reverse Flash steals the life of the scientist Harrison Wells and takes on his identity. Using this identity, he builds the particle accelerator he needs to get back to the future and ends up, ironically, needing to create the Flash in order to get back home. The early and middle episodes of the show are about how Barry becomes the Flash and his early career in fighting crime and poor decision making.
In the later episodes, the secret of the Reverse Flash is revealed and Barry ends up defeating him in an epic battle. Before the battle, “Wells” makes the point that he has done nothing more and nothing less than what he has needed to do to get home. Interestingly, while the Reverse Flash is ruthless in achieving his goal of returning to his own time and regaining the friends, family and job he has lost, he is generally true to that claim and only harms people when he regards it as truly necessary. He even expresses what seems to be sincere regret when he decides to harm those he has befriended.
While the details are not made clear, he claims that the future Flash has wronged him terribly and he is acting from revenge, to undo the wrong and to return to his own time. While he does have a temper that drives him to senseless murder, when he is acting rationally he acts consistently with his claim: he does whatever it takes to advance his goals, but does not go beyond that.
While the case of the Reverse Flash is fictional, it does raise a real moral issue: is it morally right to harm people in order to achieve one’s goals? The answer depends, obviously, on such factors as the goals and what harms are inflicted on which people. While the wrong allegedly done to the Reverse Flash has not been revealed, he does seem to be acting selfishly. After all, he got stuck in the past because he came back to kill Barry and then murders people when he thinks he needs to do so to advance his plan of return. Kant would, obviously, regard the Reverse Flash as evil—he regularly treats other rational beings solely as means to achieving his ends. He also seems evil on utilitarian grounds—he ends numerous lives and creates considerable suffering so as to achieve his own happiness. But, this is to be expected: he is a supervillain. However, a case can be made that he is morally superior to the Flash.
In the season one finale, the Reverse Flash tells Barry how to travel back in time to save his mother—this involves using the particle accelerator. There are, however, some potential problems with the plan.
One problem is that if Barry does not run fast enough to open the wormhole to the past, he will die. Risking his own life to save his mother is certainly commendable.
A second problem is that if Barry does go back and succeed (or otherwise change things), then the timeline will be altered. The show has established that a change in the past rewrites history (although the time traveler remembers what occurred)—so going back could change the “present” in rather unpredictable ways. Rewriting the lives of people without their consent certainly seems morally problematic, even if it did not result in people being badly harmed or killed. Laying aside the time-travel aspect, the situation is one in which a person is willing to change, perhaps radically, the lives of many people (potentially everyone on the planet) without their consent just to possibly save one life. On the face of it, that seems morally wrong and rather selfish.
A third problem is that Barry has under two minutes to complete his mission and return, or a singularity will form. This singularity will, at the very least, destroy the entire city and could destroy the entire planet. So, while the Reverse Flash was willing to kill a few people to achieve his goal, the Flash is willing to risk killing everyone on earth to save his mother. On utilitarian grounds, that seems clearly wrong. Especially since even if he saved her, the singularity could just end up killing her when the “present” arrives.
Barry decides to go back to try to save his mother, but his future self directs him to not do so. Instead he says good-bye to his dying mother and returns to the “present” to fight the Reverse Flash. Unfortunately, something goes wrong and the city is being sucked up into a glowing hole in the sky. Since skyscrapers are being ripped apart and sucked up, presumably a lot of people are dying.
While the episode ends with the Flash trying to close the hole, it should be clear that he is at least as bad as the Reverse Flash, if not worse: he was willing to change, without their consent, the lives of many others and he was willing to risk killing everyone and everything on earth. This is hardly heroic. So, the Flash would seem to be rather evil—or at least horrible at making moral decisions.
“The road to the White House is not just any road. It is longer than you’d think and a special fuel must be burned to ride it. The bones of those who ran out of fuel are scattered along it. What do they call it? They call it ‘money road.’ Only the mad ride that road. The mad or the rich.”
While some countries have limited campaign seasons and restrictions on political spending, the United States follows its usual exceptionalism. That is, the campaign seasons are exceptionally long and exceptional sums of money are required to properly engage in such campaigning. The presidential campaign, not surprisingly, is both the longest and the most costly. The time and money requirements put rather severe restrictions on who can run a viable campaign for the office of President.
While the 2016 Presidential election takes place in November of that year, as of May 2015 a sizable number of candidates have declared that they are running. Campaigning for President is a full-time job and this means that the person who is running must either have no job (or other comparable demands on her time) or have a job that permits her to campaign full time.
It is not uncommon for candidates to have no actual job. For example, Mitt Romney did not have a job when he ran in 2012. Hillary Clinton also does not seem to have a job in 2015, aside from running for President. Not having a job does, obviously, provide a person with considerable time in which to run for office. Those people who do have full-time jobs and cannot leave them cannot, obviously enough, make an effective run for President. This certainly restricts who can mount a viable campaign.
It is very common for candidates to have a job in politics (such as being in Congress, being a mayor or being a governor) or in punditry. Unlike most jobs, these jobs apparently give a person considerable freedom to run for President. Someone more cynical than I might suspect that such jobs do not require much effort or that the person running is showing he is willing to shirk his responsibilities.
On the face of it, it seems that only those who do not have actual jobs or do not have jobs involving serious time commitments can effectively run for President. Those who have such jobs would have to make a choice—leave the job or not run. A person who did decide to leave her job to run would need to have some means of support for the duration of the campaign—which runs over a year. Those who are not independent of job income, unlike Mitt Romney or Hillary Clinton, would have a rather hard time doing this—a year is a long time to go without pay.
As such, the length of the campaign places very clear restrictions on who can make an effective bid for the Presidency. It is hardly surprising, then, that only the wealthy and professional politicians (who are usually also wealthy) can run for office. A shorter campaign period, such as the six weeks some countries have, would certainly open up the campaign to people of far less wealth who do not belong to the class of professional politicians. It might be suspected that the very long campaign period is quite intentional: it serves to limit the campaign to certain sorts of people. In addition to time, there is also the matter of money.
While running for President has long been rather expensive, it has been estimated that the 2016 campaign will run in the billions of dollars. Hillary Clinton alone is expected to spend at least $1 billion and perhaps $2 billion or even more. The Republicans will, of course, need to spend a comparable amount of money.
While some candidates have, in the past, endeavored to use their own money to run a campaign, the number of billionaires is rather limited (although there are, obviously, some people who could fund their own billion dollar run). Candidates who are not billionaires must, obviously, find outside sources of money. Since money is now speech, candidates can avail themselves of big money donations and can be aided by PACs and SuperPACs. There are also various other clever ways of funneling dark money into the election process.
Since people generally do not hand out large sums of money for nothing, it should be evident that a candidate must be sold, to some degree, to those who are making it rain money. While a candidate can seek small donations from large numbers of people, the reality of modern American politics is that it is big money rather than small donors that matters. As such, a candidate must be such that the folks with the big money believe that he is worth bankrolling—and this presumably means that they think he will act in their interest if he is elected. This means that these candidates are sold to those who provide the money. This requires a certain sort of person, namely one who will not refuse to accept such money and thus tacitly agrees to act in the interests of those providing it.
It might be claimed that a person can accept this money and still be her own woman—that is, use the big money to get into office and then act in accord with her true principles and contrary to the interests of those who bankrolled her. While not impossible, this seems unlikely. As such, what should be expected is candidates who are willing to accept such money and repay this support once in office.
The high cost of campaigning seems to be no accident. While I certainly do not want to embrace conspiracy theories, the high cost of campaigning does ensure that only certain types of people can run and that they will need to attract backers. As noted above, the wealthy rarely just hand politicians money as free gifts—unless they are fools, they expect a return on that investment.
In light of the above, it seems that Money Road is well designed in terms of its length and the money required to drive it. These two factors serve to ensure that only certain candidates can run—and it is worth considering that these are not the best candidates.
If you have made a mistake, do not be afraid of admitting the fact and amending your ways.
I never make the same mistake twice. Unfortunately, there are an infinite number of mistakes. So, I keep making new ones. Fortunately, philosophy is rather helpful in minimizing the impact of mistakes and learning that crucial aspect of wisdom: not committing the same error over and over.
One key aspect to avoiding the repetition of errors is skill in critical thinking. While critical thinking has become something of a buzzword-bloated fad, the core of it remains as important as ever. The core is, of course, the methods of rationally deciding whether a claim should be accepted as true, rejected as false or if judgment regarding that claim should be suspended. Learning the basic mechanisms of critical thinking (which include argument assessment, fallacy recognition, credibility evaluation, and causal reasoning) is relatively easy—reading through the readily available quality texts on such matters will provide the basic tools. But, as with carpentry or plumbing, merely having a well-stocked tool kit is not enough. A person must also have the knowledge of when to use a tool and the skill with which to use it properly. Gaining knowledge and skill is usually difficult and, at the very least, takes time and practice. This is why people who merely grind through a class on critical thinking or flip through a book on fallacies do not suddenly become good at thinking. After all, no one would expect a person to become a skilled carpenter merely by reading a DIY book or watching a few hours of videos on YouTube.
Another key factor in avoiding the repetition of mistakes is the ability to admit that one has made a mistake. There are many “pragmatic” reasons to avoid admitting mistakes. Public admission to a mistake can result in liability, criticism, damage to one’s reputation and other such harms. While we have sayings that promise praise for those who admit error, the usual practice is to punish such admissions—and people are often quick to learn from such punishments. While admitting the error only to yourself will avoid the public consequences, people are often reluctant to do this. After all, such an admission can damage a person’s pride and self-image. Denying error and blaming others is usually easier on the ego.
The obvious problem with refusing to admit to errors is that this will tend to keep a person from learning from her mistakes. If a person recognizes an error, she can try to figure out why she made that mistake and consider ways to avoid making the same sort of error in the future. While new errors are inevitable, repeating the same errors over and over due to willful ignorance is either stupidity or madness. There is also the ethical aspect of the matter—being accountable for one’s actions is a key part of being a moral agent. Saying “mistakes were made” is a denial of agency—it casts oneself as an object swept along by the river of fate rather than an agent rowing upon the river of life.
In many cases, a person cannot avoid the consequences of his mistakes. Those that strike, perhaps literally, like a pile of bricks, are difficult to ignore. Feeling the impact of these errors, a person might be forced to learn—or be brought to ruin. The classic example is the hot stove—a person learns from one touch because the lesson is so clear and painful. However, more complicated matters, such as a failed relationship, allow a person room to deny his errors.
If the negative consequences of his mistakes fall entirely on others and he is never called to task for these mistakes, a person can keep on making the same mistakes over and over. After all, he does not even get the teaching sting of pain trying to drive the lesson home. One good example of this is the political pundit—pundits can be endlessly wrong and still keep on expressing their “expert” opinions in the media. Another good example of this is in politics. Some of the people who brought us the Iraq war are part of Jeb Bush’s presidential team. Jeb, infamously, recently said that he would have gone to war in Iraq even knowing what he knows now. While he endeavored to awkwardly walk that back, it might be suspected that his initial answer was the honest one. Political parties can also embrace “solutions” that have never worked and relentlessly apply them whenever they get into power—other people suffer the consequences while the politicians generally do not directly reap consequences from bad policies. They do, however, routinely get in trouble for mistakes in their personal lives (such as affairs) that have no real consequences outside of that private sphere.
While admitting to an error is an important first step, it is not the end of the process. After all, merely admitting I made a mistake will not do much to help me avoid that mistake in the future. What is needed is an honest examination of the mistake—why and how it occurred. This needs to be followed by an honest consideration of what can be changed to avoid that mistake in the future. For example, a person might realize that his relationships ended badly because he made the mistake of rushing into a relationship too quickly—getting seriously involved without actually developing a real friendship.
To steal from Aristotle, merely knowing the cause of the error and how to avoid it in the future is not enough. A person must have the will and ability to act on that knowledge and this requires the development of character. Fortunately, Aristotle presented a clear guide to developing such character in his Nicomachean Ethics. Put rather simply, a person must act as the person she wishes to be and stick with this until it becomes a matter of habit (and thus character). That is, a person must, as Aristotle argued, become a philosopher. Or be ruled by another who can compel correct behavior, such as the state.
The Trans-Pacific Partnership (TPP) has generated considerable controversy, mostly over what people think it might do. While making predictions about such complex matters is always difficult, there is a somewhat unusual challenge in making such predictions about the TPP. This challenge is that it is being kept secret from the public.
While senators are allowed to read the text of the TPP, it is being treated like an ultra-secret document. To gaze upon it, a senator must go to a secure basement room, hand over all electronics and then leave behind any notes he (or she) has written. An official from the US Trade Representative’s office watches them. After reading the document, the senator is not allowed to discuss the matter with the public, experts or lawyers.
While members of Congress typically do not read the legislation the lobbyists have written for them to pass and the public usually has little interest in the text of bills, there is obviously still the question of justifying such secrecy. After all, the United States is supposed to be a democratic state and President Obama made all the right noises about transparency in government.
Robert Mnookin, of Harvard Law, has put forth stock justifications for such secrecy. The first justification is that having such matters open to the public is damaging to the process: “The representatives of the parties have to be able to explore a variety of options just to see what might be feasible before they ultimately make a deal. That kind of exploration becomes next to impossible if you have to do it in public.”
The second stock justification is that secrecy enables deals to be negotiated. As he says, “In private, people can explore and tentatively make concessions, which if they publicly made, would get shot down before you really had a chance to explore what you might be given in return for some compromise.”
In support of Mnookin, public exposure does have its disadvantages and secrecy does have its advantages. As he noted, if the negotiating parties have to operate in public, this can potentially limit their options. To use the obvious analogy, if a person is negotiating for a raise, then having to do so in front of his colleagues would certainly limit her options. In the case of trade deals, if the public knew about the details of the deals, then there might be backlash for proposals that anger the public.
Secrecy does, of course, confer many advantages. By being able to work out the exploration in secret, the public remains ignorant and thus cannot be upset about specific proposals. Going with the salary analogy, if I can negotiate my salary in complete secrecy, then I can say things I would not say publicly and explore deals that I would not make in public. This is obviously advantageous to the deal makers.
Obviously, the same sort of reasoning can be applied to all aspects of government: if the ruling officials are required to operate in the public eye, then they cannot explore things without fear that the public would be upset by what they are doing. For example, if the local government wanted to install red-light cameras to improve revenues and had to discuss this matter openly, then the public might oppose this. As another example, if the state legislature wanted to cut a special deal for a company, discussing the payoff openly could be problematic.
Secrecy would, in all such cases, allow the ruling officials to work out various compromises without the troubling impact of public scrutiny. The advantages to the ruling officials and their allies are quite evident—so much so, it is no wonder that governments have long pushed for secrecy.
Naturally, there are some minor concerns that need to be addressed. One is that secrecy allows for deals that, while advantageous for those making the deals, are harmful to other members of the population. Those who think that government should consider the general welfare would probably find this sort of thing problematic.
Another trivial point of concern is the possibility of corruption. After all, secrecy certainly serves as an enabler for corruption, while transparency tends to reduce corruption. The easy reply is that corruption is only of concern to those who think that corruption is a bad thing, as opposed to an opportunity for enhanced revenue for select individuals. Put that way, it sounds delightful.
A third matter is that such secrecy bypasses the ideal of the democratic system: that government is open and that matters of state are publicly discussed by the representatives so that the people have an opportunity to be aware of what is occurring and have a role in the process. This is obviously only of concern to those misguided few who value the ideals of such a system. Those realists and pragmatists who know the value of secrecy know that involving the people is a path to trouble. Best to keep such matters away from them, to allow their betters to settle matters behind closed doors.
A fourth minor concern is that making rational decisions about secret deals is rather difficult. When asked what I think about TPP, all I can say is that I am concerned that it is secret, but cannot say anything about the content—because I have no idea what is in it. While those who wrote it know what is in there (as do the few senators who have seen it), discussion of its content is not possible—which makes deciding about the matter problematic. The easy answer is that since we do not matter, we do not need to know.
I recently attended a meeting discussing the use of Blackboard analytics as a tool for student retention and improving graduation rates. Last year I had attended multiple meetings on the subject of classes with high failure rates and this had motivated me to formalize what I had been doing informally for years, namely generating a picture of why students fail my classes. While my university is still implementing Blackboard analytics, I have gathered information from my classes and my students which has enabled me to get a reasonable picture of the failure rates, attendance rates and the reasons for failure and absences.
Not surprisingly, the new data supports the old data in regard to the correlation between a student’s attendance and her grade. Students who do fail (D or F) tend to have very poor attendance. I have also found that attendance has grown dramatically worse in my classes over the years. This is not based on the usual complaints of the old about the youth of today—I have stacks of rumpled attendance sheets that provide actual evidence. Based on conversations with other faculty, the same is true of other classes.
Interestingly, while students who have good (A or B) grades tend to have good attendance, relatively large numbers of students are able to pass (C) despite poor attendance (missing more often than not). Perhaps they would have done better if they had attended more, but perhaps not.
Reviewing my gradebooks has shown that the main cause of failure is a combination of not completing work and getting failing grades on much of the work that is completed. The most common pattern is that a student does not complete 2-3 of the five exams, and fails some or all of the exams he does take. Somewhat less common is a student having passing grades on completed work, but not completing enough work to pass the course. This most commonly involves students who pass the exams and quizzes, but simply never turn in a paper. In some cases, students do pass the exams they take, but fail to take 2-3 of them. Interestingly, I have not had a student fail by completing and failing everything—the students who fail always leave some of the work undone.
In the days before Blackboard, students faced the challenge of coming to campus to take exams and turn in papers or assignments at specific times. In those days, I routinely had make-up exams and took papers late (when accompanied by appropriate documentation, of course). When Blackboard became available and reliable, I thought that I could address this problem by using Blackboard: students could take exams and quizzes and turn in papers and assignments at any time of day from anywhere they could get an internet connection. I also offered (and offer) very generous deadlines for the work so that students who faced difficulties or challenges could easily work around them.
While this did eliminate make-up exams and many problems with the papers, the impact on completion of work was less than I expected. In fact, class performance remained approximately the same as in the days before Blackboard. On the plus side, this showed that cheating had effectively been countered. On the minus side, I had hoped to significantly reduce the D and F grades resulting from people not doing the work.
While it is certainly tempting to regard the use of Blackboard as a failure in this regard, I do have some indirect reasons to think that it helped. As noted above, the attendance in my class (and those of others) has crashed. Despite this, the averages in my classes are remaining constant. One possible explanation is that the students would be doing worse, but for their ability to do the work in a very flexible manner. An alternative is, of course, that they are missing class because they can do the work on Blackboard. However, faculty who do not use Blackboard also consistently report attendance issues and generally have higher failure rates (based on general data regarding classes). So, I suspect that my use of Blackboard is doing some good, at least in terms of retention and graduation.
Naturally, I did wonder why students have been missing class. I have been conducting a study using a basic survey for one year and the results are summarized below.
Over the year, I had 233 responses. Interestingly, 71% reported attending at least often, with the largest percentage (25.8%) claiming to attend 80-90% of the time. 24.9% claimed to attend 90-100% of the time. As might be suspected, this self-reported data is simply not consistent with my actual attendance records. This can be explained in various ways. One obvious possibility is that students who would take the time to respond to a survey would be students who would be more likely to attend class, thus biasing the survey. A second obvious possibility is that people tend to select the answer they think they should give or the one that matches how they would like to be perceived. As such, students would tend to over-report their attendance. A third obvious possibility is that students might believe that the responses to the survey might cause me to hand out extra points (which is not the case; the survey is also anonymous).
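As a rough illustration of what those self-reported percentages mean in head counts, here is a minimal sketch; only the 233-response total and the 25.8% and 24.9% figures come from the survey, and the rounding is my own assumption:

```python
# Back out approximate head counts from the reported survey percentages.
# The rounding to whole students is an assumption for illustration.
responses = 233

attend_80_90 = round(responses * 0.258)   # students claiming 80-90% attendance
attend_90_100 = round(responses * 0.249)  # students claiming 90-100% attendance

print(attend_80_90)   # ~60 students
print(attend_90_100)  # ~58 students
```

So roughly 118 of the 233 respondents claimed to attend at least 80% of the time, which makes the gap between the self-reports and the actual attendance sheets easy to see.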
In regards to the reasons why students miss class, the highest (by far) self-reported reason is still work. While this might be explained in terms of students selecting the answer that presents them in the best light, it is consistent with anecdotal evidence I have “collected” by overhearing students, speaking with students, and speaking with other colleagues. It is also consistent with the fact that many students need outside employment in order to pay for college, and work schedules do not always neatly fit around class schedules. If this information is accurate, addressing the attendance and completion problem would require addressing the matter of work. This could involve the usual proposals, such as finding ways to increase support for students so they do not need to work (or work as much) in college. It might also involve considering some new or alternative approaches to the problem. I suspect, but cannot prove, that my adoption of a heavily online approach has helped with this problem—students can complete the work around their work schedule, rather than trying to get work done at fixed times that might not match the needs of their workplace.
Of course, I also need to consider that it is this online approach that is contributing to the attendance issue. While 28.8% of students reported work as their primary reason, 15% claimed that the fact that the work is on Blackboard was the primary reason they missed class. Since the graded coursework is completed and turned in through Blackboard, a pragmatic student who is focused primarily on simply getting a grade as a means to an end would see far less reason to attend class. Since the majority of college students now report that they are in school primarily to get a job, it makes sense that many students would take this approach to class. However, there is the obvious risk in this pragmatic approach: as noted above, low attendance tends to correlate with low grades, so students who skip the class on the assumption that they can just do the work on Blackboard might not do as well as they could and might get far less from the course—that is, just a grade.
Based on this information and other findings, Blackboard is still a double-edged sword. On the one hand, it does seem beneficial precisely because students can do the work or turn it in more conveniently and around the clock. On the other hand, using it as the sole means for turning in work does allow students to skip class while still being able to do the work. What still needs to be determined is which edge cuts more. Given the above discussion, I believe that while the use of Blackboard does lower attendance, it also allows students to complete work around their work schedules. As such, I suspect that it has generally been positive in terms of the purely pragmatic goal of maintaining or even improving retention and graduation. Of course, this claim is counterfactual: if I had not adopted the online approach, then the grades of the students would have worsened.
As noted above, my university is adopting Blackboard Analytics and this will provide the data needed to conduct a proper study (as opposed to an unfunded project using surveys and data from just my classes). Students today are, obviously, different from when I was a student and professors need to adjust to the relevant differences—one key challenge is finding out what they are. I have made some guesses, but better data would allow better decision making.
A federal appeals court ruled in May, 2015 that the NSA’s bulk collection of domestic calling data is illegal. While such bulk data collection would strike many as blatantly unconstitutional, the constitutional question has not yet been addressed, though that is perhaps just a matter of time. My intent is to address the general issue of bulk domestic data collection by the state in a principled way.
When it comes to the state (or, more accurately, the people who compose the state) using its compulsive force against its citizens, there are three main areas of concern: practicality, morality and legality. I will be addressing this matter within the context of the state using its power to impose on the rights and liberties of the citizens for the purported purpose of protecting them. This is, of course, the stock problem of liberty versus security.
In the case of practicality, the main question is whether or not the law, policy or process is effective in achieving its goals. This, obviously, needs to be balanced against the practical costs in terms of such things as time and resources (such as money).
In the United States, this illegal bulk data collection has been going on for years. To date, there seems to be but one public claim of success involving the program, which certainly indicates that the program is not effective. When the cost of the program is considered, the level of failure is appalling.
In defense of the program, some proponents have claimed that there have been many successes, but these cannot be reported because they must be kept secret. In fairness, it is certainly worth considering that there have been such secret successes that must remain secret for security reasons. However, this defense can easily be countered.
In order to accept this alleged secret evidence, those making the claim that it exists would need to be trustworthy. However, those making the claim have a vested interest in this matter, which certainly lowers their credibility. To use an analogy, if I was receiving huge sums of money for a special teaching program and could only show one success, but said there were many secret successes, you would certainly be wise to be skeptical of my claims. There is also the fact that thanks to Snowden, it is known that the people involved have no compunctions about lying about this matter, which certainly lowers their credibility.
One obvious solution would be for credible, trusted people with security clearance to be provided with the secret evidence. These people could then speak in defense of the bulk data collection without mentioning the secret specifics. Of course, given that everyone knows about the bulk data collection, it is not clear what relevant secrets could remain that the public simply cannot know about (except, perhaps, the secret that the program does not work).
Given the available evidence, the reasonable conclusion is that the bulk data collection is ineffective. While it is possible that there is some secret evidence, there is no compelling reason to believe this claim, given the lack of credibility on the part of those making this claim. This alone would suffice as grounds for ceasing this wasteful and ineffective approach.
In the case of morality, there are two main stock approaches. The first is a utilitarian approach in which the harms of achieving the security are weighed against the benefits provided by the security. The basic idea is that the state is warranted in infringing on the rights and liberties of the citizens on the condition that the imposition is outweighed by the wellbeing gained by the citizens—either in terms of positive gains or harms avoided. This principle applies beyond matters of security. For example, people justify such things as government mandated health care and limits on soda sizes on the same grounds that others justify domestic spying: these things are supposed to protect citizens.
Bulk data collection is, obviously enough, an imposition on the moral right to privacy—though it could be argued that this harm is fairly minimal. There are, of course, also the practical costs in terms of resources that could be used elsewhere, such as in health care or other security programs. Weighing the one alleged success against these costs, it seems evident that the bulk data collection is immoral on utilitarian grounds—it does not do enough good to outweigh its moral cost.
Another stock approach to such matters is to forgo utilitarianism and argue the ethics in another manner, such as appealing to rights. In the case of bulk data collection, it can be argued that it violates the right to privacy and is thus wrong—its success or failure in practical terms is irrelevant. In the United States people often argue this way when it comes to gun rights—the right outweighs utilitarian considerations about the well-being of the public.
Rights are, of course, not absolute—everyone knows the example of how the right to free expression does not warrant slander or yelling “fire” in a crowded theater when there is no fire. So, it could be argued that the right of privacy can be imposed upon. Many stock arguments exist to justify such impositions and these typically rest either on utilitarian arguments or arguments showing that the right to privacy does not apply. For example, it is commonly argued that criminals lack a right to privacy in regards to their wicked deeds—that is, there is no moral right to secrecy in order to conceal immoral deeds. While these arguments can be used to morally justify collecting data from specific suspects, they do not seem to justify bulk data collection—unless it can be shown that all Americans have forfeited their right to privacy.
It would thus seem that the bulk data collection cannot be justified on moral grounds. As a general rule, I favor the view that there is a presumption in favor of the citizen: the state needs a moral justification to impose on the citizen and it should not be assumed the state has a right to act unless the citizen can prove differently. This is, obviously enough, analogous to the presumption of innocence in the American legal system.
In regards to the legality of the matter, the specific law in question has been addressed. In terms of bulk data collection in general, the answer seems quite obvious. While I am obviously not a constitutional scholar, bulk data collection seems to be a clear and egregious violation of the 4th Amendment: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
The easy and obvious counter is to point out that I, as I said, am not a constitutional scholar or even a lawyer. As such, my assessment of the 4th Amendment is lacking the needed professional authority. This is, of course, true—which is why this matter needs to be addressed by the Supreme Court.
In sum, there seems to be no practical, moral or legal justification for such bulk data collection by the state and hence it should not be permitted. This is my position as a philosopher and the 2016 Uncandidate.
While Aristotle was writing centuries before the rise of wearable technology, his view of moral education provides a solid foundation for the theory behind what I like to call the benign tyranny of the device. Or, if one prefers, the bearable tyranny of the wearable.
In his Nicomachean Ethics Aristotle addressed the very practical problem of how to make people good. He was well aware that merely listening to discourses on morality would not make people good. In a very apt analogy, he noted that such people would be like invalids who listened to their doctors, but did not carry out their instructions—they would get no benefit.
His primary solution to the problem is one that is routinely endorsed and condemned today: to use the compulsive power of the state to make people behave well and thus become conditioned in that behavior. Obviously, most people are quite happy to have the state compel people to act as they would like them to act; yet equally unhappy when it comes to the state imposing on them. Aristotle was also well aware of the importance of training people from an early age—something later developed by the Nazis and Madison Avenue.
While there have been some attempts in the United States and other Western nations to use the compulsive power of the state to force people to engage in healthy practices, these have been fairly unsuccessful and are usually opposed as draconian violations of the liberty to be out of shape. While the idea of a Fitness Force chasing people around to make them exercise amuses me, I certainly would oppose such impositions on both practical and moral grounds. However, most people do need some external coercion to force them to engage in healthy behavior. Those who are well-off can hire a personal trainer and a fitness coach. Those who are less well off can appeal to the tyranny of friends who are already self-tyrannizing. However, there are many obvious problems with relying on other people. This is where the tyranny of the device comes in.
While the quantified life via electronics is in its relative infancy, there is already a multitude of devices ranging from smart fitness watches, to smart plates, to smart scales, to smart forks. All of these devices offer measurements of activities to quantify the self and most of them offer coercion ranging from annoying noises, to automatic social media posts (“today my feet did not patter, so now my ass grows fatter”), to the old school electric shock (really).
While the devices vary in their specifics, Aristotle laid out the basic requirements back when lightning was believed to come from Zeus. Aristotle noted that a person must do no wrong either with or against one’s will. In the case of fitness, this would be acting in ways contrary to health.
What is needed, according to Aristotle, is “the guidance of some intelligence or right system that has effective force.” The first part of this is that the device or app must be the “right system.” That is to say, the device must provide correct guidance in terms of health and well-being. Unfortunately, health is often ruled by fad and not actual science.
The second part of this is the matter of “effective force.” That is, the device or app must have the power to compel. Aristotle noted that individuals lacked such compulsive power, so he favored the power of law. Good law has practical wisdom and also compulsive force. However, unless the state is going to get into the business of compelling health, this option is out.
Interestingly, Aristotle claims that “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While this seems not entirely true, he did seem to be right in that people find the law less annoying than being bossed around by individuals acting as individuals (like that bossy neighbor telling you to turn down the music).
The same could be true of devices—while being bossed around by a person (“hey fatty, you’ve had enough ice cream, get out and run some”) would annoy most people, being bossed by an app or device could be far less annoying. In fact, most people are already fully conditioned by their devices—they obey every command to pick up their smartphones and pay attention to whatever is beeping or flashing. Some people do this even when doing so puts people at risk, such as when they are driving. This certainly provides a vast ocean of psychological conditioning to tap into, but for a better cause. So, instead of mindlessly flipping through Instagram or texting words of nothingness, a person would be compelled by her digital master to exercise more, eat less crap, and get more sleep. Soon the machine tyrants will have very fit hosts to carry them around.
So, Aristotle has provided the perfect theoretical foundation for designing the tyrannical device. To recap, it needs the following features:
- Practical wisdom: the health science for the device or app needs to be correct and the guidance effective.
- Compulsive power: the device or app must be able to compel the user effectively and make them obey.
- Not too annoying: while it must have compulsive power, this power must not generate annoyance that exceeds its ability to compel.
- A cool name.
So, get to work on those devices and apps. The age of machine tyranny is not going to impose itself. At least not yet.
After the financial class melted down the world economy, local governments faced an obvious reduction in their revenues. As the economy recovered under a Democratic president, the Republicans held onto or gained power in many state governments, such as my own adopted state of Florida. With laudable consistency with their professed ideology, Republicans routinely cut taxes for businesses, the well off and sometimes even almost everyone. While the theory seems to be that cutting taxes will increase the revenue for state and local governments, shockingly the opposite seems to happen: state and local governments find themselves running short of funds needed to meet the expenses of actually operating a civilization.
Being resourceful, local leaders seek other revenue streams in order to pay the bills. While cities like Ferguson provide well-known examples of a common “solution”, many cities and towns have embraced the practice of law enforcement as a revenue stream. While the general practice of getting revenue from law enforcement is nothing new, the extent to which some local governments rely on it is rather shocking. How the system works is also often shocking—it often amounts to a shakedown system one would expect to see in a corrupt country unfamiliar with the rule of law or the rights of citizens.
Since Ferguson, where Michael Brown was shot on August 9, 2014, has been the subject of extensive study, I will use the statistics from that town. Unfortunately, Ferguson does not appear to be unique or even unusual.
In 2013, Ferguson’s court dealt with 12,108 cases and 24,532 warrants. This works out to an average of 1.5 cases and 3 warrants per household in Ferguson. The fines and court fees that year totaled $2,635,400—making the municipal court the second largest revenue stream.
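The per-household averages can be checked with a little arithmetic; the household count of roughly 8,000 used below is an assumption inferred from the stated averages, not a figure from the source:

```python
# Sanity-check the per-household averages cited above.
# The household figure (~8,000) is inferred from the averages, not reported.
cases, warrants = 12_108, 24_532
households = 8_000

print(round(cases / households, 1))     # ~1.5 cases per household
print(round(warrants / households, 1))  # ~3.1 warrants per household
```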
It would certainly be one thing if these numbers were the result of the legitimate workings of the machinery of justice. That is, if the cases and warrants were proportional to the actual crimes being committed and that justice was being dispensed fairly. That is, the justice was just.
One point of concern that has been widely addressed in the national media is that the legal system seems to disproportionally target blacks. In Ferguson, as in many places, the majority of the cases handled by the court arise from car stops. Ferguson is 29% white, but whites make up only 12.7% of those stopped. When a person is stopped, a black citizen will be searched 12.1% of the time, while a white citizen will be searched 6.9% of the time. In terms of arrest, a black citizen was arrested 10.4% of the time and a white citizen was arrested 5.2% of the time.
One stock reply to such figures is the claim that blacks commit more crimes than whites. If it were true that blacks were being arrested in proportion to the rate at which they were committing crimes, then this would be (on the face of it) fair. However, this does not seem to be the case. Interestingly, even though blacks were more likely to be searched, searches of black citizens turned up contraband only 21.7% of the time, while searches of white citizens turned up contraband 34.0% of the time. Also, 93% of those arrested in Ferguson were black. While certainly not impossible, it seems somewhat odd that 93% of the crime committed in the city was committed by black citizens.
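One way to make the disparity concrete is to compare the search rates with the “hit rates” (the share of searches that actually found contraband). This sketch uses only the percentages quoted above:

```python
# Compare how often each group is searched with how often those
# searches actually find contraband (the "hit rate").
black_search_rate, white_search_rate = 0.121, 0.069
black_hit_rate, white_hit_rate = 0.217, 0.340

# Black citizens were searched roughly 1.75 times as often...
print(round(black_search_rate / white_search_rate, 2))  # ~1.75

# ...yet searches of white citizens were roughly 1.57 times as
# likely to actually find contraband.
print(round(white_hit_rate / black_hit_rate, 2))  # ~1.57
```

If searches were driven purely by evidence of wrongdoing, one would expect the more-searched group to have the higher hit rate; the reverse pattern is what suggests the searches were not proportional to actual offending.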
Naturally, these numbers can be talked around or even explained away. It could be argued that blacks are not being targeted as a specific source of revenue and that the arrest rates are proportional and just. Even if that were granted, it would still leave the matter of a legal system focused on generating revenue.
Laying aside all talk of race, Ferguson stands out as an example of how law enforcement can turn into a collection system. One key component is, of course, having a system of high fines. For example, Ferguson had a $531 fine for high grass and weeds, $792 for Failure to Obey, $527 for Failure to Comply, $427 for a Peace Disturbance violation, and so on.
If a person can pay, then the person is not arrested. But, if a person cannot afford the fine, then an arrest warrant is issued—this is the second part of the system. The city issued 32,975 arrest warrants for minor offenses in 2013—and the city has a population of 21,000 people.
After a person is arrested, she faces even more fees, such as the obvious court fees, and these can quickly pile up. For example, a person might get a $150 parking ticket that she cannot pay. She is then arrested and subject to more fees and more charges. This initial ticket might grow into a debt of almost $1,000 to the city. Given that the people who tend to be targeted are poor, it is likely they will not be able to pay the initial ticket. They will then be arrested, which could cost them their job, thus making them unable to pay their court fees. This could easily spiral into a court-inflicted cycle of poverty and debt. This, obviously enough, is not what the legal system is supposed to do.
From a moral standpoint, one main problem with using this sort of law enforcement as a revenue stream is the damage it does to the citizens who cannot afford the fines and fees. As noted in the example above, a person could find her life ruined by a single parking ticket. The point of law enforcement in a just society is to protect the citizens from harm, not ruin them.
A second point of moral concern is that this sort of system is racketeering—it puts forth a threat of arrest and court fees, and then offers “protection” from that threat in return for a fee. That is, citizens are pressured into buying their way out of a greater harm. This is hardly justice. If it were practiced by anyone else, it would be criminal racketeering and a protection scheme.
A third point of moral concern is that the system of exploiting the citizens by force and threat of force damages the fundamental relation between the citizen and the democratic state. In feudal states and in the domains of warlords, one expects the thugs of the warlords to shake down the peasants. However, that sort of thing is contrary to the nature of a democratic state. As happened during the revolts against feudalism and warlords, people will rise up against such oppression—and this is to be expected. Robin Hood is, after all, the hero and the Sheriff of Nottingham is the villain.
This is not to say that there should not be fines, penalties, and punishments. However, they should be proportional to the offenses, they should be fairly applied, and they should be aimed at protecting the citizens, not filling the coffers of the kingdom. As a final point, we should certainly not be cutting the taxes of the well-off and then slamming the poor with the cost of doing so. That is certainly unjust and will, intended or not, have dire social consequences.
America, it has been said, needs to be taken back. Or held onto. Or taken on a long walk on the beach. Whatever the metaphor, there is a pack of Republicans competing madly for the chance to be put down by Hillary Clinton. Among Democrats, only the bold Bernie Sanders has dared to challenge the Clinton machine. He will be missed.
One narrative put forth by some Republican candidates is the need for someone not beholden to special interests, an outsider who is for the people. That seems reasonable. Looking around, I don’t see too many of those. None actually.
Among the people (that is, us) there are longstanding complaints about the nature of politicians, and folks regularly condemn the standard activities and traits of the political class. People ask why the sort of candidates they claim to really want never run, and then they vote for more of the same politicians.
It is time that America had a true choice. A choice not just between candidates of the two political machines, but between actual candidates and an uncandidate. I am Mike LaBossiere and I am your 2016 Uncandidate.
It might be wondered what it is to be a presidential uncandidate. One defining characteristic is the inability to win the election, but there is obviously more to it than that. Otherwise Ted Cruz and Mike Huckabee would also be uncandidates.
What truly makes an uncandidate is that he exemplifies what voters claim they want, qualities that would assure catastrophic defeat in the election. I’ll run through a few of these and show you why I am an uncandidate for 2016. You can decide if you’d like to be one, too.
One of the main complaints about politicians is that they are beholden to the money that buys them the elections. As an uncandidate, I have a clear message: do not send me your money. If you are like most people, you need your money. If you are a billionaire or PACmaster, I am not for sale. If you find you have some extra cash that you do not need, consider asking a local teacher if she needs some supplies for her classroom or donating to the local food bank or animal shelter. Do some good for those who do good.
My unwillingness to accept money guarantees defeat in the political arena—the presidency is now a billion-dollar-plus purchase. But, I am an uncandidate.
People also complain about the negativity of campaigns. While I will be critical of candidates, I will not engage in fear mongering, scare tactics, or straw man attacks delivered through slickly produced scary ads. Part of this is because of the obvious—I have no money to do such things. But part of it is also a matter of ethics—I learned in sports that one should win fairly by being better, not by whispering hate and lies from the shadows.
Since I teach critical thinking, I know that people are hardwired to give more weight to the negative. This is, in fact, a form of cognitive bias—an unconscious tendency known as negativity bias. So, by abandoning negativity, I toss aside one of the sharper swords in the arsenal of the true politicians.
Interestingly enough, folks also complain that they do not know much about the candidates. Fortunately, I have been writing on this blog since 2007 and have written a pile of books. My positions on a multitude of issues are right here. I was also born in the United States, specifically in Maine. The blackflies will back me up on this. While willing to admit errors, I obviously do not shift my views around to pander. This is obviously not what a proper candidate who wants to win would do.
Apparently being an outsider is big these days. I think I went to Washington once as a kid, and I have never held political office. So I am clearly an outsider. For real. Often, when a person claims to be an outsider, it is like in that horror movie—the call is coming from inside the house (or the senate). Obviously enough, being connected is critical to being elected—I’m unconnected and will remain unelected.
Finally, folks are getting around to talking about how important the middle class is. While millionaires do claim to understand the middle class, I am actually middle class. Feel free to make comments about my class or lack thereof. I drive a 2001 Toyota Tacoma and paid $72,000 for my house back in the 1990s. Since I live the problems of the middle class, I get those problems. The presidency is, obviously enough, not for the middle class.
So, I announce my uncandidacy for 2016. I am not running for President because 1) I actually have a job and 2) I would totally lose. But, I encourage everyone to become an uncandidate—to be what we say we want our leaders to be (yet elect people who are not that anyway).
I’ll be unrunning my uncampaign online throughout 2015 and 2016. Because that is free.
Remember: do NOT give me money.
You can, however, buy my books.