If you have made a mistake, do not be afraid of admitting the fact and amending your ways.
I never make the same mistake twice. Unfortunately, there are an infinite number of mistakes. So, I keep making new ones. Fortunately, philosophy is rather helpful in minimizing the impact of mistakes and learning that crucial aspect of wisdom: not committing the same error over and over.
One key aspect to avoiding the repetition of errors is skill in critical thinking. While critical thinking has become something of a buzzword-bloated fad, the core of it remains as important as ever. The core is, of course, the methods of rationally deciding whether a claim should be accepted as true, rejected as false, or whether judgment regarding that claim should be suspended. Learning the basic mechanisms of critical thinking (which include argument assessment, fallacy recognition, credibility evaluation, and causal reasoning) is relatively easy—reading through the readily available quality texts on such matters will provide the basic tools. But, as with carpentry or plumbing, merely having a well-stocked tool kit is not enough. A person must also know when to use a tool and have the skill to use it properly. Gaining knowledge and skill is usually difficult and, at the very least, takes time and practice. This is why people who merely grind through a class on critical thinking or flip through a book on fallacies do not suddenly become good at thinking. After all, no one would expect a person to become a skilled carpenter merely by reading a DIY book or watching a few hours of videos on YouTube.
Another key factor in avoiding the repetition of mistakes is the ability to admit that one has made a mistake. There are many “pragmatic” reasons to avoid admitting mistakes. Public admission to a mistake can result in liability, criticism, damage to one’s reputation and other such harms. While we have sayings that promise praise for those who admit error, the usual practice is to punish such admissions—and people are often quick to learn from such punishments. While admitting the error only to yourself will avoid the public consequences, people are often reluctant to do this. After all, such an admission can damage a person’s pride and self-image. Denying error and blaming others is usually easier on the ego.
The obvious problem with refusing to admit to errors is that this will tend to keep a person from learning from her mistakes. If a person recognizes an error, she can try to figure out why she made that mistake and consider ways to avoid making the same sort of error in the future. While new errors are inevitable, repeating the same errors over and over due to a willful ignorance is either stupidity or madness. There is also the ethical aspect of the matter—being accountable for one’s actions is a key part of being a moral agent. Saying “mistakes were made” is a denial of agency—to cast oneself as an object swept along by the river of fate rather than an agent rowing upon the river of life.
In many cases, a person cannot avoid the consequences of his mistakes. Those that strike, perhaps literally, like a pile of bricks, are difficult to ignore. Feeling the impact of these errors, a person might be forced to learn—or be brought to ruin. The classic example is the hot stove—a person learns from one touch because the lesson is so clear and painful. However, more complicated matters, such as a failed relationship, allow a person room to deny his errors.
If the negative consequences of his mistakes fall entirely on others and he is never called to task for these mistakes, a person can keep on making the same mistakes over and over. After all, he does not even get the teaching sting of pain trying to drive the lesson home. One good example of this is the political pundit—pundits can be endlessly wrong and still keep on expressing their “expert” opinions in the media. Another good example of this is in politics. Some of the people who brought us the Iraq war are part of Jeb Bush’s presidential team. Jeb, infamously, recently said that he would have gone to war in Iraq even knowing what he knows now. While he endeavored to awkwardly walk that back, it might be suspected that his initial answer was the honest one. Political parties can also embrace “solutions” that have never worked and relentlessly apply them whenever they get into power—other people suffer the consequences while the politicians generally do not directly reap the consequences of bad policies. They do, however, routinely get in trouble for mistakes in their personal lives (such as affairs) that have no real consequences outside of that private sphere.
While admitting to an error is an important first step, it is not the end of the process. After all, merely admitting I made a mistake will not do much to help me avoid that mistake in the future. What is needed is an honest examination of the mistake—why and how it occurred. This needs to be followed by an honest consideration of what can be changed to avoid that mistake in the future. For example, a person might realize that his relationships ended badly because he made the mistake of rushing into a relationship too quickly—getting seriously involved without actually developing a real friendship.
To steal from Aristotle, merely knowing the cause of the error and how to avoid it in the future is not enough. A person must have the will and ability to act on that knowledge, and this requires the development of character. Fortunately, Aristotle presented a clear guide to developing such character in his Nicomachean Ethics. Put rather simply, a person must do the deeds of the sort of person she wishes to be and stick with this until it becomes a matter of habit (and thus character). That is, a person must, as Aristotle argued, become a philosopher. Or be ruled by another who can compel correct behavior, such as the state.
While Aristotle was writing centuries before the rise of wearable technology, his view of moral education provides a solid foundation for the theory behind what I like to call the benign tyranny of the device. Or, if one prefers, the bearable tyranny of the wearable.
In his Nicomachean Ethics Aristotle addressed the very practical problem of how to make people good. He was well aware that merely listening to discourses on morality would not make people good. In a very apt analogy, he noted that such people would be like invalids who listened to their doctors but did not carry out their instructions—they would get no benefit.
His primary solution to the problem is one that is routinely endorsed and condemned today: to use the compulsive power of the state to make people behave well and thus become conditioned in that behavior. Obviously, most people are quite happy to have the state compel people to act as they would like them to act; yet equally unhappy when it comes to the state imposing on them. Aristotle was also well aware of the importance of training people from an early age—something later developed by the Nazis and Madison Avenue.
While there have been some attempts in the United States and other Western nations to use the compulsive power of the state to force people to engage in healthy practices, these have been fairly unsuccessful and are usually opposed as draconian violations of the liberty to be out of shape. While the idea of a Fitness Force chasing people around to make them exercise amuses me, I certainly would oppose such impositions on both practical and moral grounds. However, most people do need some external coercion to force them to engage in healthy behavior. Those who are well-off can hire a personal trainer and a fitness coach. Those who are less well off can appeal to the tyranny of friends who are already self-tyrannizing. However, there are many obvious problems with relying on other people. This is where the tyranny of the device comes in.
While the quantified life via electronics is in its relative infancy, there is already a multitude of devices ranging from smart fitness watches, to smart plates, to smart scales, to smart forks. All of these devices offer measurements of activities to quantify the self and most of them offer coercion ranging from annoying noises, to automatic social media posts (“today my feet did not patter, so now my ass grows fatter”), to the old school electric shock (really).
While the devices vary in their specifics, Aristotle laid out the basic requirements back when lightning was believed to come from Zeus. Aristotle noted that a person must do no wrong either with or against one’s will. In the case of fitness, this would be acting in ways contrary to health.
What is needed, according to Aristotle, is “the guidance of some intelligence or right system that has effective force.” The first part of this is that the device or app must be the “right system.” That is to say, the device must provide correct guidance in terms of health and well-being. Unfortunately, health is often ruled by fad and not actual science.
The second part of this is the matter of “effective force.” That is, the device or app must have the power to compel. Aristotle noted that individuals lacked such compulsive power, so he favored the power of law. Good law has practical wisdom and also compulsive force. However, unless the state is going to get into the business of compelling health, this option is out.
Interestingly, Aristotle claims that “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While this seems not entirely true, he did seem to be right in that people find the law less annoying than being bossed around by individuals acting as individuals (like that bossy neighbor telling you to turn down the music).
The same could be true of devices—while being bossed around by a person (“hey fatty, you’ve had enough ice cream, get out and run some”) would annoy most people, being bossed by an app or device could be far less annoying. In fact, most people are already fully conditioned by their devices—they obey every command to pick up their smartphones and pay attention to whatever is beeping or flashing. Some people do this even when doing so puts people at risk, such as when they are driving. This certainly provides a vast ocean of psychological conditioning to tap into, but for a better cause. So, instead of mindlessly flipping through Instagram or texting words of nothingness, a person would be compelled by her digital master to exercise more, eat less crap, and get more sleep. Soon the machine tyrants will have very fit hosts to carry them around.
So, Aristotle has provided the perfect theoretical foundation for designing the tyrannical device. To recap, it needs the following features:
- Practical wisdom: the health science for the device or app needs to be correct and the guidance effective.
- Compulsive power: the device or app must be able to compel the user effectively and make them obey.
- Not too annoying: while it must have compulsive power, this power must not generate annoyance that exceeds its ability to compel.
- A cool name.
So, get to work on those devices and apps. The age of machine tyranny is not going to impose itself. At least not yet.
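For the suitably ambitious, the recap above could even be sketched in code. What follows is a purely hypothetical toy, not a real product: the class name, step goal, and escalation ladder are all invented for illustration, and the "health science" is stubbed in as a single number.

```python
# A toy sketch of the Aristotelian tyrannical device.
# Everything here (names, thresholds, nags) is hypothetical.

class BenignTyrant:
    """A 'tyrannical device' embodying the three requirements."""

    def __init__(self, daily_step_goal=8000):
        # Practical wisdom: the guidance must be correct, so in a real
        # device this goal would come from sound health science, not fad.
        self.daily_step_goal = daily_step_goal
        # Compulsive power: an escalation ladder of nags, ordered so the
        # annoyance never exceeds what is needed to compel.
        self.nags = ["gentle chime", "shaming social-media post", "electric shock"]

    def nudge(self, steps_today, missed_days):
        """Return today's response, escalating with consecutive failures."""
        if steps_today >= self.daily_step_goal:
            return "praise"  # no compulsion needed; virtue is becoming habit
        # Cap escalation at the harshest nag on the ladder.
        level = min(missed_days, len(self.nags) - 1)
        return self.nags[level]

# The cool name is, of course, the fourth requirement.
tyrant = BenignTyrant()
print(tyrant.nudge(9000, 0))  # praise
print(tyrant.nudge(2000, 0))  # gentle chime
print(tyrant.nudge(2000, 5))  # electric shock
```

The design choice worth noting is the cap on escalation: per the "not too annoying" requirement, the device tops out rather than nagging without limit.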
While the notion of punishing machines for misdeeds has received some attention in science fiction, it seems worthwhile to take a brief philosophical look at this matter. This is because the future, or so some rather smart people claim, will see the rise of intelligent machines—machines that might take actions that would be considered misdeeds or crimes if committed by a human (such as the oft-predicted genocide).
In general, punishment is aimed at one or more of the following goals: retribution, rehabilitation, or deterrence. Each of these goals will be considered in turn in the context of machines.
Roughly put, punishment for the purpose of retribution is aimed at paying an agent back for wrongdoing. This can be seen as a form of balancing the books: the punishment inflicted on the agent is supposed to pay the debt it has incurred by its misdeed. Reparation can, to be a bit sloppy, be included under retribution—at least in the sense of the repayment of a debt incurred by the commission of a misdeed.
While a machine can be damaged or destroyed, there is clearly the question about whether it can be the target of retribution. After all, while a human might kick her car for breaking down on her or smash his can opener for cutting his finger, it would be odd to consider this retributive punishment. This is because retribution would seem to require that a wrong has been done by an agent, which is different from the mere infliction of harm. Intuitively, a piece of glass can cut my foot, but it cannot wrong me.
If a machine can be an agent, which was discussed in an earlier essay, then it would seem to be able to do wrongful deeds and thus be a potential candidate for retribution. However, even if a machine had agency, there is still the question of whether or not retribution would really apply. After all, retribution requires more than just agency on the part of the target. It also seems to require that the target can suffer from the payback. On the face of it, a machine that could not suffer would not be subject to retribution—since retribution seems to be based on doing a “righteous wrong” to the target. To illustrate, suppose that an android injured a human, costing him his left eye. In retribution, the android’s left eye is removed. But, the android does not suffer—it does not feel any pain and is not bothered by the removal of its eye. As such, the retribution would be pointless—the books would not be balanced.
This could be countered by arguing that the target of the retribution need not suffer—what is required is merely the right sort of balancing of the books, so to speak. So, in the android case, removal of the android’s eye would suffice, even if the android did not suffer. This does have some appeal since retribution against humans does not always require that the human suffer. For example, a human might break another human’s iPad and have her iPad broken in turn, but not care at all. The requirements of retribution would seem to have been met, despite the lack of suffering.
Punishment for rehabilitation is intended to transform wrongdoers so that they will no longer be inclined to engage in the wrongful behavior that incurred the punishment. This differs from punishment aimed at deterrence, which aims at providing the target with a reason not to engage in the misdeed in the future. Rehabilitation is also aimed at the agent who did the misdeed, whereas punishment for the sake of deterrence often aims at affecting others as well.
Obviously enough, a machine that lacks agency cannot be subject to rehabilitative punishment—it cannot “earn” such punishment by its misdeeds and, presumably, cannot have its behavioral inclinations corrected by such punishment.
To use an obvious example, if a computer crashes and destroys a file that a person had been working on for hours, punishing the computer in an attempt to rehabilitate it would be pointless. Not being an agent, it did not “earn” the punishment and punishment will not incline it to crash less in the future.
A machine that possesses agency could “earn” punishment by its misdeeds. It also seems possible to imagine a machine that could be rehabilitated by punishment. For example, one could imagine a robot dog that could be trained in the same way as a real dog—after leaking oil in the house or biting the robo-cat and being scolded, it would learn not to do those misdeeds again.
It could be argued that it would be better, both morally and practically, to build machines that would learn without punishment or to teach them without punishing them. After all, though organic beings seem to be wired in a way that requires that we be trained with pleasure and pain (as Aristotle would argue), there might be no reason that our machine creations would need to be the same way. But, perhaps, it is not just a matter of the organic—perhaps intelligence and agency require the capacity for pleasure and pain. Or perhaps not. Or it might simply be the only way that we know how to teach—we will be, by our nature, cruel teachers of our machine children.
Then again, we might be inclined to regard a machine that does misdeeds as being defective and in need of repair rather than punishment. If so, such machines would be “refurbished” or reprogrammed rather than rehabilitated by punishment. There are those who think the same of human beings—and this would raise the same sort of issues about how agents should be treated.
The purpose of deterrence is to motivate the agent who did the misdeed and/or other agents not to commit that deed. In the case of humans, people argue in favor of capital punishment because of its alleged deterrence value: if the state kills people for certain crimes, people are less likely to commit those crimes.
As with other forms of punishment, deterrence requires agency: the punished target must merit the punishment and the other targets must be capable of changing their actions in response to that punishment.
Deterrence, obviously enough, does not work in regards to non-agents. For example, if a computer crashes and wipes out a file a person has been laboring on for hours, punishing it will not deter it. Smashing it in front of other computers will not deter them.
A machine that had agency could “earn” such punishment by its misdeeds and could, in theory, be deterred. The punishment could also deter other machines. For example, imagine a combat robot that performed poorly in its mission (or showed robo-cowardice). Punishing it could deter it from repeating that behavior, and the punishment could serve as a warning, and thus a deterrence, to other combat robots.
Punishment for the sake of deterrence raises the same sort of issues as punishment aimed at rehabilitation, such as the notion that it might be preferable to repair machines that engage in misdeeds rather than punishing them. The main differences are, of course, that deterrence is not aimed at making the target inclined to behave well, just to disincline it from behaving badly and that deterrence is also aimed at those who have not committed the misdeed.
In general, people suffer from a wide range of cognitive biases. One of these is known as negativity bias and it is manifested by the tendency people have to give more weight to the negative than to the positive. For example, people tend to weigh the wrongs done to them more heavily than the good done to them. As another example, people tend to be more swayed by negative political advertisements than by positive ones. This bias can also have an impact on education.
A colleague of mine asks his logic students each semester how many of them are planning on law school. In the past, he had many students. Now, the number is considerably less. Curious about this, he checked and found that logic had switched from being a requirement for pre-law to being a mere recommendation. My colleague noted that it seemed irrational for students who plan on taking the LSAT and becoming lawyers to avoid the logic class, given that the LSAT is largely a logic test and that law school requires skill in logic. He made the point that students often prefer to avoid the useful when it is not required and only grudgingly take what is required. We discussed a bit how this relates to the negativity bias: a student who did not take the logic class when it was required would be punished by being unable to graduate. Now that the class is optional, there is only the positive benefit of a likely improvement on the LSAT and better performance in law school. Since people weigh punishments more than rewards, this behavior makes sense—but is still irrational. Especially since many of the students who skip the logic class will end up spending money taking LSAT preparation classes that will endeavor to spackle over their lack of skills in logic.
I have seen a similar sort of thing in my own classes. At my university, policy allows us to lower student grades on the basis of a lack of attendance. We are even permitted to fail a student for excessive absences. While attendance is mandatory in my classes, I do not have a special punishment for missing class. Not surprisingly, when the students figure this out around week three or four, attendance plummets and then stabilizes at a low level. Before I used BlackBoard for quizzes, exams and for turning in assignments and papers, attendance would spike back up for days on which something had to be done in class. Since students can now do their work via BlackBoard, these spikes are gone. They are, however, replaced by post-exam spikes when students do badly on the exams because they have not been in class. Then attendance slumps again. Interestingly, students often claim that they think the class is interesting and useful. But, since there is no direct and immediate punishment for not attending (just a delayed “punishment” in terms of lower grades and a lack of learning), many students are not motivated to attend class.
Naturally, I do consider the possibility that I am a bad professor who is teaching a subject that students regard as useless or boring. However, my evaluations are consistently good, former students have returned to say good things about me and my classes, and so on. That said, perhaps I am merely deluding myself and being humored. In any case, it is easy enough to draw an analogy to exercise: exercise does not provide immediate rewards and there is no immediate punishment for not staying fit—just a loss of benefits. Most people elect to under-exercise or avoid exercise altogether. This, and similar things, shows that people generally avoid that which is difficult now but yields lasting benefits later.
I have, of course, considered going to the punishment model for my classes. However, I have resisted this for a variety of reasons. The first is that my personality is such that I am more inclined to offer benefits rather than punishments. This seems to be a clear mistake given the general psychology of people. The second is that I believe in free choice: like God, I think people should be free to make bad choices and not be coerced into doing what is right. It has to be a free choice. Naturally, choosing poorly brings its own punishment—albeit later on. The third is the hassle of dealing with attendance: the paperwork, having to handle excuses, being lied to regularly and so on. The fourth is the fact that classes are generally better for the good students when the students who do not want to be in class elect not to attend. While I want everyone to learn, I would rather have the people who would prefer not to learn not be in class disrupting the learning of others—college is not the place where the educator should have to spend time dealing with behavioral issues in the classroom. The fifth is that I prefer to reduce the amount of lying that students think they have to engage in.
In terms of why I have been considering using the punishment model, there are three reasons. One is that if students are compelled to attend, they might very well inadvertently learn something. The second is that this model is a lesson for what the workplace will be like for most of the students—so habituating them to this (or, rather, keeping the habituation they should have acquired in K-12) would be valuable. After all, they will probably need to endure awful jobs until they retire or die. The third is that perhaps many people lack the discipline to do what they should and they simply must be compelled by punishment—this is, of course, the model put forth by thinkers like Aristotle and Hobbes.
As I write this, it is finals week. Obviously, one of my last duties in regards to a class is to record the grade for each student, be it an A, B, C, D or F. Or the newer option, WF. For those not in the know, an F grade is what a student gets when she fails the course (I do not fail students; I merely record their failure). A WF is a sort-of-new thing in which a student fails by “walking away” from the course. To be specific, if a student earns an F but last attended only prior to the withdrawal deadline (November 8 this year), then the student gets a WF.
The distinction is rather important: if a student earns an F, she fails but gets to keep the financial aid for the course. If a student gets a WF, then she (or the university) has to pay the money back. In order for financial aid to be released, a student has to attend at least once. To keep it, a student needs to attend once more, after the withdrawal deadline (in theory, a student could just attend once, as long as it is after that deadline). Every semester I get at least one student who never attends class. Ever. I also always get 3-6 WF students. Some attend only one class, then never again. Others attend within two days of the withdrawal deadline and thus just miss keeping the money. Presumably enduring one more class with me is too much. Or perhaps they get the date wrong.
In addition to the WF policy, my university also has a general attendance policy. A student gets three unexcused absences without any consequences or questions. After that, faculty are permitted to impose penalties, such as lowering the overall grade one letter grade for each extra unexcused absence. Some faculty are very strict about this and require students to be on time and remain for the entire class, tracking each student as she enters and leaves the classroom. Woe to the student who misses too often, arrives too late or leaves too early: the F of doom looms.
I do keep track of attendance, mainly for two reasons. One is my own curiosity: how often do students show up? The other is for the purpose of distinguishing between the F and WF grades. Since money is on the line, I have to be sure to get the attendance right, although students do tend to try to sign in for their fellows.
I have never, however, lowered (or raised) a grade simply because of attendance. My general view has been that if a student can do the work and earn a grade, then that grade should not be arbitrarily lowered simply because the student failed to bask in my radiant knowledge (or shiver in my shadowy ignorance). I also take the view that the students are (in theory) adults and hence have the choice as to whether they wish to attend or not. If they elect to not attend and do not learn, then the grade they earn will reflect this. If they elect to not attend, yet still learn, then the grade they earn will reflect that. Some people like the customer metaphor: a student has bought a ticket to the show, but it is her choice to go or not. The seat is paid for, but the student is under no obligation to fill it. Naturally, if the student is attending on someone else’s dime, then this makes matters a bit more complex, especially if the student is expected to maintain a certain grade to keep the support.
Of course, there is something to be said for enforcing attendance with punishment. My experience, which matches the data from studies of human behavior, is that people weigh the negative more than the positive. In the case of a class, the (alleged) reward of education from attending has little impact on many students. However, the stick of failure for not attending is a strong motivator, especially for those who have little interest in education (as opposed to getting the paper to get the job to get the money…and then die). There is also the view that most people, even adults, must be ruled by pain rather than fine ideals or arguments (as per Aristotle). Less extreme, there is the view that college kids are just that, kids: many are incapable of using the freedom to attend or not attend wisely and hence the professor must use her wisdom to guide them to good behavior by punishing a failure to attend. It could even be argued that a professor, like a high school teacher or nanny, has a moral obligation to force students to attend for their own good.
I tend to go with God’s policy: people are free to do as they will, they get every chance, but they get what they earn.
In an earlier essay I looked at the matter of the ethics of overhead in regards to charities. In that essay, I focused on Dan Pallotta’s discussion of the matter and in this essay I will discuss the matter more generally.
While people do vary in their opinions of the matter, there does seem to be a general moral intuition that a charitable non-profit should have minimal overhead. The idea is, presumably, that the money should go to the charitable cause rather than to the cost of overhead. Thus, the idea is that the lower the overhead, the greater the virtue. In this context it is assumed that the overhead is generally legitimate (that is, the money for overhead is not simply wasted or misused).
The obvious way to discuss this matter in the context of ethics is to consider it within established approaches to ethics, specifically those of virtue theory, Kant and utilitarianism.
Borrowing from Aristotle and Aquinas, when assessing charity one needs to consider such factors as the object of the action, the circumstances of the action, and the end of the action. Aristotle, in defining what it is to act virtuously, puts considerable emphasis on the idea that a person must do the virtuous act for its own sake. Using the example of giving to charity, exercising the virtue of charity (or generosity) requires that the giving be done for the sake of giving. If, for example, I give for the sake of getting a tax break, then I am not exercising the virtue of charity. This would seem to provide some foundation for the intuition that charities should have low overhead. After all, for those engaged in the charitable function (be it a road race, a bake sale or something else) to be acting from the virtue of charity they would need to engage in the activity for its own sake. If, for example, I work for a charity to get a salary, then it would seem that I am not acting virtuously. As such, to be acting virtuously it would seem that those involved in a charity would need to be engaged in the charity for its own sake, which would certainly seem to involve the expectation that they make sacrifices for the charity since they are supposed to be acting for its sake and not for some other sake, such as making a large salary.
Not surprisingly, people are praised for making sacrifices for charity—be it a person who volunteers for free or a person who could be a CEO of a major corporation but instead works for a charity for a mere fraction of what she could make in the for-profit sector.
Kant claimed that what matters morally is the good will and not what the good will accomplishes. Roughly put, if a person wills the moral law, then that is what matters. Whether the person accomplishes anything practical or not is not relevant to the ethics of the matter. In the case of a charity, what would presumably matter is that a person wills in the appropriately good way; the consequences would not matter morally. This would certainly match the intuition about overhead: what matters in a charity is the good will of those involved, and this will is presumably shown by minimizing overhead and maximizing what goes to the charitable cause. Naturally, a person can will the good and also have success in terms of the consequences. However, people are praised for their intent. So, as Pallotta noted, those running a bake sale with a low overhead that raises a tiny amount of money are regarded as morally superior to those running a high-overhead event that raises a great deal of money. It is presumably assumed that those with the low overhead are focused on (willing) charity while those who are involved in the high-overhead operation are really concerned with their own income.
In the case of utilitarianism, the focus is not on the intentions of those involved nor on what they will or do not will. Rather, what matters is the consequences. On this moral view, it would certainly seem that a high overhead charity could be superior to a low overhead charity in terms of the consequences. In fact, Pallotta seems to be giving what amounts to a utilitarian argument: what matters is the overall consequences. On this view, a charity is assessed rather like any business: in terms of costs and benefits. So, for example, if a charity has large expenses in terms of salaries and promotions, yet successfully raises millions for charity, then it is better than a charity with tiny expenses that raises a tiny amount of money.
While it is tempting to claim that those operating from the utilitarian perspective would be doing so in a way that rejects the idea of the true virtue of charity, this need not be the case. Acting in a virtuous manner presumably does not require that a person act less effectively. As such, if a person accepts a large salary to work at a charity for the sake of the charity, then the person can still be regarded as virtuous, albeit well compensated for her virtue.
The obvious counter is that a person who was truly motivated by a sense of charity would accept a much lower salary so that more would go to charity. This is certainly a legitimate concern and raises the question of how much a person should sacrifice in order to be virtuous. In this case, a person who could make a huge salary effectively selling bottled water to the masses but instead elects to make a large salary effectively combating malaria could be regarded as virtuous—provided that she chose the one over the other for the sake of helping others. While a person who accepted a lower salary for doing the job could (and perhaps should) be regarded as more virtuous, it does seem misguided to automatically regard someone who is doing good as lacking virtue merely because they receive such compensation. If only from a practical sense, it seems like a good idea to reward people for doing what is good.
If, however, a person picks the charitable job for other reasons (such as location or to boost his image for a planned political run), then the person would not be acting virtuously even if he happened to do good. We do not, of course, always know what is motivating a person. This probably explains why people tend to praise charities with lower overhead—since those involved are obviously not getting anything for themselves (in terms of money), then they surely must be motivated by charity’s sake. Or so it is assumed.