While the notion of punishing machines for misdeeds has received some attention in science fiction, it seems worthwhile to take a brief philosophical look at this matter. This is because the future, or so some rather smart people claim, will see the rise of intelligent machines—machines that might take actions that would be considered misdeeds or crimes if committed by a human (such as the oft-predicted genocide).
In general, punishment is aimed at one or more of the following goals: retribution, rehabilitation, or deterrence. Each of these goals will be considered in turn in the context of machines.
Roughly put, punishment for the purpose of retribution is aimed at paying an agent back for wrongdoing. This can be seen as a form of balancing the books: the punishment inflicted on the agent is supposed to pay the debt it has incurred by its misdeed. Reparation can, to be a bit sloppy, be included under retribution—at least in the sense of the repayment of a debt incurred by the commission of a misdeed.
While a machine can be damaged or destroyed, there is clearly the question of whether it can be the target of retribution. After all, while a human might kick her car for breaking down on her or smash her can opener for cutting her finger, it would be odd to consider this retributive punishment. This is because retribution would seem to require that a wrong has been done by an agent, which is different from the mere infliction of harm. Intuitively, a piece of glass can cut my foot, but it cannot wrong me.
If a machine can be an agent, which was discussed in an earlier essay, then it would seem to be able to do wrongful deeds and thus be a potential candidate for retribution. However, even if a machine had agency, there is still the question of whether or not retribution would really apply. After all, retribution requires more than just agency on the part of the target. It also seems to require that the target can suffer from the payback. On the face of it, a machine that could not suffer would not be subject to retribution—since retribution seems to be based on doing a “righteous wrong” to the target. To illustrate, suppose that an android injured a human, costing him his left eye. In retribution, the android’s left eye is removed. But, the android does not suffer—it does not feel any pain and is not bothered by the removal of its eye. As such, the retribution would be pointless—the books would not be balanced.
This could be countered by arguing that the target of the retribution need not suffer—what is required is merely the right sort of balancing of the books, so to speak. So, in the android case, removal of the android’s eye would suffice, even if the android did not suffer. This does have some appeal since retribution against humans does not always require that the human suffer. For example, a human might break another human’s iPad and have her iPad broken in turn, but not care at all. The requirements of retribution would seem to have been met, despite the lack of suffering.
Punishment for rehabilitation is intended to transform wrongdoers so that they will no longer be inclined to engage in the wrongful behavior that incurred the punishment. This differs from punishment aimed at deterrence, which aims at providing the target with a reason not to engage in the misdeed in the future. Rehabilitation is also aimed at the agent who did the misdeed, whereas punishment for the sake of deterrence often aims at affecting others as well.
Obviously enough, a machine that lacks agency cannot be subject to rehabilitative punishment—it cannot “earn” such punishment by its misdeeds and, presumably, cannot have its behavioral inclinations corrected by such punishment.
To use an obvious example, if a computer crashes and destroys a file that a person had been working on for hours, punishing the computer in an attempt to rehabilitate it would be pointless. Not being an agent, it did not “earn” the punishment and punishment will not incline it to crash less in the future.
A machine that possesses agency could “earn” punishment by its misdeeds. It also seems possible to imagine a machine that could be rehabilitated by punishment. For example, one could imagine a robot dog that could be trained in the same way as a real dog—after leaking oil in the house or biting the robo-cat and being scolded, it would learn not to do those misdeeds again.
It could be argued that it would be better, both morally and practically, to build machines that would learn without punishment or to teach them without punishing them. After all, though organic beings seem to be wired in a way that requires that we be trained with pleasure and pain (as Aristotle would argue), there might be no reason that our machine creations would need to be the same way. But, perhaps, it is not just a matter of the organic—perhaps intelligence and agency require the capacity for pleasure and pain. Or perhaps not. Or it might simply be the only way that we know how to teach—we will be, by our nature, cruel teachers of our machine children.
Then again, we might be inclined to regard a machine that does misdeeds as being defective and in need of repair rather than punishment. If so, such machines would be “refurbished” or reprogrammed rather than rehabilitated by punishment. There are those who think the same of human beings—and this would raise the same sort of issues about how agents should be treated.
The purpose of deterrence is to motivate the agent who did the misdeed and/or other agents not to commit that deed. In the case of humans, people argue in favor of capital punishment because of its alleged deterrence value: if the state kills people for certain crimes, people are less likely to commit those crimes.
As with other forms of punishment, deterrence requires agency: the punished target must merit the punishment and the other targets must be capable of changing their actions in response to that punishment.
Deterrence, obviously enough, does not work in regards to non-agents. For example, if a computer crashes and wipes out a file a person has been laboring on for hours, punishing it will not deter it. Smashing it in front of other computers will not deter them.
A machine that had agency could “earn” such punishment by its misdeeds and could, in theory, be deterred. The punishment could also deter other machines. For example, imagine a combat robot that performed poorly in its mission (or showed robo-cowardice). Punishing it could deter it from doing that again, and it could serve as a warning, and thus a deterrence, to other combat robots.
Punishment for the sake of deterrence raises the same sort of issues as punishment aimed at rehabilitation, such as the notion that it might be preferable to repair machines that engage in misdeeds rather than punishing them. The main differences are, of course, that deterrence is not aimed at making the target inclined to behave well, just at disinclining it from behaving badly, and that deterrence is also aimed at those who have not committed the misdeed.
In general, people suffer from a wide range of cognitive biases. One of these is known as negativity bias and it is manifested by the tendency people have to give more weight to the negative than to the positive. For example, people tend to weigh the wrongs done to them more heavily than the good done to them. As another example, people tend to be more swayed by negative political advertisements than by positive ones. This bias can also have an impact on education.
A colleague of mine asks his logic students each semester how many of them are planning on law school. In the past, he had many students. Now, the number is considerably smaller. Curious about this, he checked and found that logic had switched from being a requirement for pre-law to being a mere recommendation. My colleague noted that it seemed irrational for students who plan on taking the LSAT and becoming lawyers to avoid the logic class, given that the LSAT is largely a logic test and that law school requires skill in logic. He made the point that students often prefer to avoid the useful when it is not required and only grudgingly take what is required. We discussed a bit how this relates to the negativity bias: a student who did not take the logic class when it was required would be punished by being unable to graduate. Now that the class is optional, there is only the positive benefit of a likely improvement on the LSAT and better performance in law school. Since people weigh punishments more than rewards, this behavior makes sense—but is still irrational, especially since many of the students who skip the logic class will end up spending money taking LSAT preparation classes that will endeavor to spackle over their lack of skills in logic.
I have seen a similar sort of thing in my own classes. My university’s policy allows us to lower student grades on the basis of a lack of attendance. We are even permitted to fail a student for excessive absences. While attendance is mandatory in my classes, I do not have a special punishment for missing class. Not surprisingly, when the students figure this out around week three or four, attendance plummets and then stabilizes at a low level. Before I used BlackBoard for quizzes, exams and for turning in assignments and papers, attendance would spike back up for days on which something had to be done in class. Since students can do their work via BlackBoard, these spikes are gone. They are, however, replaced by post-exam spikes when students do badly on the exams because they have not been in class. Then attendance slumps again. Interestingly, students often claim that they think the class is interesting and useful. But, since there is no direct and immediate punishment for not attending (just a delayed “punishment” in terms of lower grades and a lack of learning), many students are not motivated to attend class.
Naturally, I do consider the possibility that I am a bad professor who is teaching a subject that students regard as useless or boring. However, my evaluations are consistently good, former students have returned to say good things about me and my classes, and so on. That said, perhaps I am merely deluding myself and being humored. In any case, it is easy enough to draw an analogy to exercise: exercise does not provide immediate rewards and there is no immediate punishment for not staying fit—just a loss of benefits. Most people elect to under-exercise or avoid it altogether. This, and similar things, does show that people generally avoid that which is difficult now but yields lasting benefits later.
I have, of course, considered going to the punishment model for my classes. However, I have resisted this for a variety of reasons. The first is that my personality is such that I am more inclined to want to offer benefits rather than punishments. This seems to be a clear mistake given the general psychology of people. The second is that I believe in free choice: like God, I think people should be free to make bad choices and not be coerced into doing what is right. It has to be a free choice. Naturally, choosing poorly brings its own punishment—albeit later on. The third is the hassle of dealing with attendance: the paperwork, having to handle excuses, being lied to regularly and so on. The fourth is the fact that classes are generally better for the good students when the students who do not want to be in class elect to not attend. While I want everyone to learn, I would rather have the people who would prefer not to learn not be in class disrupting the learning of others—college is not the place where the educator should have to spend time dealing with behavioral issues in the classroom. The fifth is that I prefer to reduce the amount of lying that students think they have to engage in.
In terms of why I have been considering using the punishment model, there are three reasons. One is that if students are compelled to attend, they might very well inadvertently learn something. The second is that this model is a lesson for what the workplace will be like for most of the students—so habituating them to this (or, rather, keeping the habituation they should have acquired in K-12) would be valuable. After all, they will probably need to endure awful jobs until they retire or die. The third is that perhaps many people lack the discipline to do what they should and they simply must be compelled by punishment—this is, of course, the model put forth by thinkers like Aristotle and Hobbes.
As I write this, it is finals week. Obviously, one of my last duties in regards to a class is to record the grade for each student, be it an A, B, C, D or F. Or the newer option, WF. For those not in the know, an F grade is what a student gets when she fails the course (I do not fail students; I merely record their failure). A WF is a sort-of-new thing in which a student fails by “walking away” from the course. To be specific, if a student earns an F but last attended only prior to the withdrawal deadline (November 8 this year), then the student gets a WF.
The distinction is rather important: if a student earns an F, she fails but gets to keep the financial aid for the course. If a student gets a WF, then she (or the university) has to pay the money back. In order for financial aid to be released, a student has to attend at least once. To keep it, a student needs to attend once more, after the withdrawal deadline (in theory, a student could just attend once as long as it is after that deadline). Every semester I get at least one student who never attends class. Ever. I also always get 3-6 WF students. Some only attend one class, then never again. Others attend within two days of the withdrawal deadline and thus just miss keeping the money. Presumably enduring one more class with me is too much. Or perhaps they get the date wrong.
In addition to the WF policy, my university also has a general attendance policy. A student gets three unexcused absences without any consequences or questions. After that, faculty are permitted to impose penalties, such as lowering the overall grade one letter grade for each extra unexcused absence. Some faculty are very strict about this and require students to be on time and remain for the entire class, tracking each student as she enters and leaves the classroom. Woe to the student who misses too often, arrives too late or leaves too early: the F of doom looms.
I do keep track of attendance, mainly for two reasons. One is for my own curiosity: how often do students show up? The other is for the purpose of distinguishing between the F and WF grade. Since money is on the line, I have to be sure to get the attendance right, although students do tend to try to sign in for their fellows.
I have never, however, lowered (or raised) a grade simply because of attendance. My general view has been that if a student can do the work and earn a grade, then that grade should not be arbitrarily lowered simply because the student failed to bask in my radiant knowledge (or shiver in my shadowy ignorance). I also take the view that the students are (in theory) adults and hence they have the choice as to whether they wish to attend or not. If they elect to not attend and do not learn, then the grade they earn will reflect this. If they elect to not attend, yet still learn, then the grade they earn will reflect that. Some people like the customer metaphor: a student has bought a ticket to the show, but it is her choice to go or not. The seat is paid for, but the student is under no obligation to fill it. Naturally, if the student is attending on someone else’s dime, then this makes matters a bit more complex, especially if the student is expected to maintain a certain grade to keep the support.
Of course, there is something to be said for enforcing attendance with punishment. My experience, which matches the data from studies of human behavior, is that people weigh the negative more than the positive. In the case of a class, the (alleged) reward of education from attending has little impact on many students. However, the stick of failure for not attending is a strong motivator, especially for those who have little interest in education (as opposed to getting the paper to get the job to get the money…and then die). There is also the view that most people, even adults, must be ruled by pain rather than fine ideals or arguments (as per Aristotle). Less extreme, there is the view that college kids are just that, kids: many are incapable of using the freedom to attend or not attend wisely and hence the professor must use her wisdom to guide them to good behavior by punishing a failure to attend. It could even be argued that a professor, like a high school teacher or nanny, has a moral obligation to force students to attend for their own good.
I tend to go with God’s policy: people are free to do as they will, they get every chance, but they get what they earn.
In an earlier essay I looked at the matter of the ethics of overhead in regards to charities. In that essay, I focused on Dan Pallotta’s discussion of the matter and in this essay I will discuss the matter more generally.
While people do vary in their opinions of the matter, there does seem to be a general moral intuition that a charitable non-profit should have minimal overhead. The idea is, presumably, that the money should go to the charitable cause rather than to the cost of overhead. Thus, the idea is that the lower the overhead, the greater the virtue. In this context it is assumed that the overhead is generally legitimate (that is, the money for overhead is not simply wasted or misused).
The obvious way to discuss this matter in the context of ethics is to consider it within established approaches to ethics, specifically those of virtue theory, Kant and utilitarianism.
Borrowing from Aristotle and Aquinas, when assessing charity one needs to consider such factors as the object of the action, the circumstances of the action, and the end of the action. Aristotle, in defining what it is to act virtuously, puts considerable emphasis on the idea that a person must do the virtuous act for its own sake. Using the example of giving to charity, exercising the virtue of charity (or generosity) requires that the giving be done for the sake of giving. If, for example, I give for the sake of getting a tax break, then I am not exercising the virtue of charity. This would seem to provide some foundation for the intuition that charities should have low overhead. After all, for those engaged in the charitable function (be it a road race, a bake sale or something else) to be acting from the virtue of charity they would need to engage in the activity for its own sake. If, for example, I work for a charity to get a salary, then it would seem that I am not acting virtuously. As such, to be acting virtuously it would seem that those involved in a charity would need to be engaged in the charity for its own sake, which would certainly seem to involve the expectation that they make sacrifices for the charity since they are supposed to be acting for its sake and not for some other sake, such as making a large salary.
Not surprisingly, people are praised for making sacrifices for charity—be it a person who volunteers for free or a person who could be a CEO of a major corporation but instead works for a charity for a mere fraction of what she could make in the for-profit sector.
Kant claimed that what matters morally is the good will and not what the good will accomplishes. Roughly put, if a person wills the moral law, then that is what matters. Whether the person accomplishes anything practical or not is not relevant to the ethics of the matter. In the case of a charity, what would presumably matter is that a person wills in the appropriately good way; the consequences would not matter morally. This would certainly match the idea that what matters in a charity is the good will, which would presumably be shown by focusing on minimizing overhead and maximizing what goes to the charitable cause. Naturally, a person can will the good and also have success in terms of the consequences. However, people are praised for their intent. So, as Pallotta noted, those running a bake sale with a low overhead that raises a tiny amount of money are regarded as morally superior to those running a high-overhead event that raises a great deal of money. It is presumably assumed that those with the low overhead are focused on (willing) charity while those who are involved in the high-overhead operation are really concerned with their own income.
In the case of utilitarianism, the focus is not on the intentions of those involved nor on what they will or do not will. Rather, what matters is the consequences. On this moral view, it would certainly seem that a high overhead charity could be superior to a low overhead charity in terms of the consequences. In fact, Pallotta seems to be giving what amounts to a utilitarian argument: what matters is the overall consequences. On this view, a charity is assessed rather like any business: costs and benefits. So, for example, if a charity has large expenses in terms of salaries and promotions, yet successfully raises millions for charity, then it is better than a charity with tiny expenses that raises a tiny amount of money.
While it is tempting to claim that those operating from the utilitarian perspective would be doing so in a way that rejects the idea of the true virtue of charity, this need not be the case. Acting in a virtuous manner presumably does not require that a person act less effectively. As such, if a person accepts a large salary to work at a charity for the sake of the charity, then the person can still be regarded as virtuous, albeit well compensated for her virtue.
The obvious counter is that a person who was truly motivated by a sense of charity would accept a much lower salary so that more would go to charity. This is certainly a legitimate concern and raises the question of how much a person should sacrifice in order to be virtuous. In this case, a person who could make a huge salary effectively selling bottled water to the masses but instead elects to make a large salary effectively combating malaria could be regarded as being virtuous—provided that she chose the one over the other for the sake of helping others. While a person who accepted a lower salary for doing the job could (and perhaps should) be regarded as more virtuous, it does seem misguided to automatically regard someone who is doing good as lacking virtue merely because they receive such compensation. If only from a practical sense, it seems like a good idea to reward people for doing what is good.
If, however, a person picks the charitable job for other reasons (such as location or to boost his image for a planned political run), then the person would not be acting virtuously even if he happened to do good. We do not, of course, always know what is motivating a person. This probably explains why people tend to praise charities with lower overhead—since those involved are obviously not getting anything for themselves (in terms of money), then they surely must be motivated by charity’s sake. Or so it is assumed.
One longstanding philosophical concern is the matter of why people behave badly. One example of this that filled the American news in July of 2013 was the new chapter in the sordid tale of former congressman Anthony Weiner. Weiner was previously best known for resigning from office after a scandal involving his internet activities and his failed campaign of deception regarding said activities. Weiner decided to make a return to politics by running for mayor of New York. However, his bid for office was overshadowed by revelations that he was sexting under the nom de sext “Carlos Danger” even after his resignation and promise to stop such behavior.
While his behavior has been more creepy and pathetic than evil, it does provide a context for discussing the matter of why people behave badly.
Socrates, famously, gave the answer that people do wrong out of ignorance. He did not mean that people elected to do wrong because they lacked factual knowledge (such as being unaware that stabbing people hurts them). This is not to say that bad behavior cannot stem from mere factual ignorance. For example, a person might be unaware that his joke about a rabbit caused someone great pain because she had just lost her beloved Mr. Bunny to a tragic weed whacker accident. In the case of Weiner, there is some possibility that ignorance of facts played a role in his bad behavior. For example, it seems that Weiner was in error about his chances of getting caught again, despite the fact that he had been caught before. Interestingly, Weiner’s fellow New York politician and Democrat Eliot Spitzer was caught in his scandal using the exact methods he himself had previously used and even described on television. In this case, the ignorance in question could be an arrogant overestimation of ability.
While such factual ignorance might play a role in a person’s decision to behave badly, there would presumably need to be much more in play in cases such as Weiner’s. For him to act on his (alleged) ignorance he would also need an additional cause or causes to engage in that specific behavior. For Socrates, this cause would be a certain sort of ignorance, namely a lack of wisdom.
While Socrates’ view has been extensively criticized (Aristotle noted that it contradicted the facts), it does have a certain appeal.
One way to consider such ignorance is to focus on the possibility that Weiner is ignorant of certain values. To be specific, it could be contended that Weiner acted badly because he did not truly know that he was choosing something worse (engaging in sexting) over something better (being faithful to his wife). In such cases a person might claim that he knows that he has picked the lesser over the greater, but it could be replied that doing this repeatedly displays an ignorance of the proper hierarchy of values. That is, it could be claimed that Weiner acted badly because he did not have proper knowledge of the good. To use an analogy, a person who is offered a simple choice (that is, no bizarre philosophy counter-example conditions) between $5 and $100 and picks the $5 as greater than $100 would seem to show a failure to grasp that 100 is greater than 5.
Socrates presented the obvious solution to evil: if evil arises from ignorance, then knowledge of the good attained via philosophy is just what would be needed.
The easy and obvious reply is that knowledge of what is better and what is worse is consistent with a person choosing to behave badly rather than better. To use an analogy, people who eat poorly and do not exercise profess to value health while acting in ways that directly prevent them from being healthy. This is often explained not in terms of a defect in values but, rather, in a lack of will. The idea that a person could have or at least understand the proper values but fail to act consistently with them because of weakness is certainly intuitively appealing. As such, one plausible explanation for Weiner’s actions is that while he knows he is doing wrong, he lacks the strength to prevent himself from doing so. Going back to the money analogy, it is not that the person who picks the $5 over the $100 does not know that 100 is greater than 5. Rather, in this scenario the $5 is easy to get and the $100 requires a strength the person lacks: she wants the $100, but simply cannot jump high enough to reach it.
Assuming a person knows what is good, the solution to this cause of evil would be, as Aristotle argued, proper training to make people stronger (or, at least, to condition them to select the better out of fear of punishment) so they can act on their knowledge of the good properly.
According to the hype, 3D printers are going to change the world in many positive ways. For example, home 3D printers will allow people to create replacement parts when something breaks. As another example, home 3D printers will allow anyone (with the money) to create their own objects (although much of this will be plastic junk). As a third example, the fact that 3D printers are almost universal machines (that is, they can theoretically make almost anything) will allow cheaper manufacturing. Not surprisingly, there is also a dark side to 3D printing.
One obvious point of moral concern is that such printers can allow people to print their own weapons and use these to harm people. While the first printed gun is not much of a weapon (it is essentially a plastic “zip gun”), it did show that guns can be printed using the current technology. As the technology improves, it seems reasonable to believe that much better weapons could be printed, thus allowing the usual suspects (criminals, terrorists, and so on) to secretly print up their own weapons.
While this is a concern, people can and do already make their own weapons. While these weapons are usually fairly crude, they can be quite deadly—as the Boston Marathon bombing of 2013 showed. As such, 3D printing would not seem to significantly increase this sort of threat.
People can also get the metalworking tools needed to make more sophisticated weapons, although these are rather expensive and require skill to operate. Because of this, 3D printing might present an actual threat—a person does not need any special skills to print up a gun, although a printer capable of making an effective gun would probably be rather expensive.
Overall, until the printer technology is cheap and effective enough to print effective guns (that is, guns comparable to manufactured firearms), 3D printers will not present a significant threat. As such, there seems to be (as of now) little moral reason to be worried about this sort of use of 3D printing.
Another matter of obvious moral concern is that 3D printers will allow people to easily and secretly duplicate patented and copyrighted objects. Using a currently available home 3D printer, a person could print up copies of toys, miniatures (for games like D&D), parts and so on. Thus, 3D printing will allow people to do with objects what they have been doing with music, movies and software, namely engaging in piracy.
“Solid piracy” or “3D piracy” does differ from digital piracy in at least one key respect. In the case of printing an object, a person is not stealing the physical object that the manufacturer made. For example, if I were to print a copy of a copyrighted dragon (or gargoyle) miniature for my Pathfinder game, this is rather different from me going to the local gaming store and shoplifting that miniature.
On the one hand, this does seem to be a meaningful difference: by printing the dragon, I am not actually stealing the object. After all, no one is deprived of the object. As such, copying and printing a patented or copyrighted object would not be theft in the usual sense of stealing an actual object. Similar arguments have, of course, been given as to why pirating software, movies and music is not theft.
On the other hand, this does still seem to be theft. While I am not guilty of stealing the matter that makes up my dragon (assuming I did not steal that) I did steal the design of the dragon. For something like a plastic dragon miniature, the matter that makes it up is not the valuable component. Rather, to go with Aristotle, it is the form of the matter. In this case, the form of an imaginary dragon.
This sort of theft of design is nothing new—people have been stealing designs and producing their own objects for quite some time. What is different about 3D printing is that it makes such theft of form very easy. Sticking with my dragon example, before 3D printing it would have been very difficult for me to steal the dragon design/form: I would have had to create a mold of the dragon, melted down the plastic to make it and so on. It would, obviously, be cheaper and easier to just buy the dragon. However, 3D printing would allow me to easily copy the dragon. While there would be the cost of the printer (and perhaps a 3D scanner) and the materials, if I did enough copying and the material was cheap enough, it would also be cheaper to steal the dragon design than buy the dragon.
However, it would still be theft—I would be using the design owned by someone else without providing just compensation and this would be just as wrong as stealing a movie, software or music. Of course, there are those who contend that copying movies, software or music is not theft and they would presumably hold the same view about solid/3D piracy.