In January 2016, Denmark passed a law under which refugees who enter the country with assets worth more than about US $1,450 will have their valuables confiscated to help pay the cost of their stay. In response to international criticism, Denmark modified the law to allow refugees to keep items of sentimental value, such as wedding rings. This matter is certainly one of moral concern.
Critics have been quick to deploy a Nazi analogy, likening this policy to how the Nazis stole the valuables of those they sent to the concentration camps. While taking from refugees does seem morally problematic, the Nazi analogy does not really stick—there are too many relevant differences between the situations. Most importantly, the Danes would be caring for the refugees rather than murdering them. There is also the fact that the refugees are voluntarily going to Denmark rather than being rounded up, robbed, imprisoned and murdered. While the Danes have clearly not gone full Nazi, there are still grounds for moral criticism. However, I will endeavor to provide a short defense of the law—a rational consideration requires at least considering the pro side of the argument.
The main motivation of the law seems to be to deter refugees from coming to Denmark. This is a strategy of making their country less appealing than other countries in the hopes that refugees will go somewhere else and be someone else’s burden. Countries, like individuals, do seem to have the right to make themselves less appealing. While this sort of approach is certainly not morally commendable, it does not seem to be morally wrong. After all, the Danes are not simply banning refugees but trying to provide a financial disincentive. Somewhat ironically, the law would not deter the poorest of refugees. It would only deter those who have enough property to make losing it a worthwhile deterrent.
The main moral argument in favor of the law is based on the principle that people should help pay for the cost of their upkeep to at least the degree they can afford to do so. To use an analogy, if people show up at my house and ask to live with me and eat my food, it would certainly be fair of me to expect them to at least chip in for the costs of the utilities and food. After all, I do not get my utilities and food for free. This argument does have considerable appeal, but can be countered.
One counter to the argument is based on the fact that the refugees are fleeing a disaster. Going back to the house analogy, if survivors of a disaster showed up at my door asking for a place to stay until they could get back on their feet, taking their few remaining possessions to offset the cost of their food and shelter would seem cruel and heartless. They have lost so much already, and to take what little remains to them would add insult to injury. To use another analogy, it would be like a rescue crew stripping people of their valuables to help pay for the rescue. While rescues are expensive, such a practice would certainly seem awful.
A counter to this counter is that refugees who are well off should pay for what they receive. After all, if relatively well-off people showed up at my door asking for food and shelter, it would not seem wrong of me to expect them to contribute to the cost of things. If they can afford it, then they have no grounds to claim a free ride off me. Likewise for well-off refugees. That said, the law does not actually address this point, unless having more than $1,450 counts as being well off.
Another point of consideration is that it is one thing to have people pay for lodging and food with money they have; quite another to take a person’s remaining worldly possessions. It seems like a form of robbery, using whatever threat drove the refugees from home as the weapon. The obvious reply is that the refugees would be choosing to go to Denmark; they could go to a more generous country. The problem is, however, that refugees might soon have little choice about where they go.
Despite the predictions of many pundits, presidential candidate Donald Trump still leads the Republican pack as of the end of January. As should be expected, Trump’s remarks have resulted in criticism from the left. Somewhat unexpectedly, he has also been condemned by many conservatives. The National Review, a bastion of conservative thought, devoted an entire issue to harsh condemnation of Trump. This is certainly a fascinating situation and will no doubt become a chapter in many future political science textbooks.
That Trump is doing well should itself not be surprising. As I have argued in previous essays, he is the logical result of the strategies and tactics of the Republican Party. The Republican establishment has been feeding the beast; they should not be shocked that it has grown large. They crafted the ideal political ecosystem for Trump; they should not be dismayed that he has dominated this niche. As in so many horror stories, perhaps they realize they have created a monster and now they are endeavoring to destroy it.
It is not entirely clear what the “(un)friendly fire” of fellow Republicans is supposed to accomplish. One possibility is that the establishment hopes that these attacks will knock Trump down and allow a candidate more appealing to the establishment to win the nomination. Trump, many pundits claim, would lose in the general election and the Republicans certainly wish to win. However, Trump should not be counted out—he has repeatedly proven the pundits wrong and he might be, oddly enough, the best chance for a Republican victory in 2016.
The United States electorate has changed in recent years and Trump seems to be able to appeal very strongly to certain elements of this population. Bernie Sanders has also been able to appeal very strongly to other elements—and perhaps some of the same. As such, the Republican establishment might wish to reconsider their view of Trump’s chances relative to the other candidates.
That said, while Trump has done quite well in the polls, this is rather different from doing well in the actual trench work of politics. Doing well in the polls is rather like being a popular actor or athlete—this does not require a broad organization and a nationwide political machine. Trump is certainly a media star—quite literally. Soon, however, the “ground game” begins, and the received opinion is that this is where organization and political chops are decisive. Critics have pointed out, sweating just a bit, that Trump does not seem to have much of a ground game and certainly has little experience building political chops. Doing well in this ground game is analogous to doing well in a war; it remains to be seen if Trump can transition from reality TV star to political general.
As a counter to this, it can be argued that Trump could simply ride on his popularity and this would offset any weaknesses he has in regards to his organization and political chops. After all, highly motivated voters could simply get things done for him.
A second possibility is that at least some of the critics of Trump are motivated by more than concerns about pragmatic politics: they have a moral concern about Trump’s words and actions. Some of the concern is based on the assertion that Trump is not a true conservative. These concerns are well-founded: Trump is certainly not a social conservative and, while wealthy, he does not seem to have a strong commitment to classic conservative ideology. Other aspects of the concern are based on Trump’s character and style; he is often regarded as a vulgar populist.
Those who oppose Trump on these grounds would presumably not be swayed by evidence that Trump could do well in the general election—if he is an awful candidate, he would presumably be worse as president. This election could be a very interesting test of party loyalty (and Hillary loathing). Some Republicans have said that they will not vote for Trump and most of these have made it clear they will not vote for a Democrat. As such, the Democrat might win in virtue of Republican voters not voting. After all, a Republican who does not vote is almost as good as a vote for the Democrat. As such, it is not surprising that a popular conspiracy theory speculates that Trump is an agent of the Clintons.
While I have been playing video games since the digital dawn of gaming, it was not until I completed Halo 5 that I gave some philosophical consideration to video game cut scenes. For those not familiar with cut scenes, they are non-interactive movies within a game. They are used for a variety of purposes, such as providing backstory, showing the consequences of the player’s action or providing information, such as how adversaries or challenges work.
The reason that Halo 5 motivated me to write about cut scenes is an unfortunate one: I believe that Halo 5 made poor use of cut scenes, and I will argue for this point as part of my sketch of cut scene theory. Some gamers, including director Guillermo Del Toro and game designer Ken Levine, have spoken against the use of cut scenes. In support of their position, a fairly reasonable argument can be presented against cut scenes in games.
One fundamental difference between a game and a movie is the distinction between active and passive involvement. In the case of a typical movie, the audience merely experiences the movie as observers—they do not influence the outcome. In contrast, the players of a game experience the game as participants—they have a degree of control over the events. A cut scene, or in-game movie, changes the person from being a player to being an audience member. This is analogous to taking a person playing sports and putting her into the bleachers to be a mere spectator. The person is, literally, taken out of the game. While there are some who enjoy watching sports, the athlete is there to play and not to be part of the audience. Likewise, while watching a movie can be enjoyable, a gamer is there to game and not to be an audience member. To borrow from Aristotle, games and movies each have their own proper pleasures and mixing them together can harm the achievement of this pleasure.
Aristotle, in the Poetics, is critical of the use of the spectacle (such as what we would now call special effects) to produce the tragic feeling of tragedy. He contends that this should be done by the plot. Though this is harder to do, the effect is better. In the case of a video game, the use of cinematics can be regarded as an inferior way of bringing about the intended experience of a game. The proper means of bringing about the effect should lie within the game itself—that is, what the player is actually playing and not merely observing as a passive spectator. As such, cut scenes should be absent from games. Or, at the very least, kept to a minimum.
One way to counter this argument is to draw an analogy to role-playing games such as D&D, Pathfinder and Call of Cthulhu. Such games typically begin with what is analogous to a game’s opening cinematic: the game master sets the stage for the adventure to follow. During the course of play, there are often important events that take considerable game world time but would be boring to actually play. For example, a stock phrase used by most game masters is “you journey for many days”, perhaps with some narrative about events that are relevant to the adventure, such as the party members (whose players are friends in real life) becoming friends along the way. There are also other situations in which information needs to be conveyed or stories told that do not need to actually be played out because doing so would not be enjoyable or would be needlessly time consuming if done using game mechanics. A part of these games is shifting from active participant to briefly taking on the role of the audience. However, this is rather like being on the bench listening to the coach rather than being removed from the field and put into the bleachers. While one is not actively playing at that moment, it is still an important part of the game and the player knows that she will be playing soon.
In the case of video games, the same sort of approach would also seem to fit, at least in games that have story elements that are important to the game (such as plot continuity, background setting, maintaining some realism, and so on) yet would be tedious, time consuming or beyond the mechanics of the game to actually play through. For example, if the game involves the player driving through a wasteland from a settlement to the ruins of a city she wishes to explore, then a short cut scene that illustrates the desolation of the world while the character is driving would certainly be appropriate. After all, driving for hours through a desolate wasteland would be very boring.
Because of the above argument, I do think that cut scenes can be a proper part of a video game, provided that they are used properly. This requires, but is not limited to, ensuring that the cut scenes are necessary and that the game would not be better served by either deleting the events covered in the movies or having them handled with actual game play. It is also critical that the player not feel that she has been put into the bleachers, although that bench feeling can be appropriate. As a general rule, I look at cut scenes as analogous to narrative in a tabletop role-playing game: a cut scene in a video game is fine if narrative would be fine in an analogous situation in a tabletop game.
Since I was motivated by Halo 5’s failings, I will use it as an example of the bad use of cut scenes. This will contain some possible spoilers, so those who plan to play the game might wish to stop reading.
Going with my narrative rule, a cut scene should not contain things that would be more fun to actually play than watch—unless there is some greater compelling reason why it must be a cut scene. Halo 5 routinely breaks this rule. A rather important sub-rule of this rule is that major enemies should be dealt with in game play and not simply defeated in a cut scene. Halo 5 broke this rule right away. In Halo 4, Jul ‘Mdama was built up as a major enemy. As such, it was rather surprising that he was knifed to death in a cut scene right near the start of Halo 5. This would be like setting out to kill a dragon in Dungeons & Dragons and having the dungeon master allow you to fight the orcs and goblins, but then just say “Fred the fighter hacks down the dragon. It dies” in lieu of playing out the fight with the dragon. Throughout Halo 5 there were cut scenes where my friend and I said “huh, that would have been fun to actually play rather than just watch.” That, in my view, is a mark of bad choices about cut scenes.
The designers also made the opposite sort of error: making players engage in tedious “play” that would have been far better served by short cut scenes. For example, there are parts where the player has to engage in tedious travel (such as ascending a damaged structure). While it would have been best to make it interesting, it would have been less bad to have a quick cut scene of the Spartans scrambling to safety. The worst examples, though, involved “game play” in which the player remains in first person shooter view, but cannot use any combat abilities. The goal is to walk around trying to find the various people to “talk” to. The conversations are scripted: when you reach the person, the non-player character just says a few things and your character says something back—there are no dialogue choices. These should have been handled by short cut scenes. After all, when I am playing a first person shooter, I do not want to have to walk around unable to shoot to trigger recorded conversations. These games are supposed to be “shoot and loot” not “walk and talk.”
To conclude, I take the view of cut scenes that Aristotle takes of acting: while some condemn all cut scenes and all acting (it was argued by some that tragedy was inferior to the epic because it was acted out on stage), it is only poor use of cut scenes (and poor acting) that should be condemned. I do condemn Halo 5.
Like all too many American cities and towns, the Michigan city of Flint faces dire financial woes. To address these woes, the state stepped in and bypassed local officials with the goal of cutting the budget of the city. One aspect of the solution was to switch Flint’s water supply to a cheaper source, specifically a polluted river. Another aspect seems to have been to decline to pay the $100 per day cost of treating the water in accord with federal regulations. The result was that the corrosive water started dissolving the pipes. Since many of the pipes in the city are made of lead, this resulted in citizens getting lead poisoning. This includes children, who are especially vulnerable to the damage caused by this toxin.
More troubling, it has been claimed that the state was aware of the problem and officials decided to cover it up. The state also apparently tried to discredit the research conducted by Dr. Mona Hanna-Attisha before finally admitting to the truth.
There have been various attempts to explain why this occurred, with filmmaker Michael Moore presenting the hypothesis that it was an attempt at “racist genocide.” This claim does have a certain appeal, given that the poor and minorities have been impacted by the corrosive water. Apparently the corrosive water has far less effect on newer infrastructure, which tends to be in areas that are better off economically. It is also appealing in that it is consistent with the fact of institutional racism that still plagues America. However, before rushing to accept the genocide hypothesis, it is worth considering alternative explanations.
One alternative is that the initial problem arose from political ideology. There is the view that the most important objective is reducing the spending of the state (typically to also lower taxes). Going along with this is also an opposition to federal regulations. Switching to the corrosive water and not treating it was initially cheaper and certainly evaded the regulations governing drinking water treatment. That said, the approach taken by the state did go against some professed conservative values, namely favoring local control and being opposed to government overreach. However, these values have been shown to be extremely flexible. For example, many state legislatures have passed laws forbidding local governments from banning fracking. As such, the initial action was consistent with the ideology.
In regards to the fact that the impact has been heaviest on the poor and minorities, this need not be driven by racism. An alternative explanation is that the policy targeted people not on the basis of race, but on the basis of power and influence. It is, of course, the case that the poor lack power and minorities are often poor. Since the poor lack the resources to resist harm and to buy influence, they are the most common target of budget cuts. Because of this, racism might not be the main factor.
In regards to the ensuing cover-up, it might have begun with wishful thinking: the state officials did not want to believe that there was a problem. As such, they refused to accept that it existed. People are very good at denial, even when it is harmful to themselves. For example, many who do not take good care of themselves engage in wishful thinking in regards to the consequences of their unhealthy behavior. It is, obviously, even easier to engage in wishful thinking when the harm is being suffered by others. Once the cover-up progressed, the explanation is rather easy: people engage in a cover-up in the hopes of avoiding the consequences of their actions. However, as is so often the case, the cover-up has resulted in far more damage than a quick and honest admission would have.
This ongoing incident in Flint does show some important things. First, it does indicate that some traditional conservative claims are true: government can be the problem and local authorities can be better at decision making. Of course, government was the problem in this case because the focus was on saving a little money rather than ensuring the safety of the citizens.
Second, it serves as yet another example of poor assessment of consequences resulting from a shortsighted commitment to savings. This attempt at saving has done irreparable harm to many citizens (including children) and will cost millions of dollars to address. As such, this ill-considered attempt to save money has instead resulted in massive costs.
Third, it serves as yet another lesson in the fact that government regulations can be good. If the state had spent the $100 a day to treat the water in accord with federal regulations, then this problem would have not occurred. This is certainly something that people should consider when politicians condemn and call for eliminating regulations. This is not to claim that all regulations are good—but it is to claim that a blanket opposition to regulations is shortsighted and unwise.
I would like to say that the Flint disaster will result in significant changes. I do think it will have some impact—cities and towns are, no doubt, checking their water and assessing their infrastructure. However, the lessons will soon fade until it is time for a new disaster.
While the United Kingdom is quite welcoming to its American cousins, many of its citizens have petitioned for a ban against the now-leading Republican presidential candidate Donald Trump. This issue was debated in mid-January by Parliament, although no vote was taken to ban the Donald.
The petition to ban Trump was signed by 575,000 people and was created in response to his call to ban all Muslims from entering the United States. While this matter is mostly political theater, it does raise some matters of philosophical interest.
One interesting point is that the proposal to ban Trump appears to be consistent with the principles that seem to lurk behind the obscuring fog of Trump’s various proposals and assertions. One obvious concern is that attributing principles to Trump is challenging—he is a master of being vague and is not much for providing foundations for his proposed policies. Trump has, however, focused a great deal on the borders of the United States. He has made the comically absurd proposal to build a wall between the United States and Mexico and, as noted above, proposed a ban on all Muslims entering the United States. This seems to suggest that Trump accepts the principle that a nation has the right to control its borders and to keep out anyone who is deemed a threat or undesirable by the state. This principle, which might be one that Trump accepts, is certainly a reasonable one in general terms. While thinkers disagree about the proper functions of the state, there is general consensus that a state must, at a minimum, provide basic defense and police functions, and these include maintaining borders. This principle would certainly warrant the UK in banning Trump.
Even if this specific principle is not one Trump accepts, he certainly seems to accept that a state can ban people from entering that state. As such, consistency would require that Trump accept that the UK has every right to ban him. Trump, if he were inclined to argue rationally, could contend that there are relevant differences between himself and those he proposes to ban. He could, for example, argue that the proposed wall between the United States and Mexico is to keep out illegals and point out that he would enter the UK legally rather than sneaking across the border. In regards to the proposed ban on all Muslims, Trump could point out that he is for banning Muslims but not for banning non-Muslims. As such, his principle of banning Muslims could not be applied to him.
A way to counter this is to focus again on the general principle that might be behind Trump’s proposals, namely the principle of excluding people who are regarded as a threat or at least undesirable. While Trump is not likely to engage in acts of terror in the UK, his behavior in the United States does raise concerns about his ideology and he could justly be regarded as a threat to the UK. He could, perhaps, radicalize some of the population. As such, Trump could be justly banned on the basis of a possible principle he is employing to justify his proposed bans (assuming that there are some principles lurking back there somewhere).
Trump could, of course, simply call the UK a bunch of losers and insist that they have no right to ban him. While that sort of thing is fine for political speeches, he would need a justification for his assertion. Then again, Trump might simply call them losers and say he does not want to go there anyway.
The criticism of Trump in the UK seems to be, at least in part, aimed at trying to reduce his chance of becoming the President of the United States. Or perhaps there is some hope that the criticism will change his behavior. While a normal candidate might be influenced by such criticism from a close ally and decide to change, Trump is not a normal candidate. As has been noted many times, behavior that would have been politically damaging or fatal for other candidates has only served to keep Trump leading among the Republicans. As such, the petition against him and even the debate about the issue in Parliament will have no negative impact on his campaign. In fact, this sort of criticism will probably improve his poll numbers. As such, Trump is the orange Hulk of politics (not to be confused with Orange Hulk). The green Hulk gets stronger the angrier he gets, so attacking him just enables him to fight harder. The political orange Hulk, Trump, gets stronger the more he is rationally criticized and the more absurd and awful he gets. Like the green Hulk, Trump might be almost unbeatable. So, while Hulk might smash, Trump might win. And then smash.
One stock argument against increasing taxes on the rich in order to address income inequality is a disincentive argument. The gist of the argument is that if taxes are raised on the rich, then they will lose the incentive to invest, innovate, create jobs and so on. Most importantly, in regards to addressing the income inequality problem, the consequences of this disincentive will have the greatest impact on those who are not rich. For example, it has been claimed that the job creators will create fewer jobs and pay lower wages if they are taxed more to address income inequality. As such, the tax increase will be both harmful and self-defeating: the less rich will be no better off than they were before (and perhaps even worse off). As such, there would seem to be good utilitarian moral grounds for not increasing taxes on the rich.
Naturally, there is the question of whether this disincentive effect would be warranted. If the rich simply retaliated from spite, then the moral argument would fall apart—while there would be negative consequences from such a tax increase, these consequences would be harms intentionally inflicted. As such, not increasing taxes because of fear of retaliation would be morally equivalent to paying protection money so that criminals elect not to break things in one’s business or home.
If, however, the rich act because the tax increase is not fair, then the ethics of the situation would be different. To use an obvious analogy, if wealthy customers at a restaurant were forced by the management to pay some of the bills for the less wealthy customers, it would be hard to fault them for leaving smaller tips on the table. While the matter of what counts as a fair tax is rather controversial, it is certainly easy enough to accept that an unfair increase would be unfair by definition. One approach would be to define unfairness in terms of the taxes cutting too much into what the person is entitled to by dint of her efforts, ability and productivity relative to what she owes to the country. This seems reasonable in that it provides considerable room for argumentation and does not beg any obvious questions (after all, the amount one owes one’s country could be as low as nothing).
Interestingly, the fairness argument would also apply to workers in regards to their salary. When a worker produces value, the employer pays the worker some of that value and keeps some of it. What the employer keeps can be seen as analogous to the tax imposed by the state on the rich person. As with the taxes on the rich person, there is the general question of what is fair to take from workers. Bringing in the disincentive argument, if it works to justify imposing only a fair tax on the rich, it should also do the same for the less rich. That is, those who argue against raising taxes on the rich to address income inequality by using the disincentive argument should also accept that the less rich should be paid in accord with the same principles used to judge how much income should be taken from the rich.
The obvious counter to this approach is to endeavor to break the analogy between the two situations: this would involve showing that the rich differ from the less rich in relevant ways or that taking income by taxes is relevantly different from taking money from employees. The challenge is, of course, to show that the differences really are relevant.
One of the stock arguments used to justify income inequality is the incentive argument. The gist is that income inequality is necessary as a motivating factor—crudely put, if people could not get (very) rich, then they would not have the incentive to do such things as work hard, innovate, invent and so on. The argument requires the assumption that hard work, innovation, inventing and so on are good; an assumption that has a certain general plausibility.
This argument does have considerable appeal. In terms of psychology, it is reasonable to make the descriptive claim that people are primarily motivated by the possibility of gain (and also glory). This view was held by Thomas Hobbes and numerous other thinkers on the grounds that it does match the observed behavior of many (but not all) people. If this view is correct, then achieving the goods of hard work, innovation, invention and so on would require income inequality.
There is, of course, the counter that some people seem to be very motivated by factors other than achieving an inequality in financial gain. Some are motivated by altruism, by a desire to improve, by curiosity, by the love of invention, by the desire to create things of beauty, to solve problems and so many other motives that do not depend on income. These sorts of motivations do suggest that income inequality is not necessary as a motivating factor—at least for some people.
Since this is a matter of fact regarding human psychology, it is something that can (in theory) be settled by the right sort of empirical research. It is well worth noting that even if income inequality is necessary as a motivating factor, there remain many other concerns, such as the question of how much income inequality is necessary (and also how much is morally acceptable).
Interestingly, the incentive argument is something of a two-edged sword: while it can be used to justify income inequality, it can also be used to argue against the sort of economic inequality that exists in the United States and almost all other countries. The argument is as follows.
While worker productivity has increased significantly in the United States (and other countries), workers’ income has not kept pace with this productivity. This is a change from the past, when workers’ income rose roughly in proportion to increases in productivity. This explains, in part, why CEO (and upper management in general) salaries have seen a significant increase relative to the income of workers: the increased productivity of the workers generates more income for the upper management than it does for the workers doing the work.
If it is assumed that gain is necessary for motivation and that inequality is justified by the results (working harder, innovating, producing and so on), then the workers should receive a greater proportion of the returns on their productivity. After all, if high executive compensation is justified on the grounds of its motivation in regards to productivity, innovation and so on, then the same principle would also apply to the workers. They, too, should receive compensation proportional to their productivity, innovation and so on. If they do not, then the incentive argument would entail that they would not have the incentive to be as productive, etc.
It could, of course, be argued that top management earns its higher income by being primarily responsible for the increase in worker productivity—that is, the increase in worker productivity is due not to the workers but to leadership that is motivated by the possibility of personal gain. If this is the case, then the disparity would be fully justified by the incentive argument: the workers are more productive because the CEO is motivated to make them more productive so she can have an even greater income.
However, if the increased productivity is due mainly to the workers, then this seems to counter the incentive argument: if workers are more productive than before with less relative compensation, then there does not seem to be that alleged critical connection between incentive and productivity required by the incentive argument. That is, if workers will increase productivity while receiving less compensation relative to their productivity, then the same would presumably hold for the top executives. While there are many other ways to warrant extreme income inequality, the incentive argument does seem to have a problem.
One possible response is to argue for important differences between the executives and workers such that executives need the incentive provided by the extreme inequality and workers are motivated sufficiently by other factors (like being able to buy food). It could also be contended that the workers are motivated by the extreme inequality as well—they would not be as productive if they did not have the (almost certainly false) belief that they will become rich.
David Bowie, the artist and actor, died on January 11, 2016. While I would not categorize myself as a fan of any artist, I do admit that I felt some sadness when I learned of his death. I must also confess that I listened to several Bowie songs today.
While Bowie’s art is clearly worthy of philosophical examination, I will instead focus on the philosophical subject of feeling for the death of a celebrity. I have written briefly about this in the past, on the occasion of the death of Michael Jackson. When Jackson died, many of his devoted fans were devastated by his death. The death of David Bowie has also caused a worldwide response, albeit of a somewhat different character.
People, obviously enough, simply feel what they do. However, there is still the question of whether the feeling is appropriate or not. That is, whether it is morally virtuous to feel in such a way and to such a degree. This view is, of course, taken from Aristotle: virtue involves having the right sort of feeling, in the right way, to the right degree, towards the right person, and so on through all the various factors considered by Aristotle.
In the case of the death of a celebrity, one (perhaps cynical) approach is to contend that overly strong emotional responses are not virtuous. Part of the reason is that virtue theorists in this tradition take the right way to feel to be the mean between excess and deficiency. Another part of the reason is that the response should be in the right way towards the right person.
In the case of the death of a celebrity, it could be contended that a strong reaction, however sincere, is not morally appropriate. This assumes that the person responding lacks a two-way relationship with the celebrity—that is, that the person is not a relative or friend of the celebrity. If the person were a relative or friend, the proper response would be a matter of reacting to the death of a relative or friend rather than to the death of a celebrity. As such, what would be appropriate for David Bowie’s friends and relatives to feel is different from what would be appropriate for his fans to feel.
It could be contended that fans (who are not friends and relatives) do not have a meaningful connection with a celebrity as a person (a reciprocated relationship) and, as such, strong feelings upon the death of the celebrity would not be appropriate. From the standpoint of the fan, the celebrity is analogous to a fictional character in a book or movie—the fan observes the celebrity, but there is no reciprocity or true interaction. As such, to be unduly impacted by the death of a celebrity would not be a proper response—it would be similar to being unduly impacted by the death of a character in a movie.
One obvious response is that a celebrity is a real person and hence the death of a celebrity is real and not like the death of a fictional character—David Bowie is really dead. One cynical counter is that many thousands of real people have died today, people with whom the vast majority of us have no closer personal relationship than we had with David Bowie. As such, the real death of a celebrity should warrant no more emotional response than the death of anyone we do not know personally. It is, of course, proper to feel some sadness upon hearing of the death of a person (who did not merit death). However, feeling each death strongly would destroy us—which is no doubt why we feel so little in regards to the deaths of non-celebrities who are not connected to us.
Another option, which would require considerable development, is to argue that there can be proper emotional responses to the deaths of fictional characters—to be sad, for example, at the passing of Romeo and Juliet. This is, of course, exactly the sort of thing that Plato warned us about in the Republic.
A better reply is that a celebrity can have a meaningful impact on a person’s life, even when there is no actual personal interaction. In the case of David Bowie, people have been strongly affected by his music (and his acting), and this has played an important role in their lives. While a person might never have met Bowie, that person can be grateful for what Bowie created and for his influence. It could also be contended that people do get to know an artist through the artist’s works. To use an analogy, it is similar to how one can know a long dead person through her writings (or writings about her). For example, one might develop a liking for Socrates by reading the Platonic dialogues and feel justly saddened by his death in the Apology. As such, a person can justly and properly feel sadness at the death of someone they never met.
Ammon Bundy and fellow “militia” members occupied the Malheur National Wildlife Refuge in Oregon as a protest of federal land use policies. Ammon Bundy is the son of Cliven Bundy—the rancher who was involved in an earlier armed stand-off with the federal government. Cliven Bundy still owes the American taxpayers over $1 million for grazing his cattle on public land—the sort of sponging off the public that would normally enrage conservatives. While that is itself an interesting issue, my focus will be on discussing the ethics of protest through non-violent armed occupation.
Before getting to the main issue, I will anticipate some concerns about the discussion. First, I will not be addressing the merits of the Bundy protest. Bundy purports to be protesting against the tyranny of the federal government in regards to its land-use policies. Some critics have pointed out that Bundy has benefitted from the federal government, something that seems a bit reminiscent of the infamous cry of “keep your government hands off my Medicare.” While the merit of a specific protest is certainly relevant to the moral status of the protest, my focus is on the general subject of occupation as a means of protest.
Second, I will not be addressing the criticism that if the federal land had been non-violently seized by Muslims protesting Donald Trump or Black Lives Matter activists protesting police treatment of blacks, then the response would have been very different. While the subject of race and protest is important, it is not my focus here. I now turn to the matter of protesting via non-violent armed occupation.
The use of illegal occupation is well established as a means of protest in the United States and was used during the civil rights movement. But, of course, an appeal to tradition is a fallacy—the mere fact that something is well-established does not entail that it is justified. As such, an argument is needed to morally justify occupation as a means of protest.
One argument for occupation as a means of protest is that protestors do not give up their rights simply because they are engaged in a protest. Assuming that they wish to engage in their protest where they would normally have the right to be, then it would seem to follow that they should be allowed to protest there.
One obvious reply to this argument is that people do not automatically have the right to engage in protest in all places they have a right to visit. For example, a public library is open to the public, but it does not follow that people have a right to occupy a public library and interfere with its operation. This is because the act of protest would violate the rights of others in a way that would seem to warrant not allowing the protest.
People also protest in areas that are not normally open to the public—or whose use by the public is restricted. This would include privately owned areas as well as public areas that have restrictions. In the case of the Bundy protest, public facilities are being occupied rather than private facilities. However, Bundy and his fellows are certainly using the area in a way that would normally not be allowed—people cannot, in the normal course of things, just take up residence in public buildings. This can also be regarded as a conflict of rights—the right of protest versus the right of private ownership or public use.
These replies can, of course, be overcome by showing that the protest does more good than harm or by showing that the right to protest outweighs the rights of others to use the occupied area. After all, to forbid protests simply because they might inconvenience or annoy people would be absurd. However, to accept protests regardless of the imposition on others would also be absurd. Being a protestor does not grant a person special license to violate the rights of others, so a protestor who engages in such behavior would be acting wrongly and the protest would thus be morally wrong. After all, if an appeal to rights is used to justify the right to protest, then the same appeal provides a clear foundation for accepting the rights of those who would be imposed upon by the protest. If the protestor who is protesting tyranny becomes a tyrant to others, then the protest certainly loses its moral foundation.
This provides the theoretical framework for assessing whether the Bundy protest is morally acceptable or not: it is a matter of weighing the merit of the protest against the harm done to the rights of other citizens (especially those in the surrounding community).
The above assumes a non-violent occupation of the sort that can be classified as classic civil disobedience of the sort discussed by Thoreau. That is, non-violently breaking the rules (or law) in an act of disobedience intended to bring about change. This approach was also adopted by Gandhi and Dr. King. Bundy has added a new factor—while the occupation has (as of this writing) been peaceful, the “militia” on the site is well armed. It has been claimed that the weapons are for self-defense, which indicates that the “militia” is willing to escalate from non-violent (albeit armed) to violent occupation in response to the alleged tyranny of the federal government. This leads to the matter of the ethics of armed resistance as a means of protest.
Modern political philosophy does provide a justification of such resistance. John Locke, for example, emphasized the moral responsibilities of the state in regards to the good of the people. That is, he does not simply advocate obedience to whatever the laws happen to be, but requires that the laws and the leaders prove worthy of obedience. Laws or leaders that are tyrannical are not to be obeyed, but are to be defied and justly so. He provides the following definition of “tyranny”: “Tyranny is the exercise of power beyond right, which nobody can have a right to. And this is making use of the power any one has in his hands, not for the good of those who are under it, but for his own private separate advantage.” When the state is acting in a tyrannical manner, it can be justly resisted—at least on Locke’s view. As such, Bundy does have a clear theoretical justification for armed resistance. However, for this justification to be actual, it would need to be shown that federal land use policies are tyrannical to a degree that warrants the use of violence as a means of resistance.
Consistency does, of course, require that the framework be applied to all relevantly similar cases of protests—be they non-violent occupations or armed resistance.