As so many schools now do, Mount St. Mary’s University decided to look to the business world to find a president and selected Simon Newman. While Newman does have a graduate education, he had no previous professional academic experience. He does, however, have thirty years of experience in the realm of finance and business. His plan for the school was to “raise a lot of capital and start a lot of programs and start the university on a more aggressive growth trajectory.” It was hoped that he would capitalize on the “incredible brand” of the school in raising said capital.
Rather ironically, Newman has damaged that “incredible brand” by embroiling Mount St. Mary’s University in a public relations disaster through his plan to cull students and his culling of faculty. This is presumably not what the university hoped would happen—the plan was that hiring Newman would improve the geographic diversity of the students and boost both the school’s endowment and its reputation.
This incident is not the only one that has occurred because of the common practice of hiring business leaders to fill administrative posts at schools. There is also a similar trend in politics, with business people with no political experience being lauded as good choices for political offices (including the presidency). As such, it is well worth considering this matter.
One approach to justifying the choice to hire business people into academic administration (or elect them to office) is to argue that being a business person qualifies one for such positions. For example, that managing a private equity firm makes a person qualified to be a university president. Or president of the United States.
One obvious problem with this is revealed by consideration of the fallacious appeal to authority. This occurs when the expert/authority used to support a claim is not a legitimate expert relative to the claim. This commonly occurs when the alleged expert is an expert in one area, but not in the area of concern—expertise in one area does not automatically confer expertise in another. Likewise, a person could be great at business, but this does not confer expertise in academic administration.
Another problem with this is revealed by Socrates’ battle with Ion in the Ion. Socrates makes the point that a person who is the master of a field becomes a master by mastering that field rather than mastering some other field. For example, a doctor of medicine masters medicine by mastering medicine, not mechanical engineering. Obviously enough, a person who has mastered one field has not automatically mastered another. For example, one who has mastered running a hedge fund has not mastered being a university president.
Interestingly, these problems are recognized in almost all cases except those involving business persons seeking to be university administrators or office holders. If a person who worked assembling cars claimed to have thus mastered assembling code, they would not be believed. And rightly so. While both involve assembling, they are very different. If an athlete who mastered basketball (such as Michael Jordan) claimed to have thus mastered baseball, then they would be doubted. While both are sports involving balls, the skill sets are rather different.
One reply to this sort of objection is to argue that the skill set of a business person does apply to academic administration (and holding political office). For example, leadership skill could be seen as suitably “generic” so that a person who can lead a company as a CEO is thus qualified to lead as a university president (or president of the United States).
One problem is that even those who think that business people are qualified and even ideal for academic administration (or political office) do not usually think the reverse holds. For example, if a philosophy or engineering professor who became an administrator claimed he was thus qualified to run a hedge fund without any business experience, he would be mocked. As another example, if a state senator without any business experience claimed that she should be hired as the CEO of a firm, she would almost certainly not get the job (except as a payback for years of political favors, of course). This is a point well made by Socrates in the Ion: Ion claims that being a rhapsode also makes him a general; Socrates points out that this would make a general a rhapsode. Ion, as should be expected, did not like that idea.
Another problem is that while it is true that there are general skills, there is still the very reasonable concern that the general skills might not be enough to properly do the job. To use an obvious example, I have over two decades of experience teaching philosophy classes. As such, I have a range of general teaching skills. However, this would not qualify me to teach biology classes—I would need to have knowledge of biology. I could, of course, learn enough biology to be competent to teach it. Likewise, a business person could learn to apply her general skills to a job as an administrator—but this would require learning the job. But, just as it would be unwise to hire me as a biology professor just because I could learn to do it, it would be unwise to hire (or elect) a business person to a position just because they could learn to do it eventually.
While the above arguments seem reasonable, there is still a way to argue in favor of hiring (or electing) business people into positions in academics (or politics). If it could be shown that an administrative position (or elected office) is, in fact, the same as a business position, then a business person would be a reasonable choice.
In some cases, this is obviously true. There are administrative posts that are functionally identical to business posts in companies and someone who has done the job in a company would thus be as qualified to do the job at a university. Where it becomes a matter of concern is in regards to positions that are not analogous to business positions. Addressing this properly would require considering each job, which goes far beyond the intended scope of this essay. However, I will briefly address the position of university president.
While the traditional university president is an academic who has transitioned into the leadership role after a distinguished career in the academy, some schools have redefined the role of president in terms of being a fundraiser and business leader. That is, the president is not primarily guiding the academy as an institute of higher learning, but running it as a business in order to increase capital and enhance the brand. If this is the proper role of the university president, then a business person would nicely fit this role. After all, this is reshaping the university from an academy to a business and a business leader should lead a business.
While mostly or completely transforming a university into a business would make business people suitable for positions in the university business, there is still the question of whether or not this is a good idea. To use an analogy, transforming a cruise ship into a pirate ship would make it so a pirate would be a suitable captain, but this might not be a good idea—especially for the passengers. Likewise, transforming a university into a business might not be a good idea, especially for the students.
Mount St. Mary’s University, a well-respected Catholic institution, has recently been dragged into a public relations disaster by President Simon Newman. It started when Newman devised a plan to improve the school’s retention rate by culling students prior to the federal reporting deadline. The plan was to get students to complete a survey described as a “valuable tool that will help you discover more about yourself.” In reality, the survey was intended to identify students for the culling.
In response to moral concerns raised in regards to his plan, he replied that “This is hard for you because you think of the students as cuddly bunnies, but you can’t. You just have to drown the bunnies…put a Glock to their heads.” When this remark and the facts of the matter became public courtesy of the student newspaper, the trouble began in earnest.
Unfortunately, the president’s desire to cull did not end with the freshmen. Newman fired tenured associate philosophy professor Thane Naberhaus and communications professor and faculty advisor for the student newspaper Ed Egan. He also removed two other philosophy professors from their administrative positions. Joshua Hochschild was removed from the position of dean of the College of Liberal Arts and David Rehm was taken from the position of provost. John Schwenkler, a fellow runner and a philosophy professor across the tracks from me at Florida State University, created a petition to protest these firings.
While it is a common misconception that tenure protects a professor from being fired, the reality is that tenured professors can still be fired for cause. In the case of Naberhaus, Newman claimed the professor violated his “duty of loyalty to [the] University.” The philosopher was also “designated persona non grata” and banned from the campus.
This situation does raise many legal concerns which will, no doubt, be addressed by lawyers. Since I am not a lawyer, I will leave the legality of these actions (especially the firing of the professors) to the experts. However, I will address the moral issue of firing faculty on the grounds of violating a “duty of loyalty to the university.” Since the concepts of duty and loyalty are moral concepts, this is clearly a moral matter.
While one could spend countless hours battling over the semantics of “duty” and “loyalty”, I will stick with a concise account that has intuitive appeal. A duty of loyalty to an institution is violated when a person knowingly and willingly acts contrary to the proper purpose of an institution that justly imposes a moral obligation on the person. If a person has no moral obligation to the institution, then he cannot violate a duty of loyalty. If a person is not acting contrary to the proper purpose of the institution, then the charge of disloyalty would be unfounded. After all, the person would be acting in a manner that is loyal to the university.
While there is an ever increasing push to redefine the university as a business with “loyalty” a matter of contracts and payments, it is still reasonable to regard a professor as having the requisite obligation to the university. As Socrates argued in regards to the state, by remaining at the university and accepting the goods it provides, the professor has accepted the obligation. Naturally, there are the usual exceptions for force or fraud being used against the professor. This, then, obligates the professor to the proper purpose of the university. As such, a professor could violate her duty of loyalty.
A rather more difficult matter is working out the proper purpose of a university. This is critical to determining whether a professor has violated her duty of loyalty to that institution. One approach is to define the purpose in terms of whatever the administration (primarily the president) says it is. While this would simplify matters, it is a problematic and unappealing approach. To use the obvious analogy, the loyalty of those who have a duty to the United States belongs not to the individuals who happen to hold office, but to the Constitution. To use a specific example, officers in the United States military do not swear an oath to the person who happens to occupy the office, be it Obama or Trump. As such, if the president orders an officer to violate the Constitution, the officer’s refusal is an act of loyalty rather than disloyalty to the United States.
A similar thing should hold in the case of an institution like a university—they are not supposed to exist to serve the whims or needs of the leadership of the time, but to serve the foundational principles of the institution. As such, an alternative approach is needed.
While universities will vary in their specific purposes, the core mission of a university would seem to involve doing what it is that universities do. Going back to the analogy of the state, if the state is supposed to serve the good of the people and defend life, liberty and property, then the university should serve the good of the students and defend truth, academic freedom, and the advancement of knowledge.
Getting back to the case at hand, the faculty who were fired were serving the proper mission of the university: they were acting for the good of the students and doing so, based on the evidence, from compassion and moral concern. As such, it is the faculty who remained loyal to the university. In contrast, the president violated his duty of loyalty to the university by acting contrary to the true purpose of an institute of higher learning. Thus, if disloyalty should be punished by being fired, the president should be fired.
“I believe in God, and there are things that I believe that I know are crazy. I know they’re not true.”
While Stephen Colbert ended up as a successful comedian, he originally planned to major in philosophy. His past occasionally returns to haunt him with digressions from the land of comedy into the realm of philosophy (though detractors might claim that philosophy is comedy without humor; but that is actually law). Colbert has what seems to be an odd epistemology: he regularly claims that he believes in things he knows are not true, such as guardian angels. While it would be easy enough to dismiss this claim as merely comedic, it does raise many interesting philosophical issues. The main and most obvious issue is whether a person can believe in something they know is not true.
While a thorough examination of this issue would require a deep examination of the concepts of belief, truth and knowledge, I will take a shortcut and go with intuitively plausible stock accounts of these concepts. To believe something is to hold the opinion that it is true. A belief is true, in the common sense view, when it gets reality right—this is the often maligned correspondence theory of truth. The stock simple account of knowledge in philosophy is that a person knows that P when the person believes P, P is true, and the belief in P is properly justified. The justified true belief account of knowledge has been savagely bloodied by countless attacks, but shall suffice for this discussion.
Given this basic analysis, it would seem impossible for a person to believe in something they know is not true. This would require that the person believes something is true when they also believe it is false. To use the example of God, a person would need to believe that it is true that God exists and false that God exists. This would seem to commit the person to believing that a contradiction is true, which is problematic because a contradiction is always false.
One possible response is to point out that the human mind is not beholden to the rules of logic—while a contradiction cannot be true, there are many ways a person can hold to contradictory beliefs. One possibility is that the person does not realize that the beliefs contradict one another and hence they can hold to both. This might be due to an ability to compartmentalize the beliefs so they are never in the consciousness at the same time or due to a failure to recognize the contradiction. Another possibility is that the person does not grasp the notion of contradiction and hence does not realize that they cannot logically accept the truth of two beliefs that are contradictory.
While these responses do have considerable appeal, they do not appear to work in cases in which the person actually claims, as Colbert does, that they believe something they know is not true. After all, making this claim does require considering both beliefs in the same context and, if the claim of knowledge is taken seriously, that the person is aware that the rejection of the belief is justified sufficiently to qualify as knowledge. As such, when a person claims that they believe something they know is not true, that person would seem to be either not telling the truth or ignorant of what the words mean. Or perhaps there are other alternatives.
One possibility is to consider the power of cognitive dissonance management—a person could know that a cherished belief is not true, yet refuse to reject the belief while being fully aware that this is a problem. I will explore this possibility in the context of comfort beliefs in a later essay.
Another possibility is to consider that the term “knowledge” is not being used in the strict philosophical sense of a justified true belief. Rather, it could be taken to refer to strongly believing that something is true—even when it is not. For example, a person might say “I know I turned off the stove” when, in fact, they did not. As another example, a person might say “I knew she loved me, but I was wrong.” What they mean is that they really believed she loved him, but that belief was false.
Using this weaker account of knowledge, a person can believe in something that they know is not true. This just involves believing in something that one also strongly believes is not true. In some cases, this is quite rational. For example, when I roll a twenty sided die, I strongly believe that I will not roll a 20. However, I do also believe that I will roll a 20 and my belief has a 5% chance of being true. As such, I can believe what I know is not true—assuming that this means that I can believe in something that I believe is less likely than another belief.
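The 5% figure can be confirmed with a quick sketch (a hypothetical illustration added here, not part of the original argument): the exact probability of rolling a 20 on a fair twenty-sided die is 1/20, and a simple Monte Carlo simulation lands close to that value.

```python
import random

# Exact probability of rolling a 20 on a fair twenty-sided die
p_exact = 1 / 20  # 0.05, i.e., a 5% chance

# Monte Carlo check: simulate many rolls and count how often a 20 comes up
random.seed(42)  # fixed seed so the estimate is reproducible
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 20) == 20)
p_empirical = hits / trials

print(f"exact: {p_exact:.3f}, empirical: {p_empirical:.3f}")
```

So the belief “I will roll a 20” is rationally held with low confidence, while the belief “I will not roll a 20” is held with 95% confidence, which is the sense in which one can believe what one “knows” is not true.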
People are also strongly influenced by emotional and other factors that are not based in a rational assessment. For example, a gambler might know that their odds of winning are extremely low and thus know they will lose (that is, have a strongly supported belief that they will lose) yet also strongly believe they will win (that is, feel strongly about a weakly supported belief). Likewise, a person could accept that the weight of the evidence is against the existence of God and thus know that God does not exist (that is, have a strongly supported belief that God does not exist) while also believing strongly that God does exist (that is, having considerable faith that is not based in evidence).
The Iowa caucuses brought some surprises: Trump lost to Cruz, Rubio took third and Sanders almost tied Clinton. While Trump was the predicted winner and leading in the polls, his defeat seems easy enough to explain. While Trump is a master reality show star and showman, Cruz is an experienced politician who knows how to operate effectively within the political system. While getting votes is dependent on political popularity, it is also a matter of ensuring that people vote and Cruz seems to have done a better job at this task. As such, while Trump was probably more popular, he was not more popular among those who voted. Trump is, interestingly enough, now threatening to sue Cruz for cheating in Iowa. Assuming that Cruz did not cheat and assuming that he won through superior political organization, then Trump will need to match Cruz in this regard or face the very real risk of losing the nomination. That said, it has been claimed that Cruz’s appeal to the evangelicals led him to a victory over Trump—something that Cruz cannot count on across the country.
What is perhaps most interesting is that the pundits are claiming Rubio also had a victory on the grounds that he moved into a very close third. Rubio is the clear establishment candidate at this point and he seems well-positioned to pick up the supporters of the doomed establishment candidates, such as Jeb Bush. With the backing of the Republican party machinery, Rubio could come out ahead of Trump and Cruz. That said, the anti-establishment sentiment should not be dismissed: if Cruz can maintain the appearance of being a political outsider while using the skill set he has developed as a career politician, he stands an excellent chance of having the best of both worlds.
While Sanders is a long-time senator, he is regarded as an authentic outsider. This is in strong contrast with Hillary Clinton. She has a well-established reputation as a supreme insider and is certainly not known for her authenticity. The challenge for Sanders is maintaining enthusiasm in the face of the Clinton political machine. Fortunately for Bernie, we have seen that the Clinton machine can be defeated and Hillary is no doubt worried that 2016 might look like a repeat of 2008. Only with an old white socialist rather than a young black moderate in the starring role.
We might see Rubio going up against Sanders in the general election. If so, I would predict Rubio by a slight margin. Clinton would probably beat Rubio. Cruz and Trump, I think, would lose to either Clinton or Sanders. But, my predictions are probably wrong, as much is up in the air, which makes matters interesting.
While all states allow for concealed carry, schools have generally been areas of exemption. As this is being written, my adopted state of Florida is considering a bill that would make concealed carry legal on the campuses of the state’s public universities. Some other states have already passed such laws. While I have written about concealed campus carry before, my focus here is on professors who refuse to allow guns in their classrooms and offices.
While I am not a lawyer, I am inclined to believe that professors lack the legal authority to impose such bans. This is, presumably, something the courts will be hashing out in upcoming lawsuits—perhaps including suits alleging a violation of a constitutionally protected right. Since I am not a lawyer, I will leave the legal matters to the experts. Instead, I will focus on the moral aspects of the subject.
One moral argument that could be made in favor of the professors is that they have the right to ban things they regard as morally offensive from their classrooms and offices. So, a professor who is morally opposed to guns could refuse to allow them. This is analogous in some ways to religious freedom arguments used to justify a business not providing coverage of contraception or those deployed against allowing same sex-marriage. The idea in all these cases is that the moral interest of one person or group overrides that of another, thus justifying the freedom of one over another. In the case of guns, it is the right of the professor to teach and hold office hours in a gun-free environment that overrides the right of others to carry guns.
One reply to this argument, as is used in the religious freedom cases, is that the right of the professor to restrict the right of the students is not justified. That is, their right to carry a weapon trumps her right to be in a weapon free zone. This would be somewhat similar to how the right of a same-sex couple to marry trumps the right of religious people to live in a same-sex marriage free country.
Another reply to this argument is to draw an analogy that is aimed at showing the absurdity of such a professorial ban. Imagine a professor who has a deep and abiding moral opposition to birth control and wants to ban it from her classroom and office. This includes birth control that is being “concealed” in the body (for example, a woman on the pill)—while the professor cannot see it, the mere presence is morally intolerable to her. While the professor has the right to keep students from fornicating in class, she would not seem to have the right to ban the presence of birth control. A similar argument could be made with smart phones: a professor can forbid their use in class because they can be disruptive and be used to cheat, but he cannot refuse to allow students to have them in their backpacks or pockets. As such, professors do not seem to have the right to ban guns simply because they are morally offended by them.
A better moral argument is based on the matter of safety: a professor could be concerned about people being shot (intentionally or accidentally). Colleagues of mine have also spoken about the chilling effect of allowing guns on campus: people, it is claimed, would be afraid to discuss contentious issues. It is also claimed that some professors would be inclined to grade easier to avoid getting shot.
There certainly are legitimate safety concerns about allowing guns on campus. However, there are two obvious points worth considering. The first is that guns are already allowed many places and people do not seem generally inclined to avoid contentious discussions or to not do their jobs properly because someone might shoot them with a (up to the murder attempt) legally carried gun. As such, unless campuses are simply special places, this concern does not warrant a special ban on campus carry. Put another way, if guns are allowed almost everywhere else, then without a relevant difference argument, they should be allowed on campuses. The second, as I point out to my colleagues, is that people can very easily carry guns illegally on campus. If someone intends to kill a professor over a bad grade or a heated discussion (which has happened) they can do so. Campuses are generally quite open and I have never seen anyone checked for weapons at any university. A professor ban would certainly not provide a greater degree of safety—even if the professor was able to enforce such an almost certainly illegal ban.
Interestingly, the state legislatures that pass concealed carry on campus laws generally forbid people to bring guns to the legislature. While this shows inconsistency, it does not show the law is wrong. It does, however, point towards a relevant difference argument—perhaps the campus is relevantly similar to the legislature.
My view is that there is not really a compelling reason to walk around campus with a gun and I am concerned about safety issues. However, I do not have the moral right to ban guns from my classroom or office. In fact, I have no plans to carry one myself.
In January 2016, Denmark passed a law under which refugees who enter the state with assets greater than about US $1,450 will have their valuables taken in order to help pay for the cost of their being in the country. In response to international criticism, Denmark modified the law to allow refugees to keep items of sentimental value, such as wedding rings. This matter is certainly one of moral concern.
Critics have been quick to deploy a Nazi analogy, likening this policy to how the Nazis stole the valuables of those they sent to the concentration camps. While taking from refugees does seem morally problematic, the Nazi analogy does not really stick—there are too many relevant differences between the situations. Most importantly, the Danes would be caring for the refugees rather than murdering them. There is also the fact that the refugees are voluntarily going to Denmark rather than being rounded up, robbed, imprisoned and murdered. While the Danes have clearly not gone full Nazi, there are still grounds for moral criticism. However, I will endeavor to provide a short defense of the law—a rational consideration requires at least considering the pro side of the argument.
The main motivation of the law seems to be to deter refugees from coming to Denmark. This is a strategy of making their country less appealing than other countries in the hopes that refugees will go somewhere else and be someone else’s burden. Countries, like individuals, do seem to have the right to make themselves less appealing. While this sort of approach is certainly not morally commendable, it does not seem to be morally wrong. After all, the Danes are not simply banning refugees but trying to provide a financial disincentive. Somewhat ironically, the law would not deter the poorest of refugees. It would only deter those who have enough property to make losing it a worthwhile deterrent.
The main moral argument in favor of the law is based on the principle that people should help pay for the cost of their upkeep to at least the degree they can afford to do so. To use an analogy, if people show up at my house and ask to live with me and eat my food, it would certainly be fair of me to expect them to at least chip in for the costs of the utilities and food. After all, I do not get my utilities and food for free. This argument does have considerable appeal, but can be countered.
One counter to the argument is based on the fact that the refugees are fleeing a disaster. Going back to the house analogy, if survivors of a disaster showed up at my door asking for a place to stay until they could get back on their feet, taking their few remaining possessions to offset the cost of their food and shelter would seem to be cruel and heartless. They have lost so much already and to take what little remains to them would add insult to injury. To use another analogy, it would be like a rescue crew stripping people of their valuables to help pay for the rescue. While rescues are expensive, such a practice certainly would seem awful.
One reply to this counter is that refugees who are well off should pay for what they receive. After all, if relatively well-off people showed up at my door asking for food and shelter, it would not seem wrong of me to expect that they contribute to the cost of things. After all, if they can afford it, then they have no grounds to claim a free ride off me. Likewise for well-off refugees. That said, the law does not actually address the point, unless having more than $1,450 counts as being well off.
Another point of consideration is that it is one thing to have people pay for lodging and food with money they have; quite another to take a person’s remaining worldly possessions. It seems like a form of robbery, using whatever threat drove the refugees from home as the weapon. The obvious reply is that the refugees would be choosing to go to Denmark; they could go to a more generous country. The problem is, however, that refugees might soon have little choice about where they go.
Despite the predictions of many pundits, presidential candidate Donald Trump still leads the Republican pack as of the end of January. As should be expected, Trump’s remarks have resulted in criticism from the left. Somewhat unexpectedly, he has also been condemned by many conservatives. The National Review, a bastion of conservative thought, devoted an entire issue to harsh condemnation of Trump. This is certainly a fascinating situation and will no doubt become a chapter in many future political science textbooks.
That Trump is doing well should itself not be surprising. As I have argued in previous essays, he is the logical result of the strategies and tactics of the Republican Party. The Republican establishment has been feeding the beast; they should not be shocked that it has grown large. They crafted the ideal political ecosystem for Trump; they should not be dismayed that he has dominated this niche. As in so many horror stories, perhaps they realize they have created a monster and now they are endeavoring to destroy it.
It is not entirely clear what the “(un)friendly fire” of fellow Republicans is supposed to accomplish. One possibility is that the establishment hopes that these attacks will knock Trump down and allow a candidate more appealing to the establishment to win the nomination. Trump, many pundits claim, would lose in the general election and the Republicans certainly wish to win. However, Trump should not be counted out—he has repeatedly proven the pundits wrong and he might be, oddly enough, the best chance for a Republican victory in 2016.
The United States electorate has changed in recent years and Trump seems to be able to appeal very strongly to certain elements of this population. Bernie Sanders has also been able to appeal very strongly to other elements—and perhaps some of the same. As such, the Republican establishment might wish to reconsider their view of Trump’s chances relative to the other candidates.
That said, while Trump has done quite well in the polls, this is rather different from doing well in the actual trench work of politics. Doing well in the polls is rather like being a popular actor or athlete—this does not require a broad organization and a nationwide political machine. Trump is certainly a media star—quite literally. Soon, however, the “ground game” begins and the received opinion is this is where organization and political chops are decisive. Critics have pointed out, sweating just a bit, that Trump does not seem to have much of a ground game and certainly has little political chop-building experience. Doing well in this ground game is analogous to doing well in a war; it remains to be seen if Trump can transition from reality TV star to political general.
As a counter to this, it can be argued that Trump could simply ride on his popularity and that this would offset any weaknesses in regard to his organization and political chops. After all, highly motivated voters could simply get things done for him.
A second possibility is that at least some of the critics of Trump are motivated by more than concerns about pragmatic politics: they have a moral concern about Trump’s words and actions. Some of the concern is based on the assertion that Trump is not a true conservative. These concerns are well-founded: Trump is certainly not a social conservative and, while wealthy, he does not seem to have a strong commitment to classic conservative ideology. Other aspects of the concern are based on Trump’s character and style; he is often regarded as a vulgar populist.
Those who oppose Trump on these grounds would presumably not be swayed by evidence that Trump could do well in the general election—if he is an awful candidate, he would presumably be worse as president. This election could be a very interesting test of party loyalty (and Hillary loathing). Some Republicans have said that they will not vote for Trump and most of these have made it clear they will not vote for a Democrat. As such, the Democrat might win in virtue of Republican voters not voting. After all, a Republican who does not vote is almost as good as a vote for the Democrat. As such, it is not surprising that a popular conspiracy theory speculates that Trump is an agent of the Clintons.
While I have been playing video games since the digital dawn of gaming, it was not until I completed Halo 5 that I gave some philosophical consideration to video game cut scenes. For those not familiar with cut scenes, they are non-interactive movies within a game. They serve a variety of purposes, such as providing backstory, showing the consequences of the player’s actions, or conveying information, such as how adversaries or challenges work.
The reason that Halo 5 motivated me to write about cut scenes is an unfortunate one: I believe that Halo 5 made poor use of cut scenes, and I will argue for this point as part of my sketch of cut scene theory. Some in the gaming world, including director Guillermo Del Toro and game designer Ken Levine, have spoken against the use of cut scenes. In support of their position, a fairly reasonable argument can be presented against cut scenes in games.
One fundamental difference between a game and a movie is the distinction between active and passive involvement. In the case of a typical movie, the audience merely experiences the movie as observers—they do not influence the outcome. In contrast, the players of a game experience the game as participants—they have a degree of control over the events. A cut scene, or in-game movie, changes the person from being a player to being an audience member. This is analogous to taking a person playing sports and putting her into the bleachers to be a mere spectator. The person is, literally, taken out of the game. While there are some who enjoy watching sports, the athlete is there to play and not to be part of the audience. Likewise, while watching a movie can be enjoyable, a gamer is there to game and not to be an audience member. To borrow from Aristotle, games and movies each have their own proper pleasures, and mixing them together can harm the achievement of this pleasure.
Aristotle, in the Poetics, is critical of the use of the spectacle (such as what we would now call special effects) to produce the tragic feeling of tragedy. He contends that this should be done by the plot. Though this is harder to do, the effect is better. In the case of a video game, the use of cinematics can be regarded as an inferior way of bringing about the intended experience of a game. The proper means of bringing about the effect should lie within the game itself—that is, what the player is actually playing and not merely observing as a passive spectator. As such, cut scenes should be absent from games. Or, at the very least, kept to a minimum.
One way to counter this argument is to draw an analogy to role-playing games such as D&D, Pathfinder and Call of Cthulhu. Such games typically begin with what is analogous to a game’s opening cinematic: the game master sets the stage for the adventure to follow. During the course of play, there are often important events that take considerable game-world time but would be boring to actually play. For example, a stock phrase used by most game masters is “you journey for many days”, perhaps with some narrative about events that are relevant to the adventure, such as the party members (whose players may already be friends in real life) becoming friends along the way. There are also other situations in which information needs to be conveyed or stories told that do not need to actually be played out, because doing so would not be enjoyable or would be needlessly time-consuming if done using game mechanics. A part of these games is shifting from active participant to briefly taking on the role of the audience. However, this is rather like being on the bench listening to the coach rather than being removed from the field and put into the bleachers. While one is not actively playing at that moment, it is still an important part of the game, and the player knows that she will be playing soon.
In the case of video games, the same sort of approach would also seem to fit, at least in games that have story elements that are important to the game (such as plot continuity, background setting, maintaining some realism, and so on) yet would be tedious, time-consuming or beyond the mechanics of the game to actually play through. For example, if the game involves the player driving through a wasteland from a settlement to the ruins of a city she wishes to explore, then a short cut scene that illustrates the desolation of the world while the character is driving would certainly be appropriate. After all, driving for hours through a desolate wasteland would be very boring.
Because of the above argument, I do think that cut scenes can be a proper part of a video game, provided that they are used properly. This requires, but is not limited to, ensuring that the cut scenes are necessary and that the game would not be better served by either deleting the events covered in the movies or having them handled with actual game play. It is also critical that the player not feel that she has been put into the bleachers, although that bench feeling can be appropriate. As a general rule, I look at cut scenes as analogous to narrative in a tabletop role-playing game: a cut scene in a video game is fine if narrative would be fine in an analogous situation in a tabletop game.
Since I was motivated by Halo 5’s failings, I will use it as an example of the bad use of cut scenes. This will contain some possible spoilers, so those who plan to play the game might wish to stop reading.
Going with my narrative rule, a cut scene should not contain things that would be more fun to actually play than to watch—unless there is some greater compelling reason why it must be a cut scene. Halo 5 routinely breaks this rule. A rather important sub-rule is that major enemies should be dealt with in game play and not simply defeated in a cut scene. Halo 5 broke this rule right away. In Halo 4, Jul ‘Mdama was built up as a major enemy. As such, it was rather surprising that he was knifed to death in a cut scene right near the start of Halo 5. This would be like setting out to kill a dragon in Dungeons & Dragons and having the dungeon master allow you to fight the orcs and goblins, but then just say “Fred the fighter hacks down the dragon. It dies” in lieu of playing out the fight with the dragon. Throughout Halo 5 there were cut scenes where my friend and I said “huh, that would have been fun to actually play rather than just watch.” That, in my view, is a mark of bad choices about cut scenes.
The designers also made the opposite sort of error: making players engage in tedious “play” that would have been far better served by short cut scenes. For example, there are parts where the player has to engage in tedious travel (such as ascending a damaged structure). While it would have been best to make this interesting, it would have been less bad to have a quick cut scene of the Spartans scrambling to safety. The worst examples, though, involved “game play” in which the player remains in first-person shooter view but cannot use any combat abilities. The goal is to walk around trying to find the various people to “talk” to. The conversations are scripted: when you reach the person, the non-player character just says a few things and your character says something back—there are no dialogue choices. These should have been handled by short cut scenes. After all, when I am playing a first-person shooter, I do not want to have to walk around, unable to shoot, in order to trigger recorded conversations. These games are supposed to be “shoot and loot,” not “walk and talk.”
To conclude, I take the view of cut scenes that Aristotle takes of acting: while some condemn all cut scenes and all acting (it was argued by some that tragedy was inferior to the epic because it was acted out on stage), it is only poor use of cut scenes (and poor acting) that should be condemned. I do condemn Halo 5.