In the United States, comedy seems to be dominated by the left, as exemplified by shows such as The Daily Show and Full Frontal. While there are conservative comedians, they tend to avoid political comedy and instead seem more likely to comment on rednecks rather than red state politics. As might be imagined, there has been considerable speculation about this political division in the art of comedy, one that mirrors the political divide in the arts in general. While I do not pretend to offer a definitive answer, it is certainly interesting to engage in some speculation.
Smug liberals might be inclined to avail themselves of the hypothesis that comedy requires intelligence and that conservatives are generally less intelligent than liberals. Conservatives might counter that liberal stupidity is what causes them to be amused by stupid liberal jokes. While comedy can be an indicator of wit, there does not seem to be a meaningful intelligence gap dividing the political spectrum, although people always think that their side is superior.
A rather more plausible explanation is that the difference rests in psychology: the same traits that draw a person to liberalism would also make a person more proficient at comedy. In contrast, the traits that draw a person to conservatism would make them less capable when it comes to comedy.
Sticking with the classic definitions, conservatives want to preserve the existing social order and tend to have a favorable view of the established social institutions. Liberals tend to want to change the social order and regard the established social institutions with some suspicion. Since political comedy involves making fun of the existing social order and mocking established institutions, this would help explain why conservatives would be less likely to be engaged in political comedy than liberals.
It is worth considering that while conservative comedians would not be very inclined to attack conservative targets, there is still a target-rich environment. There are obviously many liberal groups and organizations that would be ideally suited for conservative comedy—in fact, liberal comedians have already laid the foundations for mocking many of these groups. There are also numerous liberal individuals who would be suitable targets; they have already been softened up a bit by liberal comics.
Conservatives, such as Rush Limbaugh, do engage in mockery of institutions and social orders they regard as tainted with the left. However, this mockery tends to be more of an attack than an exercise in comedy: while some regard Limbaugh as a clown, he is not seen as a comedian. Or particularly funny. Given the abundance of targets and the willingness of conservatives to go after them, it is something of a mystery why this ecological niche of comedic liberal mocking has not been filled.
One possible explanation is a variation of the victim narrative that conservatives typically reject or condemn when it is used by the left to explain, for example, why women or minorities are underrepresented in an area. The narrative is that comedy is controlled and dominated by the liberals and they are using their power and influence to suppress and oppress conservatives who want to be comedians. If only conservative comics were given a chance to get their comedy out to the people, they would succeed.
Some might be tempted to reject this argument for the reasons typically advanced by conservatives when the narrative involves racism and sexism, such as claims that the failure of the allegedly oppressed is due to their own defects, to simply deny the disparity or to advance the bootstrap argument. While this approach might be satisfying, it is certainly worth considering that conservative comedians are the victims of oppression, that their voices are being silenced by the powerful, and that they are victims. The dearth of conservative comedians, like the dearth of minorities in the highest positions in society, does suggest that an injustice is being done. If conservative comedians are being oppressed, then steps should be taken to address this oppression, perhaps beginning with an affirmative comedic action program to help them get established in the face of a system that has long been stacked against them.
While this is certainly the sort of thing leftists love to do for the oppressed, it is also worth considering whether conservatives want to be comedians. As with other cases of alleged oppression, it might be the case that the reason that there are few, if any, conservative comedians is because few, if any, conservatives want to be comedians. If this is the case, then there is no oppression to address—things are as they should be. Another possible explanation lies in the nature of comedy, at least as it is defined by Aristotle.
As Aristotle saw it, comedy “is a subdivision of the ugly” and “consists in some defect or ugliness which is not painful or destructive.” Political comedy typically involves mockery across the lines of power, because politics is all about power relationships. Liberal comedy typically involves mocking upward in terms of power. For example, female comedians making fun of the patriarchy is mockery originating down the power curve that is aimed upwards at the established institutions and norms. Since the mockery is going up the power divide, the comedy generally will not be painful or destructive—after all, the power advantage rests with the target and not the comic.
Since conservatives tend to support the existing power structures and established social values, the target of conservative comedy would tend to be people and organizations outside of those structures or those with different values. As such, conservative comedy would tend to be going down the power curve: the stronger going after the weaker. For example, a white comedian mocking Black Lives Matter would be shooting downward from an advantageous social position. While comedy can go down the power curve and still be comedy, this becomes rather challenging because it is very easy for such attempts to become painful or destructive, thus ceasing to be comedy. Trump provides an excellent example of this. While he often claims to just be joking, his enormous power advantage means that he is almost always punching downwards and thus appears bullying and cruel rather than comedic. This, I think, is a plausible explanation for the dearth of conservative comedians: mocking those who occupy positions of social disadvantage seems more cruel than comedic.
While I have been playing video games since the digital dawn of gaming, it was not until I completed Halo 5 that I gave some philosophical consideration to video game cut scenes. For those not familiar with cut scenes, they are non-interactive movies within a game. They are used for a variety of purposes, such as providing backstory, showing the consequences of the player’s action or providing information, such as how adversaries or challenges work.
The reason that Halo 5 motivated me to write about cut scenes is an unfortunate one: I believe that Halo 5 made poor use of cut scenes and will argue for this point as part of my sketch of cut scene theory. Some in gaming, including film director Guillermo del Toro and game designer Ken Levine, have spoken against the use of cut scenes. In support of their position, a fairly reasonable argument can be presented against cut scenes in games.
One fundamental difference between a game and a movie is the distinction between active and passive involvement. In the case of a typical movie, the audience merely experiences the movie as observers—they do not influence the outcome. In contrast, the players of a game experience the game as participants—they have a degree of control over the events. A cut scene, or in-game movie, changes the person from being a player to being an audience member. This is analogous to taking a person playing sports and putting her into the bleachers to be a mere spectator. The person is, literally, taken out of the game. While there are some who enjoy watching sports, the athlete is there to play and not to be part of the audience. Likewise, while watching a movie can be enjoyable, a gamer is there to game and not be an audience member. To borrow from Aristotle, games and movies each have their own proper pleasures and mixing them together can harm the achievement of this pleasure.
Aristotle, in the Poetics, is critical of the use of the spectacle (such as what we would now call special effects) to produce the tragic feeling of tragedy. He contends that this should be done by the plot. Though this is harder to do, the effect is better. In the case of a video game, the use of cinematics can be regarded as an inferior way of bringing about the intended experience of a game. The proper means of bringing about the effect should lie within the game itself—that is, what the player is actually playing and not merely observing as a passive spectator. As such, cut scenes should be absent from games. Or, at the very least, kept to a minimum.
One way to counter this argument is to draw an analogy to role-playing games such as D&D, Pathfinder and Call of Cthulhu. Such games typically begin with what is analogous to a game’s opening cinematic: the game master sets the stage for the adventure to follow. During the course of play, there are often important events that take considerable game world time but would be boring to actually play. For example, a stock phrase used by most game masters is “you journey for many days”, perhaps with some narrative about events that are relevant to the adventure, such as the party members (who are played by people who are friends in real life) becoming friends along the way. There are also other situations in which information needs to be conveyed or stories told that do not need to actually be played out because doing so would not be enjoyable or would be needlessly time consuming if done using game mechanics. A part of these games is shifting from active participant to briefly taking on the role of the audience. However, this is rather like being on the bench listening to the coach rather than being removed from the field and put into the bleachers. While one is not actively playing at that moment, it is still an important part of the game and the player knows that she will be playing soon.
In the case of video games, the same sort of approach would also seem to fit, at least in games that have story elements that are important to the game (such as plot continuity, background setting, maintaining some realism, and so on) yet would be tedious, time consuming or beyond the mechanics of the game to actually play through. For example, if the game involves the player driving through a wasteland from a settlement to the ruins of a city she wishes to explore, then a short cut scene that illustrates the desolation of the world while the character is driving would certainly be appropriate. After all, driving for hours through a desolate wasteland would be very boring.
Because of the above argument, I do think that cut scenes can be a proper part of a video game, provided that they are used properly. This requires, but is not limited to, ensuring that the cut scenes are necessary and that the game would not be better served by either deleting the events covered in the movies or having them handled with actual game play. It is also critical that the player not feel that she has been put into the bleachers, although that bench feeling can be appropriate. As a general rule, I look at cut scenes as analogous to narrative in a tabletop role-playing game: a cut scene in a video game is fine if narrative would be fine in an analogous situation in a tabletop game.
Since I was motivated by Halo 5’s failings, I will use it as an example of the bad use of cut scenes. This will contain some possible spoilers, so those who plan to play the game might wish to stop reading.
Going with my narrative rule, a cut scene should not contain things that would be more fun to actually play than watch—unless there is some greater compelling reason why it must be a cut scene. Halo 5 routinely breaks this rule. A rather important sub-rule of this rule is that major enemies should be dealt with in game play and not simply defeated in a cut scene. Halo 5 broke this rule right away. In Halo 4 Jul ‘Mdama was built up as a major enemy. As such, it was rather surprising that he was knifed to death in a cut scene right near the start of Halo 5. This would be like setting out to kill a dragon in Dungeons & Dragons and having the dungeon master allow you to fight the orcs and goblins, but then just say “Fred the fighter hacks down the dragon. It dies” in lieu of playing out the fight with the dragon. Throughout Halo 5 there were cut scenes where my friend and I said “huh, that would have been fun to actually play rather than just watch.” That, in my view, is a mark of bad choices about cut scenes.
The designers also made the opposite sort of error: making players engage in tedious “play” that would have been far better served by short cut scenes. For example, there are parts where the player has to engage in tedious travel (such as ascending a damaged structure). While it would have been best to make it interesting, it would have been less bad to have a quick cut scene of the Spartans scrambling to safety. The worst examples, though, involved “game play” in which the player remains in first-person shooter view, but cannot use any combat abilities. The goal is to walk around trying to find the various people to “talk” to. The conversations are scripted: when you reach the person, the non-player character just says a few things and your character says something back—there are no dialogue choices. These should have been handled by short cut scenes. After all, when I am playing a first-person shooter, I do not want to have to walk around unable to shoot to trigger recorded conversations. These games are supposed to be “shoot and loot,” not “walk and talk.”
To conclude, I take the view of cut scenes that Aristotle takes of acting: while some condemn all cut scenes and all acting (it was argued by some that tragedy was inferior to the epic because it was acted out on stage), it is only poor use of cut scenes (and poor acting) that should be condemned. I do condemn Halo 5.
Some ages get cool names, such as the Iron Age or the Gilded Age. Others are dubbed with less awesome mantles. An excellent example of the latter is the designation of our time as the Awkward Age. Since philosophers are often willing to cash in on trends, it is not surprising that there is now a philosophy of awkwardness.
Various arguments have been advanced in support of the claim that this is the Awkward Age. Not surprisingly, a key argument is built on the existence of so many TV shows and movies that center on awkwardness. There is a certain appeal to this sort of argument, and the idea that art expresses the temper, spirit, and social conditions of its age is an old one. I recall, from an art history class I took as an undergraduate, this standard approach to art. For example, the massive works of the ancient Egyptians are supposed to reveal their views of the afterlife, just as the harmony of Greek works is supposed to reveal the soul of ancient Greece.
Wilde, in his dialogue “The Decay of Lying,” considers this very point in laying out “the new aesthetics.” Wilde takes the view that “Art never expresses anything but itself.” Naturally enough, Wilde provides an account of why people think art is about the ages. His explanation is best put by Carly Simon: “You’re so vain, you probably think this song is about you.” Less lyrically, the idea is that vanity causes people to think that the art of their time is about them. Since the people of today were not around in the way back times of old, they cannot say that past art was about them—so they assert that the art of the past was about the people of the past. This does have the virtue of consistency.
While Wilde does not offer a decisive argument in favor of his view, it does have a certain appeal. It is also worth considering that it is problematic to draw an inference about the character of an age from whatever TV shows or movies happen to be in vogue with a certain circle (there are, after all, many shows and movies that are not focused on awkwardness). While it is reasonable to draw some conclusions about that specific circle, extending them to the general population and the entire age would be quite a leap—after all, there are many non-awkward shows and movies that could be presented as contenders for defining the age. It seems sensible to conclude that it is vanity on the part of the members of such a circle to regard what they like as defining the age. It could also be seen as a hasty generalization—people infer that what they regard as defining must also apply to the general population.
A second, somewhat stronger, sort of argument for this being the Awkward Age is based on claims about extensive social changes. To use an oversimplified example, consider the case of gender in the United States. The old social norms had two fairly clearly defined genders and sets of rules regarding interaction. Such rules included those that made it clear that the man asked the woman out on the date and that the man paid for everything. Now, or so the argument goes, the norms are in disarray or have been dissolved. Sticking with gender, Facebook now recognizes over 50 genders which rather complicates matters relative to the “standard” two of the past. Going with the dating rules once again, it is no longer clear who is supposed to do the asking and the paying.
In terms of how this connects to awkwardness, the idea is that when people do not have established social norms and rules to follow, ignorance and error can easily lead to awkward moments. For example, there could be an awkward moment on a date when the check arrives as the two people try to sort out who pays: Dick might be worried that he will offend Jane if he pays and Jane might be expecting Dick to pick up the tab—or she might think that each should pay their own tab.
To use an analogy, consider playing a new and challenging video game. When a person first plays, she will be trying to figure out how the game works and this will typically involve numerous failures. By analogy, when society changes, it is like being in a new game—one does not know the rules. Just as a person can look for guides to a new game online (like YouTube videos on how to beat tough battles), people can try to turn to guides to behavior. However, new social conditions mean that such guides are not yet available or, if they are, they might be unclear or conflict with each other. For example, a person who is new to contemporary dating might try to muddle through on her own or try to do some research—most likely finding contradictory guides to correct dating behavior.
Eventually, of course, the norms and rules will be worked out—as has happened in the past. This indicates a point well worth considering—today is obviously not the first time that society has undergone considerable change, thus creating opportunities for awkwardness. As Wilde noted, our vanity contributes to the erroneous belief that we are special in this regard. That said, it could be contended that people today are reacting to social change in a way that is different and awkward. That is, this is truly the Age of Awkwardness. My own view is that this is one of many times of awkwardness—what has changed is the ability and willingness to broadcast awkward events. Plus, of course, Judd Apatow.
One of my core aesthetic principles is that if I can do something, then it is not art. While this is (mostly) intended to be humorous, it is well founded—I have no artistic talent. Despite this, or perhaps because of this, I have successfully taught aesthetics for over two decades.
In the course of teaching this class, I became rather interested in two questions. The first was whether or not a person without any artistic talent could master the technical aspects of an art. The second was whether or not a person without any artistic talent could develop whatever it is that is needed to create what is often referred to as a work of genius. Or, at a much lower level, a work of true art.
While the usual philosophical approach would be to speculate about the matter over boxes of wine, I decided to engage in some blasphemy and undertake an empirical investigation. To be specific, I decided that I would see if I could teach myself to draw. I would then see if I could teach myself to create art. I began this experiment in August of 2012 and employed the powers of obsession that have served me so well in running. It turns out they also work for drawing—I have never missed a day of drawing, even when I had to scratch out sketches on scraps using a broken pencil. Yes, I am like that.
While this experiment has just one subject (me), I have shown that it is possible for a person with no artistic talent to develop the technical skills of drawing. To be specific, I have trained myself to become what I like to call a graphite technician. At this point, my skill is such that people say “I like your drawings because I can tell who they are of.” That is, I have enough skill to create recognizable imitations. I refuse to accept any claims that I am an artist, on the basis of the principle mentioned above. Fortunately, I also have an argument to back up this claim.
When I started my experiment, I demonstrated my lack of drawing ability to my students and asked them why my bad drawing of a capybara is not art. They pointed out the obvious—it did not look much like a capybara because it was so badly drawn. When asked if it would be art if I could draw better, they generally agreed. I then asked about just photocopying (or scanning and printing) the picture I used as the basis for my capybara drawing. They pointed out the obvious—that would not be art, just a copy.
Part of the reason the photocopy or scan would not be art is that it is just a mechanical reproduction. When I draw a person well enough for others to recognize the subject, I am exhibiting technical skill—I can re-create the appearance of a person on paper using a pencil. However, it is clear that technical skill alone does not make the results art. After all, this technical skill can be exceeded by a cheap camera, a photocopier or a computer connected to a scanner and printer. Just as being able to scan and print a photo of a person does not make a person an artist, being able to create a reasonable facsimile of a person using a pencil and paper does not make a person an artist—just a graphite technician.
Why this is so can be shown by considering why a mechanical copy is not art: there is nothing in the copy that is not in the original (laying aside duplication defects). As such, the more exact the copy of the original, the less room there is for whatever it is that makes a work art. So, as I get better at creating drawings that look like what I am drawing, I get closer to being a human photocopier. I do not get closer to being an artist.
This sort of argument would seem to suggest that photography cannot be art—after all, the photographer is just a camera technician. An unaltered photograph merely captures an image of what is there. One counter to this is that a photographer (as opposed to a camera technician) adds something to the photograph (I do not mean digital or other manipulation). This seems to be her perspective—she selects what she will capture. So, what makes the work art is not that it duplicates reality (which it must by the laws of physics) but that the photographer has added that something extra. This something extra is what makes the photograph art and distinguishes it from mere picture taking. Or so photographers tell me.
It could be countered that what I am doing is art. Going back to the time of the ancient Greeks, art was taken to be a matter of imitation and, in general, the better the imitation, the better the art. Of course, Plato was rather critical of art on this ground—he regarded it as a corrupting imitation of an imitation.
Jumping ahead to the modern era, thinkers like d’Alembert still regarded fine art as an imitation, typically an imitation of nature aimed at producing pleasure. However, his theory of art does leave a possible opening for a graphite technician like myself to claim the beret of the artist. d’Alembert defined “art” as “any system of knowledge reducible to positive and invariable rules independent of caprice or opinion.” What I have done, like many before me, is learned the rules of drawing—geometry, shading, perspective and so on. As such, I can (by his definition) be said to be an artist.
Fortunately for my claim that I am not an artist, d’Alembert distinguishes between the fine arts and the mechanical arts. The mechanical arts involve rules that can be reduced to “purely mechanical operations.” In contrast, d’Alembert notes that while the “useful liberal arts have fixed rules any can transmit, but the laws of Fine Arts are almost exclusively from genius.” What I am doing, as a graphite technician, is following rules. And, as d’Alembert claimed, “rules concerning arts are only the mechanical part…”
What I am missing, at least on d’Alembert’s theory, is genius. On my own view, I am missing the mysterious something extra. While I do not have a developed theory of “the extra”, I have a vague idea about what it is in the case of drawing. As I developed my technical skills, I got better at imitating what I saw and could cause people to recognize what I was imitating. However, an artist who draws goes beyond showing people what they can already see in the original. The artist can see in the original what others cannot and then enable them to see it in her drawing. All I can do is create drawings where people can see what they can already see. Hence, I am a graphite technician and not an artist. I do not claim this to be a proper theory of art—but it points vaguely in the direction of such a theory.
That said, the experiment is continuing. I intend to see if it is possible to learn how to add that something extra or if, as some claim, it is simply something a person has or does not have.
Chaosium’s Call of Cthulhu roleplaying game was my gateway drug to the fiction of H.P. Lovecraft. His works shaped my view of horror and led me to write adventures and monographs for Chaosium. I am rather pleased that one of my creations is now included among the Great Old Ones. I even co-authored a paper on Lovecraft with physicist Paul Halpern. While Lovecraft is well known for the horrors of his Cthulhu Mythos, he is becoming well known for another sort of horror, namely racism.
When I was a kid, I was rather blind to the prejudices expressed in Lovecraft’s writings—I was much more focused on the strange vistas, sanity blasting beings, and the warping of space and time. As I grew older, I became aware of the casual prejudices expressed towards minorities and his special horror of “mongrel races.” However, I was unsure whether he was truly a racist or merely expressing a common world view of his (and our) time. Which, to be honest, can be regarded as racist. Since I rather like Lovecraft’s writings, I was a bit disturbed as revelations about his racism began to pile up.
For the past forty years the World Fantasy Convention has given World Fantasy awards that take the form of a bust of Lovecraft. Nnedi Okorafor won a WFA in 2011 and was rather disturbed to find that Lovecraft had written a racist poem. While not as surprising as the revelation that Dr. Seuss drew racist cartoons, such evidence of blatant racism certainly altered my view of Lovecraft as a person.
As should be expected, there have been efforts to defend Lovecraft. One of the most notable defenders is S.T. Joshi, one of the leading authorities on the author. The defense of Lovecraft follows a fairly stock approach used to address the issue of whether or not artists’ personal qualities or actions should be relevant to the merit of their art. I turn now to considering some of these stock arguments.
One stock defense is the “product of the times” defense: although Lovecraft was racist, nearly everyone was racist in that time period. This defense does have some merit in that it is reasonable to consider the social and moral setting in which an artist lived. After all, artists have no special immunity to social influences. To use an analogy, consider the stock feminist arguments regarding the harmful influence of the patriarchal culture, sexist imagery, sexist language and unrealistic body images on young women. The argument is often made that young women are shaped by these forces and develop low self-esteem, become more likely to have eating disorders, and develop unrealistic images of how they should look and behave. If these cultural influences can have such a devastating impact on young women, it is certainly easy enough to imagine the damaging impact of a culture awash in racism upon the young Lovecraft. Just as a young woman inundated by photoshopped images of supermodels can develop a distorted view of reality, a young person exposed to racism can develop a distorted view of reality. And, just as one would not hold the young woman responsible for her distorted self-image, one should not hold the young racist accountable for his distorted other-image.
It can be countered that the analogy does not hold. While young women can be mentally shaped by the patriarchal influences of the culture and are not morally accountable for this, people are fully responsible for accepting racism even in a culture that is flooded with racism, such as the United States of the early 1900s. As such, Lovecraft is fully to blame for his racist views and his condemnation is justified. The challenge is, of course, to work out how some cultural factors can shape people in ways that excuse them while other shaping leaves people morally accountable.
Another reply is that this stock argument is a version of the appeal to common practice fallacy—a fallacy that occurs when a practice is defended on the grounds that it is commonly done. Obviously, the mere fact that a practice is common does not justify that practice. So, although racism was common in Lovecraft’s day, this does not serve as a defense of his views.
A second stock defense is that the artist has other traits that offset the negative qualities in question. In the case of Lovecraft, the defense is that he was intelligent, generous and produced works of considerable influence and merit. This defense does have some appeal—after all, everyone has negative traits and a person should be assessed by the totality of her being, not her worst quality taken in isolation.
While this is a reasonable reply, it only works to the degree that a person’s good qualities offset the negative qualities. After all, there are many awful people who are kind to their own pets or loved some other people. As such, a consideration of this defense would require weighing the evil of Lovecraft against the good. One factor well worth considering is that although Lovecraft wrote racist things and thought racist thoughts, there is the question of whether his racism led him to actually harm anyone. While it might be claimed that racism itself is crime enough, it does seem to matter whether or not he actually acted on this racism to the detriment of others. This, of course, ties into the broader philosophical issue of the moral importance of thoughts versus the moral importance of actions.
Another concern with this defense is that even if a person’s positive traits outweigh the negative, this does not erase the negative traits. So even if Lovecraft was a smart and generous racist, he was still a racist. Which is certainly grounds for condemnation.
A third, and especially intriguing, stock defense against one moral flaw is to argue that the flaw is subsumed in a far greater flaw. In the case of Lovecraft, it could be argued that his specific racism is subsumed into his general misanthropic view of humanity. While there is some debate about the extent of his (alleged) misanthropy, this does have some appeal. After all, if Lovecraft disliked humans in general, his racism against specific ethnic groups would be part of that overall view and not racism in the usual sense. Many of Lovecraft’s stories (such as “The Picture in the House,” “The Shadow Over Innsmouth,” “The Rats in the Walls,” and “The Dunwich Horror”) feature the degeneracy and villainy of those of European stock. The descriptions of the degenerated whites are every bit as condemning and harsh as his descriptions of people of other ethnicities. As such, Lovecraft cannot be accused of being a racist—unless his racism is cast as being against all humans.
One counter to this is to point out that being awful in general is not a defense of being awful in a particular way. Another counter is that while Lovecraft did include degenerate white people, he also wrote in very positive ways about some white characters—something he did not do for any other ethnicities. This, it could be argued, does support the claim that Lovecraft was racist.
A final stock defense is to argue that the merits of artists’ works are independent of the personal qualities of the artists. What matters, it can be argued, is the quality of the work itself. One way to argue for this is to use an analogy from my own past.
Years ago, when I was a young cross country runner, there was a very good runner at another college. This fellow regularly placed in and even won races—he was, without a doubt, one of the best runners in the conference. However, he was almost universally despised—so much so that people joked that the only reason no one beat him up was because they could not catch him. Despite his being hated, his fellow runners had to acknowledge the fact that he was a good runner and merited all the victories. The same would seem to apply in the case of an artist like Lovecraft: his works should be assessed on their own merits and not on his personality traits.
Another way to make the argument is to point out that an artist having positive qualities does not make the art better. A person might be a moral saint, but this does not mean that her guitar playing skill will be exceptional. A person might be kind to animals and devoted to the wellbeing of others, but this will not enhance his poetry. So, if the positive traits of an artist do not improve a work, it should follow that negative traits do not make the work worse.
This then leads to the concern that an artist’s personal qualities might corrupt a work. To go back to the running analogy, if the despised runner was despised because he cheated at the races, then the personality traits that made him the object of dislike would be relevant to assessing the merit of his performances. Likewise, if the racism of a racist author infects his works, then this could be regarded as reducing their merit. This leads to the issue of whether or not such racism actually detracts from the merit of a work, which is a lengthy issue for another time.
My own view of Lovecraft is that his racism made him a worse person. However, the fact that he was a racist does not impact the merit of his works—except to the degree that the racist elements in the stories damage their artistic merit (which is an issue well worth considering). As such, Lovecraft should be condemned for his racism, but given due praise for the value of his work and his contribution to modern horror.
Although I like science fiction, I did not see Interstellar until fairly recently—although time is such a subjective sort of thing. One reason I decided to see it is because some have claimed that the movie should be shown in science classes, presumably to help the kids learn science. Because of this, I expected to see a science fiction movie. Since I write science fiction, horror and fantasy stuff, it should not be surprising that I get a bit obsessive about genre classifications. Since I am a professor, it should also not be surprising that I have an interest in teaching methods. As such, I will be considering Interstellar in regards to both genre classifications and its education value in the context of science. There will be spoilers—so if you have not seen it, you might wish to hold off reading this essay.
While there have been numerous attempts to distinguish between science and fantasy, Roger Zelazny presents one of the most brilliant and concise accounts in a dialogue between Yama and Tak in Lord of Light. Tak has inquired of Yama about whether a creature, a Rakshasa, he has seen is a demon or not. Yama responds by saying, “If by ‘demon’ you mean a malefic, supernatural creature, possessed of great powers, life span and the ability to temporarily assume any shape — then the answer is no. This is the generally accepted definition, but it is untrue in one respect. … It is not a supernatural creature.”
Tak, not surprisingly, does not see the importance of this single untruth in the definition. Yama replies with “Ah, but it makes a great deal of difference, you see. It is the difference between the unknown and the unknowable, between science and fantasy — it is a matter of essence. The four points of the compass be logic, knowledge, wisdom, and the unknown. Some do bow in that final direction. Others advance upon it. To bow before the one is to lose sight of the three. I may submit to the unknown, but never to the unknowable.”
In Lord of Light, the Rakshasa play the role of demons, but they are aliens—the original inhabitants of a world conquered by human colonists. As such, they are natural creatures and fall under the domain of science. While I do not completely agree with Zelazny’s distinction, I find it appealing and reasonable enough to use as the foundation for the following discussion of the movie.
Interstellar initially stays safely within the realm of science fiction, keeping to scientific speculation regarding hypersleep, wormholes and black holes. While the script does take some liberties with the science, this is fine for the obvious reason that this is science fiction and not a science lecture. Interstellar also has the interesting bonus of having contributed to real science regarding the appearance of black holes. That aspect would provide some justification for showing it (or some of it) in a science class.
Another part of the movie that would be suitable for a science class is the set of scenes in which Murph thinks that her room might be haunted by a ghost. Cooper, her father, urges her to apply the scientific method to the phenomenon. Of course, it might be considered bad parenting for a parent to urge his child to study what might be a dangerous phenomenon in her room. Cooper also instantly dismisses the ghost hypothesis—which can be seen as anything from very scientific (since there has been no evidence of ghosts) to not very scientific (since this might be evidence of ghosts).
The story does include the point that the local school is denying that the moon-landings really occurred and the official textbooks support this view. Murph is punished at school for arguing that the moon landings did occur and is rewarded by Cooper. This does make a point about science denial and could thus be of use in the classroom.
Rather ironically, the story presents its own conspiracies and casts two of the main scientists (Brand and Mann) as liars. Brand lies about his failed equation for “good” reasons—to keep people working on a project that has a chance and to keep morale up. Mann lies about the habitability of his world because, despite being built up in the story as the best of the scientists, he cannot take the strain of being alone. As such, the movie sends a mixed-message about conspiracies and lying scientists. While learning that some people are liars has value, this does not add to the movie’s value as a science class film. Now, to get back to the science.
The science core of the movie focuses on holes: the wormhole and the black hole. As noted above, the movie does stick within the realm of speculative science in regards to the wormhole and the black hole—at least until near the end of the movie.
It turns out that all that is needed to fix Brand’s equation is data from inside a black hole. Conveniently, one is present. Also conveniently, Cooper and the cool robot TARS end up piloting their ships into the black hole as part of the plan to save Brand. It is at this point that the movie moves from science to fantasy.
Cooper and TARS manage to survive being dragged into the black hole, which might be scientifically fine. However, they are then rescued by the mysterious “they” (whoever created the wormhole and sent messages to NASA).
Cooper is transported into a tesseract or something. The way it works in the movie is that Cooper is floating “in” what seems to be a massive structure. In “reality” it is a nifty blend of time and space—he can see and interact with all the temporal slices that occurred in Murph’s room. Crudely put, it allows him to move in time as if it were space, while it is also, sort of, still space. While this is rather weird, it is still within the realm of speculative science fiction.
Cooper is somehow able to interact with the room using weird movie plot rules—he can knock books off the shelves in a Morse code pattern, he can precisely change local gravity to provide the location of the NASA base in binary, and finally he can manipulate the hand of the watch he gave his daughter to convey the data needed to complete the equation. Weirdly, he cannot just manipulate a pen or pencil to write things out. But, movie. While a bit absurd, this is still science fiction.
The main problem lies with the way Cooper solves the problem of locating Murph at the right time. While at this point I would have bought the idea that he figured out the time scale of the room and could rapidly check it, the story has Cooper navigate through the vast time room using love as a “force” that can transcend time. While it is possible that Cooper is wrong about what he is really doing, the movie certainly presents it as if this love force is what serves as his temporal positioning system.
While love is a great thing, there are no even remotely scientific theories that provide a foundation for love having the qualities needed to enable such temporal navigation. There is, of course, scientific research into love and other emotions. The best of current love science indicates that love is a “mechanical” phenomenon (in the philosophical sense) and there is nothing to even suggest that it provides what amounts to supernatural abilities.
It would, of course, be fine to have Cooper keep on trying because he loves his children—love does that. But making love into some sort of trans-dimensional force is clearly fantasy rather than science and certainly not suitable for a science lesson (well, other than to show what is not science).
One last concern I have with using the movie in a science class is the use of what seem to be super beings. While the audience learns little of the beings, the movie does make clear that these beings can manipulate time and space. They create the wormhole, they pull Cooper and TARS from a black hole, they send Cooper back in time and enable him to communicate in stupid ways, and so on. The movie also tells the audience the beings are probably future humans (or what humanity becomes) and that they can “see” all of time. While the movie does not mention this, this is how St. Augustine saw God—He is outside of time. They are also clearly rather benign and demonstrate that they do care about individuals—they save Cooper and TARS. Of course, they also let many people die needlessly.
Given these qualities, it is easy to see these beings (or being) as playing the role of God or even being God—a super powerful, sometimes benign being that has incredible power over time and space, yet is fine with letting lots of people die needlessly while miraculously saving a person or two.
Given the wormhole, it is easy to compare this movie to Star Trek: Deep Space Nine. This show had a wormhole populated by powerful beings that existed outside of our normal dimensions. To the people of Bajor, these beings were divine and supernatural Prophets. To Star Fleet, they were the wormhole aliens. While Star Trek is supposed to be science fiction, some episodes involving the Prophets did blur the line into fantasy, perhaps intentionally.
Getting back to Interstellar, it could be argued that the mysterious “they” are like the Rakshasa of Lord of Light in that they (or whatever) have many of the attributes of God, but are not supernatural beings. Being fiction, this could be set by fiat—but this does raise the boundary question. To be specific, does declaring that something with what appear to be the usual supernatural powers is not supernatural suffice to make it science fiction rather than fantasy? Answering this requires working out a proper theory of the boundary, which goes beyond the scope of this essay. However, I will note that having the day saved by the intervention of mysterious and almost divinely powerful beings does not seem to make the movie suitable for a science class. Rather, it makes it seem to be more of a fantasy story masquerading as science fiction.
My overall view is that showing parts of Interstellar, specifically the science parts, could be fine for a science class. However, the movie as a whole is more fantasy than science fiction.
Once and future presidential candidate Mike Huckabee recently expressed his concern about the profanity flowing from the mouths of New York Fox News ladies: “In Iowa, you would not have people who would just throw the f-bomb and use gratuitous profanity in a professional setting. In New York, not only do the men do it, but the women do it! This would be considered totally inappropriate to say these things in front of a woman. For a woman to say them in a professional setting that’s just trashy!”
In response, Erin Gloria Ryan posted a piece on Jezebel.com. As might be suspected, the piece utilized the sort of language that Mike dislikes and she started off with “listen up, cunts: folksy as balls probable 2016 Presidential candidate Mike Huckabee has some goddamn opinions about what sort of language women should use. And guess the fuck what? You bitches need to stop with this swearing shit.” While the short article did not set a record for OD (Obscenity Density), the author did make a good go at it.
I am not much for swearing. In fact, I used to say “swearing is for people who don’t know how to use words.” That said, I do recognize that there are proper uses of swearing.
While I generally do not favor swearing, there are exceptions in which swearing is not only permissible, but necessary. For example, when I was running cross country, one of the other runners was looking super rough. The coach asked him how he felt and he said “I feel like shit coach.” The coach corrected him by saying “no, you feel like crap.” He replied, “No, coach, I feel like shit.” And he was completely right. Inspired by the memory of this exchange, I will endeavor to discuss proper swearing. I am, of course, not developing a full theory of swearing—just a brief exploration of the matter.
I do agree with some of what Huckabee said, namely the criticism of swearing in a professional context. However, my professional context is academics and I am doing my professional thing in front of students and other faculty—not exactly a place where gratuitous f-bombing would be appropriate or even useful. It would also make me appear sloppy and stupid—as if I could not express ideas or keep the attention of the class or colleagues without the cheap shock theatrics of swearing.
I am certainly open to the idea that such swearing could be appropriate in certain professional contexts. That is, that the vocabulary of swearing would be necessary to describe professional matters accurately and doing so would not make a person seem sloppy, disrespectful or stupid. Perhaps Fox News and Jezebel.com are such places.
While I was raised with certain patriarchal views, I have shed all but their psychological residue. Hearing a woman swear “feels” worse than hearing a man swear, but I know this is just the dregs of the past. If it is appropriate for a man to swear, the same right of swearing applies to a woman equally. I’m gender neutral, at least in principle.
Outside of the professional setting, I still have a general opposition to casual and repetitive swearing. The main reason is that I look at words and phrases as tools. As with any tool, they have their suitable and proper uses. While a screwdriver could be used to pound in nails, that is a poor use. While a shotgun could be used to kill a fly, that is excessive and will cause needless collateral damage. Likewise, swear words have specific functions and using them poorly can show not only a lack of manners and respect, but a lack of artistry.
In general, the function of swear words is to serve as dramatic tools—that is, they are intended to shock and to convey something rather strong, such as great anger. To use them casually and constantly is rather like using a scalpel for every casual cutting task—while it will work, the blade will grow dull from repeated use and will no longer function well when it is needed for its proper task. So, I reserve my swear words not because I am prudish, but because if I wear them out, they will not serve me when I really need them most. For example, if I say “we are fucked” all the time for any minor problem, then when a situation in which we are well and truly fucked arrives, I will not be able to use that phrase effectively. But, if I save it for when the fuck hits the fan, then people who know me will know that it has gotten truly serious—I have broken out the “it is serious” words.
As another example, swear words should be saved for when a powerful insult or judgment is needed. If I were to constantly call normal people “fuckers” or describe not-so-bad things as being “shit”, then I would have little means of describing truly bad people and truly bad things. While I generally avoid swearing, I do need those words from time to time, such as when someone really is a fucker or something truly is shit.
Of course, swear words can also be used for humorous purposes. This is not really my sort of thing, but their shock value can serve well here—to make a strong point or to get a laugh. However, if the words are too worn by constant use, then they can no longer serve this purpose. And, of course, it can be all too easy and inartistic to get a laugh simply by being crude—true artistry involves being able to get laughs using the same language one would use in front of grandpa in church. Of course, there is also an artistry to swearing—but that is more than just doing it all the time.
I would not dream of imposing on others—folks who wish to communicate normally using swear words have every right to do so, just as someone is free to pound nails with a screwdriver or whittle with a scalpel. However, it does bother me a bit that these words are being dulled and weakened by excessive use. If this keeps up, we will need to make new words and phrases to replace them—and then, no doubt, new words to replace those.