The bookshelves of the world abound with tomes on self-help. Many of these profess to help people with various emotional woes, such as sadness, and make vague promises about happiness. Interestingly enough, philosophers have long been in the business of offering advice on how to be happy. Or at least not too sad.
Each spring semester I teach Modern Philosophy and cover our good dead friend Spinoza. In addition to an exciting career as a lens grinder, he also managed to avoid being killed by an assassin. However, breathing in all that glass dust seems to have ultimately contributed to his untimely death. But enough about his life and death; it is time to get to the point of this essay.
As Spinoza saw it, people are slaves to their emotions and chained to what they love, such as fame, fortune and other people. This inevitably leads to sadness: the people we love betray us or die. That fancy Tesla can be smashed in a wreck. The beach house can be swept away by the rising tide. A job can be lost as a company seeks to boost its stock prices by downsizing the job fillers. And so on, through all the ways things can go badly.
While Spinoza was a pantheist and believed that everything is God and God is everything, his view of human beings is similar to that of the philosophical mechanist: humans are not magically exempt from the laws of nature. He was also a strict determinist: each event occurs from necessity and cannot be otherwise—there is no chance or choice. So, for example, the Seahawks could not have won the 2015 Super Bowl. As another example, I could not have written this essay in any other manner, so I had to make that remark about the Seahawks losing rather than mentioning their 2014 victory.
Buying into determinism, Spinoza took the view that human behavior and motivations can be examined as one might examine “lines, planes or bodies.” More precisely, he took the view that emotions follow the same necessity as all other things, thus making the effects of the emotions predictable—provided that one has enough knowledge. Spinoza then used this idea as the basis for his “self-help” advice.
According to Spinoza all emotions are responses to the past, present or future. For example, a person might feel regret because she believes she could have made her last relationship work if she had only put more effort into it. As another example, a person might worry because he thinks that he might lose his job in the next round of downsizing at his company. These negative feelings rest, as Spinoza sees it, on the false belief that the past could have been otherwise and that the future is undetermined. Once a person realizes nothing could have been any different and the future cannot be anything other than what it will be, then that person will suffer less from the emotions. Thus, for Spinoza, freedom from the enslaving chains is the recognition and acceptance that what was could not have been otherwise and what will be cannot be otherwise.
This view does have a certain appeal and it does make sense that it can have some value. In regards to the past, people do often beat themselves up emotionally over what they regard as past mistakes. This can lead a person to be chained by regrets and thus be partially trapped in the past as she spends countless hours wondering “what if?” This is not to say that feeling regret or guilt is wrong—far from it. But, it is to say that lamenting about the past to the detriment of now is a problem. It is also a problem to believe that things could have been different when they, in fact, could not have been different.
This is also not to say that a person should not reflect on the past—after all, a person who does not learn from her mistakes is doomed to repeat them. People can, of course, also be trapped by the past because of what they see as good things about the past—they are chained to what they (think) they once had or once were (such as being the big woman on campus back in college).
In regards to the future, it is very easy to be trapped by anxiety, fear and even hope. It can be reassuring to embrace the view that what will be will be and to not worry and be happy. This is not to say that one should be foolish about the future, of course.
There is, unfortunately, one crushing and obvious problem with Spinoza’s advice. If everything is necessary and determined, his advice makes no sense: what is, must be and cannot be otherwise. To use an analogy, it would be like shouting advice at someone watching a cut scene in a video game. This is pointless, since the person cannot do anything to change what is occurring. What occurs must occur and cannot be otherwise. For Spinoza, while we might think life is like a game, it is like that cut scene: we are spectators of the show and not players controlling the game.
The obvious counter is to say “but I feel free! I feel like I am making choices!” Spinoza was well aware of this objection. In response, he claims that if a stone were conscious and hurled through the air, it would think it was free to choose to move and land where it does. People think they are free because they are “conscious of their own actions, and ignorant of the causes by which those actions are determined.” In other words, we think we are free because we do not know better. Going back to the video game analogy, we think we are in control as we push the buttons, but this is because we do not know how the game actually works—that is, we are just along for the ride and not in control.
Since everything is determined, whether or not a person heeds Spinoza’s advice is also determined—if you do, then you do and you could not do otherwise. If you do not, you could not do otherwise. As such, his advice would seem to be beyond useless. This is a stock paradox faced by determinists who give advice: their theory says that people cannot choose to follow this advice—they will just do what they are determined to do. That said, it is possible to salvage some useful advice from Spinoza.
The first step is for me to reject his view that I lack free will. I have a stock argument for this that goes as follows. Obviously, I have free will or I do not. It is equally obvious that there is no way to tell whether I do or not. From an empirical standpoint, a universe with free will looks and feels just like a universe without free will: you just observe people doing stuff and apparently making decisions while thinking and feeling that you are doing the same.
Suppose someone rejects free will and they are wrong. In this case they are not only mistaken but have also freely made the wrong choice, using their real freedom to deny that very freedom.
Suppose someone rejects free will and they are correct. In that case, they are right—but not in the sense that they made the correct choice. They would have been determined to have that view and it would just so happen that it matches reality.
Suppose someone accepts free will and they are right. In this case, they have the correct view. They have also made the right choice—since choice would be real, making right and wrong choices is possible. More importantly, if they act consistently with this view, then they will be doing things right—not in the moral sense, but in the sense that they are acting in accord with how the universe works.
Suppose someone accepts free will and they are wrong. In this case they are in error, but have not made an incorrect choice (for obvious reasons). They believe they are freely making choices, but obviously are not.
If I can choose, then I should obviously choose free will. If I cannot choose, then I will think I chose whatever it is I am determined to believe. If I can choose and choose to think I cannot, I am in error. Since I cannot know which option is correct, it seems best to accept free will. If I am actually free, I am right. If I am not free, then I am mistaken but had no choice.
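The case analysis above amounts to a small decision matrix, and it can be sketched programmatically. This is purely an illustrative sketch of my argument as laid out in the preceding paragraphs; the function name and the case labels are my own, not anything from Spinoza.

```python
# A sketch of the free-will wager as a four-case decision matrix.
# Rows: whether one accepts free will; columns: whether free will is real.

def outcome(accepts_free_will: bool, free_will_is_real: bool) -> str:
    """Summarize each of the four cases from the argument above."""
    if accepts_free_will and free_will_is_real:
        return "right belief, and a genuinely right choice"
    if accepts_free_will and not free_will_is_real:
        return "wrong belief, but no real choice was made, so no wrong choice"
    if not accepts_free_will and free_will_is_real:
        return "wrong belief, and a freely made wrong choice"
    return "right belief, but only by determined accident"

# Enumerate all four cases.
for accepts in (True, False):
    for real in (True, False):
        print(f"accepts={accepts}, real={real}: {outcome(accepts, real)}")
```

The asymmetry the argument relies on is visible in the matrix: accepting free will never yields a wrongly made choice, while rejecting it can.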
Given the above argument, I accept that I have agency. This makes it possible for me to meaningfully give and accept (or reject) advice. Turning back to Spinoza, I obviously cannot accept his claim that I am enslaved by determinism. However, I can accept some of his other claims, namely that I am acted upon by my attachments and emotions. As he sees it, the emotions are things that act upon us—on my view, they would thus be things that impinge upon our agency. As I love to do, I will use an analogy to running.
As I ran this morning, I was thinking about this essay and focused on the fact that feelings of pain (I have various old and new injuries) and tiredness were impinging on me in a manner similar to the way the cold or rain might impinge on me. In the case of pain and tiredness, the attack is from inside. In the case of the cold or rain, the attack is from the outside. Whether the attack is from inside or out, the attack is trying to make the choice for me—to rob me of my agency as a runner. If the pain, cold or rain makes me stop, then I am not acting. I am being acted upon. If I choose to stop, then I am acting. If I choose to go on, I am also acting. And acting rightly. As a runner I know the difference between choosing to stop and being forced to stop.
Being aware of this is very useful for running—thanks to decades of experience I understand, in a way Spinoza might approve, the workings of pain, fatigue and so on. To use a specific example, I know that I am being acted upon by the pain and I understand quite well how it works. As such, the pain is not in control—I am. If I wish, I can run myself to ruin (and I have done just this). Or I can be wiser and avoid damaging myself.
Turning back to emotions, feelings impinge upon me in ways analogous to pain and fatigue. I do not have full control over how I feel—the emotions simply occur, perhaps in response to events or perhaps simply as the result of an electrochemical imbalance. To use a specific example, like most folks I will sometimes feel depressed while knowing that I have no reason to feel that way. It is like the cold or fatigue—it is just impinging on me. As Spinoza argued, my knowledge of how this works is critical to dealing with it. While I cannot fully control the feeling, I understand why I feel that way. It is like the cold I felt running in the Maine winters—it is a natural phenomenon that is, from my perspective, trying to destroy me. In the case of the cold, I can wear warmer clothing and stay moving—knowing how it works enables me to choose how to combat it. Likewise, knowing how the negative feelings work enables me to choose how to combat them. If I am depressed for no reason, I know it is just my brain trying to kill me. It is not pleasant, but it does not get to make the decisions for me. Fortunately, our good dead friend Aristotle has some excellent advice for training oneself to handle the emotions.
That said, the analogy to cold is particularly apt. The ice of the winter can kill even those who understand it and know how to resist it—sometimes the cold is just too much for the body. Likewise, the emotions can be like the howling icy wind—they can be too much for the mind. We are, after all, only human and have our limits. Knowing these is a part of wisdom. Sometimes you just need to come in from the cold or it will kill you. Have some hot chocolate. With marshmallows.
A look back at the American (and world) economy shows a “pastscape” of exploded economic bubbles. The most recent was the housing bubble, but the less recent dot-com bubble serves as a relevant reminder that bubbles can be technological. This is a reminder well worth keeping in mind, for we are, perhaps, inflating a new bubble.
In “The End of Economic Growth?” Oxford’s Carl Frey discusses the new digital economy and presents some rather interesting numbers regarding the value of certain digital companies relative to the number of people they employ. One example is Twitch, which streams videos of people playing games (and people commenting on people playing games). Twitch was purchased by Amazon for $970 million. Twitch has 170 employees. The multi-billion dollar company Facebook had 8,348 employees as of September 2014. Facebook bought WhatsApp for $19 billion. WhatsApp employed 55 people at the time of this acquisition. In an interesting contrast, IBM employed 431,212 people in 2013.
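The contrast in these figures is easier to see as valuation per employee. The following back-of-the-envelope calculation uses only the purchase prices and headcounts cited above (IBM’s market value is not given here, so it is left out):

```python
# Back-of-the-envelope valuation-per-employee ratios, using the figures
# cited above: purchase price in dollars, and employee count at the time.
companies = {
    "Twitch": (970_000_000, 170),       # Amazon purchase price, 170 employees
    "WhatsApp": (19_000_000_000, 55),   # Facebook purchase price, 55 employees
}

for name, (value, employees) in companies.items():
    per_head = value / employees
    print(f"{name}: ${per_head:,.0f} of valuation per employee")
```

The ratios come out to roughly $5.7 million per Twitch employee and roughly $345 million per WhatsApp employee, which is the disparity Frey is pointing at.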
While it is tempting to explain the impressive value to employee ratio in terms of grotesque over-valuation (which does have its merits as a criticism), there are other factors involved. One, as Frey notes, is that the (relatively) new sort of digital businesses require relatively little capital. The above-mentioned WhatsApp started out with $250,000 and this was actually rather high for an app—the average cost to develop one is $6,453. As such, a relatively small investment can create a huge return.
Another factor is an old one, namely the efficiency of technology in replacing human labor. The development of the plow reduced the number of people required to grow food, the development of the tractor reduced it even more, and the refinement of mechanized farming has enabled the number of people required in agriculture to be reduced dramatically. While it is true that people have to do work to create such digital companies (writing the code, for example), much of the “labor” is automated and done by computers rather than people.
A third factor, which is rather critical, is the digital aspect. Companies like Facebook, Twitch and WhatsApp do not make physical objects that need to be manufactured, shipped and sold. As such, they do not (directly) create jobs in these areas. These companies do make use of existing infrastructure: Facebook does need companies like Comcast to provide the internet connection and companies like Apple to make the devices. But, rather importantly, they do not employ the people who work for Comcast and Apple (and even these companies employ relatively few people).
One of the most important components of the digital aspect is the multiplier effect. To illustrate this, consider two imaginary businesses in the health field. One is a walk-in clinic which I will call Nurse Tent. The other is a health app called RoboNurse. If a patient goes to Nurse Tent, the nurse can only tend to one patient at a time and he can only work so many hours per day. As such, Nurse Tent will need to employ multiple nurses (as well as the support staff). In contrast, the RoboNurse app can be sold to billions of people and does not require the sort of infrastructure required by Nurse Tent. If RoboNurse takes off as a hot app, the developer could sell it for millions or even billions.
Nurse Tent could, of course, become a franchise (the McDonald’s of medicine). But, being very labor intensive and requiring considerable material outlay, it will not be able to have the value to employee ratio of a digital company like WhatsApp or Facebook. It would, however, employ more people. However, the odds are that most of the employees would not be well paid—while the digital economy is producing millionaires and billionaires, wages for labor are rather lacking. This helps to explain why the overall economy is doing great, while the majority of workers are worse off than before the last bubble.
It might be wondered why this matters. There are, of course, the usual concerns about the terrible inequality of the economy. However, there is also the concern that a new bubble is being inflated, a bubble filled with digits. There are some good reasons to be concerned.
First, as noted above, the digital companies seem to be grotesquely overvalued. While the situation is not exactly like the housing bubble, overvaluation should be a matter of concern. After all, if the value of these companies is effectively just “hot digits” inflating a thin skin, then a bubble burst seems likely.
This can be countered by arguing that the valuation is accurate or even that all valuation is essentially a matter of belief and as long as we believe, all will be fine. Until, of course, it is no longer fine.
Second, the current digital economy increases the income inequality mentioned above, widening the gap between the rich and the poor. Laying aside the fact that such a gap historically leads to social unrest and revolution, there is the more immediate concern that the gap will cause the bubble to burst—the economy cannot, one would presume, endure without a solid middle and base to help sustain the top of the pyramid.
This can be countered by arguing that the new digital economy will eventually spread the wealth. Anyone can make an app, anyone can create a startup, and anyone can be a millionaire. While this does have an appeal to it, there is the obvious fact that while it is true that (almost) anyone can do these things, it is also true that most people will fail. One just needs to consider all the failed startups and the millions of apps that are not successful.
There is also the obvious fact that civilization requires more than WhatsApp, Twitch and Facebook and people need to work outside of the digital economy (which lives atop the non-digital economy). Perhaps this can be handled by an underclass of people beneath the digital (and financial) elite, who toil away at low wages to buy smartphones so they can update their status on Facebook and watch people play games via Twitch. This is, of course, just a digital variant on a standard sci-fi dystopian scenario.
While college students have been completing student evaluations of faculty since the 1960s, these evaluations have taken on considerable importance. There are various reasons for this. One is a conceptual shift towards the idea that a college is primarily a business and students are customers. On this model, student evaluations of faculty are part of the customer satisfaction survey process. A second is an ideological shift in regards to education. Education is seen more as a private good and something that needs to be properly quantified. This is also tied into the notion that the education system is, like a forest or oilfield, a resource to be exploited for profit. Student evaluations provide a cheap method of assessing the value provided by faculty and, best of all, provide numbers (numbers usually based on subjective assessments, but pay that no mind).
Obviously enough, I agree with the need to assess performance. As a gamer and runner, I have a well-developed obsession with measuring my athletic and gaming performances and I am comfortable with letting that obsession spread freely into my professional life. I want to know if my teaching is effective, what is working, what is not, and what impact I am having on the students. Of course, I want to be confident that the methods of assessment that I am using are actually useful. Having been in education quite some time, I do have some concerns about the usefulness of student evaluations of faculty.
The first and most obvious concern is that students are, almost by definition, not experts in regards to assessing education. While they obviously take classes and observe (when not Facebooking) faculty, they typically lack any formal training in assessment and one might suspect that having students evaluate faculty is on par with having sports fans assess coaching. While fans and students often have strong opinions, this does not really qualify them to provide meaningful professional assessment.
Using the sports analogy, this can be countered by pointing out that while a fan might not be a professional in regards to coaching, a fan usually knows good or bad coaching when she sees it. Likewise, a student who is not an expert at education can still recognize good or bad teaching.
A second concern is the self-selection problem. While students have access to the evaluation forms and can easily go to Rate My Professors, students who take the time to show up and fully complete the forms or go to the website will tend to have stronger feelings about the professor. These feelings will tend to bias the results so that they are more positive or more negative than they should be.
The counter to this is that the creation of such strong feelings is relevant to the assessment of the professor. A practical way to counter the bias is to ensure that most (if not all) students in a course complete the evaluations.
Third, people often base their assessments on irrelevant factors about the professor. These include such things as age, gender, appearance, and personality. The concern is that this makes evaluations a form of popularity contest: professors who are liked will be evaluated better than professors who are not as likeable. There is also the concern that students tend to give younger professors and female professors worse evaluations than older professors and male professors, and these sorts of gender and age biases lower the credibility of such evaluations.
A stock reply to this is that these factors do not influence students as strongly as critics might claim. So, for example, a professor might be well-liked, yet still get poor evaluations in regards to certain aspects of the course. There are also those who question the impact of alleged age and gender bias.
Fourth, people often base assessments on irrelevant factors about the course, such as how easy it is, the specific grade received, or whether they like the subject or not. Not surprisingly, it is commonly held that students give better evaluations to professors who they regard as easy and downgrade those they see as hard.
Given that people generally base assessments on irrelevant factors (a standard problem in critical thinking), this does seem to be a real concern. Anecdotally, my own experience indicates that student assessment can vary a great deal based on irrelevant factors they explicitly mention. I have a 4.0 on Rate My Professors, but there is quite a mix in regards to the review content. What is striking, at least to me, is the inconsistencies between evaluations. Some students claim that my classes are incredibly easy (“he is so easy”), while others claim they are incredibly hard (“the hardest class I have ever taken”). I am also described as being very boring and very interesting, helpful and unhelpful and so on. This sort of inconsistency in evaluations is not uncommon and does raise the obvious concern about the usefulness of such evaluations.
A counter to this is that the information is still useful. Another counter is that the appropriate methods of statistical analysis can be used to address this concern. Those who defend evaluations point out that students tend to be generally consistent in their assessments. Of course, consistency in evaluations does not entail accuracy.
To close, there are two final general concerns about evaluations of faculty. One is the concern about values. That is, what is it that makes a good educator? This is a matter of determining what it is that we are supposed to assess and to use as the standard of assessment. The second is the concern about how well the method of assessment works.
In the case of student evaluations of faculty, we do not seem to be entirely clear about what it is that we are trying to assess nor do we seem to be entirely clear about what counts as being a good educator. In the case of the efficacy of the evaluations, to know whether or not they measure well we would need to have some other means of determining whether a professor is good or not. But, if there were such a method, then student evaluations would seem unnecessary—we could just use those methods. To use an analogy, when it comes to football we do not need to have the fans fill out evaluation forms to determine who is a good or bad athlete: there are clear, objective standards in regards to performance.
When people disagree on controversial issues it is not uncommon for one person to accuse another of lying. In some cases this accusation is clearly warranted and in others it is clearly not. Discerning between these cases is clearly a matter of legitimate concern. There is also some confusion about what should count as a lie and what should not.
While this might seem like a matter of mere semantics, the distinction between what is a lie and what is not actually matters. The main reason for this is that to accuse a person of lying is, in general, to lay a moral charge against the person. It is not merely to claim that the person is in error but to claim that the person is engaged in something that is morally wrong. While some people do use “lie” interchangeably with “untruth”, there is clearly a difference.
To use an easy and obvious example, imagine a student who is asked which year the United States dropped an atomic bomb on Hiroshima. The student thinks it was in 1944 and writes that down. She has made an untrue claim, but it would clearly not do for the teacher to accuse her of being a liar.
Now, imagine that one student, Sally, is asking another student, Jane, about when the United States bombed Hiroshima. Jane does not like Sally and wants her to do badly on her exam, so she tells her that the year was 1944, though she knows it was 1945. If Sally tells another student that it was 1944 and also puts that down on her test, Sally could not justly be accused of lying. Jane, however, can be fairly accused. While Sally is saying and writing something untrue, she believes the claim and is not acting with any malicious intent. In contrast, Jane believes she is saying something untrue and is acting from malice. This suggests some important distinctions between lying and making untrue claims.
One obvious distinction is that a lie requires that the person believe she is making an untrue claim. Naturally, there is the practical problem of determining whether a person really believes what she is claiming, but this is not relevant to the abstract distinction: if the person believes the claim, then she would not be lying when she makes that claim.
It can, of course, be argued that a person can be lying even when she believes what she claims—that what matters is whether the claim is true or not. The obvious problem with this is that the accusation of lying is not just a claim that the person is wrong, it is also a moral condemnation of wrongdoing. While “lie” could be taken to apply to any untrue claim, there would be a need for a new word to convey not just a statement of error but also of condemnation.
It can also be argued that a person can lie by telling the truth, but by doing so in such a way as to mislead a person into believing something untrue. This does have a certain appeal in that it includes the intent to deceive, but differs from the “stock” lie in that the claim is true (or at least believed to be true).
A second obvious distinction is that the person must have a malicious intent. This is a key factor that distinguishes the untruths of the fictions of movies, stories and shows from lies. When the actor playing Darth Vader says to Luke, “No. I am your father,” he is saying something untrue, yet it would be unfair to say that the actor is thus a liar. Likewise, the references to dragons, hobbits and elves in The Hobbit are all untrue—yet one would not brand Tolkien a liar for these words.
The obvious reply to this is that there is a category of lies that lack a malicious intent. These lies are often told with good intentions, such as a compliment about a person’s appearance that is not true or when parents tell their children about Santa Claus. As such, it would seem that there are lies that are not malicious—these are often called “white lies.” If intent matters, then this sort of lie would seem rather less bad than the malicious lie; although they do meet a general definition of “lie” which involves making an untrue claim with the intent to deceive. In this case, the deceit is supposed to be a positive one. Naturally, there are those who would argue that such deceits are still wrong, even if the intent is a good one. The matter is also complicated by the fact that there seem to be untrue claims aimed at deceit that intuitively seem morally acceptable. The classic case is, of course, misleading a person who is out to commit murder.
In some cases one person will accuse another of lying because the person disagrees with a claim made by the other person. For example, a person might claim that Obamacare will help Americans and be accused of lying about this by a person who is opposed to Obamacare.
In this sort of context, the accusation of lying seems to rest on three clear points. The first is that the accuser thinks that the claim is untrue and that the accused does not actually believe it—that is, the accused is engaged in an intentional deceit. The second is that the accuser believes that the accused intends to deceive—that is, he expects people to believe him. The third is that the accuser thinks that the accused has some malicious intent. This might be merely limited to the intent to deceive, but it typically goes beyond this. For example, the proponent of Obamacare might be suspected of employing his alleged deceit to spread socialism and damage businesses. Or it might be that the person is trolling.
So, in order to be justified in accusing a person of lying, it needs to be shown that the person does not really believe his claim, that he intends to deceive and that there is some malicious intent. Arguing against the claim can show that it is untrue, but this would not be sufficient to show that the person is lying—unless one takes a lie to merely be a claim that is not true (so, if someone made a mistake in a math problem and got the wrong answer, he would be a liar). What would be needed would be adequate evidence that the person is insincere in his claim (that is, he believes he is saying the untrue), that he intends to deceive and that there is some malicious intent.
Naturally, effective criticism of a claim does not require showing that the person making the claim is a liar—this is a matter of arguing about the claim. In fact, the truth or falsity of a claim has no connection to the intent of the person making the claim or what he actually believes about it. An accusation of lying, rather, moves from the issue of whether the claim is true or not to a moral dispute about the character of the person making the claim. That is, whether he is a liar or not. It can, of course, be a useful persuasive device to call someone a liar, but it (by itself) does nothing to prove or disprove the claim under dispute.
After the murderous attack on the school in Peshawar, Pakistan an image of a child’s blood-stained shoe began appearing on social media. While the image certainly fit the carnage, the photo was not taken in Peshawar. It had, instead, been taken in May of 2008 in the Israeli city of Ashkelon. Such “re-use” of images is common, especially in social media.
As might be imagined, some took issue with people claiming (wrongly) that the picture was from Peshawar. Others took the view that it did not matter since the image was an appropriate symbol of the situation.
A somewhat analogous situation to the “re-use” of photos is the use, in protests, of incidents that some regard as not being “suitable” symbols for the protest. For example, in response to the protests about the deaths of Brown and Garner, some critics have asserted that the protesters have the facts wrong and that Garner and Brown were not exactly innocent angels. The idea seems to be that the protests can be invalidated by disputing the facts of a specific case or by questioning the suitability of the people used as focal points for the protests.
In response to such criticisms, some defenders of the protesters assert that they do have the facts right and contend that even if Garner and Brown were not innocent angels, injustice still occurred.
The general issue in both sorts of cases is the importance of the truth and purity of the symbols used—be the symbol a photo of a shoe or a black man killed by the police.
As a philosopher, I am initially inclined to come out in favor of the strict truth. Even if the shoe image fit the situation, it is not a picture from the actual event and knowingly using it would be an act of deception. This would certainly seem to be morally wrong. In the case of symbols used in protests, the same reasoning should apply. If the symbols represent the situation incorrectly and those using them know this, then they are engaged in deceit. This would, on the face of it, be wrong.
The “purity” of the people used as symbols is somewhat more complicated. In the case of Brown and Garner, the protesters do not (in general) dispute that these men had broken the law and they do not claim that they were innocent angels. Those critical of the protests sometimes claim that the use of these “impure” symbols somehow invalidates the protest to some degree. Looked at from a purely propaganda viewpoint, innocent angels as victims would be “better”, but injustice does not require that the victim be such an angel. It just requires that a wrong occurs. There is still, however, the moral question of whether or not Garner and Brown were victims of injustice. If they were not, then the protests would be legitimately undermined—after all, a protest about an alleged injustice requires that the injustice be real. If they were victims of injustice, then the protests would obviously have a valid foundation—even though the men were not angels.
As a philosopher who teaches aesthetics, I am willing to consider the possibility that the “factual truth” of a symbol might not be as important as its “symbolic truth.” This, obviously enough, opens the door wide to numerous accusations about my integrity and commitment to the truth. Despite this risk, this is certainly an avenue worth strolling down—though I might not wish to take up residence there.
The reason that I mention aesthetics is that one of the most plausible lines of justification for the use of such “untrue” symbols can be found in the realm of art. As philosophers have long noted, art is a beautiful untrue thing. As such, factual veracity is usually not of critical importance in art. Despite (or perhaps because of) this, works of art can present general truths through what might be regarded as specific untruths. Uncle Tom’s Cabin is not a factual documentary on slavery, Lord of the Flies is not a report of real events, nor is Romeo & Juliet a factual account of a real tragedy. Despite this, these and so many other works convey general truths or make moral points using untrue things.
Assuming that works of art can legitimately use untrue things, it can be argued that the same can be said of symbols, such as the image of the shoe. While the picture of the shoe was, in fact, taken in 2008 in Israel and not in Pakistan, it still serves as a true symbol of the event. That is, it powerfully conveys a general truth about the slaughter of children that goes beyond the specific facts. To dismiss the symbol by saying “why, that is not a picture from the event” is to miss the point of its use as a symbol. As a symbol it is not being presented as a factual representation of the events. Rather, it is being presented as standing for a general truth. Thus, while the symbol is an untrue thing in one sense (it is not a photo of that actual event) it is true in other senses. It symbolizes the killing of children in political struggles and captures the horror of the slaughter of innocents.
Naturally, it is perfectly reasonable to point out that such symbols are not accurate reporting of the event. It is thus completely legitimate to claim that such images should not be used in news reports (except, of course, to report that they are being used, etc.). After all, the true business of news is (or should be) reporting the cold facts. However, there are contexts (such as expressing how one feels on social media) when symbols are appropriate. As long as these are kept properly distinct, then both seem to be legitimate. To use the obvious analogy, the fact that clips from fictional films should not be used in news stories does not entail that fictional films have no place or use in making statements.
Turning to the protests, the matter is somewhat different from that of the image. An image, such as the shoe, can be taken as expressing a general truth. Though the shoe belonged to an Israeli child, it can stand in for the shoe of any child who has been the victim of a terrible attack and it expresses the general horror of such violence. Saying “that picture is not from Pakistan” does not show that the wounding or slaughter of children is not horrible.
However, the truth of the symbolic cases used in protests does seem to matter. As argued above, if the symbolic cases used by protestors turn out to be factually untrue (that is, the narrative of the protesters does not match reality), then that is a problem. For example, if protesters use the killing of a specific black man as a symbol of injustice, but it turns out that the shooting was morally justified, then the protest is undermined. After all, if there was no injustice in a case, then there is no injustice to protest.
One counter to this is that even if a specific symbolic case has been exposed as untrue, this does not discredit the other symbolic cases. For example, the revelation that the Rolling Stone rape article contained numerous untrue claims does discredit that symbolic case, but does not disprove the other cases—they stand or fall on their own merits or defects. This is quite reasonable: the fact that one example is not true does not prove that the other examples are untrue (though it can, of course, raise concerns). So, even if a symbolic case embraced by protesters turns out to not fit, this does not show that the protest is rendered invalid. Using the specific example of campus rape, the fact that the Rolling Stone story unraveled under investigation does not, by itself, show that sexual assault is not a problem on campuses.
But, of course, a claim can be undermined by properly discrediting the supporting examples, be they symbolic or not. So, for example, if it is claimed that the police treat black citizens differently than white citizens and it turns out that this is not generally true, then protests based on this would be undermined. Facts, obviously enough, do matter. However, the weight of each fact must be properly considered: as noted above, showing that one symbolic case is untrue does not discredit all the supporting examples. So, for example, if it is shown that a specific symbolic case does not match the facts, this does not show that the protest is unwarranted.
Despite the Great Recession, the profits for corporations have doubled since 2000. In contrast, the median household income in the United States has fallen from $55,986 to $51,017 (dollars adjusted for inflation, of course). Not surprisingly, corporate profits have gone from 5% to 11% of the GDP while employee wages have dropped from 47% to 43%. While these numbers can be interpreted in various ways, one obvious implication is that corporations are making more money with fewer employees. It is also evident that corporations are doing better than most people (although some would say that corporations are people).
One plausible explanation for this is automation that increases productivity without increasing employment and employee income—a claim put forth by the authors of The Second Machine Age. Historically, automation and other technological advances have increased productivity and eliminated jobs—but these have also consistently resulted in higher incomes in general (often by creating new and better jobs). That is, as some folks like to say, the rising tide of advancement lifted all boats. What is different about the current situation is that the rising tide of advancement has lifted the corporate yachts while causing the rowboats of the common folks to founder (and some to sink).
If Erik Brynjolfsson and Andrew McAfee are right, recent advances are destroying jobs at a rate that exceeds the creation of jobs. This does have a certain plausibility since it is well-established that technological advances do eliminate jobs. The obvious example is how factory automation has reduced the number of factory workers. It certainly would not be shocking or amazing if the elimination of jobs exceeded the creation of jobs—even if the past has been different. One reason for this could be a matter of the nature of the advances. Another reason could be a matter of choice: employers elect to stick with the lower number of employees rather than creating more jobs and employing those whose jobs have been eliminated.
It also seems worth considering the impact of the “internet economy” on these numbers. To be specific, this economy features highly (over) valued companies that have relatively few employees. Consider, for example, companies like Facebook. Facebook was valued at $192 billion in July 2014. IBM was valued at $198 billion. Facebook has about 7,000 employees while IBM has over 400,000. By way of comparison, Walmart has 2.2 million employees (making it the largest private sector employer in the United States). Behind Walmart are the fast food empires of Yum! Brands (523,000 employees) and McDonalds (440,000).
Having such highly (over) valued companies with relatively low numbers of employees would result in a high concentration of profits and wealth. Adding in the fact that the largest employers are in low paying industries (retail and fast food), it would certainly seem to help explain why corporations are doing much better relative to 2000, while most people are doing worse in terms of income.
If there is merit to this explanation, then there are some obvious concerns regarding the sort of economy in which the biggest employers are in low-paying sectors and big profits are made by companies that employ few people (and seem to profit from being excessively overvalued). Some are already suggesting there is a new class system emerging based on this new economy while others point to past bubbles and are waiting for companies like Facebook and Twitter to pop like digital balloons.
Facebook now allows its members to select from among 50 genders. These include the old school heterosexual genders as well as the presumably Spinoza inspired pangender. Since I am awesome gendered, I believe that Facebook should offer that as choice 51, but only for me. However, I suspect I will need to endure the pain of being limited to a mere 50 options.
Upon learning of these fifty options, I was slightly surprised because I was not aware that there were fifty options. However, my colleagues who specialize in gender matters assure me that there is an infinite number of genders. If this is the case, then Facebook is still rather limited in its options.
While mocking Facebook can be amusing, gender identity is an interesting subject and it is a sign of the progress of our society that this can be a matter of legitimate concern. For folks like me who are comfortable existing within an old school gender identity (in my case, awesome straight male), these fifty options might seem to be of little or no importance. Honesty compels me to admit that I initially laughed at the 50 genders of Facebook—in fact, I thought it was something cooked up by the Onion. However, a little reflection on the matter made me realize that it is actually of some importance.
For those who are dedicated to the traditional genders, these options might seem to be signs of the moral decay of the West. As such folks might see it, having Facebook offer 50 gender options shows that traditional gender roles are being damaged (if not destroyed) by the media and Facebook. Given that some states have legalized same-sex marriage, the idea that Facebook has embraced gender diversity must be terrifying indeed.
However, the world (and Facebook) does not (as Leibniz noted in one of his replies to the problem of evil) exist just for me. Or for you. It exists for everyone and we are not all the same.
As such, to those who do not neatly fit into the two traditional genders, this change could be quite significant. Although this is just Facebook, having these gender identities recognized by the largest social network on earth is a mark of acceptance and is likely to have some influence in other areas.
As I noted above, I comfortably occupy a traditional gender type. I’ve never questioned my sexuality nor felt that I was anything other than a straight male. This might be due to biology or perhaps I merely conformed perfectly to the social norms. Or some other factor—I do not know for sure why I am this way.
Since I teach critical thinking, I am well aware of the cognitive biases and fallacies that can lead a person to believe that what is true of herself is also true of everyone else. As such, I do not assume that everyone else is the same as me. As part of this, I also do not assume that the people who see themselves as belonging to one of the non-traditional genders are doing this simply because they want attention, want to rebel, are mentally unbalanced or some such similar negative reason. I also do not assume that they are just “faking it.” I also recognize that a person might feel just as natural and comfortable being transgender as I do being a straight male. As such, I should have no more problem with that person’s identification than that person has with mine. After all, the universe is not for me alone.
Because of this, I hold that people should be free to hold to their gender identities without being mocked, abused or harmed. While I have obviously not been mocked for being straight, I am quite familiar with being called a fag or accused of being gay or like a woman—after all, those are stock insults in our society that are thrown out for the most absurd reasons, such as not doing perfectly in a video game and not acting like the meatheads. As such, I have some small notion of how such attitudes can hurt people and I favor steps to change what underlies the idea that genders can be used as insults. Expanding the range of gender identities can, perhaps, help with this a little bit. Then again, I am sure that some folks will be scouring the list of fifty for new terms to use in their hateful comments.
As a final point, one obvious reason why I think that a broader range of gender identities is fine is that another person’s gender identity is not my business—unless that identity causes legitimate harm to others. And no, being offended or disgusted are not legitimate harms. As such, if having a broader range of choices is meaningful to some people, then that is a good thing. It does no one else any harm and does some good—as such, it seems quite morally acceptable.
Thanks to social media like Twitter and Facebook, students can share with the world what some might regard as hate Tweets or hate posts. For example, Kent State wrestler Sam Wheeler sent out a series of rather unpleasant tweets about Michael Sam (the openly gay college football player). In response, Kent State suspended Mr. Wheeler from the team. There have been other incidents in which students have posted or Tweeted comments that could be deemed racist, sexist and so on and in some cases the schools take action against the students. There is, of course, the question of whether schools should do so.
One obvious approach is to take the view that students agree to a code of conduct. So, if the student code of conduct specifies certain behavior as being grounds for suspension or other action, then the action would thus be warranted on this ground. In the case of student athletes, there are also the rules that govern the sport. When I was a college athlete, I had to follow the NCAA guidelines and could be legitimately punished for breaking them. As such, the suspension of an athlete who breaks the rules would be warranted on this ground.
Of course, there is still the question of whether there should be such rules. After all, rules that forbid a student from expressing views would seem to be a violation of the student’s free expression and thus would be, on the face of it, morally unacceptable.
My own view is, not surprisingly, that students do not lose their right to free expression by being students or student athletes. However, freedom of expression is neither absolute nor a free pass to say anything.
Obviously enough, things like actual threats of violence are not covered by the right to free expression and students can be justly held accountable for such things. However, merely saying things that are regarded as hateful (racist, sexist, homophobic, etc.) would not justify a school taking action against a student. This is because while people have a right to not be threatened, they do not have a right to not be offended or insulted by the speech of another person. So, if a student goes on a homophobic rant on Twitter and does not cross over into such things as threats of violence, then she is acting within her rights and the school has no right to silence or punish her. The school also has no right to create rules that forbid the expression of ideas and views, however offensive those views might be. To do so would, of course, make a mockery of the very idea of the academic freedom that is supposed to be a foundation stone for the university.
A student can, however, be in a position in which she can be legitimately called to task for such speech. If the student is acting in the capacity of a spokesperson for the university, then she can be held accountable in that capacity because she is not acting as a private individual but as a representative of the school. The same can apply to athletes as well—athletes are taken to represent their school and, as such, occupy a position that would plausibly make them spokespeople for the school. As such, they can be held accountable in that capacity. So, for example, a cross-country team captain who insists on making hateful, vulgar and poorly written Tweets about Christians can be legitimately censured—as a member of the team he is in the role of representing his school. If he wishes to remain on the team, he will need to cease that behavior. He can, of course, elect to leave the team—if he regards being able to tweet hateful and vulgar things about Jesus as being more important to him than being on the team.
There is a rather serious concern about the extent to which a student can be regarded as representing the school and also the important matter of sorting out what sort of speech would warrant action being taken against the student. Unfortunately, I cannot cover these matters in this short essay, but in general, I would favor a moral policy of tolerance and erring heavily on the side of free expression.
My regular running routes take me over many miles and through areas that are heavily trafficked—most often by college students. Because of this, I often find lost phones, wallets, IDs and other items. Recently I came across a wallet fat with cash and credit cards. As always, I sought out the owner and returned it. Being a philosopher, I thought I’d write a bit about the ethics of this.
While using found credit card numbers would generally be a bad idea from the practical standpoint, found cash is quite another matter. After all, cash is cash and there is typically nothing to link cash to a specific person. Since money is rather useful, a person who finds a wallet fat with cash would have a good practical reason to simply keep the money and use it herself. One possible exception would be that the reward for returning the lost wallet would exceed the value of the cash in the wallet—but the person who finds it would most likely have no idea if this would be the case or not. So, from a purely practical standpoint, keeping the cash would be a smart choice. A person could even return the credit cards and other items in the wallet, claiming quite plausibly that it was otherwise empty when found. However, what might be a smart choice need not be the right choice.
One argument in favor of returning found items (such as the wallet and all the cash) can be built on the golden rule: do unto others as you would have them do unto you. More formally, this is moral reasoning involving the method of reversing the situation. Since I would want my lost property returned, I should thus treat others in the same way. Unless, of course, I can justify treating others differently by finding relevant differences that would justify the difference. Alternatively, it could also be justified on utilitarian grounds. For example, someone who is poor might contend that it would not be wrong to keep money she found in a rich person’s wallet on the grounds that the money would do her much more good than it would do for the rich person: such a small loss would not affect him, such a gain would benefit her significantly.
Since I am reasonably well off and find relatively modest sums of money (hundreds of dollars at most), I have the luxury of not being tempted to keep the money. However, even when I was not at all well off, I still returned whatever I found. Even when I honestly believed that I would put the money to better use than the original owner. This is not due to any fetishes about property, but is a matter of ethics.
One of the reasons is my belief that I do have obligations to help others, especially when the cost to me is low relative to the aid rendered. In the case of finding someone’s wallet or phone, I know that the loss would be a significant inconvenience and worry for most people. In the case of a wallet, a person will probably need to replace a driver’s license, credit cards, insurance cards and worry about identity theft. It is easy for me to return the wallet—either by dropping it off with police or contacting the person after finding them via Facebook or some other means. That said, the obvious challenge is justifying my view that I am so obligated. However, I would contend that in such cases, the burden of proof lies on the selfish rather than the altruistic.
Another reason is that I believe that I should not steal. While keeping a lost item is not the same morally as active theft (this could be seen as being a bit analogous to the distinction between killing and letting die), it does seem to be a form of theft. After all, I would be acquiring what does not belong to me by choosing not to return it. Naturally, if I have no means of returning it to the rightful owner (such as finding a quarter in the road), then keeping it would not seem to be theft. Obviously enough, it could be contended that keeping lost property is not theft (even when it could be returned easily), perhaps on the ancient principle of finders keepers, losers weepers. It could also be contended that theft is acceptable—which would be challenging. However, the burden of proof would seem to rest on those who claim that theft is acceptable or that keeping lost property when returning it would be quite possible is not theft.
I also return found items for two selfish reasons. The first is that I want to build the sort of world I want to live in—and in that world people return lost items. While my acting the way I want the world to be is a tiny thing, it is more than nothing. Second, I feel a psychological compulsion to return things I find—so I have to do it for peace of mind.
Several years ago I was teaching a night class and noticed a student smiling broadly with his arms twitching a bit. Looking closer, I noticed that his hands were moving rapidly under the desk—I immediately thought “well, this could be the most awkward and bizarre moment of my teaching career.” Fortunately, it turned out to be my first encounter with a student using a phone to text in class rather than the awful alternative. Since then, I have seen smart phones take over not only my classes, but the world. Like digital versions of Heinlein’s puppet masters, they are the new rulers of humanity.
Like most educators, I saw it as obvious that the phones would be an impediment to the students. After all, if a student spends the class time texting, booking their faces, and gazing upon the awful majesty of grumpy cat, then they will not be paying attention to what is occurring in class. While some students are capable of self-educating (or effective cheating), a failure to pay attention would generally have a negative impact on the GPA of a student. I predicted, correctly, that the phones would evolve and become ever more distracting. I am now waiting to see whether or not wearable tech becomes a thing with students—just imagine the impact of things like Google Glass on students.
Apparently other educators share my concern about the impact of smartphones on students. Recently Kent State researchers Andrew Lepp, Jacob Barkley and Aryn Karpinski did a study of 500 university students. The study involved tracking phone use, measuring happiness (defined in terms of anxiety and satisfaction) and retrieving official grade point averages. The study population was composed of 500 undergraduates taken equally from each class (freshman, etc.) and included 82 different majors. As such, the study seems to be adequate in size and diversity in regards to the target population.
The analysis showed that as phone use increased, GPA decreased and anxiety increased. The overall conclusion was that high frequency users will have a lower GPA, greater anxiety, and less life satisfaction than those who are lower frequency users. Naturally, these results involve college students. However, it seems reasonable to infer they would apply more generally.
On the face of it, these results seem intuitively plausible and it makes sense to accept that increased phone use can lead to lower GPA, greater anxiety and less life satisfaction. First, it certainly makes sense that a student who spends more time using the phone is most likely spending less time paying attention in class, studying and doing coursework. This would tend to have a negative impact on the student’s GPA. Second, the lower GPA could certainly lead to more anxiety and less satisfaction. Third, there are various other studies that link the things people do on phones (like checking Facebook and seeing the awesome staged photos and crafted status updates of friends) to dissatisfaction. As such, these results seem believable.
That said, as with any causal claims it is important to consider alternatives. First, the possibility of a common cause must also be considered. The basic idea is that when it seems like C is causing effect E, it might be the case that C and E are both effects of a third factor. In the case of the phones, it might be the case that there is a factor (or factors) that are making students anxious, making them less satisfied, lowering their GPAs and causing them to use their phones more. Personal issues, such as with family or with a significant other, are likely candidates for common causes. In fact, it certainly makes sense that this could be the case in some instances.
Second, there is the possibility of reverse causation. The gist is that when it seems as if C is the cause of E, it might actually be the case that E is the cause of C: the causal arrow points backwards. In the case of the phones, it might be that a low GPA leads to the anxiety and dissatisfaction, and these in turn lead to more phone use.
Third, there is also the possibility of mere coincidence—after all, correlation is not causation. However, the existence of clear causal mechanisms makes it unlikely that it is just coincidence.
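The common cause alternative can be made vivid with a toy simulation. The sketch below is purely illustrative and has nothing to do with the Kent State data: it invents a hidden “stress” variable that both raises phone use and lowers GPA, and then shows that phone use and GPA end up strongly correlated even though neither causes the other.

```python
import random

random.seed(42)

def simulate_students(n=1000):
    """Generate (phone_hours, gpa) pairs driven by a hidden common cause."""
    data = []
    for _ in range(n):
        stress = random.gauss(0, 1)  # hidden common cause, never observed
        # Stress pushes phone use up and GPA down; noise keeps it realistic.
        phone_hours = 3.0 + 0.8 * stress + random.gauss(0, 0.5)
        gpa = 3.0 - 0.4 * stress + random.gauss(0, 0.3)
        data.append((phone_hours, gpa))
    return data

def correlation(pairs):
    """Pearson correlation coefficient, computed by hand."""
    xs, ys = zip(*pairs)
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

students = simulate_students()
r = correlation(students)
# Phone use and GPA come out strongly negatively correlated here even
# though, by construction, neither one causes the other.
print(f"correlation between phone use and GPA: {r:.2f}")
```

The point of the sketch is that observing the correlation alone cannot distinguish this common-cause world from one in which phone use directly harms grades, which is exactly why the alternatives above must be considered.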
While the alternatives are worth considering (and probably hold true in some cases), it does seem sensible to accept that higher phone use is a detriment to students (and people in general). While I would oppose schools passing regulations limiting student use of phones (after all, I consistently hold to the right of self-abuse and poor decision making), I do think that university faculty, staff and administrators should make students aware of the harms of phone use and should encourage students to look away from their phones more often, especially in the classroom. So, kids, if you do not want to be stupid, sad and a failure, put down that phone.