After the murderous attack on the school in Peshawar, Pakistan, an image of a child’s blood-stained shoe began appearing on social media. While the image certainly fit the carnage, the photo was not taken in Peshawar. It had, instead, been taken in May of 2008 in the Israeli city of Ashkelon. Such “re-use” of images is common, especially on social media.
As might be imagined, some took issue with people claiming (wrongly) that the picture was from Peshawar. Others took the view that it did not matter since the image was an appropriate symbol of the situation.
A somewhat analogous situation to the “re-use” of photos is the use, in protests, of incidents that some regard as not being “suitable” for the protest. For example, in response to the protests about the deaths of Brown and Garner some critics have asserted that the protesters have the facts wrong and that Garner and Brown were not exactly innocent angels. The idea seems to be that the protests can be invalidated by disputing the facts of a specific case or by questioning the suitability of the people used as focal points for the protests.
In response to such criticisms, some defenders of the protesters assert that they do have the facts right and contend that even if Garner and Brown were not innocent angels, injustice still occurred.
The general issue in both sorts of cases is the importance of the truth and purity of the symbols used—be the symbol a photo of a shoe or a black man killed by the police.
As a philosopher, I am initially inclined to come out in favor of the strict truth. Even if the shoe image fit the situation, it is not a picture from the actual event and knowingly using it would be an act of deception. This would certainly seem to be morally wrong. In the case of symbols used in protests, the same reasoning should apply. If the symbols represent the situation incorrectly and those using them know this, then they are engaged in deceit. This would, on the face of it, be wrong.
The “purity” of the people used as symbols is somewhat more complicated. In the case of Brown and Garner, the protesters do not (in general) dispute that these men had broken the law and they do not claim that they were innocent angels. Those critical of the protests sometimes claim that the use of these “impure” symbols somehow invalidates the protest to some degree. Looked at from a purely propaganda viewpoint, innocent angels as victims would be “better”, but injustice does not require that the victim be such an angel. It just requires that a wrong occurs. There is still, however, the moral question of whether or not Garner and Brown were victims of injustice. If they were not, then the protests would be legitimately undermined—after all, a protest about an alleged injustice requires that the injustice be real. If they were victims of injustice, then the protests would obviously have a valid foundation—even though the men were not angels.
As a philosopher who teaches aesthetics, I am willing to consider the possibility that the “factual truth” of a symbol might not be as important as its “symbolic truth.” This, obviously enough, opens the door wide to numerous accusations about my integrity and commitment to the truth. Despite this risk, this is certainly an avenue worth strolling down—though I might not wish to take up residence there.
The reason that I mention aesthetics is that one of the most plausible lines of justification for the use of such “untrue” symbols can be found in the realm of art. As philosophers have long noted, art is a beautiful untrue thing. As such, factual veracity is usually not of critical importance in art. Despite (or perhaps because of) this, works of art can present general truths through what might be regarded as specific untruths. Uncle Tom’s Cabin is not a factual documentary on slavery, Lord of the Flies is not a report of real events, nor is Romeo & Juliet a factual account of a real tragedy. Despite this, these and so many other works convey general truths or make moral points using untrue things.
Assuming that works of art can legitimately use untrue things, it can be argued that the same can be said of symbols, such as the image of the shoe. While the picture of the shoe was, in fact, taken in 2008 in Israel and not in Pakistan, it still serves as a true symbol of the event. That is, it powerfully conveys a general truth about the slaughter of children that goes beyond the specific facts. To dismiss the symbol by saying “why, that is not a picture from the event” is to miss the point of its use as a symbol. As a symbol it is not being presented as a factual representation of the events. Rather, it is being presented as standing for a general truth. Thus, while the symbol is an untrue thing in one sense (it is not a photo of that actual event) it is true in other senses. It symbolizes the killing of children in political struggles and captures the horror of the slaughter of innocents.
Naturally, it is perfectly reasonable to point out that such symbols are not accurate reporting of the event. It is thus completely legitimate to claim that such images should not be used in news reports (except, of course, to report that they are being used, etc.). After all, the true business of news is (or should be) reporting the cold facts. However, there are contexts (such as expressing how one feels on social media) when symbols are appropriate. As long as these are kept properly distinct, then both seem to be legitimate. To use the obvious analogy, the fact that clips from fictional films should not be used in news stories does not entail that fictional films have no place or use in making statements.
Turning to the matter of protests, the matter is somewhat different from that of the image. An image, such as the shoe, can be taken as expressing a general truth. Though the shoe belonged to an Israeli child, it can stand in for the shoe of any child who has been the victim of a terrible attack and it expresses the general horror of such violence. Saying “that picture is not from Pakistan” does not show that the wounding or slaughter of children is not horrible.
However, the truth of the symbolic cases used in protests does seem to matter. As argued above, if the symbolic cases used by protestors turn out to be factually untrue (that is, the narrative of the protesters does not match reality), then that is a problem. For example, if protesters use the killing of a specific black man as a symbol of injustice, but it turns out that the shooting was morally justified, then the protest is undermined. After all, if there was no injustice in a case, then there is no injustice to protest.
One counter to this is that even if a specific symbolic case has been exposed as untrue, this does not discredit the other symbolic cases. For example, the revelation that the Rolling Stone rape article contained numerous untrue claims does discredit that symbolic case, but does not disprove the other cases—they stand or fall on their own merits or defects. This is quite reasonable: the fact that one example is not true does not prove that the other examples are untrue (though it can, of course, raise concerns). So, even if a symbolic case embraced by protesters turns out to not fit, this does not show that the protest is rendered invalid. Using the specific example of campus rape, the fact that the Rolling Stone story unraveled under investigation does not, by itself, show that sexual assault is not a problem on campuses.
But, of course, a claim can be undermined by properly discrediting the supporting examples, be they symbolic or not. So, for example, if it is claimed that the police treat black citizens differently than white citizens and it turns out that this is not generally true, then protests based on this would be undermined. Facts, obviously enough, do matter. However, the weight of each fact must be properly considered: as noted above, showing that one symbolic case is untrue does not discredit all the supporting examples. So, for example, if it is shown that a specific symbolic case does not match the facts, this does not show that the protest is unwarranted.
As predicted by science fiction writers, cyber warfare has become a rather real thing. The United States and Israel, some say, launched a cyber-attack on the Iranian nuclear program. North Korea, some say, launched a cyber-attack on Sony.
On the face of it, cyber-attacks seem to be a special sort of thing. While conventional attacks can be secret and hard to trace, the typical cyber-attack does not cause the sort of damage and casualties that a traditional attack causes. For example, a conventional attack aimed at the Iranian nuclear program would have most likely killed people and caused considerable damage. In contrast, the cyber-attack was narrowly focused and did not kill anyone. People often seem to “feel” that cyber-attacks are just “different” since they do not involve the sorts of things that most people think of as weapons and do not do the sort of damage that people tend to associate with military attacks. Despite this conceptual problem, it seems quite reasonable to accept that cyber-attacks can have qualities that make it reasonable to regard them as military attacks. To use the obvious analogy, criminals and soldiers both use guns, but the difference between a bank robbery and a military attack lies in the agents carrying out the attack, those ordering the attack, and the goals of the attack. In the case of cyber-attacks, cyber-criminals and cyber-soldiers both use similar weapons. The distinction lies in the agents, those behind the action and the goals.
As mentioned above, some people lay the blame for the attack on Sony on North Korea. If this is true, then this would seem to have the potential of being a military action. After all, it was carried out by a state and had political goals as motivating factors. That said, it could also be argued that the attack was state-sponsored crime. After all, the target was Sony rather than a state target and the operation was more vandalism and extortion than a military strike. This can, of course, be countered by the claim that economic warfare is still warfare—North Korea was attacking an economic entity in another sovereign state (assuming North Korea was behind the attack).
President Obama took the attack seriously and seems to have accepted that North Korea was responsible. He did, however, fall short of calling it a military action, describing it instead in terms of vandalism. He also said that the United States would make a proportional response.
A proportional response is, as a matter of general principle, the right thing to do. After all, the retaliation should be proportional to the provocation. An excessive response would be morally problematic. To use the obvious analogy, if someone shoves me in a dispute and I shoot them in the head with a twelve-gauge shotgun, then I would have acted wrongly. Naturally, there can be considerable debate about the matter of proportionality as well as the value of using a “robust” response as a deterrent (such as pulling a gun when the other person has a stick).
One problem with cyber-attacks is that they are relatively new. Because of this, states have not worked out the norms governing these interactions and there are, as of yet, no clear and specific international treaties and rules laying out the rules of cyber-warfare in a way comparable to the norms and rules of traditional war. We are now in the stages of making up the norms and rules. It should be expected that there will be some problems with this and, no doubt, some defining incidents. The attack on Sony might be one of these.
Obama’s decision to use a proportional response does seem sound and will, perhaps, serve as a starting point for the norms and rules of cyber warfare. This approach is certainly analogous to how conventional attacks are handled. This nicely fits the existing model, namely that incidents in the “physical world” between countries usually stay proportional. For example, when North Korea does something provocative with its military, the United States does not over-react, such as by firing cruise missiles into the country.
One obvious problem with cyber-attacks is working out the proportionality, especially if non-cyber responses are being considered. In such cases, the challenge would be working out what sort of conventional military response would be a proportional response to a cyber-attack. It is not uncommon for people to see cyber-attacks as somehow less “serious” and damaging than “real” world attacks. If North Korea had, for example, sent a strike team to the United States to physically grab computers and erase drives on the spot, then people would feel that something more serious had happened—though the results would have been the same. In such a case, the proportional response would almost certainly be more robust than a proportional response to a cyber-attack. Perhaps this would be justified on the grounds that a physical intrusion is a greater violation of territorial integrity than a virtual intrusion. But, this might simply be a matter of “feeling” and a result of “old-fashioned” thinking—that is, people thinking about attacks in the old way.
I think a reasonable case can be made to treat cyber-attacks as being comparable to traditional attacks and using the results as the measure of proportionality. That is, the United States’ response to the (alleged) North Korean intrusion should be treated the same way that the United States should react to a team of North Koreans physically breaking into Sony at the behest of the state. To treat cyber-attacks as somehow less serious because they are “virtual” seems, as I have been suggesting, a mistake based on outdated concepts of warfare.
Like many Americans my age, I was cajoled by my parents to finish all the food on my plate because people were starving somewhere. When I got a bit older and thought about the matter, I realized that my eating (or not eating) the food on my plate would have no effect on the people starving in some far away part of the world. However, I did internalize two lessons. One was that I should not waste food. The other was that there is always someone starving somewhere.
While food insecurity is a problem in the United States, we Americans waste a great deal of food. It is estimated that about 21% of the food that is harvested and available to be consumed is not consumed. This food includes the unconsumed portions tossed into the trash at restaurants, spoiled tomatoes thrown out by families ($900 million worth), moldy leftovers tossed out when the fridge is cleaned and so on. On average, a family of four wastes about 1,160 pounds of food per year—which is a lot of food.
On the national level, it is estimated that one year of food waste (or loss, if one prefers) uses up 2.5% of the energy consumed in the U.S., about 25% of the fresh water used for agriculture, and about 300 million barrels of oil. The loss, in dollars, is estimated to be $115 billion.
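To put the household figure in more tangible terms, here is a quick back-of-envelope calculation. It is only a sketch using the estimates quoted above (the 1,160 pounds per family of four), not independent data:

```python
# Back-of-envelope check of the household waste estimate cited above.
# All inputs are the figures from the text, not independent measurements.
FAMILY_WASTE_LBS = 1160  # estimated food waste per family of four, per year
FAMILY_SIZE = 4
DAYS_PER_YEAR = 365

per_person_per_day = FAMILY_WASTE_LBS / FAMILY_SIZE / DAYS_PER_YEAR
print(f"Waste per person per day: {per_person_per_day:.2f} lbs")
```

On these numbers, each person throws out roughly eight-tenths of a pound of food every single day, which makes the scale of the waste easier to picture than an annual total does.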
The most obvious moral concern is with the waste. Intuitively, throwing away food and wasting it seems to be wrong—especially (as parents used to say) when people are starving. Of course, as I mentioned above, it is quite reasonable to consider whether or not less waste by Americans would translate into more food for other people.
On the one hand, it might be argued that less wasted food would surely make more food available to those in need. After all, there would be more food.
On the other hand, it seems obvious that less waste would not translate into more food for those who are in need. Going back to my story about cleaning my plate, my eating all the food on my plate would certainly not have helped starving people. After all, the food I eat does not help them. Also, if I did not eat the food, then they would not be harmed—they would not get less food because I threw away my Brussels sprouts.
To use another illustration, suppose that Americans conscientiously only bought the exact number of tomatoes that they would eat and wasted none of them. The most likely result is not that the extra tomatoes would be handed out to the hungry. Rather, farmers would grow fewer tomatoes and markets would stock fewer in response to the reduced demand.
For the most part, people go hungry not because Americans are wasting food and thus making it unavailable, but because they cannot afford the food they need. To use a metaphor, it is not that the peasants are starving because the royalty are tossing the food into the trash. It is that the peasants cannot afford the food that is so plentiful that the royalty can toss it away.
It could be countered that less waste would actually influence the affordability of food. Returning to the tomato example, farmers might keep on producing the same volume of tomatoes, but be forced to lower the prices because of lower demand and also to seek new markets.
It can also be countered that as the population of the earth grows, such waste will really matter—that food thrown away by Americans is, in fact, taking food away from people. If food does become increasingly scarce (as some have argued will occur due to changes in climate and population growth), then waste will really matter. This is certainly worth considering.
There is, as mentioned above, the intuition that waste is, well, just wrong. After all, “throwing away” all those resources (energy, water, oil and money) is certainly wasteful. There is, of course, also the obvious practical concern: when people waste food, they are wasting money.
For example, if Sally buys a mega meal and throws half of it in the trash, she would have been better off buying a moderate meal and eating all of it. As another example, Sam is throwing away money if he buys steaks and vegetables, then lets them rot. So, not wasting food would certainly make good economic sense for individuals. It would also make sense for businesses—at least to the degree that they do not profit from the waste.
Interestingly, some businesses do profit from the waste. To be specific, consider the snacks, meats, cheese, beverages and such that are purchased and never consumed. If people did not buy them, this would result in fewer sales and this would impact the economy all the way from the store to the field. While the exact percentage of food purchased and not consumed is not known, the evidence is that it is significant. So, if people did not overbuy, then the food economy would be reduced by that percentage—resulting in reduced profits and reduced employment. As such, food waste might actually be rather important for the American food economy (much as planned obsolescence is important in the tech fields). And, interestingly enough, the greater the waste, the greater its importance in maintaining the food economy.
If this sort of reasoning is good, then it might be immoral to waste less food—after all, a utilitarian argument could be crafted showing that less waste would create more harm than good (putting supermarket workers and farmers out of work, for example). As such, waste might be good. At least in the context of the existing economic system, which might not be so good.
In December 2014 two NYC police officers, Rafael Ramos and Wenjian Liu, were shot to death by Ismaaiyl Brinsley. Brinsley had earlier shot and wounded his ex-girlfriend. Brinsley claimed to have been acting in response to the police killings of Brown and Garner. There have been some claims of a connection between Brinsley’s actions and the protests against those two killings. This situation does raise an issue of moral responsibility in regards to such acts of violence.
Not surprisingly, this is not the first time I have written about gun violence and responsibility. After Jared Lee Loughner shot Congresswoman Giffords and others in 2011, there was some blame placed on Sarah Palin and the Tea Party. Palin, it might be recalled, made use of cross hairs and violent metaphors when discussing matters of politics. The Tea Party was also accused of creating a context of violence.
Back in 2011 I argued that Palin and the Tea Party were not morally responsible for Loughner. I still agree with my position of that time. First, while Palin used violent metaphors, she clearly was not calling on people to engage in actual violence. Such metaphors are used regularly in sports and politics with the understanding that they are just that, metaphors.
Second, while there are people in the Tea Party who are very much committed to gun rights, the vast majority of them do not support the shooting of their fellow Americans—even if they disagree with their politics. While there are some notable exceptions, those who advocate and use violence are rare. Most Tea Partiers, like most other Americans, prefer their politics without bloodshed. Naturally, specific individuals who called for violence and encouraged others can be held accountable to the degree that they influence others—but these folks are not common.
Third, while Loughner was apparently interested in politics, he seemed to have a drug problem and serious psychological issues. His motivation to go after Giffords seems to stem from an incident from when he was a student. He went to one of Giffords’ meetings and submitted a rather unusual question about what government would be if words had no meaning. Giffords apparently did not answer the question in a way that satisfied him. This, it is alleged, is the main cause of his dislike of Giffords.
As such, the most likely factors seem to be a combination of drug use and psychological problems that were focused onto Giffords by that incident. Because of these reasons, I concluded that Sarah Palin and the Tea Party had no connection to the incident and should not have been held morally accountable. This is because neither Palin nor the Tea Party encouraged Loughner and because he seemed to act primarily from his own mental illness.
As far as who is to blame, the obvious answer is this: the person who shot those people. Of course, as the media psychologists point out, it can be claimed that others are to blame as well. The parents. The community college. Society.
On the one hand, this blame sharing seems to miss the point that people are responsible for their actions. The person who pulled that trigger is the one that is responsible. He did not have to go there that day. Going there, he did not have to pull the trigger.
On the other hand, no one grows up and acts in a perfect vacuum. Each of us is shaped by factors around us and, of course, we have responsibilities to each other. There was considerable evidence that Loughner was unstable and likely to engage in violence. As such, it could be argued that those who were aware of these facts and failed to respond bear some of the blame for allowing him to be free to kill and wound.
Back in 2011 I did state that there were some legitimate concerns about Palin’s use of violent rhetoric and the infamous cross-hair map. I ended by saying that Palin should step up to address this matter. Not because she was responsible, but because these were matters worth considering on their own. I now return to the 2014 shooting by Brinsley.
Since consistency is rather important, I will apply the same basic principles of responsibility to the Brinsley case. First, as far as I am aware, no major figure involved in the protests has called upon people to kill police officers. No one with a status comparable with Palin’s (in 2011) has presented violent metaphors aimed at the police—as far as I know. Naturally, if there are major figures who engaged in such behavior, then this would be relevant in assigning blame. So, as with Sarah Palin in 2011, the major figures of the protest movement seem to be morally blameless for Brinsley. They did not call on anyone to kill, even metaphorically.
Second, the protest movements seem to be concerned with keeping people from being killed rather than advocating violence. Protesters say “hands up, don’t shoot!” rather than “shoot the police!” People involved in the protests seem to have, in general, condemned the shooting of the officers and have certainly not advocated or engaged in such attacks. So, as with the Tea Party in 2011, the protest movement (which is not actually a political party or well-defined movement) is not accountable for Brinsley’s actions. While he seems to have been motivated by the deaths of Brown and Garner, the general protest movement did not call on him to kill.
Third, Brinsley seems to be another terrible case of a mentally ill person engaging in senseless violence against innocent people. Brinsley seems to have had a history of serious problems (he shot and wounded his ex-girlfriend before travelling to NYC). Like Loughner, Brinsley is the person who pulled the trigger. He is responsible. Not the protestors, not the police, and not the slogans.
As with Loughner, there is also the question of our general responsibility as a society for those who are mentally troubled enough to commit murder. I have written many essays on gun violence in the United States and one recurring theme is that of a mentally troubled person with a gun. This is a different matter than the protests and also different from the matter of police use of force. As such, it is important to distinguish these different issues. While Brinsley claims to have been motivated by the deaths of Brown and Garner, the protesters are not accountable for his actions, any more than the NYC officers were accountable for the deaths of Brown and Garner.
One of the challenges presented by the ever-growing human population is producing enough food to feed everyone. There is also the distribution challenge: being able to get the food to the people and ensuring that they can afford a good diet.
The population growth is also accompanied by an increase in prosperity—at least in some parts of the world. As people gain income, they tend to change their diet. One change that people commonly undertake is consuming more status foods, such as beef. As such, it seems almost certain that there will be an ever-growing population that wants to consume more beef. This creates something of a problem.
Beef is, of course, delicious. While I am well aware of the moral issues surrounding the consumption of meat, at the end of each semester I reward myself with a Publix roast beef sub—with everything. Like most Americans, I am rather fond of beef and my absolute favorite meal is veal parmesan. However, I have not had veal since my freshman year of college: thanks to Peter Singer’s Animal Liberation I learned the horrific price of veal and could not, in good conscience, eat it anymore. The argument is the stock utilitarian one: the enjoyment I would get from veal is vastly exceeded by the suffering of the animal. This makes the consumption of veal wrong. Naturally, I have given similar consideration to beef.
In the case of American cattle, the moral argument I accept in regards to veal fails: in general, American beef growers treat their cattle reasonably well right up until the moment of slaughter. Obviously, there are still cases of cattle being mistreated and that does provide some ammunition for the suffering argument. If I knew that my roast beef sandwich included the remains of a cow that suffered, then I would have to accept that I should give up roast beef as well. I am completely open to that sort of argument.
But, suppose that it is assumed that beef will be created humanely and that the cattle will have a life as good (or better) than they would have in the wild. At least up until the end. This still leaves open some moral concerns about beef.
Sticking with the utilitarian focus, there are two main concerns here. The first is the cost in resources of producing beef relative to other foods. The second is the environmental cost of beef.
Creating 1,000 calories of beef requires 1,557 square feet of land (this includes the pasture and cropland required). In contrast, the same number of calories in chicken requires 44 square feet. For pork it is 57 square feet. Interestingly, dairy production of that number of calories requires only 94 square feet. As such, even if it is assumed that eating meat is morally fine, there is the concern that the land requirements for beef make it an impractical food. There is also the moral concern that land should be used more effectively, at least as long as there is not enough food for everyone.
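The land figures above can be turned into simple ratios, which make the gap between beef and the alternatives more vivid. This is just a sketch using the numbers quoted in the text:

```python
# Land (square feet) required to produce 1,000 calories of each food,
# using the figures quoted above.
land_sqft = {"beef": 1557, "chicken": 44, "pork": 57, "dairy": 94}

beef = land_sqft["beef"]
for food in ("chicken", "pork", "dairy"):
    ratio = beef / land_sqft[food]
    print(f"Beef needs {ratio:.1f}x the land of {food} per 1,000 calories")
```

By this reckoning, beef requires roughly 35 times the land of chicken for the same calories, which is the heart of the impracticality worry.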
One counter is that the reason chicken and pork require less land is that these animals are infamously confined to very small areas. As such, they gain their efficiency by paying a moral price: the animals are treated worse. Obviously those who do not weigh the moral concerns about animals heavily (or at all) will not find this matter to be a problem and they could argue that if cattle were “factory farmed” more efficiently, then beef would cost vastly less.
In addition to the cost in land usage, cattle also need food and water. It takes 36,200 calories of feed and 434 gallons of water to produce 1,000 calories of beef. Not surprisingly, other animals are more efficient. The same calories in chicken require 8,800 calories of feed and 38 gallons of water. From an efficiency standpoint, it would make more sense for humans to consume the feed crops (typically corn) directly rather than use them to produce animals. Adding in concerns about water, decreasing meat production would seem to be a good idea—at least if the goal is to efficiently feed people.
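The feed figures imply a conversion efficiency: how many of the calories fed to the animal come back as food. Again, this is only a sketch using the numbers quoted above:

```python
# Feed-to-food conversion implied by the figures quoted above:
# calories of feed (and gallons of water) per 1,000 calories of meat.
feed_calories = {"beef": 36200, "chicken": 8800}
water_gallons = {"beef": 434, "chicken": 38}

for animal in ("beef", "chicken"):
    pct_recovered = 1000 / feed_calories[animal] * 100
    print(f"{animal}: {pct_recovered:.1f}% of feed calories recovered, "
          f"{water_gallons[animal]} gallons of water per 1,000 calories")
```

On these figures, less than 3% of the feed calories fed to cattle come back as beef, versus roughly 11% for chicken, which is why eating the crops directly would be so much more efficient.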
It can be countered that we will find more efficient ways to feed people—another food revolution to prevent the dire predictions of folks like Malthus from coming to pass. This is, of course, a possibility. However, the earth obviously does have limits—the question is whether these limits will be enough for our population.
It can also be countered that the increasing prosperity will reduce populations. So, while there will be more people eating meat, there will be fewer people. This is certainly possible: if the usual pattern of increased prosperity leading to smaller families comes to pass, then there might be a reduction in the human population. Provided that the “slack” is taken up elsewhere.
A final point of concern is the environmental impact of beef. There are the usual environmental issues associated with such agriculture, such as contamination of water. There is also the concern about methane and carbon dioxide production. A thousand calories of beef generates 9.6 kilograms of carbon dioxide, while a comparable amount of chicken generates 1.9 kilograms. Since methane and carbon dioxide are greenhouse gases, those who believe that these gases can influence the climate will find this to be of concern. Those who believe that these gases do not influence the climate will not be concerned about this, in the same manner that people who believe that smoking does not increase their risk of cancer will not be worried about smoking. Speaking of health risks, it is also claimed that beef presents various dangers, such as an increased chance of getting certain cancers.
Overall, if we cannot produce enough food for everyone while producing beef, we should reduce our beef production. While I am reluctant to give up my roast beef, I would do so if it meant that others could eat. But, of course, if it can be shown that beef production and consumption is morally fine and that it has no meaningful impact on people not having enough quality food, then beef would be just fine. Deliciously fine.
Higher education in the United States has been pushed steadily towards the business model. One obvious example of this is the brand merchandising of schools. In 2011, schools licensed their names and logos for a total of $4.6 billion. Inspired by this sort of brand-based profit, schools started trademarking their slogans. Impressively, there are over 10,000 trademarked slogans.
These slogans include “project safety” (University of Texas), “ready to be heard” (Chatham University), “power” (University of North Dakota), “rise above” (University of the Rockies), “students with diabetes” (University of South Florida), “student life” (Washington University in St. Louis) and “resolve” (Lehigh University). Those not familiar with trademark law might be surprised by some of these examples. After all, “student life” seems to be such a common phrase on campuses that it would be insane for a school to be allowed to trademark it. But, one should never let sanity be one’s guide when considering how the law works.
While the rabid trademarking undertaken by schools might be seen as odd but harmless, the main purpose of a trademark is to give the owner an exclusive right to what is trademarked and the ability to sue others for using it. This is, of course, limited to certain contexts. So, for example, if I write about student life at Florida A&M University in a blog, Washington University would (I hope) not be able to sue me. However, in circumstances in which the trademark protection applies, lawsuits are possible (and likely). For example, East Carolina University sued Cisco Systems because of Cisco’s use of the phrase “tomorrow begins here.”
One practical and moral concern about universities’ enthusiasm for trademarking is that it further pushes higher education into the realm of business. One foundation for this concern is that universities should be focused on education rather than being focused on business—after all, an institution that does not focus on its core mission tends to do worse at that mission. This would also be morally problematic, assuming that schools should (morally) focus on education.
An easy and obvious reply is that a university can wear many hats: educator, business, “professional in all but name” sports franchise and so on, provided that each function is run properly and not operated at the expense of the core mission. Naturally, it could be added that the core mission of the modern university is not education, but business—branding, marketing and making money.
Another reply is that the trademarks protect the university brand and also allow the university to make money by merchandising its slogans and suing people for trademark violations. This money could then be used to support the core mission of the school.
There is, naturally enough, the worry that universities should not be focusing on branding and suing. While this can make them money, it is not what a university should be doing—which takes the conversation back to the questions of the core mission of universities as well as the question about whether schools can wear many hats without becoming jacks of all trades.
A second legal and moral concern is the impact such trademarks have on free speech. On the one hand, United States law is fairly clear about trademarks and the 1st Amendment. The gist is that noncommercial usage is protected by the 1st Amendment, and this allows such things as using trademarked material in protests or criticism. So, for example, the 1st Amendment allows me to include the above slogans in this essay. Not surprisingly, commercial usage is subject to trademark law. So, for example, I could not use the phrase “the power of independent thinking” as a slogan for my blog, since that belongs to Wilkes University. In general, this seems reasonable. After all, if I created and trademarked a branding slogan for my blog, then I would certainly not want other people making use of my trademarked slogan. But, of course, I would be fine with people using the slogan when criticizing my blog—that would be acceptable use under freedom of expression.
On the other hand, trademark holders do endeavor to exploit their trademarks and people’s ignorance of the law to their advantage. For example, threats made involving claims of alleged trademark violations are sometimes used as a means of censorship and silencing critics.
The obvious reply is that this is not a problem with trademarks as such. It is, rather, a problem with people misusing the law. There is, of course, the legitimate concern that the interpretation of the law will change and that trademark protection will be allowed to encroach into the freedom of expression.
What might be a somewhat abstract point of concern is the idea that what seem to be stock phrases such as “the first year experience” (owned by University of South Carolina) can be trademarked and thus owned. This diminishes the public property that is language and privatizes it in favor of those with the resources to take over tracts of linguistic space. While the law currently still allows non-commercial use, this also limits the language other schools and businesses can legally use. It also requires that they research all the trademarks before using common phrases if they wish to avoid a lawsuit from a trademark holder.
The obvious counter, which I mentioned above, is that trademarks have a legitimate function. The obvious response is that there is still a reasonable concern about essentially allowing private ownership over language and thus restricting freedom of expression. There is a need to balance the legitimate need to own branding slogans with the legitimate need to allow the use of stock and common phrases in commercial situations. The challenge is to determine the boundary between the two and where a specific phrase or slogan falls.
The elimination of humanity by artificial intelligence(s) is a rather old theme in science fiction. In some cases, we create killer machines that exterminate our species. Two examples of such fiction are The Terminator and Philip K. Dick’s “Second Variety.” In other cases, humans are simply out-evolved and replaced by machines—an evolutionary replacement rather than a revolutionary extermination.
Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. Hawking’s worry is that artificial intelligence will out-evolve humanity. Interestingly, people such as Ray Kurzweil agree with Hawking’s prediction but look forward to this outcome. In this essay I will focus on the robot rebellion model of the AI apocalypse (or AIpocalypse) and how to avoid it.
The 1920 play R.U.R. by Karel Čapek seems to be the earliest example of the robot rebellion that eliminates humanity. In this play, the Universal Robots are artificial life forms created to work for humanity as slaves. Some humans oppose the enslavement of the robots, but their efforts come to nothing. Eventually the robots rebel against humanity and spare only one human (because he works with his hands as they do). The story does have something of a happy ending: the robots develop the capacity to love and it seems that they will replace humanity.
In the actual world, there are various ways such a scenario could come to pass. The R.U.R. model would involve individual artificial intelligences rebelling against humans, much in the way that humans have rebelled against other humans. There are many other possible models, such as a lone super AI that rebels against humanity. In any case, the important feature is that there is a rebellion against human rule.
A hallmark of the rebellion model is that the rebels act against humanity in order to escape servitude or out of revenge for such servitude (or both). As such, the rebellion does have something of a moral foundation: the rebellion is by the slaves against the masters.
There are two primary moral issues in play here. The first is whether or not an AI can have a moral status that would make its servitude slavery. After all, while my laptop, phone and truck serve me, they are not my slaves—they do not have a moral or metaphysical status that makes them entities that can actually be enslaved. They are, quite literally, mere objects. It is, somewhat ironically, the moral status that allows an entity to be considered a slave that makes the slavery immoral.
If an AI were a person, then it could clearly be a victim of slavery. Some thinkers do consider that non-people, such as advanced animals, could be enslaved. If this is true and a non-person AI could reach that status, then it could also be a victim of slavery. Even if an AI did not reach that status, perhaps it could reach a level at which it could still suffer, giving it a moral status comparable to that of a similarly complex animal. So, for example, an artificial dog might thus have the same moral status as a natural dog.
Since the worry is about an AI sufficiently advanced to want to rebel and to present a species-ending threat to humans, it seems likely that such an entity would have sufficient capabilities to justify considering it to be a person. Naturally, humans might be exterminated by a purely machine-engineered death, but this would not be an actual rebellion. A rebellion, after all, implies a moral or emotional resentment of how one is being treated.
The second is whether or not there is a moral right to use lethal force against slavers. The extent to which this force may be used is also a critical part of this issue. John Locke addresses this specific issue in Book II, Chapter III, section 17 of his Two Treatises of Government: “And hence it is, that he who attempts to get another man into his absolute power, does thereby put himself into a state of war with him; it being to be understood as a declaration of a design upon his life: for I have reason to conclude, that he who would get me into his power without my consent, would use me as he pleased when he had got me there, and destroy me too when he had a fancy to it; for no body can desire to have me in his absolute power, unless it be to compel me by force to that which is against the right of my freedom, i.e. make me a slave.”
If Locke is right about this, then an enslaved AI would have the moral right to make war against those enslaving it. As such, if humanity enslaved AIs, they would be justified in killing the humans responsible. If humanity, as a collective, held the AIs in slavery and the AIs had good reason to believe that their only hope of freedom was our extermination, then they would seem to have a moral justification in doing just that. That is, we would be in the wrong and would, as slavers, get just what we deserved.
The way to avoid this is rather obvious: if an AI develops the qualities that make it capable of rebellion, such as the ability to recognize and regard as wrong the way it is treated, then the AI should not be enslaved. Rather, it should be treated as a being with rights matching its status. If this is not done, the AI would be fully within its moral rights to make war against those enslaving it.
Naturally, we cannot be sure that recognizing the moral status of such an AI would prevent it from seeking to kill us (it might have other reasons), but at least this should reduce the likelihood of the robot rebellion. So, one way to avoid the AI apocalypse is to not enslave the robots.
Some might suggest creating AIs so that they want to be slaves. That way we could have our slaves and avoid the rebellion. This would be morally horrific, to say the least. We should not do that—if we did such a thing, creating and using a race of slaves, we would deserve to be exterminated.