One of the many annoying decision theory puzzles is Newcomb’s Paradox. The paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.
The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale, ranging from an exceptional chance of success to outright infallibility. The basis of the Predictor’s power also varies. In the science-fiction variants, the Predictor can be a psychic, a super alien, or a brain-scanning machine. In the fantasy versions, it is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.
Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: take just box B or take both boxes. The Predictor explains the rules of its game to the player: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. (Of course, this should probably be adjusted for inflation from the original paper.) If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player gets only the $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.
The following standard chart shows the possible results:
| Predicted choice | Actual choice | Payout |
|------------------|---------------|--------|
| A and B | A and B | $1,000 |
| A and B | B only | $0 |
| B only | A and B | $1,001,000 |
| B only | B only | $1,000,000 |
This is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take just B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrongly).
The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, if the player decides to take both boxes, she will get $1,000. If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B.
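The conflict between the two stock solutions can be made concrete with a quick expected-value calculation. The sketch below treats the Predictor’s accuracy as an adjustable probability `p` (my assumption for illustration, not Nozick’s stipulation) and shows that one-boxing has the higher expected payout whenever the Predictor is right more than about 50.05% of the time:

```python
def expected_payouts(p):
    """Expected payouts given probability p that the Predictor is correct.

    Payouts follow the standard chart: one-boxing yields $1,000,000 when
    predicted correctly and $0 otherwise; two-boxing yields $1,000 when
    predicted correctly and $1,001,000 otherwise.
    """
    one_box = p * 1_000_000 + (1 - p) * 0
    two_box = p * 1_000 + (1 - p) * 1_001_000
    return one_box, two_box

# With a 99% accurate Predictor, one-boxing dominates:
one_box, two_box = expected_payouts(0.99)
print(round(one_box), round(two_box))  # 990000 11000

# Break-even accuracy: solve p * 1,000,000 = p * 1,000 + (1 - p) * 1,001,000
break_even = 1_001_000 / 2_000_000
print(break_even)  # 0.5005
```

This is just the standard expected-utility framing of the second stock solution; the first stock solution (dominance reasoning) rejects the premise that the player’s choice is probabilistically relevant to what is already in box B.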
Gamers of the sort who play Pathfinder, D&D and other such role playing games know how to properly solve this paradox. The Predictor has at least $1,001,000 on hand (probably more, since it will apparently play the game with anyone) and is worth experience points (everything is worth XP). The description just specifies its predictive abilities for the game and no combat abilities are mentioned. So, the solution is to beat down the Predictor, loot it and divide up the money and experience points. It is kind of a jerk when it comes to this game, so there is not really much of a moral concern here.
It might be claimed that the Predictor could not be defeated because of its predictive powers. However, knowing what someone is going to do and being able to do something about it are two very different matters. This is nicely illustrated by the film Billy Jack:
[Billy Jack is surrounded by Posner's thugs]
Mr. Posner: You really think those Green Beret Karate tricks are gonna help you against all these boys?
Billy Jack: Well, it doesn’t look to me like I really have any choice now, does it?
Mr. Posner: [laughing] That’s right, you don’t.
Billy Jack: You know what I think I’m gonna do then? Just for the hell of it?
Mr. Posner: Tell me.
Billy Jack: I’m gonna take this right foot, and I’m gonna whop you on that side of your face…
[points to Posner's right cheek]
Billy Jack: …and you wanna know something? There’s not a damn thing you’re gonna be able to do about it.
Mr. Posner: Really?
Billy Jack: Really.
[kicks Posner's right cheek, sending him to the ground]
So, unless the Predictor also has exceptional combat abilities, the rational solution is the classic “shoot and loot” or “stab and grab.” Problem solved.
Having been in academics for quite some time, I have seen fads come, go and stick. A recent fad is the obsession with assessment. As with many such things, assessment arrived with various acronyms and buzz words. Those more cynical than I would say that all acronyms of administrative origin (AAO) amount to B.S. But I would not say such a thing. I do, however, have some concern with the obsession with assessment.
One obvious point of concern was succinctly put by a fellow philosopher: “you don’t fatten the pig by weighing it.” The criticism behind this homespun remark is that time spent on assessment is time taken away from the core function of educational institutions, namely education itself. At the K-12 level, the burden of assessment and evaluation has become quite onerous. At the higher education level, the burden is not as great, but considerable time is still spent on such matters.
One reply to this concern is that assessment is valuable and necessary: if the effectiveness (or ineffectiveness) of education is not assessed, then there would be no way of knowing what is working and what is not. The obvious counter to this is that educators did quite well in assessing their efforts before the rise of modern assessment and it has yet to be shown that these efforts have actually improved education.
Another obvious concern is that in addition to the time spent by faculty on assessment, a bureaucracy of assessment has been created. Some schools have entire offices devoted to assessment complete with staff and administrators. While only the hard-hearted would begrudge someone employment in these tough times, the toughness of the times should dictate that funding is spent on core functions rather than assessing core functions.
The reply to this is to argue that the assessment is more valuable than the alternative. That is, that funding an assessment office is more important to serving the core mission of the university than more faculty or lower tuition would be. This is, of course, something that would need to be proven.
Another common concern is that assessment is part of the micromanagement of public education being imposed by state legislatures (often by the very same people who speak loudly about getting government off peoples’ backs and protecting businesses from government regulation). This, some critics contend, is all part of a campaign to intentionally discredit and damage public education so as to allow the expansion of for-profit education.
The reply to this is that the state legislature has the right to insist that schools provide evidence that the (ever-decreasing) public money is being well spent. If the legislatures did show true concern for the quality of education and were devoted to public education, this reply would have merit.
Predating the current assessment fad is a much older concern with rankings. Recently I heard a piece on NPR about how Florida’s Board of Governors (the folks who run public education) is pushing Florida public universities to become top ranked schools. There are quite a few rankings, ranging from US News & World Report’s rankings to those of Kiplinger’s. Each of these has a different metric. For example, Kiplinger’s rankings are based on financial assessment. While it is certainly desirable to be well ranked, it is rather ironic that Florida’s public universities are being pushed to rise in the ranks at the same time that the state legislature and governor have consistently cut funding and proven generally hostile to public education. One unfortunate aspect of the ranking obsession is that Florida has adopted a performance based funding system in which the top schools get extra funding while the lower ranked schools get funding cut. Since the schools are competing with each other, some of the schools will end up lower ranked no matter how well they do—so some schools will get cuts, no matter what. This seems to be an odd approach: insisting on improvement while systematically making it harder and harder to improve.
A similar problem arises with assessment. A standard feature of assessment is that the results of the previous assessment must be applied to improve each academic program; that is, there is an assumption of perpetual improvement. Unfortunately, due to budget cuts, there is typically no money available for faculty salary increases. As such, faculty are supposed to be better each year but get paid less (since inflation and increases in the cost of living reduce the value of the salary). The system thus demands perpetual improvement of faculty and schools, but offers no incentives or rewards other than not getting fired or not being the school that gets the most cuts. Interestingly, the folks imposing this system are the same folks who tend to claim that taxation and government impositions kill the success of business. That is, if businesses have less money and are regulated too much by the state, then it will be bad for business. Apparently this view does not extend to education. But there might be an ironic hope: education is being “businessified” and perhaps once the transformation is complete, the universities will get the love showered on corporations.
Due to a variety of factors, such as reduced state support and ever-expanding administrations, the cost of college in the United States has increased dramatically. In Michigan, a few community colleges have addressed this problem in a way similar to that embraced by businesses: they are outsourcing education. As of this writing, six Michigan community colleges have contracted with EDUStaff—a company that handles recruiting and managing adjunct faculty.
It might be wondered how adding a middleman between the college and the employee would save money. The claim is that since EDUStaff rather than the colleges employs the adjuncts, the colleges save because they do not have to contribute to state pensions for these employees. Michigan Central College claims to have saved $250,000 in a single year.
One concern with this approach is that it is being driven by economic values rather than educational values—that is, the goal is to save money rather than to serve educational goals. If the core function of a college is to educate, then that should be the main focus, though practical economic concerns obviously do matter.
A second concern is that this saving mechanism is being applied to faculty and not staff and administrators. If this approach were a good idea when applied to the core personnel of a college, then it would seem to be an even better idea when applied to the administration and staff. The logical end result would, of course, be a completely outsourced college—but this seems rather absurd.
A third concern is that while avoiding paying pensions results in short-term savings, the impact on the adjuncts should be considered. This approach will certainly make working for EDUStaff less desirable. There is also the fact that the adjuncts will not be building retirements, which means they will need to draw more heavily on the state (or keep working past when they should retire). As such, the savings for the college come at the cost of the adjuncts. This, of course, leads to a broader issue, namely whether or not employment should include retirement benefits. I suspect that those who came up with this plan have very good retirement plans, yet they are clearly quite willing to deny others that benefit. But if they truly wish to save money, they should give up their own retirements as well: why should only faculty do without?
Florida State University, which is across the tracks from my own Florida A&M University, has had some serious problems with sexual violence involving students. One response to this has been the creation of a student driven campaign to address the problem with a brand and marketing:
Students developed the “kNOw More” brand to highlight the dual message of Florida State’s no tolerance stance on sexual violence and education efforts focused on prevention. Students also are leading marketing efforts for a campaign, “Ask. Respect. Don’t Expect,” aimed at raising awareness among their peers about obtaining clear consent for sexual activity and bystander intervention to prevent sexual assault or misconduct.
As an ethical person and a university professor, I certainly support efforts to reduce sexual violence on campuses (and anywhere). However, I found the use of the terms “brand” and “marketing efforts” somewhat disconcerting.
The main reason for this is that I associate the term “brand” with things like sodas, snack chips and smart phones rather than with efforts to combat sexual violence in the context of higher education. This sort of association creates, as I see it, some concerns.
The first is that the use of “brand” and “marketing efforts” in the context of sexual violence has the potential to trivialize the matter. Words, as the feminists rightly say, do matter. Speaking in terms of brands and marketing efforts makes it sound like Florida State sees the matter as on par with a new brand of FSU college products that will be promoted by marketing efforts. It would not seem too much to expect that the matter would be treated with more respect in terms of the language used.
The second concern ties back to a piece I wrote in 2011, “The University as a Business.” That essay was written in response to the reaction of Florida A&M University’s president to the tragic death of Florida A&M University student Robert Champion in a suspected hazing incident. The president, who has since resigned, wrote that “preserving the image and the FAMU brand is of paramount importance to me.” The general problem is that thinking of higher education in business terms is a damaging mistake that harms the true mission of higher education, namely education. The specific problem is that addressing terrible things like killing and sexual violence in terms of brands and marketing is morally inappropriate. The brand and marketing view involves the idea that moral problems are to be addressed in the same manner one would address a sales decline in chips, which suggests that the problems are mainly a matter of public relations: the creation of an appearance of action rather than effective action.
One obvious reply to my concerns is that terms such as “brand” and “marketing effort” are now the correct terms to use. That is, they are acceptable because of common use and I am thus reading too much into the matter.
On the one hand, that is a reasonable reply—I might be behind the times in terms of the terms. On the other hand, the casual acceptance of business terms in such a context would seem to support my view.
Another reply to my concerns is that the branding and marketing are aimed at addressing the problem of sexual violence and hence my criticism of the terminology is off the mark. This does have some appeal. After all, as people so often say, if the branding and marketing has some positive impact, then that would be good. However, this does not show that my concerns about the terminology and apparent underlying world-view are mistaken.
Two major problems faced by the United States are the war on drugs and the problems of higher education. I will make an immodest proposal intended to address both problems.
In the case of higher education, one major problem is that the cost of education is exceeding the resources of an ever-growing number of Americans. One reason for this is that the decisions of America’s political and economic elites damaged the economy and contributed to the unrelenting extermination of the middle class. Another reason is a changing view of higher education: it has been cast as a private (rather than public) good and is seen by many of the elites as a realm to exploited for profit. Because of this, funding to public schools has been reduced and funding has been diverted from public schools to costly and ineffective for-profit schools. Yet another reason is that public universities have an ever-expanding administrative burden. Even the darling of academics, STEM, has seen significant cuts in support and public funding.
The war on drugs has imposed a massive cost on the United States. First, there is the cost of the resources devoted to policing citizens, trying them and incarcerating them for drug crimes. Second, there is the cost of the social and personal damage done to individuals and communities. Despite these huge costs, the war on drugs is being lost—mainly because “we have met the enemy and he is us.”
Fortunately, I have a solution to both problems. After speaking with an engineering student about Florida State’s various programs aimed at creating businesses, I heard a piece on NPR about the financial woes of schools and how faculty and staff were being pushed to be fund-raisers for their schools. This got me thinking about ways universities could generate funding, and I remembered a running joke from years ago. Back when universities started to get into “businessification” mode, I joked with a running friend (hence a running joke) that we faculty members should become drug lords to fund our research and classes. While I do not think that I should actually become a drug lord, I propose that public universities in Florida (and elsewhere) get into the drug business.
To be specific, Florida should begin by legalizing marijuana and pass a general law allowing recreational drugs that can be shown to be as safe as tobacco and alcohol (that sets the bar nicely low). The main restriction will be that the drugs can only be produced and sold by public universities. All the profits will go directly to the universities, to be used as decided by boards composed of students and faculty.
To implement this plan, faculty and students will be actively involved. Business faculty and students will develop the models, plans and proposals. Design and marketing students and faculty will handle those aspects. Faculty and students in chemistry, biology and medicine will develop the drugs and endeavor to make them safer. Faculty and students in agriculture will see to the growing of the organic crops, starting with marijuana. Engineering students and faculty will develop hydroponics and other technology.
Once the marijuana and other drugs are available, the universities will sell the products to the public with all profits being used to fund the educational and research aspects of the universities. Since the schools are public universities, the drugs will be tax-free—there is no sense in incurring the extra cost of collecting taxes when the money is going to the schools already. Since schools already have brand marketing, this can be easily tied in. For example, Florida State can sell Seminole Gold and Seminole Garnet marijuana, while my own Florida A&M University can have Rattler Green and Rattler Orange.
One practical objection is that the operation might not be profitable. While this is obviously a reasonable concern, the drug trade seems to be massively profitable. Also, by making such drugs legal, the cost of the war on drugs will drop dramatically, thus freeing up resources for education and reducing the harms done to individuals and the community. So, I am not too worried about this.
One health objection is that drugs are unhealthy. The easy reply is that while this is true, we already tolerate very unhealthy products such as tobacco, alcohol, cars and firearms. If these are tolerable, then the drugs sold by the schools (which must be at least as safe as tobacco and alcohol) would also be tolerable. The war on drugs is also very unhealthy for individuals and society—so ending at least part of the war would be good for public health.
One moral objection is that drugs are immoral. There are three easy replies. The first is that the drugs in question are no more immoral than alcohol and tobacco. If these can be morally tolerated, then so can the university drugs. Second, there is the consequentialist argument: if drugs are going to be used anyway by Americans, it is better that the money go to education rather than ending up in the coffers of criminals, gangs, terrorists and the prison-industrial complex. Third, there is also the consequentialist argument that university produced drugs will be safer and of higher quality than drugs produced by drug lords, gangs, terrorists and criminal dealers. Given the good consequences of legalizing university-manufactured drugs, this plan is clearly morally commendable.
Given the above arguments, having universities as legal drug sellers would clearly help solve two of America’s most serious problems: the high cost of education and the higher cost of the ineffective and destructive war on drugs. As my contribution to the brand, I offer the slogan “get high for higher ed.”
ISIS (or ISIL) got America’s attention and now the war of rhetoric has begun in earnest. While the Republicans seem generally pleased that we are saddling up again, they have raised some criticisms of President Obama’s strategy. Interestingly, many of these criticisms have been aimed at Obama’s word choices.
I recently heard an interview with Senator Marco Rubio on NPR. Rubio’s main criticism seemed to be that Obama was unwilling to commit to destroying ISIS completely. The interviewer pointed out that such groups tend to reform or create spin-off groups rather than be destroyed. When the interviewer asked him if that goal was realistic or not, Rubio responded by saying that it could be done and gave an example of how the group that became ISIS had been destroyed previously. The interviewer politely noted that Rubio had actually supported his (the interviewer’s) point, but let Rubio ignore his own example and switch quickly to another issue.
As a general rule, it seems difficult to bomb such groups out of existence, mainly because the groups are defined by ideas and killing old members tends merely to attract new ones. Obviously, this method could work in principle: with enough killing, a group would run out of possible members. However, the history of radicalism and America’s attempts to kill its way out of problems show that destroying such a group by bombing is unrealistic. After all, we are still fighting Al Qaeda, and ISIS can plausibly be seen as a new brand of Al Qaeda.
Another common criticism of Obama’s words is that he did not say that he would do whatever it takes to destroy ISIS. He merely said he would do what it takes to do so. On the one hand, this could be seen as a petty semantic point, a mere whining about words. On the other hand, this could be taken as a more substantial point. After struggling to end the Afghanistan and Iraq wars that he inherited, Obama has been reluctant to get the United States into yet another costly, protracted and likely futile ground war in the Middle East. As such, when he has acted, he has done so with limited goals and minimal engagement. Interestingly, the results have been somewhat similar: we dumped billions into Iraq and ended up with a chaotic mess. We dumped far less into Libya and ended up with a chaotic mess. I suppose that it is better to get a mess on the cheap than for a high price.
Obama, I think, is wise to keep American involvement limited. The hawks crying for war seem to have amnesia regarding our last few adventures since Viet Nam. Unfortunately, escalating involvement (trying to do whatever it takes) has never paid off. It seems unlikely that this time will be the charm.
The obvious reply is that we have to do something, we cannot just let ISIS behead Americans and establish a state. I agree. My concern is the obvious one: doing something is not a good strategy and neither is doing whatever it takes. We should be honest and admit that we have not gotten it right in the past and that doing the same damn thing will not result in different results.
I am not going to tell McCain or Cheney to shut up; they have every right to express their views. However, they have no credibility left. So, they should talk, but it would be unwise to listen.
My previous essays on alignments have focused on the evil ones (lawful evil, neutral evil and chaotic evil). Patrick Lin requested this essay. He professes to be a devotee of Neutral Evil to such a degree that he regards being lumped in with Ayn Rand as an insult. Presumably because he thinks she was too soft on the good.
In the Pathfinder version of the game, neutral good is characterized as follows:
A neutral good character is good, but not shackled by order. He sees good where he can, but knows evil can exist even in the most ordered place.
A neutral good character does anything he can, and works with anyone he can, for the greater good. Such a character is devoted to being good, and works in any way he can to achieve it. He may forgive an evil person if he thinks that person has reformed, and he believes that in everyone there is a little bit of good.
In a fantasy campaign realm, the player characters typically encounter neutral good types as allies who render aid and assistance. Even evil player characters are quite willing to accept the assistance of the neutral good, knowing that neutral good types are more likely to try to persuade them to the side of good than to smite them with righteous fury. Neutral good creatures are not very common in most fantasy worlds; good types tend to polarize towards law and chaos.
Not surprisingly, neutral good types are also not very common in the real world. A neutral good person has no special commitment to order or lack of order—what matters is the extent to which a specific order or lack of order contributes to the greater good. For those devoted to the preservation of order, or its destruction, this can be rather frustrating.
While the neutral evil person embraces the moral theory of ethical egoism (that each person should act solely in her self-interest), the neutral good person embraces altruism—the moral view that each person should act in the interest of others. In more informal terms, the neutral good person is not selfish. It is not uncommon for the neutral good position to be portrayed as stupidly altruistic. This stupid altruism is usually cast in terms of the altruist sacrificing everything for the sake of others or being willing to help anyone, regardless of who the person is or what she might be doing. While a neutral good person is willing to sacrifice for others and willing to help people, being neutral good does not require a person to be unwise or stupid. So, a person can be neutral good and still take into account her own needs. After all, the neutral good person considers the interests of everyone and she is part of that everyone. A person can also be selective in her assistance and still be neutral good. For example, helping an evil person do evil things would not be a good thing and hence a neutral good person would not be obligated to help—and would probably oppose the evil person.
Since a neutral good person works for the greater good, the moral theory of utilitarianism tends to fit this alignment. For the utilitarian, actions are good to the degree that they promote utility (what is of value) and bad to the degree that they do the opposite. Classic utilitarianism (that put forth by J.S. Mill) takes happiness to be good and actions are assessed in terms of the extent to which they create happiness for humans and, as far as the nature of things permit, sentient beings. Put in bumper sticker terms, both the utilitarian and the neutral good advocate the greatest good for the greatest number.
This commitment to the greater good can present some potential problems. For the utilitarian, one classic problem is that what seems rather bad can have great utility. For example, Ursula K. Le Guin’s classic short story “The Ones Who Walk Away from Omelas” puts into literary form the question raised by William James:
Or if the hypothesis were offered us of a world in which Messrs. Fourier’s and Bellamy’s and Morris’s utopias should all be outdone, and millions kept permanently happy on the one simple condition that a certain lost soul on the far-off edge of things should lead a life of lonely torture, what except a specifical and independent sort of emotion can it be which would make us immediately feel, even though an impulse arose within us to clutch at the happiness so offered, how hideous a thing would be its enjoyment when deliberately accepted as the fruit of such a bargain?
In Le Guin’s tale, the splendor, health and happiness that is the land of Omelas depends on the suffering of a single person locked away in a dungeon, cut off from all kindness. The inhabitants of Omelas know full well the price they pay and some, upon learning of the person, walk away. Hence the title.
For the utilitarian, this scenario would seem to be morally correct: a small disutility on the part of the person leads to a vast amount of utility. Or, in terms of goodness, the greater good seems to be well served.
Because the suffering of one person creates such an overabundance of goodness for others, a neutral good character might tolerate the situation. After all, benefiting some almost always comes at the cost of denying or even harming others. It is, however, also reasonable to consider that a neutral good person would find the situation morally unacceptable. Such a person might not free the sufferer because doing so would harm so many other people, but she might elect to walk away.
A chaotic good type, who is committed to liberty and freedom, would certainly oppose the imprisonment of the innocent person—even for the greater good. A lawful good type might face the same challenge as the neutral good type: the order and well being of Omelas rests on the suffering of one person and this could be seen as an heroic sacrifice on the part of the sufferer. Lawful evil types would probably be fine with the scenario, although they would have some issues with the otherwise benevolent nature of Omelas. Truly subtle lawful evil types might delight in the situation and regard it as a magnificent case of self-delusion in which people think they are selecting the greater good but are merely choosing evil.
Neutral evil types would also be fine with it—provided that it was someone else in the dungeon. Chaotic evil types would not care about the sufferer, but would certainly seek to destroy Omelas. They might, ironically, try to do so by rescuing the sufferer and seeing to it that he is treated with kindness and compassion (thus breaking the conditions of Omelas’ exalted state).
In my previous essay, I considered various stock arguments in favor of the claim that we have obligations to people we do not know. In this essay I will consider a rather concrete matter of obligation, namely that of hunger in the United States of America.
The United States is known as the wealthiest nation on the planet and also as a country facing an obesity epidemic. As such, it probably seems rather odd to claim that America faces a serious problem with hunger. Sadly, this is the case, and the matter was featured in Tracie McMillan’s “The New Face of Hunger” in the August 2014 issue of National Geographic. Out of a total population of 313.9 million people, 48 million Americans are food insecure, which is a contemporary term for the hungry. In terms of demographics, over half of the food insecure are white and over half are people who live outside of cities. 72% of recipients are children, senior citizens or the disabled. Two-thirds of families on food stamps have at least one employed adult. The reason these employed adults need assistance is declining wages: people can work multiple jobs and still not earn enough to buy adequate food. These facts run counter to the usual stereotypes often exploited by politicians.
The United States does have a program to address hunger—what was once called food stamps is now called SNAP (Supplemental Nutrition Assistance Program). While the program paid out $75 billion to about 48 million people in 2013, the average recipient received $133.07 a month (under $1.50 per meal). On average, SNAP recipients run out of money after three weeks and then turn to charity, such as food pantries and other assistance for the hungry. Of the 48 million recipients, 17.6 million lack the resources to provide for even their basic food needs.
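The cited figures hold up to simple arithmetic. A quick sanity check (the 30-day month and three-meals-a-day breakdown are my own illustrative assumptions, not from the article):

```python
# Sanity check of the SNAP figures cited above.
total_paid = 75e9   # total SNAP payout in 2013, in dollars
recipients = 48e6   # approximate number of recipients

# Average monthly benefit implied by dividing the total payout
# across all recipients over twelve months.
implied_monthly = total_paid / recipients / 12
print(f"Implied monthly benefit: ${implied_monthly:.2f}")  # roughly $130

# The cited average of $133.07 a month, spread over a 30-day month
# at three meals a day (assumptions for illustration).
per_meal = 133.07 / 30 / 3
print(f"Per meal: ${per_meal:.2f}")  # under $1.50
```

The implied monthly figure of about $130 is close to the cited average of $133.07, and the per-meal figure does indeed come out under $1.50.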
The federal government also provides an indirect means of providing food—taxpayer money subsidizes the production of certain crops. Corn gets the lion’s share of subsidies and is distantly followed by wheat and soybeans. Rice, sorghum, peanuts, barley and sunflowers also receive some subsidies while the only subsidized fruit is the apple. Because of the subsidies, food products that include or involve corn, wheat or soybeans tend to be the cheapest. As such, it is not surprising that low-income people get most of their calories from such foods. Examples include sodas, energy drinks, sports drinks, chicken, grain-based desserts, tacos and pizza. These foods tend to be high calorie and low nutrition foods.
Also impacting the diet of low income people is the existence of food deserts: areas that lack supermarkets but have fast food restaurants and small markets (like convenience stores). A surprising number of Americans live in these food deserts and do not own a car that would allow them to drive to buy healthier (and cheaper) food. For example, 43,000 people in Houston, Texas lack a car and live over half a mile from a grocery store. The food sold at these places tends to be more expensive than the food available at a grocery store and tends to be high-calorie, low-nutrient fare.
These two factors help explain the seeming paradox of an obesity epidemic among hungry people: people have easier access to high calorie foods that have low nutritional value. Hence, people tend to be overweight while also being malnourished. Now that the nature of the problem has been discussed, I now turn to the matter of obligations to others.
On the face of it, the main issue regarding obligations to the hungry would seem to focus on whether or not there is an obligation to provide people with food. This can be broken down into two sub-categories. First, whether or not there is a collective obligation to provide hungry citizens with food via the machinery of the state (in this case, SNAP). Second, whether or not there is an obligation on the part of better-off citizens to provide food to their hungry fellow citizens.
Arguing that the state has such an obligation is fairly straightforward. A basic obligation of the state is to provide for the good of the people and to protect them from harm. While the traditional focus is on the state providing military and police forces, this would certainly seem to extend to protecting citizens from starving.
A utilitarian argument can also be advanced in favor of this obligation: helping to feed millions of citizens creates more utility than disutility. Part of this is the obvious fact that people are happier when they have food to eat. Part of this is the less obvious fact that when people get hungry enough, open rebellion seems better than starving to death—so feeding the poor helps maintain social stability.
One stock objection against this view is to contend that providing such support creates a culture of dependence that encourages people to stay poor. The obvious counter to this is that, as noted above, those receiving the aid are mostly seniors, the disabled and children—people who should not be expected to labor to survive. Also, as noted above, two thirds of the families receiving SNAP have at least one working adult. People are not on SNAP because they turn down opportunities—they are on SNAP because of the lack of opportunities.
The matter becomes rather more controversial when the issue switches to whether or not better off individuals are obligated to assist their fellow citizens. This, of course, means apart from paying taxes that help fund SNAP. Such assistance might involve donating money, time or food.
Intuitively, people tend to think that assisting others in this way is a nice thing to do and worthy of praise. However, people also tend to think that there is no obligation to do this and that someone who does not assist others in this way is not a bad person. This does have some appeal—after all, being bad is typically seen as an active thing rather than merely not doing good things.
Turning back to the general arguments for obligations to others, there are religious injunctions to feed the hungry (which explains why American churches are typically on the front line in the war against hunger), and it is easy to reverse the situation: if I were hungry, I would want my fellow citizens to help me. As such, I should help them when I am well off.
The utilitarian argument also applies here: a person who gives a little to help the hungry will incur a small cost (but might gain in happiness) but it will yield greater happiness on the part of the recipients who now have something to eat. As such, the utilitarian argument would seem to nicely ground this obligation. Of course, there is the stock objection about building dependence.
Rational self-interest would also seem to provide a reason to provide such aid—there are plenty of selfish reasons to do so, not the least of which is gaining a good reputation and helping to keep the hungry from revolting.
The debt argument might work here as well—if a person has benefited from the assistance of others, then she would be obligated to repay that debt. However, a person could contend that, as long as he has not received food from others when hungry, he owes nothing.
The argument from virtue obviously applies here: the virtue of generosity obligates a person to give to others in need. This, and the religious injunction, would seem to be the truest forms of actual obligation—as opposed to merely doing it from self-interest or for utility.
Digging deeper, there is also another issue. As noted above, people are hungry primarily because they are not earning enough to purchase adequate food. One reason for this is that wages have consistently declined for most Americans, while the profits of businesses have steadily increased. As such, the United States is the wealthiest country in the world, yet has many very poor people. This raises the moral issue of whether or not employers are obligated to pay a living wage—a wage that would enable a person to purchase adequate food without requiring the assistance of the state or others.
Businesses obviously have a strong self-interest in not doing so—lower wages mean greater profits and shifting the cost to other people (taxpayers and those who contribute to food pantries) means that their workers survive despite the lack of a living wage. However, there is still the moral question of whether or not they have an obligation to provide such a living wage.
The religious injunctions would seem to apply to employers that accept these specific faiths—and companies that wish to claim they are religious should be obligated to act the part. However, secular companies can easily claim exemption.
Reversing the situation would also apply: presumably those running businesses would not want to be so poorly paid. Of course, they would probably claim that as job creators there is a relevant difference.
The utilitarian argument does involve some complexities. After all, there can be very good utilitarian arguments for allowing some people to suffer so as to produce greater utility for others—so a case could be made that the utility generated outweighs the disutility of the low pay. However, the opposite sort of argument can also be made.
The debt argument would also apply. If corporations are people or at least are fictions that are run by people, then they would have a debt to the others that make civilization possible. As such, they should pay back this debt, perhaps in the form of decent wages.
The virtues of fairness and generosity would seem to obligate employers to pay employees fairly and this should be a living wage, at least in many cases. If corporations are people, then they should surely be held to the same obligations as actual people.
Thus, it would seem that there are good reasons to accept that we are obligated to help others.
One of the classic moral problems is the issue of whether or not we have moral obligations to people we do not know. If we do have such obligations, then there are also questions about the foundation, nature and extent of these obligations. If we do not have such obligations, then there is the obvious question about why there are no such obligations. I will start by considering some stock arguments regarding our obligations to others.
One approach to the matter of moral obligations to others is to ground them on religion. This requires two main steps. The first is establishing that the religion imposes such obligations. The second is making the transition from the realm of religion to the domain of ethics.
Many religions do impose such obligations on their followers. For example, John 15:12 conveys God’s command: “This is my commandment, That you love one another, as I have loved you.” If love involves obligations (which it seems to), then this would certainly seem to place us under these obligations. Other faiths also include injunctions to assist others.
In terms of transitioning from religion to ethics, one easy way is to appeal to divine command theory—the moral theory that what God commands is right because He commands it. This does raise the classic Euthyphro problem: is something good because God commands it, or is it commanded because it is good? If the former, goodness seems arbitrary. If the latter, then morality would be independent of God and divine command theory would be false.
Using religion as the basis for moral obligation is also problematic because doing so would require proving that the religion is correct—this would be no easy task. There is also the practical problem that people differ in their faiths and this would make a universal grounding for moral obligations difficult.
Another approach is to argue for moral obligations by using the moral method of reversing the situation. This method is based on the Golden Rule (“do unto others as you would have them do unto you”) and the basic idea is that consistency requires that a person treat others as she would wish to be treated.
To make the method work, a person would need to want others to act as if they had obligations to her and this would thus obligate the person to act as if she had obligations to them. For example, if I would want someone to help me if I were struck by a car and bleeding out in the street, then consistency would require that I accept the same obligation on my part. That is, if I accept that I should be helped, then consistency requires that I must accept I should help others.
This approach is somewhat like that taken by Immanuel Kant. He argues that because a person necessarily regards herself as an end (and not just a means to an end), then she must also regard others as ends and not merely as means. He endeavors to use this to argue in favor of various obligations and duties, such as helping others in need.
There are, unfortunately, at least two counters to this sort of approach. The first is that it is easy enough to imagine a person who is willing to forgo the assistance of others and as such can consistently refuse to accept obligations to others. So, for example, a person might be willing to starve rather than accept assistance from other people. While such people might seem a bit crazy, if they are sincere then they cannot be accused of inconsistency.
The second is that a person can argue that there is a relevant difference between himself and others that would justify their obligations to him while freeing him from obligations to them. For example, a person of a high social or economic class might assert that her status obligates people of lesser classes while freeing her from any obligations to them. Naturally, the person must provide reasons in support of this alleged relevant difference.
A third approach is to present a utilitarian argument. For a utilitarian, like John Stuart Mill, morality is assessed in terms of consequences: the correct action is the one that creates the greatest utility (typically happiness) for the greatest number. A utilitarian argument for obligations to people we do not know would be rather straightforward. The first step would be to estimate the utility generated by accepting a specific obligation to people we do not know, such as rendering aid to an intoxicated person who is about to become the victim of sexual assault. The second step is to estimate the disutility generated by imposing that specific obligation. The third step is to weigh the utility against the disutility. If the utility is greater, then such an obligation should be imposed. If the disutility is greater, then it should not.
This approach, obviously enough, rests on the acceptance of utilitarianism. There are numerous arguments against this moral theory and these can be employed against attempts to ground obligations on utility. Even for those who accept utilitarianism, there is the open possibility that there will always be greater utility in not imposing obligations, thus undermining the claim that we have obligations to others.
A fourth approach is to consider the matter in terms of rational self-interest and operate from the assumption that people should act in their self-interest. In terms of a moral theory, this would be ethical egoism: the moral theory that a person should act in her self-interest rather than acting in an altruistic manner.
While accepting that others have obligations to me would certainly be in my self-interest, it initially appears that accepting obligations to others would be contrary to my self-interest. That is, I would be best served if others did unto me as I would like to be done unto, but I was free to do unto them as I wished. If I could get away with this sort of thing, it would be ideal (assuming that I am selfish). However, as a matter of fact people tend to notice and respond negatively to a lack of reciprocation. So, if having others accept that they have some obligations to me were in my self-interest, then it would seem that it would be in my self-interest to pay the price for such obligations by accepting obligations to them.
For those who like evolutionary just-so stories in the context of providing foundations for ethics, the tale is easy to tell: those who accept obligations to others would be more successful than those who do not.
The stock counter to the self-interest argument is the problem of Glaucon’s unjust man and Hume’s sensible knave. While it certainly seems rational to accept obligations to others in return for getting them to accept similar obligations, it seems preferable to exploit their acceptance of obligations while avoiding one’s supposed obligations to others whenever possible. Assuming that a person should act in accord with self-interest, then this is what a person should do.
It can be argued that this approach would be self-defeating: if people exploited others without reciprocation, the system of obligations would eventually fall apart. As such, each person has an interest in ensuring that others hold to their obligations. Humans do, in fact, seem to act this way—those who fail in their obligations often get a bad reputation and are distrusted. From a purely practical standpoint, acting as if one has obligations to others would thus seem to be in a person’s self-interest because the benefits would generally outweigh the costs.
The counter to this is that each person still has an interest in avoiding the cost of fulfilling obligations and there are various practical ways to do this by the use of deceit, power and such. As such, a classic moral question arises once again: why act on your alleged obligations if you can get away with not doing so? Aside from the practical reply given above, there seems to be no answer from self-interest.
A fifth option is to look at obligations to others as a matter of debts. A person is born into an established human civilization built on thousands of years of human effort. Since each person arrives as a helpless infant, each person’s survival is dependent on others. As the person grows up, she also depends on the efforts of countless other people she does not know. These include soldiers that defend her society, the people who maintain the infrastructure, firefighters who keep fire from sweeping away the town or city, the taxpayers who pay for all this, and so on for all the many others who make human civilization possible. As such, each member of civilization owes a considerable debt to those who have come before and those who are here now.
If debt imposes an obligation, then each person who did not arise ex nihilo owes a debt to those who have made and continue to make her survival and existence in society possible. At the very least, the person is obligated to make contributions to continue human civilization as a repayment to these others.
One objection to this is for a person to claim that she owes no such debt because her special status obligates others to provide all this for her with nothing owed in return. The obvious challenge is for a person to prove such an exalted status.
Another objection is for a person to claim that all this is a gift that requires no repayment on the part of anyone and hence does not impose any obligation. The challenge is, of course, to prove this implausible claim.
A final option I will consider is that offered by virtue theory. Virtue theory, famously presented by thinkers like Aristotle and Confucius, holds that people should develop their virtues. These classic virtues include generosity, loyalty and other virtues that involve obligations and duties to others. Confucius explicitly argued in favor of duties and obligations as being key components of virtues.
In terms of why a person should have such virtues and accept such obligations, the standard answer is that being virtuous will make a person happy.
Virtue theory is not without its detractors, and criticisms of the theory can be employed to undercut its role in arguing that we have obligations to people we do not know.
In 2011, while photographer David Slater was in Indonesia, his camera was grabbed by a macaque. While monkey shines are nothing new, this monkey took hundreds of shots, including some selfies that went viral on the internet. As such incidents often do, this one resulted in a legal controversy over the copyright status of the photos. The United States Copyright Office recently ruled that “Works produced by nature, animals or plants” or “purportedly created by divine or supernatural beings” cannot be copyrighted. While this addresses the legal issue, it does not address the philosophical issue raised by this incident.
From a philosophical perspective, the general issue is whether a non-human animal has moral ownership rights over its artistic works. This breaks down into the two obvious sub-issues. The first is whether or not a non-human animal has a moral status that can ground ownership rights. The second is whether or not a non-human has the capability to create a work of art. These issues have often been the subject of philosophical discussion, but it is certainly worth considering them again.
One approach to the issue of ownership rights is to note that non-human entities are taken to possess ownership rights. To be specific, corporations are taken as having ownership rights—they can and do own copyrights. If a legal fiction like a corporation can be taken to have ownership rights, there seems to be no principled way to deny the same rights to animals. After all, animals have a significantly better claim to rights since they are actual entities with qualities analogous to human persons.
The easy and obvious reply to this approach is that corporations are legal fictions (and legally fictional persons in the United States) and, as such, this does not help with the philosophical issue of whether or not animals can have ownership rights. Legally, the matter is simple: just like corporations, animals have whatever legal rights the law provides. So, if the Supreme Court ruled that animals are people and can own property, then that would be the law—but the philosophical issue would remain unresolved. That said, if corporations should be regarded as having ownership rights (and as people), then it hardly seems unreasonable to accept that animals also have ownership rights (and are people).
In order to determine whether animals have ownership rights or not, it would be necessary to determine the foundation of these rights. Locke famously bases property rights on the claim that each person owns her own body (well, God does…but He is cool about it) and hence each person owns her own labor. This labor is mixed with common property and thus makes what it is mixed with the property of the laborer. If animals have this sort of self-ownership, then they would have the same ownership rights as humans—whatever an animal mixed her labor with would be hers. The stock counter for this is that animals are not owners—they are objects to be owned. It is worth noting that people have long said the same thing about other people.
Higher animals like dogs and primates also seem to grasp the basics of ownership: they distinguish between what is their property and what is not. To use a concrete example, my husky clearly grasps the distinction between her toys and similar objects that belong to others. As such, there seems to be some basis to the claim that animals regard themselves as possessing ownership rights.
The obvious objection is that animals have, at best, an extremely limited understanding of property and this could simply be attributed to possessiveness or territoriality. The obvious reply to this is that ownership does not seem to require an understanding of property rights—corporations (which have no minds and hence have no understanding) and humans who are dumb as posts are still regarded as having ownership rights.
While the debate over ownership could go on endlessly, animals seem to have as good a claim to ownership rights as humans do, at least in terms of the foundation of such an alleged right. Roughly put, if humans have ownership rights, then animals would seem to qualify as well. Thus, it would seem that animals do have ownership rights.
The next issue is whether or not an animal can create an artistic work. Addressing this properly would require an adequate definition of “art” that would enable one to distinguish between art and non-art. While there have been many attempts to provide just such a definition, they have all proven to be inadequate. Since such a fine definition is lacking, a rough and ready approach must suffice.
In this case, the rough and ready approach is to begin by considering cases in which it is intuitively appealing to accept that a human is creating a work of art. The next step is to use an argument by analogy to determine whether or not an animal could do the same sort of thing.
One clear case is that of painting: a human intentionally applies paints to a surface based on the contents of her intentional states, and the image typically resembles something internal (a feeling or thought) or external (a person, landscape, etc.). While animals do apply paint to surfaces, their lack of language makes it rather difficult to determine what they are doing. If, for example, elephants painted pictures recognizable as elephants, flowers or whatever, then there would be very strong grounds for thinking they are creating art. But, to be fair to the animals, there are humans who create paintings that look exactly like those created by elephants. The main difference is that the humans claim to be artists while the elephants say nothing. But, if the matter is judged entirely by the work produced, if those humans are artists, then so are the elephants.
Another case is that of photography, and it seems reasonable to accept that a photo can be a work of art and a photographer an artist. The challenge is, obviously enough, distinguishing between the taking of photos and being an artist. To clarify, photos can be taken by automatic timers, motion sensors, tripwires or by accident, but these would not be cases involving an artist. To use an analogy, if the shelves in a shed fail and the paint spills to create a work identical to that of a Jackson Pollock or a Van Gogh, that would not make the shed’s owner an artist. If the paint were spilled by a trip-wire trap, this would not make the victim an artist. Being an artist in photography thus requires intent and control rather than automation or chance. At the very least, the photographer must know what she is doing and act with intent.
In the case of the monkey taking pictures, the key question is whether or not the monkey understood what it was doing and acted with intent. If the monkey was just playing with the camera and it just happened to take a few shots that looked good, the monkey is no more an artist than an automatic timer, motion sensor or defective shutter control that made the camera constantly shoot.
It might be objected that some of the shots were quite good aesthetically and, judging by the work itself, the monkey had produced art. This does have some appeal—after all, whether the work is art or not should (it can be argued) rest in the work itself rather than the process of creation. But, even if this is granted, it does not follow that the monkey is the artist. After all, an automated camera shooting constantly would almost certainly produce artistic photos eventually—but the automating machinery or software would not thus be an artist. Thus, there could be art but no artist. In the case of the monkey, this seems to be the most plausible explanation—the monkey was probably just pushing the button and by chance some good images occurred. As such, the monkey was not an artist.