Rolling through the Cosmodrome with my Dinklebot.
Killing Dregs and Vandals.
Dropping them with just one shot.
Off to Luna to bring the Hive some ruin.
What’s that, Dinklebot?
That Wizard came from the moon?
Flying to Venus to grind the Ishtar Sink.
Who’s setting off the alarms?
Why, it’s a bot named “Dink.”
Finally, to Mars to shoot up the Cabal.
F@ck that annoying talking ball.
In response to the nighttime announcement of the Ferguson verdict in which officer Wilson was not indicted, some people attacked the police and damaged property. Some experts have been critical of the decision to make the announcement at night, since the time of day does actually influence how people behave. In general, making such an announcement at night is a bad idea—unless one intends to increase the chances that people will respond badly.
Obviously enough, peacefully protesting is a basic right and in a democratic state the police should not interfere with that right. However, protests do escalate and violence can occur. In the United States it is all too common for peaceful protests to be marred by violence—most commonly damage to businesses and looting.
When considering reports of damage and looting during protests it is reasonable to consider whether or not the damage and looting are being done by actual protesters or by people who are opportunists using the protest as cover or an excuse. An actual protester is someone whose primary motivation is a moral one—she is there to express her moral condemnation of something she perceives as wrong. Not all people who go to protests are actual protesters—some are there for other reasons, some of which are not morally commendable. Some people, not surprisingly, know that a protest can provide an excellent opportunity to engage in criminal activity—to commit violence, to damage property and to loot. Protests do, sadly, attract such people, and often they are not from the area.
Of course, actual protesters can engage in violence and damage property. Perhaps they can even engage in looting (though that almost certainly crosses a moral line). Anger and rage are powerful things, especially righteous anger. A protester who is motivated by her moral condemnation of a perceived wrong can give in to her anger and do damage to others or their property. When people damage the businesses in their own community, this sort of behavior seems irrational—probably because it is. After all, setting a local gas station on fire is hardly morally justified by the alleged injustice of the grand jury’s decision not to indict Officer Wilson for the shooting of Brown. However, anger tends to impede rationality. I, and I assume most people, have seen people angry enough to break their own property.
While I am not a psychologist, I do suspect that people do such damage when they are angry because they cannot actually reach the target of their anger. Alternatively, they might be damaging property to vent their rage in place of harming people. I have seen people do just that. For example, I saw a person hit a metal door frame (and break his hand) rather than hit the person he was mad at. Anger does summon up a need to express itself and this can easily take the form of property damage.
When a protest becomes destructive (or those using it for cover start destroying things), the police have a legitimate role to play. While protests are intended to draw attention, often by disrupting the normal course of events, a state of protest does not grant protesters carte blanche to interfere with the legitimate rights of others. As such, the police have a legitimate right to prevent protesters from violating the rights of others, and this can correctly involve the use of force.
That said, the role of rage needs to be considered. When property is destroyed during protests, some people immediately condemn the destruction and wonder why people are destroying their own neighborhoods. In some cases, as noted above, the people doing the damage might not be from the neighborhood at all and might be there to destroy rather than to protest. If such people can be identified, they should be dealt with as the criminals they are. What becomes somewhat more morally problematic are people who are driven to such destruction by moral rage—that is, they have been pushed to a point at which they believe they must use violence and destruction to express their moral condemnation.
When looked at from the cool and calm perspective of distance, such behavior seems irrational and unwarranted. And, I think, it usually is. However, it is worth thinking of something that has caused the fire of righteous anger to ignite your soul. Think of that, and consider how you might respond if you believed that you had been systematically denied justice. Over. And over. Again.
In August of 2014 police officer Darren Wilson shot the unarmed Michael Brown to death. On November 24, 2014 a grand jury in Missouri failed to indict police officer Darren Wilson. Like most Americans, I have some thoughts about this matter.
In the United States, a grand jury’s function is to determine whether or not there is probable cause to prosecute. This level of proof is much lower than that of a criminal trial—such a trial requires (in theory) proof beyond a reasonable doubt. Unlike in a criminal trial, the grand jury is effectively run by the prosecutor and the defense has no real role in the process. As might be suspected, grand juries almost always indict. Almost always, that is, unless the person under consideration is a police officer who has killed someone. In such cases the officer is almost never indicted. As such, the decision in the Wilson case is exactly what should have been expected.
Now, it might be that the reason that police officers are almost never indicted for killing is that nearly all the killings are justified. In contrast, the reason that non-officers are almost always indicted is that there is almost always legitimate probable cause. This is, obviously enough, not impossible.
Of course, the real concern here is not with grand juries in general, but with this grand jury in particular. According to various news reports and experts, Wilson received a “gold plated” grand jury in terms of how it was handled by the prosecutor and the state. To be specific, the grand jury seems to have been run in such a way that Wilson received exceptionally good treatment in the case. This is in contrast with the sort of grand jury treatment other citizens typically get, which has been described as “tin plated.” In these grand juries an indictment is almost a foregone conclusion. This is not to say that Wilson’s grand jury involved corruption or misdeeds. Rather, the point is that there is a stark contrast between the sort of grand jury a typical citizen receives and the one Wilson received.
This distinction in treatment is one reason that people are justifiably angry about the matter. After all, a proper justice system would treat everyone equally—everyone would get the “gold plated” grand jury (or the “tin plated” one) rather than getting the sort of justice deemed fit for the person’s race, class, or profession. This sort of disparity is yet one more example of the injustices of our justice system.
Naturally, I am well aware that the real does not (and probably cannot) match the ideal. However, this sort of appeal to the real is more of an acceptance of the problem than a refutation of criticisms of the problem. Also, I do not expect a perfect system—merely a reasonably fair one.
In addition to the nature of the grand jury, there is also obviously the central issue: was Wilson justified in shooting Brown to death? In this case, the justification is grounded on the principle of defense of life: an officer is justified in using violence to protect his life or that of an innocent person when he has an “objectively reasonable” belief that there is such a threat. In Wilson’s case, the shooting of Brown would be warranted if Wilson had an “objectively reasonable” belief that Brown presented such a threat. Since the justification is based on the reasonable belief in a threat, the warrant for the use of force ends when the threat ends.
According to the information released to the public, there is evidence that Brown had close contact with Wilson, which is consistent with Wilson’s claim that Brown attacked him and tried to take his gun. Brown died a considerable distance from Wilson and this raises the legal and moral question of whether or not Wilson still had an “objectively reasonable” belief that Brown still presented a threat that could only be dealt with by lethal force. The grand jury decided that he did, which settles the legal aspect of the case. However, there is still the matter of the moral aspect—was Wilson actually warranted in killing Brown?
On the one hand, when one considers that Brown was unarmed and too far from Wilson to attack him, then it would be reasonable to consider that Wilson was not justified in killing Brown. On the other hand, if Brown appeared to be charging towards Wilson, then Wilson could be justified in shooting him. Since Brown was not shot in the back, it does seem clear that Brown was facing Wilson—but facing someone is not the same thing as being a threat. Unfortunately, there is no video of the incident and the eye-witness reports conflict (and eye-witness reports, even given in all honesty, are not very reliable). Since Brown is dead, we only have Wilson’s side of the story. As such, one cannot be certain whether Wilson was justified or not, assuming a right to kill when one has an “objectively reasonable” belief that one is threatened.
This principle can, of course, be challenged. Some people take the principle to set a very low threshold—an officer just has to feel threatened in order to be warranted to use deadly force. This, as might be imagined, can be seen as a threshold that is too low. Some states do give citizens the same right (against other citizens) as shown in the various infamous stand your ground laws and these have proven rather problematic. Others take the view that the principle itself is reasonable—after all, it essentially expresses John Locke’s principle that force can be used to protect one’s life or the lives of the innocent. But, even if the principle is reasonable, there is also the question of whether or not it is applied correctly. My view is that the use of lethal force requires a comparable threat to justify it, on the principle of a proportional response. That said, one must also consider the practicalities of combat situations—it can be difficult to judge intent and the heat of a fight can easily change a person’s perceptions.
As one final point, even if Wilson was justified in shooting Brown, the perception remains that the police and the justice system treat black Americans very differently from white Americans. Not surprisingly, some white people doubt this and do so in all honesty—they are assessing the system from their experiences and assume that everyone else has the same sort of experiences as they do. However, one must look beyond one’s experiences and consider those of others. While no one can completely get the experience and being of another, it would be a good thing for white folks to give some thought to what it is like to be non-white in America.
The most recent offering in Blackwell’s Philosophy and Pop Culture Series is the appropriately named Dungeons & Dragons & Philosophy. I was offered a free copy in return for mentioning the book on my blog and I am making good on that deal. If time permits, I’ll write a review of the book as well. I am not one of the authors and wasn’t asked to contribute, so there is no conflict of interest. Well, other than the free copy.
Here is the back cover info for the book:
“Does justice exist in the drow city of Menzoberranzan?
How does one cope with the death of a player character?
Is it ever morally acceptable to cast necromancy and summoning spells?
Is Raistlin Majere the same person over time?
Do demons and devils have free will?
First introduced by war-game enthusiasts Gary Gygax and Dave Arneson in 1974, Dungeons & Dragons developed into a cultural phenomenon that continues to cast a spell on millions of gaming aficionados around the world. Dungeons & Dragons and Philosophy delves into the heroic quests, deadly battles, and medieval courtly intrigue of the legendary role-playing game to probe its rich terrain of philosophically compelling concepts and ideas. From the nature of free will and the metaphysics of personal identity to the morality of crafting fictions and the role of friendship in collaborative storytelling, D&D players and gaming enthusiasts will gain startling insights into the deep philosophical issues that underlie a broad swath of role-playing tactics and strategies. Put the broadswords away and let Dungeons & Dragons and Philosophy transport you across the philosophical divide.”
To answer the questions:
1. No. Or yes. Traditional drow are always chaotic evil, so they have no justice. Except that which is dispensed by the adventurers who give them the deaths they really, really deserve. New drow can be any alignment, but tend to be evil and crazy. So, justice is possible, but usually not actual. But, in D&D justice is whatever the DM says it is.
2. Roll a new one.
3. Yes. Necromancy includes healing spells like cure light wounds (look at the spell descriptions). Healing people is morally okay, in general. Summoned creatures are (in the standard game) not permanently harmed by their battles. Also, most of the time they match the summoner in alignment and usually advance the cause of the alignment when summoned to fight. So, summoning them is fine. Plus, they are often things that really like to fight. In D&D that is most things.
4. No idea who that is. I’m getting vague memories about the Dragonlance books I never read, though. I’ll go with the usual answer about games: whatever the DM says.
5. Depends on the DM. Metaphysical issues in RPGs are settled by the dungeon master. In my campaign, they get free will. So yes. For me. Some DMs take devils and demons to always be evil and without free moral choice. That is the AD&D Monster Manual view: they are always evil (lawful for devils, chaotic for demons). So no. For them. D&D metaphysics is easy.
On November 20, 2014 Myron May allegedly shot three people on the FSU campus in Tallahassee, Florida. He was shot to death after allegedly firing at the police. I did not know May, but I do know people who did—that is the sort of place Tallahassee is: if you don’t know someone, you know someone who does.
While the wounding of the three people was terrible, May can be seen as the fourth victim. I did know that May had been a cross-country runner, that he had graduated from FSU and then had gone on to law school. During most of his life, May seemed to be the last person who would hurt anyone else—he was well regarded and interested in doing good for the community. But, at some point, his mind apparently spiraled down into the darkness—he showed signs of mental illness that culminated in his death on the campus he loved.
Due to the terrible regularity of gun violence in the United States, I have nothing new to say about the usual issues relating to guns. However, I will address some important issues relating to mental illness in the United States.
As I learned many shootings ago, a person can only be involuntarily detained for mental health issues when he presents an imminent danger. One practical impact of this high threshold is that authorities often cannot act until someone has actually acted and then it can be too late.
It can be argued that the threshold should be lower so that a person can be helped before he engages in violence. The practical challenge is determining the extent to which a person presents a danger to himself or others. The moral challenge is justifying lowering the threshold.
A plausible way to justify this is by use of a utilitarian argument: helping someone with mental issues before he commits violence will help prevent such acts of violence. That said, there is a moral concern with allowing the state to use its coercive power on someone because he might do something, despite a lack of adequate evidence that he intends to take a harmful action.
It could be countered that certain mental issues are adequate evidence that a person is reasonably likely to engage in harmful behavior, even though she has done nothing to reach the imminent danger threshold.
This is certainly appealing. To use an analogy to physical health, if certain factors indicate a high risk of an illness arising, then it is sensible to treat that condition before it manifests. Likewise, if certain factors indicate a high risk of a person with mental issues engaging in violence against others, then it makes sense to treat for that condition before it manifests.
An obvious objection is that people can refuse medical treatment for physical conditions and hence they should be able to do the same for dangerous mental issues. A reply is that if a person refuses treatment for a physical ailment, he is usually only endangering himself. But if someone refuses treatment for a condition that can result in her engaging in violence against others, then she is putting others in danger without their consent and she does not have the liberty or right to do this. To use another analogy, some forms of mental illness can be seen as analogous to highly infectious diseases. The analogy would not be to claim that mental illness can be caught, but that an infected person presents a serious risk to others and, likewise, a person with a certain sort of mental illness can also present a serious risk to others. Provided that there is adequate evidence of the danger, then the state can be warranted in acting against the individual’s will. The practical challenge is determining what conditions warrant acting.
One practical concern is that mental health science is behind the physical health sciences and the physical health sciences are still rather limited. Because of this, predictions made using mental health science will tend to be of dubious accuracy. To use the coercive power of the state on such a tenuous foundation would be morally problematic. After all, a person can only be justly denied liberty on adequate grounds and such a prediction does not seem strong enough to warrant such action.
A counter to this is to argue that preventing another mass shooting is worth the price of denying people their freedom. An obvious worry is that without clear guidelines and limitations, this sort of principle could be extended to anyone who might commit a crime—thus justifying locking up people for being potential criminals. This would certainly be wrong.
It might be countered that there is no danger of the principle being extended and that such worries are worries based on a slippery slope. After all, one might say, the principle only applies to those deemed to have a certain sort of mental issue. Normal people, one might say in a calm voice, have nothing to worry about.
However, it seems that normal people would have reason to worry. After all, it is normal for people to have the occasional mental issue (such as depression). There is also the concern that the application of the fuzzy science of mental health might result in people being subject to coercion without real justification.
In light of these considerations, I do recommend that we reconsider the threshold for applying the coercive power of the state to people with mental issues. However, this reconsideration needs to involve carefully considered guidelines and should be focused on helping people rather than merely locking them away in the hopes of protecting others.
The situation at FSU also illustrated another point of moral concern: while May was apparently justly shot by the police after allegedly firing on them, the officers’ only viable response was lethal in nature. While police do have some less-than-lethal options like Tasers and nightsticks, these options are usually not viable against a person actively shooting at an officer from a distance. There have been some efforts to produce less-than-lethal options that are as effective, or nearly as effective, as guns, but these options have not proven successful and police have generally not adopted them.
From a moral perspective, it would clearly be preferable if officers had better less-than-lethal options. In the case of May’s situation, if he had been rendered unable to act rather than shot to death, he might have been able to benefit from medical help and return to a normal life. In the case of criminals who are not suffering from mental illness, it would still seem morally preferable to be able to effectively subdue them without shooting them. As such, there is a good moral reason to develop an effective less-than-lethal weapon.
It is also important to note that such a weapon would need to be effective enough to morally justify its use in place of a gun. After all, the police should not be expected to use a weapon that is not adequately effective—this would put them and the public in unjustified danger. Such a weapon could be less effective than a gun and still be acceptable, but there is clearly an important question in regards to how effective the weapon would need to be. In practical terms, of course, there is the question of whether or not such a weapon is even possible. After all, while something like the stun setting on a Star Trek phaser would be ideal, it is likely to always just be science fiction.
Although bionics have been part of science fiction for quite some time (a well-known example is the Six Million Dollar Man), the reality of prosthetics has long been rather disappointing. But, thanks to America’s endless wars and recent advances in technology, bionic prosthetics are now a reality. There are now replacement legs that replicate the functionality of the original organics amazingly well. There have also been advances in prosthetic arms and hands as well as progress in artificial sight. As with all technology, these bionic devices raise some important ethical issues.
The easiest moral issue to address is that involving what could be called restorative bionics. These are devices that restore a degree of the original functionality possessed by the lost limb or organ. For example, a soldier who lost the lower part of her leg to an IED in Iraq might receive a bionic device that restores much of the functionality of the lost leg. As another example, a person who lost an arm in an industrial accident might be fitted with a replacement arm that does some of what he could do with the original.
On the face of it, the burden of proof would seem to rest on those who would claim that the use of restorative bionics is immoral—after all, they merely restore functionality. However, there is still the moral concern about the obligation to provide such restorative bionics. One version of this is the matter of whether or not the state is morally obligated to provide such devices to soldiers maimed in the course of their duties. Another is whether or not insurance should cover such devices for the general population.
In general, the main argument against both obligations is financial—such devices are still rather expensive. Recast as a utilitarian moral argument, the claim would be that the cost outweighs the benefits; therefore the state and insurance companies should not pay for such devices. One reply, at least in the case of the state, is that the state owes the soldiers restoration. After all, if a soldier lost the use of a body part (or parts) in the course of her duty, then the state is obligated to replace that part if it is possible. Roughly put, if Sally gave her leg for her country and her country can provide her with a replacement bionic leg, then it should do so.
In the case of insurance, the matter is somewhat more complicated. In the United States, insurance is mostly a private, for-profit business. As such, a case can be made that the obligations of the insurance company are limited to the contract with the customer. So, if Sam has coverage that pays for his leg replacement, then the insurance company is obligated to honor that. If Bill does not have such coverage, then the company is not obligated to provide the replacement.
Switching to a utilitarian counter, it can be argued that the bionic replacements actually save money in the long term. Inferior prosthetics can cause the user pain, muscle and bone issues and other problems that result in more ongoing costs. In contrast, a superior prosthetic can avoid many of those problems and also allow the person to better return to the workforce or active duty. As such, there seem to be excellent reasons in support of the state and insurance companies providing such restorative bionics. I now turn to the ethics of bionics in sports.
Thanks to the (now infamous) “Blade Runner” Oscar Pistorius, many people are familiar with unpowered, relatively simple prosthetic legs that allow people to engage in sports. Since these devices seem to be inferior to the original organics, there is little moral worry here in regards to fairness. After all, a device that merely allows a person to compete as he would with his original parts does not seem to be morally problematic. This is because it confers no unfair advantage and merely allows the person to compete more or less normally. There is, however, the concern about devices that are inferior to the original—these would put an athlete at a disadvantage and could warrant special categories in sports to allow for fair competition. Some of these categories already exist and more should be expected in the future.
Of greater concern are bionic devices that are superior to the original organics in relevant ways. That is, devices that could make a person faster, better or stronger. For example, powered bionic legs could allow a person to run at higher speeds than normal and also avoid the fatigue that limits organic legs. As another example, a bionic arm coupled with a bionic eye could allow a person incredible accuracy and speed in pitching. While such augmentations could make for interesting sporting events, they would seem to be clearly unethical when used in competition against unaugmented athletes. To use the obvious analogy, just as it would be unfair for a person to use a motorcycle in a 5K foot race, it would be unfair for a person to use bionic legs that are better than organic legs. There could, of course, be augmented sports competitions—these might even be very popular in the future.
Even if the devices did not allow for superior performance, it is worth considering that they might be banned from competition for other reasons. For example, even if someone’s powered legs only allowed them a slow jog in a 5K, this would be analogous to using a mobility scooter in such a race—though it would be slow, the competitor is not moving under her own power. Naturally, there should be obvious exceptions for events that are merely a matter of participation (like charity walks).
Another area of moral concern is the weaponization of bionic devices. When I was in graduate school, I made some of my Ramen noodle money writing for R. Talsorian Games’ Cyberpunk. This science fiction game featured a wide selection of implanted weapons as well as weapon-grade cybernetic replacement parts. Fortunately, these weapons do not add a new moral problem since they fall under the existing ethics regarding weaponry, concealed or otherwise. After all, a gun in the hand is still a gun, whether it is held in an organic hand or literally inside a mechanical hand.
One final area of concern is that people will elect to replace healthy organic parts with bionic components either to augment their abilities or out of a psychological desire or need to do so. Science fiction, such as the above mentioned Cyberpunk, has explored these problems and even come up with a name for the mental illness caused by a person becoming more machine than human: cyberpsychosis.
In general, augmenting for improvement does seem morally acceptable, provided that there are no serious side effects (like cyberpsychosis) or other harms. However, it is easy enough to imagine various potential dangers: augmented criminals, the poor being unable to compete with the augmented rich, people being compelled to upgrade to remain competitive, and so on—all fodder for science fiction stories.
As far as people replacing their healthy organic parts because of some sort of desire or need to do so, that would also seem acceptable as a form of lifestyle choice. This, of course, assumes that the procedures and devices are safe and do not cause health risks. Just as people should be allowed to have tattoos, piercings and such, they should be allowed to biodecorate.
While I like being a professor, I am obligated to give a warning to those considering this career path. To be specific, I would warn you to reconsider. This is not because I fear the competition (I am a tenured full professor, so I won’t be competing with anyone for a job). It is not because I have turned against my profession to embrace anti-intellectualism or some delusional ideology about the awfulness of professors. It is not even due to disillusionment. I still believe in education and the value of educators. My real reason is altruism and honesty: I want potential professors to know the truth because it will benefit them. I now turn to the reasons.
First, there is the cost. In order to be a professor, you will need a terminal degree in the field—typically a Ph.D. This means that you will first need to get a B.A. or B.S., and college is rather expensive these days. Student debt, as the media has been pointing out, is at a record high. While a bachelor’s degree is, in general, a great investment, you will need to go beyond that and complete graduate school.
While graduate school is expensive, many students work as teaching or research assistants. These positions typically pay the cost of tuition and provide a very modest paycheck. Since the pay is low and the workload is high, you will be more or less in a holding pattern for the duration of grad school in terms of pay and probably life. After 3-7+ years, you will (if you are persistent and lucky) have the terminal degree.
If you are paying for graduate school, it will be rather expensive and will no doubt add to your debt. You might be able to work a decent job at the same time, but that will tend to slow down the process, thus dragging out graduate school.
Regardless of whether you had to pay or not, you will be attempting to start a career after about a decade (or more) in school—so be sure to consider that fact.
Second, the chances of getting a job are usually not great. While conditions do vary, the general trend has been that education budgets have been getting smaller and universities are spending more on facilities and administrators. As such, if you are looking for a job in academics, your better bet is to try to become an administrator rather than a professor. The salary for administrators is generally better than that of professors, although the elite coaches of the prestige sports have the very best salaries.
When I went on the job market in 1993, it was terrible. When I applied, I would get a form letter saying how many hundreds of people applied and how sorry the search committee was about my not getting an interview. I got my job by pure chance—I admit this freely. While the job market does vary, the odds are not great. So, consider this when deciding on the professor path.
Third, even if you do get a job, it is more likely to be a low-paying, benefit free adjunct position. Currently, 51.2% of faculty in non-profit colleges and universities are adjunct faculty. The typical pay for an adjunct is $20-25,000 per year and most positions have neither benefits nor security. The average salary for professors is $84,000. This is good, but not as good as what a person with an advanced degree makes outside of academics. Also, it is worth noting that the average salary for someone with just a B.A. is $45,000. By the numbers, if you go for a professorship, the odds are that you will be worse off financially than if you just stuck with a B.A. and went to work.
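The salary comparison above can be made concrete with a quick expected-value sketch. This is a minimal illustration using only the figures given in the post (51.2% adjunct share, $20-25K adjunct pay, $84K professor average, $45K B.A. average); the ten-year schooling span and the break-even framing are my own illustrative assumptions, not the author's calculation.

```python
# Expected-value sketch of the academic-career gamble.
# Figures from the post; schooling span and break-even framing are assumptions.

ADJUNCT_SHARE = 0.512      # share of faculty who are adjuncts (from the post)
ADJUNCT_PAY = 22_500       # midpoint of the $20-25K range
PROFESSOR_PAY = 84_000     # average professor salary
BA_PAY = 45_000            # average salary with just a B.A.
YEARS_IN_SCHOOL = 10       # assumed B.A. + grad school span ("about a decade")

# Expected academic salary, weighting by the chance of landing an adjunct post.
expected_academic = ADJUNCT_SHARE * ADJUNCT_PAY + (1 - ADJUNCT_SHARE) * PROFESSOR_PAY

# Earnings foregone during the grad-school years (vs. working with a B.A.).
grad_years = YEARS_IN_SCHOOL - 4
foregone = grad_years * BA_PAY

annual_edge = expected_academic - BA_PAY
years_to_break_even = foregone / annual_edge

print(f"expected academic salary: ${expected_academic:,.0f}")
print(f"annual edge over a B.A.: ${annual_edge:,.0f}")
print(f"years to recoup foregone earnings: {years_to_break_even:.0f}")
```

On these assumptions the expected academic salary is about $52,500, only about $7,500 a year ahead of the B.A. average, so recouping the six years of foregone B.A.-level earnings takes decades—which is the sense in which going for a professorship can leave you worse off financially.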
Fourth, the workload of professors is rather higher than people think. While administrative, teaching and research loads vary, professors work about 61 hours per week and work on weekends (typically grading, class prep and research). Thanks to budget cuts and increased enrollment, class sizes have tended to increase or remain high. For example, I typically have 150+ students per semester, with three of those classes being considered “writing intensive” (= lots of papers to grade).
While people do like to point out that professors get summers off, it is important to note that a summer off is a summer without pay. Also, even when a professor is not under contract for the summer, she is typically still doing research and class preparation. So, if you are dreaming about working two or three days a week and having an easy life, then being a professor is not the career for you.
Fifth, the trend in academics has been that professors do more and more uncompensated administrative work on top of their academic duties (research, teaching, advising, etc.). As one extreme example, one semester I was teaching four classes, advising, writing a book, directing the yearlong seven-year program review, completing all the assessment tasks, and serving on nine committees. So, be sure to consider the joys of paperwork and meetings when considering being a professor.
Sixth, while there was a time when professors were well respected, that respect has faded. Some of this is due to the politicization of education. Those seeking to cut budgets to lower taxes, to transform education into a for-profit industry, and to break education unions have done an able job demonizing the profession and academics. Some is, to be fair, due to professors. As a whole, we have not done as good a job as we should in making the case for our profession in the public arena.
Seventh, while every generation claims that the newer generations are worse, the majority of students today see education as a means to the end of getting a well-paying job (or just a job). Given the economy that our political and financial elites have crafted, this is certainly a sensible and pragmatic approach. However, it has also translated into less student interest. So, if you are expecting students who value education, you must prepare for disappointment. The new model of education, as crafted by state legislators, administrators and the business folks is to train the job fillers for the job creators. The students have largely accepted this model as well, with some exceptions.
Finally, the general trend in politics has been one of increased hostility to education and in favor of seeing education as yet another place to make money. So, things will continue to worsen—perhaps to the point that professors will all be low-paid workers in the for-profit education factories that are manufacturing job fillers for the job creators.
In light of all this, you should probably not be a professor.
For those not familiar with the term, to catcall is to whistle, shout or make a comment of a sexual nature to a person passing by. In general, the term is used when the person being harassed is a woman, but men can also be subject to such harassment.
Thanks to a video documenting a woman’s 10 hours of being catcalled as she walked through New York City, catcalling has garnered considerable attention. While it is well known that men catcall, it is less obvious why men engage in this behavior.
Some men seem to hold to the view that they have a right to catcall. As one man put it, “if you have a beautiful body, why can’t I say something?” This view seems to have two main parts. The first (“you have a beautiful body”) seems to indicate that the woman is responsible for the response of men because she has a beautiful body. It is, I think, reasonable to accept the idea that beauty, be it in a person or painting, can evoke a response from a viewer. The problem is, however, that a catcall is not a proper response to beauty and certainly not a proper response to a person. Also, while a woman’s appearance might cause a reaction, the verbal response chosen by the man (or boy) is his responsibility. To use an analogy, seeing a cake at a wedding might make me respond with hunger, but if I chose to paw at the cake and drool on it, then the response (which is very inappropriate) is my choice. To forestall any criticism, I am not saying that women are objects—I just needed an analogy and I am hungry as I write this. Hence the cake analogy.
The second part (“why can’t I say something?”) seems to indicate that the man has a presumptive right to catcall. Put another way, this seems to assume that the burden of proving that men should not catcall rests on women and that it should be assumed that a man has such a right. While the moral right to free speech does entail that men have a right to express their views, there is also the matter of whether it is right to engage in such catcalling. I would say not, on the grounds that the harm done to women by men catcalling them outweighs the harm that would be done to men if they did not engage in such behavior. While I am wary of any laws that infringe on free expression, I do hold that men should not (in the moral sense) behave this way.
This question also seems to show a sense of entitlement—that the man seeing the woman as beautiful entitles him to harass her. This seems similar to believing that seeing someone as unattractive warrants saying derogatory things about the person. Again, while people do have a freedom of expression, there are things that are unethical to express.
Some men also claim that the way a woman dresses warrants their behavior. As one young man said, “If a girl comes out in tight leggings, and you can see something back there… I’m saying something.” This is, obviously enough, just an expression of the horrible view that a woman invites or deserves the actions of men by her choice of clothing. This “justification” is best known as a “defense” for rape—the idea that the woman was “asking for it” because she was dressed in provocative clothing. However, a woman’s mode of dress does not warrant her being catcalled or attacked. After all, if a man was wearing an expensive Rolex watch and he was robbed, it would not be said that he was provocative or was “asking for it” by displaying such an expensive timepiece. Naturally, it might be a bad idea to dress a certain way or wear an expensive watch when going certain places, but this does not justify the catcalling or robbery.
There has been some speculation that catcalling, like everything else, is the result of natural selection. Looked at one way, if the theory of evolution is correct and one also accepts the notion that human behavior is determined (rather than free), then this would be true. This is because all human behavior would be the result of such selection and determining factors. In this case, one cannot really say that the behavior would be wrong, at least if something being immoral requires that the person engaging in the behavior could do otherwise. If a person cannot do otherwise, placing blame or praise on the person would be pointless—like praising or blaming water for boiling at a certain temperature and pressure. Looked at another way, it might be useful to consider the evolutionary forces that might lead to the behavior.
One possible “just so” story is that males would call out to passing females as a form of mating display (like how birds display for each other). Some of the females would respond positively and thus the catcalling genes would be passed on to future generations of men who would in turn catcall women to attract a mate.
One reason to accept this view is that some forms of what could be regarded as catcalling do seem to work. Having been on college campuses for decades, I have seen a vast amount of catcalling in various forms (including the “hollaback” thing). Some women respond by ignoring it, some respond with hostility, and some respond positively. While the positive response rate seems low, it is a low effort “fishing trip” and hence the cost to the male is rather small. After all, he just has to sit there and say things as “bait” in the hopes he will get a bite. Like fishing, a person might cast hundreds of times to catch a single fish.
One reason to reject this view is that many of the guys who use it will obviously never get a positive response. However, they might think they will—they are casting away like mad, not realizing that their “bait” will never work. After all, they might have seen it work for other guys and think they have a chance.
Moving away from evolution, one stock explanation for catcalling is that men do it as an expression of power—they are doing it to show (to themselves, other men and women) that they have power over women. A man might be an unfit, ugly, overweight, graceless, unemployed slob but he can make a fit, beautiful and successful woman feel afraid and awful by screeching about her buttocks or breasts. Of course, catcalling is not limited to such men, though the power motive would still seem to hold. This is clearly morally reprehensible because of the harm it does to women. Even if the woman is not afraid of the man, having to hear such things can diminish her enjoyment. While I am a man, I do understand what it is like to have stupid and hateful remarks yelled at me. When I was young and running was not as accepted as it is now, it was rare for me to go for a run without someone saying something stupid or hateful. Or throwing things. Being a reasonably large male, I did not feel afraid (most of those yelling did so from the safety of passing automobiles). However, such remarks did bother me—much in the way that being bitten by mosquitoes bothers me. That is, it just made the run less pleasant. As such, I have some idea of what it is like for women to be catcalled, but it is presumably much worse for them.
I have even been catcalled by women—but I am sure that it is not the same sort of experience that women face when catcalled by men. After all, the women who have catcalled me are probably just kidding (perhaps even being ironic) and, even if they are not, they almost certainly harbor no hostile intentions and present no real threat. To have a young college woman yell “nice ass” from her car as I run through the FSU campus is a weird sort of compliment rather than a threat. Though it is still weird. In contrast, when men engage in such behavior it seems overtly predatory and threatening. So, stop catcalling, guys.
One of the stereotypes regarding teenagers is that they are poor decision makers and engage in risky behavior. This stereotype is usually explained in terms of the teenage brain (or mind) being immature and lacking the reasoning abilities of adults. Of course, adults often engage in poor decision-making and risky behavior.
Interestingly enough, there is research that shows teenagers use basically the same sort of reasoning as adults and that they even overestimate risks (that is, regard something as more risky than it is). So, if kids use the same processes as adults and also overestimate risk, then what needs to be determined is how teenagers differ, in general, from adults.
Currently, one plausible hypothesis is that teenagers differ from adults in terms of how they evaluate the value of a reward. The main difference, or so the theory goes, is that teenagers place higher value on rewards (at least certain rewards) than adults. If this is correct, it certainly makes sense that teenagers are more willing than adults to engage in risk taking. After all, the rationality of taking a risk is typically a matter of weighing the (perceived) risk against the (perceived) value of the reward. So, a teenager who places higher value on a reward than an adult would be acting rationally (to a degree) if she was willing to take more risk to achieve that reward.
Obviously enough, adults also vary in their willingness to take risks and some of this difference is, presumably, a matter of the value the adults place on the rewards relative to the risks. So, for example, if Sam values the enjoyment of sex more than Sally, then Sam will (somewhat) rationally accept more risks in regards to sex than Sally. Assuming that teenagers generally value rewards more than adults do, then the greater risk taking behavior of teens relative to adults makes considerable sense.
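The weighing described above can be sketched as a simple expected-value calculation. The numbers below are illustrative assumptions, not data; the point is only that holding the perceived risk fixed while raising the value placed on the reward can flip the decision from "not worth it" to "worth it."

```python
# Toy expected-value model of risk taking: an act is (somewhat) rational
# when the probability-weighted reward outweighs the probability-weighted
# cost. All numbers are illustrative assumptions.

def expected_value(p_harm: float, cost: float, reward: float) -> float:
    """Reward if things go well, minus the probability-weighted cost."""
    return (1 - p_harm) * reward - p_harm * cost

p_harm, cost = 0.2, 50.0   # both groups perceive the same risk
adult_reward = 10.0        # value an adult places on the reward
teen_reward = 25.0         # a teen weighs the same reward more heavily

print(expected_value(p_harm, cost, adult_reward))  # negative: not worth it
print(expected_value(p_harm, cost, teen_reward))   # positive: worth the risk
```

Same risk, same cost; only the subjective value of the reward differs, which is enough to make the teen's choice (somewhat) rational by her own lights.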
It might be wondered why teenagers place more value on rewards relative to adults. One current theory is based on the workings of the brain. On this view, the sensitivity of the human brain to dopamine and oxytocin peaks during the teenage years. Dopamine is a neurotransmitter that is supposed to trigger the “reward” mechanisms of the brain. Oxytocin is another neurotransmitter, one that is also linked with the “reward” mechanisms as well as social activity. Assuming that the teenage brain is more sensitive to the reward triggering chemicals, then it makes sense that teenagers would place more value on rewards. This is because they do, in fact, get a greater reward than adults. Or, more accurately, they feel more rewarded. This, of course, might be one and the same thing—perhaps the value of a reward is a matter of how rewarded a person feels. This does raise an interesting subject, namely whether the value of a reward is a subjective or objective matter.
Adults are often critical of what they regard as irrationally risky behavior by teens. While my teen years are well behind me, I have looked back on some of my decisions that seemed like good ideas at the time. They really did seem like good ideas, yet my adult assessment is that they were not good decisions. However, I am weighing these decisions in terms of my adult perspective and in terms of the later consequences of these actions. I also must consider that the rewards that I felt in the past are now naught but faded memories. To use the obvious analogy, it is rather like eating an entire cake. At the time, that sugar rush and taste are quite rewarding and it seems like a good idea while one is eating that cake. But once the sugar rush gives way to the sugar crash and the cake, as my mother would say, “went right to the hips”, then the assessment might be rather different. The food analogy is especially apt: as you might well recall from your own youth, candy and other junk food tasted so good then. Now it is mostly just…junk. This also raises an interesting subject worthy of additional exploration, namely the assessment of value over time.
Going back to the cake, eating the whole thing was enjoyable and seemed like a great idea at the time. Yes, I have eaten an entire cake. With ice cream. But, in my defense, I used to run 95-100 miles per week. Looking back from the perspective of my older self, that seems to have been a bad idea and I certainly would not do that (or really enjoy doing so) today. But, does this change of perspective show that it was a poor choice at the time? I am tempted to think that, at the time, it was a good choice for the kid I was. But, my adult self now judges my kid self rather harshly and perhaps unfairly. After all, there does seem to be considerable relativity to value and it seems to be mere prejudice to say that my current evaluation should be automatically taken as being better than the evaluations of the past.
Like most people, I have eaten bugs. Also, like most Americans, this consumption has been unintentional and often in ignorance. In some cases, I’ve sucked in a whole bug while running. In most cases, the bugs are bug parts in foods—the FDA allows a certain percentage of “debris” in our food and some of that is composed of bugs.
While Americans typically do not willingly and knowingly eat insects, about 2 billion people do and there are about 2,000 species that are known to be edible. As might be guessed, many of the people who eat insects live in developing countries. As the countries develop, people tend to switch away from eating insects. This is hardly surprising—eating meat is generally seen as a sign of status while eating insects typically is not. However, there are excellent reasons to utilize insects on a large scale as a food source for humans and animals. Some of these reasons are practical while others are ethical.
One practical reason to utilize insects as a food source is the efficiency of insects. 10 pounds of feed will yield 4.8 pounds of cricket protein, 4.5 pounds of salmon, 2.2 pounds of chicken, 1.1 pounds of pork, and .4 pounds of beef. With an ever-growing human population, increased efficiency will be critical to providing people with enough food.
A second practical reason to utilize insects as a food source is that they require less land to produce protein. For example, it takes 269 square feet to produce a pound of pork protein while it requires only 88 square feet to generate one pound of mealworm protein. Given an ever-expanding population and ever-decreasing available land, this is a strong selling point for insect farming as a food source. It is also morally relevant, at least for those who are concerned about the environmental impact of food production.
A third reason, which might be rejected by those who deny climate change, is that producing insect protein generates less greenhouse gas. The above-mentioned pound of pork generates 38 pounds of CO2 while a pound of mealworms produces only 14. For those who believe that CO2 production is a problem, this is clearly both a moral and practical reason in favor of using insects for food. For those who think that CO2 has no impact or does not matter, this would be no advantage.
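The figures in the last three paragraphs can be pulled together into a quick back-of-the-envelope comparison. All the raw numbers come from the text above; only the ratios are computed here.

```python
# Protein-production comparisons using the figures cited above.
# feed_yield: pounds of protein produced per 10 pounds of feed.

feed_yield = {"cricket": 4.8, "salmon": 4.5, "chicken": 2.2, "pork": 1.1, "beef": 0.4}

# Pounds of feed needed per pound of protein (lower is better).
feed_per_pound = {animal: 10 / y for animal, y in feed_yield.items()}
print(feed_per_pound["cricket"])   # ~2.08 lb of feed per lb of cricket protein
print(feed_per_pound["beef"])      # 25 lb of feed per lb of beef protein

# Mealworms vs pork, from the land and CO2 figures above.
print(269 / 88)    # pork needs ~3x the land of mealworms per lb of protein
print(38 / 14)     # pork emits ~2.7x the CO2 of mealworms per lb of protein
```

By these figures, beef needs roughly twelve times as much feed per pound of protein as crickets, which is the efficiency gap the practical argument turns on.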
A fourth practical reason is that while many food animals are fed using food that humans could also eat (like grain and corn based feed), many insects readily consume organic waste that is unfit for human consumption. As such, insects can transform low-value feed material (such as garbage) into higher value feed or food. This would also provide a moral reason, at least for those who favor reducing the waste that ends up in landfills. This could provide some interesting business opportunities and combinations—imagine a waste processing business that “processes” organic waste with insects and then converts the insects to feed, food or for use in other products (such as medicine, lipstick and alcoholic beverages).
Perhaps the main moral argument in favor of choosing insect protein over protein from animals such as chicken, pigs and cows is based on the assumption that insects have a lower moral status than such animals or at least would suffer less.
In terms of the lower status version, the argument would be a variation on one commonly used to support vegetarianism over eating meat: plants have a lower moral status than animals; therefore it is preferable to eat plants rather than animals. Assuming that insects have a lower moral status than chickens, pigs, cows, etc., then using insects for food would be morally preferable. This, of course, also rests on the assumption that it is preferable to do wrong (in this case kill and eat) to beings with a lesser moral status than to those with a higher status.
In terms of the suffering argument, this would be a stock utilitarian style argument. The usual calculation involves weighing the harms (in this case, the suffering) against the benefits. Insects are, on the face of it, less able to suffer (and less able to understand their own suffering) than animals like pigs and cows. Also, insects would seem to suffer less under the conditions in which they would be raised. While chickens might be factory farmed with their beaks clipped and confined to tiny cages, mealworms would be pretty much doing what they would do in the “wild” when being raised as food. While the insect would still be killed, it would seem that the overall suffering generated by using insects as food would be far less than that created by using animals like pigs and cows as food. This would seem to be a morally compelling argument.
The most obvious problem with using insects as food is what people call the “yuck factor.” Bugs are generally seen as dirty and gross—things that you do not want to find in food, let alone being the food. Some of the “yuck” is visual—seeing the insect as one eats it. One obvious solution is to process insects into forms that look like “normal” foods, such as powders, pastes, and the classic “mystery meat patty.” People can also learn to overcome the distaste, much as some people have to overcome their initial rejection of foods like lobster and crab.
Another concern is that insects might bear the stigma of being a food suitable for “primitive” cultures and not suitable for “civilized” people. Insect based food products might also be regarded as lacking in status, especially in contrast with traditional meats. These are, of course, all matters of social perception. Just as they are created, they can be altered. As such, these problems could be overcome.
Since I grew up eating lobsters and crabs (I’m from Maine), I am already fine with eating “bug-like” creatures. So, I would not have any problem with eating actual bugs, provided that they are safe to eat. I will admit that I probably will not be serving up plates of fried beetles to my friends, but I would have no problem serving up food containing properly processed insects. And not just because it would be, at least initially, funny.