Three Questions to Ask About Pages to Screens
While I consider myself something of a movie buff, I am out-buffed by one of my colleagues. This is a good thing—I enjoy the opportunity to hear about movies from someone who knows much more than I. We recently had a discussion about science-fiction classics and one sub-topic that came up was the matter of movies based on books or short stories.
Not surprisingly, the discussion turned to Blade Runner, which is supposedly based on Do Androids Dream of Electric Sheep? by Philip K. Dick. While I like the movie, some fans of the author hate the movie because it deviates from the book. This leads to two of the three questions.
The first question, which I think is the most important of the three, is this: is the movie good? The second question, which I consider less important, is this: how much does the movie deviate from the book/story? For some people, the second question is rather important and their answer to the first question can hinge on the answer to the second. For these folks, the greater the degree of deviation from the book/story, the worse the movie. This presumably rests on the view that an important aesthetic purpose of a movie based on a book/story is to faithfully reproduce the book/story in movie format.
My own view is that deviation from the book/story is not actually relevant to the quality of the movie as a movie. That is, if the only factor that allegedly makes the movie bad is that it deviates from the book/story, then the movie is actually good. One way to argue for this is to point out the obvious: if someone saw the movie without knowing about the book, she would presumably regard it as a good movie. If she then found out it was based on a book/story, then nothing about the movie would have changed—as such, it should still be a good movie on the grounds that the relation to the book/story is external to the movie. To use an analogy, imagine that someone sees a painting and regards it as well done artistically. Then the person finds out it is a painting of a specific person and finds a photo of the person that shows the painting differs from the photo. To then claim that the painting is badly done would be to make an unfounded claim.
It might be countered that the painting would be bad, because it failed to properly imitate the person in the photo. However, this would merely count against the accuracy of the imitation and not the artistic merit of the work. That it does not look exactly like the person would not entail that it is lacking as a work of art. Likewise for the movie: the fact that it is not exactly like the book/story does not entail that it is thus badly done. Naturally, it is fair to claim that it does not imitate well, but this is a different matter from being a well-done work.
That said, I am sympathetic to the view that a movie does need to imitate a book/story to a certain degree if it is to legitimately claim that name. Take, for example, the movie The Lawnmower Man. While not a great film, the only thing it has in common with the Stephen King story is the name. In fact, King apparently sued over this because the film had no meaningful connection to his story. However, whether the movie has a legitimate claim to the name of a book/story or not is a matter that is distinct from the quality of the movie. After all, a very bad movie might be faithful to a very bad book/story. But it would still be bad.
The third question I came up with was this: is the movie so bad that it desecrates the story/book? In some cases, authors sell the film rights to books/stories or the works become public domain (and thus available to anyone). In some cases, the films made from such works are both reasonably true to the originals and reasonably good. The obvious examples here are the Lord of the Rings movies. However, there are cases in which the movie (or TV show) is so bad that its badness desecrates the original work by associating its awfulness with a good book/story.
One example of this is the desecration of A Wizard of Earthsea by the Sci-Fi Channel (or however they spell it these days). This was so badly done that Ursula K. Le Guin felt obligated to write a response to it. While the book is not one of my favorites, I did like it and was initially looking forward to seeing it as a series. However, watching it was the TV equivalent of seeing a friend killed and re-animated as a shuffling horror of a zombie. Perhaps not quite that bad—but still pretty damn bad. Since I also like Edgar Rice Burroughs' Mars books, I did not see the travesty that is Disney's John Carter. To answer my questions, this movie was apparently very bad, deviated from the rather good book, and did desecrate it just a bit (I have found it harder to talk people into reading the books since they think of the badness of the movie).
From both a moral and aesthetic standpoint, I would contend that if a movie is to be made from a book or story, those involved have an obligation to make the movie at least as good as the original book/story. There is also an obligation to have at least some meaningful connection to the original work—after all, if there is no such connection then there are no legitimate grounds for having the film bear that name.
Presidents, Pay & Student Debt
Since I received my doctorate from the Ohio State University, I usually feel a tiny bit of unjustified pride when I hear that OSU is #1 in some area. However, I recently found out that OSU is #1 in that the school is the most unequal public university in America. The basis for this claim is that between 2010 and 2012 Gordon Gee, the president of OSU, was paid almost $6 million. At the same time, OSU raised tuition and fees to a degree that resulted in student debt increasing 23% more than the national average (which is itself rather bad).
Like many schools, OSU also pursued what I call the A&A Strategy: the majority of those hired by the school were Adjuncts and Administrators. To be specific, OSU hired 498 adjunct instructors and 670 administrators, while only 45 full-time, permanent faculty were hired.
While adjunct salaries vary, the typical adjunct makes $20,000-25,000 while the average professor makes about $84,000. University presidents make much, much more (the average is $478,896) and the number of presidents making $1 million or more a year is increasing. Such a president makes at least as much as 40 adjuncts (each teaching 8 or more classes an academic year).
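To make the pay gap concrete, here is a minimal back-of-the-envelope sketch in Python using the figures cited above (actual salaries vary, so treat the exact numbers as illustrative):

```python
# Back-of-the-envelope comparison using the figures cited above.
president_pay = 1_000_000      # a million-dollar university president
adjunct_pay = 25_000           # top of the typical adjunct salary range
classes_per_adjunct = 8        # classes an adjunct might teach per year

equivalent_adjuncts = president_pay // adjunct_pay
print(equivalent_adjuncts)                        # 40 adjuncts
print(equivalent_adjuncts * classes_per_adjunct)  # 320 classes per year
```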
Given that the cost of higher education has increased dramatically, thus resulting in a corresponding increase in student debt, it is well worth considering the cause of this increase and what could be done to reduce costs without reducing the quality of education.
One seemingly obvious approach is to consider whether or not presidents are worth the money spent on them. For the million-dollar pay to be fair, the president of a university would need to contribute the equivalent of those 40+ adjuncts in terms of value created. It could, of course, be argued that public university presidents do just that—they bring in money from other rich people, provide prestige and engage in the politics needed to keep money flowing from the state. If so, a million-dollar president is worth 40+ adjuncts. If not, it would seem that either the adjuncts should be paid more or the president paid less (or both) in order to ensure that money is not being wasted—and thus needlessly driving up the cost of education.
At this point, a rather obvious reply is that for big public universities, even a million-dollar president is but a tiny part of the overall budget. As such, cutting the presidential salary would not result in significant savings for the school or the students (assuming savings would be passed on to students). However, something is obviously driving up the cost of education—and it is rather clearly not faculty salary, since the majority of faculty at most public universities is composed of low-paid adjuncts.
One major contribution to the increasing costs has been the increase in the size and cost of the administrative aspect of universities. A recent study found that the public universities that have the highest administrative pay spend half as much on scholarships as they do on administration. This creates a scenario in which students go into debt being taught by adjuncts while supporting a large and often well paid administration. This is not surprising given the example of OSU (hiring 543 instructors and 670 administrators).
It is, of course, easy enough to demonize administrators as useless parasites growing fat on the students, adjuncts and taxpayers. However, a university (like any organization) requires administration. Applications need to be processed, equipment needs to be purchased, programs need to be directed, forms from the state need to be completed, and the payroll has to be handled and so on. As such, there is a clear and legitimate need for administrators. However, this does not entail that all the administrators are needed or that all the high salaries are warranted. As such, one potential way to lower the cost of education is to reduce administrative positions and lower salaries. That is, to take a standard approach used in the business model so often beloved by certain administrators.
Since a public university is not a for-profit institution, the reason for the reduction should be to get the costs in line with the legitimate needs, rather than to make a profit. As such, the reductions could be more just (or merciful) than in the for-profit sector.
In terms of reducing personnel, the focus should be on determining which positions are actually needed to advance the core mission of the university (which should be education). In terms of reducing salary, the focus should be on determining the value generated by the person, with the salary matched to that value. Since administrators seem exceptionally skilled at judging what faculty (especially adjuncts) should be paid, presumably there is a comparable skill for judging what administrators should be paid.
Interestingly enough, a great deal of the administrative work that directly relates to students and education is already handled by faculty. For example, on top of my paid duties as a professor, I have a stack of unpaid administrative duties that are apparently essential for me to do, yet not important enough to properly count as part of my workload. In this I am not unusual. Not surprisingly, many faculty wonder what some administrators actually do, given that so many administrative tasks are handled by faculty and staff. Presumably the extra administrative work done by faculty (usually effectively for free) is already helping schools save money, although perhaps more could be offloaded to faculty for additional savings.
One rather obvious problem is that the people who make the decisions about administrative positions and salaries are typically administrators. While some people are noble and honest enough to report on the true value of their position, self-interest clearly makes an objective assessment problematic. As such, it seems unlikely that the administration would want to act to reduce the administration merely to reduce the cost of education. This is, of course, not impossible—and some administrators would no doubt be quite willing to fire or cut the salaries of other administrators.
Since many state governments have been willing to engage in close management of state universities, one option is for these governments to impose a thorough examination of administrative costs and implement solutions to the high cost of education. Unfortunately, there are sometimes strong political ties between top administrators and the state government and there is the general worry that any cuts will be more political or ill-informed than rationally based.
Despite these challenges, it is clear that the administrative costs need to be addressed head on and that action must be taken—the alternative is ever increasing costs in return for less actual education.
It has also been suggested that the interest rates of student loans be lowered and that more grants be awarded to students. These are both good ideas—those who graduate from college generally have significantly better incomes and end up paying back what they received many times over in taxes and other contributions. However, providing students with more money from the taxpayers does not directly address the cost of education—it shifts it.
Some states, such as my adopted state of Florida, have endeavored to keep costs lower by refusing to increase tuition. While this seems reasonable, one obvious problem is that keeping tuition low without addressing the causes of increased costs does not actually solve the problem—what usually ends up happening is that the university has to cut expenses in response and these cuts tend to be in areas that actually serve the core mission of the university. For example, the university president’s high salary, guaranteed bonuses and perks are not cut—instead faculty are not hired and class sizes are increased. While the tuition does not increase, this comes at the cost of the quality of education. Unless, of course, the guaranteed bonuses of the president are key to education quality.
As such, the primary focus should be on lowering costs in a way that does not sacrifice the quality of education rather than simply lowering costs.
Ethics & E-Cigarettes
While the patent for an e-cigarette-like device dates back to 1965, it is only fairly recently that e-cigarettes (e-cigs) have become popular and readily available. Thanks, in part, to the devastating health impact of traditional cigarettes, there is considerable concern about the e-cig.
A typical e-cig works by electronically heating a cartridge containing nicotine, flavoring and propylene glycol to release a vapor. This vapor is inhaled by the user, delivering the nicotine (and flavor). From the standpoint of ethics, the main concern is whether or not the e-cigs are harmful to the user.
At this point, the health threat, if any, of e-cigs is largely unknown—primarily because of the lack of adequate studies of the product.
While propylene glycol is regarded as safe by the FDA (it is used in soft drinks, shampoos and other products that are consumed or applied to the body), it is not known what effect the substance has if it is heated and inhaled. It might be harmless or it might not. Nicotine, which is regarded as being addictive, might also be harmful. There are also concerns about the “other stuff” in the cartridge that is heated into vapor—there is some indication that the vapors contain carcinogens. However, e-cigs are largely an unknown—aside from the general notion that inhaling particles generated from burning something is often not a great idea.
From a moral standpoint, there is the obvious concern that people are being exposed to a product whose health impact is not yet known. As of this writing, regulation of e-cigs seems to be rather limited and is often inconsistently enforced. Given that the e-cig is largely an unknown, it certainly seems reasonable to determine their potential impact on the consumer so as to provide a rational basis for regulation (which might be to have no regulation).
One stock argument in favor of e-cigs can be cast on utilitarian grounds. While the health impact of e-cigs is unknown, it seems reasonable to accept (at least initially) that they are probably not as bad for people as traditional cigarettes. If people elect to use e-cigs rather than traditional tobacco products, then they will be harmed less than if they used the tobacco products. This reduced harm would thus make e-cigs morally preferable to traditional tobacco products. Naturally, if e-cigs turn out to be worse than traditional tobacco products (which seems somewhat unlikely), then things would be rather different.
There is also the moral (and health) concern that people who would not use tobacco products would use e-cigs on the grounds that they are safer than the tobacco products. If the e-cigs are still harmful, then this would be of moral concern since people would be harmed who otherwise would not be harmed.
One obvious point of consideration is my view that people have a moral right to self-abuse. This is based on Mill’s arguments regarding liberty—others have no moral right to compel a person to do or not do something merely because doing so would be better, healthier or wiser for that person. The right to compel does cover cases in which a person is harming others—so, while I do hold that I have no right to compel people to not smoke, I do have the right to compel people to not expose me to smoke. As such, I can rightfully forbid people from smoking in my house, but not from smoking in their own.
Given the right of self-abuse, people would thus have every right to use e-cigs, provided that they are not harming others (so, for example, I can rightfully forbid people from using them in my house)—even if the e-cigs are very harmful.
However, I also hold to the importance of informed self-abuse: the person has to be able to determine (if she wants to) whether or not the activity is harmful in order for the self-abuse to be morally acceptable. That is, the person needs to be able to determine whether she is, in fact, engaging in self-abuse or not. If the person is unable to acquire the needed information, then this makes the matter a bit more morally complicated.
If the person is being intentionally deceived, then the deceiver is clearly subject to moral blame—especially if the person would not engage in the activity if she was not so deceived. For example, selling people a product that causes health problems and intentionally concealing this fact would be immoral. Or, to use another example, giving people brownies containing marijuana and not telling them would be immoral.
If there is no information available, then the ethics of the situation become rather more debatable. On the one hand, if I know that the effect of a product is unknown and I elect to use it, then it would seem that my decision puts most (if not all) of the moral blame on me, should the product prove to be harmful. This would be, it might be argued, like eating some mushroom found in the woods: if you don’t know what it will do, yet you eat it anyway and it hurts you, shame on you.
On the other hand, it seems reasonable to expect that people who sell products intended for consumption be compelled to determine whether these products will be harmful or not. To use another analogy, if I have dinner at someone’s house, I have the moral expectation that they will not throw some unknown mushrooms from the woods onto the pizza they are making for dinner. Likewise, if a company sells e-cigs, the customers have a legitimate moral expectation that the product will not hurt them. Being permitted to sell products whose effect is not known is morally dubious at best. But, it should be said, people who use such a product do bear some of the moral responsibility—they have an obligation to consider that a product that has not been tested could be harmful before using it. To use an analogy, if I buy a pizza and I know that I have no idea what the mushrooms on it will do to me, then if it kills me some of the blame rests on me—I should know better. But the person who sells pizza also has an obligation to know what is going onto that pizza—they should not sell death pizza.
The same applies to e-cigs: they should not be sold until their effects are at least reasonably determined. But, if people insist on using them without having any real idea whether they are safe or not, they are choosing poorly and deserve some of the moral blame.
The Trolling Test
One interesting philosophical problem is known as the problem of other minds. The basic idea is that although I know I have a mind (I think, therefore I think), the problem is that I need a method by which to know that other entities have (or are) minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.
Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language.
Crudely put, the idea is that if something talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:
How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.
This Cartesian approach was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test.
Not surprisingly, technological advances have resulted in computers that can engage in behavior that appears to involve using language in ways that might pass the test. Perhaps the best known example is IBM’s Watson—the computer that won at Jeopardy. Watson recently upped his game by engaging in what seemed to be a rational debate regarding violence and video games.
In response to this, I jokingly suggested a new test to Patrick Lin: the trolling test. In this context, a troll is someone “who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”
While trolls are apparently truly awful people (a hateful blend of Machiavellianism, narcissism, sadism and psychopathy) and trolling is certainly undesirable behavior, the trolling test does seem worth considering.
In the abstract, the test would work like the Turing test, but would involve a human troll and a computer attempting to troll. The challenge would be for the computer troll to successfully pass as a human troll.
Obviously enough, a computer can easily be programmed to post random provocative comments from a database. However, the real meat (or silicon) of the challenge comes from the computer being able to engage in (ironically) relevant trolling. That is, the computer would need to engage the other commentators in true trolling.
As a controlled test, the trolling computer (“mechatroll”) would “read” and analyze a selected blog post. The post would then be commented on by human participants—some engaging in normal discussion and some engaging in trolling. The mechatroll would then endeavor to troll the human participants (and, for bonus points, to troll the trolls) by analyzing the comments and creating appropriately trollish comments.
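As a rough illustration of the pass criterion, here is a minimal sketch in Python; the judge function, the sample comments and the whole harness are hypothetical stand-ins for the thought experiment, not a real system:

```python
import random

def trolling_test(mechatroll_comments, human_troll_comments, judge):
    """Compare how often a judge labels each set of comments as human-written.

    The mechatroll passes if its rate of being judged "human" is
    indistinguishable from the rate for actual human trolls.
    """
    machine_rate = sum(judge(c) == "human" for c in mechatroll_comments) / len(mechatroll_comments)
    human_rate = sum(judge(c) == "human" for c in human_troll_comments) / len(human_troll_comments)
    return machine_rate, human_rate

# Toy stand-ins: a judge who can only guess cannot tell the two apart,
# so against this judge the mechatroll trivially passes.
guessing_judge = lambda comment: random.choice(["human", "machine"])
print(trolling_test(["wake up sheeple"], ["this post is propaganda"], guessing_judge))
```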
Another option is to have an actual live field test. A specific blog site would be selected that is frequented by human trolls and non-trolls. The mechatroll would then endeavor to engage in trolling on that site by analyzing the posts and comments.
In either test scenario, if the mechatroll were able to troll in a way indistinguishable from the human trolls, then it would pass the trolling test.
While “stupid mechatrolling”, such as just posting random hateful and irrelevant comments, is easy, true mechatrolling would be rather difficult. After all, the mechatroll would need to be able to analyze the original posts and comments to determine the subjects and the direction of the discussion. The mechatroll would then need to make comments that are trollishly relevant, which would require generating comments indistinguishable from those of a narcissistic, Machiavellian, psychopathic, and sadistic human.
While creating a mechatroll would be a technological challenge, it might be suspected that doing so would be undesirable. After all, there are far too many human trolls already and they serve no valuable purpose—so why create a computer addition? One reasonable answer is that modeling such behavior could provide useful insights into human trolls and the traits that make them trolls. As far as a practical application, such a system could be developed into a troll-filter to help control the troll population.
As a closing point, it might be a bad idea to create a system with such behavior—just imagine a Trollnet instead of Skynet—the trollinators would slowly troll people to death rather than just quickly shooting them.
The Robots of Deon
The United States military has expressed interest in developing robots capable of moral reasoning and has provided grant money to some well-connected universities to address this problem (or to at least create the impression that the problem is being considered).
The notion of instilling robots with ethics is a common theme in science fiction, the most famous example being Asimov’s Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the robot has an electro-mechanical seizure if he is ordered to cause harm to a human being (or an id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines of science fiction (like Saberhagen’s Berserkers) tend to be free of the constraints of ethics.
While there are various reasons to imbue (or limit) robots with ethics (or at least engage in the pretense of doing so), one of these is public relations. Thanks to science fiction dating back at least to Frankenstein, people tend to worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics serves to reassure the public (or so it is hoped). Another reason is to make the public relations gimmick a reality—to actually place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot who refuses to kill on moral grounds.
While science fiction features ethical robots, the authors (like philosophers who discuss the ethics of robots) are extremely vague about how robot ethics actually works. In the case of truly intelligent robots, their ethics might work the way our ethics works—which is something that is still a mystery debated by philosophers and scientists to this day. We are not yet to the point of having such robots, so the current practical challenge is to develop ethics for the sort of autonomous or semi-autonomous robots we can build now.
While creating ethics for robots might seem daunting, the limitations of current robot technology mean that robot ethics is essentially a matter of programming these machines to operate in specific ways defined by whatever ethical system is being employed as the guide. One way to look at programming such robots with ethics is that they are being programmed with safety features. To use a simple example, suppose that I regard shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize armed humans and have some code saying, in effect, “if unarmedhuman = true, then firetokill = false” or, in normal English, if the human is unarmed, do not shoot her.
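To flesh out that one-line pseudocode, here is a minimal sketch in Python of how such a constraint might be coded; the Target type and the may_fire check are hypothetical illustrations of the idea, not any actual system:

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_human: bool
    is_armed: bool

def may_fire(target: Target) -> bool:
    """Apply the programmed constraint: never fire on an unarmed human."""
    if target.is_human and not target.is_armed:
        return False
    return True

# The robot's fire-control loop would consult this check before acting:
assert may_fire(Target(is_human=True, is_armed=False)) is False
assert may_fire(Target(is_human=True, is_armed=True)) is True
```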
While a suitably programmed robot would act in a way that seemed ethical, the robot is obviously not engaged in ethical behavior. After all, it is merely a more complex version of the automatic door. The supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you in the face because its cameras show that you are unarmed is not ethical. The killbot that chops you into meaty chunks is not unethical. Following Kant, since the killbot’s programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.
To be fair to the killbots, perhaps we humans are not ethical or unethical under these requirements for ethics—we could just be meat-bots operating under the illusion of ethics. Also, it is certainly sensible to focus on the practical aspect of the matter: if you are a civilian being targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you or not. As such, the general practical problem is getting our killbots to behave in accord with our ethical values.
Achieving this goal involves three main steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter and not an exercise in philosophical inquiry, this will presumably involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating the ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into sets of allowed and forbidden behaviors. This would require creating a definition of a civilian (or perhaps just an unarmed person) that would allow recognition using the sensors of the robot. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize. The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers would need to worry about clever combatants trying to “deceive” the killbot to take advantage of its programming (like pretending to surrender so as to get close enough to destroy the killbot).
Since these robots would be following programmed rules, they would presumably be controlled by deontological ethics—that is, ethics based on following rules. Thus, they would be (with due apologies to Asimov), the Robots of Deon.
An interesting practical question is whether or not the “ethical” programming would allow for overrides or reprogramming. Since the robot’s “ethics” would just be behavior-governing code, it could be changed, and it is easy enough to imagine an ethics preferences setting in which a commander could selectively (or not so selectively) turn off behavioral limitations. And, of course, killbots could simply be programmed without such ethics (or programmed to be “evil”).
The largest impact of the government funding for this sort of research will be that properly connected academics will get surprisingly large amounts of cash to live the science-fiction dream of teaching robots to be good. That way the robots will feel a little bad when they kill us all.
Performance Based Funding
As a professor at Florida A&M University, I am rather familiar with performance based funding in higher education. While performance based funding is being considered or applied in numerous states, I will focus on my adopted state of Florida (it is also present in my home state of Maine).
On the face of it, performance based funding can sound like a good idea: state universities are funded based on performance, so that good performance is rewarded and poor performance is not (or punished). As a competitive athlete (though less so with each passing year), I am accustomed to a similar model in running: those who run better at races get rewarded and those who run poorly typically go home with nothing (other than the usual race t-shirt and perhaps some bagel chunks). This model seems fair—at least in sports. Whether or not it is right or sensible to apply it to education funding is another matter.
One obvious point of concern is whether or not the standards used to judge performance are fair and reasonable. In Florida, the main standards include the percentage of graduates who have jobs, the average wages of those graduates, the cost of getting the degree, the graduation rate within six years, the number of students getting STEM degrees (STEM is hot now), and some other factors.
On the face of it, some of these standards are reasonable. After all, a university would seem to be performing well to the degree that its students are graduating after paying a reasonable cost and getting well-paying jobs. This, of course, assumes that a primary function of a university is to create job-fillers for the job creators (and perhaps a few job creators). In the past, the main factors for determining funding included such things as the size of the student population and what resources would be needed to provide quality education. Universities were also valued because they educated people and prepared them to be citizens of a democratic state. But, now that America appears to be an oligarchy, these values might be obsolete.
Another point of concern is that the competitive system in Florida, like most competitive systems, means that there must be losers. To be specific, Florida has nine state universities competing in regards to performance based funding. The bottom three schools will lose roughly 1% of their funding while the top six will receive more money. This means that no matter how well the nine schools are doing, three of them will always be losers.
This might be seen as reasonable or even good: after all, competition (as noted above) means that there will be winners and losers. This can be seen as a good thing because, it might be argued, the schools will be competing with each other to improve and thus all will get better—even the losers. This, obviously enough, seems to bring a competitive market approach (or Darwinian selection) to education.
The obvious question is whether or not this is a proper approach to higher education. The idea of public universities fighting over limited funding certainly seems harsh—rather like parents making their nine children fight over which six get extra food and which three will go hungry. Presumably, just as responsible parents would not want some of their children to go hungry because they could not beat their siblings, the state should not want to deprive universities of funding because they could not beat their fellows.
It might be contended that just as children could be expected to battle for food in times of scarcity, universities should do the same. After all, desperate times call for desperate measures and not everyone can thrive. Besides, the competition will make everyone stronger.
It is true that higher education faces a scarcity of funding—in Florida, the past four years under Rick Scott and a Republican legislature have seen a 41% cut in funding. Other states have fared even worse. While some scarcity was due to the economic meltdown inflicted by the financial sector, the scarcity is also due to conscious choice in regards to taxing and spending. So, going with the analogy, the parents have cut the food supply and now want the children to battle to see who gets a larger slice of what is left. Will this battle make the schools stronger?
Given the above, a rather important point of concern is whether or not such performance based funding actually works. That is, does it actually achieve the professed goal of increasing performance?
Since I serve on various relevant committees, I can say that my university is very concerned about this funding and great effort is being made to try to keep the school out of the bottom three. This is the same sort of motivation that the threat of having one’s food cut provides—the motivation of fear. While this sort of scenario might appeal to those who idealize the competitive model of natural selection, one obvious consequence is that the schools that fall into the bottom three will lose money and hence become even less able to compete. To use the food analogy, the children that lose the competition in the first round will have less food and thus be weaker for the next battle and so on. So, while “going hungry” might be said to motivate, being hungry also weakens. So, if the true goal is to weaken the bottom three schools (and perhaps ultimately destroy them), this would work quite well. If the goal is to improve education, things might be rather different.
It might be countered that the performance based funding is justified because, despite my argument, it will work. While academics are often accused of not being “practical” or in “the real world”, we do tend to do a reasonably good job of figuring out whether or not something will work. After all, studying things and analyzing them is sort of what we do. In contrast, politicians seem to be more inclined to operate in “realities” of their own ideologies.
David Tandberg of Florida State University and Nicholas W. Hillman of the University of Wisconsin-Madison recently published a study assessing the effectiveness of performance based funding. They concluded that performance based funding “more often than not” failed to affect degree completion. Of considerable concern is that when it did have an effect, it tended to lower graduation rates. Assuming this study is accurate, performance based funding (at least as implemented) is ineffective at best and likely to actually work against its professed goals.
It should be noted that Florida State University is very safely in the top six schools, so Tandberg is presumably not motivated by worries that FSU will fall to the bottom. The study can, of course, be challenged on the usual grounds for critically assessing a study—but mere accusations that professors must be biased or that academics are incompetent hold no water.
Since I am a professor at Florida A&M University, I might also be accused of bias here. FAMU is an HBCU (one of the historically black colleges and universities) and has long had a mission of providing educational opportunities to students who have faced severe disadvantages. While overt racism is largely a thing of the past, FAMU students rather often face serious economic and preparatory challenges (thanks largely to poverty and segregation) that students from other backgrounds do not face. Some of my best students face the serious challenge of balancing part or even full-time work with their academic lives and this can be very challenging indeed. Because of this, students often take longer to graduate than students at other state universities—especially those whose students tend to come from more affluent families. These economic disparities also impact the chances of students getting jobs when they graduate as well as affecting the salary paid in such jobs. Roughly put, the effects of long-standing racism in America still remain and impact my university. While FAMU is working hard to meet the performance standards, we are struggling against factors that do not impact other schools—which means that our performance in regards to these chosen standards might be seen as lacking.
As might be imagined, some will claim that the impact of past racism is a thing of the past and that FAMU should be able to compete just fine against the other schools. This would be ignoring the reality of the situation in America.
Performance based funding of the sort that currently exists fails to achieve its professed goals and is proving harmful to higher education and students. As such, it is a bad idea. Sadly, it is the reality.
Drone Ethics is Easy
When a new technology emerges it is not uncommon for people to claim that the technology is outpacing ethics and law. Because of the nature of law (at least in countries like the United States) it is very easy for technology to outpace the law. However, it is rather difficult for technology to truly outpace ethics.
One reason for this is that any adequate ethical theory (that is, a theory that meets the basic requirements such as possessing prescriptivity, consistency, coherence and so on) will have the quality of expandability. That is, the theory can be applied to what is new, be that technology, circumstances or something else. An ethical (or moral) theory that lacked the capacity for expandability would, obviously enough, become useless immediately and thus would not be much of a theory.
It is, however, worth considering the possibility that a new technology could “break” an ethical theory by being such that the theory could not expand to cover the technology. However, this would show that the theory was inadequate rather than showing that the technology outpaced ethics.
Another reason that technology would have a hard time outpacing ethics is that an ethical argument by analogy can be applied to a new technology. That is, if the technology is like something that already exists and has been discussed in the context of ethics, the ethical discussion of the pre-existing thing can be applied to the new technology. This is, obviously enough, analogous to using ethical analogies to apply ethics to different specific situations (such as a specific act of cheating in a relationship).
Naturally, if a new technology is absolutely unlike anything else in human experience (even fiction), then the method of analogy would fail absolutely. However, it seems somewhat unlikely that such a technology could emerge. But, I like science fiction (and fantasy) and hence I am willing to entertain the possibility of that which is absolutely new. However, it would still seem that ethics could handle it—but perhaps something absolutely new would break all existing ethical theories, showing that they are all inadequate.
While a single example does not provide much in the way of proof, it can be used to illustrate. As such, I will use the matter of “personal” drones to illustrate how ethics is not outpaced by technology.
While remote controlled and automated devices have been around a long time, the expansion of technology has created what some might regard as something new for ethics: drones, driverless cars, and so on. However, drone ethics is easy. By this I do not mean that ethics itself is easy; rather, applying ethics to new technology (such as drones) is not as hard as some might claim. Naturally, actually doing ethics is itself quite hard—but this applies to very old problems (the ethics of war) and very “new” problems (the ethics of killer robots in war).
Getting back to the example, a personal drone is the sort of drone that a typical civilian can own and operate—they tend to be much smaller, lower priced and easier to use relative to government drones. In many ways, these drones are slightly advanced versions of the remote control planes that are regarded as expensive toys. The drones of this sort that seem to most concern people are those that have cameras and can hover—perhaps outside a bedroom window.
Two of the areas of concern regarding such drones are safety and privacy. In terms of safety, the worry is that drones can collide with people (or other vehicles, such as manned aircraft) and injure them. Ethically, this falls under doing harm to people, be it with a knife, gun or drone. Although a drone flies about, the ethics that have been used to handle flying model aircraft, remote-controlled cars, etc. can easily be applied here. So, this aspect of drones has hardly outpaced ethics.
Privacy can also be handled. Simplifying things for the sake of a brief discussion, drones essentially allow a person to (potentially) violate privacy in the usual two “visual” modes. One is to intrude into private property to violate a person’s privacy. In the case of the “old” way, a person can put a ladder against a person’s house and climb up to peek under the window shade and into the person’s bedroom or bathroom. In the “new” way, a person can fly a drone up to the window and peek in using a camera. While the person is not physically present in the case of the drone, his “agent” is present and is trespassing. Whether a person is using a ladder or a drone to gain access to the window does not change the ethics of the situation in regards to the peeking, assuming that people have a right to control access to their property.
A second way is to peek into “private space” from “public space.” In the case of the “old way” a person could stand on the public sidewalk and look into other peoples’ windows or yards—or use binoculars to do so. In the “new” way, a person can deploy his agent (the drone) in public space in order to do the same sort of thing.
One potential difference between the two situations is that a drone can fly and thus can get viewing angles that a person on the ground (or even with a ladder) could not get. For example, a drone might be in the airspace far above a person’s backyard, sending back images of the person sunbathing in the nude behind her very tall fence on her very large estate. However, this is not a new situation—paparazzi have used helicopters to get shots of celebrities and the ethics are the same. As such, ethics has not been outpaced by the drones in this regard. This is not to say that the matter is solved—people are still debating the ethics of this sort of “spying”, but to say that it is not a case where technology has outpaced ethics.
What is mainly different about the drones is that they are now affordable and easy to use—so whereas only certain people could afford to hire a helicopter to get photos of celebrities, now camera-equipped drones are easily in reach of the hobbyist. So, it is not that the drone provides new capabilities that worries people—it is that it puts these capabilities in the hands of the many.
Talking Points & Climate Change

[Figure: animated global map of monthly long-term mean surface air temperature (Mollweide projection). Photo credit: Wikipedia]
While science and philosophy are supposed to be about determining the nature of reality, politics is often aimed at creating perceptions that are alleged to be reality. This is why it is generally wiser to accept claims supported by science and reason over claims “supported” by ideology and interest.
The matter of climate change is a matter of both science (since the climate is an objective feature of reality) and politics (since perception of reality can be shaped by rhetoric and ideology). Ideally, the facts of climate change would be left to science and sorting out how to address it via policy would fall, in part, to the politicians. Unfortunately, politicians and other non-scientists have taken it on themselves to make claims about the science, usually in the form of unsupported talking points.
On the conservative side, there has been a general shifting in the talking points. Originally, there was one main talking point: there is no climate change and the scientists are wrong. This point was often supported by alleging that the scientists were motivated by ideology to lie about the climate. In contrast, those whose profits could be impacted if climate change was real were taken as objective sources.
In the face of mounting evidence and shifting public opinion, this talking point became the claim that while climate change is occurring, it is not caused by humans. This then shifted to the claim that climate change is caused by humans, but there is nothing we can (or should) do now.
In response to the latest study, certain Republicans have embraced three talking points. These points do seem to concede that climate change is occurring and that humans are responsible. These points do have a foundation that can be regarded as rational and each will be considered in turn.
One talking point is that the scientists are exaggerating the impact of climate change and that it will not be as bad as they claim. This does rest on a reasonable concern about any prediction: how accurate is the prediction? In the case of a scientific prediction based on data and models, the reasonable inquiry would focus on the accuracy of the data and how well the models serve as models of the actual world. To use an analogy, the reliability of predictions about the impact of a crash on a vehicle based on a computer model would hinge on the accuracy of the data and the model and both could be reasonable points of inquiry.
Since the climate scientists have the data and models used to make the predictions, to properly dispute the predictions would require showing problems with either the data or the models (or both). Simply saying they are wrong would not suffice—what is needed is clear evidence that the data or models (or both) are defective in ways that would show the predictions are excessive in terms of the predicted impact.
One indirect way to do this would be to find clear evidence that the scientists are intentionally exaggerating. However, if the scientists are exaggerating, then this would be provable by examining the data and plugging it into an accurate model. That is, the scientific method should be able to be employed to show the scientists are wrong.
In some cases people attempt to argue that the scientists are exaggerating because of some nefarious motivation—a liberal agenda, a hatred of oil companies, a desire for fame or some other wickedness. However, even if it could be shown that the scientists have a nefarious motivation, it does not follow that the predictions are wrong. After all, to dismiss a claim because of an alleged defect in the person making the claim is a fallacy. Being suspicious because of a possible nefarious motive can be reasonable, though. So, for example, the fact that the fossil fuel companies have a great deal at stake here does not prove that their claims about climate change are wrong. But the fact that they have considerable incentive to deny certain claims does provide grounds for suspicion regarding their objectivity (and hence credibility). Naturally, if one is willing to suspect that there is a global conspiracy of scientists, then one should surely be willing to consider that fossil fuel companies and their fellows might be influenced by their financial interests.
One could, of course, hold that the scientists are exaggerating for noble reasons—that is, they are claiming it is worse than it will be in order to get people to take action. To use an analogy, parents sometimes exaggerate the possible harms of something to try to persuade their children not to try it. While this is nicer than ascribing nefarious motives to scientists, it is still not evidence against their claims. Also, even if the scientists are exaggerating, there is still the question about how bad things really would be—they might still be quite bad.
Naturally, if an objective and properly conducted study can be presented that shows the predictions are in error, then that is the study that I would accept. However, I am still waiting for such a study.
The second talking point is that the laws being proposed will not solve the problems. Interestingly, this certainly seems to concede that climate change will cause problems. This point does have a reasonable foundation in that it would be unreasonable to pass laws aimed at climate change that are ineffective in addressing the problems.
While crafting the laws is a matter of politics, sorting out whether such proposals would be effective does seem to fall in the domain of science. For example, if a law proposes to cut carbon emissions, there is a legitimate question as to whether or not that would have a meaningful impact on the problem of climate change. Showing this would require having data, models and so on—merely saying that the laws will not work is obviously not enough.
Now, if the laws will not work, then the people who confidently make that claim should be equally confident in providing evidence for their claim. It seems reasonable to expect that such evidence be provided and that it be suitable in nature (that is, based in properly gathered data, examined by impartial scientists and so on).
The third talking point is that the proposals to address climate change will wreck the American economy. As with the other points, this does have a rational basis—after all, it is sensible to consider the impact on the economy.
One way to approach this is on utilitarian grounds: that we can accept X environmental harms (such as coastal flooding) in return for Y (jobs and profits generated by fossil fuels). Assuming that one is a utilitarian of the proper sort and that one accepts this value calculation, then one can accept that enduring such harms could be worth the advantages. However, it is well worth noting that as usual, the costs will seem to fall heavily on those who are not profiting. For example, the flooding of Miami and New York will not have a huge impact on fossil fuel company profits (although they will lose some customers).
Making the decisions about this should involve openly considering the nature of the costs and benefits as well as who will be hurt and who will benefit. Vague claims about damaging the economy do not allow us to make a proper moral and practical assessment of whether the approach is correct or not. It might turn out that staying the course is the better option—but this needs to be determined with an open and honest assessment. However, there is a long history of such assessments not happening—so I am not optimistic.
It is also worth considering that addressing climate change could be good for the economy. After all, preparing coastal towns and cities for the (allegedly) rising waters could be a huge and profitable industry creating many jobs. Developing alternative energy sources could also be profitable as could developing new crops able to handle the new conditions. There could be a whole new economy created, perhaps one that might rival more traditional economic sectors and newer ones, such as the internet economy. If companies with well-funded armies of lobbyists got into the climate change countering business, I suspect that a different tune would be playing.
To close, the three talking points do raise questions that need to be answered:
- Is climate change going to be as bad as it is claimed?
- What laws (if any) could effectively and properly address climate change?
- What would be the cost of addressing climate change and who would bear the cost?
Neil deGrasse Tyson, Science & Philosophy
In March of 2014 popular astrophysicist and Cosmos host Neil deGrasse Tyson did a Nerdist Podcast. This did not garner much attention until May when some philosophers realized that Tyson was rather critical and dismissive of philosophy. As might be imagined, there was a response from the defenders of philosophy. Some critics went so far as to accuse him of being a philistine.
Tyson presents a not uncommon view of contemporary philosophy, namely that “asking deep questions” can cause a “pointless delay in your progress” in engaging “this whole big world of unknowns out there.” To avoid such pointless delays, Tyson advises scientists to respond to such questioners by saying, “I’m moving on, I’m leaving you behind, and you can’t even cross the street because you’re distracted by deep questions you’ve asked of yourself. I don’t have time for that.”
Since Tyson certainly seems to be a deep question sort of guy, it is tempting to consider that his remarks are not serious—that is, he is being sarcastic. Even if he is serious, it is also reasonable to consider that these remarks are off-the-cuff and might not represent his considered view of philosophy in general.
It is also worth considering that the claims made are his considered and serious position. After all, the idea that a scientist would regard philosophy as useless (or worse) is quite consistent with my own experiences in academia. For example, the politically fueled rise of STEM and the decline of the humanities has caused some in STEM to regard this situation as confirmation of their superior status, and on some occasions I have had to defuse conflicts instigated by STEM faculty making their views about the uselessness of non-STEM fields clear.
Whatever the case, the concern that the deep questioning of philosophy can cause pointless delays does actually have some merit and is well worth considering. After all, if philosophy is useless or even detrimental, then this would certainly be worth knowing.
The main bite of this criticism is that philosophical questioning is detrimental to progress: a scientist who gets caught in these deep questions, it seems, would be like a kayaker caught in a strong eddy: she would be spinning around and going nowhere rather than making progress. This concern does have significant practical merit. To use an analogy outside of science, consider a committee meeting aimed at determining the curriculum for state schools. This committee has an objective to achieve and asking questions is a reasonable way to begin. But imagine that people start raising deep questions about the meaning of terms such as “humanities” or “science” and become very interested in sorting out the semantics of various statements. This sort of sidetracking will result in a needlessly long meeting and little or no progress. After all, the goal is to determine the curriculum and deep questions will merely slow down progress towards this practical goal. Likewise, if a scientist is endeavoring to sort out the nature of the cosmos, deep questions can be a similar sort of trap: she will be asking ever deeper questions rather than gathering data and doing math to answer her less deep questions.
Philosophy, as Socrates showed by deploying his Socratic method, can endlessly generate deep questions. Questions such as “what is the nature of the universe?”, “what is time?”, “what is space?”, “what is good?” and so on. Also, as Socrates showed, for each answer given, philosophy can generate more questions. It is also often claimed that this shows that philosophy really has no answers since every alleged answer can be questioned or raises even more questions. Thus, philosophy seems to be rather bad for the scientist.
A key assumption seems to be that science is different from philosophy in at least one key way—while it raises questions, proper science focuses on questions that can be answered or, at the very least, gets down to the business of answering them and (eventually) abandons a question should it turn out to be a distracting deep question. Thus, science provides answers and makes progress. This, obviously enough, ties into another stock criticism of philosophy: philosophy makes no progress and is useless.
One rather obvious reason that philosophy is regarded as not making progress and as being useless is that when enough progress is made on a deep question, it is perceived as being a matter for science rather than philosophy. For example, ancient Greek philosophers, such as Democritus, speculated about the composition of the universe and its size (was it finite or infinite?) and these were considered deep philosophical questions. Even Newton considered himself a natural philosopher. He has, of course, been claimed by the scientists (many of whom conveniently overlook the role of God in his theories). These questions are now claimed by physicists, such as Tyson, who regard them as scientific rather than philosophical questions.
Thus, it is rather unfair to claim that philosophy does not solve problems or make progress—since when excellent progress is made, the discipline is labeled as science and no longer considered philosophy. However, the progress would have obviously been impossible without the deep questions that set people in search of answers and the work done by philosophers before the field was claimed as a science. To use an analogy, to claim that philosophy has made no progress or contributions would be on par with a student taking the work done by another, adding to it and then claiming the whole as his own work and deriding the other student as “useless.”
At this point, some might be willing to grudgingly concede that philosophy did make some valuable contributions (perhaps on par with how the workers who dragged the marble for Michelangelo’s David contributed) in the past, but philosophy is now an eddy rather than the current of progress.
Interestingly enough, philosophy has been here before—back in the days of Socrates the Sophists contended that philosophical speculation was valueless and that people should focus on getting things done—that is, achieving success. Fortunately for contemporary science, philosophy survived and philosophers kept asking those deep questions that seemed so valueless then.
While philosophy’s day might be done, it seems worth considering that some of the deep, distracting philosophical questions that are being asked are well worth pursuing—if only because they might lead to great things. Much as how Democritus’ deep questions led to the astrophysics that a fellow named Neil loves so much.