One stock criticism of philosophers is their uselessness: they address useless matters or address useful matters in a way that is useless. One interesting specific variation is to criticize a philosopher for philosophically discussing matters of what might be. For example, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another example, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific sort of criticism.
One version of this sort of criticism can be seen as practical: since the shape of what might be cannot be known, philosophical discussions involve a double speculation: the first speculation is about what might be and the second is the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value—and this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).
This sort of criticism often serves as the foundation for a second sort of criticism. This second criticism does assume that philosophy has value, and it is this assumption that gives it force. The basic idea is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards all philosophy as useless would regard philosophical discussion about what might be as a waste of time—responding to this view would require a general defense of philosophy, which goes beyond the scope of this short essay. Now, to return to the matter at hand.
As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should have focused on the ethical problems regarding current warfare. After all, there is a multitude of unsolved moral problems in regards to existing warfare—there hardly seems any need to add more unsolved problems until either the existing problems are solved or the possible problems become actual problems.
This does have considerable appeal. To use an analogy, if a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology she might have in the future. This is, of course, the classic “don’t you have something better to do?” problem.
As might be suspected, this criticism rests on the principle that resources should be spent effectively and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is certainly reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting time. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.
As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be a fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.
In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation would generally be harm free. That is, it is rather unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game of Scrabble or watching a sunset. It would be preferable to have a somewhat better defense of such philosophical discussions of the shape of things (that might be) to come.
A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives in force. To use the classic analogy, it is much easier to address a rolling snowball than the avalanche that it will cause.
In the case of speculative matters that have ethical aspects, it seems that it would be generally useful to already have moral discussions in place ahead of time. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car—it certainly seems to be a good idea to work out the ethics of how the car should be programmed to “decide” what to hit and what to avoid when an accident is occurring. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems. Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It would seem to be a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.
Philosophers also like to discuss what might be in contexts other than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications that matter—even (or even especially) in regards to speculation about what might be.
To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be that person across time? While this might seem to be a purely theoretical concern, it quickly becomes a very practical concern when one is discussing the above-mentioned technology. For example, consider a company that offers a special sort of life insurance: they claim they can back up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether that restored back-up would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is a rather different matter from paying so that someone who thinks they are you can go to your house and have sex with your spouse after you are dead.
There are, of course, numerous other examples that can be used to illustrate the value of such speculation of what might be—in fact, I have already written many of these in previous posts. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a practical sense.
Donald gazed down upon the gleaming city of Newer York and the gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.
Donald’s thoughts drifted to the flesh-time, when his body had been a skin-bag holding an array of organs that were always but one accident or mischance away from failure. Gazing upon his polymer outer shell and checking a report on his internal systems, he reflected on how much better things were now. Then, he faced the constant risk of death. Now he could expect to exist until the universe grew cold. Or hot. Or exploded. Or whatever it is that universes do when they die.
But he could not help but be haunted by a class he had taken long ago. The professor had talked about the ship of Theseus and identity. How much of the original could be replaced before it lost identity and ceased to be? Fortunately, his mood regulation systems caught the distress and promptly corrected the problem, encrypting that file and flagging it as forgotten.
Donald returned to gazing upon the magnificent city, pleased that the flesh-time had ended during his lifetime. He did not even wonder where Donald’s bones were, that thought having been flagged as distressing long ago.
While the classic AI apocalypse ends humanity with a bang, the end might be a quiet thing—gradual replacement rather than rapid and noisy extermination. For some, this sort of quiet end could be worse: no epic battle in which humanity goes out guns blazing and head held high in defiance. Rather, humanity would simply fade away, rather like a superfluous worker or obsolete piece of office equipment.
There are various ways such scenarios could take place. One, which occasionally appears in science fiction, is that humans decline because the creation of a robot-dependent society saps them of what it takes to remain the top species. This, interestingly enough, is similar to what some conservatives claim about government-dependence, namely that it will weaken people. Of course, the conservative claim is that such dependence will result in more breeding, rather than less—in the science fiction stories human reproduction typically slows and eventually stops. The human race quietly ends, leaving behind the machines—which might or might not create their own society.
Alternatively, the humans become so dependent on their robots that when the robots fail, they can no longer take care of themselves and thus perish. Some tales do have happier endings: a few humans survive the collapse and the human race gets another chance.
There are various ways to avoid such quiet apocalypses. One is to resist creating such a dependent society. Another option is to have a safety system against a collapse. This might involve maintaining skills that would be needed in the event of a collapse or, perhaps, having some human volunteers who live outside of the main technological society and who will be ready to keep humanity going. These certainly do provide a foundation for some potentially interesting science fiction stories.
Another, perhaps more interesting and insidious, scenario is that humans replace themselves with machines. While it has long been a stock plot device in science fiction, there are people in the actual world who are eagerly awaiting (or even trying to bring about) the merging of humans and machines.
While the technology of today is relatively limited, the foundations of the future are being laid down. For example, prosthetic replacements are fairly crude, but it is merely a matter of time before they are as good as or better than the organic originals. As another example, work is being done on augmenting organic brains with implants for memory and skills. While these are unimpressive now, there is the promise of things to come. These might include such things as storing memories in implanted “drives” and loading skills or personalities into one’s brain.
These and other technologies point clearly towards the cyberpunk future: full replacements of organic bodies with machine bodies. Someday people with suitable insurance or funds could have their brains (and perhaps some of their glands) placed within a replacement body, one that is far more resistant to damage and the ravages of time.
The next logical step is, obviously enough, the replacement of the mortal and vulnerable brain with something better. This replacement will no doubt be a ship of Theseus scenario: as parts of the original organic brain begin to weaken and fail, they will be gradually replaced with technology. For example, parts damaged by a stroke might be replaced. Some will also elect to do more than replace damaged or failed parts—they will want augmentations added to the brain, such as improved memory or cognitive enhancements.
Since the human brain is mortal, it will fail piece by piece. Like the ship of Theseus so beloved by philosophers, eventually the original will be completely replaced. Laying aside the philosophical question of whether or not the same person will remain, there is the clear and indisputable fact that what remains will not be Homo sapiens—it will not be a member of that species, because nothing organic will remain.
Should all humans undergo this transformation, that will be the end of Homo sapiens—the AI apocalypse will be complete. To use a rough analogy, the machine replacements of Homo sapiens will be like the fossilization of dinosaurs: what remains has some interesting connection to the originals, but the species are extinct. One important difference is that our fossils would still be moving around and might think that they are us.
It could be replied that humanity would still remain: the machines that replaced the organic Homo sapiens would be human, just not organic humans. The obvious challenge is presenting a convincing argument that such entities would be human in a meaningful way. Perhaps inheriting the human culture, values and so on would suffice—that being human is not a matter of being a certain sort of organism. However, as noted above, they would obviously no longer be Homo sapiens—that species would have been replaced in the gradual and quiet AI apocalypse.
Hume Video #1
Hume Video #3: Skepticism regarding the senses.
Hume Video #4: This is the unedited video from the 4/14/2015 Modern Philosophy class. It covers Hume’s theory of personal identity, his ethical theory and some of his philosophy of religion.
Hume & Kant Video #5: This is the unedited video for Modern Philosophy on 4/16/2015. It covers the end of Hume’s philosophy of religion and the start of the material on Kant.
Kant Video #1: This is the unedited video from the 4/21/2015 Modern Philosophy class. It covers Kant’s epistemology and his metaphysics, including phenomena vs. noumena.
Kant Video #2: This is the unedited video from my 4/23/2015 Modern Philosophy class. It wraps up Kant’s metaphysics and briefly covers his categorical imperative.
The following provides a (mostly) complete Introduction to Philosophy course.
Readings & Notes (PDF)
Class Videos (YouTube)
Part I Introduction
Class #2: This is the unedited video for the 5/12/2015 Introduction to Philosophy class. It covers the last branches of philosophy, two common misconceptions about philosophy, and argument basics.
Class #3: This is the unedited video for class three (5/13/2015) of Introduction to Philosophy. It covers analogical argument, argument by example, argument from authority and some historical background for Western philosophy.
Class #4: This is the unedited video for the 5/14/2015 Introduction to Philosophy class. It concludes the background for Socrates, covers the start of the Apology and includes most of the information about the paper.
Class #5: This is the unedited video of the 5/18/2015 Introduction to Philosophy class. It concludes the details of the paper, covers the end of the Apology and begins part II (Philosophy & Religion).
Part II Philosophy & Religion
Class #6: This is the unedited video for the 5/19/2015 Introduction to Philosophy class. It concludes the introduction to Part II (Philosophy & Religion), covers St. Anselm’s Ontological Argument and some of the background for St. Thomas Aquinas.
Class #7: This is the unedited video from the 5/20/2015 Introduction to Philosophy class. It covers Thomas Aquinas’ Five Ways.
Class #8: This is the unedited video for the eighth Introduction to Philosophy class (5/21/2015). It covers the end of Aquinas, Leibniz’ proofs for God’s existence and his replies to the problem of evil, and the introduction to David Hume.
Class #9: This is the unedited video from the ninth Introduction to Philosophy class on 5/26/2015. This class continues the discussion of David Hume’s philosophy of religion, including his work on the problem of evil. The class also covers the first 2/3 of his discussion of the immortality of the soul.
Class #10: This is the unedited video for the 5/27/2015 Introduction to Philosophy class. It concludes Hume’s discussion of immortality, covers Kant’s critiques of the three arguments for God’s existence, explores Pascal’s Wager and starts Part III (Epistemology & Metaphysics). Best of all, I am wearing a purple shirt.
Part III Epistemology & Metaphysics
Class #11: This is the 11th Introduction to Philosophy class (5/28/2015). The class covers Plato’s theory of knowledge, his metaphysics, the Line and the Allegory of the Cave.
Class #12: This is the unedited video for the 12th Introduction to Philosophy class (6/1/2015). This class covers skepticism and the introduction to Descartes.
Class #13: This is the unedited video for the 13th Introduction to Philosophy class (6/2/2015). The class covers Descartes’ 1st Meditation, Foundationalism and Coherentism, as well as the start of the Metaphysics section.
Class #14: This is the unedited video for the fourteenth Introduction to Philosophy class (6/3/2015). It covers the methodology of metaphysics and roughly the first half of Locke’s theory of personal identity.
Class #15: This is the unedited video of the fifteenth Introduction to Philosophy class (6/4/2015). The class covers the 2nd half of Locke’s theory of personal identity, Hume’s theory of personal identity, Buddha’s no self doctrine and “Ghosts & Minds.”
Class #16: This is the unedited video for the 16th Introduction to Philosophy class. It covers the problem of universals, the metaphysics of time travel in “Meeting Yourself” and the start of the metaphysics of Taoism.
Part IV Value
Class #17: This is the unedited video for the seventeenth Introduction to Philosophy class (6/9/2015). It begins part IV and covers the introduction to ethics and the start of utilitarianism.
Class #18: This is the unedited video for the eighteenth Introduction to Philosophy class (6/10/2015). It covers utilitarianism and some standard problems with the theory.
Class #19: This is the unedited video for the 19th Introduction to Philosophy class (6/11/2015). It covers Kant’s categorical imperative.
Class #20: This is the unedited video for the twentieth Introduction to Philosophy class (6/15/2015). This class covers the introduction to aesthetics and Wilde’s “The New Aesthetics.” The class also includes the start of political and social philosophy, with the introduction to liberty and fascism.
Class #21: No video.
Class #22: This is the unedited video for the 22nd Introduction to Philosophy class (6/17/2015). It covers Emma Goldman’s anarchism.
Thanks to improvements in medicine, humans are living longer and can be kept alive well past the point at which they would naturally die. On the plus side, longer life is generally (but not always) good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age. This care can be a considerable burden on the caregivers. Not surprisingly, there has been an effort to develop a technological solution to this problem, specifically companion robots that serve as caregivers.
While the technology is currently fairly crude, there is clearly great potential here and there are numerous advantages to effective robot caregivers. The most obvious are that robot caregivers do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be ideal 24/7/365 caregivers. This makes them superior in many ways to human caregivers who get tired, get depressed, get angry and have many other responsibilities.
There are, of course, some concerns about the use of robot caregivers. Some relate to such matters as their safety and effectiveness while others focus on other concerns. In the case of caregiving robots that are intended to provide companionship and not just things like medical and housekeeping services, there are both practical and moral concerns.
In regards to companion robots, there are at least two practical concerns regarding the companion aspect. The first is whether or not a human will accept a robot as a companion. In general, the answer seems to be that most humans will do so.
The second is whether or not the software will be advanced enough to properly read a human’s emotions and behavior in order to generate a proper emotional response. This response might or might not include conversation—after all, many people find non-talking pets to be good companions. While a talking companion would, presumably, need to eventually be able to pass the Turing Test, it would also need to pass an emotion test—that is, read and respond correctly to human emotions. Since humans often botch this, there would be a fairly broad tolerable margin of error here. These practical concerns can be addressed technologically—it is simply a matter of software and hardware. Building a truly effective companion robot might require making them very much like living things—the comfort of companionship might be improved by such things as smell, warmth and texture. That is, by making the companion appeal to all the senses.
While the practical problems can be solved with the right technology, there are some moral concerns with the use of robot caregiver companions. Some relate to people handing off their moral duties to care for their family members, but these are not specific to robots. After all, a person can hand off the duties to another person and this would raise a similar issue.
In regards to those specific to a companion robot, there are moral concerns about the effectiveness of the care—that is, are the robots good enough that trusting them with the life of an elderly or sick human would be morally responsible? While that question is important, a rather intriguing moral concern is that the robot companions are a deceit.
Roughly put, the idea is that while a companion robot can simulate (fake) human emotions via cleverly written algorithms to respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit and such a deceit seems to be morally wrong.
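To make the worry concrete, here is a deliberately minimal, hypothetical sketch (the function, dictionary and canned phrases are all invented for illustration) of the sort of mechanism at issue: a lookup from a detected emotional state to a scripted sympathetic reply. Nothing in it feels anything; it only maps inputs to outputs.

```python
# Hypothetical sketch of a companion robot's "emotional" response logic.
# The detected emotion is assumed to come from separate emotion-recognition
# software; here it is just a string label.

RESPONSES = {
    "sad": "I'm sorry you're feeling down. Would you like to talk about it?",
    "happy": "That's wonderful to hear!",
    "angry": "That sounds frustrating. I understand.",
}

def respond(detected_emotion: str) -> str:
    """Return a scripted reply for a detected emotion; no feeling involved."""
    return RESPONSES.get(detected_emotion, "I see. Tell me more.")

print(respond("sad"))
```

However sophisticated the real detection and generation might be, the structure is the same: behavior selected by rules, which is precisely why the critic calls the companionship a deceit.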
One obvious response is that people would realize that the robot does not really experience emotions, yet still gain value from its “fake” companionship. To use an analogy, people often find stuffed animals to be emotionally reassuring even though they are well aware that the stuffed animal is just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect—if someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real in order to be effective.
It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship that a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.
One way to reply to this is to consider what it is about people that people deserve. One reasonable approach is to build on the idea that people have the capacity to actually feel the emotions that they display and to actually understand. In philosophical terms, humans have (or are) minds and robots (of the sort that will be possible in the near future) do not have minds. They merely create the illusion of having a mind.
Interestingly enough, philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior). While such behaviorism has been largely abandoned, it does survive in a variety of jokes and crude references to showing people some “love behavior.”
The usual “solution” to the problem is to go with the obvious: I think that other people have minds by an argument from analogy. I am aware of my own mental states and my behavior and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others I infer that they are also in pain.
I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in many and various forms of deceit. For example, a person can fake being in pain or make a claim about love that is untrue. Piercing these deceptions can sometimes be very difficult since humans are often rather good at deceit. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way he wants people to believe he is thinking and feeling.
In contrast, a companion robot is not thinking or feeling what it is displaying in its behavior, because it does not think or feel. Or so it is believed. The reason that a person would think this seems reasonable: in the case of a robot, we can go in and look at the code and the hardware to see how it all works and we will not see any emotions or thought in there. The robot, however complicated, is just a material machine, incapable of thought or feeling.
Long before robots, there were thinkers who claimed that a human is a material entity and that a suitable understanding of the mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes thoughts and emotions and understand the hardware it “runs” on.
Should this goal be achieved, it would seem that humans and suitably complex robots would be on par—both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit or that humans are genuine. The difference would merely be that humans are organic machines and robots are not.
It can, and has, been argued that there is more to a human person than the material body—that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.
However, they might still be a useful deceit—going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does and this will yield all the benefits of having a human companion.
As it is wont to do, the internet exploded again—this time because the question was raised as to whether Rachel Dolezal, the former leader of Spokane’s NAACP chapter, is black or white. Ms. Dolezal has claimed that she is African-American, Native American and white. She also has claimed that her father is black. Reporters at KXLY-TV, however, looked up her birth certificate and determined that her legal parents are both white. Her parents have asserted that she is white.
While the specifics of her case are certainly interesting to many, my concern is with the more general issues raised by this situation, specifically matters about race and identity. While this situation is certainly the best known case of a white person trying to pass for black, passing as another “race” has been a common practice in the United States for quite some time. However, this passing was the reverse of Ms. Dolezal’s attempt: trying to pass as white. Since being accepted as white enables a person to avoid many disadvantages, it is clear why people would attempt to pass as white. Since being accepted as black generally does not confer advantages, it is not surprising that there has been only one known case of a white person endeavoring to pass as black. These matters raise some interesting questions and issues about race.
Borrowing language from metaphysics, one approach to race could be called race realism. This is not being realistic about race in the common use of the term “realistic.” Rather, it is accepting that race is a real feature of reality—that is, the metaphysical and physical reality includes categories of race. On this view, black and white could be real categories grounded in metaphysical and physical reality. As such, a person could be objectively black or white (or a mix). Naturally, even if there are real categories of race, people could be wrong about them.
The stark alternative is what could be called race nominalism. This is the idea that racial categories are social constructs and do not line up with an underlying metaphysical and physical reality. This is because there is no underlying metaphysical and physical reality that objectively grounds racial categories. Instead, categories of race are social constructs. In this case, a person might engage in self-identification in regards to race and this might or might not be accepted by others. A person might also have others place her into a race category—which she might or might not accept.
Throughout history, some people have struggled mightily to find an objective basis for categories of race. Before genetics, people had to make use of appearance and ancestry. The ancestry was, obviously, needed because people did not always look like the race category that some people wanted them to be in. One example of this is the “one drop” rule once popular in some parts of the United States: one drop of black blood made a person black, regardless of appearance.
The discovery of genes provided some people with a new foundation for race categories—they believed that there would be a genetic basis to categorizations. The idea was that just as a human can be distinguished from a cat by genes, humans of different race categories could be distinguished by their genetic make-up. While humans do show genetic variations that are often linked to the geographical migration and origin of their many ancestors, the much-desired race genes did not seem to be found. That is, humans (not surprisingly) are all humans with some minor genetic variations—variations that are not sufficient to objectively ground race categories.
In general, the people who quested for objective foundations for race categories were (or are) racists. These searches typically involved trying to find evidence of the superiority of one’s race and the inferiority of other races. That said, a person could look for foundations for race without being a racist—that is, they could be engaged in a scientific or philosophical inquiry rather than seeking to justify social practices and behaviors. As might be suspected, such an inquiry would be greeted today with charges of racism. As such, it is no surprise that the generally accepted view is that race is a construct—that is, race nominalism rather than race realism is accepted.
Given the failure to find a metaphysical or physical foundation for race categories, it certainly makes sense to embrace race nominalism. On this view, the categories of race exist only in the mind—that is, they are how people divide up reality rather than how reality is carved up. Even if it is accepted that race is a social construct, there is still the matter of the rules of construction—that is, how the categories are created and how people are placed in the categories.
One approach, which is similar to that sometimes taken in regards to gender, is to hold that people can self-identify. That is, a person can simply declare her race and this is sufficient to be in that category. If race categories are essentially made up, this does have a certain appeal—if race is a fiction, then surely anyone can be the author of her own fiction.
While there are some who do accept this view, the outrage over Ms. Dolezal shows that most people seem to reject the idea of self-identification—at least when a white person endeavors to self-identify as black. Interestingly, some of those condemning her do defend the reverse, the historical passing as white by some black people. The defense is certainly appealing: blacks endeavoring to pass as white were doing so to move from being in an oppressed class and this can be justified as a form of self-defense. In the case of Ms. Dolezal, the presumption seems to be that the self-identification was both insincere and aimed at personal gain. Regardless of her true motivation, insincere self-identification aimed at personal gain seems to be wrong—on the grounds that it is a malign deception. Some might, of course, regard all attempts at passing to gain an advantage as being immoral and not distinguish based on the direction of the passing.
Another approach is that of the social consensus. The idea is that a person’s membership in a race category depends on the acceptance of others. This could be a matter of majority acceptance (one is, for example, black if most people accept one as black) or acceptance by a specific group or social authority. The obvious problem is working out what group or authority has the right to decide membership in race categories. On the one hand, this very notion seems linked to racism: one probably thinks of the KKK setting its race categories or the Nazis doing so. On the other hand, groups also seem to want to serve as the authority for their race category. Consistency might indicate that this would also be racist.
The group or authority that decides membership in race categories might make use of a race credential system to provide a basis for their decisions. That is, they might make use of appearance and ancestry. So, Ms. Dolezal would not be black because she looks white and has white parents. The concern with this sort of approach is that this is the same tool set used by racists, such as the KKK, to divide people by race. A more philosophical concern is the basis for using appearance and ancestry as the foundation for race categories—that is, what justifies their use?
This discussion does show an obvious concern with policing race categories—it seems like doing so uses the tools of racism and would thus seem to be at least a bit racist. However, arguments could be advanced as to why the policing of race categories is morally acceptable and not racist.
Thanks to Caitlyn Jenner’s appearance in Vanity Fair, the issue of gender identity has become a mainstream topic. While I will not address the specific subject of Caitlyn Jenner, I will discuss the matter of gender nominalism and competition. This will, however, require some small amount of groundwork.
One of the classic problems in philosophy is the problem of universals. Put a bit roughly, the problem is determining in virtue of what (if anything) a particular a is of the type F. To use a concrete example, the question would be “in virtue of what is Morris a cat?” Philosophers tend to split into two main camps when answering this question. One camp, the nominalists, embrace nominalism. Put a bit simply, this is the view that what makes a particular a an F is that we name it an F. For example, what makes Morris a cat is that we call (or name) him a cat.
The other camp, the realists, take the view that there is a metaphysical reality underlying a being of the type F. Put another way, it is not just a matter of naming or calling something an F that makes it an F. In terms of what makes a be of the type F, different realist philosophers give different answers. Plato famously claimed that it is the Form of F that makes individual F things F. Or, to use an example, it is the Form of Beauty that makes all the beautiful things beautiful. And, presumably, the Form of ugly that makes the ugly things ugly. Others, such as myself, accept these odd things called tropes (not to be confused with the tropes of film and literature) that serve a similar function.
While realists believe in the reality of some categories, they generally accept that there are some categories that are not grounded in features of objective reality. As such, most realists do accept that the nominalists are right about some categories. To use an easy example, being a Democrat (or Republican) is not grounded in metaphysics, but is a social construct—the political party is made up and membership is a matter of social convention rather than metaphysical reality. Or, put another way, there is presumably no Form of Democrat (or Republican).
When it comes to sorting out sex and gender, the matter is rather complicated and involves (or can involve) four or more factors. One is the anatomy (plumbing) of the person, which might (or might not) correspond to the second, which is the genetic makeup of the person (XX, XY, XYY, etc.). The third factor is the person’s own claimed gender identity which might (or might not) correspond to the fourth, which is the gender identity assigned by other people.
While anatomy and physiology are adjustable (via chemicals and surgery), they are objective features of reality—while a person can choose to alter her anatomy, merely changing how one designates one’s sex does not change the physical features. While a complete genetic conversion (XX to XY or vice versa) is not yet possible, it is probably just a matter of time. However, even when genetics can be changed on demand, a person’s genetic makeup is still an objective feature of reality—a person cannot (yet) change his genes merely by claiming a change in designation.
Gender is, perhaps, quite another matter. Like many people, I used to use the terms “sex” and “gender” interchangeably—I still recall (running) race entry forms using one or the other and everyone seemed to know what was meant. However, I eventually learned that the two are not the same: a person might have one biological sex and a different gender. While familiar with the science fiction idea of a multitude of genders, I eventually became aware that this was now a thing in the actual world.
Obviously, if gender is taken as the same as sex (which is set by anatomy or genetics), then gender would be an objective feature of reality and not subject to change merely by a change in labeling (or naming). However, gender has been largely (or even entirely) split from biological sex (anatomy or genetics) and is typically cast in terms of being a social construct. This view can be labeled as “gender nominalism.” By this I mean that gender is not an objective feature of reality, like anatomy, but a matter of naming, like being a Republican or Democrat.
Some thinkers have cast gender as being constructed by society as a whole, while others contend that individuals have lesser or greater ability to construct their own gender identities. The question, then, is to what degree individuals can construct their own gender identities: people can place whatever gender label they wish upon themselves, but there is still the question of the role of others in that gender identity. There is also the moral question about whether or not others are morally required to accept such gender self-identification. These matters are part of the broader challenge of identity in terms of who defines one’s identity (and what aspects) and to what degree people are morally obligated to accept these assignments (or declarations of identity).
My own view is to go with the obvious: people are free to self-declare whatever gender they wish, just as they are free to make any other claim of identity that is a social construct (which is a polite term for “made up”). So, a person could declare that he is a straight, Republican, Rotarian, fundamentalist, Christian man. Another person could declare that she is a lesbian, Republican, Mason, Jewish woman. And so on. But, of course, there is the matter of getting others to recognize that identity. For example, if a person identifies as a Republican, yet believes in climate change, argues for abortion rights, endorses same-sex marriage, supports Obama, favors tax increases, supports education spending, endorses the minimum wage, and is pro-environment, then other Republicans could rightly question the person’s Republican identity and claim that that person is a RINO (Republican in Name Only). As another example, a biological male could declare identity as a woman, yet still dress like a man, act like a man, date women, and exhibit no behavior that is associated with being a woman. In this case, other women might (rightly?) accuse her of being a WINO (Woman in Name Only).
In cases in which self-identification has no meaningful consequences for other people, it certainly makes sense for people to freely self-identify. In such cases, claiming to be F makes the person F, and what other people believe should have no impact on that person being F. That said, people might still dispute a person’s claim. For example, if someone self-identifies as a Trekkie, yet knows little about Star Trek, others might point out that this self-identification is in error. However, since this has no meaningful consequences, the person has every right to insist on being a Trekkie, though doing so might suggest that he is about as smart as a tribble.
In cases in which self-identification does have meaningful consequences for others, then there would seem to be moral grounds (based on the principle of harm) to allow restrictions on such self-identification. For example, if a relatively fast male runner wanted to self-identify as a woman so “she” could qualify for the Olympics, then it would seem reasonable to prevent that from happening. After all, “she” would bump a qualified (actual) woman off the team, which would be wrong. Because of the potential for such harms, it would be absurd to accept that everyone is obligated to accept the self-identification of others.
The flip side of this is that others should not have an automatic right to deny the self-identification of others. As a general rule, the principle of harm would seem to apply here as well—the others would have the right to impose in cases in which there is actual harm and the person would have the right to refuse the forced identity of others when doing so would inflict wrongful harm. The practical challenge is, clearly enough, working out the ethics of specific cases.
There is an old legend that king Midas for a long time hunted the wise Silenus, the companion of Dionysus, in the forests, without catching him. When Silenus finally fell into the king’s hands, the king asked what was the best thing of all for men, the very finest. The daemon remained silent, motionless and inflexible, until, compelled by the king, he finally broke out into shrill laughter and said these words, “Suffering creature, born for a day, child of accident and toil, why are you forcing me to say what would give you the greatest pleasure not to hear? The very best thing for you is totally unreachable: not to have been born, not to exist, to be nothing. The second best thing for you, however, is this — to die soon.”
-Nietzsche, The Birth of Tragedy
One rather good metaphysical question is “why is there something rather than nothing?” An interesting question in the realm of value is “is it better to be nothing rather than something?” That is, is it better “not to have been born, not to exist, to be nothing?”
Addressing the question does require sorting out the measure of value that should be used to decide whether it is better to not exist or to exist. One stock approach is to use the crude currencies of pleasure and pain. A somewhat more refined approach is to calculate in terms of happiness and unhappiness. Or one could simply go generic and use the vague categories of positive value and negative value.
What also must be determined are the rules of the decision. For the individual, a sensible approach would be the theory of ethical egoism—that what a person should do is what maximizes the positive value for her. On this view, it would be better if the person did not exist if her existence would generate more negative than positive value for her. It would be better if the person did exist if her existence would generate more positive than negative value for her.
To make an argument in favor of never existing being better than existing, one likely approach is to make use of the classic problem of evil as laid out by David Hume. When discussing this matter, Hume contends that everyone believes that life is miserable and he lays out an impressive catalog of pains and evils. While he concedes that pain might be less frequent than pleasure, he notes that even if this is true, pain “is infinitely more violent and durable.” As such, Hume makes a rather good case that the negative value of existence outweighs its positive value.
If it is true that the negative value outweighs the positive value, and better is measured in terms of maximizing value, then it would thus seem to be better to have never existed. After all, existence will result (if Hume is right) in more pain than pleasure. In contrast, non-existence will have no pain (and no pleasure) for a total of zero. Doing the value math, since zero is greater than a negative value, never existing is better than existing.
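The “value math” here can be made concrete with a small sketch. The numbers below are purely illustrative assumptions (nothing Hume provides); the point is only that any existence whose hedonic total comes out negative loses to the flat zero of never existing.

```python
# A minimal sketch of the value math above, using made-up numbers.
# The hedonic magnitudes are illustrative assumptions, not data.

def net_value(pleasures, pains):
    """Net value of an existence: sum of pleasures minus sum of pains."""
    return sum(pleasures) - sum(pains)

# If Hume is right, pains are "more violent and durable," so they dominate.
existing = net_value(pleasures=[10, 5, 3], pains=[20, 15])
never_existing = 0  # no pleasures, no pains

print(existing)                    # a negative total
print(existing < never_existing)   # never existing scores higher
```

On these (assumed) numbers the calculation favors nonexistence; the interesting philosophical question, taken up next, is whether the comparison to zero is even coherent.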
There does seem to be something a bit odd about this sort of calculation. After all, if the person does not exist, then her pleasure and pain would not balance to zero. Rather it would seem that this sum would be an undefined value. It cannot be better for a person that she not exist, since there would (obviously) not be anyone for the nonexistence to be better for.
This can be countered by saying that this is but a semantic trick—the nonexistence would be better than the existence because of the relative balance of pleasure and pain. There is also another approach—to broaden the calculation from the individual to the world.
In this case, the question would not be about whether it would be better for the individual to exist or not, but whether or not a world with the individual would be better than a world without the individual. If a consequentialist approach is assumed, pain and pleasure are taken as the measure of value, and the pain outweighs the pleasure in every life, then the world would be better if a person never existed. This is because the absence of an individual would reduce the overall pain. Given these assumptions, a world with no humans at all would be a better world. This could be extended to its logical conclusion: if the suffering outweighs the pleasures in the case of all beings (Hume did argue that the suffering of all creatures exceeds their enjoyments), then it would be better that no feeling creatures existed at all. At this point, one might as well do away with existence altogether and have nothing. Thus, while it might not be known why there is something rather than nothing, this argument would seem to show that it would be better to have nothing rather than something.
Of course, this reasoning rests on many assumptions that can be easily challenged. It can be argued that the measure of value is not to be done solely in terms of pleasures and pains—that is, even if life resulted in more pain than pleasure, the overall positive value could be greater than the negative value. For example, the creation of art and the development of knowledge could provide value that outweighs the pain. It could also be argued that the consequentialist approach is in error—that estimating the worth of life is not just a matter of tallying up the negative and positive. There are, after all, many other moral theories regarding the value of existence. It is also possible to dispute the claim that pain exceeds pleasure (or that unhappiness exceeds happiness).
One could also take a long view—even if pain outweighs pleasure now, humans seem to be making a better world and advancing technology. As such, it is easy to imagine that a better world lies ahead and it depends on our existence. That is, if one looks beyond the pleasure and pain of one’s own life and considers the future of humanity, the overall balance could very well be that the positive outweighs the negative. As such, it would be better for a person to exist—assuming that she has a role in the causal chain leading to that ultimate result.
Most people are familiar with the notion that energy cannot be destroyed. Interestingly, there is also a rule in quantum mechanics that forbids the destruction of information. This principle, called unitarity, is often illustrated by the example of burning a book: though the book is burned, the information still remains—although it would obviously be much harder to “read” a burned book. This principle has, in recent years, run into some trouble with black holes, which might or might not be able to destroy information. My interest here is not with this specific dispute, but rather with the question of whether or not the indestructibility of information has any implications for immortality.
On the face of it, the indestructibility of information seems rather similar to the conservation of energy. Long ago, when I was an undergraduate, I first heard the argument that because of the conservation of energy, personal immortality must be real (or at least possible). The basic line of reasoning was that a person is energy, energy cannot be destroyed, so a person will exist forever. While this has considerable appeal, the problem is obvious: while energy is conserved, it certainly need not be preserved in the same form. That is, even if a person is composed of energy it does not follow that the energy remains the same person (or even a person). David Hume was rather clear about the problem—an indestructible or immortal substance (or energy) does not entail the immortality of a person. When discussing the possibility of immortality, he claims that nature uses substance like clay: shaping it into various forms, then reshaping the matter into new forms so that the same matter can successively make up the bodies of living creatures. By analogy, an immaterial substance could successively make up the minds of living creatures—the substance would not be created or destroyed, it would merely change form. However, the person would cease to be.
Prior to Hume, John Locke also noted the same sort of problem: even if, for example, you had the same soul (or energy) as Nestor, you would not be the same person as Nestor any more than you would be the same person as Nestor if, in an amazing coincidence, your body contained at this instant all the atoms that composed Nestor at a specific instant in time.
Hume and Locke certainly seem to be right about this—the indestructibility of the stuff that makes up a person (be it body or soul) does not entail the immortality of the person. If a person is eaten by a bear, the matter and energy that composed him will continue to exist—but the person did not survive being eaten by the bear. If there is a soul, the mere continuance of the soul would also not seem to suffice for the person to continue to exist as the same person (although this can obviously be argued). What would be needed would be the persistence of what makes up the person. This is usually taken to be something other than just stuff, be that stuff matter, energy, or ectoplasm. So, the conservation of energy does not seem to entail personal immortality—but the conservation of information might (or might not).
Put a bit crudely, Locke took this something other to be memory: personal identity extends backwards as far as the memory extends. Since people clearly forget things, Locke did accept the possibility of memory loss. Being consistent in this matter, he accepted that the permanent loss of memory would result in a corresponding failure of identity. Crudely put, if a person truly did not and could never remember doing something, then she was not the person who did it.
While there are many problems with the memory account of personal identity, it certainly suggests a path to quantum immortality through the conservation of information. One approach would be to argue that since information is conserved, the person is conserved even after the death and dissolution of the body. Just like the burned book whose information still exists, the person’s information would still exist.
One obvious reply to this is that a person is an active being and not just a collection of information. To use a rather rough analogy, a person could be seen as being like a computer program—to be is to be running. Or, to use a more artistic analogy, like a play: while the script would persist after the final curtain, the play itself is over. As such, while the person’s information would be conserved, the person would cease to be. This sort of “quantum immortality” is remarkably similar to Spinoza’s view of immortality. While he denied personal immortality, he claimed that “the human mind cannot be absolutely destroyed with the body, but something of it remains which is eternal.” Spinoza, of course, seemed to believe that this should comfort people. Perhaps some comfort should be taken in the fact that one’s information will be conserved (barring an unfortunate encounter with a black hole).
However, people would probably be more comforted by a reason to believe in an afterlife. Fortunately, the conservation of information does provide at least a shot at an afterlife. If information is conserved and all there is to a person can be conserved as information, then a person could presumably be reconstructed after his death. For example, imagine a person, Laz, who died in an accident and was buried. The remains could, in theory, be dug up and the information about the body could be recovered (to a point prior to death, of course). The body could, with suitably advanced technology, be reconstructed. The reconstructed brain could, in theory, have all the memories and such recovered and restored as well. This would be a technological resurrection in the flesh and the person would certainly seem to live again. Assuming that every piece of information was preserved, recovered, and restored in the flesh, it would be the person—just as if a moment had passed rather than, say, a thousand years. This would be, obviously, in theory. Actual resurrection technology would presumably involve various flaws and limitations. But, the idea seems sound enough.
One potential problem is an old one for philosophers—if a person could be reconstructed from such information, she could also be duplicated from such information. To use the obvious analogy, this would be like 3D printing from a data file, except what would be printed would be a person. Or, to use another analogy, it would be like reconstructing an old computer and reloading all the software. There would certainly not be any reason to wait until the person died, unless there was some sort of copyright or patent held by the person on herself that expired a certain time after her death.
In closing, I leave you with this: some day in the far future, you might find that you (or someone like you) have just been reprinted. In 3D, of course.
While the ethical status of animals has been debated since at least the time of Pythagoras, the serious debate over whether or not animals are people has just recently begun to heat up. While it is easy to dismiss the claim that animals are people, it is actually a matter worth considering.
There are at least three types of personhood: legal personhood, metaphysical personhood and moral personhood. Legal personhood is the easiest of the three. While it would seem reasonable to expect some sort of rational foundation for claims of legal personhood, it is really just a matter of how the relevant laws define “personhood.” For example, in the United States corporations are people while animals and fetuses are not. There have been numerous attempts by opponents of abortion to give fetuses the status of legal persons. There have even been some attempts to make animals into legal persons.
Since corporations are legal persons, it hardly seems absurd to make animals into legal people. After all, higher animals are certainly closer to human persons than are corporate persons. These animals can think, feel and suffer—things that actual people do but corporate people cannot. So, if it is not absurd for Hobby Lobby to be a legal person, it is not absurd for my husky to be a legal person. Or perhaps I should just incorporate my husky and thus create a person.
It could be countered that although animals do have qualities that make them worthy of legal protection, there is no need to make them into legal persons. After all, this would create numerous problems. For example, if animals were legal people, they could no longer be owned, bought, or sold, because, with the inconsistent exception of corporate people, people cannot legally be bought, sold, or owned.
Since I am a philosopher rather than a lawyer, my own view is that legal personhood should rest on moral or metaphysical personhood. I will leave the legal bickering to the lawyers, since that is what they are paid to do.
Metaphysical personhood is real personhood in the sense that it is what it is, objectively, to be a person. I use the term “metaphysical” here in the academic sense: the branch of philosophy concerned with the nature of reality. I do not mean “metaphysical” in the pop sense of the term, which usually is taken to be supernatural or beyond the physical realm.
When it comes to metaphysical personhood, the basic question is “what is it to be a person?” Ideally, the answer is a set of necessary and sufficient conditions such that if a being has them, it is a person and if it does not, it is not. This matter is also tied closely to the question of personal identity. This involves two main concerns (other than what it is to be a person): what makes a person the person she is and what makes the person distinct from all other things (including other people).
Over the centuries, philosophers have endeavored to answer this question and have come up with a vast array of answers. While this oversimplifies things greatly, most definitions of person focus on the mental aspects of being a person. Put even more crudely, it often seems to come down to this: things that think and talk are people. Things that do not think and talk are not people.
John Locke presents a paradigm example of this sort of definition of “person.” According to Locke, a person “is a thinking intelligent being, that has reason and reflection, and can consider itself as itself, the same thinking thing, in different times and places; which it does only by that consciousness which is inseparable from thinking, and, as it seems to me, essential to it: it being impossible for any one to perceive without perceiving that he does perceive.”
Given Locke’s definition, animals that are close to humans in capabilities, such as the great apes and possibly whales, might qualify as persons. Locke does not, unlike Descartes, require that people be capable of using true language. Interestingly, given his definition, fetuses and brain-dead bodies would not seem to be people. Unless, of course, the mental activities are going on without any evidence of their occurrence.
Other people take a rather different approach and do not focus on mental qualities that could, in principle, be subject to empirical testing. Instead, they rest personhood on possessing a specific sort of metaphysical substance or property. Most commonly, this is the soul: things with souls are people, things without souls are not people. Those who accept this view often (but not always) claim that fetuses are people because they have souls and animals are not because they lack souls. The obvious problem is trying to establish the existence of the soul.
There are, obviously enough, hundreds or even thousands of metaphysical definitions of “person.” While I do not have my own developed definition, I do tend to follow Locke’s approach and take metaphysical personhood to be a matter of having certain qualities that can, at least in principle, be tested for (at least to some degree). As a practical matter, I go with the talking test—things that talk (by this I mean true use of language, not just making noises that sound like words) are most likely people. However, this does not seem to be a necessary condition for personhood and it might not be sufficient. As such, I am certainly willing to consider that creatures such as apes and whales might be metaphysical people like me—and erring in favor of personhood seems to be a rational approach for those who want to avoid harming people.
Obviously enough, if a being is a metaphysical person, then it would seem to automatically have moral personhood. That is, it would have the moral status of a person. While people do horrible things to other people, having the moral status of a person is generally a good thing because non-evil people are generally reluctant to harm other people. So, for example, a non-evil person might hunt squirrels for food, but would certainly not (normally) hunt humans for food. If that non-evil person knew that squirrels were people, then he would certainly not hunt them for food.
Interestingly enough, beings that are not metaphysical persons (that is, are not really people) might have the status of moral personhood. This is because the moral status of personhood might correctly or reasonably apply to non-persons.
One example is that a brain-dead human might no longer be a person, yet because of the former status as a person still be justly treated as a person in terms of its moral status. As another example, a fetus might not be an actual person, but its potential to be a person might reasonably grant it the moral status of a person.
Of course, it could be countered that such non-people should not have the moral status of full people, though they should (perhaps) have some moral status. To use the obvious example, even those who regard the fetus as not being a person would tend to regard it as having some moral status. If, to use a horrific example, a pregnant woman were attacked and beaten so that she lost her fetus, that would not just be a wrong committed against the woman but also a wrong against the fetus itself. That said, there are those who do not grant a fetus any moral status at all.
In the case of animals, it might be argued that although they do not meet the requirements to truly be people, some of them are close enough to warrant being treated as having the moral status of people (perhaps with some limitations, such as those imposed on children in regard to rights and liberties). The obvious counter to this is that animals can be given moral statuses appropriate to them rather than treating them as people.
Immanuel Kant took an interesting approach to the status of animals. In his ethical theory Kant makes it quite clear that animals are means rather than ends. People (rational beings), in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims that we have no direct duties to animals. They are classified in with the other “objects of our inclinations” that derive value from the value we give them.
Interestingly enough, Kant argues that we should treat animals well. However, he does so while also trying to avoid ascribing animals themselves any moral status. Here is how he does it (or tries to do so).
While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards people. To make his case for this, he employs an argument from analogy: if a person doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.
Given this approach, Kant could be seen as regarding animals as virtual or ersatz people—or at least as regarding in that way those animals close enough to people to engage in activities that would create obligations if done by people.
In light of this discussion, there are three answers to the question raised by the title of this essay. Are animals legally people? The answer is a matter of law—what does the law say? Are animals really people? The answer depends on which metaphysical theory is correct. Do animals have the moral status of people? The answer depends on which, if any, moral theory is correct.