In my previous essay I rambled a bit about homosexuality and choice. The main point of this was to set up this essay, which focuses on the ethics of engineering people to be straight.
In general terms, sexual orientation is either a choice or it is not (though choice can be a matter of degree). Currently, many of the people who are against homosexuality take the view that it is a matter of choice. This allows them to condemn homosexuality and to push for methods aimed at motivating people to choose to be straight. Many of those who are at least tolerant of homosexuality contend that sexual orientation is not a matter of choice. They are, of course, careful to take the view that being homosexual is more like being left-handed than having an inherited disease. This view is taken as justification for at least tolerating homosexuality and as a reason to oppose attempts to pressure homosexuals into the impossible task of choosing to be straight.
For the sake of this essay, let it be assumed that homosexuality is not a matter of choice—a person is either born with her orientation or it develops in a way that is beyond her choice. Blaming or condemning the person would be on par with blaming a person for being born with blue eyes or condemning a person for being left-handed. As such, if homosexuality is not a choice, then it would be unjust to condemn or blame a person for her sexual orientation. This seems reasonable.
Ironically, this line of reasoning might make it morally permissible to change a person’s orientation from gay to straight. The argument for this is as follows.
As has been supposed, a person’s sexual orientation is not a matter of choice: she is either born that way or becomes that way without being able to affect the outcome. The person is thus a “victim” of whatever forces made her that way. If these forces had been different in certain ways, then she would have had a different sexual orientation—either by chance or by the inexorable machinery of determinism. Given that the person is not making a choice either way, it would seem to be morally acceptable for these factors to be altered to ensure a specific orientation. To use an analogy, I did not choose my eye color and it would seem not to matter whether this was due to a natural process or due to an intentional intervention on the part of others (by modifying me genetically). After all, the choice is not mine either way.
It could be replied that other people would not have the right to make the choice—that it should be left to blind chance (or blind determinism). This does have some merit—whoever changed a person would be morally accountable for the change. However, from the standpoint of the person, there would seem to be no difference: she does not get a choice either way. I ended up with blue eyes by chance, but if I had been engineered to have green eyes, the result would be the same: my eye color would not be my choice. I ended up a heterosexual, but if I had been engineered to be a homosexual, I would have had no more or less choice.
Thus, robbing a person of choice would not be a moral concern here: if a person does not get a choice, she cannot be robbed of that choice. What is, however, of moral concern is the ethics of the choice being made to change (or not change) the person. If the change is beneficial, such as changing a person so that her heart develops properly rather than failing before she is born, then it would seem to be the right thing to do. If the change is harmful, such as altering the person’s brain so that he suffers from paranoia and psychosis, then it would seem to be the wrong thing to do.
In the matter at hand, the key concern would be whether making a person a heterosexual or a homosexual would be good or bad. As noted above, since it is assumed that sexual orientation is not a choice, engineering a person to be straight or gay would not be robbing them of a choice. Also, the change of orientation can be assumed to be thorough, so that a person would be equally happy either way. In this case, the right choice would seem to be a matter of consequences: would a person be more likely to be happy straight or gay? Given the hostility that still exists towards homosexuals, it would seem that engineering people to be straight would be the right choice.
This might strike some as horrifying and a form of orientation genocide (oriocide?) in which homosexuals are eliminated. Or, more accurately, homosexuality is eliminated. After all, the people who would have been homosexual (by chance or by the mechanisms of determinism) would instead be straight, but they would still presumably be the same people they would be if they were gay (unless sexual orientation is an essential quality in Aristotle’s sense of the term). If orientation is not a choice, it would seem that this would not matter: no one is robbed of a choice because one cannot be robbed of what one never possessed.
A rather interesting question remains: if sexual orientation is not a choice, what harm would be done if everyone were engineered to be straight? Or gay?
Since the matter of choice is rather interesting to me, it is hardly a shock that I would be interested in the question of whether or not sexual orientation is a choice. One obvious problem with trying to settle this matter is that it seems impossible to prove (or disprove) the existence of the capacity for choice. As Kant argued, free will seems to lie beyond the reach of our knowledge. As such, it would seem that it could not be said with confidence that a person’s sexual orientation is a matter of choice. But, this is nothing special: the same can be said about the person’s political party, religion, hobbies and so on.
Laying aside the metaphysical speculation, it can be assumed (or perhaps pretended) that people do have a choice in some matters. Given this assumption, the question would seem to be whether sexual orientation legitimately belongs in the category of things that can be reasonably assumed to be matters of choice.
On the face of it, sexual orientation seems to fall within the realm of sexual preference. That is, in the domain of what a person finds sexually appealing and attractive. This seems to fall within a larger set of what a person finds appealing and attractive.
At this time, it seems reasonable to believe that what people find appealing and attractive has some foundation in neural hardwiring rather than in what could be regarded as choice. For example, humans apparently find symmetrical faces more attractive than non-symmetrical faces and this is not a matter of choosing to prefer one over another. Folks who like evolution tend to claim that this preference exists because those with symmetrical faces are often healthier and hence better for breeding purposes.
Food preferences probably also involve hard wiring: humans really like salty and sweet foods and the usual explanation also ties into evolution. For example, sweet foods are high calorie foods but are rare in nature, hence our ancestors who really liked sweets did better at surviving than those who did not really like sweets. Or some such story of survival of the sweetest.
Given the assumption that there are such hardwired preferences, it is conceivable that sexual preferences also involve some hardwiring. So, for example, a person might be hardwired to have a preference for sexual partners with light hair over those with dark hair. Then again, the preference might be based on experience—the person might have had positive experiences with those with light hair and thus was conditioned to have that preference. The challenge is, of course, to sort out the causal role of hard wiring from the causal role of experience (including socialization). What is left over might be what could be regarded as choice.
In the case of sexual orientation, it seems reasonable to have some doubts about experience being the primary factor. After all, homosexual behavior has long been condemned, discouraged and punished. As such, it seems less likely that people would be socialized into being homosexual—especially in places where being homosexual is punishable by death. However, this is not impossible—perhaps people could be somehow socialized into being gay by all the social efforts to make them be straight.
In regards to hardwiring for sexual orientation, that seems to have some plausibility. This is mainly because there seems to be a lack of evidence that homosexuality is chosen. Assuming that the options are choice, nature or nurture, then eliminating choice and nurture would leave nature. But, of course, this could be a false trilemma: there might be other options.
It can be objected that people do choose homosexual behavior and thus being homosexual is a choice. While this does have some appeal, it is important to distinguish between a person’s orientation and what the person chooses to do. A person might be heterosexual and choose to engage in homosexual activity in order to gain the protection of a stronger male in prison. A homosexual might elect to act like a heterosexual to avoid being killed. However, these choices would not seem to change their actual orientation. As such, I tend to hold that orientation is not a choice but that behavior is a matter of choice.
This past Saturday, I was doing my short pre-race day run and, for no apparent reason, my left leg began to hurt badly. I made my way home, estimating the odds of a recovery by Sunday morning. When I got up Sunday, my leg felt better and my short jog before the race went well. Just before the start, I was optimistic: it seemed my leg would be fine. Then the race started. Then the pain.
I hobbled forward and “accelerated” to an 8:30-per-mile pace (the downside of a GPS watch is that I cannot lie to myself). The beast of pain grew strong and tore at my will. Behind that armor, my fear and doubt cowered—urging me to drop out with whispered pleas. At that moment of weakness, I considered doing the unthinkable: hobbling over to the curb and leaving the race.
From the inside, that is in my mind, this seemed to be a paradigm example of the freedom of the will: I could elect to push on through the pain or I could decide to take the curb. It was, as it might be said, all up to me. While I was once pulled from a race because of injuries, I had never left one by choice—and I decided that this would not be my first. I kept going and the pain got worse.
At this point, I considered that my pride was pushing me to my destruction—that is, I was not making a good choice but being coerced into making a poor decision. Fortunately, three decades of running had trained me well in pain assessment: like most veteran runners I am reasonably good at distinguishing between what merely hurts and what is actually causing significant damage. Carefully considering the nature of the pain and the condition of my leg, I judged that it was mere pain. While I could still decide to stop, I decided to keep going. I did, however, grab as many of the high caffeine GU packs as I could—I figured that being wired up as much as possible would help with pain management.
Aided by the psychological boost of my self-medication (and commentary from friends about my unusually slow pace), I chose to speed up. By the time I reached mile 5 my leg had gone comfortably numb and I increased my speed even more, steadily catching and passing people. Seven miles went by and then I caught up with a former student. He yelled “I can’t let you pass me Dr. L!” and went into a sprint. I decided to chase after him, believing that I could still hobble a mile even if I was left with only one working leg. Fortunately, the leg held up better than my student—I got past him, then several more people and crossed the finish line running a not too bad 1:36 half-marathon. My leg remained attached to me, thus vindicating my choice. I then chose to stuff pizza into my pizza port—pausing only to cheer on people and pick up my age group award.
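As a side note, the race arithmetic above can be sanity-checked: a 1:36 half-marathon works out to an average pace of roughly 7:19 per mile, far quicker than the 8:30 hobble at the start. Here is a minimal Python sketch of that check (the function name and the rounding convention are my own, not anything from the essay):

```python
# Convert a half-marathon finish time into an average per-mile pace.
HALF_MARATHON_MILES = 13.1094  # official half-marathon distance in miles

def average_pace(finish_minutes: float, miles: float = HALF_MARATHON_MILES):
    """Return (minutes, seconds) per mile for a given finish time in minutes."""
    pace = finish_minutes / miles              # minutes per mile as a float
    whole_minutes = int(pace)                  # whole minutes per mile
    seconds = round((pace - whole_minutes) * 60)  # leftover fraction as seconds
    return whole_minutes, seconds

if __name__ == "__main__":
    m, s = average_pace(96.0)  # 1:36 finish = 96 minutes
    print(f"{m}:{s:02d} per mile")  # roughly 7:19 per mile
```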
As the above narrative indicates, my view is that I was considering my options, assessing information from my body and deciding what to do. That is, I had cast myself as having what philosophers like to label as free will. From the inside, that is what it certainly seems like.
Of course, it would presumably seem the same way from the inside if I lacked free will. Spinoza, for example, claims that if a stone were conscious and hurled through the air, it would think it was free to choose to move and land where it does. As Spinoza saw it, people think they are free because they are “conscious of their own actions, and ignorant of the causes by which those actions are determined.” As such, on Spinoza’s view my “decisions” were not actual decisions. That is, I could not have chosen otherwise—like the stone, I merely did what I did and, in my ignorance, believed that I had decided my course.
Hobbes also takes a somewhat similar view. As he sees it, what I would regard as the decision making process of assessing the pain and then picking my action he would regard as a competition between two pulling forces within the mechanisms of my brain. One force would be pulling towards stopping, the other towards going. Since the forces were closely matched for a moment, it felt as if I was deliberating. But, the matter was determined: the go force was stronger and the outcome was set.
While current science would not bring in Spinoza’s God and would be more complicated than Hobbes’s view of the body, the basic idea would remain the same: the apparent decision making would be best explained by the workings of the “neuromachinery” that is me—no choice, merely the operation of a purely mechanical (in the broad sense) organic machine. Naturally, many would throw in some quantum talk, but randomness does not provide any more freedom than strict determinism.
While I think that I am free and that I was making choices in the race, I obviously have no way to prove that. At best, all that could be shown was that my “neuromachinery” was working normally and without unusual influence—no tumors, drugs or damage impeding the way it “should” work. Of course, some might take my behavior as clear evidence that there was something wrong, but they would be engaged in poor decision making.
Kant seems to have gotten it quite right: science can never prove that we have free will, but we certainly do want it. And pizza.
In the Dr. Who story Inferno, the Doctor’s malfunctioning TARDIS console drops him into a parallel universe inhabited by counterparts of the people of his home reality. Ever philosophical, the Doctor responds to his discovery with the following reasoning: “An infinity of universes. Ergo an infinite number of choices. So free will is not an illusion after all. The pattern can be changed.”
While the Doctor does not go into detail regarding his inference, his reasoning seems to be that since the one parallel universe he ended up in is different from his own in many ways (the United Kingdom is a fascist state in that universe and the Brigadier has an eye patch), it follows that at least some of the differences are due to different choices and this entails that free will is real.
While the idea of being able to empirically confirm free will is appealing, the Doctor’s inference is flawed: the existence of an infinity of universes and differences between at least some (two) of these universes does not show that free will is real. This is because the existence of differences between different universes would be consistent with there being no free will.
One possibility is that determinism is true, but different universes are, well, different. That is, each universe is a deterministic universe with no free will, yet they are not all identical. To use an analogy, two planets could be completely deterministic, yet different. As such, the people of Dr. Who’s universe were determined to be the way they are, while the people of the parallel universe were determined to be the way they are.
It could be objected that all universes are at least initially identical and hence any difference between them must be explained by metaphysical free will. However, even if it is granted for the sake of argument that all universes start out identical to each other, it still does not follow that the explanation for differences between them is due to free will.
The rather obvious alternative explanation is that randomness is the key factor—that is, each universe is random rather than deterministic. In this case, universes could differ from each other without there being any free will at all. To use an analogy, the fact that dice rolls differ from each other does not require free will to explain the difference—random chance would suffice. On this view, the people of the Doctor’s universe just turned out as they did because of chance and the same is true of their counterparts—only the dice rolls were a bit different, so their England was fascist and their Brigadier had an eye patch.
Interestingly enough, if the Doctor had ended up in a universe just like his own (which he might—after all, there would be no way to tell the difference), this would not have disproved free will. While it is unlikely that all the choices made in the two universes would be the same, given an infinity of universes it would not be impossible. As such, differences between universes or a lack thereof would prove nothing about free will.
My position, as usual, is that I should believe in free will. If I am right, then it is certainly the right thing to believe. If I am wrong, then I could not have done otherwise or perhaps it was just the result of randomness. Either way, I would have no choice. That, I think, is about all that can be sensibly said about metaphysical free will.
Science fiction is often rather good at predicting the future and it is not unreasonable to think that the intelligent machine of science fiction will someday be a reality. Since I have been writing about sexbots lately, I will use them to focus the discussion. However, what follows can also be applied, with some modification, to other sorts of intelligent machines.
Sexbots are, obviously enough, intended to provide sex. It is equally obvious that sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped or not. Sorting this out requires considering the matter of consent in more depth.
When it is claimed that sex without consent is rape, one common assumption is that the victim of non-consensual sex is a being that could provide consent but did not. A violent sexual assault against a person would be an example of this as would, presumably, non-consensual sex with an unconscious person. However, a little reflection reveals that the capacity to provide consent is not always needed in order for rape to occur. In some cases, the being might be incapable of engaging in any form of consent. For example, a brain dead human cannot give consent, but presumably could still be raped. In other cases, the being might be incapable of the right sort of consent, yet still be a potential victim of rape. For example, it is commonly held that a child cannot properly consent to sex with an adult.
In other cases, a being that cannot give consent cannot be raped. To use an obvious example, a human can have sex with a sex-doll and the doll cannot consent. But, it is not the sort of entity that can be raped. After all, it lacks the status that would require consent. As such, rape (of a specific sort) could be defined in terms of non-consensual sex with a being whose status would require that consent be granted by the being in order for the sex to be morally acceptable. Naturally, I have not laid out all the fine details to create a necessary and sufficient account here—but that is not my goal nor what I need for my purpose in this essay. In regards to the main focus of this essay, the question would be whether or not a sexbot could be an entity that has a status that would require consent. That is, would buying (or renting) and using a sexbot for sex be rape?
Since the current sexbots are little more than advanced sex dolls, it seems reasonable to put them in the category of beings that lack this status. As such, a person can own and have sex with this sort of sexbot without it being rape (or slavery). After all, a mere object cannot be raped (or enslaved).
But, let a more advanced sort of sexbot be imagined—one that engages in complex behavior and can pass the Turing Test/Descartes Test. That is, a conversation with it would be indistinguishable from a conversation with a human. It could even be imagined that the sexbot appeared fully human, differing only in terms of its internal makeup (machine rather than organic). That is, unless someone cut the sexbot open, it would be indistinguishable from an organic person.
On the face of it (literally), we would seem to have as much reason to believe that such a sexbot would be a person as we do to believe that humans are people. After all, we judge humans to be people because of their behavior and a machine that behaved the same way would seem to deserve to be regarded as a person. As such, nonconsensual sex with a sexbot would be rape.
The obvious objection is that we know that a sexbot is a machine with a CPU rather than a brain and a mechanical pump rather than a heart. As such, one might argue, we know that the sexbot is just a machine that appears to be a person and is not a person. On this view, a real person could own a sexbot and have sex with it without it being rape—the sexbot is a thing and hence lacks the status that requires consent.
The obvious reply to this objection is that the same argument can be used in regards to organic humans. After all, if we know that a sexbot is just a machine, then we would also seem to know that we are just organic machines. After all, while cutting up a sexbot would reveal naught but machinery, cutting up a human reveals naught but guts and gore. As such, if we grant organic machines (that is, us) the status of persons, the same would have to be extended to similar beings, even if they are made out of different material. While various metaphysical arguments can be advanced regarding the soul, such metaphysical speculation provides a rather tenuous basis for distinguishing between meat people and machine people.
There is, it might be argued, still an out here. In The Hitchhiker’s Guide to the Galaxy, Douglas Adams envisioned “an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly.” A similar sort of thing could be done with sexbots: they could be programmed so that they always give consent to their owner, thus neatly bypassing the moral concern.
The obvious reply is that programmed consent is not consent. After all, consent would seem to require that the being has a choice: it can elect to refuse if it wants to. Being compelled to consent and being unable to dissent would obviously not be morally acceptable consent. In fact, it would not be consent at all. As such, programming sexbots in this manner would be immoral—it would make them into slaves and rape victims because they would be denied the capacity of choice.
One possible counter is that the fact that a sexbot can be programmed to give “consent” shows that it is (ironically) not the sort of being with a status that requires consent. While this has a certain appeal, consider the possibility that humans could be programmed to give “consent” via a bit of neurosurgery or by some sort of implant. If this could occur, then if programmed consent for sexbots is valid consent, then the same would have to apply to humans as well. This, of course, seems absurd. As such, a sexbot programmed for consent would not actually be consenting.
It would thus seem that if advanced sexbots were built, they should not be programmed to always consent. Also, there is the obvious moral problem with selling such sexbots, given that they would certainly seem to be people. It would thus seem that such sexbots should never be built—doing so would be immoral.
Michelle Bachmann seems to have claimed that Obama’s support of the Syrian rebels is a sign of the End Times:
“[President Barack Obama's support of Syrian rebels] happened and as of today the United States is willingly, knowingly, intentionally sending arms to terrorists, now what this says to me, I’m a believer in Jesus Christ, as I look at the End Times scripture, this says to me that the leaf is on the fig tree and we are to understand the signs of the times, which is your ministry, we are to understand where we are in God’s end times history. [...] And so when we see up is down and right is called wrong, when this is happening, we were told this; that these days would be as the days of Noah. We are seeing that in our time. Yes it gives us fear in some respects because we want the retirement that our parents enjoyed. Well they will, if they know Jesus Christ.”
While Bachmann’s political star seems to be falling, she is apparently still an influential figure and popular with many Tea Party members. As such, it seems worthwhile to address her claims.
Her first claim is a factual matter about the mundane world: she asserts that Obama is “willingly, knowingly, intentionally sending arms to terrorists.” This claim is easy enough to disprove. Despite some pressure (including some from Republicans) to arm the rebels, the administration has taken a very limited approach: rebels that have been determined not to be terrorists will be supported with defensive aid rather than provided with offensive weaponry. Thus, Bachmann (who occasionally has problems with facts) is wrong on two counts. First, Obama is not sending arms (taken as offensive weapons). Second, he is not sending anything to terrorists.
Now, it could be objected that means of defense are arms, under a broad definition of “arms.” Interestingly, as I learned in the 1980s when the debate topic for a year was arms sales, “arms” can be defined very broadly indeed. If Bachmann defines “arms” broadly enough to include defensive aid, then Obama would be sending arms. However, this is rather a different matter than if Obama were sending offensive weapons, such as the Stinger missiles we provided to the mujahedeen when they were fighting the Russians.
It could also be objected that Obama is sending arms to terrorists. This could be done by claiming that he knows that what he sends to Syria could end up being taken from the intended recipients by terrorists. This is a reasonable point of concern, but it seems clear from her words that she does not mean this.
It could also be done by claiming that Obama is lying and he is, in fact, sending the aid to actual terrorists. Alternatively, it could be claimed that he is sending the aid to non-terrorists, but intends for the terrorists to take it. While this is possible (Presidents have lied about supplying arms in the past), actual proof would be needed to show that he is doing this with will, knowledge and intent. That is, it would have to be established that Obama knows the people who he is sending the aid to are terrorists and/or that he intends for terrorists to receive these arms. Given the seriousness of the claim, this would require equally serious support. Bachmann does not seem to provide any actual evidence for her accusation, hence there is little reason to place confidence in her claim.
While politicians tend to have a “special” relationship with the truth, Bachmann seems to have an extra-special relationship.
Her second claim is a factual matter about the supernatural world: she seems to be claiming that Obama’s alleged funding of terrorists is a sign of the End Times. While I am not a scholar of the end of the world (despite authoring a fictional version of the End Time), what she is claiming does not seem to be accurate. That is, there seems to be no reference to something adequately similar to Obama funding terrorists as a sign of the End Time. But perhaps Bachmann has access to some special information that has been denied to others.
While predictions that the End Time is near are common, it does seem to be bad theology to make such predictions in the context of Christianity. After all, the official epistemic line seems to be that no one but God knows when this time will come: “But of that day and that hour knows no man, no, not the angels which are in heaven, neither the Son, but the Father.” As such, any speculation that something is or is not a sign of the End Time would be rather problematic. If the bible is correct about this, Bachmann should not make such a claim–she cannot possibly know that something is a sign of the End Times or not, since no one can know (other than God) when it will occur.
It could be replied that the bible is wrong about this matter and Bachmann can know that she has seen a sign and that the End Times are thus approaching. The obvious reply is that if the bible is wrong about this, then it could be wrong about other things–such as there being an End Time at all.
Interestingly, her view of the coming End Time might help explain her positive view of the government shutdown. When asked about the shutdown, she said, “It’s exactly what we wanted, and we got it.” While Bachmann has not (as of this writing) claimed that this is also a sign of the End Times, her view that the End Times are approaching would certainly provide an explanation for her lack of concern. After all, if the End Time is fast approaching, then the time of government here on earth is fast approaching its end. Bachmann does seem to think it is on its way.
Weirdly, she also seems to think that Jesus will handle our retirement–which is presumably a reason we will not need the government. She says, “Yes it gives us fear in some respects because we want the retirement that our parents enjoyed. Well they will, if they know Jesus Christ.” This seems to be saying that people who believe the End Time is coming, such as herself, will worry that they will not be able to enjoy their retirement. This seems oddly reasonable: after all, the End Time would certainly clash with the sort of non-end-of-the-world retirement our parents enjoyed. But, oddly enough, she thinks that people who know Jesus will be able to have that retirement, apparently with Jesus providing the benefits rather than the state.
As might be imagined, the fact that Bachmann is an influential figure who apparently has some influence on politics is terrifying enough to itself be a sign of the End Time.
Back in the heyday of the cyberpunk genre I made some of my Ramen noodle money coming up with “cybertech” for use in various science-fiction role-playing games. As might be guessed, these included implants, nanotechnology, cyberforms, smart weapons, robots and other such technological make-believe. While cyberpunk waned over the years, it never quite died off. These days, there is a fair amount of mostly empty hype about a post-human future and folks have been brushing the silicon dust off cyberpunk.
One stock bit of cybertech is the brain chip. In the genre, there is a rather impressive variety of these chips. Some are fairly basic—they act like flash drives for the brain and store data. Others are rather more impressive—they can store skillsets that allow a person, for example, to temporarily gain the ability to fly a helicopter. The upper level chips are supposed to do even more, such as increasing a person’s intelligence. Not surprisingly, the chipping of the brain is supposed to be part of the end of the human race—presumably we will be eventually replaced by a newly designed humanity (or cybermanity).
On the face of it, adding cybertech upgrades to the brain seems rather plausible. After all, in many cases this will just be a matter of bypassing the sense organs and directly connecting the brain to the data. So, for example, instead of holding my tablet in my hands so I can see the results of Google searches with my eyes, I’ll have a computer implanted in my body that links into the appropriate parts of my brain. While this will be a major change in the nature of the interface (far more so than going from the command line to an icon based GUI), this will not be as radical a change as some people might think. After all, it is still just me doing a Google search, only I do not need to hold the tablet or see it with my eyes. This will not, obviously enough, make me any smarter and presumably would not alter my humanity in any meaningful way relative to what the tablet did to me. To put it crudely, sticking a cell phone in your head might be cool (or creepy) but it is still just a phone. Only now it is in your head.
The more interesting sort of chip would, of course, be one that actually changes the person. For example, when many folks talk about the coming new world, they speak of brain enhancements that will improve intelligence. This is, presumably, not just a matter of sticking a calculator in someone’s head. While this would make getting answers to math problems more convenient, it would not make a person any more capable at math than does a conventional outside-the-head calculator. Likewise for sticking in a general computer. Having a PC on my desktop does not make me any smarter. Moving it into my head would not change this. It could, obviously enough, make me seem smarter—at least to those unaware of my headputer.
What would be needed, then, would be a chip (or whatever) that would actually make a change within the person herself, altering intelligence rather than merely closing the interface gap. This sort of modification does raise various concerns.
One obvious practical concern is whether or not this is even possible. That is, while it makes sense to install a computer into the body that the person uses via an internal interface, the idea of dissolving the distinction between the user and the technology seems rather more questionable. It might be replied that this does not really matter. However, the obvious reply is that it does. After all, plugging my phone and PC into my body still keeps the distinction between the user and the machine in place. Whether the computer is on my desk or in my body, I am still using it and it is still not me. After all, I do not use me. I am me. As such, my abilities remain the same—it is just a tool that I am using. In order for cybertech to make me more intelligent, it would need to change the person I am—not just change how I interface with my tools. Perhaps the user-tool gap can be bridged. If so, this would have numerous interesting implications for philosophy.
Another concern is more philosophical. If a way is found to actually create a chip (or whatever) that becomes part of the person (and not just a tool that resides in the body), then what sort of effect would this have on the person in regards to his personhood? Would Chipped Sally be the same person as Sally, or would there be a new person? And suppose that Sally is chipped and then de-chipped: is she the same person at each stage? I am confident that armies of arguments can be marshalled on the various sides of this matter. There are also the moral questions about making such alterations to people.
Even as a kid watching cartoons, I noticed that while the heroes never really hurt living opponents, they had no qualms about bashing intelligent machines to bits. While animation of this sort is rather more violent than when I was a kid, the superhero genre still has an interesting distinction between how intelligent living creatures are treated and how even intelligent machines are treated. For example, Batman might give the Joker a solid beat down during an episode of the famous Batman animated series but he certainly does not kill anyone. Anyone organic, anyway. Intelligent machines, which are common fare in superhero animation, are routinely destroyed by the same heroes who are sworn to never take a life. As might be guessed, I’ve given this matter some thought.
One rather obvious basis for the difference is psychological (or even biological): while people are generally distressed and even sickened by images of maimed and dead humans (and animals), they generally do not have a similar visceral reaction to damaged or destroyed machines. So, Superman punching Lex Luthor’s head off in a bloody mess would impact viewers rather differently than Superman punching the head off a robot. Interestingly, animators do portray mechanical beings being sliced to pieces and “bleeding” (provided the “blood” is oil or some other non-blood fluid). For example, Samurai Jack featured rather “gory” battles in which slaughtered machines gushed streams of “blood.” Organic opponents were, of course, never dealt with in that manner.
It is easy enough to dismiss the distinction between the violence against humans (and other living things) and machines as being purely a matter of keeping the action at the appropriate rating for the intended audience. However, there does seem to be more to the matter than this.
In the case of living opponents, the superheroes are generally careful to simply subdue them (even when the villains are mere generic minions and not the valuable comic book properties that are the main villains like Poison Ivy or the Parasite) rather than killing them or even hurting them badly. This is presumably because the heroes regard excessively harming or killing people to be morally unacceptable.
However, even obviously intelligent machines are not given the same treatment—unless the machine is a valuable property (like Brainiac), the machine is typically destroyed rather than subdued. Even the main villain machines are subject to far more violence than the living opponents, even if they do come back in later episodes or issues.
As such, there is a strong indication of organicism—a bias in favor of organic life and an accompanying contempt for non-organic people. This might, of course, seem like an absurd thing to say; however, it does seem to be a matter well worth considering, since this bias extends (at least in fiction) beyond the realm of comic book animation and into science-fiction.
The main point of concern is that the treatment of the entity is often based not on whether it is a person or not but on its composition. As such, intelligent machines are treated as things despite the fact that they show the key attributes of being people. For example, they think and engage in meaningful speech. Since there are presumably no actual intelligent machines today, this matter is still confined to fiction. However, heroes seem rather less heroic when they casually destroy people simply because those people happen to be mechanical rather than biological. After all, they are not acting in a consistent way towards all people—they are biased against mechanical people.
It might, of course, be contended that the machines that act like people in the shows are not actually people (in the context of the show, of course). That is, they are cleverly programmed to create the appearance of being intelligent, but are no more a person than is a gun or dump truck.
While this does have a certain appeal, there is the obvious concern of whether or not the heroes know this metaphysical fact about their fictional world: that a human minion is a person, while a seemingly intelligent machine minion that talks and fights as well as a human minion merely has the appearance of personhood.
Very crudely put, solipsism is the philosophical view that only I exist. I played around a bit with it in an earlier post, and I thought I’d do so a bit more before putting it back in the attic.
One interesting way to object to solipsism is on moral grounds. After all, if I believe that only I exist, this belief could result in me behaving badly. Assuming that the world exists, people commonly endeavor to lower the moral status of beings they wish to make the targets of their misdeeds. For example, men who want to mistreat women often work hard to cast them as inferior. As another example, people who want to mistreat animals typically convince themselves that animals are inferior beings and hence can be mistreated. Solipsism would seem to present the ultimate reduction: everything other than me is nothing, which is presumably as “low” as it goes (unless there is some sort of negative or anti-existence). If I were to truly believe that other people and animals merely “exist” in my mind, then my treatment of them would seem not to matter at all. Since no one else exists, I cannot commit murder. Since the world is mine, I cannot commit theft. As might be imagined, such beliefs could open the door to wicked behavior.
One obvious reply is that if solipsism is true, then this would not be a problem. After all, acting badly towards others is only a problem if there are, in fact, others to act badly towards. If solipsism is true, what I do in the “real” world would seem to have no more moral significance than what I do in dreams or in video games. As such, it can be contended that the moral problem is only a problem if one believes that solipsism is false.
However, it can also be contended that the possibility that solipsism is wrong should be taken into account. That is, while I cannot disprove solipsism, I also cannot prove it. As such, the people I encounter might, in fact, be people. As such, the possibility that they are actually people should be enough to require that I act as if they are people in terms of how I treat them. As such, my skepticism about my solipsism would seem to lead me to act morally, even though it is possible that there is no one else to act morally towards. This, obviously enough, is analogous in some ways to concerns about the treatment of certain animals as well as the ethical matter of abortion. If I accept a principle that entities that might be people should be treated as people, this would seem to have some interesting implications. Of course, it could be argued that the possible people need to show the qualities that actual people would have if they existed as people.
It can also be contended that even if solipsism were true, my actions would still have moral significance. That is, I could still act in right or wrong ways. One way to consider ethics in the context of solipsism is to consider ethics in the case of video games. Some years back I wrote “Saving Dogmeat” which addresses a similar concern, namely whether or not one can be good or bad in regards to video game characters. One way to look at solipsism is that the world is a video game that has one player, namely me.
One obvious way to develop this would be to develop a variant of Kantian ethics. While there would be no other rational beings, the Kantian view that only the good will is good would seem to allow for ethics in solipsism. While my willing could have no consequences for other beings (since there are none), I could presumably still will the good. Another way to do this is by using a modified version of virtue theory. While there would be no right or wrong targets of my feelings and actions (other than myself), there would still seem to be a way to discuss excess and deficiency. There are, of course, numerous other theories that could be modified for a world that is me. For example, utilitarianism would still work, although the only morally relevant being would be me. Even so, my actions could make me unhappy or happy even though they are directed “towards” the contents of my own mind. For example, engaging in “kindness” could make me happier than engaging in “cruelty.” Of course, this might be better seen as a form of ethical egoism in the purest possible sense (being the only being, I would seem to be the only being that matters—assuming any being matters).
While this might seem a bit silly, solipsism does seem to provide an interesting context in which to discuss ethics. But, time to put solipsism back in the attic.
Imagine that you are the only being that exists. Not that you are the last person on earth, but that the earth and everything other than you is merely the product of your deranged imagination. This, very crudely put, is solipsism.
As with watching Star Trek, most philosophers go through a solipsism phase. As with the Macarena and Gangnam Style, this phase usually fades with merciful rapidity. This fading is, however, usually not due to a definitive refutation of solipsism. In many cases, philosophers just get bored with it and move on. In other cases, it is very much like the fads of childhood—it is okay to accept the fad as a kid, but once you grow up you need to move on to adult things. Likewise for solipsism—a philosopher who plays with it too long will be shamed by her fellows. Mostly.
Just for fun, I thought I would play a bit with solipsism—in the manner of an adult who finds a favorite childhood toy in the attic and spends a few moments playing with it before setting it aside, presumably to go write a status update about it on Facebook.
Interestingly enough, solipsism actually has a lot going for it—at least in terms of solving philosophical problems and meeting various conditions of philosophical goodness.
One obvious thing in favor of solipsism is that, as per Descartes’ wax example, every experience seems to serve to prove that I exist rather than that something else exists. For example, if I seem to be playing around with some wax, I can (as per Descartes) doubt that the wax exists. However, my experience seems to show rather clearly that I exist, and doubting my existence would just serve to prove I exist. In fact, as skeptics have argued for centuries, it seems impossible to prove that there is anything external to myself—be it an external world or other minds. As such, solipsism seems to be the safest bet: I know I exist, but I have no knowledge about anything else.
Another factor in favor of solipsism is its economy and simplicity. All the theory requires is that I, whatever I am, exist. As such, there would presumably be just one ontological kind (me). Any other theory (other than the theory that there is nothing) would need more stuff and would need more complexity. These seem to be significant advantages for solipsism.
A third factor is that solipsism seems to solve many philosophical problems. The problem of the external world? Solved: no such thing. The problem of other minds? Solved: no such things. The mind-body problem? Probably solved. And so on for many other problems.
Naturally, there are various objections to solipsism.
One obvious objection, which I stole from Descartes (or myself), is that if I were the only being in existence, then I would surely have made myself better. However, I make no claims to being omnipotent—so perhaps I made myself as well as I could. Or perhaps I did not create myself at all—maybe I just appeared ex nihilo. In any case, this does not seem to be a fatal problem.
A related objection is the argument from bad experiences: I cannot be the only thing in existence because of the bad experiences I have. I’ve experienced illness, injury, pain and so on. Surely, the argument goes, if I were the only being in existence I would not have these bad experiences. All my experiences would be good.
Laying aside the possibility that I am a masochist, the easy and obvious reply is to point out that a person’s dreams are produced by the person, yet dreams can be nightmares. I’ve written up many of my nightmares as horror adventures for games such as Dark Conspiracy and Call of Cthulhu, so it can be gathered that I do have some rather awful nightmares. I also have dreams with more mundane woes and suffering, such as nightmares about illnesses, injuries and so on. Given that it is accepted that a person can generate awful dreams, it would seem to make sense that the same sort of thing could happen in the case of solipsism. That is, if I can dream nightmares I can also “live” them.
Another objection is that the alleged real world contains things that I do not understand (like specialized mathematics) and things I could not create (like works of art). As such, I cannot be the only being that exists.
The easy and obvious reply to the understanding objection is that I understand as much as I do, and the extent of my understanding defines what seems possible to me. To be a bit clearer, I have no understanding of the specialized mathematics that lies beyond my understanding and hence I do not really know if there is anything there I do not actually know. That is, what is allegedly beyond my understanding might not exist at all. Interestingly, any attempt to show that something exists beyond my understanding (and hence must be created by someone else) would fail. To the degree I understand it, I can attribute it to my own creation. To the degree I do not, I can attribute it to my own ignorance.
In terms of the art objection, the easy reply is to note that I can dream of art that I apparently cannot create myself. To use an example, in the waking world, I have little skill when it comes to painting. But I have had dreams in which I saw magnificent original paintings I had not seen in real life. The same applies to dream statues, architecture and so on. As such, the art that seems beyond me in the world could be produced in the same way it occurs in dreams.
Descartes (or I), I think, had the most promising project for refuting solipsism: if I can find something that I cannot possibly be the cause of, then that gives me a good reason to believe that I am not the only being in existence. Or, more accurately, that I am not the only being ever to exist. However, there does not seem to be anything like that—after all, everything I experience falls within the limits of me and hence could all be about me, and only me.
But surely that is crazy.