A Philosopher's Blog

Sexbots are Persons, Too?

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on January 6, 2014

In my previous essays on sexbots I focused on versions that are clearly mere objects. If the sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots would lack the moral status needed to be wronged. Obviously enough, the sexbots of the near future will be in the class of objects. However, science fiction has routinely featured intelligent, human-like robots (commonly known as androids). Intelligent beings, even artificial ones, would seem to have an excellent claim on being persons. In terms of sorting out when a robot should be treated as a person, the reasonable test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

While Descartes does not deeply explore the moral distinctions between beings that talk (that have minds) and those that merely make noises, it does seem reasonable to regard a being that talks as a person and to thus grant it the moral status that goes along with personhood. This, then, provides a means to judge whether an advanced sexbot is a person or not: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would presumably lack moral status.
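Descartes' distinction can be illustrated with a toy sketch (the stimulus-response table and the replies are invented for illustration): the automaton he describes maps fixed triggers to fixed utterances and has nothing to say about anything outside its table, whereas true language use requires an appropriate reply to arbitrary input.

```python
# Toy illustration of Descartes' automaton: a fixed table of
# stimulus -> utterance pairs, with no capacity for open-ended reply.
RESPONSES = {
    "touch_head": "What do you wish to say to me?",
    "touch_arm": "Ouch! That hurts.",
}

def automaton_reply(stimulus: str) -> str:
    # Anything outside the pre-wired table gets no genuine answer.
    return RESPONSES.get(stimulus, "")

print(automaton_reply("touch_arm"))         # a canned, pre-wired response
print(automaton_reply("What is justice?"))  # silence: no table entry
```

On the Cartesian test, it is precisely this silence in the face of a novel question that marks the automaton as a mere thing; "even the lowest type of man" would manage some reply.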

Having sex with a sexbot that can pass the Cartesian test would certainly seem to be morally equivalent to having sex with a human person. As such, whether the sexbot freely consented or not would be a morally important matter. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is, of course, actually done). If such sexbots were mistreated, this would also be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition.

It might also be argued that passing the Cartesian/Turing Test would not prove that a robot is self-aware and hence it would still be reasonable to hold that it is not a person. It would seem to be a person, but would merely be acting like a person. While this is a point well worth considering, the same sort of argument could be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine if another human is actually self-aware. This is the classic problem of other minds: all I can do is see your behavior and infer that you are self-aware based on analogy to my own case. Hence, I do not know that you are aware since I cannot be you. From your perspective, the same is true about me. As such, if a robot acted in an intelligent manner, it would seem that it would have to be regarded as being a person on those grounds. To fail to do so would be a mere prejudice in favor of the organic.

In reply, some people believe that other people can be used as they see fit. Those who would use a human as a thing would see nothing wrong about using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing and hence they cannot consistently accept using other people in that manner. The other obvious reply is that such people are simply evil.

Those with religious inclinations would probably bring up the matter of the soul. But, the easy reply is that we would have as much evidence that robots have souls as we do for humans having souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as like a human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligence: they are intended to be owned by people to do onerous tasks, but to the degree they are intelligent, they would be slaves.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would actually be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


23 Responses


  1. apollonian said, on January 6, 2014 at 9:14 am

    The Ethics Of Pretending To Ethics–Why?

    The fallacy to this essay is contained in, “Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person,” which is merely and obviously question-begging. The question-begging is then repeated a little later in, “…it does seem reasonable to regard a being that talks as a person and to thus grant it the moral status that goes along with personhood.” The next sentence (in the essay, above) following these first two, here quoted, then repeats yet again–repeated question-begging and circular logic.

    For definition of human is rational creature (fm Aristotle, I believe)–at least this is traditional and seems most reasonable.

    Another product of question-begging fallacy is “morality”–why be “moral”?–there’s got to be some reason. And I think it was Hobbes who gave the right answer–it’s in our interest to respect another creature who might otherwise do us harm if we didn’t respect him/her. Thus the premise to ethics is self-interest, the nature of humanity, the rational creature.

    Again, note ethics is mere logic btwn ends and means. Ultimate end is life and happiness (or so it seems), hence the means is reason.

    Thus the essay poses the imagined ethical quandary as to treatment of what’s admitted to be non-human (non-rational) entity–the robot–but which “speaks,” so we contrive to pretend it’s human or rational. Question is why should we so pretend?–there’s no good reason given (but for the pointless emulation of Descartes). We must and should merely do that which is rational; that’s all, and there’s no compelling reason (given in essay) to pretend the robot is human or equivalent.

    So the essay has to do w. pretending to be “ethical”–question then is why so pretend?–to emulate Descartes? Ho ho ho–there’s got to be a reason for ethics, like the preservation of a rational life, and pretending to ethics by itself is just pretending–there’s no end actually there, aside fm pretending.

  2. drewdog2060drewdog2060 said, on January 6, 2014 at 9:36 am

    What about people with severe learning disabilities and/or those who used to be categorised as deaf and dumb?

    • Michael LaBossiere said, on January 6, 2014 at 12:16 pm

      Descartes does address that exact point: he says that even the “deaf and dumb” still make signs that are true language.

  3. drewdog2060drewdog2060 said, on January 6, 2014 at 9:41 am

    Reblogged this on newauthoronline and commented:
    Just because it walks like a duck and quacks like a duck, is it, in fact, a duck? I am not sure that it is, although if it could be demonstrated that advanced robots of the future feel (as opposed to simulate) pain then there would be a case for according them similar treatment to humans, i.e. some kind of human rights.

  4. manchesterflickchick said, on January 6, 2014 at 10:41 am

    The fact that someone has to even ask if animals have minds, when animals dream, feel loss and envy etc., I find bizarre. I do think it's an incredibly interesting question when applied to androids and 'sexbots' however.

    • Michael LaBossiere said, on January 6, 2014 at 12:21 pm

      People ask whether people have minds as well. Skinner-style behaviorism once dominated psychology, so if people can doubt that people have minds, people can doubt that animals have minds as well.

      Also, when some folks doubt animals have minds, they are thinking of a very specific sort of mind. Descartes believed animals felt and had emotions, but this was all biological/mechanical. The current scientific view of humans is the same that Descartes held towards animals: we are complex biological automatons.

      • WTP said, on January 6, 2014 at 2:15 pm

        Interesting question. Do Jews have minds? If indeed Jews follow the Talmud, by definition (see RevisionistReview.blogspot.com for best Talmudic expo, also ck Come-and-hear.com)–and those who pretend to not observe Talmudic religion always go along w. the Talmudists–just keep ur eyes open to verify. Talmud preaches war against the gentiles. “The best of gentiles, kill him.” This is serious business, If they do not have minds but are merely objects, robots, or perhaps zombies if you will, then the morality of killing them is the same as killing any other animal. I, and I’m sure appoplexia, would be interested in your thoughts on this. All this SEX SEX SEX and ROBOTS ROBOTS ROBOTS is surely not an attempt to dodge the TRUTH TRUTH TRUTH, is it? HO HO HO.

        • apollonian said, on January 6, 2014 at 3:06 pm

          WTP: u’re obviously a Jew loyal to Jews, right? Q.E.D.

          • T. J. Babson said, on January 6, 2014 at 10:06 pm

            Ho ho ho. Why is it always about the Jews? There are maybe 15 million Jews out of a world population of 7 billion. That is 0.2%. Ho ho ho.

            • apollonian said, on January 6, 2014 at 10:16 pm

              Good question, TJ: why indeed is it about Jews?–u’re Jew too, eh? Ho ho ho ho ho

              U’re sure right–out of pop. of .02%, they make up at least a third of USA’s billionaires, eh?–interesting, isn’t it?

            • T. J. Babson said, on January 6, 2014 at 10:59 pm

              So what if lots of billionaires are Jews? Have you ever heard of the bell curve? Do you understand the concept of long tails?

            • magus71 said, on January 7, 2014 at 7:35 am

              TJ,

              Off topic, per my habit, and because I don’t like the feeling that talking about the Illuminati gives me. Have you ever heard of Lt. Col Jeff Cooper? Now deceased. An outstanding individual who you may find of interest in his writings. Mostly about liberty, self-defense etc. A WWII and Korean War vet. PHD in history, and one of the founding fathers of modern firearms philosophy and theory. One of the best writers I’ve ever read. A true American. An amazing mind. Yet another guy that I’ve found to have my world view.

              You said you’ve read Stephen Hunter’s novels. If that’s true and you enjoyed them, you’ll love Cooper.

            • magus71 said, on January 7, 2014 at 7:38 am

            • magus71 said, on January 7, 2014 at 7:54 am

              I’m shocked, shocked I tell you, that this woman is not a billionaire.

  5. apollonian said, on January 6, 2014 at 11:34 pm

    We see u tacitly admit u’re Jew. U ask “so what”?–well, does it occur to u that as Talmud is perfect philosophy for criminals, e.g., “God’s chosen,” it being ok to cheat and murder gentiles, that Jews are just a large criminal family? Is it any wonder Jews are so cordially hated throughout the world, having been expelled fm practically every country on earth–esp. when u take into consideration the murderous, criminal Talmud?

    Have u ever hrd of the US Federal Reserve Bank (“Fed”)?–is it not simply legalized COUNTERFEITING, a criminal fraud?–is it any wonder Jews are so prominent in “banking”? Ho ho ho ho ho

    Do u doubt Israel, Jews, and MOSSAD were heavily involved in 9/11, the cover-up thereof, the JFK assassination, the brazen attack on the US Liberty–and practically every other large crime and criminal activity?

    By the way, did u ck out the ref.s on the criminal nature of Talmud?

  6. stephanie said, on January 8, 2014 at 9:44 pm

    Hi Michael, this was really thought provoking.

    I know it is just a blog but I wish you had unpacked this: ‘As such, whether the sexbot freely consented or not would be a morally important matter’ a bit more. What it made me think was that, if a sexbot is so well constructed as to emulate humans to the point of being mistaken by us for a person, then probably it could consent or dissent to sex. So then non-consensual sex with a sexbot would be morally equivalent to non-consensual sex. And if the sexbot was programmed not to dissent ever, then it would not be a convincing human, and then it would not be morally problematic to have sex with it.

    Reading this helped me clarify some of my thoughts about consent in the human realm, and especially ideas about 'grooming' children for sex, and whether 'rape culture' does not groom some people, mostly heterosexual women, but I might just be speaking subjectively here, to give consent in a programmed sort of way. Perhaps the sluggishness of society to legally recognize rape within marriage, for example.

    On a different level it made me think about Buffy the Vampire Slayer, and the vampire character Spike's moral progression in Season 5 of the series. Pop-culture analysts equate his moral development to behavioral models (see Sakal, "No Big Win: Themes of Sacrifice, Salvation, and Redemption", in James B. South ed. Buffy the Vampire Slayer and Philosophy (Open Court, 2003)), and funnily enough he has a sexual relationship with a sexbot, who is programmed to always consent. In the show, as I remember it, it seems like the BuffyBot is a humorous 'object' and there are some moments where the audience is invited to sympathize with it, like when it 'dies', but there is not a lot of concern for its welfare, or the idea that it is being raped or in any kind of relationship, even though it is a pretty convincing talker. That having sex with it is of moral concern with reference to Spike is more clear. Bringing me back to your posting: it seems that both parties, the person and the bot, are in a moral limbo when they have what could be non-consensual sex with something person-like, even when the scenario is constructed and openly acknowledged to be a manipulation.

    Finally, and in jest, I think some of your commentators here... seem not to pass the Turing Test themselves. I don't know how you run a blog. I certainly could not.

    • Michael LaBossiere said, on January 10, 2014 at 12:20 pm

      Stephanie,

      I’m reasonably sure that Apollonian is a rogue internet connected coffee maker.

      Thanks to your comment, I plan to write another post on programmed consent. That seems interesting.

      You do raise an interesting dilemma: a sexbot that cannot dissent would not (appear to be) a person and hence there would be no moral problem with simply using it. But, if it could dissent, then it would (appear to be) a person and the consent would make the sex consensual and morally acceptable (at least in regards to the consent bit).

      Good point: the ‘rape culture’ can be seen as an attempt to program women to not dissent. After all, the message that women should simply submit or go along is certainly an attempt to eliminate resistance to unwanted sexual advances.

      Interestingly, intelligent machines often get a bad deal on TV and in the movies. I did a piece a while back about how superheroes who will not kill even the most evil intelligent life forms have no qualms about destroying intelligent robots. This could be a bias against machines, but could also be something deeper, like the idea that even intelligent beings can be treated badly if they are perceived as objects (hence the objectification of, for example, women).

      • apollonian said, on January 10, 2014 at 2:04 pm

        “Rape culture”?–what’s this supposed to be all about?–any references or citations?

      • T. J. Babson said, on January 10, 2014 at 3:16 pm

        “But, if it could dissent, then it would (appear to be) a person and the consent would make the sex consensual and morally acceptable (at least in regards to the consent bit).”

        You could program it to dissent randomly, but that would not make a sexbot a person.
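T. J. Babson's point can be made concrete with a toy sketch (the function and its behavior are invented for illustration): a program can be made to refuse on a seeded coin flip, and nothing about that refusal expresses a preference or any understanding of what is being refused.

```python
import random

# Toy sketch (invented for illustration): "dissent" driven by a coin
# flip. The refusal tracks the random draw, not any preference, so
# exhibiting dissent behavior alone would not establish personhood.
def random_dissent(seed: int) -> str:
    rng = random.Random(seed)  # seeded so the "choice" is reproducible
    return "No." if rng.random() < 0.5 else "Yes."
```

        That the same seed always yields the same answer underlines the point: the "decision" is mechanical, not deliberative, which is just Descartes' automaton with a randomizer attached.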

  7. apollonian said, on January 8, 2014 at 11:54 pm

    Stephanie: remember–just because u babble doesn’t mean u’re making sense or speaking in substantial manner. Blogging?–it’s just keeping a public diary, and thinking out loud–comments are optional. In ur case, u’d need to work on the thinking part, I suspect.

    • Michael LaBossiere said, on January 10, 2014 at 12:23 pm

      Sometimes people should follow their own advice.

      • apollonian said, on January 10, 2014 at 1:54 pm

        Well prof., if there’s a problem, I think it best to address the specific pt. to be made, and then one could generalize fm there.


Leave a reply to apollonian Cancel reply