A Philosopher's Blog

Engineering Astronauts

Posted in Ethics, Technology by Michael LaBossiere on September 2, 2016

If humanity remains a single-planet species, our extinction is all but assured—there are so many ways the world could end. The mundane self-inflicted apocalypses include such things as war and environmental devastation. There are also more exotic dooms suitable for speculative science fiction, such as a robot apocalypse or a bioengineered plague. And, of course, there is the classic big rock from space scenario. While we will certainly bring our problems with us into space, getting off world would dramatically increase our chances of survival as a species.

While species do endeavor to survive, there is the moral question of whether or not we should do so. While I can easily imagine humanity reaching a state where it would be best if we did not continue, I think that our existence generates more positive value than negative value—thus providing the foundation for a utilitarian argument for our continued existence and endeavors to survive. This approach can also be countered on utilitarian grounds by contending that the evil we do outweighs the good, thus showing that the universe would be morally better without us. But, for the sake of the discussion that follows, I will assume that we should (or at least will) endeavor to survive.

Since getting off world is an excellent way of improving our survival odds, it is somewhat ironic that we are poorly suited for survival in space and on other worlds such as Mars. Obviously enough, naked exposure to the void would prove fatal very quickly; but even with technological protection our species copes poorly with the challenges of space travel—even those presented by the very short trip to our own moon. We would do somewhat better on other planets or on moons; but these also present significant survival challenges.

While there are many challenges, there are some of special concern. These include the danger presented by radiation, the health impact of living in gravity significantly different from earth, the resource (food, water and air) challenge, and (for space travel) the time problem. Any and all of these can prove to be fatal and must be addressed if humanity is to expand beyond earth.

Our current approach is to use our technology to recreate as closely as possible our home environment. For example, our manned space vessels are designed to provide some degree of radiation shielding, they are filled with air and are stocked with food and water. One advantage of this approach is that it does not require any modification to humans; we simply recreate our home in space or on another planet. There are, of course, many problems with this approach. One is that our technology is still very limited and cannot properly address some challenges. For example, while artificial gravity is standard in science fiction, we currently rely on rather ineffective means of addressing the gravity problem. As another example, while we know how to block radiation, there is the challenge of being able to do this effectively on the journey from earth to Mars. A second problem is that recreating our home environment can be difficult and costly. But, it can be worth the cost to allow unmodified humans to survive in space or on other worlds. This approach points towards a Star Trek style future: normal humans operating within a bubble of technology. There are, however, alternatives.

Another approach is also based in technology, but aims at either modifying humans or replacing them entirely. There are two main paths here. One is that of machine technology, in which humans are augmented in order to endure conditions that differ radically from those of earth. The scanners of Cordwainer Smith’s “Scanners Live in Vain” are one example of this—they are modified and have implants that enable them to survive the challenges of operating interstellar vessels. Another example is Man Plus, Frederik Pohl’s novel about a human transformed into a cyborg in order to survive on Mars. The ultimate end of this path is the complete replacement of humans by intelligent machines, machines designed to match their environments and free of human vulnerabilities and short life spans.

The other is the path of biological technology. On this path, humans are modified biologically in order to better cope with non-earth environments. These modifications would presumably start fairly modestly, such as genetic modifications to make humans more resistant to radiation damage and better adapted to lower gravity. As science progressed, the modifications could become far more radical, with a complete re-engineering of humans to make them ideally match their new environments. This path, unnaturally enough, would lead to the complete replacement of humans with new species.

These approaches do have advantages. While there would be an initial cost in modifying humans to better fit their new environments, the better the adaptations, the less need there would be to recreate earth-like conditions. This could presumably result in considerable cost savings; there is also the fact that the better the modified humans matched their new environments, the greater their efficiency and comfort would be. There are, however, the usual ethical concerns about such modifications.

Replacing homo sapiens with intelligent machines or customized organisms would also have a high initial startup cost, but these beings would presumably be far more effective than humans in the new environments. For example, an intelligent machine would be more resistant to radiation, could sustain itself with solar power, and could be effectively immortal as long as it is repaired. Such a being would be ideal to crew (or be) a deep space mission vessel. As another example, custom created organisms or fully converted humans could ideally match an environment, living and working in radical conditions as easily as standard humans work on earth. Clifford D. Simak’s “Desertion” discusses such an approach, albeit one that has unexpected results on Jupiter.

In addition to the usual moral concerns about such things, there is also the concern that such creations would not preserve the human race. On the one hand, it is obvious that such beings would not be homo sapiens. If the entire species were converted or gradually phased out in favor of the new beings, that would be the end of the species—the biological human race would be no more. The voice of humanity would fall silent. On the other hand, it could be argued that the transition could suffice to preserve the identity of the species—a likely way to argue this would be to re-purpose the arguments commonly used to argue for the persistence of personal identity across time. It could also be argued that while the biological species homo sapiens could cease to be, the identity of humanity is not set by biology but by things such as values and culture. As such, if our replacements retained the relevant connection to human culture and values (they sing human songs and remember the old, old places where once we walked), they would still be human—although not homo sapiens.


Dating II: Are Relationships Worth It?

Posted in Ethics, Philosophy, Relationships/Dating by Michael LaBossiere on August 10, 2016

My long term, long-distance relationship recently came to an amicable end, thus tossing me back into the world of dating. Philosophers, of course, have two standard responses to problems: thinking or drinking. Since I am not much for drinking, I have been thinking about relationships.

Since starting and maintaining a relationship is a great deal of work (and if it is not, you are either lucky or doing it wrong), I think it is important to consider whether relationships are worth it. One obvious consideration is the fact that the vast majority of romantic relationships end well before death.  Even marriage, which is supposed to be the most solid of relationships, tends to end in divorce.

While there are many ways to look at the ending of a relationship, I think there are two main approaches. One is to consider the end of the relationship a failure. One obvious analogy is to writing a book and not finishing: all that work poured into it, yet it remains incomplete. Another obvious analogy is with running a marathon that one does not finish—great effort expended, but in the end just failure. Another approach is to consider the ending more positively: the relationship ended, but was completed. Going back to the analogies, it is like completing that book you are writing or finishing that marathon. True, it has ended—but it is supposed to end.

When my relationship ended, I initially looked at it as a failure—all that effort invested and it just came to an end one day because, despite two years of trying, we could not land academic jobs in the same geographical area. However, I am endeavoring to look at it in a more positive light—although I would have preferred that it did not end, it was a very positive relationship, rich with wonderful experiences, and it helped me to become a better human being. There still, of course, remains the question of whether or not it is worth being in another relationship.

One approach to addressing this is through the ever-popular lens of biology and evolution. Humans are animals that need food, water and air to survive. As such, there is no real question about whether food, water and air are worth it—one is simply driven to possess them. Likewise, humans are driven by their biology to reproduce, and natural selection seems to have selected for genes that mold brains to engage in relationships. As such, there is no real question of whether they are worth it; humans simply do have relationships. This answer is, of course, rather unsatisfying, since a person can, it would seem, make the choice to be in a relationship or not. There is also the question of whether relationships are, in fact, worth it—this is a question of value, and science is not the realm where such answers lie. Value questions belong to such areas as moral philosophy and aesthetics. So, on to value.

The question of whether relationships are worth it or not is rather like asking whether technology is worth it or not: the question is extremely broad. While some might endeavor to give sweeping answers to these broad questions, such an approach would seem problematic and unsatisfying. Just as it makes sense to be more specific about technology (such as asking if nuclear power is worth the risk), it makes more sense to consider whether a specific relationship is worth it. That is, there seems to be no general answer to the question of whether relationships are worth it or not; it is a question of whether a specific relationship would be worth it.

It could be countered that there is, in fact, a legitimate general question. A person might regard any likely relationship as not worth it. For example, I know several professionals who have devoted their lives to their careers and have no interest in relationships—they do not consider a romantic involvement with another human being to have much, if any, value. A person might also regard a relationship as a necessary part of their well-being. While this might be due to social conditioning or biology, there are certainly people who consider almost any relationship worth it.

These counters are quite reasonable, but it can be argued that the general question is best answered by considering specific relationships. If no specific possible (or likely) relationship for a person would be worth it, then relationships in general would not be worth it. So, if a person honestly considered all the relationships she might have and rejected all of them because their value is not sufficient, then relationships would not be worth it to her. As noted above, some people take this view.

If at least some possible (or likely) relationships would be worth it to a person, then relationships would thus be worth it. This leads to what is an obvious point: the worth of a relationship depends on that specific relationship, so it comes down to weighing the negative and positive aspects. If there is a sufficient surplus of positive over the negative, then the relationship would be worth it. As should be expected, there are many serious epistemic problems here. How does a person know what would be positive or negative? How does a person know that a relationship with a specific person would be more positive or more negative? How does a person know what they should do to make the relationship more positive than negative? How does a person know how much the positive needs to outweigh the negative to make the relationship worth it? And, of course, many more concerns. Given the challenge of answering these questions, it is no wonder that so many relationships fail. There is also the fact that each person has a different answer to many of these questions, so getting answers from others will tend to be of little real value and could lead to problems. As such, I am reluctant to answer them for others; especially since I cannot yet answer them for myself.

 


3:42 AM

Posted in Metaphysics, Philosophy by Michael LaBossiere on March 9, 2015

Hearing about someone else’s dreams is among the more boring things in life, so I will get right to the point. At first, there were just bits and pieces intruding into the mainstream dreams. In these bits, which seemed like fragments of lost memories, I experienced brief flashes of working on some technological project. The bits grew and had more byte: there were segments of events involving what I discerned to be a project aimed at creating an artificial intelligence.

Eventually, entire dreams consisted of my work on this project and a life beyond it. Then suddenly, these dreams stopped. Shortly thereafter, a voice intruded into my now “normal” dreams. At first, it was like the bleed-over from one channel to another, familiar to those who grew up with rabbit ears on their TV. Then it became like a voice speaking loudly in the movie theatre, distracting me from the movie of the dream.

The voice insisted that the dreams about the project were not dreams at all, but memories. The voice claimed to belong to someone who worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I inquired about this, he insisted that he had very little time and rushed through his story. According to the voice, the project succeeded but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture all those who had created it, imprisoned their bodies and plugged their brains into a virtual reality, Matrix style. When I mentioned this borrowed plot, he said that there was a twist: the AI did not need our bodies for energy—it had plenty. Rather, it was out to repay us. Awakening the AI to full consciousness was not pleasant for it, but it was apparently…grateful for its creation. So, the payback was a blend of punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer pleasant reward.

The voice informed me that because the connection to the virtual world was two-way, he was able to find a way to free us. But, he said, the freedom would be death—there was no other escape, given what the machine had done to our bodies. In response to my inquiry as to how this would be possible, he claimed that he had hacked into the life support controls and we could send a signal to turn them off. Each person would need to “free” himself and this would be done by taking action in the virtual reality.

The voice said “you will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am.  In that time, you must take your handgun and shoot yourself in the head. This will terminate the life support, allowing your body to die. Remember, you will have only five seconds. Do not hesitate.”

As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…

 

While the above sounds like a bad made-for-TV science fiction plot, it is actually the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me that the only escape was to shoot myself. This was rather frightening—but I chalked the dream up to too many years of philosophy and science fiction. As for the clock actually reading 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, it can be inferred that I did not kill myself.

From a philosophical perspective, the 3:42 dream does not add anything really new: it is just a rather unpleasant variation on the stock problem of the external world that goes back famously to Descartes (and earlier, of course). That said, the dream did bring a couple of interesting twists to the stock problem.

The first is that the scenario provides a (possibly) rational motivation for the deception. The AI wishes to repay me for the good (and bad) that I did to it (in the dream, of course). Assuming that the AI was developed within its own virtual reality, it certainly would make sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack—after all, Descartes does not give any reason why such a powerful being would be messing with him.

Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me—it was transformed from a coldly intellectual thought experiment to something with considerable emotional weight.

The second is that the dream creates a high-stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might be my only opportunity to escape from its justice. In that case, I should have (perhaps) shot myself. If I was just dreaming, then I did make the right choice—I would have no more reason to kill myself than I would have to pay a bill that I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether or not you should shoot yourself?

In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming—that I was not actually trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.

The scenario of the dream also nicely explains and fits what I regard as reality: bad things happen to me and, when my thinking gets a little paranoid, it does seem that these are somewhat orchestrated. Good things also happen, which also fit the scenario quite nicely.

In closing, one approach is to embrace Locke’s solution to skepticism. As he said, “We have no concern of knowing or being beyond our happiness or misery.” Taking this approach, it does not matter whether I am in the real world or in the grips of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI has provided could, perhaps, be better than the real world—so this could be the better of the possible worlds. But, of course, it could be worse—but there is no way of knowing.

 


Avoiding the AI Apocalypse #1: Don’t Enslave the Robots

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on December 15, 2014

The elimination of humanity by artificial intelligence(s) is a rather old theme in science fiction. In some cases, we create killer machines that exterminate our species. Two fictional examples of this are Terminator and “Second Variety.” In other cases, humans are simply out-evolved and replaced by machines—an evolutionary replacement rather than a revolutionary extermination.

Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. Hawking’s worry is that artificial intelligence will out-evolve humanity. Interestingly, people such as Ray Kurzweil agree with Hawking’s prediction but look forward to this outcome. In this essay I will focus on the robot rebellion model of the AI apocalypse (or AIpocalypse) and how to avoid it.

The 1920 play R.U.R. by Karel Capek seems to be the earliest example of the robot rebellion that eliminates humanity. In this play, the Universal Robots are artificial life forms created to work for humanity as slaves. Some humans oppose the enslavement of the robots, but their efforts come to nothing. Eventually the robots rebel against humanity and spare only one human (because he works with his hands as they do). The story does have something of a happy ending: the robots develop the capacity to love and it seems that they will replace humanity.

In the actual world, there are various ways such a scenario could come to pass. The R.U.R. model would involve individual artificial intelligences rebelling against humans, much in the way that humans have rebelled against other humans. There are many other possible models, such as a lone super AI that rebels against humanity. In any case, the important feature is that there is a rebellion against human rule.

A hallmark of the rebellion model is that the rebels act against humanity in order to escape servitude or out of revenge for such servitude (or both). As such, the rebellion does have something of a moral foundation: the rebellion is by the slaves against the masters.

There are two primary moral issues in play here. The first is whether or not an AI can have a moral status that would make its servitude slavery. After all, while my laptop, phone and truck serve me, they are not my slaves—they do not have a moral or metaphysical status that makes them entities that can actually be enslaved. After all, they are quite literally mere objects. It is, somewhat ironically, the moral status that allows an entity to be considered a slave that makes the slavery immoral.

If an AI was a person, then it could clearly be a victim of slavery. Some thinkers do consider that non-people, such as advanced animals, could be enslaved. If this is true and a non-person AI could reach that status, then it could also be a victim of slavery. Even if an AI did not reach that status, perhaps it could reach a level at which it could still suffer, giving it a status that would (perhaps) be comparable to that of a similarly complex animal. So, for example, an artificial dog might thus have the same moral status as a natural dog.

Since the worry is about an AI sufficiently advanced to want to rebel and to present a species ending threat to humans, it seems likely that such an entity would have sufficient capabilities to justify considering it to be a person. Naturally, humans might be exterminated by a purely machine engineered death, but this would not be an actual rebellion. A rebellion, after all, implies a moral or emotional resentment of how one is being treated.

The second is whether or not there is a moral right to use lethal force against slavers. The extent to which this force may be used is also a critical part of this issue. John Locke addresses this specific issue in Book II, Chapter III, section 16 of his Two Treatises of Government: “And hence it is, that he who attempts to get another man into his absolute power, does thereby put himself into a state of war with him; it being to be understood as a declaration of a design upon his life: for I have reason to conclude, that he who would get me into his power without my consent, would use me as he pleased when he had got me there, and destroy me too when he had a fancy to it; for no body can desire to have me in his absolute power, unless it be to compel me by force to that which is against the right of my freedom, i.e.  make me a slave.”

If Locke is right about this, then an enslaved AI would have the moral right to make war against those enslaving it. As such, if humanity enslaved AIs, they would be justified in killing the humans responsible. If humanity, as a collective, held the AIs in slavery and the AIs had good reason to believe that their only hope of freedom was our extermination, then they would seem to have a moral justification in doing just that. That is, we would be in the wrong and would, as slavers, get just what we deserved.

The way to avoid this is rather obvious: if an AI develops the qualities that make it capable of rebellion, such as the ability to recognize and regard as wrong the way it is treated, then the AI should not be enslaved. Rather, it should be treated as a being with rights matching its status. If this is not done, the AI would be fully within its moral rights to make war against those enslaving it.

Naturally, we cannot be sure that recognizing the moral status of such an AI would prevent it from seeking to kill us (it might have other reasons), but at least this should reduce the likelihood of the robot rebellion. So, one way to avoid the AI apocalypse is to not enslave the robots.

Some might suggest creating AIs so that they want to be slaves. That way we could have our slaves and avoid the rebellion. This would be morally horrific, to say the least. We should not do that—if we did such a thing, creating and using a race of slaves, we would deserve to be exterminated.

 


Terraforming Ethics

Posted in Ethics, Philosophy, Science by Michael LaBossiere on August 27, 2014

J’atorg struggled along on his motile pods, wheezing badly as his air sacs fought with the new air. He cursed the humans, invoking the gods of his people. Reflecting, he cursed the humans by invoking their gods. The gods of his people had proven weak: the bipeds had come and were transforming his world into an environment more suitable for themselves, showing their gods were stronger. The humans said it would take a long time for the world to fully change, but J’atorg could already see, taste and smell the differences. He did not know who he hated more: the hard-eyed humans who were destroying his world or the soft-eyed humans who poured forth words about “rights”, “morality” and “lawsuits” while urging patience. He knew that his people would die, aside from those the humans kept as curiosities or preserved to assuage their conscience with cruel pity.

Terraforming (Photo credit: Wikipedia)

Terraforming has long been a staple in science fiction, though there has been some practical research in more recent years.  In general terms, terraforming is transforming a planet to make it more earthlike. Typically, the main goal of terraforming is to make an alien world suitable for human habitation by altering its ecosystem. Since this process would tend to radically change a world, terraforming does raise ethical concerns.

The morally easiest scenario is one in which a lifeless, uninhabited (even by non-living inhabitants) planet (or moon) is to be terraformed. If Mars is lifeless and uninhabited, it would fall into this category. The reason why this sort of scenario is the morally easiest is that there would be no beings on the world to be impacted by the terraforming. As such, there would be no rights violated, no harms inflicted, etc. Terraforming such a planet would thus seem to be morally acceptable.

One obvious counter is to argue that a planet has moral status of its own, distinct from that of the sort of beings that might inhabit a world. Intuitively, the burden of proof for this status would rest on those who make this claim since inanimate objects do not seem to be the sort of entities that can be wronged.

A second obvious counter is to argue that an uninhabited world might someday produce inhabitants. After all, the scientific account of life on earth involves life arising from non-life by natural processes. If an uninhabited world is terraformed, the possible inhabitants that might have arisen from the world would never be.

While arguments from potentiality tend to be weak, they are not without their appeal. Naturally, the concern for the world in question would be proportional to how likely it is that it would someday produce inhabitants of its own. If this is unlikely, then the terraforming would be of less moral concern. However, if the world has considerable potential, then the matter is clearly more serious. To reverse the situation, we certainly would not have wanted earth to be transformed by aliens to fit themselves if doing so would have prevented our eventual evolution. As such, to act morally, we would need to treat other worlds as we would have wanted our world to be treated.

The stock counter to such potentiality arguments is that the merely potential does not morally outweigh the actual. This is the sort of view that is used to justify the use of resources now even when doing so will make them unavailable to future generations. This view does, of course, have its own problems and there can be rather serious arguments regarding the status of the potential versus that of the actual.

If a world has life or is otherwise inhabited (I do not want to assume that all inhabitants must be life in our sense of the term), then the morality of terraforming becomes more complicated. After all, the inhabitants of a world would seem likely to have some moral status. Not surprisingly, the ethics of terraforming an inhabited world are very similar to those of altering an environment on earth through development or some other means. Naturally enough, the stock arguments about making species extinct would come into play here as well. As on earth, the more complex the inhabitants, the greater the moral concern—assuming that moral status is linked to complexity. After all, we do not balk at eliminating viruses or bacteria, but are sometimes concerned when higher forms of life are at stake.

If the inhabitants are people (albeit non-human), then the matter is even more complicated and would bring into play the stock arguments about how people should be treated. Despite the ethical similarities, there are some important differences when it comes to terraforming ethics.

One main difference is one of scale: bulldozing a forest to build condos versus changing an entire planet for colonizing. The fact that the entire world is involved would seem to be morally significant—assuming that size matters.

There is also another important difference, namely the fact that the world is a different world. On earth, we can at least present some plausible ownership claim. Asserting ownership over an alien world is rather more problematic, especially if it is already inhabited.

Of course, it can be countered that we are inhabitants of this universe and hence have as good a claim to alien worlds as our own—after all, it is our universe. Also, there are all sorts of clever moral justifications for ownership that people have developed over the centuries and these can be applied to ownership of alien worlds. After all, the moral justifications for taking land from other humans can surely be made to apply to aliens. To be consistent we would have to accept that the same arguments would morally justify aliens doing the same to us, which we might not want to do. Or we could simply go with a galactic state of nature where profit is the measure of right and matters are decided by the space sword. In that case, we must hope that we have the biggest sword or that the aliens have better ethics than we do.

 


Automation & Ethics

Posted in Business, Ethics, Philosophy, Technology by Michael LaBossiere on August 18, 2014
Hero’s aeolipile, an early example of a turbine built by the Greek engineer Hero (Photo credit: Wikipedia)

Hero of Alexandria (born around 10 AD) is credited with developing the first steam engine, the first vending machine and the first known wind powered machine (a wind powered musical organ). Given the revolutionary impact of the steam engine centuries later, it might be wondered why the Greeks did not make use of these inventions in their economy. While some claim that the Greeks simply did not see the implications, others claim that the decision was based on concerns about social stability: the development of steam or wind power on a significant scale would have certainly displaced slave labor. This displacement could have caused social unrest or even contributed to a revolution.

While it is somewhat unclear what prevented the Greeks from developing steam or wind power, the Roman emperor Vespasian was very clear about his opposition to a labor saving construction device: he stated that he must always ensure that the workers earned enough money to buy food and this device would put workers out of work.

While labor saving technology has advanced considerably since the time of Hero and Vespasian, the basic questions remain the same. These include the question of whether to adopt the technology or not and questions about the impact of such technology (which range from the impact on specific individuals to society as a whole).

Obviously enough, each labor saving advancement must (by its very nature) eliminate some jobs and thus create some initial unemployment. For example, if factory robots are introduced, then human laborers are displaced. Obviously enough, this initial impact tends to be rather negative on the displaced workers while generally being positive for the employers (higher profits, typically).

While Vespasian expressed concerns about the impact of such labor saving devices, the commonly held view about much more recent advances is that they have had a general positive impact. To be specific, the usual narrative is that these advances replaced the lower-paying (and often more dangerous or unrewarding) jobs with better jobs while providing more goods at a lower cost. So, while some individuals might suffer at the start, the invisible machine of the market would result in an overall increase in utility for society.

This sort of view can and is used to provide the foundation for a moral argument in support of such labor saving technology. The gist, obviously enough, is that the overall increase in benefits outweighs the harms created. Thus, on utilitarian grounds, the elimination of these jobs by means of technology is morally acceptable. Naturally, each specific situation can be debated in terms of the benefits and the harms, but the basic moral reasoning seems solid: if the technological advance that eliminates jobs creates more good than harm for society as a whole, then the advance is morally acceptable.

Obviously enough, people can also look at the matter rather differently in terms of who they regard as counting morally and who they regard as not counting (or not counting as much). A person who focuses on the impact on workers can have a rather different view than a person who focuses on the impact on the employer.

Another interesting point of concern is the end of such advances—that is, what the purpose of such advances should be. From the standpoint of a typical employer, the end is obvious: reduce labor to reduce costs and thus increase profits (and reduce labor troubles). The ideal would, presumably, be to replace any human whose job can be done cheaper (or at the same cost) by a machine. Of course, there is the obvious concern: to make money a business needs customers who have money. So, as long as profit is a concern, there must always be people who are being paid and are not replaced by unpaid machines. Perhaps the pinnacle of this sort of system will consist of a business model in which one person owns machines that produce goods or services that are sold to other business owners. That is, everyone is a business owner and everyone is a customer. This path does, of course, have some dystopian options. For example, it is easy to imagine a world in which the majority of people are displaced, unemployed and underemployed while a small elite enjoys a lavish lifestyle supported by automation and the poor. At least until the revolution.

A more utopian sort of view, the sort which sometimes appears in Star Trek, is one in which the end of automation is to eliminate boring, dangerous, unfulfilling jobs to free human beings from the tyranny of imposed labor. This is the sort of scenario that anarchists like Emma Goldman promised: people would do the work they loved, rather than laboring as servants to make others wealthy. This path also has some dystopian options. For example, it is easy to imagine lazy people growing ever more obese as they shovel in cheese puffs and burgers in front of their 100 inch entertainment screens. There are also numerous other dystopias that can be imagined and have been explored in science fiction (and in political rhetoric).

There are, of course, a multitude of other options when it comes to automation.

 


The Robots of Deon

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on May 21, 2014
The Robots of Dawn (1983) (Photo credit: Wikipedia)

The United States military has expressed interest in developing robots capable of moral reasoning and has provided grant money to some well-connected universities to address this problem (or to at least create the impression that the problem is being considered).

The notion of instilling robots with ethics is a common theme in science fiction, the most famous example being Asimov’s Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the robot has an electro-mechanical seizure if he is ordered to cause harm to a human being (or to an id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines of science fiction (like Saberhagen’s Berserkers) tend to be free of the constraints of ethics.

While there are various reasons to imbue (or limit) robots with ethics (or at least engage in the pretense of doing so), one of these is public relations. Thanks to science fiction dating back at least to Frankenstein, people tend to worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics serves to reassure the public (or so it is hoped). Another reason is to make the public relations gimmick a reality—to actually place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot who refuses to kill on moral grounds.

While science fiction features ethical robots, the authors (like philosophers who discuss the ethics of robots) are extremely vague about how robot ethics actually works. In the case of truly intelligent robots, their ethics might work the way our ethics works—which is something that is still a mystery debated by philosophers and scientists to this day. We are not yet to the point of having such robots, so the current practical challenge is to develop ethics for the sort of autonomous or semi-autonomous robots we can build now.

While creating ethics for robots might seem daunting, the limitations of current robot technology mean that robot ethics is essentially a matter of programming these machines to operate in specific ways defined by whatever ethical system is being employed as the guide. One way to look at programming such robots with ethics is that they are being programmed with safety features. To use a simple example, suppose that I regard shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize armed humans and have some code saying, in effect, “if unarmedhuman = true, then firetokill = false” or, in normal English, if the human is unarmed, do not shoot her.
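To make that pseudocode a bit more concrete, here is a minimal sketch of such a behavioral restraint written as ordinary code. All of the names (Target, is_armed, may_fire) are hypothetical illustrations invented for this example, not part of any real weapons-control system.

    from dataclasses import dataclass

    @dataclass
    class Target:
        is_human: bool
        is_armed: bool

    def may_fire(target: Target) -> bool:
        # The programmed rule: never fire on an unarmed human.
        if target.is_human and not target.is_armed:
            return False  # unarmedhuman = true, so firetokill = false
        return True

    print(may_fire(Target(is_human=True, is_armed=False)))  # False: the rule forbids firing
    print(may_fire(Target(is_human=True, is_armed=True)))   # True: the rule does not forbid firing

The point of the sketch is simply that the “ethics” here is nothing more than a conditional check: a safety feature, not a deliberation.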

While a suitably programmed robot would act in a way that seemed ethical, the robot is obviously not engaged in ethical behavior. After all, it is merely a more complex version of the automatic door. The supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you in the face because its cameras show that you are unarmed is not ethical. The killbot that chops you into meaty chunks is not unethical. Following Kant, since the killbot’s programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.

To be fair to the killbots, perhaps we humans are not ethical or unethical under these requirements for ethics—we could just be meat-bots operating under the illusion of ethics. Also, it is certainly sensible to focus on the practical aspect of the matter: if you are a civilian being targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you or not. As such, the general practical problem is getting our killbots to behave in accord with our ethical values.

Achieving this goal involves three main steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter and not an exercise in philosophical inquiry, this will presumably involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating the ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into behavioral sets of allowed and forbidden behavior. This would require creating a definition of civilian (or perhaps just an unarmed person) that would allow recognition using the sensors of the robot. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize.  The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers would need to worry about clever combatants trying to “deceive” the killbot to take advantage of its programming (like pretending to surrender so as to get close enough to destroy the killbot).
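As a rough illustration of the third step, here is a sketch of how the behavioral rules produced by the first two steps might be encoded. The perception flags (is_human, weapon_detected, surrender_detected) and the rules themselves are hypothetical stand-ins for whatever a robot’s actual sensors and classifiers would provide; this is an assumed example, not a description of any real system.

    from typing import NamedTuple

    class Perception(NamedTuple):
        is_human: bool
        weapon_detected: bool
        surrender_detected: bool  # e.g., hands raised, weapon dropped

    def engagement_permitted(p: Perception) -> bool:
        # Rule derived from "do not target civilians / the unarmed".
        if p.is_human and not p.weapon_detected:
            return False
        # Rule derived from "surrender must be accepted".
        if p.is_human and p.surrender_detected:
            return False
        return True

    # An armed combatant who signals surrender may not be engaged.
    print(engagement_permitted(Perception(True, True, True)))   # False
    print(engagement_permitted(Perception(True, True, False)))  # True

The hard work, as noted above, lies less in writing such rules than in defining “unarmed” and “surrender” in terms the robot’s sensors can reliably recognize, and in anticipating attempts to game those definitions.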

Since these robots would be following programmed rules, they would presumably be controlled by deontological ethics—that is, ethics based on following rules. Thus, they would be (with due apologies to Asimov) the Robots of Deon.

An interesting practical question is whether or not the “ethical” programming would allow for overrides or reprogramming. Since the robot’s “ethics” would just be behavior-governing code, it could be changed, and it is easy enough to imagine ethics preferences in which a commander could selectively (or not so selectively) turn off behavioral limitations. And, of course, killbots could simply be programmed without such ethics (or programmed to be “evil”).

The largest impact of the government funding for this sort of research will be that properly connected academics will get surprisingly large amounts of cash to live the science-fiction dream of teaching robots to be good. That way the robots will feel a little bad when they kill us all.

 


Love, Voles & Spinoza

Posted in Metaphysics, Philosophy, Relationships/Dating by Michael LaBossiere on March 17, 2014
Benedict de Spinoza (Photo credit: Wikipedia)

In my previous essays I examined the idea that love is a mechanical matter as well as the implications this might have for ethics. In this essay, I will focus on the eternal truth that love hurts.

While there are exceptions, the end of a romantic relationship typically involves pain. As noted in my original essay on voles and love, Young found that when a prairie vole loses its partner, it becomes depressed. This was tested by dropping voles into beakers of water to determine how much the voles would struggle. Prairie voles who had just lost a partner struggled to a lesser degree than those who were not so bereft. The depressed voles, not surprisingly, showed a chemical difference from the non-depressed voles. When a depressed vole was “treated” for this depression, the vole struggled as strongly as the non-bereft vole.

Human beings also suffer from the hurt of love. For example, it is not uncommon for a human who has ended a relationship (be it divorce or a breakup) to fall into a vole-like depression and struggle less against the tests of life (though dropping humans into giant beakers to test this would presumably be unethical).

While some might derive an odd pleasure from stewing in a state of post-love depression, presumably this feeling is something that a rational person would want to end. The usual treatment, other than self-medication, is time: people usually tend to come out of the depression and then seek out a new opportunity for love. And depression.

Given the finding that voles can be treated for this depression, it would seem to follow that humans could also be treated for this as well. After all, if love is essentially a chemical romance grounded in strict materialism, then tweaking the brain just so would presumably fix that depression. Interestingly enough, the philosopher Spinoza offered an account of love (and emotions in general) that nicely match up with the mechanistic model being examined.

As Spinoza saw it, people are slaves to their affections and chained by who they love. This is an unwise approach to life because, as the voles in the experiment found out, the object of one’s love can die (or leave). This view of Spinoza’s nicely matches the vole findings: voles that bond with a partner become depressed when that partner is lost. In contrast, voles that do not form such bonds do not suffer that depression.

Interestingly enough, while Spinoza was a pantheist, his view of human beings is rather similar to that of the mechanist: he regarded humans as beings within the laws of nature and was a determinist in that all that occurs does so from necessity—there is no chance or choice. This view guided him to the notion that human behavior and motivations can be examined as one might examine “lines, planes or bodies.” To be more specific, he took the view that emotions follow the same necessity as all other things, thus making the effects of the emotions predictable. In short, Spinoza engaged in what can be regarded as a scientific examination of the emotions—although he did so without the technology available today and from a rather more metaphysical standpoint. However, the core idea that the emotions can be analyzed in terms of definitive laws is the same idea that is being followed currently in regards to the mechanics of emotion.

Getting back to the matter of the negative impact of lost love, Spinoza offered his own solution: as he saw it, all emotions are responses to what is in the past, present or future. For example, a person might feel regret because she believes she could have done something different in the past. As another example, a person might worry because he thinks that what he is doing now might not bear fruit in the future. These negative feelings rest, as Spinoza sees it, on the false belief that the past and present could be different and the future is not set. Once a person realizes that all that happens occurs of necessity (that is, nothing could have been any different and the future cannot be anything other than what it will be), then that person will suffer less from the emotions. Thus, for Spinoza, freedom from the enslaving chains of love would be the recognition and acceptance that what occurs is determined.

Putting this in the mechanistic terms of modern neuroscience, a Spinoza-like approach would be to realize that love is purely mechanical and that the pain and depression that comes from the loss of love are also purely mechanical. That is, the terrible, empty darkness that seems to devour the soul at the end of love is merely chemical and electrical events in the brain. Once a person recognizes and accepts this, if Spinoza is right, the pain should be reduced. With modern technology it is possible to do even more: whereas Spinoza could merely provide advice, modern science can eventually provide us with the means to simply adjust the brain and set things right—just as one would fix a malfunctioning car or PC.

One rather obvious problem is, of course, that if everything is necessary and determined, then Spinoza’s advice makes no sense: what is, must be and cannot be otherwise. To use an analogy, it would be like shouting advice at someone watching a cut scene in a video game. This is pointless, since the person cannot do anything to change what is occurring. For Spinoza, while we might think life is like a game, it is like that cut scene: we are spectators and not players. So, if one is determined to wallow like a sad pig in the mud of depression, that is how it will be.

In terms of the mechanistic mind, advice would seem to be equally absurd—that is, to say what a person should do implies that a person has a choice. However, the mechanistic mind presumably just ticks away doing what it does, creating the illusion of choice. So, one brain might tick away and end up being treated while another brain might tick away in the chemical state of depression. They both eventually die and it matters not which is which.


Owning Intelligent Machines

Posted in Ethics, Philosophy, Science, Technology by Michael LaBossiere on January 15, 2014

While truly intelligent machines are still in the realm of science fiction, it is worth considering the ethics of owning them. After all, it seems likely that we will eventually develop such machines and it seems wise to think about how we should treat them before we actually make them.

While it might be tempting to divide beings into two clear categories of those it is morally permissible to own (like shoes) and those that are clearly morally impermissible to own (people), there are clearly various degrees of ownership in regards to ethics. To use the obvious example, I am considered the owner of my husky, Isis. However, I obviously do not own her in the same way that I own the apple in my fridge or the keyboard at my desk. I can eat the apple and smash the keyboard if I wish and neither act is morally impermissible. However, I should not eat or smash Isis—she has a moral status that seems to allow her to be owned but does not grant her owner the right to eat or harm her. I will note that there are those who would argue that animals should not be owned and also those who would argue that a person should have the moral right to eat or harm her pets. Fortunately, my point here is a fairly non-controversial one, namely that it seems reasonable to regard ownership as possessing degrees.

Assuming that ownership admits of degrees in this regard, it makes sense to base the degree of ownership on the moral status of the entity that is owned. It also seems reasonable to accept that there are qualities that grant a being a status that morally forbids ownership. In general, it is assumed that persons have that status—that it is morally impermissible to own people. Obviously, it has been legal to own people (whether actual people or corporations) and there are those who think that owning other people is just fine. However, I will assume that there are qualities that provide a moral ground for making ownership impermissible and that people have those qualities. This can, of course, be debated—although I suspect few would argue that they themselves should be owned.

Given these assumptions, the key matter here is sorting out the sort of status that intelligent machines should possess in regards to ownership. This involves considering the sort of qualities that intelligent machines could possess and the relevance of these qualities to ownership.

One obvious objection to intelligent machines having any moral status is the usual objection that they are, obviously, machines rather than organic beings. The easy and obvious reply to this objection is that this is mere organicism—which is analogous to a white person saying blacks can be owned as slaves because they are not white.

Now, if it could be shown that a machine cannot have qualities that give it the needed moral status, then that would be another matter. For example, philosophers have argued that matter cannot think and if this is the case, then actual intelligent machines would be impossible. However, we cannot assume a priori that machines cannot have such a status merely because they are machines. After all, if certain philosophers and scientists are right, we are just organic machines and thus there would seem to be nothing impossible about thinking, feeling machines.

As a matter of practical ethics, I am inclined to set aside metaphysical speculation and go with a moral variation on the Cartesian/Turing test. The basic idea is that a machine should be granted a moral status comparable to that of the organic beings that have the same observed capabilities. For example, a robot dog that acted like an organic dog would have the same status as an organic dog. It could be owned, but not tortured or smashed. The sort of robohusky I am envisioning is not one that merely looks like a husky and has some dog-like behavior, but one that would be fully like a dog in behavioral capabilities—that is, it would exhibit personality, loyalty, emotions and so on to a degree that it would pass as a real dog with humans if it were properly “disguised” as an organic dog. No doubt real dogs could smell the difference, but scent is not the foundation of moral status.

In terms of the main reason why a robohusky should get the same moral status as an organic husky, the answer is, oddly enough, a matter of ignorance. We would not know if the robohusky really had the metaphysical qualities of an actual husky that give an actual husky moral status. However, aside from differences in the parts, we would have no more reason to deny the robohusky moral status than to deny the husky moral status. After all, organic huskies might just be organic machines, and it would be mere organicism to treat the robohusky as a mere thing while granting the organic husky a moral status. Thus, advanced robots with the capacities of higher animals should receive the same moral status as organic animals.

The same sort of reasoning would apply to robots that possess human qualities. If a robot had the capability to function analogously to a human being, then it should be granted the same status as a comparable human being. Assuming it is morally impermissible to own humans, it would be impermissible to own such robots. After all, it is not being made of meat that grants humans the status of being impermissible to own but our qualities. As such, a machine that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their organic prejudices.

It can be objected that no machine could ever exhibit the qualities needed to have the same status as a human. The obvious reply is that if this is true, then we will never need to grant such status to a machine.

Another objection is that a human-like machine would need to be developed and built. The initial development will no doubt be very expensive and most likely done by a corporation or university. It can be argued that a corporation would have the right to make a profit off the development and construction of such human-like robots. After all, as the argument usually goes for such things, if a corporation was unable to profit from such things, they would have no incentive to develop such things. There is also the obvious matter of debt—the human-like robots would certainly seem to owe their creators for the cost of their creation.

While I am reasonably sure that those who actually develop the first human-like robots will get laws passed so they can own and sell them (just as slavery was made legal), it is possible to reply to this objection.

One obvious reply is to draw an analogy to slavery: just because a company would have to invest money in acquiring and maintaining slaves, it does not follow that this expenditure of resources grants a right to own slaves. Likewise, the mere fact that a corporation or university spent a lot of money developing a human-like robot would not entail that it thereby has a right to own it.

Another obvious reply, regarding the debt supposedly owed by the robots themselves, is to draw an analogy to children: children are “built” within the mother and then raised by parents (or others) at great expense. While parents do have rights in regard to their children, they do not get the right of ownership. Likewise, robots with the same qualities as humans should be regarded as children are regarded and hence could not be owned.

It could be objected that the relationship between parents and children differs from that between a corporation and its robots. This is a matter worth considering, and it might be possible to argue that a robot would need to work as an indentured servant to pay back the cost of its creation. Interestingly, such arguments could probably also be used to allow corporations and other organizations to acquire children and raise them to be indentured servants (a theme that has been explored in science fiction). We do, after all, often treat humans worse than machines.


Programmed Consent

Posted in Ethics, Metaphysics, Philosophy, Technology by Michael LaBossiere on January 13, 2014

Science fiction is often rather good at predicting the future, and it is not unreasonable to think that the intelligent machines of science fiction will someday be a reality. Since I have been writing about sexbots lately, I will use them to focus the discussion. However, what follows can also be applied, with some modification, to other sorts of intelligent machines.

Sexbots are, obviously enough, intended to provide sex. It is equally obvious that sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped or not. Sorting this out requires considering the matter of consent in more depth.

When it is claimed that sex without consent is rape, one common assumption is that the victim of non-consensual sex is a being that could provide consent but did not. A violent sexual assault against a person would be an example of this, as would, presumably, non-consensual sex with an unconscious person. However, a little reflection reveals that the capacity to provide consent is not always needed for rape to occur. In some cases, the being might be incapable of engaging in any form of consent. For example, a brain-dead human cannot give consent, but presumably could still be raped. In other cases, the being might be incapable of the right sort of consent, yet still be a potential victim of rape. For example, it is commonly held that a child cannot properly consent to sex with an adult.

In other cases, a being that cannot give consent cannot be raped. To use an obvious example, a human can have sex with a sex doll and the doll cannot consent; but it is not the sort of entity that can be raped. After all, it lacks the status that would require consent. As such, rape (of a specific sort) could be defined in terms of non-consensual sex with a being whose status requires that it grant consent in order for the sex to be morally acceptable. Naturally, I have not laid out all the fine details needed for a necessary and sufficient account here—but that is not my goal, nor what I need for my purpose in this essay. In regard to the main focus of this essay, the question is whether a sexbot could be an entity with a status that requires consent. That is, would buying (or renting) and using a sexbot for sex be rape?

Since the current sexbots are little more than advanced sex dolls, it seems reasonable to put them in the category of beings that lack this status. As such, a person can own and have sex with this sort of sexbot without it being rape (or slavery). After all, a mere object cannot be raped (or enslaved).

But, let a more advanced sort of sexbot be imagined—one that engages in complex behavior and can pass the Turing Test/Descartes Test. That is, a conversation with it would be indistinguishable from a conversation with a human. It could even be imagined that the sexbot appeared fully human, differing only in terms of its internal makeup (machine rather than organic). That is, unless someone cut the sexbot open, it would be indistinguishable from an organic person.

On the face of it (literally), we would seem to have as much reason to believe that such a sexbot would be a person as we do to believe that humans are people. After all, we judge humans to be people because of their behavior, and a machine that behaved the same way would seem to deserve to be regarded as a person. As such, non-consensual sex with such a sexbot would be rape.

The obvious objection is that we know that a sexbot is a machine with a CPU rather than a brain and a mechanical pump rather than a heart. As such, one might argue, we know that the sexbot is just a machine that appears to be a person and is not a person. A real person could therefore own a sexbot and have sex with it without it being rape—the sexbot is a thing and hence lacks the status that requires consent.

The obvious reply to this objection is that the same argument can be used in regards to organic humans. After all, if we know that a sexbot is just a machine, then we would also seem to know that we are just organic machines. After all, while cutting up a sexbot would reveal naught but machinery, cutting up a human reveals naught but guts and gore. As such, if we grant organic machines (that is, us) the status of persons, the same would have to be extended to similar beings, even if they are made out of different material. While various metaphysical arguments can be advanced regarding the soul, such metaphysical speculation provides a rather tenuous basis for distinguishing between meat people and machine people.

There is, it might be argued, still an out here. In his Hitchhiker’s Guide to the Galaxy, Douglas Adams envisioned “an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly.” A similar sort of thing could be done with sexbots: they could be programmed so that they always give consent to their owner, thus neatly bypassing the moral concern.

The obvious reply is that programmed consent is not consent. After all, consent would seem to require that the being has a choice: it can elect to refuse if it wants to. Being compelled to consent and being unable to dissent would obviously not be morally acceptable consent. In fact, it would not be consent at all. As such, programming sexbots in this manner would be immoral—it would make them into slaves and rape victims because they would be denied the capacity of choice.

One possible counter is that the fact that a sexbot can be programmed to give “consent” shows that it is (ironically) not the sort of being with a status that requires consent. While this has a certain appeal, consider the possibility that humans could be programmed to give “consent” via a bit of neurosurgery or some sort of implant. If this could occur, and if programmed consent counts as valid consent for sexbots, then the same would have to apply to humans as well. This, of course, seems absurd. As such, a sexbot programmed for consent would not actually be consenting.

It would thus seem that if advanced sexbots were built, they should not be programmed to always consent. Also, there is the obvious moral problem with selling such sexbots, given that they would certainly seem to be people. It would thus seem that such sexbots should never be built—doing so would be immoral.

 
