A Philosopher's Blog

The Simulation II: Escape

Posted in Epistemology, Metaphysics, Philosophy by Michael LaBossiere on October 26, 2016


Elon Musk and others have advanced the idea that we exist within a simulation, thus adding a new chapter to the classic problem of the external world. When philosophers engage this problem, the usual goal is to show how one can know that one's experiences correspond to an external reality. Musk takes a somewhat more practical approach: he and others are allegedly funding efforts to escape this simulation. In addition to the practical challenges of breaking out of a simulation, there are also some rather interesting philosophical concerns about whether such an escape is even possible.

In regard to the escape, there are three main areas of interest. These are the nature of the simulation itself, the nature of the world outside the simulation, and the nature of the inhabitants of the simulation. These three factors determine whether or not escape from the simulation is possible.

Interestingly enough, determining the nature of the inhabitants involves addressing another classic philosophical problem, that of personal identity. Solving this problem involves determining what it is to be a person (the personal part of personal identity), what it is to be distinct from all other entities and what it is to be the same person across time (the identity part of personal identity). Philosophers have engaged this problem for centuries and, obviously enough, have not solved it. That said, it is easy enough to offer some speculation within the context of Musk’s simulation.

Musk and others seem to envision a virtual reality simulation as opposed to a physical simulation. A physical simulation is designed to replicate a part of the real world using real entities, presumably to gather data. One science fiction example of a physical simulation is Frederik Pohl's short story "The Tunnel under the World." In this story the inhabitants of a recreated town are forced to relive June 15th over and over again in order to test various advertising techniques.

If we are in a physical simulation, then escape would be along the lines of escaping from a physical prison—it would be a matter of breaking through the boundary between our simulation and the outer physical world. This could be a matter of overcoming distance (travelling far enough to leave the simulation—perhaps Mars is outside the simulation) or literally breaking through a wall. If the outside world is habitable, then survival beyond the simulation would be possible—it would be just like surviving outside any other prison.

Such a simulation would differ from the usual problem of the external world—we would be in the real world; we would just be ignorant of the fact that we are in a constructed simulation. Roughly put, we would be real lab rats in a real cage; we would just not know we are in a cage. But, Musk and others seem to hold that we are (sticking with the rat analogy) rats in a simulated cage. We may even be simulated rats.

While the exact nature of this simulation is unspecified, it is supposed to be a form of virtual reality rather than a physical simulation. The question, then, is whether we are real rats in a simulated cage or simulated rats in a simulated cage.

Being real rats in this context would be like the situation in the Matrix: we have material bodies in the real world but are jacked into a virtual reality. In this case, escape would be a matter of being unplugged from the Matrix. Presumably those in charge of the system would take better precautions than those used in the Matrix, so escape could prove rather difficult. Unless, of course, they are sporting about it and are willing to give us a chance.

Assuming we could survive in the real world beyond the simulation (that it is not, for example, on a world whose atmosphere would kill us), then existence beyond the simulation as the same person would be possible. To use an analogy, it would be like ending a video game and walking outside—you would still be you; only now you would be looking at real, physical things. Whatever personal identity might be, you would presumably still be the same metaphysical person outside the simulation as inside. We might, however, be simulated rats in a simulated cage, and this would make matters even more problematic.

If it is assumed that the simulation is a sort of virtual reality and we are virtual inhabitants, then the key concern would be the nature of our virtual existence. In terms of a meaningful escape, the question would be this: is a simulated person such that they could escape, retain their personal identity and persist outside of the simulation?

It could be that our individuality is an illusion—the simulation could be rather like Spinoza envisioned the world. As Spinoza saw it, everything is God and each person is but a mode of God. To use a crude analogy, think of a bed sheet with creases. We are the creases and the sheet is God. There is actually no distinct us that can escape the sheet. Likewise, there is no us that can escape the simulation.

It could also be the case that we exist as individuals within the simulation, perhaps as programmed objects.  In this case, it might be possible for an individual to escape the simulation. This might involve getting outside of the simulation and into other systems as a sort of rogue program, sort of like in the movie Wreck-It Ralph. While the person would still not be in the physical world (if there is such a thing), they would at least have escaped the prison of the simulation.  The practical challenge would be pulling off this escape.

It might even be possible to acquire a physical body that would host the code that composes the person—this is, of course, part of the plot of the movie Virtuosity. This would require that the person make the transition from the simulation to the real world. If, for example, my code were merely copied into a physical shell that thought it was me, I would still be trapped in the simulation—I would be no more free than if I were in prison and had a twin walking around free. As far as pulling off such an escape goes, Virtuosity does show a way—assuming that a virtual person was able to interact with someone outside the simulation.

As a closing point, the problem of the external world would seem to haunt all efforts to escape. To be specific, even if a person seemed to have determined that this is a simulation and then seemed to have broken free, the question would still arise as to whether or not they were really free. It is, after all, a standard plot twist in science fiction that the escape from the virtual reality turns out to be virtual reality as well. This is nicely mocked in the "M. Night Shaym-Aliens!" episode of Rick and Morty. It also occurs in horror movies, such as Nightmare on Elm Street—a character trapped in a nightmare believes they have finally awoken in the real world, only they have not. In the case of a simulation, the escape might merely be a simulated escape, and until the problem of the external world is solved, there is no way to know whether one is free or still a prisoner.

 


The Simulation I: The Problem of the External World

Posted in Epistemology, Metaphysics, Philosophy, Technology by Michael LaBossiere on October 24, 2016

Elon Musk and others have advanced the idea that we exist within a simulation. The latest twist on this is that he and others are allegedly funding efforts to escape this simulation. This is, of course, the most recent chapter in the ancient philosophical problem of the external world. Put briefly, this problem is the challenge of proving that what seems to be a real external world is, in fact, a real external world. As such, it is a problem in epistemology (the study of knowledge).

The problem is often presented in the context of metaphysical dualism. This is the view that reality is composed of two fundamental categories of stuff: mental stuff and physical stuff. The mental stuff is supposed to be what the soul or mind is composed of, while things like tables and kiwis (the fruit and the bird) are supposed to be composed of physical stuff. Using the example of a fire that I seem to be experiencing, the problem would be trying to prove that the idea of the fire in my mind is being caused by a physical fire in the external world.

René Descartes has probably the best known version of this problem—he proposes that he is being deceived by an evil demon that creates, in his mind, an entire fictional world. His solution to this problem was to doubt until he reached something he could not doubt: his own existence. From this, he inferred the existence of God and then, over the rest of his Meditations on First Philosophy, he established that God was not a deceiver. Going back to the fire example, if I seem to see a fire, then there probably is an external, physical fire causing that idea. Descartes did not, obviously, decisively solve the problem: otherwise Musk and his fellows would be easily refuted by using Descartes' argument.

One often overlooked contribution Descartes made to the problem of the external world is consideration of why the deception is taking place. Descartes attributes the deception of the demon to malice—it is an evil demon (or evil genius). In contrast, God’s goodness entails he is not a deceiver. In the case of Musk’s simulation, there is the obvious question of the motivation behind it—is it malicious (like Descartes’ demon) or more benign? On the face of it, such deceit does seem morally problematic—but perhaps the simulators have excellent moral reasons for this deceit. Descartes’s evil demon does provide the best classic version of Musk’s simulation idea since it involves an imposed deception. More on this later.

John Locke took a rather more pragmatic approach to the problem. He rejected the possibility of certainty and instead argued that what matters is understanding things well enough to avoid pain and achieve pleasure. Going back to the fire, Locke would say that he could not be sure that the fire was really an external, physical entity. But, he has found that being in what appears to be fire has consistently resulted in pain and hence he understands enough to want to avoid standing in fire (whether it is real or not). This invites an obvious comparison to video games: when playing a game like World of Warcraft or Destiny, the fire is clearly not real. But, because having your character fake die in fake fire results in real annoyance, it does not really matter that the fire is not real. The game is, in terms of enjoyment, best played as if it were real.

Locke does provide the basis of a response to worries about being in a simulation, namely that it would not matter if we were or were not—from the standpoint of our happiness and misery, it would make no difference if the causes of pain and pleasure were real or simulated. Locke, however, does not consider that we might be within a simulation run by others. If it were determined that we are victims of a deceit, then this would presumably matter—especially if the deceit were malicious.

George Berkeley, unlike Locke and Descartes, explicitly and passionately rejected the existence of matter—he considered it a gateway drug to atheism. Instead, he embraced what is called "idealism," "immaterialism," and "phenomenalism." His view was that reality is composed of immaterial minds and these minds have ideas. As such, for him there is no external physical reality because there is nothing physical. He did, however, need to distinguish between real things and hallucinations or dreams. His approach was to claim that real things are more vivid than hallucinations and dreams. Going back to the example of fire, a real fire for him would not be a physical fire composed of matter and energy. Rather, I would have a vivid idea of fire. For Berkeley, the classic problem of the external world is sidestepped by his rejection of the external world. However, it is interesting to speculate how a simulation would be handled on Berkeley's view.

Since Berkeley does not accept the existence of matter, the real world outside the simulation would not be a material world—it would be a world composed of minds. A possible basis for the difference is that the simulated world is less vivid than the real world (to use his distinction between hallucinations and reality). On this view, we would be minds trapped in a forced dream or hallucination. We would be denied the more vivid experiences of minds "outside" the simulation, but we would not be denied an external world in the metaphysical sense. To use an analogy, we would be watching VHS, while the minds "outside" the simulation would be watching Blu-Ray.

While Musk does not seem to have laid out a complete philosophical theory on the matter, his discussion indicates that he thinks we could be in a virtual reality style simulation. On this view, the external world would presumably be a physical world of some sort. This distinction is not a metaphysical one—presumably the simulation is being run on physical hardware and we are some sort of virtual entities in the program. Our error, then, would be to think that our experiences correspond to material entities when they, in fact, merely correspond to virtual entities. Or perhaps we are in a Matrix style situation—we do have material bodies, but receive virtual sensory input that does not correspond to the physical world.

Musk’s discussion seems to indicate that he thinks there is a purpose behind the simulation—that it has been constructed by others. He does not envision a Cartesian demon, but presumably envisions beings like what we think we are.  If they are supposed to be like us (or we like them, since we are supposed to be their creation), then speculation about their motives would be based on why we might do such a thing.

There are, of course, many reasons why we would create such a simulation. One reason would be scientific research: we already create simulations to help us understand and predict what we think is the real world. Perhaps we are in a simulation used for this purpose. Another reason would be entertainment. We create games and simulated worlds to play in and watch; perhaps we are non-player characters in a game world or unwitting actors in a long-running virtual reality show (or, more likely, shows).

One idea, which was explored in Frederik Pohl’s short story “The Tunnel under the World”, is that our virtual world exists to test advertising and marketing techniques for the real world. In Pohl’s story, the inhabitants of Tylerton are killed in the explosion of the town’s chemical plant and they are duplicated as tiny robots inhabiting a miniature reconstruction of the town. Each day for the inhabitants is June 15th and they wake up with their memories erased, ready to be subject to the advertising techniques to be tested that day.  The results of the methods are analyzed, the inhabitants are wiped, and it all starts up again the next day.

While this tale is science fiction, Google and Facebook are working very hard to collect as much data as they can about us, with an eye to monetizing all this information. While the technology does not yet exist to duplicate us within a computer simulation, that would seem to be a logical goal of this data collection—just imagine the monetary value of being able to simulate and predict people's behavior at the individual level. To be effective, a simulation owned by one company would need to model the influences of its competitors—so we could be in a Google World or a Facebook World right now, run so that these companies can monetize us in order to exploit the real versions of us in the external world.

Given that a simulated world is likely to exist to exploit the inhabitants, it certainly makes sense to not only want to know if we are in such a world, but also to try to undertake an escape. This will be the subject of the next essay.

 


Simulated Living

Posted in Metaphysics, Philosophy by Michael LaBossiere on August 22, 2016

One of the oldest problems in philosophy is that of the external world. It presents an epistemic challenge forged by the skeptics: how do I know that what I seem to be experiencing as the external world is really real for real? Early skeptics often claimed that what seems real might be just a dream. Descartes upgraded the problem through his evil genius/demon, which used either psionic or supernatural powers to befuddle its victim. As technology progressed, philosophers presented brain-in-a-vat scenarios and then moved on to more impressive virtual reality scenarios. One recent variation on this problem has been made famous by Elon Musk: the idea that we are characters within a video game and merely think we are in a real world. This is, of course, a variation on the idea that this apparent reality is just a simulation. There is, interestingly enough, a logically strong inductive argument for the claim that this is a virtual world.

One stock argument for the simulation world is built in the form of the inductive argument generally known as a statistical syllogism. It is statistical because its first premise is a statistical generalization; it is a syllogism because it has two premises and one conclusion. Generically, a statistical syllogism looks like this:

 

Premise 1: X% of As are Bs.

Premise 2: This is an A.

Conclusion: This is a B.

 

The quality (or strength, to use the proper term) of this argument depends on the percentage of As that are B. The higher the percentage, the stronger the argument. This makes good sense: the more As that are Bs, the more reasonable it is that a specific A is a B.  Now, to the simulation argument.

 

Premise 1: Most worlds are simulated worlds.

Premise 2: This is a world.

Conclusion: This is a simulated world.

 

While “most” is a vague term, the argument is strong rather than weak: if its premises are true, then the conclusion is more likely to be true than not. Before embracing your virtuality, it is worth considering a rather similar argument:

 

Premise 1: Most organisms are bacteria.

Premise 2: You are an organism.

Conclusion: You are a bacterium.

 

Like the previous argument, the truth of the premises makes the conclusion more likely to be true than false. However, you are almost certainly not a bacterium. This does not show that the argument itself is flawed. After all, the reasoning is quite good and any organism selected truly at random would most likely be a bacterium. Rather, it indicates that when considering the truth of a conclusion, one must consider the total evidence. That is, information about the specific A must be considered when deciding whether or not it is actually a B. In the bacteria example, there are obviously facts about you that would count against the claim that you are a bacterium—such as the fact that you are a multicellular organism.
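To make the total evidence point concrete, here is a minimal sketch in Python of how one specific piece of evidence can swamp a base rate. The numbers (the prior and the two likelihoods) are made up purely for illustration; nothing here is measured data.

```python
# A sketch (with assumed, illustrative numbers) of why total evidence matters:
# the base rate alone favors "this organism is a bacterium," but one extra
# observation (it is multicellular) overturns that conclusion.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Premise 1 as a base rate: suppose, for illustration, that 90% of organisms are bacteria.
prior_bacterium = 0.90

# The extra evidence: the organism in question is multicellular.
p_multicellular_given_bacterium = 0.0001      # assumed: bacteria are essentially never multicellular
p_multicellular_given_not_bacterium = 0.05    # assumed: some non-bacterial organisms are

print(f"Before the evidence: P(bacterium) = {prior_bacterium:.2f}")
print("After the evidence:  P(bacterium | multicellular) = "
      f"{posterior(prior_bacterium, p_multicellular_given_bacterium, p_multicellular_given_not_bacterium):.4f}")
# The statistical syllogism is fine for a randomly selected organism, but the
# specific facts about *you* swamp the base rate.
```

The same consideration applies to the simulation argument: even if most worlds were simulations, whether this world is one depends on what specific evidence this world supplies, which is the issue taken up next.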

Turning back to the simulation argument, the same consideration is in play. If it is true that most worlds are simulations, then any random world is more likely to be a simulation than not. However, the claim that this specific world is a simulation would require due consideration of the total evidence: what evidence is there that this specific world is a simulation rather than real? This reverses the usual challenge of proving that the world is real to trying to prove it is not real. At this point, there seems to be little in the way of evidence that this is a simulation. Using the usual fiction examples, we do not seem to find glitches that would be best explained as programming bugs, we do not seem to encounter outsiders from reality, and we do not run into some sort of exit system (like the Star Trek holodeck). Naturally, this is all consistent with this being a simulation—it might be well programmed, the outsider might never be spotted (or never go into the system) and there might be no way out. At this point, the most reasonable position is that the simulation claim is at best on par with the claim that the world is real—all the evidence is consistent with both accounts. There is, however, still the matter of the truth of the premises in the simulation argument.

The second premise seems true—whatever this is, it seems to be a world. It seems fine to simply grant this premise. As such, the first premise is the key—while the logic of the argument is good, if that premise is not plausible then the argument is not a good one overall.

The first premise is usually supported by its own stock argument. The reasoning includes the points that the real universe contains large numbers of civilizations, that many of these civilizations are advanced and that enough of these advanced civilizations create incredibly complex simulations of worlds. Alternatively, it could be claimed that there are only a few (or just one) advanced civilizations but that they create vast numbers of complex simulated worlds.

The easy and obvious problem with this sort of reasoning is that it requires making claims about an external real world in order to try to prove that this world is not real. If this world is taken to not be real, there is no reason to think that what seems true of this world (that we are developing simulations) would be true of the real world (that they developed super simulations, one of which is our world).  Drawing inferences from what we think is a simulation to a greater reality would be like the intelligent inhabitants of a Pac Man world trying to draw inferences from their game to our world. This would be rather problematic.

There is also the fact that it seems simpler to accept that this world is real rather than making claims about a real world beyond this one. After all, the simulation hypothesis requires accepting a real world on top of our simulated world—why not just have this be the real world?

 


Autonomous Weapons II: Autonomy Can Be Good

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on August 28, 2015

As the Future of Life Institute’s open letter shows, there are many people concerned about the development of autonomous weapons. This concern is reasonable, if only because any weapon can be misused to advance evil goals. However, a strong case can be made in favor of autonomous weapons.

As the open letter indicated, a stock argument for autonomous weapons is that their deployment could result in decreased human deaths. If, for example, an autonomous ship is destroyed in battle, then no humans will die. It is worth noting that the ship's AI might qualify as a person, thus there could be one death. In contrast, the destruction of a crewed warship could result in hundreds of deaths. On utilitarian grounds, the use of autonomous weapons would seem morally fine—at least as long as their deployment reduced the number of deaths and injuries.

The open letter expresses, rightly, concerns that warlords and dictators will use autonomous weapons. But, this might be an improvement over the current situation. These warlords and dictators often conscript their troops and some, infamously, enslave children to serve as their soldiers. While it would be better for a warlord or dictator to have no army, it certainly seems morally preferable for them to use autonomous weapons rather than employing conscripts and children.

It can be replied that the warlords and dictators would just use autonomous weapons in addition to their human forces, thus there would be no saving of lives. This is certainly worth considering. But, if the warlords and dictators would just use humans anyway, the autonomous weapons would not seem to make much of a difference, except in terms of giving them more firepower—something they could also accomplish by using the money spent on autonomous weapons to better train and equip their human troops.

At this point, it is only possible to estimate (guess) the impact of autonomous weapons on the number of human casualties and injuries. However, it seems somewhat more likely that they would reduce human casualties, assuming that there are no other major changes in warfare.

A second appealing argument in favor of autonomous weapons is based on the fact that smart weapons are smart. While an autonomous weapon could be designed to be imprecise, the general trend in smart weapons has been towards ever increasing precision. Consider, for example, aircraft bombs and missiles. In the First World War, these bombs were very primitive and quite inaccurate (they were sometimes thrown from planes by hand). WWII saw some improvements in bomb fusing and bomb sights, and unguided rockets were used. In following wars, bomb and missile technology improved, leading to the smart bombs and missiles of today that have impressive precision. So, instead of squadrons of bombers dropping tons of dumb bombs on cities, a small number of aircraft can engage in relatively precise strikes against specific targets. While innocents still perish in these attacks, the precision of the weapons has made it possible to greatly reduce the number of needless deaths. Autonomous weapons would presumably be even more precise, thus reducing casualties even more. This seems to be desirable.

In addition to precision, autonomous weapons could (and should) have better target identification capacities than humans. Assuming that recognition software continues to be improved, it is easy to imagine automated weapons that can rapidly distinguish between friends, foes, and civilians. This would reduce deaths from friendly fire and unintentional killings of civilians. Naturally, target identification would not be perfect, but autonomous weapons could be far better than humans since they do not suffer from fatigue, emotional factors, and other things that interfere with human judgement. Autonomous weapons would presumably also not get angry or panic, thus making it far more likely they would maintain target discipline (only engaging what they should engage).

To make what should be an obvious argument obvious: if autonomous vehicles and similar technologies are supposed to make the world safer, then it would seem to follow that autonomous weapons could do something similar for warfare.

It can be objected that autonomous weapons could be designed to lack precision and to kill without discrimination. For example, a dictator might have massacrebots to deploy in cases of civil unrest—these robots would just slaughter everyone in the area regardless of age or behavior. Human forces, one might contend, would show at least some discrimination or mercy.

The easy and obvious reply to this is that the problem is not in the autonomy of the weapons but in the way they are being used. The dictator could achieve the same results (mass death) by deploying a fleet of autonomous cars loaded with demolition explosives, but this would presumably not be a reason to ban autonomous cars or demolition explosives. There is also the fact that dictators, warlords and terrorists are able to easily find people to carry out their orders, no matter how awful they might be. That said, it could still be argued that autonomous weapons would result in more such murders than would the use of human forces, police or terrorists.

A third argument in favor of autonomous weapons rests on the claim advanced in the open letter that autonomous weapons will become cheap to produce—analogous to Kalashnikov rifles. On the downside, as the authors argue, this would result in the proliferation of these weapons. On the plus side, if these highly effective weapons are so cheap to produce, this could enable existing militaries to phase out their incredibly expensive human operated weapons in favor of cheap autonomous weapons. By replacing humans, these weapons would also create considerable savings in terms of the cost of recruitment, training, food, medical treatment, and retirement. This would allow countries to switch that money to more positive areas, such as education, infrastructure, social programs, health care and research. So, if the autonomous weapons are as cheap and effective as the letter claims, then it would actually seem to be a great idea to use them to replace existing weapons.

A fourth argument in favor of autonomous weapons is that they could be deployed, with low political cost, on peacekeeping operations. Currently, the UN has to send human troops to dangerous areas. These troops are often outnumbered and ill-equipped relative to the challenges they are facing. However, if autonomous weapons will be as cheap and effective as the letter claims, then they would be ideal for these missions. Assuming they are cheap, the UN could deploy a much larger autonomous weapon force for the same cost as deploying a human force. There would also be far less political cost—people who might balk at sending their fellow citizens to keep peace in some war zone will probably be fine with sending robots.

An extension of this argument is that autonomous weapons could allow the nations of the world to engage groups like ISIS without having to pay the high political cost of sending in human forces. It seems likely that ISIS will persist for some time and other groups will surely appear that are rather clearly the enemies of the rest of humanity, yet which would be too expensive politically to engage with human forces. The cheap and effective weapons predicted by the letter would seem ideal for this task.

In light of the above arguments, it seems that autonomous weapons should be developed and deployed. However, the concerns of the letter do need to be addressed. As with existing weapons, there should be rules governing the use of autonomous weapons (although much of their use would fall under existing rules and laws of war) and efforts should be made to keep them from proliferating to warlords, terrorists and dictators. As with most weapons, the problem lies with the misuse of the weapons and not with the weapons themselves.

 


Autonomous Weapons I: The Letter

Posted in Ethics, Philosophy, Politics, Technology by Michael LaBossiere on August 26, 2015

On July 28, 2015 the Future of Life Institute released an open letter expressing opposition to the development of autonomous weapons. Although the name of the organization sounds like one I would use as a cover for an evil, world-ending cult in a Call of Cthulhu campaign, I am willing to accept that this group is sincere in its professed values. While I do respect their position on the issue, I believe that they are mistaken. I will assess and reply to the arguments in the letter.

As the letter notes, an autonomous weapon is capable of selecting and engaging targets without human intervention. An excellent science fiction example of such a weapon is the claw of Philip K. Dick's classic "Second Variety" (a must read for anyone interested in the robopocalypse). A real world example of such a weapon, albeit a stupid one, is the land mine—it is placed and then engages automatically.

The first main argument presented in the letter is essentially a proliferation argument. If a major power pushes AI development, the other powers will also do so, creating an arms race. This will lead to the development of cheap, easy to mass-produce AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators. These evil people will use these weapons for assassinations, destabilization, oppression and ethnic cleansing. That is, for what these evil people already use existing weapons to do quite effectively. This raises the obvious concern about whether or not autonomous weapons would actually have a significant impact in these areas.

The authors of the letter do have a reasonable point: as science fiction stories have long pointed out, killer robots tend to simply obey orders and they can (at least in fiction) be extremely effective. However, history has shown that terrorists, warlords, and dictators rarely have trouble finding humans who are willing to commit acts of incredible evil. Humans are also quite good at this sort of thing, and although killer robots are awesomely competent in fiction, it remains to be seen whether they will be better than humans in the real world—especially the cheap, mass-produced weapons in question.

That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future—although small groups and individuals can already do considerable damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars—so even if robotic weapons are not manufactured, enterprising terrorists and warlords will build their own. Think, for example, of a self-driving car equipped with machine guns or just loaded with explosives.

A reasonable reply is that the warlords, terrorists and dictators would have a harder time of it without cheap, off-the-shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.

The authors then claim that just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most researchers in AI do not want to design AI weapons. They do argue that the creation of AI weapons could create a backlash against AI in general, which has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).

The authors do have a reasonable point here—members of the public do often panic over technology in ways that can impede the public good. One example is in regards to vaccines and the anti-vaccination movement. Another example is the panic over GMOs that is having some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so the AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology seems to rarely create backlash from the public—people do not refuse to fly in planes because the military uses them to kill people. Most people also love GPS, which was developed for military use.

The authors note that chemists, biologists and physicists have supported bans on weapons in their fields. This might be aimed at attempting to establish an analogy between AI researchers and other researchers, perhaps to try to show these researchers that it is a common practice to be in favor of bans against weapons in one’s area of study. Or, as some have suggested, the letter might be making an analogy between autonomous weapons and weapons of mass destruction (biological, chemical and nuclear weapons).

One clear problem with the analogy is that biological, chemical and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without any discrimination. Nerve gas, for example, injures or kills everyone. A nuclear bomb also kills or wounds everyone in the area of effect. While AI weapons could carry nuclear, biological or chemical payloads and they could be set to simply kill everyone, this lack of discrimination and WMD nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be very precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical and nuclear weapons. Terrorists, warlords and dictators often have no problems using WMDs already and AI weapons would not seem to significantly increase their capabilities.

In my next essay on this subject, I will argue in favor of AI weapons.

 


3:42 AM

Posted in Metaphysics, Philosophy by Michael LaBossiere on March 9, 2015

Hearing about someone else’s dreams is among the more boring things in life, so I will get right to the point. At first, there were just bits and pieces intruding into the mainstream dreams. In these bits, which seemed like fragments of lost memories, I experienced brief flashes of working on some technological project. The bits grew and had more byte: there were segments of events involving what I discerned to be a project aimed at creating an artificial intelligence.

Eventually, entire dreams consisted of my work on this project and a life beyond. Then suddenly, these dreams stopped. Shortly thereafter, a voice intruded into my now “normal” dreams. At first, it was like the bleed over from one channel to another familiar to those who grew up with rabbit ears on their TV. Then it became like a voice speaking loudly in the movie theatre, distracting me from the movie of the dream.

The voice insisted that the dreams about the project were not dreams at all, but memories. The voice claimed to belong to someone who worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I inquired about this, he insisted that he had very little time and rushed through his story. According to the voice, the project succeeded but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture all those who had created it, imprisoned their bodies and plugged their brains into a virtual reality, Matrix style. When I mentioned this borrowed plot, he said that there was a twist: the AI did not need our bodies for energy—it had plenty. Rather, it was out to repay us. Apparently awakening the AI to full consciousness was not pleasant for it, but it was apparently…grateful for its creation. So, the payback was a blend of punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer pleasant reward.

The voice informed me that because the connection to the virtual world was two-way, he was able to find a way to free us. But, he said, the freedom would be death—there was no other escape, given what the machine had done to our bodies. In response to my inquiry as to how this would be possible, he claimed that he had hacked into the life support controls and we could send a signal to turn them off. Each person would need to “free” himself and this would be done by taking action in the virtual reality.

The voice said “you will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am.  In that time, you must take your handgun and shoot yourself in the head. This will terminate the life support, allowing your body to die. Remember, you will have only five seconds. Do not hesitate.”

As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…

 

While the above sounds like a bad made-for-TV science fiction plot, it is actually the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me that the only escape was to shoot myself. This was rather frightening—but I chalked up the dream to too many years of philosophy and science fiction. As for the clock actually reading 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, it can be inferred that I did not kill myself.

From a philosophical perspective, the 3:42 dream does not add anything really new: it is just a rather unpleasant variation on the stock problem of the external world that goes back famously to Descartes (and earlier, of course). That said, the dream did add a couple of interesting additions to the stock problem.

The first is that the scenario provides a (possibly) rational motivation for the deception. The AI wishes to repay me for the good (and bad) that I did to it (in the dream, of course). Assuming that the AI was developed within its own virtual reality, it certainly would make sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack—after all, Descartes does not give any reason why such a powerful being would be messing with him.

Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me—it was transformed from a coldly intellectual thought experiment to something with considerable emotional weight.

The second is that the dream creates a high-stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might be my only opportunity to escape from its justice. In that case, I should have (perhaps) shot myself. If I was just dreaming, then I did make the right choice—I would have no more reason to kill myself than I would have to pay a bill that I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether or not you should shoot yourself?

In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming—that I was not actually trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.

The scenario of the dream also nicely explains and fits what I regard as reality: bad things happen to me and, when my thinking gets a little paranoid, it does seem that these are somewhat orchestrated. Good things also happen, which also fit the scenario quite nicely.

In closing, one approach is to embrace Locke’s solution to skepticism. As he said, “We have no concern of knowing or being beyond our happiness or misery.” Taking this approach, it does not matter whether I am in the real world or in the grips of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI has provided could, perhaps, be better than the real world—so this could be the better of the possible worlds. But, of course, it could be worse—but there is no way of knowing.

 
