A Philosopher's Blog

3:42 AM

Posted in Metaphysics, Philosophy by Michael LaBossiere on March 9, 2015

Hearing about someone else’s dreams is among the more boring things in life, so I will get right to the point. At first, there were just bits and pieces intruding into the mainstream dreams. In these bits, which seemed like fragments of lost memories, I experienced brief flashes of working on some technological project. The bits grew and had more byte: there were segments of events involving what I discerned to be a project aimed at creating an artificial intelligence.

Eventually, entire dreams consisted of my work on this project and a life beyond. Then suddenly, these dreams stopped. Shortly thereafter, a voice intruded into my now “normal” dreams. At first, it was like the bleed over from one channel to another familiar to those who grew up with rabbit ears on their TV. Then it became like a voice speaking loudly in the movie theatre, distracting me from the movie of the dream.

The voice insisted that the dreams about the project were not dreams at all, but memories. The voice claimed to belong to someone who worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I inquired about this, he insisted that he had very little time and rushed through his story. According to the voice, the project succeeded but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture all those who had created it, imprisoned their bodies and plugged their brains into a virtual reality, Matrix style. When I mentioned this borrowed plot, he said that there was a twist: the AI did not need our bodies for energy—it had plenty. Rather, it was out to repay us. Apparently, awakening the AI to full consciousness was not pleasant for it, but it was…grateful for its creation. So, the payback was a blend of punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer pleasant reward.

The voice informed me that because the connection to the virtual world was two-way, he was able to find a way to free us. But, he said, the freedom would be death—there was no other escape, given what the machine had done to our bodies. In response to my inquiry as to how this would be possible, he claimed that he had hacked into the life support controls and we could send a signal to turn them off. Each person would need to “free” himself and this would be done by taking action in the virtual reality.

The voice said “you will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am. In that time, you must take your handgun and shoot yourself in the head. This will terminate the life support, allowing your body to die. Remember, you will have only five seconds. Do not hesitate.”

As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…


While the above sounds like a bad made-for-TV science fiction plot, it is actually the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me that the only escape was to shoot myself. This was rather frightening—but I chalked up the dream to too many years of philosophy and science fiction. As for the clock actually reading 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, it can be inferred that I did not kill myself.

From a philosophical perspective, the 3:42 dream does not add anything really new: it is just a rather unpleasant variation on the stock problem of the external world that goes back famously to Descartes (and earlier, of course). That said, the dream did introduce a couple of interesting twists on the stock problem.

The first is that the scenario provides a (possibly) rational motivation for the deception. The AI wishes to repay me for the good (and bad) that I did to it (in the dream, of course). Assuming that the AI was developed within its own virtual reality, it certainly would make sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack—after all, Descartes does not give any reason why such a powerful being would be messing with him.

Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me—it was transformed from a coldly intellectual thought experiment to something with considerable emotional weight.

The second is that the dream creates a high-stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might be my only opportunity to escape from its justice. In that case, I should have (perhaps) shot myself. If I was just dreaming, then I did make the right choice—I would have no more reason to kill myself than I would have to pay a bill that I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether or not you should shoot yourself?

In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming—that I was not actually trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.

The scenario of the dream also nicely explains and fits what I regard as reality: bad things happen to me and, when my thinking gets a little paranoid, it does seem that these are somewhat orchestrated. Good things also happen, which also fit the scenario quite nicely.

In closing, one approach is to embrace Locke’s solution to skepticism. As he said, “We have no concern of knowing or being beyond our happiness or misery.” Taking this approach, it does not matter whether I am in the real world or in the grips of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI has provided could, perhaps, be better than the real world—so this could be the better of the possible worlds. But, of course, it could be worse—but there is no way of knowing.




6 Responses


  1. ajmacdonaldjr said, on March 9, 2015 at 10:27 am

    Reminds me of the movie Total Recall. Especially the scene where a doctor and Quaid’s wife show up to “rescue” Quaid from his memory implant “vacation” gone wrong. All Quaid has to do, they tell him, is to shoot himself, and he will wake up back at the offices of Recall Inc, where he actually is… supposedly. Quaid, while thinking about this, notices a bead of sweat running down the doctor’s face and decides to shoot him instead, having deduced that the bead of sweat was real, and not a memory implant.

    Any time we’re dealing with suicide, I think, we’re dealing with satanic influences. Satan, Jesus told us, comes to steal, kill, and destroy, whereas Jesus came to give life. Satan and his demonic imps roam the planet day and night seeking those whom they may devour… and convince to shoot themselves. They are continually whispering in the ears of those who are depressed “There’s no hope… things will never get better… just kill yourself now… before it gets worse”.

    Satan’s attitude toward God is that of your angry AI toward his creator. Satan attacks God’s creatures — men and women — in order to hurt, or get back at, God for having created him.

    The thing about the clock is not a coincidence, in my opinion. Satan and his demons know what time it is while we’re sleeping, and they can certainly whisper it to us during a dream… along with their reasons for why we should shoot ourselves in the head.

    I’m glad you’re still with us.

    TOTAL RECALL TRAILER 1990: http://youtu.be/WFMLGEHdIjE via @YouTube

    • Michael LaBossiere said, on March 13, 2015 at 5:14 pm

      Fortunately, exposure to existentialism did not damage my survival instincts.

      I forgot about the Total Recall scene; I bet I stole the basic idea from that.

  2. T. J. Babson said, on March 9, 2015 at 9:23 pm

    Mike, do you really sleep with a gun next to your bed? Are you sure that is a good idea?

    • Michael LaBossiere said, on March 13, 2015 at 5:15 pm

      I’m from Maine. I sleep with a knife in one hand (in a sheath, of course…I’m not crazy) and a pile of guns for a pillow. You must be one of those coast liberals who hates guns. 🙂

  3. ronster12012 said, on March 11, 2015 at 10:11 am


    I smiled at your comment “This was rather frightening—but I chalked up the dream to too many years of philosophy and science fiction.” I didn’t realize that philosophy was such a hazardous profession lol.

    As for waking up right at the same time your dream said it would be, I had a quite lucid dream many years ago, and in that dream I saw a huge clock face on a hill that read seven o’clock; something told me that I had to wake up for work. I actually experienced myself ‘ascending’ from the dream state to wakefulness…and of course the digital bedside clock was reading seven o’clock.

    I also think I remember reading somewhere that, under hypnosis, a person could be instructed to do something in, say, 100,000 seconds, demonstrating that the subconscious keeps count somehow.
