A Philosopher's Blog

Ex Machina & Other Minds III: The Mind of the Machine

Posted in Epistemology, Metaphysics, Philosophy by Michael LaBossiere on September 11, 2015

While the problem of other minds is a problem in epistemology (how does one know that another being has/is a mind?) there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might occur is in regards to machine intelligence, an example of which is Ava in the movie Ex Machina, and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.

Over the centuries philosophers have proposed various theories of mind and it is certainly interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (with the exception of functionalism) were developed to provide accounts of the minds of living creatures.

One classic theory of mind is identity theory. This is a materialist theory of mind in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.

If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems). This is because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But, there is a negative side. Unfortunately for classic identity theory, it has been undermined by the arguments presented by Saul Kripke and by David Lewis in his classic “Mad Pain and Martian Pain.” As such, it seems reasonable to reject identity theory as an account of traditional human minds as well as machine minds.

Perhaps the best known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.

While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that a mind is a non-physical thing (often called “soul”) that drives around the physical body. While this is a popular view outside of academics, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be an immaterial entity.

That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, they could be regarded as equally implausible and hence there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.

In terms of the problem of other minds, there is the rather serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. It would first need to be established that the mind must be an immaterial substance and that this is the only means by which a being could pass these tests. It seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.

While substance dualism is the best known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct, but are not different ontological substances.

Coincidentally enough, there are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one way:  mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.

This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve the problem—one-way causation between the material and the immaterial is fundamentally as mysterious as two-way causation. It also seems to have the defect of making the mental properties unnecessary, and Ockham’s razor would seem to require going with the simpler view of a physical account of the mind.

As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it seems no weirder than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.

A second type of property dualism is interactionism. As the name indicates, this is the theory that the mental properties can bring about changes in the physical properties of the body and vice versa. That is, interaction is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much loathed metaphysical category of substance—it just requires accepting metaphysical properties. Unlike epiphenomenalism, it avoids the problem of positing explicitly useless properties—although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.

As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor—just the assertion that the view should not be dismissed from mere organic prejudice.

The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regards to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body.

While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While the identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.

For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity. As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different, but are functionally the same in that they can run Windows and Windows software (and Linux, of course).
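The processor analogy can be made concrete with a toy sketch of multiple realizability. In this illustrative Python fragment (the class names, attributes, and the "cat video" input are my own inventions, not anything from the essay or from any real cognitive model), two "minds" with entirely different internal realizations count as being in the same mental state because they map the same input to the same output behavior:

```python
# A minimal sketch of multiple realizability: two "minds" with different
# physical internals are functionally identical because the same input
# produces the same output behavior. All names here are illustrative.

class OrganicMind:
    """Pleasure realized as (pretend) neural activity."""
    def __init__(self):
        self.neural_activity = 0.0          # stand-in for a brain state

    def watch_cat_video(self):
        self.neural_activity += 1.0         # a change in the "neurons"

    def reports_pleasure(self):
        return self.neural_activity > 0.0   # output behavior

class MachineMind:
    """Pleasure realized as a register value in silicon."""
    def __init__(self):
        self.register = 0                   # stand-in for a chip state

    def watch_cat_video(self):
        self.register |= 1                  # a physically different change

    def reports_pleasure(self):
        return self.register != 0           # the same output behavior

# Functionally the same despite different physical realizations:
for mind in (OrganicMind(), MachineMind()):
    mind.watch_cat_video()
    assert mind.reports_pleasure()
```

On the functionalist view, nothing about the two classes' differing internals matters to whether they share the mental state; only the pattern of inputs, outputs, and state transitions does.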

As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind would be a rather plausible account of machine minds.

If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove that a specific metaphysical entity or property is present. Rather, a being must be tested in order to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing Test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.
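The key feature of such a test can be sketched in a few lines: the tester examines only input-output behavior and never inspects the candidate's internals or metaphysical makeup. This toy Python version is my own illustration (the function names and probe prompts are invented, and real Turing-style tests are of course vastly richer than a lookup of canned answers):

```python
# A toy "functional test" in the spirit of the Turing and Cartesian tests:
# the tester judges only responses to prompts, never what the candidate
# is made of. Names and probes are illustrative inventions.

def functional_mind_test(candidate, probes):
    """Return True if every response falls in the acceptable set."""
    for prompt, acceptable in probes:
        if candidate(prompt) not in acceptable:
            return False
    return True

# Two candidates with different "insides" but similar behavior:
def human_like(prompt):
    answers = {"2+2?": "4", "How are you?": "Fine, thanks."}
    return answers.get(prompt, "Hmm.")

def machine_like(prompt):
    lookup_table = {"2+2?": "4", "How are you?": "Fine, thanks."}
    return lookup_table.get(prompt, "Hmm.")

probes = [("2+2?", {"4"}), ("How are you?", {"Fine, thanks.", "Good."})]
print(functional_mind_test(human_like, probes))    # True
print(functional_mind_test(machine_like, probes))  # True
```

Both candidates pass because the test, like any functional test, is blind to what realizes the behavior; that blindness is exactly why functionalism eases the problem of other minds while dualism does not.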

 


The Chipped Brain & You

Posted in Ethics, Metaphysics, Philosophy by Michael LaBossiere on August 26, 2013

Back in the heyday of the cyberpunk genre I made some of my Ramen noodle money coming up with “cybertech” for use in various science-fiction role-playing games. As might be guessed, these included implants, nanotechnology, cyberforms, smart weapons, robots and other such technological make-believe. While cyberpunk waned over the years, it never quite died off. These days, there is a fair amount of mostly empty hype about a post-human future and folks have been brushing the silicon dust off cyberpunk.

One stock bit of cybertech is the brain chip. In the genre, there is a rather impressive variety of these chips. Some are fairly basic—they act like flash drives for the brain and store data. Others are rather more impressive—they can store skillsets that allow a person, for example, to temporarily gain the ability to fly a helicopter. The upper level chips are supposed to do even more, such as increasing a person’s intelligence. Not surprisingly, the chipping of the brain is supposed to be part of the end of the human race—presumably we will be eventually replaced by a newly designed humanity (or cybermanity).

On the face of it, adding cybertech upgrades to the brain seems rather plausible. After all, in many cases this will just be a matter of bypassing the sense organs and directly connecting the brain to the data. So, for example, instead of holding my tablet in my hands so I can see the results of Google searches with my eyes, I’ll have a computer implanted in my body that links into the appropriate parts of my brain. While this will be a major change in the nature of the interface (far more so than going from the command line to an icon-based GUI), this will not be as radical a change as some people might think. After all, it is still just me doing a Google search, only I do not need to hold the tablet or see it with my eyes. This will not, obviously enough, make me any smarter and presumably would not alter my humanity in any meaningful way relative to what the tablet did to me. To put it crudely, sticking a cell phone in your head might be cool (or creepy) but it is still just a phone. Only now it is in your head.

The more interesting sort of chip would, of course, be one that actually changes the person. For example, when many folks talk about the coming new world, they speak of brain enhancements that will improve intelligence. This is, presumably, not just a matter of sticking a calculator in someone’s head. While this would make getting answers to math problems more convenient, it would not make a person any more capable at math than does a conventional outside-the-head calculator. Likewise for sticking in a general computer. Having a PC on my desktop does not make me any smarter. Moving it into my head would not change this. It could, obviously enough, make me seem smarter—at least to those unaware of my headputer.

What would be needed, then, would be a chip (or whatever) that would actually make a change within the person herself, altering intelligence rather than merely closing the interface gap. This sort of modification does raise various concerns.

One obvious practical concern is whether or not this is even possible. That is, while it makes sense to install a computer into the body that the person uses via an internal interface, the idea of dissolving the distinction between the user and the technology seems rather more questionable. It might be claimed that this distinction does not really matter. However, the obvious reply is that it does. After all, plugging my phone and PC into my body still keeps the distinction between the user and the machine in place. Whether the computer is on my desk or in my body, I am still using it and it is still not me. After all, I do not use me. I am me. As such, my abilities remain the same—it is just a tool that I am using. In order for cybertech to make me more intelligent, it would need to change the person I am—not just change how I interface with my tools. Perhaps the user-tool gap can be bridged. If so, this would have numerous interesting implications for philosophy.

Another concern is more philosophical. If a way is found to actually create a chip (or whatever) that becomes part of the person (and not just a tool that resides in the body), then what sort of effect would this have on the person in regards to his personhood? Would Chipped Sally be the same person as Sally, or would there be a new person? And suppose that Sally is chipped, then de-chipped—which person remains? I am confident that armies of arguments can be marshalled on the various sides of this matter. There are also the moral questions about making such alterations to people.
