A Philosopher's Blog

Poverty & the Brain

Posted in Business, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on July 14, 2017

A key part of the American mythology is the belief that a person can rise to the pinnacle of success from the depths of poverty. While this does occur, most people understand that poverty presents a considerable obstacle to success. In fact, the legendary tales that tell of such success typically embrace an interesting double vision of poverty: they praise the hero for overcoming the incredible obstacle of poverty while also asserting that anyone with gumption should be able to achieve this success.

Outside of myths and legends, it is a fact that poverty is difficult to overcome. There are, of course, the obvious challenges of poverty. For example, a person born into poverty will not have the same educational opportunities as the affluent. As another example, they will have less access to technology such as computers and high-speed internet. As a third example, there are the impacts of diet and health care—both necessities are expensive and the poor typically have less access to good food and good care. There is also recent research by scientists such as Kimberly G. Noble that suggests a link between poverty and brain development.

While the most direct way to study the impact of poverty on the brain is by imaging the brain, this (as researchers have noted) is expensive. However, the research that has been conducted shows a correlation between family income and the size of some surface areas of the cortex. For children whose families make under $50,000 per year, there is a strong correlation between income and the surface area of the cortex. While greater income is correlated with greater cortical surface area, the apparent impact is reduced once the income exceeds $50,000 a year. This suggests, but does not prove, that poverty has a negative impact on the development of the cortex and that this impact is proportional to the degree of poverty.
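As a concrete illustration, consider a short Python sketch with entirely made-up numbers (not Noble's actual data) that generates synthetic income and cortical surface measurements with a plateau above $50,000, then compares the correlations in the two income bands:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: surface area rises steeply with income up to
# about $50,000, then largely flattens out (plus measurement noise).
income = rng.uniform(10_000, 150_000, size=2_000)
surface = 100 * np.log(np.minimum(income, 50_000)) + rng.normal(0, 40, size=income.size)

below = income < 50_000
r_below = np.corrcoef(income[below], surface[below])[0, 1]
r_above = np.corrcoef(income[~below], surface[~below])[0, 1]

print(f"correlation below $50,000: {r_below:.2f}")  # strong
print(f"correlation above $50,000: {r_above:.2f}")  # near zero
```

This is, of course, only a model of the reported pattern; it shows what "the apparent impact is reduced once the income exceeds $50,000" means statistically, not why the pattern exists.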

Because of the cost of direct research on the brain, most research focuses on cognitive tests that indirectly test for the functionality of the brain. As might be expected, children from lower income families perform worse than their more affluent peers in their language skills, memory, self-control and focus. This performance disparity cuts across ethnicity and gender.

As would be expected, there are individuals who do not conform to the general correlation. That is, there are children from disadvantaged families who perform well on the tests and children from advantaged families who do poorly. As such, knowing the economic class of a child does not tell one what their individual capabilities are. However, there is a clear correlation when the matter is considered in terms of populations rather than single individuals. This is important to consider when assessing the impact of anecdotes about successfully rising from poverty—as with all appeals to anecdotal evidence, they do not outweigh the bulk of statistical evidence.

To use an analogy, boys tend to be stronger than girls but knowing that Sally is a girl does not entail that one knows that Sally is weaker than Bob the boy. Sally might be much stronger than Bob. An anecdote about how Sally is stronger than Bob also does not show that girls are stronger than boys; it just shows that Sally is unusual in her strength. Likewise, if Sally lives in poverty but does exceptionally well on the cognitive tests and has a normal cortex, this does not prove that poverty does not have a negative impact on the brain. This leads to the obvious question about whether poverty is a causal factor in brain development.

Those with even passing familiarity with causal reasoning know that correlation is not causation. To infer that there must be a causal connection simply because there is a correlation between poverty and cognitive abilities would be to fall victim to the most basic of causal fallacies. One possibility is that the correlation is a mere coincidence and there is no causal connection. Another possibility is that there is a third factor that is causing both—that is, poverty and the cognitive abilities are both effects.

There is also the possibility that the causal connection has been reversed. That is, it is not poverty that increases the chances a person has less cortical surface (and corresponding capabilities). Rather, it is having less cortical surface area that is a causal factor in poverty.

This view does have considerable appeal. As noted above, children in poverty tend to do worse on tests for language skills, memory, self-control and focus. These are the capabilities that are needed for success and it seems reasonable to think that people who were less capable would thus be less successful. To use an analogy, there is a clear correlation between running speed and success in track races. It is not, of course, losing races that makes a person slow. It is being slow that causes a person to lose races.

Despite the appeal of this interpretation of the data, to rush to the conclusion that it is the cognitive abilities that cause poverty would be as much a fallacy as rushing to the conclusion that poverty influences brain development. Both views do seem plausible and it is certainly possible that there is causation going in both directions. The challenge, then, is to sort out the causation. The obvious approach is to conduct the controlled experiment suggested by Noble—providing the experimental group of low-income families with a substantial income supplement and providing the control group with a relatively tiny supplement. If the experiment is conducted properly and the sample size is large enough, the results should be statistically meaningful and provide an answer to the question of the causal connection.
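As a rough sketch of how the analysis of such an experiment might run (with invented effect sizes and sample sizes, not Noble's actual design), one could compare the children's cognitive scores across the two groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500  # families per group (invented)

# Invented numbers: children's cognitive test scores, where the
# substantial supplement shifts the group mean by a few points.
control = rng.normal(100, 15, n)   # tiny supplement
treated = rng.normal(103, 15, n)   # substantial supplement

t, p = stats.ttest_ind(treated, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Random assignment is what does the causal work here: because which family receives the substantial supplement is determined by chance, a reliable difference in scores cannot be explained by reverse causation or by a lurking third factor.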

Intuitively, it makes sense that an adequate family income would generally have a positive impact on the development of children. After all, this income would allow access to adequate food, care and education. It would also tend to improve family conditions, such as by reducing emotional stress. This is not to say that throwing money at poverty is the cure; but reducing poverty is certainly a worthwhile goal regardless of its connection to brain development. If it does turn out that poverty has a negative impact on development, then those who are concerned with the well-being of children should be motivated to combat poverty. It would also serve to undercut another American myth: that the poor are stuck in poverty simply because they are lazy. If poverty has the damaging impact on the brain it seems to have, then this would help explain why poverty is such a trap.

 


Body Hacking II: Restoration & Replacement

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on March 23, 2016

While body hacking is sometimes presented as being new and radical, humans have been engaged in the practice (under other names) for quite some time. One of the earliest forms of true body hacking was probably the use of prosthetic parts to replace lost pieces, such as a leg or hand. These hacks were aimed at restoring a degree of functionality, so they were practical hacks.

While most contemporary body hacking seems aimed at gimmickry or rather limited attempts at augmentation, there are some serious applications that involve replacement and restoration. One example of this is the color-blind person who is using a skull-mounted camera to provide audio cues regarding colors. This hack serves as a replacement for missing components of the eye, albeit in a somewhat odd way.

Medicine is, obviously enough, replete with body hacks ranging from contact lenses to highly functional prosthetic limbs. These technologies and devices provide people with some degree of replacement and restoration for capabilities they lost or never had. While these sorts of hacks are typically handled by medical professionals, advances in existing technology and the rise of new technologies will certainly result in more practical hacks aimed not at gimmickry but at restoration and replacement. There will also certainly be considerable efforts aimed at augmentation, but this matter will be addressed in another essay.

Since humans have been body hacking for replacement and restoration for thousands of years, the ethics of this matter are rather well settled. In general, the use of technology for medical reasons of replacement or restoration is morally unproblematic. After all, this process is simply fulfilling the main purpose of medicine: to get a person as close to their normal healthy state as possible. To use a specific example, there really is no moral controversy over the use of prosthetic limbs that are designed to restore functionality. In the case of body hacks, the same general principle would apply: hacks that aim at restoration or replacement are generally morally unproblematic. That said, there are some potential areas of concern.

One area of both moral and practical concern is the risk of body hacking done by non-professionals, that is, amateur or DIY body hacking. The concern is that such hacking could have negative consequences—that is, the hack could turn out to do more harm than good. This might be due to bad design, poor implementation or other causes. For example, a person might attempt a hack to replace a missing leg and have it fail catastrophically, resulting in a serious injury. This is, of course, not unique to body hacking—it is a general matter of good decision making.

As with health and medicine in general, it is usually preferable to go with a professional rather than an amateur or a DIY endeavor. Also, the possibility of harm makes it a matter of moral concern. That said, there are many people who cannot afford professional care, and technology will afford people an ever-growing opportunity to body hack for medical reasons. This sort of self-help can be justified on the grounds that some restoration or replacement is better than none, assuming that the self-help efforts do not result in worse harm than doing nothing. As such, body hackers and society will need to consider the ethics of the risks of amateur and DIY body hacking. Guidance can be found here in existing medical ethics—such as moral guides for people attempting to practice medicine on themselves and others without proper medical training.

A second area of moral concern is that some people will engage in replacing fully functional parts with body hacks that are equal or inferior to the original (augmentation will be addressed in the next essay). For example, a person might want to remove a finger to replace it with a mechanical finger with a built-in USB drive. As another example, a person might want to replace her eye with a camera comparable or inferior to her natural eye.

One clear moral concern is the potential danger in such hacks—removing a body part can be rather risky. One approach would be to weigh the harms and benefits of such hacking. On the face of it, such replacement hacks would seem to be at best neutral—that is, the person will end up with the same capabilities as before. It is also possible, perhaps likely, that the replacement attempt will result in diminished capabilities, thus making the hack wrong because of the harm inflicted. Some body hackers might argue that such hacks have a value beyond functionality, such as the value of self-expression or of achieving a state of existence that matches one’s conception or vision of self. In such cases, the moral question would be whether or not these factors are worth considering and, if they are, how much weight they should be given morally.

There is also the worry that such hacks would be a form of unnecessary self-mutilation and thus at best morally dubious. A counter to this is to argue, as John Stuart Mill did, that people have a right to self-harm, provided that they do not harm others. That said, arguing that people do not have a right to interfere with self-harm (provided the person is acting freely and rationally) does not entail that self-harm is morally acceptable. It is certainly possible to argue against self-harm on utilitarian grounds and also on the basis of moral obligations to oneself. Arguments from the context of virtue theory would also apply—self-harm is certainly contrary to developing one’s excellence as a person.

These approaches could be countered. Utilitarian arguments can be met with utilitarian arguments that offer a different evaluation of the harms and benefits. Arguments based on obligations to oneself can be countered by arguing that there are no such obligations or that the obligations one does have allow for this sort of modification. Arguments from virtue theory could be countered by attacking the theory itself or by showing how such modifications are consistent with moral excellence.

My own view, which I consistently apply to other areas such as drug use, diet, and exercise, is that people have a moral right to the freedom of self-abuse/harm. This requires that the person is capable of making an informed decision and is not coerced or misled. As such, I hold that a person has every right to DIY body hacking. Since I also accept the principle of harm, I hold that society has a moral right to regulate the body hacking of others, just as other similar practices (such as dentistry) are regulated. This is to prevent harm being inflicted on others. Being fond of virtue theory, I do hold that people should not engage in self-harm, even though they have every right to do so without having their liberty restricted. To use a concrete example, if someone wants to spoon out her eyeball and replace it with an LED light, then she has every right to do so. However, if an untrained person wants to set up shop and scoop out eyeballs for replacement with lights, then society has every right to prevent that. I do think that scooping out an eye would be both foolish and morally wrong; which is also how I look at heroin use and smoking tobacco.

 


Ex Machina & Other Minds III: The Mind of the Machine

Posted in Epistemology, Metaphysics, Philosophy by Michael LaBossiere on September 11, 2015

While the problem of other minds is a problem in epistemology (how does one know that another being has/is a mind?), there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might occur is the contrast between machine intelligence, an example of which is Ava in the movie Ex Machina, and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.

Over the centuries philosophers have proposed various theories of mind and it is certainly interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (with the exception of functionalism) were developed to provide accounts of the minds of living creatures.

One classic theory of mind is identity theory. This is a materialist theory of mind in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.

If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems). This is because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But, there is a negative side. Unfortunately for classic identity theory, it has been undermined by the arguments presented by Saul Kripke and by David Lewis’ classic “Mad Pain & Martian Pain.” As such, it seems reasonable to reject identity theory as an account of traditional human minds as well as machine minds.

Perhaps the best known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.

While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that a mind is a non-physical thing (often called a “soul”) that drives around the physical body. While this is a popular view outside of academia, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be an immaterial entity.

That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, they could be regarded as equally implausible and hence there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.

In terms of the problem of other minds, there is the rather serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. It would first need to be established that the mind must be an immaterial substance and that this is the only means by which a being could pass these tests. It seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.

While substance dualism is the best known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct, but are not different ontological substances.

Coincidentally enough, there are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one way:  mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.

This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve it—one-way causation between the material and the immaterial is fundamentally as mysterious as two-way causation. It also seems to have the defect of making the mental properties unnecessary, and Ockham’s razor would seem to require going with the simpler view of a physical account of the mind.

As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it seems no more or less weird than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.

A second type of property dualism is interactionism. As the name indicates, this is the theory that the mental properties can bring about changes in the physical properties of the body and vice versa. That is, the interaction road is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much-loathed metaphysical category of substance—it just requires accepting metaphysical properties. Unlike epiphenomenalism, it avoids the problem of positing explicitly useless properties—although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.

As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor—just the assertion that the view should not be dismissed from mere organic prejudice.

The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regards to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body.

While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While the identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.

For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity.  As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different, but are functionally the same in that they can run Windows and Windows software (and Linux, of course).
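The processor analogy can be pushed a little further in code. The following toy Python sketch (an illustration of the functionalist idea, not anyone's actual theory of mind) treats a "mental state" as nothing more than a role in an input-output pattern, so two very different "substrates" can count as being in the same state:

```python
from typing import Protocol

class MindLike(Protocol):
    def respond(self, stimulus: str) -> str: ...

class OrganicBrain:
    """Stand-in for a state realized in neurons."""
    def respond(self, stimulus: str) -> str:
        return "pleasure" if stimulus == "cat video" else "neutral"

class SiliconMind:
    """Stand-in for the same role realized in circuitry."""
    def respond(self, stimulus: str) -> str:
        return "pleasure" if stimulus == "cat video" else "neutral"

def functionally_same_state(a: MindLike, b: MindLike, stimulus: str) -> bool:
    # Functional identity: same input yields same output and role,
    # regardless of what physically realizes the state.
    return a.respond(stimulus) == b.respond(stimulus)

print(functionally_same_state(OrganicBrain(), SiliconMind(), "cat video"))  # True
```

On the identity theory, the two classes could never share a mental state, since their "physical" implementations differ; on functionalism, the shared input-output role is all that sameness of mental states requires.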

As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind would be a rather plausible account of machine minds.

If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove a specific metaphysical entity or property is present. Rather, a being must be tested in order to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing Test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.

 


The Teenage Mind & Decision Making

Posted in Philosophy, Reasoning/Logic by Michael LaBossiere on November 14, 2014

One of the stereotypes regarding teenagers is that they are poor decision makers and engage in risky behavior. This stereotype is usually explained in terms of the teenage brain (or mind) being immature and lacking the reasoning abilities of adults. Of course, adults often engage in poor decision-making and risky behavior.

Interestingly enough, there is research that shows teenagers use basically the same sort of reasoning as adults and that they even overestimate risks (that is, regard something as more risky than it is). So, if kids use the same processes as adults and also overestimate risk, then what needs to be determined is how teenagers differ, in general, from adults.

Currently, one plausible hypothesis is that teenagers differ from adults in terms of how they evaluate the value of a reward. The main difference, or so the theory goes, is that teenagers place higher value on rewards (at least certain rewards) than adults. If this is correct, it certainly makes sense that teenagers are more willing than adults to engage in risk taking. After all, the rationality of taking a risk is typically a matter of weighing the (perceived) risk against the (perceived) value of the reward. So, a teenager who places higher value on a reward than an adult would be acting rationally (to a degree) if she was willing to take more risk to achieve that reward.
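This weighing can be made concrete with a toy expected-value calculation (the probabilities and values below are invented for illustration; nothing in the research fixes these numbers). The teen and the adult apply exactly the same decision rule and differ only in how much they value the reward:

```python
# Invented numbers: probability the risky act pays off, probability of
# harm, and the cost of that harm, shared by both deciders.
p_reward, p_harm, cost_of_harm = 0.7, 0.3, 50.0

def expected_value(reward_value: float) -> float:
    # The same weighing procedure for everyone: expected gain minus
    # expected loss.
    return p_reward * reward_value - p_harm * cost_of_harm

adult_reward, teen_reward = 20.0, 40.0  # the teen feels the reward more
print(f"adult: {expected_value(adult_reward):+.1f}")  # -1.0, so decline the risk
print(f"teen:  {expected_value(teen_reward):+.1f}")   # +13.0, so take the risk
```

The point of the sketch is that the teen's choice comes out as (instrumentally) rational given her valuation; the difference lies in the inputs, not in the reasoning.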

Obviously enough, adults also vary in their willingness to take risks and some of this difference is, presumably, a matter of the value the adults place on the rewards relative to the risks. So, for example, if Sam values the enjoyment of sex more than Sally, then Sam will (somewhat) rationally accept more risks in regards to sex than Sally. Assuming that teenagers generally value rewards more than adults do, then the greater risk taking behavior of teens relative to adults makes considerable sense.

It might be wondered why teenagers place more value on rewards relative to adults. One current theory is based in the workings of the brain. On this view, the sensitivity of the human brain to dopamine and oxytocin peaks during the teenage years. Dopamine is a neurotransmitter that is supposed to trigger the “reward” mechanisms of the brain. Oxytocin is another neurotransmitter, one that is also linked with the “reward” mechanisms as well as social activity. Assuming that the teenage brain is more sensitive to the reward triggering chemicals, then it makes sense that teenagers would place more value on rewards. This is because they do, in fact, get a greater reward than adults. Or, more accurately, they feel more rewarded. This, of course, might be one and the same thing—perhaps the value of a reward is a matter of how rewarded a person feels. This does raise an interesting subject, namely whether the value of a reward is a subjective or objective matter.

Adults are often critical of what they regard as irrationally risky behavior by teens. While my teen years are well behind me, I have looked back on some of my decisions that seemed like good ideas at the time. They really did seem like good ideas, yet my adult assessment is that they were not good decisions. However, I am weighing these decisions in terms of my adult perspective and in terms of the later consequences of these actions. I also must consider that the rewards that I felt in the past are now naught but faded memories. To use the obvious analogy, it is rather like eating an entire cake. At the time, the sugar rush and taste are quite rewarding and it seems like a good idea while one is eating that cake. But once the sugar rush gives way to the sugar crash and the cake, as my mother would say, “went right to the hips”, then the assessment might be rather different. The food analogy is especially apt: as you might well recall from your own youth, candy and other junk food tasted so good then. Now it is mostly just…junk. This also raises an interesting subject worthy of additional exploration, namely the assessment of value over time.

Going back to the cake, eating the whole thing was enjoyable and seemed like a great idea at the time. Yes, I have eaten an entire cake. With ice cream. But, in my defense, I used to run 95-100 miles per week. Looking back from the perspective of my older self, that seems to have been a bad idea and I certainly would not do that (or really enjoy doing so) today. But, does this change of perspective show that it was a poor choice at the time? I am tempted to think that, at the time, it was a good choice for the kid I was. But, my adult self now judges my kid self rather harshly and perhaps unfairly. After all, there does seem to be considerable relativity to value and it seems to be mere prejudice to say that my current evaluation should be automatically taken as being better than the evaluations of the past.

 


Love, Voles & Spinoza

Posted in Metaphysics, Philosophy, Relationships/Dating by Michael LaBossiere on March 17, 2014

In my previous essays I examined the idea that love is a mechanical matter as well as the implications this might have for ethics. In this essay, I will focus on the eternal truth that love hurts.

While there are exceptions, the end of a romantic relationship typically involves pain. As noted in my original essay on voles and love, Young found that when a prairie vole loses its partner, it becomes depressed. This was tested by dropping voles into beakers of water to determine how much the voles would struggle. Prairie voles who had just lost a partner struggled to a lesser degree than those who were not so bereft. The depressed voles, not surprisingly, showed a chemical difference from the non-depressed voles. When a depressed vole was “treated” for this depression, the vole struggled as strongly as the non-bereft vole.

Human beings also suffer from the hurt of love. For example, it is not uncommon for a human who has ended a relationship (be it divorce or a breakup) to fall into a vole-like depression and struggle less against the tests of life (though dropping humans into giant beakers to test this would presumably be unethical).

While some might derive an odd pleasure from stewing in a state of post-love depression, presumably this feeling is something that a rational person would want to end. The usual treatment, other than self-medication, is time: people usually tend to come out of the depression and then seek out a new opportunity for love. And depression.

Given the finding that voles can be treated for this depression, it would seem to follow that humans could be treated for it as well. After all, if love is essentially a chemical romance grounded in strict materialism, then tweaking the brain just so would presumably fix that depression. Interestingly enough, the philosopher Spinoza offered an account of love (and emotions in general) that nicely matches up with the mechanistic model being examined.

As Spinoza saw it, people are slaves to their affections and chained by who they love. This is an unwise approach to life because, as the voles in the experiment found out, the object of one’s love can die (or leave). This view of Spinoza nicely matches up: voles that bond with a partner become depressed when that partner is lost. In contrast, voles that do not form such bonds do not suffer that depression.

Interestingly enough, while Spinoza was a pantheist, his view of human beings is rather similar to that of the mechanist: he regarded humans as beings within the laws of nature and was a determinist in that all that occurs does so from necessity—there is no chance or choice. This view guided him to the notion that human behavior and motivations can be examined as one might examine “lines, planes or bodies.” To be more specific, he took the view that emotions follow the same necessity as all other things, thus making the effects of the emotions predictable. In short, Spinoza engaged in what can be regarded as a scientific examination of the emotions—although he did so without the technology available today and from a rather more metaphysical standpoint. However, the core idea that the emotions can be analyzed in terms of definitive laws is the same idea that is being followed currently in regards to the mechanics of emotion.

Getting back to the matter of the negative impact of lost love, Spinoza offered his own solution: as he saw it, all emotions are responses to what is in the past, present or future. For example, a person might feel regret because she believes she could have done something different in the past. As another example, a person might worry because he thinks that what he is doing now might not bear fruit in the future. These negative feelings rest, as Spinoza sees it, on the false belief that the past and present could be different and the future is not set. Once a person realizes that all that happens occurs of necessity (that is, nothing could have been any different and the future cannot be anything other than what it will be), then that person will suffer less from the emotions. Thus, for Spinoza, freedom from the enslaving chains of love would be the recognition and acceptance that what occurs is determined.

Putting this in the mechanistic terms of modern neuroscience, a Spinoza-like approach would be to realize that love is purely mechanical and that the pain and depression that comes from the loss of love are also purely mechanical. That is, the terrible, empty darkness that seems to devour the soul at the end of love is merely chemical and electrical events in the brain. Once a person recognizes and accepts this, if Spinoza is right, the pain should be reduced. With modern technology it is possible to do even more: whereas Spinoza could merely provide advice, modern science can eventually provide us with the means to simply adjust the brain and set things right—just as one would fix a malfunctioning car or PC.

One rather obvious problem is, of course, that if everything is necessary and determined, then Spinoza’s advice makes no sense: what is, must be and cannot be otherwise. To use an analogy, it would be like shouting advice at someone watching a cut scene in a video game. This is pointless, since the person cannot do anything to change what is occurring. For Spinoza, while we might think life is like a game, it is like that cut scene: we are spectators and not players. So, if one is determined to wallow like a sad pig in the mud of depression, that is how it will be.

In terms of the mechanistic mind, advice would seem to be equally absurd—that is, to say what a person should do implies that a person has a choice. However, the mechanistic mind presumably just ticks away doing what it does, creating the illusion of choice. So, one brain might tick away and end up being treated while another brain might tick away in the chemical state of depression. They both eventually die and it matters not which is which.


The Chipped Brain & You

Posted in Ethics, Metaphysics, Philosophy by Michael LaBossiere on August 26, 2013

Back in the heyday of the cyberpunk genre I made some of my Ramen noodle money coming up with “cybertech” for use in various science-fiction role-playing games. As might be guessed, these included implants, nanotechnology, cyberforms, smart weapons, robots and other such technological make-believe. While cyberpunk waned over the years, it never quite died off. These days, there is a fair amount of mostly empty hype about a post-human future and folks have been brushing the silicon dust off cyberpunk.

One stock bit of cybertech is the brain chip. In the genre, there is a rather impressive variety of these chips. Some are fairly basic—they act like flash drives for the brain and store data. Others are rather more impressive—they can store skillsets that allow a person, for example, to temporarily gain the ability to fly a helicopter. The upper level chips are supposed to do even more, such as increasing a person’s intelligence. Not surprisingly, the chipping of the brain is supposed to be part of the end of the human race—presumably we will be eventually replaced by a newly designed humanity (or cybermanity).

On the face of it, adding cybertech upgrades to the brain seems rather plausible. After all, in many cases this will just be a matter of bypassing the sense organs and directly connecting the brain to the data. So, for example, instead of holding my tablet in my hands so I can see the results of Google searches with my eyes, I’ll have a computer implanted in my body that links into the appropriate parts of my brain. While this will be a major change in the nature of the interface (far more so than going from the command line to an icon-based GUI), it will not be as radical a change as some people might think. After all, it is still just me doing a Google search, only I do not need to hold the tablet or see it with my eyes. This will not, obviously enough, make me any smarter and presumably would not alter my humanity in any meaningful way relative to what the tablet did to me. To put it crudely, sticking a cell phone in your head might be cool (or creepy) but it is still just a phone. Only now it is in your head.

The more interesting sort of chip would, of course, be one that actually changes the person. For example, when many folks talk about the coming new world, they speak of brain enhancements that will improve intelligence. This is, presumably, not just a matter of sticking a calculator in someone’s head. While this would make getting answers to math problems more convenient, it would not make a person any more capable at math than does a conventional outside-the-head calculator. Likewise for sticking in a general computer. Having a PC on my desktop does not make me any smarter. Moving it into my head would not change this. It could, obviously enough, make me seem smarter—at least to those unaware of my headputer.

What would be needed, then, would be a chip (or whatever) that would actually make a change within the person herself, altering intelligence rather than merely closing the interface gap. This sort of modification does raise various concerns.

One obvious practical concern is whether or not this is even possible. That is, while it makes sense to install a computer into the body that the person uses via an internal interface, the idea of dissolving the distinction between the user and the technology seems rather more questionable. It might be replied that this does not really matter. However, the obvious reply is that it does. After all, plugging my phone and PC into my body still keeps the distinction between the user and the machine in place. Whether the computer is on my desk or in my body, I am still using it and it is still not me. After all, I do not use me. I am me. As such, my abilities remain the same—it is just a tool that I am using. In order for cybertech to make me more intelligent, it would need to change the person I am—not just change how I interface with my tools. Perhaps the user-tool gap can be bridged. If so, this would have numerous interesting implications for philosophy.

Another concern is more philosophical. If a way is found to actually create a chip (or whatever) that becomes part of the person (and not just a tool that resides in the body), then what sort of effect would this have on the person in regards to his personhood? Would Chipped Sally be the same person as Sally, or would there be a new person? Suppose that Sally is chipped, then de-chipped? I am confident that armies of arguments can be marshalled on the various sides of this matter. There are also the moral questions about making such alterations to people.


Mental Illness, Violence & Liberty

Posted in Ethics, Law, Philosophy, Politics by Michael LaBossiere on December 19, 2012

The mass murder that occurred at Sandy Hook Elementary School has created significant interest in both gun control and mental health. In this essay I will focus on the matter of mental health.

When watching the coverage on CNN, I saw a segment in which Dr. Gupta noted that currently people can only be involuntarily detained for mental health issues when they present an imminent danger. He expressed concern about this high threshold, noting that it has the practical impact that authorities generally cannot act until someone has done something harmful, by which point it can be rather too late. One rather important matter is sorting out what the threshold for official intervention should be.

On the one hand, it can be argued that the relevant authorities need to be proactive. They should not wait until they learn that someone with a mental issue is plotting to shoot children before acting. They certainly should not wait until after someone with a mental issue has murdered dozens of people. They have to determine whether or not a person with a mental issue (or issues) is likely to engage in such behavior and deal with the person well before people are hurt.  That is, the authorities need to catch and deal with the person while he is still a pre-criminal rather than an actual criminal.

In terms of arguing in favor of this, a plausible line of approach would be a utilitarian argument: dealing with people with mental issues before they commit acts of violence will prevent the harmful consequences that otherwise would have occurred.

On the other hand, there is the obvious moral concern with allowing authorities to detain and deal with people not for something they have done or even plotted to do, but merely for what they might do. Obviously, there is a rather serious practical challenge in sorting out what a person might do when they are not actually conspiring or planning a misdeed. There is also the moral concern of justifying coercing or detaining a person for what they might do. Intuitively, the mere fact that a person could or might do something wrong does not warrant acting against the person. The obvious exception is when there is adequate evidence to establish that a person is plotting or conspiring to commit a crime. However, these sorts of things are already covered by the law, so what would seem to be under consideration would be coercing people without adequate evidence that they are plotting or conspiring to commit crimes. On the face of it, this would seem unacceptable.

One obvious way to justify using the coercive power of the state against those with mental issues before they commit or even plan a crime is to argue that certain mental issues are themselves adequate evidence that a person is reasonably likely to engage in a crime, even though nothing she has done meets the imminent danger threshold.

On an abstract level, this does have a certain appeal. To use an analogy to physical health, if certain factors indicate a high risk of a condition occurring, then it makes sense to treat for that condition before it manifests. Likewise, if certain factors indicate a high risk of a person with mental issues engaging in violence against others, then it makes sense to treat for that condition before it manifests.

It might be objected that people can refuse medical treatment for physical conditions and hence they should be able to do the same for dangerous mental issues. The obvious reply is that if a person refuses treatment for a physical ailment, he is only endangering himself. But if someone refuses treatment for a condition that can result in her engaging in violence against others, then she is putting others in danger without their consent and she does not have the liberty or right to do this.

Moving into the realm of the concrete, the matter becomes rather problematic. One rather obvious point of concern is that mental health science is lagging far behind the physical health sciences (I am using the popular rather than the philosophical distinction between mental and physical here) and the physical health sciences are still rather limited. As such, using the best mental health science of the day to predict how likely a person is to engage in violence (in the absence of evidence of planning and actual past crimes) will typically result in a prediction of dubious accuracy. To use the coercive power of the state against an individual on the basis of such dubious evidence would not be morally acceptable. After all, a person can only be justly denied liberty on adequate grounds and such a prediction does not seem strong enough to warrant such action.
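A short calculation makes the worry about dubious accuracy vivid. The numbers below are invented for illustration, but the structure (Bayes' theorem applied to a rare condition) is not: when the behavior being predicted is rare, even a screening method that sounds accurate will flag mostly harmless people.

```python
# Invented numbers for illustration: assume 1 in 1,000 people screened
# would actually go on to commit violence, and a screening method that
# correctly flags 90% of them while wrongly flagging 10% of everyone else.
base_rate = 0.001
sensitivity = 0.90
false_positive_rate = 0.10

true_positives = base_rate * sensitivity
false_positives = (1 - base_rate) * false_positive_rate

# Probability that a flagged person is actually dangerous (Bayes' theorem).
ppv = true_positives / (true_positives + false_positives)
print(f"Share of flagged people who are true positives: {ppv:.1%}")  # about 0.9%
```

On these assumptions, over 99% of the people coerced or detained would never have harmed anyone, which is exactly the sort of result that makes denying liberty on the basis of such predictions so hard to justify.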

It might be countered that, in the light of such events as the shootings at Sandy Hook and in Colorado, there are legitimate grounds to use the coercive power of the state against people who might engage in such actions, on the grounds that preventing another mass murder is worth the price of denying people their freedom on mere suspicion.

As might be imagined, without very clear guidelines and limitations, this sort of principle could easily be extended to anyone who might commit a crime—thus justifying locking up people for being potential criminals. This would certainly be wrong.

It might be countered that there is no danger of the principle being extended and that such worries are worries based on a slippery slope. After all, one might say, the principle only applies to those deemed to have the right (or rather wrong) sort of mental issues. Normal people, one might say in a calm voice, have nothing to worry about.

However, it seems that normal people might. After all, it is normal for people to have the occasional mental issue (such as depression) and there is the concern that the application of the fuzzy science of mental health might result in incorrect determinations of mental issues.

To close, I am not saying that we should not reconsider the threshold for applying the coercive power of the state to people with mental issues. Rather, my point is that this should be done with due care to avoid creating more harm than it would prevent.

 


Premonitions

Posted in Epistemology, Philosophy by Michael LaBossiere on August 5, 2010

While I am a rather rational person, I have the occasional premonition. These, like most premonitions, tend to be predictions of something potentially bad. My first vivid premonition was when I was an undergraduate. I was in the dorm bathroom and I suddenly had an intuition that I would need to quickly finish my “business.” I had no sooner left the stall when the fire alarm went off.

Since then I have had numerous premonitions, and most of them have proven very useful indeed. However, in some cases, they merely warn me of something bad to come without enabling me to avoid it. For example, when I finished my search committee meeting after turning in my summer grades yesterday, I should have felt like I was done. However, I had a clear feeling that something bad was yet to follow and mentioned this to my colleagues as the meeting ended. Sure enough, this morning I received an email from a candidate and learned that HR had made an error with his application. So, I spent a good chunk of the day sorting that out.

Being a philosopher, I am (of course) rather skeptical of premonitions. Even my own. After all, I know that memory is rather selective. People will tend to remember the few premonitions that are followed by a significant event and forget the hundreds that amount to nothing. However, in my own case I am careful to note when I have such an intuition and wait to see if it is followed by a suitable event. While I have not done a statistically rigorous study, my premonitions seem fairly reliable. Naturally, I do have them and nothing follows, but more often than not something does.

This leads to the question of what is going on. One obvious option is that I am simply fooling myself: I think I am keeping a reasonable track of hits and misses, yet I am still remembering the hits and letting the misses slide.

Another option is that a premonition tends to be rather vague and thus can be “confirmed” by anything negative (or positive for that sort of premonition). Since bad things commonly happen, the odds are that most such intuitions would thus be followed by such an event.
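This second option is easy to simulate. The sketch below (with invented probabilities) generates premonitions at random and, entirely independently, bad events at a plausible everyday rate; the random premonitions still rack up an impressive-looking hit rate:

```python
import random

random.seed(42)
days = 3650                 # ten years of days
p_premonition = 0.05        # invented: a vague premonition on ~5% of days
p_bad_event = 0.30          # invented: some minor misfortune on ~30% of days

hits = misses = 0
for _ in range(days):
    premonition = random.random() < p_premonition
    bad_event = random.random() < p_bad_event  # independent of the premonition
    if premonition:
        hits += bad_event
        misses += not bad_event

print(f"hits: {hits}, misses: {misses}")
# Roughly 30% of these purely random premonitions get "confirmed", and
# selective memory then does the rest of the work.
```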

A third option is that the premonitions are actually real. Since I am not inclined to believe in a supernatural cause, I suspect that these premonitions are actually intuitions. That is, I suspect that my mind (or brain) is processing all sorts of information and probabilities and yielding a specific sort of feeling. In many cases I suppose that I am working with information I am not consciously aware of, yet acquired by the usual mundane means.

I do find these premonitions rather useful and they seem to work about as well as weather predictions (that is, not great but not always wrong). Sadly, I never get anything really useful, like lottery numbers.


Mind Reading

Posted in Epistemology, Ethics, Law, Metaphysics by Michael LaBossiere on January 31, 2008

Mind reading machines have long been a standard feature in science fiction, but they are now (to a degree) a reality.

Scientists have found that by using a functional magnetic resonance imaging device (a form of MRI) they can determine, with a high degree of accuracy, what a person is thinking. Of course, the capabilities of the technology are still somewhat limited. For example, the scientists could tell whether the subject was thinking about a hammer as opposed to a pair of pliers. But, they presumably could not read the contents of this blog from my brain. At least not yet.
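For readers curious about the mechanics: work of this kind treats each fMRI scan as a vector of voxel activity levels and trains an ordinary statistical classifier to separate the patterns. The sketch below is a toy reconstruction with synthetic data, not the actual study's method or data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_trials, n_voxels = 200, 50

# Synthetic "voxel activity": thinking of pliers (1) rather than a
# hammer (0) shifts the mean activity of the first five voxels.
labels = rng.integers(0, 2, n_trials)
signal = np.outer(labels, np.r_[np.ones(5), np.zeros(n_voxels - 5)])
scans = signal + rng.normal(0.0, 1.0, (n_trials, n_voxels))

# Train on the first 150 trials, test on the held-out 50.
clf = LogisticRegression(max_iter=1000).fit(scans[:150], labels[:150])
print(f"held-out accuracy: {clf.score(scans[150:], labels[150:]):.0%}")
```

A classifier like this only distinguishes the categories it was trained on, which is why the hammer-versus-pliers case works while reading a blog post out of a brain does not.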

Naturally, this has numerous philosophical implications. Fortunately, philosophers have already thought a great deal about this matter.

When I discuss John Locke’s theory of personal identity, I always bring up the matter of the mind reading machine. Locke’s view is that personal identity is based on consciousness, so the same person = the same consciousness. Put crudely, if you truly remember something, then that was you.

Locke goes on to discuss the implications his theory has for punishment. He argues that if you do not remember doing X, then you did not do X. If X is a crime, then you would not be guilty of that crime. Locke does note that we cannot tell whether people truly remember or not, hence the courts convict based on the evidence available to them. Since God knows everything, God knows what is remembered or not, and hence God only punishes people for what they remember (and hence did).

When discussing this, I always mentioned that if a machine could be made that would read memories, then guilt and innocence could be determined, assuming, of course, that Locke’s theory is correct. When I talked about this in the past, however, such a machine was a mere theoretical possibility.

Now that the machine is a reality, it can be used for just such a purpose. We might very well see people being brain scanned during police investigations in order to determine what they remember. For example, if only the killer and the victim would remember certain details about a murder, then the machine could be used to test suspects.

While such usage as a crime fighting device might be laudable and while I generally like technology, when I read about these new capabilities provided by the fMRI in Newsweek (Page 22, January 2008 issue) I felt a chill. I have a rather active imagination and immediately thought of how this capability will be horribly abused and misused in the future.

It could be used to steal secrets from people (imagine a brain scanner that can work from a distance, which is certainly a theoretical possibility).

It could also be used to seek out dissidents in repressive states. In short, the device could become the means to break through what had been our last area of true privacy: our minds.

While this seems like a small thing, it could well be a breakthrough (or horror) on par with nuclear weapons-something that radically changes the nature of the world in nightmarish ways. Be afraid…but try not to think about it.