A Philosopher's Blog

Poverty & the Brain

Posted in Business, Philosophy, Politics, Reasoning/Logic by Michael LaBossiere on July 14, 2017

A key part of the American mythology is the belief that a person can rise to the pinnacle of success from the depths of poverty. While this does occur, most understand that poverty presents a considerable obstacle to success. In fact, the legendary tales that tell of such success typically embrace an interesting double vision of poverty: they praise the hero for overcoming the incredible obstacle of poverty while also asserting that anyone with gumption should be able to achieve this success.

Outside of myths and legends, it is a fact that poverty is difficult to overcome. There are, of course, the obvious challenges of poverty. For example, a person born into poverty will not have the same educational opportunities as the affluent. As another example, they will have less access to technology such as computers and high-speed internet. As a third example, there are the impacts of diet and health care—both necessities are expensive and the poor typically have less access to good food and good care. There is also recent research by scientists such as Kimberly G. Noble that suggests a link between poverty and brain development.

While the most direct way to study the impact of poverty on the brain is by imaging the brain, this (as researchers have noted) is expensive. However, the research that has been conducted shows a correlation between family income and the size of some surface areas of the cortex. For children whose families make under $50,000 per year, there is a strong correlation between income and the surface area of the cortex. While greater income is correlated with greater cortical surface area, the apparent impact is reduced once the income exceeds $50,000 a year. This suggests, but does not prove, that poverty has a negative impact on the development of the cortex and that this impact is proportional to the degree of poverty.

Because of the cost of direct research on the brain, most research focuses on cognitive tests that indirectly test for the functionality of the brain. As might be expected, children from lower income families perform worse than their more affluent peers in their language skills, memory, self-control and focus. This performance disparity cuts across ethnicity and gender.

As would be expected, there are individuals who do not conform to the general correlation. That is, there are children from disadvantaged families who perform well on the tests and children from advantaged families who do poorly. As such, knowing the economic class of a child does not tell one what their individual capabilities are. However, there is a clear correlation when the matter is considered in terms of populations rather than single individuals. This is important to consider when assessing the impact of anecdotes about people successfully rising from poverty—as with all appeals to anecdotal evidence, they do not outweigh the bulk of the statistical evidence.

To use an analogy, boys tend to be stronger than girls but knowing that Sally is a girl does not entail that one knows that Sally is weaker than Bob the boy. Sally might be much stronger than Bob. An anecdote about how Sally is stronger than Bob also does not show that girls are stronger than boys; it just shows that Sally is unusual in her strength. Likewise, if Sally lives in poverty but does exceptionally well on the cognitive tests and has a normal cortex, this does not prove that poverty does not have a negative impact on the brain. This leads to the obvious question about whether poverty is a causal factor in brain development.

Those with even a passing familiarity with causal reasoning know that correlation is not causation. To infer that there must be a causal connection simply because there is a correlation between poverty and cognitive abilities would be to fall victim to the most basic of causal fallacies. One possibility is that the correlation is a mere coincidence and there is no causal connection. Another possibility is that there is a third factor that is causing both—that is, poverty and the cognitive abilities are both effects.

There is also the possibility that the causal connection has been reversed. That is, it is not poverty that increases the chances that a person has less cortical surface area (and the corresponding capabilities). Rather, it is having less cortical surface area that is a causal factor in poverty.

This view does have considerable appeal. As noted above, children in poverty tend to do worse on tests for language skills, memory, self-control and focus. These are the capabilities that are needed for success and it seems reasonable to think that people who were less capable would thus be less successful. To use an analogy, there is a clear correlation between running speed and success in track races. It is not, of course, losing races that makes a person slow. It is being slow that causes a person to lose races.

Despite the appeal of this interpretation of the data, to rush to the conclusion that it is the cognitive abilities that cause poverty would be as much a fallacy as rushing to the conclusion that poverty influences brain development. Both views do seem plausible and it is certainly possible that there is causation going in both directions. The challenge, then, is to sort out the causation. The obvious approach is to conduct the controlled experiment suggested by Noble—providing the experimental group of low income families with an income supplement and providing the control group with a relatively tiny supplement. If the experiment is conducted properly and the sample size is large enough, the results should provide a statistically sound answer to the question of the causal connection.
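To make the statistical side of such an experiment concrete, here is a minimal sketch of how the two groups might be compared once the cognitive test scores are in. Everything in it is invented for illustration: the group sizes, the score distributions and the size of the supplement's effect are assumptions, not figures from Noble's proposed study.

```python
# Hypothetical sketch: comparing cognitive test scores between an income-supplement
# group and a control group, in the style of the experiment Noble suggests.
# Group sizes, score distributions, and the effect size are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 500                                             # assumed participants per group
control = rng.normal(loc=100, scale=15, size=n)     # families given the tiny supplement
supplement = rng.normal(loc=103, scale=15, size=n)  # families given the full supplement

# Two-sample t-test: is the difference in mean scores statistically significant?
t_stat, p_value = stats.ttest_ind(supplement, control)
print(f"mean difference: {supplement.mean() - control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value says the observed difference would be unlikely if the supplement
# had no effect; with random assignment, that supports a causal reading, since
# confounders and reversed causation are balanced out across the groups.
```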

Intuitively, it makes sense that an adequate family income would generally have a positive impact on the development of children. After all, this income would allow access to adequate food, care and education. It would also tend to have a positive impact on family conditions, such as emotional stress. This is not to say that throwing money at poverty is the cure; but reducing poverty is certainly a worthwhile goal regardless of its connection to brain development. If it does turn out that poverty does have a negative impact on development, then those who are concerned with the well-being of children should be motivated to combat poverty. It would also serve to undercut another American myth, that the poor are stuck in poverty simply because they are lazy. If poverty has the damaging impact on the brain it seems to have, then this would help explain why poverty is such a trap.

 


Philosophy, Running, Gaming & the Quantified Self

Posted in Philosophy, Reasoning/Logic, Running, Technology by Michael LaBossiere on May 4, 2015

“The unquantified life is not worth living.”

 

While quantifying one’s life is an old idea, one growing tech trend is the use of devices and apps to quantify the self. As a runner, I started quantifying my running life back in 1987—that is when I started keeping a daily running log. Back then, the smartest wearable was probably a Casio calculator watch, so I kept all my records on paper. In fact, I still do—as a matter of tradition.

I use my running log to track my distance, running route, time, conditions, how I felt during the run, the number of times I have run in the shoes and other data I feel like noting at the time. I also keep a race log and a log of my yearly mileage. So, like Ben Franklin, I was quantifying before it became cool. Like Ben, I have found this rather useful—looking at my records allows me to form hypotheses regarding what factors contribute to injury (high mileage, hill work and lots of racing) and what results in better race times (rest and speed work). As such, I am sold on the value of quantification—at least in running.
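For those who prefer to see the idea in code rather than on paper, a log like this maps onto a very simple data structure. The sketch below is purely illustrative: the field names and the sample entries are invented, not my actual records.

```python
# A minimal sketch of a paper running log turned into data. The field names and
# sample entries are invented for illustration; they are not actual records.
from dataclasses import dataclass
from datetime import date

@dataclass
class RunEntry:
    day: date
    miles: float
    minutes: float
    route: str
    conditions: str
    felt: str
    shoes: str  # which pair, so per-shoe mileage can be tallied

log = [
    RunEntry(date(2015, 5, 1), 8.0, 60.0, "lake loop", "humid", "good", "pair A"),
    RunEntry(date(2015, 5, 2), 5.0, 42.0, "hill repeats", "cool", "tired", "pair B"),
]

# The kinds of totals a yearly mileage log and a shoe log keep track of.
yearly_miles = sum(e.miles for e in log if e.day.year == 2015)
shoe_miles: dict[str, float] = {}
for e in log:
    shoe_miles[e.shoes] = shoe_miles.get(e.shoes, 0.0) + e.miles

print(yearly_miles)   # 13.0
print(shoe_miles)     # {'pair A': 8.0, 'pair B': 5.0}
```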

In addition to my ORD (Obsessive Running/Racing Disorder) I am also a nerdcore gamer—I started with the original D&D basic set and still have shelves (and now hard drive space) devoted to games. In the sort of games I play the most, such as Pathfinder, Call of Cthulhu and World of Warcraft, the characters are fully quantified. That is, the character is a set of stats such as strength, constitution, dexterity, hit points, and sanity. Such games also feature sets of rules for the effects of the numbers as well as clear optimization paths. Given this background in gaming, it is not surprising that I see the quantified self as an attempt by a person to create, in effect, a character sheet for herself. That is, to see all her stats and to look for ways to optimize this character that is a model of the self. As such, I get the appeal. Naturally, as a philosopher I do have some concerns about the quantified self and how that relates to the qualities of life—but that is a matter for another time. For now, I will focus on a brief critical look at the quantified self.

Two obvious concerns about the quantified data regarding the self (or whatever is being measured) are the accuracy of the data and its usefulness. To use an obvious example about accuracy, there is the question of how well a wearable really measures sleep. In regards to usefulness, I wonder what I would garner from knowing how long I chew my food or the frequency of my urination.

The accuracy of the data is primarily a technical or engineering problem. As such, accuracy problems can be addressed with improvements in the hardware and software. Of course, until the data is known to be reasonably accurate, it should be regarded with due skepticism.

The usefulness of the data is partially a subjective matter. That is, what counts as useful data will vary from person to person based on their needs and goals. For example, knowing how many steps I have taken at work is probably not useful data for me—since I run about 60 miles per week, that little amount of walking is most likely insignificant in regards to my fitness. However, someone who has no other exercise might find such data very useful. As might be suspected, it is easy to be buried under an avalanche of data. A serious challenge for anyone who wants to make use of the slew of apps and devices is to sort out the data that is actually useful from the thousands or millions of data bits that are not.

Another area of obvious concern is the reasoning applied to the data. Some devices and apps supply raw data, such as miles run or average heart rate. Others purport to offer an analysis of the data—that is, to engage in automated reasoning regarding the data. In any case, the user will need to engage in some form of reasoning to use the data.

In philosophy, the two basic tools for personal causal reasoning are derived from Mill’s classic methods. One method is commonly known as the method of agreement (or common thread reasoning). Using this method involves considering an effect (such as poor sleep or a knee injury) that has occurred multiple times (at least twice). The basic idea is to consider the factor or factors that are present each time the effect occurs and to sort through them to find the likely cause (or causes). For example, a runner might find that all her knee issues follow times when she takes up extensive hill work, thus suggesting the hill work as a causal factor.
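Here is a minimal sketch of the method of agreement applied to log-style data. The occurrences and factors are invented examples rather than real training records.

```python
# Method of agreement (common-thread reasoning): list the factors present on each
# occasion the effect occurred, then look for the factors common to all of them.
# The factors below are invented examples.
injury_occasions = [
    {"high mileage", "hill work", "lots of racing", "new shoes"},
    {"high mileage", "hill work", "lots of racing"},
    {"hill work", "lots of racing", "poor sleep"},
]

common_factors = set.intersection(*injury_occasions)
print(sorted(common_factors))   # ['hill work', 'lots of racing']

# The common factors are only candidate causes: the method narrows the field of
# suspects, it does not by itself establish causation.
```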

The second method is commonly known as the method of difference. Using this method requires at least two situations: one in which the effect in question has occurred and one in which it has not. The reasoning process involves considering the differences between the two situations and sorting out which factor (or factors) is the likely cause. For example, a runner might find that when he does well in a race, he always gets plenty of rest the week before. When he does poorly, he is always poorly rested due to lack of sleep. This would indicate that there is a connection between the rest and race performance.
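The method of difference can be sketched in the same way. Again, the two situations and their factors are invented for illustration.

```python
# Method of difference: compare a situation where the effect occurred with one
# where it did not, and look at the factors that differ. Invented examples again.
good_race = {"plenty of rest", "speed work", "cool weather"}
poor_race = {"lack of sleep", "speed work", "cool weather"}

only_in_good = good_race - poor_race
only_in_poor = poor_race - good_race
print(sorted(only_in_good))   # ['plenty of rest']
print(sorted(only_in_poor))   # ['lack of sleep']

# The differing factors (rest versus lack of sleep) are the likely suspects for
# the difference in race performance, pending further evidence.
```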

There are, of course, many classic causal fallacies that serve as traps for such reasoning. One of the best known is post hoc, ergo propter hoc (after this, therefore because of this). This fallacy occurs when it is inferred that A causes B simply because A is followed by B. For example, a person might note that her device showed that she walked more stairs during the week before doing well at a 5K and simply infer that walking more stairs caused her to run better. There could be a connection, but it would take more evidence to support that conclusion.

Other causal reasoning errors include the aptly named ignoring a common cause (thinking that A must cause B without considering that A and B might both be the effects of C), ignoring the possibility of coincidence (thinking A causes B without considering that it is merely coincidence) and reversing causation (taking A to cause B without considering that B might have caused A). There are, of course, the various sayings that warn about poor causal thinking, such as “correlation is not causation” and these tend to correlate with named errors in causal reasoning.

People obviously vary in their ability to engage in causal reasoning and this would also apply to the design of the various apps and devices that purport to inform their users about the data they gather. Obviously, the better a person is at philosophical (in this case causal) reasoning, the better she will be able to use the data.

The takeaway, then, is that there are at least three important considerations regarding the quantification of the self in regards to the data. These are the accuracy of the data, the usefulness of the data, and the quality of the reasoning (be it automated or done by the person) applied to the data.

 
