A Philosopher's Blog

The Curse of Springtime

Posted in Philosophy, Reasoning/Logic, Uncategorized by Michael LaBossiere on April 3, 2017

As a professional philosopher, I am not inclined to believe in curses. However, my experiences over the years have convinced me that I am the victim of what I call the Curse of Springtime. As far as I know, this curse is limited to me and I do not want anyone to have the impression that I regard Springtime Tallahassee in a negative light. Here is the tale of the curse.

For runners, the most important part of Springtime is the Springtime 10K (and now the 5K). Since I moved to Tallahassee in 1993, I have had something bad happen right before or during the race. Some examples: one year I had a horrible sinus infection. Another year I had my first ever muscle pull. Yet another year I was kicking the kickstand of my Yamaha, slipped, and fell, injuring my back. 2008 saw the most powerful manifestation of the curse.

On the Thursday before the race, my skylight started leaking. So, I (stupidly) went up to fix it. When I was coming down, the ladder shot out from under me. I landed badly and suffered a full quadriceps tendon tear that took me out of running for months. When Springtime rolled around in 2009, I believed that the curse might kill me and I was extra cautious. The curse seemed to have spent most of its energy on that injury, because although the curse did strike, it was minor. But, the curse continued: I would either get sick or injured shortly before the race, or suffer an injury during the race. This year, 2017, was no exception. My knees and right foot started bothering me a week before the race and although I rested up and took care of myself, I was unable to run on Thursday. I hobbled through the 10K on Saturday, cursing the curse.

Since I teach critical thinking, I have carefully considered the Curse of Springtime and have found it makes a good example for applying methods of causal reasoning. I started with the obvious, considering that I was falling victim to the classic post hoc, ergo propter hoc (“after this, therefore because of this”). This fallacy occurs when it is uncritically assumed that because B follows A, A must be the cause of B. To infer that Springtime must be the cause simply because something bad always happens when Springtime arrives would be to fall into this fallacy. To avoid this fallacy, I would need to sort out a possible causal mechanism—mere correlation is not causation.

One thing that might explain some of the injuries and illnesses is the fact that the race occurs at the same time each year. By the time Springtime rolls around, I have been racing hard since January and training hard as well—so it could be that I am always worn out at this time of year. As such, I would be at peak injury and illness vulnerability. On this hypothesis, there is no Curse—I just get worn down at the same time each year because I have the same sort of schedule each year. However, this explanation does not account for all the incidents—as noted above, I have also suffered injuries that had nothing to do with running, such as falls. Also, sometimes I am healthy and injury free before the race, then have something bad happen in the race itself. As such, the challenge is to find an explanation that accounts for all the adverse events.

It is certainly worth considering that while the injuries and illnesses can be explained as noted above, the rest of the incidents are mere coincidences: it just so happens that when I am not otherwise ill or injured, something else goes wrong. While improbable, this is not impossible. That is, it is not beyond the realm of possibility for random things to always happen for the same race year after year.

It is also worth considering that it only seems that there is a curse because I am ignoring the other bad races I have and considering only the bad Springtime races. If I have many bad races each year, it would not be unusual for Springtime to be consistently bad. Fortunately, I have records of all my races and can look at the data objectively: while I do have some other bad races, Springtime is unique in that something bad has happened every year. The same is not true of any other races. As such, I do not seem to be falling into a sort of Texas Sharpshooter Fallacy by only considering the Springtime race data and not all my race data.

There is certainly the possibility that the Curse of Springtime is psychological: because I think something bad will happen it becomes a self-fulfilling prophecy. Alternatively, it could be that because I expect something bad to happen, I carefully search for bad things and overestimate their badness, thus falling into the mistake of confirmation bias: Springtime seems cursed because I am actively searching for evidence of the curse and interpreting events in a way that supports the curse hypothesis. This is certainly a possibility and perhaps any race could appear cursed if one spent enough effort seeking evidence of an alleged curse. That said, there is no such consistent occurrence of unfortunate events for any other race, even those that I have run every year since I moved here. This inclines me to believe that there is some causal mechanism at play here. Or a curse. But, I am aware of the vagaries of chance and it could simply be an unfortunate set of coincidences that every Springtime since 1994 has seemed cursed. But, perhaps in 2018 everything will go well and I can dismiss my belief in the curse as mere superstition. Unless the curse kills me then. You know, because curse.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter


Philosophy, Running, Gaming & the Quantified Self

Posted in Philosophy, Reasoning/Logic, Running, Technology by Michael LaBossiere on May 4, 2015

“The unquantified life is not worth living.”


While the idea of quantifying one’s life is old, one growing tech trend is the use of devices and apps to quantify the self. As a runner, I started quantifying my running life back in 1987—that is when I started keeping a daily running log. Back then, the smartest wearable was probably a Casio calculator watch, so I kept all my records on paper. In fact, I still do—as a matter of tradition.

I use my running log to track my distance, running route, time, conditions, how I felt during the run, the number of times I have run in the shoes and other data I feel like noting at the time. I also keep a race log and a log of my yearly mileage. So, like Ben Franklin, I was quantifying before it became cool. Like Ben, I have found this rather useful—looking at my records allows me to form hypotheses regarding what factors contribute to injury (high mileage, hill work and lots of racing) and what results in better race times (rest and speed work). As such, I am sold on the value of quantification—at least in running.
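For readers who keep their logs digitally rather than on paper, this sort of record is easy to model in code. A minimal sketch in Python, with field names of my own invention standing in for whatever a given logger actually tracks:

```python
from dataclasses import dataclass

@dataclass
class RunEntry:
    date: str          # ISO date, e.g. "2015-05-04"
    miles: float
    minutes: float
    route: str
    conditions: str    # e.g. "hot, humid"
    shoe_miles: float  # cumulative miles on the shoes worn
    notes: str = ""

def yearly_mileage(log: list[RunEntry]) -> dict[str, float]:
    """Total miles per year, like a paper yearly-mileage log."""
    totals: dict[str, float] = {}
    for entry in log:
        year = entry.date[:4]
        totals[year] = totals.get(year, 0.0) + entry.miles
    return totals

log = [
    RunEntry("2015-05-01", 8.0, 64.0, "lake loop", "humid", 120.0),
    RunEntry("2015-05-02", 14.0, 116.0, "long trail", "cool", 134.0),
]
print(yearly_mileage(log))  # {'2015': 22.0}
```

Once the entries are structured this way, the hypothesis-forming described above (does hill work precede injuries? does rest precede good races?) becomes a matter of simple filtering and aggregation.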

In addition to my ORD (Obsessive Running/Racing Disorder) I am also a nerdcore gamer—I started with the original D&D basic set and still have shelves (and now hard drive space) devoted to games. In the sort of games I play the most, such as Pathfinder, Call of Cthulhu and World of Warcraft, the characters are fully quantified. That is, the character is a set of stats such as strength, constitution, dexterity, hit points, and sanity. Such games also feature sets of rules for the effects of the numbers as well as clear optimization paths. Given this background in gaming, it is not surprising that I see the quantified self as an attempt by a person to create, in effect, a character sheet for herself. That is, to see all her stats and to look for ways to optimize this character that is a model of the self. As such, I get the appeal. Naturally, as a philosopher I do have some concerns about the quantified self and how that relates to the qualities of life—but that is a matter for another time. For now, I will focus on a brief critical look at the quantified self.

Two obvious concerns about the quantified data regarding the self (or whatever is being measured) are questions regarding the accuracy of the data and questions regarding the usefulness of the data. To use an obvious example about accuracy, there is the question of how well a wearable really measures sleep. In regards to usefulness, I wonder what I would garner from knowing how long I chew my food or the frequency of my urination.

The accuracy of the data is primarily a technical or engineering problem. As such, accuracy problems can be addressed with improvements in the hardware and software. Of course, until the data is known to be reasonably accurate, it should be regarded with due skepticism.

The usefulness of the data is partially a subjective matter. That is, what counts as useful data will vary from person to person based on their needs and goals. For example, knowing how many steps I have taken at work is probably not useful data for me—since I run about 60 miles per week, that little amount of walking is most likely insignificant in regards to my fitness. However, someone who has no other exercise might find such data very useful. As might be suspected, it is easy to be buried under an avalanche of data and a serious challenge for anyone who wants to make use of the slew of apps and devices is to sort out the data that would actually be useful from the thousands or millions of data bits that would not be useful.

Another area of obvious concern is the reasoning applied to the data. Some devices and apps supply raw data, such as miles run or average heart rate. Others purport to offer an analysis of the data—that is, to engage in automated reasoning regarding the data. In any case, the user will need to engage in some form of reasoning to use the data.

In philosophy, the two main basic tools in regards to personal causal reasoning are derived from Mill’s classic methods. One method is commonly known as the method of agreement (or common thread reasoning). Using this method involves considering an effect (such as poor sleep or a knee injury) that has occurred multiple times (at least twice). The basic idea is to consider the factor or factors that are present each time the effect occurs and to sort through them to find the likely cause (or causes). For example, a runner might find that all her knee issues follow times when she takes up extensive hill work, thus suggesting the hill work as a causal factor.

The second method is commonly known as the method of difference. Using this method requires at least two situations: one in which the effect in question has occurred and one in which it has not. The reasoning process involves considering the differences between the two situations and sorting out which factor (or factors) is the likely cause. For example, a runner might find that when he does well in a race, he always gets plenty of rest the week before. When he does poorly, he is always poorly rested due to lack of sleep. This would indicate that there is a connection between the rest and race performance.
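The method of difference can be sketched the same way: compare a case where the effect occurred with an otherwise similar case where it did not, and pull out the factors that distinguish them (a symmetric difference of the two factor sets). Again, the data here is invented for illustration:

```python
def method_of_difference(with_effect: set[str], without_effect: set[str]) -> set[str]:
    """Return the factors that distinguish the case where the effect
    occurred from the otherwise similar case where it did not."""
    return with_effect ^ without_effect  # factors in one case but not both

good_race = {"taper week", "full sleep", "cool weather"}
poor_race = {"cool weather", "late nights"}
print(sorted(method_of_difference(good_race, poor_race)))
# ['full sleep', 'late nights', 'taper week']
```

The shared factor (cool weather) drops out, leaving rest-related differences as the candidate causes, which matches the reasoning in the example above.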

There are, of course, many classic causal fallacies that serve as traps for such reasoning. One of the best known is post hoc, ergo propter hoc (after this, therefore because of this). This fallacy occurs when it is inferred that A causes B simply because A is followed by B. For example, a person might note that her device showed that she walked more stairs during the week before doing well at a 5K and simply infer that walking more stairs caused her to run better. There could be a connection, but it would take more evidence to support that conclusion.

Other causal reasoning errors include the aptly named ignoring a common cause (thinking that A must cause B without considering that A and B might both be the effects of C), ignoring the possibility of coincidence (thinking A causes B without considering that it is merely coincidence) and reversing causation (taking A to cause B without considering that B might have caused A). There are, of course, the various sayings that warn about poor causal thinking, such as “correlation is not causation” and these tend to correlate with named errors in causal reasoning.

People obviously vary in their ability to engage in causal reasoning and this would also apply to the design of the various apps and devices that purport to inform their users about the data they gather. Obviously, the better a person is at philosophical (in this case causal) reasoning, the better she will be able to use the data.

The takeaway, then, is that there are at least three important considerations regarding the quantification of the self in regards to the data. These are the accuracy of the data, the usefulness of the data, and the quality of the reasoning (be it automated or done by the person) applied to the data.

