When Microsoft offered the Windows 8 upgrade at a low price, I bought a copy, but then held off on installing it. On Friday, I finally got around to upgrading my Windows 7 desktop to Windows 8. When I tried to update from 8 to 8.1, I got the dreaded 0x80240031 error and the update would not install.
I tried the various fixes: downloading all the Windows 8 updates, updating all my drivers, checking my disk for errors, sacrificing a squirrel to Microsoft and so on. Nothing worked. I did notice that the problem was the download: it would just fail at about the 50% mark. The solution seemed obvious: get an .iso of the Windows 8.1 upgrade and use that. If you have a Windows 8 key, here is how to get (legally) a Windows 8.1 .iso or USB upgrade, courtesy of PC World.

Once you have the .iso (or USB), there is still a catch: while the upgrade from 8 to 8.1 is free, Windows 8 keys will not work with the Windows 8.1 .iso. Fortunately, there is a legal work-around. Microsoft lists generic keys that can be used to install Windows here. The generic key for Windows 8.1 Pro is GCRJD-8NW9H-F2CDX-CCM8D-9D6T9. When doing the upgrade, use that generic key to do the install, then go and activate Windows using your legal Windows 8 key. Problem solved.

Advice to Microsoft: since Windows 8.1 is free to Windows 8 owners, have Windows 8.1 accept Windows 8 keys. Or fix the damn install problem.
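For the command-line inclined, here is a sketch of that post-install key swap using the `slmgr` script built into Windows. This assumes an administrator command prompt, and `<your-windows-8-key>` is a placeholder for your own purchased key:

```shell
REM Sketch: after installing with the generic key, replace it with your
REM own legal Windows 8 key, then activate online.
REM Run from an elevated (administrator) command prompt.
slmgr /ipk <your-windows-8-key>
slmgr /ato
```

The same swap can also be done through the Control Panel's activation screen if you prefer to avoid the command line.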
Now, suppose that you want to upgrade your Windows 7 PC to Windows 8.1 but you do not want to re-install all your software and redo your settings. Sadly, upgrading directly from Windows 7 to Windows 8.1 makes that impossible. But you can go from 7 to 8 and then to 8.1 while keeping all your programs in place. If you have the Windows 8 upgrade, just go from 7 to 8 and then to 8.1, choosing the option to keep your programs. If you only own an 8.1 disk and want to keep your programs, you can do this: get a Windows 8 upgrade (see above about how to get that) and use this generic key for Windows 8 Pro: NG4HW-VH26C-733KW-K6F98-J8CK4. Then use your Windows 8.1 disk to upgrade to 8.1 using your legal key. Problem solved.
- This is for Windows 8 Pro.
- The generic keys are from Microsoft and while they will allow an install, they will not activate Windows. Buy a key.
As a public service, here is Democrats at Work Part III.
Sponsored by: Communists for Mandatory Marijuana Usage.
As a runner, I am often accused of being a masochist or at least of having masochistic tendencies. Given that I routinely subject myself to pain and recently wrote an essay about running and freedom that was rather pain-focused, this is hardly surprising. Other runners, especially those masochistic ultra-marathon runners, are also commonly accused of masochism.
In some cases, the accusation is made in jest or at least not seriously. That is, the person making it is not actually claiming that runners derive pleasure (perhaps even sexual gratification) from their pain. What seems to be going on is merely the observation that runners do things that clearly hurt and that make little sense to many folks. However, some folks do regard runners as masochists in the strict sense of the term. Being a runner and a philosopher, I find this a bit interesting—especially when I am the one being accused of being a masochist.
It is worth noting my claim that some people accuse runners of being masochists with real seriousness. While some people say runners are masochists in jest or with some respect for the toughness of runners, it is sometimes presented as an actual accusation: that there is something mentally wrong with runners and that when they run they are engaged in deviant behavior. While runners do like to joke about being odd and different, I think we generally prefer to not be seen as actually mentally ill or as engaging in deviant behavior. After all, that would indicate that we are doing something wrong—which I believe is (usually) not the case. Based on my experience over years of running and meeting thousands of runners, I think that runners are generally not masochists.
Given that runners engage in some rather painful activities (such as speed work and racing marathons) and that they often just run on despite injuries, it is tempting to believe that runners are really masochists and that I am in denial about the deviant nature of runners.
While this does have some appeal, it rests on a confusion about masochism in regards to matters of means and ends. For the masochist, pain is a means to the end of pleasure. That is, the masochist does not seek pain for the sake of pain, but seeks pain to achieve pleasure. However, there is a special connection between the means of pain and the end of pleasure: for the masochist, the pleasure generated specifically by pain is the pleasure that is desired. While a masochist can get pleasure by other means (such as drugs or cake), it is the desire for pleasure caused by pain that defines the masochist. As such, the pain is not an optional matter—mere pleasure is not the end, but pleasure caused by pain.
This is rather different from those who endure pain as part of achieving an end, be that end pleasure or some other end. For those who endure pain to achieve an end, the pain can be seen as part of the means or, perhaps more accurately, as an effect of the means. It is valuing the end that causes the person to endure the pain to achieve the end—the pain is not sought out as being the “proper cause” of the end. In the case of the masochist, the pain is not endured to achieve an end—it is the “proper cause” of the end, which is pleasure.
In the case of running, runners typically regard pain as something to be endured as part of the process of achieving the desired ends, such as fitness or victory. However, runners generally prefer to avoid pain when they can. For example, while I will endure pain to run a good race, I prefer running well with as little pain as possible. To use an analogy, a person will put up with the unpleasant aspects of a job in order to make money, but she would certainly prefer to have as little unpleasantness as possible. After all, she is in it for the money, not the unpleasant experiences of work. Likewise, a runner is typically running for some other end (or ends) than hurting herself. It just so happens that achieving that end (or ends) requires doing things that cause pain.
In my essay on running and freedom, I described how I endured the pain in my leg while running the Tallahassee Half Marathon. If I were a masochist, experiencing pleasure by means of that pain would have been my primary end. However, my primary end was to run the half marathon well and the pain was actually an obstacle to that end. As such, I would have been glad to have had a painless start and I was pleased when the pain diminished. I enjoy the running and I do actually enjoy overcoming pain, but I do not enjoy the pain itself—hence the aspirin and Icy Hot in my medicine cabinet.
While I cannot speak for all runners, my experience has been that runners do not run for pain, they run despite the pain. Thus, we are not masochists. We might, however, show some poor judgment when it comes to pain and injury—but that is another matter.
In my last essay I looked briefly at how to pick between experts. While people often rely on experts when making arguments, they also rely on studies (and experiments). Since most people do not do their own research, the studies mentioned are typically those conducted by others. While using study results in an argument is quite reasonable, making a good argument based on study results requires being able to pick between studies rationally.
Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick a study based on the fact that it agrees with what you already believe. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it is reasonable to believe it.
Another common approach is to accept a study as correct because the results match what you really want to be true. For example, a liberal might accept a study that claims liberals are smarter and more generous than conservatives. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).
In some cases, people try to create their own “studies” by appealing to their own anecdotal data about some matter. For example, a person might claim that poor people are lazy based on his experience with some poor people. While anecdotes can be interesting, to take an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.
While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations of studies, provided that they have the relevant information about the study. The following provides a concise guide to studies—and experiments.
In normal use, people often lump studies and experiments together. While this is fine for informal purposes, the distinction is actually important. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.
The objective of the experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible—the more alike the two groups, the better the experiment.
The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.
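The sampling-and-splitting step just described can be illustrated with a minimal Python sketch (my own illustration; the function name, population, and seed are made up for the example):

```python
import random

def make_groups(population, sample_size, seed=42):
    """Draw a random sample from the population, then split it in half
    into an experimental group and a control group."""
    rng = random.Random(seed)
    sample = rng.sample(population, sample_size)
    half = sample_size // 2
    return sample[:half], sample[half:]

# Draw 50 subjects at random from a population of 1,000 and split them.
experimental, control = make_groups(list(range(1000)), 50)
```

Because both groups come from one random draw, they should (on average) be alike; in a real experiment researchers would further check that the groups match on relevant traits.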
Assuming that the experiment was conducted properly, whether or not the results are statistically significant depends on the size of the sample and the difference between the control group and the experimental group. The key idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment actually shows there is a causal connection, it is important to know the size of the sample used. After all, the difference between the experimental and control groups might be rather large and yet not be significant. For example, imagine that an experiment is conducted involving 10 people. Five people get a diet drug (the experimental group) while five do not (the control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it is actually not statistically significant: the sample is so small that the difference could be due entirely to chance. The following table shows some information about statistical significance.
[Table: sample size (control group + experimental group) versus the approximate figure, in percentage points, that the difference must exceed to be statistically significant.]
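To make the sample-size point concrete, here is a short Python sketch of my own (not from the original table) using the standard two-proportion z-test. The same 40-point gap between groups is nowhere near significant with 10 participants, but is overwhelmingly significant with 1,000:

```python
import math

def z_two_proportions(hits_a, n_a, hits_b, n_b):
    """Two-proportion z-statistic for the difference between two groups.
    |z| >= 1.96 corresponds to significance at the usual 0.05 level."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Same 40-point gap (80% vs. 40% success), very different verdicts:
z_small = z_two_proportions(4, 5, 2, 5)          # 10 people: z ~ 1.29, not significant
z_large = z_two_proportions(400, 500, 200, 500)  # 1,000 people: z ~ 12.91, highly significant
```

Note that this test applies to yes/no outcomes rather than the continuous weight-loss example above, but the moral is the same: a big difference in a tiny sample proves little.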
While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to radiation to test its effect would be immoral. In such cases studies are used rather than experiments.
One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a suspected cause. The main difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to radiation by an accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.
After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, merely having a large difference between the groups need not be statistically significant.
Since the study relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the experiment. After all, in the study the researchers have to take what they can find rather than conducting a proper experiment.
In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness, but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to sort things out.
Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are those showing the effect.
Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining if there is a statistically significant suspected causal factor. If such a factor is found, then it can be tentatively taken as a causal factor—one that will probably require additional study. As with the other study and the experiment, the statistical significance of the results depends on the size of the study—which is why a study of adequate size is important.
Of the three methods, this is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would face the problem that those who chew tobacco are often ex-smokers. As such, the smoking might be the actual cause. To sort this out would involve a study involving chewers who are not ex-smokers.
It is also worth referring back to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.
As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.
Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double blind experiment/study.
Overall, here are some key questions to ask when picking a study:
Was the study/experiment properly conducted?
Was the sample size large enough?
Were the results statistically significant?
Were those conducting the study/experiment experts?
One fairly common way to argue is the argument from authority. While people rarely follow the “strict” form of the argument, the basic idea is to infer that a claim is true based on the allegation that the person making the claim is an expert. For example, someone might claim that second hand smoke does not cause cancer because Michael Crichton claimed that it does not. As another example, someone might claim that astral projection/travel is real because Michael Crichton claims it does occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, there are medical experts who claim that second hand smoke does cause cancer.
If you are an expert in the field in question, you can endeavor to pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that second hand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an expert, a person is presumably qualified to make an informed pick. The obvious problem is, of course, that experts themselves pick different experts to accept as being correct.
The problem is even greater when it comes to non-experts who are trying to pick between experts. Being non-experts, they lack the expertise to make authoritative picks between the actual experts based on their own knowledge of the fields. This raises the rather important concern of how to pick between experts when you are not an expert.
Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick an expert based on the fact that she agrees with what you already believe. That is, to infer that the expert is right because you believe what she says. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).
Another common approach is to believe an expert because he makes a claim that you really want to be true. For example, a smoker might elect to believe an expert who claims second hand smoke does not cause cancer because he does not want to believe that he might be increasing the risk that his children will get cancer by his smoking around them. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).
People also pick their expert based on qualities they perceive as positive but that are, in fact, irrelevant to the person’s actual credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not actually relevant to assessing a person’s expertise. For example, a person might be very likeable, but not know a thing about what they are talking about.
Fortunately, there are some straightforward standards for picking and believing an expert. They are as follows.
1. The person has sufficient expertise in the subject matter in question.
Claims made by a person who lacks the needed degree of expertise to make a reliable claim will, obviously, not be well supported. In contrast, claims made by a person with the needed degree of expertise will be supported by the person’s reliability in the area. One rather obvious challenge here is being able to judge that a person has sufficient expertise. In general, the question is whether or not a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.
2. The claim being made by the person is within her area(s) of expertise.
If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. People often mistake expertise in one area (acting, for example) for expertise in another area (politics, for example).
3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.
This is perhaps the most important factor. As a general rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The basic idea is that the majority of experts are more likely to be right than those who disagree with the majority.
It is important to keep in mind that no field has complete agreement, so some degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.
It is also important to be aware that the majority could turn out to be wrong. That said, the reason it is still reasonable for non-experts to go with the majority opinion is that non-experts are, by definition, not experts. After all, if I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.
4. The person in question is not significantly biased.
This is also a rather important standard. Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then the person’s credibility as an authority is reduced. This is because there would be reason to believe that the expert might not be making a claim because he has carefully considered it using his expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice. A biased expert can still be making claims that are true—however, the person’s bias lowers her credibility.
It is important to remember that no person is completely objective. At the very least, a person will be favorable towards her own views (otherwise she would probably not hold them). Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that researchers who receive funding from pharmaceutical companies might be biased while others might claim that the money would not sway them if the drugs proved to be ineffective or harmful.
Disagreement over bias can itself be a very significant dispute. For example, those who doubt that climate change is real often assert that the experts in question are biased in some manner that causes them to say untrue things about the climate. Questioning an expert based on potential bias is a legitimate approach—provided that there is adequate evidence of bias that would be strong enough to unduly influence the expert. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from a claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that an expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.
These standards are clearly not infallible. However, they do provide a good general guide to logically picking an expert, and certainly one more logical than just picking the expert who says things one likes.