Modern agriculture does deserve considerable praise for the good that it does. Food is plentiful, relatively cheap and easy to acquire. Instead of having to struggle with raising crops and livestock or hunting and gathering, I can simply drive to the supermarket and stock up with the food I need to not die. However, as with all things, there is a price.
The modern agricultural complex is now highly centralized and industrialized, which does have its advantages and disadvantages. There are also the harms of specific, chosen practices aimed at maximizing profits. While there are many ways to maximize profits, two common ones are to pay the lowest wages possible (which the agricultural industry does—and not just to the migrant laborers, but to the ranchers and farmers) and to shift the costs to others. I will look, briefly, at one area of cost shifting: the widespread use of antibiotics in meat production.
While most people think of antibiotics as a means of treating diseases, food animals are now routinely given antibiotics when they are healthy. One reason for this is to prevent infections: factory farming techniques, as might be imagined, vastly increase the chances of a disease spreading like wildfire among an animal population. Antibiotics, it is claimed, can help reduce the risk of bacterial infections (antibiotics are useless against viruses, of course). A second reason is that antibiotics increase the growth rate of healthy animals, allowing them to pack on more meat in less time—and time is money. These uses allow the industry to continue factory farming and maintain high productivity—which initially seems laudable. The problem is, however, that this use of antibiotics comes with a high price that is paid for by everyone else.
Eric Schlosser wrote “A Safer Food Future, Now”, which appeared in the May 2016 issue of Consumer Reports. In this article, he notes that this practice has contributed significantly to the rise of antibiotic resistant bacteria. Each year, about two million Americans are infected with resistant strains and about 20,000 die. The healthcare cost is about $20 billion. To be fair, the agricultural industry is not the only contributor: improper use of antibiotics in humans has also added to this problem. That said, the agricultural use of antibiotics accounts for about 75% of all antibiotic usage in the United States, thus converting the factory farms into breeding grounds for resistant bacteria.
The harmful consequences of this antibiotic use have been known for years and there have, not surprisingly, been attempts to address this through legislation. It should, however, come as little surprise that our elected leaders have failed to take action. One likely explanation is that the lobbying on the part of the relevant corporations has been successful in preventing action. After all, there is a strong incentive on the part of industry to keep antibiotics in use: this increases profits by enabling factory farming and the faster growth of animals. That said, it could be contended that the lawmakers are ignorant of the harms, doubt there are harms from antibiotics or honestly believe that the harms arising from their use are outweighed by the benefits to society. That is, the lawmakers have credible reasons other than straight up political bribery (or “lobbying” as it is known in polite company). This is a factual matter, albeit one that is difficult to settle: no professional politician who has been swayed by lobbying will attribute her decision to anything but the purest of motivations.
This matter is certainly one of ethical concern and, like most large-scale ethical matters that involve competing interests, is one that seems best approached by utilitarian considerations. On the side of using the antibiotics, there is the increased productivity (and profits) of the factory farming system of producing food. This allows more and cheaper food to be provided to the population, which can be regarded as pluses. The main reasons to not use the antibiotics, as noted above, are that they contribute to the creation of antibiotic resistant strains that sicken and kill many people (vastly more Americans than are killed by terrorism). This inflicts considerable costs on the sickened and those who are killed as well as those who care about them. There are also the monetary costs in the health care system (although the increased revenue can be tagged as a plus for health care providers). In addition to these costs, there are also other social and economic costs, such as lost hours of work. As this indicates, the cost (illness, death, etc.) of the use of the antibiotics is shifted: the industry does not pay these costs; they are paid by everyone else.
Using a utilitarian calculation requires weighing the cost to the general population against the profits of the industry and the claimed benefits to the general population. Put roughly, the moral question is whether the improved profits and greater food production outweigh the illness, deaths and costs suffered by the public. The people in the government seem to believe that the answer is “yes.”
If the United States were in a food crisis in which the absence of the increased productivity afforded by antibiotics would cause more suffering and death than their presence, then their use would be morally acceptable. However, this does not seem to be the case—while banning this sort of antibiotic use would decrease productivity (and impact profits), the harm of doing this would seem to be vastly exceeded by the reduction in illness, deaths and health care costs. However, if an objective assessment of the matter showed that the ban on antibiotics would not create more benefits than harms, then it would be reasonable and morally acceptable to continue to use them. This is partially a matter of value (in terms of how the harms and benefits are weighted) and partially an objective matter (in terms of monetary and health costs). I am inclined to agree that the general harm of using the antibiotics exceeds the general benefits, but I could be convinced otherwise by objective data.
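To make the shape of this weighing concrete, it can be sketched as a toy calculation. The harm figures below come from the article (two million infections, 20,000 deaths, about $20 billion in health care costs per year); the benefit figure and the dollar value assigned to a statistical life are hypothetical placeholders I am supplying for illustration, since the article gives no such numbers and any serious assessment would have to defend its own.

```python
# A toy utilitarian tally for the antibiotics question.
# Harm figures are from the article; the benefit figure and the
# "value of a statistical life" (VSL) are ASSUMED placeholders.

infections_per_year = 2_000_000   # from the article
deaths_per_year = 20_000          # from the article
health_care_cost = 20e9           # dollars per year, from the article

# Hypothetical annual benefit of routine agricultural antibiotic use
# (cheaper food plus industry profits) -- an assumption, not a source figure.
assumed_benefit = 5e9  # dollars

# A crude monetization of harm: health care costs plus a (contentious)
# VSL for each death. The VSL below is a common regulatory ballpark,
# not a moral fact, and monetizing deaths is itself a value judgment.
value_of_statistical_life = 9e6  # dollars, assumed
monetized_harm = health_care_cost + deaths_per_year * value_of_statistical_life

net = assumed_benefit - monetized_harm
print(f"Infections per year: {infections_per_year:,}")
print(f"Monetized harm: ${monetized_harm / 1e9:.0f}B per year")
print(f"Net (benefit - harm): ${net / 1e9:.0f}B per year")
```

On these (assumed) numbers the harm side dwarfs the benefit side, which is why the argument turns on whether anyone can supply objective data showing otherwise.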
While Aristotle was writing centuries before the rise of wearable technology, his view of moral education provides a solid foundation for the theory behind what I like to call the benign tyranny of the device. Or, if one prefers, the bearable tyranny of the wearable.
In his Nicomachean Ethics Aristotle addressed the very practical problem of how to make people good. He was well aware that merely listening to discourses on morality would not make people good. In a very apt analogy, he noted that such people would be like invalids who listen to their doctors but do not carry out their instructions—they will get no benefit.
His primary solution to the problem is one that is routinely endorsed and condemned today: to use the compulsive power of the state to make people behave well and thus become conditioned in that behavior. Obviously, most people are quite happy to have the state compel people to act as they would like them to act, yet they are equally unhappy when it comes to the state imposing on them. Aristotle was also well aware of the importance of training people from an early age—something later developed by the Nazis and Madison Avenue.
While there have been some attempts in the United States and other Western nations to use the compulsive power of the state to force people to engage in healthy practices, these have been fairly unsuccessful and are usually opposed as draconian violations of the liberty to be out of shape. While the idea of a Fitness Force chasing people around to make them exercise amuses me, I certainly would oppose such impositions on both practical and moral grounds. However, most people do need some external coercion to force them to engage in healthy behavior. Those who are well-off can hire a personal trainer and a fitness coach. Those who are less well off can appeal to the tyranny of friends who are already self-tyrannizing. However, there are many obvious problems with relying on other people. This is where the tyranny of the device comes in.
While the quantified life via electronics is in its relative infancy, there is already a multitude of devices ranging from smart fitness watches, to smart plates, to smart scales, to smart forks. All of these devices offer measurements of activities to quantify the self and most of them offer coercion ranging from annoying noises, to automatic social media posts (“today my feet did not patter, so now my ass grows fatter”), to the old school electric shock (really).
While the devices vary in their specifics, Aristotle laid out the basic requirements back when lightning was believed to come from Zeus. Aristotle noted that a person must do no wrong either with or against one’s will. In the case of fitness, this would be acting in ways contrary to health.
What is needed, according to Aristotle, is “the guidance of some intelligence or right system that has effective force.” The first part of this is that the device or app must be the “right system.” That is to say, the device must provide correct guidance in terms of health and well-being. Unfortunately, health is often ruled by fad and not actual science.
The second part of this is the matter of “effective force.” That is, the device or app must have the power to compel. Aristotle noted that individuals lacked such compulsive power, so he favored the power of law. Good law has practical wisdom and also compulsive force. However, unless the state is going to get into the business of compelling health, this option is out.
Interestingly, Aristotle claims that “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While this seems not entirely true, he did seem to be right in that people find the law less annoying than being bossed around by individuals acting as individuals (like that bossy neighbor telling you to turn down the music).
The same could be true of devices—while being bossed around by a person (“hey fatty, you’ve had enough ice cream, get out and run some”) would annoy most people, being bossed by an app or device could be far less annoying. In fact, most people are already fully conditioned by their devices—they obey every command to pick up their smartphones and pay attention to whatever is beeping or flashing. Some people do this even when doing so puts people at risk, such as when they are driving. This certainly provides a vast ocean of psychological conditioning to tap into, but for a better cause. So, instead of mindlessly flipping through Instagram or texting words of nothingness, a person would be compelled by her digital master to exercise more, eat less crap, and get more sleep. Soon the machine tyrants will have very fit hosts to carry them around.
So, Aristotle has provided the perfect theoretical foundation for designing the tyrannical device. To recap, it needs the following features:
- Practical wisdom: the health science for the device or app needs to be correct and the guidance effective.
- Compulsive power: the device or app must be able to compel the user effectively and make them obey.
- Not too annoying: while it must have compulsive power, this power must not generate annoyance that exceeds its ability to compel.
- A cool name.
So, get to work on those devices and apps. The age of machine tyranny is not going to impose itself. At least not yet.
Kaci Hickox, a nurse from my home state of Maine, returned to the United States after serving as a health care worker in the Ebola outbreak. Rather than being greeted as a hero, she was confined to an unheated tent with a box for a toilet and no shower. She did not have any symptoms and tested negative for Ebola. After threatening a lawsuit, she was released and allowed to return to Maine. After arriving home, she refused to be quarantined again. She did, however, state that she would be following the CDC protocols. Her situation puts a face on a general moral concern, namely the ethics of balancing rights with safety.
While past outbreaks of Ebola in Africa were met largely with indifference from the West (aside from those who went to render aid, of course), the current outbreak has infected the United States with a severe case of fear. Some folks in the media have fanned the flames of this fear knowing that it will attract viewers. Politicians have also contributed to the fear. Some have worked hard to make Ebola into a political game piece that will allow them to bash their opponents and score points by appeasing fears they have helped create. Because of this fear, most Americans have claimed they support a travel ban in regards to Ebola infected countries and some states have started imposing mandatory quarantines. While it is to be expected that politicians will often pander to the fears of the public, the ethics of the matter should be considered rationally.
While Ebola is scary, the basic “formula” for sorting out the matter is rather simple. It is an approach that I use for all situations in which rights (or liberties) are in conflict with safety. The basic idea is this. The first step is sorting out the level of risk. This includes determining the probability that the harm will occur as well as the severity of the harm (both in quantity and quality). In the case of Ebola, the probability that someone will get it in the United States is extremely low. As the actual experts have pointed out, infection requires direct contact with bodily fluids while a person is infectious. Even then, the infection rate seems relatively low, at least in the United States. In terms of the harm, Ebola can be fatal. However, timely treatment in a well-equipped facility has been shown to be very effective. In terms of the things that are likely to harm or kill an American in the United States, Ebola is near the bottom of the list. As such, a rational assessment of the threat is that it is a small one in the United States.
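The first step above can be sketched as a simple expected-harm comparison: the expected harm of a threat is the probability that the harm occurs times its severity. The probabilities and severity scores below are illustrative assumptions chosen only to mirror the point being made; they are not actual epidemiological figures.

```python
# A minimal sketch of the first step of the "formula" described above:
# expected harm = probability the harm occurs x severity of the harm.
# All numbers here are ILLUSTRATIVE assumptions, not real statistics.

def expected_harm(probability: float, severity: float) -> float:
    """Expected harm of a threat: chance it occurs times how bad it is."""
    return probability * severity

# Hypothetical annual figures, with severity on an assumed 0-100 scale.
ebola_in_us = expected_harm(probability=0.000001, severity=90)  # rare, severe
car_crash = expected_harm(probability=0.01, severity=60)        # common, serious

print(f"Ebola (US resident): {ebola_in_us:.5f}")
print(f"Car crash:           {car_crash:.5f}")
```

On this toy model the familiar, everyday risk dwarfs the feared one, which is the point of the rational-assessment step: a severe harm with a tiny probability can still be a small risk.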
The second step is determining key facts about the proposals to create safety. One obvious concern is the effectiveness of the proposed method. As an example, the 21-day mandatory quarantine would be effective at containing Ebola. If someone shows no symptoms during that time, then she is almost certainly Ebola free and can be released. If a person shows symptoms, then she can be treated immediately. An alternative, namely tracking and monitoring people rather than locking them up, would also be fairly effective—it has worked so far. However, there are the worries that this method could fail—bureaucratic failures might happen or people might refuse to cooperate. A second concern is the cost of the method in terms of both practical costs and other consequences. In the case of the 21-day quarantine, there are the obvious economic and psychological costs to the person being quarantined. After all, most people will not be able to work from quarantine and the person will be isolated from others. There is also the cost of the quarantine itself. In terms of other consequences, it has been argued that imposing this quarantine will discourage volunteers from going to help out and this will be worse for the United States. This is because it is best for the rest of the world if Ebola is stopped in Africa and this will require volunteers from around the world. In the case of the tracking and monitoring approach, there would be a cost—but far less than that of a mandatory quarantine.
From a practical standpoint, assessing a proposed method of safety is a utilitarian calculation: does the risk warrant the cost of the method? To use some non-Ebola examples, every aircraft could be made as safe as Air Force One, every car could be made as safe as a NASCAR vehicle, and all guns could be taken away to prevent gun accidents and homicides. However, we have decided that the cost of such safety would be too high and hence we are willing to allow some number of people to die. In the case of Ebola, the calculation is a question of considering the risk presented against the effectiveness and cost of the proposed method. Since I am not a medical expert, I am reluctant to make a definite claim. However, the medical experts do seem to hold that the quarantine approach is not warranted in the case of people who lack symptoms and test negative.
The third concern is the moral concern. Sorting out the moral aspect involves weighing the practical concerns (risk, effectiveness and cost) against the right (or liberty) in question. Some also include the legal aspects of the matter here as well, although law and morality are distinct (except, obviously, for those who are legalists and regard the law as determining morality). Since I am not a lawyer, I will leave the legal aspects to experts in that area and focus on the ethics of the matter.
When working through the moral aspect of the matter, the challenge is determining whether or not the practical concerns morally justify restricting or even eliminating rights (or liberties) in the name of safety. This should, obviously enough, be based on consistent principles in regards to balancing safety and rights. Unfortunately, people tend to be wildly inconsistent in this matter. In the case of Ebola, some people have expressed the “better safe than sorry” view and have elected to impose or support mandatory quarantines at the expense of the rights and liberties of those being quarantined. In the case of gun rights, these are often taken as trumping concerns about safety. The same holds true of the “right” or liberty to operate automobiles: tens of thousands of people die each year on the roads, yet any proposal to deny people this right would be rejected. In general, people assess these matters based on feelings, prejudices, biases, ideology and other non-rational factors—this explains the lack of consistency. So, people are willing to impose on basic rights for little or no gain to safety, while also being content to refuse even modest infringements in matters that result in great harm. However, there are also legitimate grounds for differences: people can, after due consideration, assess the weight of rights against safety very differently.
Turning back to Ebola, the main moral question is whether or not the safety gained by imposing the quarantine (or travel ban) would justify denying people their rights. In the case of someone who is infectious, the answer would seem to be “yes.” After all, the harm done to the person (being quarantined) is greatly exceeded by the harm that would be inflicted on others by his putting them at risk of infection. In the case of people who are showing no symptoms, who test negative and who are relatively low risk (no known specific exposure to infection), then a mandatory quarantine would not be justified. Naturally, some would argue that “it is better to be safe than sorry” and hence the mandatory quarantine should be imposed. However, if it was justified in the case of Ebola, it would also be justified in other cases in which imposing on rights has even a slight chance of preventing harm. This would seem to justify taking away private vehicles and guns: these kill more people than Ebola. It might also justify imposing mandatory diets and exercise on people to protect them from harm. After all, poor health habits are major causes of health issues and premature deaths. To be consistent, if imposing a mandatory quarantine is warranted on the grounds that rights can be set aside even when the risk is incredibly slight, then this same principle must be applied across the board. This seems rather unreasonable and hence the mandatory quarantine of people who are not infectious is also unreasonable and not morally acceptable.
While the patent for an e-cigarette like device dates back to 1965, it is only fairly recently that e-cigarettes (e-cigs) have become popular and readily available. Thanks, in part, to the devastating health impact of traditional cigarettes, there is considerable concern about the e-cig.
A typical e-cig works by electronically heating a cartridge containing nicotine, flavoring and propylene glycol to release a vapor. This vapor is inhaled by the user, delivering the nicotine (and flavor). From the standpoint of ethics, the main concern is whether or not the e-cigs are harmful to the user.
At this point, the health threat, if any, of e-cigs is largely unknown—primarily because of the lack of adequate studies of the product.
While propylene glycol is regarded as safe by the FDA (it is used in soft drinks, shampoos and other products that are consumed or applied to the body), it is not known what effect the substance has if it is heated and inhaled. It might be harmless or it might not. Nicotine, which is regarded as being addictive, might also be harmful. There are also concerns about the “other stuff” in the cartridge that is heated into vapor—there is some indication that the vapors contain carcinogens. However, e-cigs are largely an unknown—aside from the general notion that inhaling particles generated from burning something is often not a great idea.
From a moral standpoint, there is the obvious concern that people are being exposed to a product whose health impact is not yet known. As of this writing, regulation of e-cigs seems to be rather limited and is often inconsistently enforced. Given that the e-cig is largely an unknown, it certainly seems reasonable to determine their potential impact on the consumer so as to provide a rational basis for regulation (which might be to have no regulation).
One stock argument in favor of e-cigs can be cast in utilitarian terms. While the health impact of e-cigs is unknown, it seems reasonable to accept (at least initially) that they are probably not as bad for people as traditional cigarettes. If people elect to use e-cigs rather than traditional tobacco products, then they will be harmed less than if they used the tobacco products. This reduced harm would thus make e-cigs morally preferable to traditional tobacco products. Naturally, if e-cigs turn out to be worse than traditional tobacco products (which seems somewhat unlikely), then things would be rather different.
There is also the moral (and health) concern that people who would not use tobacco products would use e-cigs on the grounds that they are safer than the tobacco products. If the e-cigs are still harmful, then this would be of moral concern since people would be harmed who otherwise would not be harmed.
One obvious point of consideration is my view that people have a moral right to self-abuse. This is based on Mill’s arguments regarding liberty—others have no moral right to compel a person to do or not do something merely because doing so would be better, healthier or wiser for a person. The right to compel does cover cases in which a person is harming others—so, while I do hold that I have no right to compel people to not smoke, I do have the right to compel people to not expose me to smoke. As such, I can rightfully forbid people from smoking in my house, but not from smoking in their own.
Given the right of self-abuse, people would thus have every right to use e-cigs, provided that they are not harming others (so, for example, I can rightfully forbid people from using them in my house)—even if the e-cigs are very harmful.
However, I also hold to the importance of informed self-abuse: the person has to be able to determine (if she wants to) whether or not the activity is harmful in order for the self-abuse to be morally acceptable. That is, the person needs to be able to determine whether she is, in fact, engaging in self-abuse or not. If the person is unable to acquire the needed information, then this makes the matter a bit more morally complicated.
If the person is being intentionally deceived, then the deceiver is clearly subject to moral blame—especially if the person would not engage in the activity if she was not so deceived. For example, selling people a product that causes health problems and intentionally concealing this fact would be immoral. Or, to use another example, giving people brownies containing marijuana and not telling them would be immoral.
If there is no information available, then the ethics of the situation become rather more debatable. On the one hand, if I know that the effect of a product is unknown and I elect to use it, then it would seem that my decision puts most (if not all) of the moral blame on me, should the product prove to be harmful. This would be, it might be argued, like eating some mushroom found in the woods: if you don’t know what it will do, yet you eat it anyway and it hurts you, shame on you.
On the other hand, it seems reasonable to expect that people who sell products intended for consumption be compelled to determine whether these products will be harmful or not. To use another analogy, if I have dinner at someone’s house, I have the moral expectation that they will not throw some unknown mushrooms from the woods onto the pizza they are making for dinner. Likewise, if a company sells e-cigs, the customers have a legitimate moral expectation that the product will not hurt them. Being permitted to sell products whose effect is not known is morally dubious at best. But, it should be said, people who use such a product do bear some of the moral responsibility—they have an obligation to consider that a product that has not been tested could be harmful before using it. To use an analogy, if I buy a pizza and I know that I have no idea what the mushrooms on it will do to me, then if it kills me some of the blame rests on me—I should know better. But, the person who sells pizza also has an obligation to know what is going on that pizza: they should not sell death pizza.
The same applies to e-cigs: they should not be sold until their effects are at least reasonably determined. But, if people insist on using them without having any real idea whether they are safe or not, they are choosing poorly and deserve some of the moral blame.
As a runner, I am often accused of being a masochist or at least having masochistic tendencies. Given that I routinely subject myself to pain and recently wrote an essay about running and freedom that was rather pain focused, this is hardly surprising. Other runners, especially those masochistic ultra-marathon runners, are also commonly accused of masochism.
In some cases, the accusation is made in jest or at least not seriously. That is, the person making it is not actually claiming that runners derive pleasure (perhaps even sexual gratification) from their pain. What seems to be going on is merely the observation that runners do things that clearly hurt and that make little sense to many folks. However, some folks do regard runners as masochists in the strict sense of the term. Being a runner and a philosopher, I find this a bit interesting—especially when I am the one being accused of being a masochist.
It is worth noting that I claim that people accuse runners of being masochists with some seriousness. While some people say runners are masochists in jest or with some respect for the toughness of runners, it is sometimes presented as an actual accusation: that there is something mentally wrong with runners and that when they run they are engaged in deviant behavior. While runners do like to joke about being odd and different, I think we generally prefer to not be seen as actually mentally ill or as engaging in deviant behavior. After all, that would indicate that we are doing something wrong—which I believe is (usually) not the case. Based on my experience over years of running and meeting thousands of runners, I think that runners are generally not masochists.
Given that runners engage in some rather painful activities (such as speed work and racing marathons) and that they often just run on despite injuries, it is tempting to believe that runners are really masochists and that I am in denial about the deviant nature of runners.
While this does have some appeal, it rests on a confusion about masochism in regards to matters of means and ends. For the masochist, pain is a means to the end of pleasure. That is, the masochist does not seek pain for the sake of pain, but seeks pain to achieve pleasure. However, there is a special connection between the means of pain and the end of pleasure: for the masochist, the pleasure generated specifically by pain is the pleasure that is desired. While a masochist can get pleasure by other means (such as drugs or cake), it is the desire for pleasure caused by pain that defines the masochist. As such, the pain is not an optional matter—mere pleasure is not the end, but pleasure caused by pain.
This is rather different from those who endure pain as part of achieving an end, be that end pleasure or some other end. For those who endure pain to achieve an end, the pain can be seen as part of the means or, perhaps more accurately, as an effect of the means. It is valuing the end that causes the person to endure the pain to achieve the end—the pain is not sought out as being the “proper cause” of the end. In the case of the masochist, the pain is not endured to achieve an end—it is the “proper cause” of the end, which is pleasure.
In the case of running, runners typically regard pain as something to be endured as part of the process of achieving the desired ends, such as fitness or victory. However, runners generally prefer to avoid pain when they can. For example, while I will endure pain to run a good race, I prefer running well with as little pain as possible. To use an analogy, a person will put up with the unpleasant aspects of a job in order to make money—but she would certainly prefer to have as little unpleasantness as possible. After all, she is in it for the money, not the unpleasant experiences of work. Likewise, a runner is typically running for some other end (or ends) than hurting herself. It just so happens that achieving that end (or ends) requires doing things that cause pain.
In my essay on running and freedom, I described how I endured the pain in my leg while running the Tallahassee Half Marathon. If I were a masochist, experiencing pleasure by means of that pain would have been my primary end. However, my primary end was to run the half marathon well and the pain was actually an obstacle to that end. As such, I would have been glad to have had a painless start and I was pleased when the pain diminished. I enjoy the running and I do actually enjoy overcoming pain, but I do not enjoy the pain itself—hence the aspirin and Icy Hot in my medicine cabinet.
While I cannot speak for all runners, my experience has been that runners do not run for pain, they run despite the pain. Thus, we are not masochists. We might, however, show some poor judgment when it comes to pain and injury—but that is another matter.
One fairly common way to argue is the argument from authority. While people rarely follow the “strict” form of the argument, the basic idea is to infer that a claim is true based on the allegation that the person making the claim is an expert. For example, someone might claim that second hand smoke does not cause cancer because Michael Crichton claimed that it does not. As another example, someone might claim that astral projection/travel is real because Michael Crichton claims it does occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, there are medical experts who claim that second hand smoke does cause cancer.
If you are an expert in the field in question, you can endeavor to pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that second hand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an expert, a person is presumably qualified to make an informed pick. The obvious problem is, of course, that experts themselves pick different experts to accept as being correct.
The problem is even greater when it comes to non-experts who are trying to pick between experts. Being non-experts, they lack the expertise to make authoritative picks between the actual experts based on their own knowledge of the fields. This raises the rather important concern of how to pick between experts when you are not an expert.
Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick an expert based on the fact that she agrees with what you already believe. That is, to infer that the expert is right because you believe what she says. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).
Another common approach is to believe an expert because he makes a claim that you really want to be true. For example, a smoker might elect to believe an expert who claims second hand smoke does not cause cancer because he does not want to believe that he might be increasing the risk that his children will get cancer by his smoking around them. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).
People also pick their expert based on qualities they perceive as positive but that are, in fact, irrelevant to the person’s actual credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but they are not actually relevant to assessing a person’s expertise. For example, a person might be very likeable yet know nothing about the subject she is discussing.
Fortunately, there are some straightforward standards for picking and believing an expert. They are as follows.
1. The person has sufficient expertise in the subject matter in question.
Claims made by a person who lacks the needed degree of expertise to make a reliable claim will, obviously, not be well supported. In contrast, claims made by a person with the needed degree of expertise will be supported by the person’s reliability in the area. One rather obvious challenge here is being able to judge that a person has sufficient expertise. In general, the question is whether or not a person has the relevant qualities, and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.
2. The claim being made by the person is within her area(s) of expertise.
If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. People often mistake expertise in one area (acting, for example) for expertise in another area (politics, for example).
3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.
This is perhaps the most important factor. As a general rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The basic idea is that the majority of experts are more likely to be right than those who disagree with the majority.
It is important to keep in mind that no field has complete agreement, so some degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.
It is also important to be aware that the majority could turn out to be wrong. That said, the reason it is still reasonable for non-experts to go with the majority opinion is that non-experts are, by definition, not experts. After all, if I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.
4. The person in question is not significantly biased.
This is also a rather important standard. Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then her credibility as an authority is reduced. This is because there would be reason to believe that the expert is making the claim not because she has carefully considered it using her expertise, but because of her bias or prejudice. A biased expert can still be making claims that are true; however, the bias lowers her credibility.
It is important to remember that no person is completely objective. At the very least, a person will be favorable towards her own views (otherwise she would probably not hold them). Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that researchers who receive funding from pharmaceutical companies might be biased while others might claim that the money would not sway them if the drugs proved to be ineffective or harmful.
Disagreement over bias can itself be a very significant dispute. For example, those who doubt that climate change is real often assert that the experts in question are biased in some manner that causes them to say untrue things about the climate. Questioning an expert based on potential bias is a legitimate approach—provided that there is adequate evidence of bias that would be strong enough to unduly influence the expert. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from a claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that the expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.
These standards are clearly not infallible. However, they do provide a good general guide to logically picking an expert, and they are certainly more logical than just picking the expert who says things one likes.
One stock narrative is the tale of fraud committed by the poor in regards to government programs. Donald Trump, for example, has claimed that a great deal of such fraud occurs, and Fox News also pushes the idea that government programs aimed at helping the poor are fraught with fraud. Interestingly enough, the “evidence” presented in support of such claims seems to be that the people making them think or feel that there must be a lot of fraud. There seems to be little inclination to actually look for supporting evidence—presumably, if someone feels strongly enough that a claim is true, that is good enough.
The claim that the system is dominated by fraud is commonly used to argue that the system should be cut back or even eliminated. The basic idea is that the poor are “takers” who are fraudulently living off the “makers.” While fraud is clearly wrong, it is rather important to consider some key questions.
The first question is this: what is the actual percentage of fraud that occurs in such programs? While, as noted above, certain people speak of lots of fraud, the actual statistical data tells another story. In the case of unemployment insurance, the rate of fraud is estimated to be less than 2%. This is lower than the rate of fraud in the private sector. In the case of welfare, fraud is sometimes reported as being 20%-40% at the state level. However, this “fraud” seems to be primarily the result of errors on the part of bureaucrats rather than fraud committed by the recipients. Naturally, an error rate that high is unacceptable, but it yields a rather different narrative than that of the wicked poor.
Food stamp fraud does occur—but most of it is committed by businesses rather than the recipients of the stamps. While there is some fraud on the part of recipients, the best data indicates that fraud accounts for about 1% of the payments. Given the rate of fraud in the private sector, that is exceptionally good.
Given this data, the overwhelming majority of those who receive assistance are not engaged in fraud. This is not to say that fraud should not be a concern; in fact, it is the concern with fraud on the part of recipients that has resulted in such a low incidence of fraud. Interestingly, about one third of fraud involving government money involves not the poor but defense contractors, who account for about $100 billion in fraud per year. Medicare and Medicaid combined have about $100 billion in fraudulent expenditures per year. While there is also a narrative of the wicked poor in regards to Medicare and Medicaid, the fraud is usually perpetrated by the providers of health care rather than the recipients. As such, it would seem that the focus on fraud should shift from the poor recipients of aid to defense contractors and health care providers. That is, it is not the wicked poor who are siphoning away money with fraud; it is the wicked wealthy who are sucking on the teat of the state. The narrative of the poor defrauding the state is thus a flawed narrative. Certainly such fraud does happen: the percentage is greater than zero. However, the overall level of fraud on the part of poor recipients seems to be less than 2%, and the majority of fraud, contrary to the narrative, is committed by those who are not poor. While the existence of fraud does show a need to address that fraud, the narrative has cast the wrong people as the villains.
While the idea of mass welfare cheating is thus unfounded, there is still a legitimate concern as to whether or not the poor should be receiving such support from the state. After all, even if the overwhelming majority of recipients are honestly following the rules and not engaged in fraud, there is still the question of whether or not the state should be providing welfare, food stamps, Medicare, Medicaid and similar such benefits. Of course, the narrative does lose some of its rhetorical power if the poor are not cast as frauds.
While on a post-race cool-down run with a friend, we discussed the failure of relationships. My friend asked what I thought caused such failures and, as usual, I came up with an analogy.
While there are many ways to see people, one way is to regard them as wonderful clockworks of cogs. These cogs are metaphors for the qualities, values, interests and other aspects of the personality of the person. Some of the cogs are at the surface of the person’s cog self—these are the ones that interact with the cogs of others, and they tend to be the smaller, or minor, cogs. The deep self is made up of the core cogs, which tend to be a person’s larger cogs; these could be regarded as the large cogs and the greater cogs.
When people interact, their outer cogs meet up. If the cogs spin together well, then the people get along and are compatible. If the cogs clash, then there will be problems.
When a person is in a relationship with another person, their minor cogs will interact and then, if things go well, some of their larger cogs will rotate in sync. While there will be clashes between the cogs, if enough of them spin well together, the relationship will go on. At least for a while.
Over time a person’s minor cogs will change. What she once found amusing will no longer amuse her. A hobby he once liked will no longer hold its charm. The poetry that once bored her will now touch her heart. And so on. A person’s larger cogs can also change, such as in a significant change of values.
In the case of a relationship, the impact of the changes will be doubled—the cogs that once rolled together smoothly might now spin against each other, creating a grinding in the machinery of the soul. If the change is great enough, the cogs can actually destroy each other, doing damage to the person or persons. At a certain point, the clash will doom the interaction, spelling the end of the relationship—or at least dooming those involved.
In other cases, the cogs can grow ever more in sync—spinning together ever closer. Presumably that sometimes happens.