In my last essay I looked briefly at how to pick between experts. While people often rely on experts when making arguments, they also rely on studies (and experiments). Since most people do not do their own research, the studies mentioned are typically those conducted by others. While using study results in an argument is quite reasonable, making a good argument based on study results requires being able to pick between studies rationally.
Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick a study based on the fact that it agrees with what you already believe. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it is reasonable to believe it.
Another common approach is to accept a study as correct because the results match what you really want to be true. For example, a liberal might accept a study that claims liberals are smarter and more generous than conservatives. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).
In some cases, people try to create their own “studies” by appealing to their own anecdotal data about some matter. For example, a person might claim that poor people are lazy based on his experience with some poor people. While anecdotes can be interesting, to take an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.
While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations of studies, provided that they have the relevant information about the study. The following provides a concise guide to studies—and experiments.
In normal use, people often jam together studies and experiments. While this is fine for informal purposes, the distinction is actually important here. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.
The objective of the experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible—the more alike the two groups, the better the experiment.
The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.
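The splitting step described above can be sketched in a few lines of Python. This is only a minimal illustration of random assignment, with the function name and the fifty-fifty split being my own choices rather than part of any particular study design:

```python
import random

def assign_groups(sample, seed=0):
    """Randomly split a sample into two equal groups.

    Random assignment is what makes the experimental and control
    groups alike on average: any remaining difference between them
    is left to chance rather than to selection.
    """
    rng = random.Random(seed)   # fixed seed so the split is repeatable
    shuffled = list(sample)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (experimental, control)

experimental, control = assign_groups(range(10))
print(len(experimental), len(control))  # 5 5
```

The point of shuffling before splitting is that neither the researchers nor the participants pick who lands in which group, which is exactly what distinguishes an experiment from the nonexperimental studies discussed below.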
Assuming that the experiment was conducted properly, whether or not the results are statistically significant depends on the size of the sample and the size of the difference between the control group and the experimental group. The key idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment actually shows there is a causal connection, it is important to know the size of the sample used. After all, the difference between the experimental and control groups might be rather large, yet still not be significant. For example, imagine that an experiment is conducted involving 10 people: 5 people get a diet drug (the experimental group) while 5 do not (the control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it is actually not statistically significant: the sample is so small that the difference could be due entirely to chance. The following table shows some information about statistical significance.
Table: Sample size (control group + experimental group) vs. the approximate figure, in percentage points, that the difference must exceed to be statistically significant.
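To make the sample-size point concrete, here is a rough back-of-the-envelope calculation in Python. It assumes a two-proportion comparison at roughly the 95% confidence level with the worst-case proportion of 0.5; the function name, the z-value, and the treatment of the diet-drug example as a percentage-point difference are my own illustrative choices, not part of the original table:

```python
import math

def min_significant_diff(total_n, z=1.96):
    """Worst-case (p = 0.5) percentage-point difference that two
    equal-sized groups must exceed to be statistically significant
    at roughly the 95% level."""
    g = total_n / 2                       # size of each group
    se = math.sqrt(0.25 / g + 0.25 / g)   # standard error of the difference
    return 100 * z * se                   # threshold in percentage points

# The 10-person diet-drug example: even a 30-point difference
# falls well short of the required threshold.
print(round(min_significant_diff(10), 1))    # 62.0
print(round(min_significant_diff(1000), 1))  # 6.2
```

The key behavior to notice is that the threshold shrinks roughly with the square root of the sample size, which is why a 30-point difference means nothing with 10 people but would be overwhelming with 1,000.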
While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to radiation to test its effect would be immoral. In such cases studies are used rather than experiments.
One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a suspected cause. The main difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to radiation by an accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.
After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, a large difference between the groups need not be statistically significant.
Since the study relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the experiment. After all, in the study the researchers have to take what they can find rather than conducting a proper experiment.
In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness, but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to sort things out.
Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are those showing the effect.
Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining if there is a statistically significant suspected causal factor. If such a factor is found, then it can be tentatively taken as a causal factor—one that will probably require additional study. As with the other study and the experiment, the statistical significance of the results depends on the size of the study—which is why a study of adequate size is important.
Of the three methods, this is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would face the problem that those who chew tobacco are often ex-smokers. As such, the smoking might be the actual cause. Sorting this out would require a study of chewers who are not ex-smokers.
It is also worth referring back to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.
As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.
Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double blind experiment/study.
Overall, here are some key questions to ask when picking a study:
Was the study/experiment properly conducted?
Was the sample size large enough?
Were the results statistically significant?
Were those conducting the study/experiment experts?
One fairly common way to argue is the argument from authority. While people rarely follow the “strict” form of the argument, the basic idea is to infer that a claim is true based on the allegation that the person making the claim is an expert. For example, someone might claim that second hand smoke does not cause cancer because Michael Crichton claimed that it does not. As another example, someone might claim that astral projection/travel is real because Michael Crichton claims it does occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, there are medical experts who claim that second hand smoke does cause cancer.
If you are an expert in the field in question, you can endeavor to pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that second hand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an expert, a person is presumably qualified to make an informed pick. The obvious problem is, of course, that experts themselves pick different experts to accept as being correct.
The problem is even greater when it comes to non-experts who are trying to pick between experts. Being non-experts, they lack the expertise to make authoritative picks between the actual experts based on their own knowledge of the fields. This raises the rather important concern of how to pick between experts when you are not an expert.
Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick an expert based on the fact that she agrees with what you already believe. That is, to infer that the expert is right because you believe what she says. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).
Another common approach is to believe an expert because he makes a claim that you really want to be true. For example, a smoker might elect to believe an expert who claims second hand smoke does not cause cancer because he does not want to believe that he might be increasing the risk that his children will get cancer by his smoking around them. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).
People also pick their expert based on qualities they perceive as positive but that are, in fact, irrelevant to the person’s actual credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not actually relevant to assessing a person’s expertise. For example, a person might be very likeable, but not know a thing about the subject he is discussing.
Fortunately, there are some straightforward standards for picking and believing an expert. They are as follows.
1. The person has sufficient expertise in the subject matter in question.
Claims made by a person who lacks the needed degree of expertise to make a reliable claim will, obviously, not be well supported. In contrast, claims made by a person with the needed degree of expertise will be supported by the person’s reliability in the area. One rather obvious challenge here is being able to judge that a person has sufficient expertise. In general, the question is whether or not a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.
2. The claim being made by the person is within her area(s) of expertise.
If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. People often mistake expertise in one area (acting, for example) for expertise in another area (politics, for example).
3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.
This is perhaps the most important factor. As a general rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The basic idea is that the majority of experts are more likely to be right than those who disagree with the majority.
It is important to keep in mind that no field has complete agreement, so some degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.
It is also important to be aware that the majority could turn out to be wrong. That said, the reason it is still reasonable for non-experts to go with the majority opinion is that non-experts are, by definition, not experts. After all, if I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.
4. The person in question is not significantly biased.
This is also a rather important standard. Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then the person’s credibility as an authority is reduced. This is because there would be reason to believe that the expert might not be making a claim because he has carefully considered it using his expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice. A biased expert can still be making claims that are true—however, the person’s bias lowers her credibility.
It is important to remember that no person is completely objective. At the very least, a person will be favorable towards her own views (otherwise she would probably not hold them). Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that researchers who receive funding from pharmaceutical companies might be biased while others might claim that the money would not sway them if the drugs proved to be ineffective or harmful.
Disagreement over bias can itself be a very significant dispute. For example, those who doubt that climate change is real often assert that the experts in question are biased in some manner that causes them to say untrue things about the climate. Questioning an expert based on potential bias is a legitimate approach—provided that there is adequate evidence of bias that would be strong enough to unduly influence the expert. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from a claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that an expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.
These standards are clearly not infallible. However, they do provide a good general guide to logically picking an expert, and certainly a more logical approach than just picking the one who says things one likes.
One of the basic concerns in ethics is the matter of how people should be treated. This is often formulated in terms of our obligations to other people and the question is “what, if anything, do we owe other people?” While it does seem that some would like to exclude the economic realm from the realm of ethics, the burden of proof would rest on those who would claim that economics deserves a special exemption from ethics. This could, of course, be done. However, since this is a brief essay, I will start with the assumption that economic activity is not exempt from morality.
While I subscribe to virtue theory as my main ethics, I do find Kant’s ethics both appealing and interesting. In regards to how we should treat others, Kant takes as foundational that “rational nature exists as an end in itself.”
It is reasonable to inquire why this should be accepted. Kant’s reasoning certainly seems sensible enough. He notes that “a man necessarily conceives his own existence as such” and this applies to all rational beings. That is, Kant claims that a rational being sees itself as being an end, rather than a thing to be used as a means to an end. So, for example, I see myself as a person who is an end and not as a mere thing that exists to serve the ends of others.
Of course, the mere fact that I see myself as an end would not seem to require that I extend this to other rational beings (that is, other people). After all, I could apparently regard myself as an end and regard others as means to my ends—to be used for my profit as, for example, underpaid workers or slaves.
However, Kant claims that I must regard other rational beings as ends as well. The reason is fairly straightforward and is a matter of consistency: if I am an end rather than a means because I am a rational being, then consistency requires that I accept that other rational beings are ends as well. After all, if being a rational being makes me an end, it would do the same for others. Naturally, it could be argued that there is a relevant difference between myself and other rational beings that would warrant my treating them as means only and not as ends. People have, obviously enough, endeavored to justify treating other people as things. However, there seems to be no principled way to insist on my own status as an end while denying the same to other rational beings.
From this, Kant derives his practical imperative: “so act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.” This imperative does not entail that I cannot ever treat a person as a means—that is allowed, provided I do not treat the person as a means only. So, for example, I would be morally forbidden from being a pimp who uses women as mere means of revenue. I would, however, not be forbidden from having someone check me out at the grocery store—provided that I treated the person as a person and not a mere means.
One obvious challenge is sorting out what it is to treat a person as an end as opposed to just a means to an end. That is, the problem is figuring out when a person is being treated as a mere means and thus the action would be immoral.
Interestingly enough, many economic relationships would seem to clearly violate Kant’s imperative in that they treat people as mere means and not at all as ends. To use the obvious example, if an employer treats her employees merely as means to making a profit and does not treat them as ends in themselves, then she is acting immorally by Kant’s standard. After all, being an employee does not rob a person of personhood.
One obvious reply is to question my starting assumption, namely that economics is not exempt from ethics. It could be argued that the relationship between employer and employee is purely economic and only economic considerations matter. That is, the workers are to be regarded as means to profit and treated in accord with this—even if doing so means treating them as things rather than persons. The challenge is, of course, to show that the economic realm grants a special exemption in regards to ethics. Of course, if it does this, then the exemption would presumably be a general one. So, for example, people who decided to take money from the rich at gunpoint would be exempt from ethics as well. After all, if everyone is a means in economics, then the rich are just as much means as employees and if economic coercion against people is acceptable, then so too is coercion via firearms.
Another obvious reply is to contend that might makes right. That is, the employer has the power and owes nothing to the employees beyond what they can force him to provide. This would make economics rather like the state of nature—where, as Hobbes said, “profit is the measure of right.” Of course, this leads to the same problem as the previous reply: if economics is a matter of might making right, then people have the same right to use might against employers and other folks—that is, the state of nature applies to all.
On the face of it, the idea seems reasonable enough: if a person has health insurance, then she is less likely to use the emergency room. To expand on this a bit, what seems sensible is that a person with health insurance will be more likely to use primary care and thus less likely to need to use the emergency room. It also seems to make sense that a person with insurance would get more preventative care and thus be less likely to need a trip to the emergency room.
Intuitively, reducing emergency room visits would be a good thing. One reason is that emergency room care is rather expensive and reducing it would save money—which is good for patients and also good for those who have to pay the bills for the uninsured. Another reason is that the emergency room should be for emergencies—reducing the number of visits can help free up resources and lower waiting times.
As such, extending insurance coverage to everyone should be a good thing: it would reduce emergency room visits and this is good. However, it turns out that extending insurance might actually increase emergency room visits. According to what seems to be an excellent study, gaining insurance coverage actually resulted in more emergency room visits.
One obvious explanation is that people who are insured would be more likely to use medical services for the same reason that insured motorists are likely to use the service of mechanics: they are more likely to be able to pay the bills for repairs.
On the face of it, this would not be so bad. After all, if people can afford to go to the emergency room and be treated because they have insurance, that is certainly better than having people suffer simply because they lack insurance or the money to pay for care. However, what is most interesting about the study is that the expansion of Medicaid coverage resulted in an increase in emergency room visits for treatments that would have been more suitable in a primary care environment. That is, people decided to go to the emergency room for non-emergencies. The increase in emergency use was significant—about 40%. The study was large enough that this is statistically significant.
Given that Obamacare aims to both expand Medicaid and ensure that everyone is insured, it is certainly worth being concerned about the impact of these changes on the emergency room situation. Especially since one key claim has been that these changes would reduce costs by reducing emergency room visits.
One possibility is that the results from the Medicaid study will hold true across the country and will also apply to the insurance expansion. If so, there would be a significant increase in emergency room visits and this would certainly not result in a reduction of health care costs—especially if people go to the expensive emergency room rather than the less costly primary care options. Given the size and nature of the study, this concern is certainly legitimate in regards to the Medicaid expansion.
The general insurance expansion might not result in significantly more non-necessary emergency room visits. The reason is that private insurance companies often try to deter emergency room visits by imposing higher payments on patients. In contrast, Medicaid does not impose this higher cost. Thus, those with private insurance will tend to have a financial incentive to avoid the emergency room while those on Medicaid will not. While it would be wrong to impose a draconian penalty for going to the emergency room, one obvious solution is to impose a modest financial cost for emergency room visits—preferably tied to using the emergency room for services that can be provided by primary care facilities. This can be quite reasonable, given that emergency room treatment is more expensive than comparable primary care treatment. In my own case, I know that the emergency room costs me more than visiting my primary care doctor—which gives me yet another good reason to avoid the emergency room.
There is also some reason to think that people use emergency rooms rather than primary care because they do not know their options. That is, if more people were better educated about their medical options, they would choose primary care options over the emergency room when they did not need the emergency room services. Given that going to the emergency room is generally stressful and typically involves a long wait (especially for non-emergencies), people are likely to elect for primary care when they know they have that option. This is not to say education will be a cure-all, but it is likely to help reduce unnecessary emergency room visits, which is certainly a worthwhile objective.
While I teach at Florida A&M University, I regularly run through the Florida State University campus. In December, I noticed that the campus had been plastered with signs announcing that on January 1, 2014 the entire campus would be tobacco free (presumably enforced by killer drones). I was impressed by the extent of the plastering—there were plastic signs adhered to the sidewalks and many surfaces to ensure that all knew of the new decree.
While running does sometimes cause flashbacks, seeing those signs flashed me back to my freshman English class at Marietta College. For one writing assignment I argued in favor of various anti-smoking proposals, including some very draconian ones. I did include area bans on smoking. My motivation was, to be honest, somewhat selfish: I hate the smell of tobacco smoke (except certain pipe tobacco and certain cigars) and react rather badly to it (my eyelids swell and I have trouble breathing). As such, like a properly political person of any leaning, I thought it good and just to recast the rest of the world according to my desires and beliefs.
I thought the paper was well argued and rational. However, the professor (an avowed liberal) assigned it a grade of .62 (I am still not sure if that was out of 1, 4 or 100…). She also put a frowning face on it. And she called me a fascist. Interestingly, almost all that I proposed in the paper has come to pass (the campus wide ban being the latest). On the one hand, I do feel vindicated—if only in regards to my prophetic powers. On the other hand, I wobbled between anarchism and fascism in those days and that paper was clearly written during a fascist swing. Now that I am older and marginally wiser, I think it is worth reconsidering the ethics of the area ban.
While there are various grounds used to warrant area bans on certain behavior, three common justifications include claiming that the behavior is unpleasant, offensive or harmful. Or some combination of the three. In terms of how the justification works, the typical model is to ban behavior based on its impact on the rights of others. That is, the behavior is unpleasant, offensive or harmful to others and thus violates their right to not be exposed to unpleasant, offensive or harmful behavior.
While I have no desire to observe behavior that is unpleasant, I do question the idea that I have a right to not be exposed to the merely unpleasant. After all, what is unpleasant is highly subjective and area bans on the merely unpleasant could easily result in absurdity. For example, I would find someone wearing a puke green sweater with neon pink goats unpleasant to view, but it would be rather unreasonable to have an area ban on unpleasant fashion. Roughly put, the merely unpleasant does not impose enough on others to warrant banning it (providing that the unpleasant acts do not cross over into harassment, etc.). As such, the mere fact that many people find smoking unpleasant would not warrant an area ban on smoking.
Obviously, I have no desire to be exposed to behavior that I find offensive. However, I also question the idea that I have a right to not be exposed to what is merely offensive, even if it is very offensive. While the offensive might be a bit less subjective than the unpleasant, it still is very much a subjective matter. As such, as with the merely unpleasant, an area ban on merely offensive behavior would seem to lead to absurdity. For example, if the neon goats on the sweater mentioned above spelled out the words “philosophers are goat f@ckers”, I would find the sweater both unpleasant and offensive. However, the merely offensive does not seem to impose enough on my rights to warrant imposing on the right of the offender. Naturally, offensive behavior can cross over into an actual violation of my rights and that would warrant imposing on the offender. For example, if the sweater wearer insisted on following me and screaming “goat f@cker” into my face all day, then that would go from being merely offensive to harassment. Thus, the mere fact that many people find smoking offensive would not warrant an area ban on smoking. Interestingly, it would also not warrant bans on public nudity.
Obviously, I have no desire to be harmed by the behavior of others. Equally obviously, I do believe that I have a right to not be harmed (although there are cases in which I can be justly harmed). For those who prefer to not talk of rights, I am also fine with the idea that it would be wrong to harm me (at least in most cases). As such, it should be no surprise that I would find area bans on behavior that harms others to be acceptable. The grounds would be Mill’s argument about liberty: what concerns only me and does not harm others is my own business and not their business. But, actions that harm others become the business of those that are harmed.
While the basic idea that it is acceptable to limit behavior that harms others is appealing, one clear challenge is sorting out the sort of harm that warrants imposing on others. Going back to offensive behavior, it could be claimed that offensive behavior does cause harm. For example, someone might believe that his children would be terribly harmed if they saw an unmarried couple kissing in public and thus claim that this should be banned from all public areas. As another example, a person might contend that seeing people catching fish would damage him emotionally because of the suffering of the fish and thus fishing should be banned from public areas. While these two examples are a bit silly, there are clearly some legitimate grey areas between the offensive and the clearly harmful.
Fortunately, the situation with smoking is clear cut. Tobacco smoke is known to be physically harmful to those who breathe it in (whether they are smoking or not). As such, when someone is smoking in a public area, she is imposing an unchosen health risk on everyone else in the area of effect. Since the area is public, she clearly has no right to do this. To use an analogy, while a person has a right to wear the “goat f@cker” sweater mentioned above, she does not have a right to wear one that sprays out poison or has been powdered with uranium. To use a less silly analogy, a person in a public area does not have the right to spit on people who get close to her. While they could avoid this by staying away from her, she has no right to “control” the space around her with something that can harm others (spit can, obviously, transmit disease). As such, it is morally acceptable to impose an area ban on smoking.
I would, however, contend that behavior that does not harm others should not be subject to such bans. Consider, for example, drinking alcohol in public. Provided that the person is not engaging in otherwise harmful behavior, there seems to be no compelling moral reason to impose such a ban. After all, drinking a beer near people in public causes them no harm. Likewise, campus dress codes would also seem to lack a moral justification—provided that the attire does not actually inflict harm. Merely being offensive or even distracting does not seem enough to warrant an area ban on moral grounds.
Certain pundits of the American right have continued the tradition of demonizing the poor. For example, Fox News seems to delight in the narrative of the wicked poor who are destroying America. It is certainly worth considering why the poor are demonized.
One ironic foundation for this is religion. While “classic” Christianity regards the poor as blessed and warns of the dangers of idolatry, there is a strain of Christianity that regards poverty as a sign of damnation and wealth as an indicator of salvation. As Pope Francis has been pointing out, this view is a perversion of Christianity. Not surprisingly, Pope Francis has been criticized by certain pundits for actually taking Jesus seriously.
Another reason for this is that demonizing the poor allows the pundits to redirect anger so that the have-less are angry at the have-nots, rather than those who have almost everything. This is, of course, classic scapegoating: the wicked poor are blamed for many of the woes besetting America. The irony is, of course, that the poor and powerless are cast as a threat to the rich and powerful.
The approach taken in regards to the poor follows the classic model used throughout history. This model involves presenting two distinct narratives about the group that is to be hated. The first is to create a narrative which casts the members of the group as subhuman, wicked, inferior and defective. In the case of the poor, the stock narrative is that the poor are stupid, lazy, drug-users, criminals, frauds, moochers and so on. This narrative is used to create contempt and hatred of the poor in order to dehumanize them. This makes it much easier to get people to accept that it is morally permissible (even laudable) to treat the poor poorly.
The second narrative is to cast the poor as incredibly dangerous. While they have been cast as subhuman by the first narrative, the second narrative presents them as a dire threat to everyone else. The stock narrative is that the poor are destroying America by being “takers” from the “makers.” One obvious problem is crafting a narrative in which the poor and seemingly powerless are able to destroy the rich and powerful. The interesting solution to this problem is to cast Obama and some Democrats as being both very powerful (thus able to destroy America) yet somehow in service to the poor (thus making the poor the true masters of destruction).
On the face of it, a little reflection should expose the narrative as absurd. The poor are obviously poor and lack power. After all, if they had power they would hardly remain poor. As such, the idea that the poor and powerless have the power to destroy America seems to be absurd. True, the poor could rise up in arms and engage in class warfare in the literal sense of the term—but that is not likely to happen.
At this point, it is natural to bring up the idea of “bread and circuses”—the idea that the poor destroyed the Roman Empire by forcing the rulers to provide them with bread and circuses until the empire fell apart.
There are two obvious replies to this. The first is that even if Rome was wrecked by spending on bread and circuses, it was the leaders who decided to use that approach to appease the masses. That is, the wealthy and powerful decided to bankrupt the state in order to stay in power. Second, the poor who wanted bread and circuses were a symptom rather than the disease. That is, the cause of the decline of the empire also resulted in larger numbers of poor people. As such, it was not so much that the poor were destroying the empire; rather, the destruction of the empire was resulting in an increase in the poor.
The same could be said about the United States: while the income gap in the United States is extreme and poverty is relatively high, it is not the poor who are causing the decline of America. Rather, the poverty is the result of the decline. As such, demonizing the poor and blaming them for the woes is rather like blaming the fever for the disease.
Ironically, the insistence on demonizing and blaming the poor serves to distract people from the real causes of our woes, such as the deranged financial system, systematic inequality, a rigged market and a political system that is beholden to the 1%.
It is, however, a testament to the power of rhetoric that so many people buy the absurd idea that the poor and powerless are somehow the victimizers rather than the victims.
One stock narrative is the tale of the fraud committed by the poor in regards to government programs. Donald Trump, for example, has claimed that a lot of fraud occurs. Fox News also pushes the idea that government programs aimed at helping the poor are fraught with fraud. Interestingly enough, the “evidence” presented in support of such claims seems to be that the people making the claim think or feel that there must be a lot of fraud. However, there seems to be little inclination to actually look for supporting evidence—presumably if someone feels strongly enough that a claim is true, that is good enough.
The claim that the system is dominated by fraud is commonly used to argue that the system should be cut back or even eliminated. The basic idea is that the poor are “takers” who are fraudulently living off the “makers.” While fraud is clearly wrong, it is rather important to consider some key questions.
The first question is this: what is the actual percentage of fraud that occurs in such programs? While, as noted above, certain people speak of lots of fraud, the actual statistical data tells another story. In the case of unemployment insurance, the rate of fraud is estimated to be less than 2%. This is lower than the rate of fraud in the private sector. In the case of welfare, fraud is sometimes reported as being 20%-40% at the state level. However, the “fraud” seems to be primarily the result of errors on the part of bureaucrats rather than fraud committed by the recipients. Naturally, an error rate that high is unacceptable—but it yields a rather different narrative than that of the wicked poor.
Food stamp fraud does occur—but most of it is committed by businesses rather than the recipients of the stamps. While there is some fraud on the part of recipients, the best data indicates that fraud accounts for about 1% of the payments. Given the rate of fraud in the private sector, that is exceptionally good.
Given this data, the overwhelming majority of those who receive assistance are not engaged in fraud. This is not to say that fraud should not be a concern—in fact, it is the concern with fraud on the part of the recipients that has resulted in such a low incidence of fraud. Interestingly, about one third of fraud involving government money involves not the poor, but defense contractors, who account for about $100 billion in fraud per year. Medicare and Medicaid combined have about $100 billion in fraudulent expenditures per year. While there is also a narrative of the wicked poor in regards to Medicare and Medicaid, the fraud is usually perpetrated by the providers of health care rather than the recipients. As such, it would seem that the focus on fraud should shift from the poor recipients of aid to defense contractors and to the Medicare/Medicaid providers. That is, it is not the wicked poor who are siphoning away money with fraud; it is the wicked wealthy who are sucking on the teat of the state. As such, the narrative of the poor defrauding the state is a flawed narrative. Certainly it does happen: the percentage of fraud is greater than zero. However, the overall level of fraud on the part of the poor recipients seems to be less than 2%. The majority of fraud, contrary to the narrative, is committed by those who are not poor. While the existence of fraud does show a need to address that fraud, the narrative has cast the wrong people as the villains.
While the idea of mass welfare cheating is thus unfounded, there is still a legitimate concern as to whether or not the poor should be receiving such support from the state. After all, even if the overwhelming majority of recipients are honestly following the rules and not engaged in fraud, there is still the question of whether or not the state should be providing welfare, food stamps, Medicare, Medicaid and similar such benefits. Of course, the narrative does lose some of its rhetorical power if the poor are not cast as frauds.
While truly intelligent machines are still in the realm of science fiction, it is worth considering the ethics of owning them. After all, it seems likely that we will eventually develop such machines and it seems wise to think about how we should treat them before we actually make them.
While it might be tempting to divide beings into two clear categories of those it is morally permissible to own (like shoes) and those that are clearly morally impermissible to own (people), there are clearly various degrees of ownership in regards to ethics. To use the obvious example, I am considered the owner of my husky, Isis. However, I obviously do not own her in the same way that I own the apple in my fridge or the keyboard at my desk. I can eat the apple and smash the keyboard if I wish and neither act is morally impermissible. However, I should not eat or smash Isis—she has a moral status that seems to allow her to be owned but does not grant her owner the right to eat or harm her. I will note that there are those who would argue that animals should not be owned and also those who would argue that a person should have the moral right to eat or harm her pets. Fortunately, my point here is a fairly non-controversial one, namely that it seems reasonable to regard ownership as possessing degrees.
Assuming that ownership admits of degrees in this regard, it makes sense to base the degree of ownership on the moral status of the entity that is owned. It also seems reasonable to accept that there are qualities that grant a being the status that morally forbids ownership. In general, it is assumed that persons have that status—that it is morally impermissible to own people. Obviously, it has been legal to own people (be the people actual people or corporations) and there are those who think that owning other people is just fine. However, I will assume that there are qualities that provide a moral ground for making ownership impermissible and that people have those qualities. This can, of course, be debated—although I suspect few would argue that they should be owned.
Given these assumptions, the key matter here is sorting out the sort of status that intelligent machines should possess in regards to ownership. This involves considering the sort of qualities that intelligent machines could possess and the relevance of these qualities to ownership.
One obvious objection to intelligent machines having any moral status is the usual objection that they are, obviously, machines rather than organic beings. The easy and obvious reply to this objection is that this is mere organicism—which is analogous to a white person saying blacks can be owned as slaves because they are not white.
Now, if it could be shown that a machine cannot have qualities that give it the needed moral status, then that would be another matter. For example, philosophers have argued that matter cannot think and if this is the case, then actual intelligent machines would be impossible. However, we cannot assume a priori that machines cannot have such a status merely because they are machines. After all, if certain philosophers and scientists are right, we are just organic machines and thus there would seem to be nothing impossible about thinking, feeling machines.
As a matter of practical ethics, I am inclined to set aside metaphysical speculation and go with a moral variation on the Cartesian/Turing test. The basic idea is that a machine should be granted a moral status comparable to organic beings that have the same observed capabilities. For example, a robot dog that acted like an organic dog would have the same status as an organic dog. It could be owned, but not tortured or smashed. The sort of robohusky I am envisioning is not one that merely looks like a husky and has some dog-like behavior, but one that would be fully like a dog in behavioral capabilities—that is, it would exhibit personality, loyalty, emotions and so on to a degree that it would pass as a real dog with humans if it were properly “disguised” as an organic dog. No doubt real dogs could smell the difference, but scent is not the foundation of moral status.
In terms of the main reason why a robohusky should get the same moral status as an organic husky, the answer is, oddly enough, a matter of ignorance. We would not know if the robohusky really had the metaphysical qualities of an actual husky that give an actual husky moral status. However, aside from the difference in parts, we would have no more reason to deny the robohusky moral status than to deny the husky moral status. After all, organic huskies might just be organic machines and it would be mere organicism to treat the robohusky as a mere thing while granting the organic husky a moral status. Thus, advanced robots with the capacities of higher animals should receive the same moral status as organic animals.
The same sort of reasoning would apply to robots that possess human qualities. If a robot had the capability to function analogously to a human being, then it should be granted the same status as a comparable human being. Assuming it is morally impermissible to own humans, it would be impermissible to own such robots. After all, it is not being made of meat that grants humans the status of being impermissible to own but our qualities. As such, a machine that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their organic prejudices.
It can be objected that no machine could ever exhibit the qualities needed to have the same status as a human. The obvious reply is that if this is true, then we will never need to grant such status to a machine.
Another objection is that a human-like machine would need to be developed and built. The initial development will no doubt be very expensive and most likely done by a corporation or university. It can be argued that a corporation would have the right to make a profit off the development and construction of such human-like robots. After all, as the argument usually goes for such things, if a corporation was unable to profit from such things, they would have no incentive to develop such things. There is also the obvious matter of debt—the human-like robots would certainly seem to owe their creators for the cost of their creation.
While I am reasonably sure that those who actually develop the first human-like robots will get laws passed so they can own and sell them (just as slavery was made legal), it is possible to reply to this objection.
One obvious reply is to draw an analogy to slavery: just because a company would have to invest money in acquiring and maintaining slaves it does not follow that their expenditure of resources grants a right to own slaves. Likewise, the mere fact that a corporation or university spent a lot of money developing a human-like robot would not entail that they thus have a right to own it.
Another obvious reply to the matter of debt owed by the robots themselves is to draw an analogy to children: children are “built” within the mother and then raised by parents (or others) at great expense. While parents do have rights in regards to their children, they do not get the right of ownership. Likewise, robots that had the same qualities as humans should thus be regarded as children would be regarded and hence could not be owned.
It could be objected that the relationship between parents and children would be different than between corporation and robots. This is a matter worth considering and it might be possible to argue that a robot would need to work as an indentured servant to pay back the cost of its creation. Interestingly, arguments for this could probably also be used to allow corporations and other organizations to acquire children and raise them to be indentured servants (which is a theme that has been explored in science fiction). We do, after all, often treat humans worse than machines.
There is considerable buzz about the internet of things, smart devices and connected devices. These devices range from toothbrushes to underwear to cars. As might be imagined, one might wonder whether a person really needs a connected toothbrush or even a connected fridge. While the matter of need is interesting, I’ll focus on other matters.
One obvious point of concern is the fact that a device connected to the internet can be hacked. In some cases, people will engage in prank hacking. For example, a wit might hack a friend’s connected fridge to say “I am sorry Dave. No pie for you” in HAL’s voice. Of greater concern is the possibility that people will engage in truly malicious hacking. For example, a smart fridge might be hacked and shut off, allowing the food in it to spoil. Or the temperature might be lowered so that the food in the refrigerator is frozen. As another example, it might be possible to burn out the motors in a washing machine—something analogous to what happened in the famous case of the Iranian centrifuges. Or a dryer might be hacked in a way that could burn down a house. As a final example, consider the damage that could be done by someone hacking the systems in a connected car, such as turning it off while it is roaring down the highway or disabling the software that allows the car to brake.
Because of these risks, manufacturers will need to make considerable effort to ensure that the devices are safe even when hacked. Naturally, the easiest way to stay safe is to stick with dumb, unconnected devices—no one can hack my 1997 washing machine nor my 2001 Toyota Tacoma from the internet. But, of course, being safe in this way would entail missing out on the alleged benefits of the connected lifestyle. I cannot, for example, turn on my washer from work—I have to walk over to the machine and turn it on. As another example, my non-smart fridge cannot send me a text telling me to buy more pie. I have to remember when I am out of pie.
Another obvious point of concern is that connected devices can easily be used as spies—they can send all sorts of data to companies, governments and individuals. For example, a suitably smart connected fridge could provide data about its contents on a regular basis, thus providing a decent report on the users’ purchasing and consumption behavior. As another example, a suitably smart connected car can provide all sorts of behavioral and location data. It goes without saying that the NSA will be accessing all these devices and siphoning vast amounts of data about us. It also goes without saying that corporations will be doing the same—just think about Google appliances, cars, and underwear. Individuals, such as stalkers and thieves, will also be keen to get the data from such devices. These concerns are, obviously, not new ones—but the more we are connected, the more our privacy will be violated.
A practical concern is that such devices will be more complicated than the non-smart devices they replace, perhaps making them less reliable, more expensive and such that they become obsolete sooner. While my washer is not smart, it has proven to be very reliable: I’ve had it repaired once since 1997. In contrast, I’ve had to replace my smart devices (like my PC and tablets) to keep up with changes. For example, the used iPad 1 I own is stuck on version 5 of iOS—and Apple is now on version 7. While some apps still update and run, many do not. Just imagine if your fridge, washer, dryer and car get on the high-tech upgrade cycle of being obsolete (and perhaps unusable) in a few years. While this will be great for the folks who want to sell us a new fridge every 2-3 years, it might not be so great for the consumer.
While I do like technology and can see the value in smart, connected devices, I do have these concerns about them. Of course, my best defense against them is that I am a low-paid professor: I’ll only be replacing my current non-smart devices when they can no longer be repaired.