While pharmaceutical companies and their stockholders have profited greatly from flooding America with opioids, this has come at a terrible cost to others. Lending credence to the idea of gateway drugs, there is a clear path from legal opioids to illegal opioids (such as heroin). As would be expected, the use of opioids can have a terrible impact on health. One example of this is endocarditis.
Endocarditis is, roughly speaking, an abscess on a heart valve. While not limited to drug users, it is not an uncommon consequence of injecting opioids. Since the abuse of opioids is increasing, it is no surprise that the number of drug users suffering from endocarditis has increased significantly. As would be imagined, the treatment of endocarditis involves a very expensive surgery. As would also be imagined, many of the drug users getting this surgery are on Medicaid, so the taxpayers are footing the bill for this expensive treatment. To make matters worse, people typically return to using opioids after the surgery, and this often results in the need for yet another expensive surgery, also paid for by Medicaid. This raises some serious moral concerns.
There is, of course, the very broad moral issue of whether Medicaid should exist. On the one hand, a compelling moral argument can be made that just as a nation provides military and police protection to citizens who cannot afford their own security forces or bodyguards, a nation should fund medical care for those who cannot afford it on their own. On the other hand, a moral argument can be made that a nation has no obligation to provide such support and that citizens should be left to fend for themselves in regards to health care. Naturally enough, if the nation is under no obligation to provide Medicaid in general, then it is under no obligation to cover the cost of the surgery in question. On this view, there is no need to consider the matter further.
However, it does seem worth granting for the sake of argument that the state should provide Medicaid and then considering the ethics of paying for endocarditis surgery for opioid addicts, especially when they are likely to continue the behavior that resulted in the need for surgery. It is to this discussion that I now turn.
While it certainly appears harsh to argue against paying for addicts’ heart surgery, a solid moral case can be made in favor of this position. The easiest and most obvious way to do this is on utilitarian grounds.
As noted above, the surgery for endocarditis is very expensive. As such, it uses financial and medical resources that could be used elsewhere. It seems likely that a great deal of good could be done with those resources, good that exceeds what would be created by replacing the heart valve of an addict. This argument can be strengthened by including the fact that addicts often return to the very behavior that resulted in endocarditis, thus creating the need to repeat the costly surgery. From a utilitarian perspective, it would be morally better to use those resources to treat patients who are far less likely to willfully engage in behavior that will require them to be treated yet again. This is because the resources that would be consumed treating and retreating a person who keeps inflicting harm on themselves could be used to treat many people, thus doing greater good for the greater number. Though harsh and seemingly merciless, this approach seems justifiable on grounds similar to the moral justification for triage.
Another approach, which is even harsher, is to focus on the fact that addicts inflict endocarditis on themselves, often repeatedly. This provides the basis for two arguments against public funding of their treatment.
One argument can be built around the idea that there is not a moral obligation to help people when their harm is self-inflicted. To use an analogy, if a person insists on setting fire to their house and it burns down, no one has a moral responsibility to pay to have their house rebuilt. Since the addict’s woes are self-inflicted, there is no moral obligation on the part of others to pay for their surgery and forcing people to do so (by using public money) would be like forcing others to pay to rebuild the burned house.
One way to counter this is to point out that a significant percentage of health issues (probably most) are self-inflicted through a lack of positive behavior (such as exercise and a good diet) and an abundance of negative behavior (such as smoking, drinking, or having unprotected sex). As such, if this principle is applied to addicts in regards to Medicaid, it must be applied to all cases of self-inflicted harms. While some might take this as a refutation of this view, others might accept this as quite reasonable.
Another argument can be built around the notion that while there could be an obligation to help people, this obligation has clear limits. In this case, if a person is treated and then knowingly returns to the same behavior that inflicted the harm, then there is no obligation to keep treating the person. In the case of the drug addict, it could be accepted that the first surgery should be covered and that they should be educated on what will happen if they persist in their harmful behavior. If they then persist in that behavior and need the surgery again, then public money should not be used. To use an analogy, if a child swings their ice cream cone around playing like it is a light sabre and is surprised when the scoops are flung to the ground, then it would be reasonable for the parents to buy the child another cone. If the child then swings the new cone around again and the scoops hit the floor, then the child can be justly denied another cone.
An obvious counter is to contend that addicts are addicted and hence cannot be blamed for returning to the same behavior that caused the harm. That is, they are not morally responsible for what they are doing to themselves because they cannot do otherwise. This does have some appeal, but it would seem to justify requiring addicts to undergo treatment for their addiction and to agree to monitoring of their behavior. They should be free to refuse this (which, ironically, assumes they are capable of free choice), but this refusal should result in their being denied a second surgery if their behavior results in the same harm. Holding people accountable in this way may seem cruel, but the alternative is unfair to other citizens. It would be like requiring them to keep rebuilding houses for a person who persists in setting fires in their house and refuses to have sprinklers installed.
These arguments can be countered by arguing that there is an obligation to provide such care regardless of how many times an addict returns to the behavior that caused the need for the surgery. One approach would be to build an analogy based on how the state repeatedly bails out big businesses every time they burn down the economy. Another approach would be to appeal to the value of human life and contend that it must be preserved regardless of the cost and regardless of the reason why there is a need for the medical care. This approach could be noble or, perhaps, foolish.
On the face of it, the idea seems reasonable enough: if a person has health insurance, then she is less likely to use the emergency room. To expand on this a bit, what seems sensible is that a person with health insurance will be more likely to use primary care and thus less likely to need to use the emergency room. It also seems to make sense that a person with insurance would get more preventative care and thus be less likely to need a trip to the emergency room.
Intuitively, reducing emergency room visits would be a good thing. One reason is that emergency room care is rather expensive and reducing it would save money—which is good for patients and also good for those who have to pay the bills for the uninsured. Another reason is that the emergency room should be for emergencies—reducing the number of visits can help free up resources and lower waiting times.
As such, extending insurance coverage to everyone should be a good thing: it would reduce emergency room visits, and this is good. However, it turns out that extending insurance might actually increase emergency room visits. What seems to be an excellent study found that insurance coverage actually results in more emergency room visits.
One obvious explanation is that people who are insured would be more likely to use medical services for the same reason that insured motorists are likely to use the services of mechanics: they are more likely to be able to pay the bills for repairs.
On the face of it, this would not be so bad. After all, if people can afford to go to the emergency room and be treated because they have insurance, that is certainly better than having people suffer simply because they lack insurance or the money to pay for care. However, what is most interesting about the study is that the expansion of Medicaid coverage resulted in an increase in emergency room visits for treatments that would have been more suitable in a primary care environment. That is, people decided to go to the emergency room for non-emergencies. The increase in emergency room use was substantial—about 40%—and the study was large enough for this result to be statistically significant.
Given that Obamacare aims to both expand Medicaid and ensure that everyone is insured, it is certainly worth being concerned about the impact of these changes on the emergency room situation, especially since one key claim has been that these changes would reduce costs by reducing emergency room visits.
One possibility is that the results from the Medicaid study will hold true across the country and will also apply to the insurance expansion. If so, there would be a significant increase in emergency room visits, and this would certainly not result in a reduction of health care costs—especially if people go to the expensive emergency room rather than the less costly primary care options. Given the size and nature of the study, this concern is certainly legitimate in regards to the Medicaid expansion.
The general insurance expansion might not result in significantly more non-necessary emergency room visits. The reason is that private insurance companies often try to deter emergency room visits by imposing higher out-of-pocket payments on patients. In contrast, Medicaid does not impose this higher cost. Thus, those with private insurance will tend to have a financial incentive to avoid the emergency room while those on Medicaid will not. While it would be wrong to impose a draconian penalty for going to the emergency room, one obvious solution is to impose a modest financial penalty for emergency room visits—preferably tied to using the emergency room for services that can be provided by primary care facilities. This can be quite reasonable, given that emergency room treatment is more expensive than comparable primary care treatment. In my own case, I know that the emergency room costs me more than visiting my primary care doctor—which gives me yet another good reason to avoid the emergency room.
There is also some reason to think that people use emergency rooms rather than primary care because they do not know their options. That is, if more people were better educated about their medical options, they would choose primary care over the emergency room when they did not need emergency room services. Given that going to the emergency room is generally stressful and typically involves a long wait (especially for non-emergencies), people are likely to opt for primary care when they know they have that option. This is not to say education will be a cure-all, but it is likely to help reduce unnecessary emergency room visits, which is certainly a worthwhile objective.
One stock narrative is the tale of fraud committed by the poor in regards to government programs. Donald Trump, for example, has claimed that a lot of fraud occurs. Fox News also pushes the idea that government programs aimed at helping the poor are fraught with fraud. Interestingly enough, the “evidence” presented in support of such claims seems to be that the people making the claim think or feel that there must be a lot of fraud. However, there seems little inclination to actually look for supporting evidence—presumably if someone feels strongly enough that a claim is true, that is good enough.
The claim that the system is dominated by fraud is commonly used to argue that the system should be cut back or even eliminated. The basic idea is that the poor are “takers” who are fraudulently living off the “makers.” While fraud is clearly wrong, it is rather important to consider some key questions.
The first question is this: what is the actual percentage of fraud that occurs in such programs? While, as noted above, certain people speak of lots of fraud, the actual statistical data tells another story. In the case of unemployment insurance, the rate of fraud is estimated to be less than 2%. This is lower than the rate of fraud in the private sector. In the case of welfare, fraud is sometimes reported as being 20%-40% at the state level. However, this “fraud” seems to be primarily the result of errors on the part of bureaucrats rather than fraud committed by the recipients. Naturally, an error rate that high is unacceptable—but it is a rather different narrative than that of the wicked poor.
Food stamp fraud does occur—but most of it is committed by businesses rather than the recipients of the stamps. While there is some fraud on the part of recipients, the best data indicates that fraud accounts for about 1% of the payments. Given the rate of fraud in the private sector, that is exceptionally good.
Given this data, the overwhelming majority of those who receive assistance are not engaged in fraud. This is not to say that fraud should not be a concern—in fact, it is the concern with fraud on the part of the recipients that has resulted in such a low incidence of fraud. Interestingly, about one third of fraud involving government money involves not the poor, but defense contractors, who account for about $100 billion in fraud per year. Medicare and Medicaid combined have about $100 billion in fraudulent expenditures per year. While there is also a narrative of the wicked poor in regards to Medicare and Medicaid, the fraud is usually perpetrated by the providers of health care rather than the recipients. As such, it would seem that the focus on fraud should shift from the poor recipients of aid to defense contractors and to Medicare/Medicaid providers. That is, it is not the wicked poor who are siphoning away money with fraud; it is the wicked wealthy who are sucking on the teat of the state. As such, the narrative of the poor defrauding the state is a flawed narrative. Certainly fraud does happen: the percentage is greater than zero. However, the overall level of fraud on the part of poor recipients seems to be less than 2%. The majority of fraud, contrary to the narrative, is committed by those who are not poor. While the existence of fraud does show a need to address it, the narrative has cast the wrong people as the villains.
While the idea of mass welfare cheating is thus unfounded, there is still a legitimate concern as to whether or not the poor should be receiving such support from the state. After all, even if the overwhelming majority of recipients are honestly following the rules and not engaged in fraud, there is still the question of whether or not the state should be providing welfare, food stamps, Medicare, Medicaid and similar such benefits. Of course, the narrative does lose some of its rhetorical power if the poor are not cast as frauds.
The September 2013 issue of the NEA Higher Education Advocate featured an infographic comparing working at Walmart with working as an adjunct/contingent faculty member. Having worked as an adjunct, I can attest to the accuracy of the claims regarding the adjunct experience.
In the usual order of things, a college degree provides a higher earning potential. This is not, however, true for the typical adjunct. In the United States, a retail cashier makes an average of $9.13 an hour, resulting in a yearly income of $20,410. By way of comparison, Goldman Sachs’ health coverage for a higher-end employee (such as Ted Cruz’s wife) amounts to almost twice that amount. An adjunct who is working 40 hours a week will make on average $16,200 a year (which is $7.78 per hour). Running a cash register sometimes requires a high school degree, but not always. Being an adjunct typically requires having a graduate degree, and many adjuncts have doctorates. I did, and I made $16,000 my first year as an adjunct. That was teaching four classes a semester for two semesters. Adjuncts generally do not get any benefits, although some of them do get insurance coverage—as graduate students. I had health insurance as a graduate student (at a very low rate) but not as an adjunct—fortunately I had no serious injuries and only minor illnesses during my insurance-free time. If my quadriceps tendon had torn when I was an adjunct, it would have cost me almost $12,000—leaving me only $4,000 for the year (less after taxes).
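The wage figures above can be checked with some back-of-the-envelope arithmetic. Here is a minimal sketch, assuming a 40-hour week for 52 weeks (2,080 paid hours per year); the published averages are presumably survey figures, so they need not match this simple formula exactly:

```python
# Rough full-time wage conversions for the figures discussed above.
# Assumption: 40 hours/week for 52 weeks, i.e. 2,080 paid hours/year.
HOURS_PER_YEAR = 40 * 52  # 2,080

def annual_from_hourly(hourly_rate: float) -> float:
    """Annual income implied by an hourly wage at full-time hours."""
    return hourly_rate * HOURS_PER_YEAR

def hourly_from_annual(annual_income: float) -> float:
    """Hourly wage implied by an annual income at full-time hours."""
    return annual_income / HOURS_PER_YEAR

# The adjunct figure: $16,200/year comes out to about $7.79/hour,
# essentially the $7.78 cited (the cited value looks truncated).
print(round(hourly_from_annual(16_200), 2))  # 7.79

# The cashier figure: $9.13/hour at 2,080 hours gives $18,990.40,
# somewhat below the $20,410 average annual income cited, which
# suggests that figure comes from survey data, not this formula.
print(round(annual_from_hourly(9.13), 2))  # 18990.4
```

The small gap on the cashier side is a useful reminder that average annual income and average hourly wage are measured separately and need not line up under a simple full-time-hours assumption.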
The typical workers for corporations like Walmart tend to be no better off—they do not get many (or any) benefits and hence often do not have health care coverage. It might be wondered how people survive on such low wages and with no benefits. In some cases, people simply do without. When I was an adjunct, I did not have a car, I bought only what food I could afford, I lived in a one bedroom apartment, and I did all I could to live frugally. I do admit that I splurged on luxuries like running shoes and race entry fees. Fortunately, I did make some extra money writing—which helped support my gaming hobby.
This approach can work for a person who has no dependents, can get by without a vehicle, and has no health issues. However, those who cannot get by do the obvious: they turn to the state for aid. In the case of Walmart, the taxpayers provide support to its employees. For example, in the state of Wisconsin, Walmart employees cost the taxpayers $9.8 million a year in Medicaid benefits alone. Adjuncts would also often qualify for state support. Out of Yankee pride, I did not avail myself of any such aid—I could survive on what I was making, albeit at a relatively low quality of life in Western terms. However, many people do not have the luxury of pride—they need to care for their families or address health issues.
As might be imagined, these low salaries and lack of benefits are a point of concern. Laying aside concerns about the fairness of wages (which actually should not be laid aside), there is the fact that the low pay of many workers is subsidized by the taxpayers. That is, the taxpayers pick up the difference between what the employers pay and what people need to survive. As I have argued before, this is a form of corporate and university socialism: the state support allows schools and corporations to pay low wages and thus generate greater profits. Or, in the case of non-profit schools, to funnel the money elsewhere—most likely to administration and things like bonuses for the university president. For example, the previous president of my university was guaranteed a yearly bonus that was about twice the average yearly adjunct salary.
Obamacare is supposed, to some degree, to shift the burden of health care costs from the taxpayer to the employer. The idea is that larger employers will need to provide health care benefits to full time employees or pay a fine. This, as might be imagined, has caused some people to threaten dire consequences. To be specific, some employers, including universities, have stated that they will reduce employee hours so that they fall just under the line for full time employment. Some have even threatened to fire people on the grounds that they cannot afford to pay.
One stock counter to the idea that employers should provide such benefits is that the state has no right to impose such costs on businesses, especially when doing so will cause businesses to fire people and cut their hours. This does have some appeal. However, there is still the question of who will provide the workers with the resources they need to survive.
One view is that the employers have an obligation to provide a living wage to those who do their job and do it competently. Few would argue that an employer is obligated to just hand people money for not working or doing terrible work—after all, a person who can earn his way should do so. As might be imagined, many employers (including universities) would rather not do this. After all, increasing wages to an actual living wage would cut into profits. In the case of universities, such increases would mean cuts in other areas of the budget (but surely not presidential bonuses).
Another view is that private citizens or organizations of private citizens (such as church groups) have the obligation to provide assistance to others via charity. That is, individuals should voluntarily subsidize the employers by providing the employees with the resources they need to survive, such as food. Of course, if private citizens have this obligation, it would seem that the employers (being citizens as well) would also have this obligation. One clever way around this is to contend that corporations are people, just not the sort of people who have moral obligations. Obviously, people do provide such support—but it would certainly be a challenge for private citizens to adequately support all the working people whose wages are not adequate.
A third view is that the state has the obligation to provide the resources for people to survive. This is, for the most part, the current situation. However, since the state gets most of its income from the citizens, this is effectively having private citizens subsidizing the employers, only with the state organizing the charity. Once again, if the state is obligated to do this, this merely comes down to the citizens having this obligation.
A fourth option is that no one has an obligation to provide people with the resources they need to survive, even when those people are actually working full time and generating enough value to allow their employer to pay them living wages. One might appeal to the morality-nullifying powers of the free market: while people might have moral obligations, these do not hold in economic relations. One might also reject the idea that people have any such moral obligations to others at all: people must make it on their own or perish, unless someone freely decides to provide assistance.
Overall, it comes down to the question of what, if anything, people owe to each other. My own view is that the market does not nullify morality and that we do have obligations to each other. These obligations include an obligation not to allow other people to suffer or die simply because others are unwilling to pay them a fair, living wage. To head off the usual attacks, I am not claiming that able and competent people should simply be handed resources earned by the toil of others for doing nothing. Rather, my view is about fair wages and ethical behavior. This is why I am against both handing people things for nothing and people profiting off the labor of others. Both are cases of people getting the value of others’ work without earning that value themselves.
One common way to argue against raising the minimum wage (or even to argue for eliminating it) is to build a case based on claims about those who work such jobs. For example, one approach is to argue that the people on minimum wage are mainly high school and college kids who are just earning spending money. As another example, it is often claimed that minimum wage jobs are temporary jobs for most workers—they will spend a little while at minimum wage and move up to better pay. While these claims are true in some cases, the reality is rather different in general. For example, the average age of fast food workers is almost thirty—they are not just school kids. Also, a significant number of people get stuck in minimum wage jobs because there is nothing else available.
As an aside, even if it were true that all those working such jobs were just earning spending money or were going to move on up, it would not follow that the minimum wage should be lower or eliminated. After all, the fairness of a wage is distinct from the motive of the person working for the job or what they might be doing next. For example, if I am selling my books to get money to buy running shoes rather than on survival necessities, it would seem odd to claim that I am thus obligated to lower my prices. Likewise, even if a kid is earning money to spend on video games rather than for putting food on the table, it would seem odd to say that she is thus entitled to less pay for the work she does.
Getting back to the main focus of this essay, the reality is that many of the folks who work minimum wage jobs are working primarily to pay for necessities and that many of them are stuck in such jobs (in large part due to the current economic situation). The reality also is that a minimum wage job will typically not provide adequate income to pay for the necessities. Interestingly, some corporations recognize this. McDonald’s, for example, generated a brief bit of controversy with its helpful guide for employees: the corporation advised employees in minimum wage jobs to have another job.
Given the gap between the actual cost of living and the pay of a minimum wage job, it is not surprising that quite a few of the folks who work for minimum wage avail themselves of state support programs, such as food stamps (a program that now goes by other names) and Medicaid. After all, they cannot earn enough to pay for necessities and certainly prefer not to starve or end up on the streets (although some are malnourished and struggling with housing). While one narrative about such people is that they are living easy on federal support, the reality is rather different—most especially for the working poor who have families, for those who are endeavoring to attend college, and for those who hope to start a business.
Obviously enough, one large source for the funds for these programs is the taxpayer. That is, those who pay taxes are helping to subsidize those who receive state support while working minimum wage jobs. However, there seems to be another, equally plausible way of looking at the matter: the taxpayers are subsidizing those who pay minimum wage to their employees. That is, these employers can pay their employees less than what they need to survive because other people pick up the tab, thus allowing the employers to increase profits. If this is correct, those of us who pay taxes are involved in corporate socialism.
It could be countered that the taxpayers are not subsidizing the employers, such as McDonald’s. After all, the money for Medicaid and such is not going to the corporation, but to the workers. The obvious counter is that while this is technically true, the taxpayers are still contributing to sustaining the work forces for these employers, thus subsidizing them and allowing them to pay sub-survival wages.
It could also be contended that the employer has no obligation to pay workers enough to survive on without the addition of state support. After all, there are plenty of poor people and if some cannot survive on minimum wage, then economic selection will weed them out so that those who can survive on less will take their place in the economic ecosystem. This, of course, seems rather harsh and morally dubious, at best.
Another counter is that the poor are to blame for their wages. If they had better skills, more talent, better connections and so on, then they would not be receiving that minimum wage but a better salary. As such, while it might be unfortunate that the poor are so badly paid, it is their own fault and hence their employers owe them nothing more. If the state wishes to help them out, that is hardly subsidizing the companies—they would, or so they might say, pay more for a better class of worker.
This has, obviously enough, all the moral appeal of a robber saying that it is the fault of her victims that they were not able to resist her crimes.
Overall, it does appear clear that the taxpayers are helping to subsidize those on minimum wage. While we could decide to let the poor slip deeper into poverty, that would seem to be a wicked thing to do. It seems reasonable to shift more of the cost to the employers who benefit from the work of the employees. After all, many corporations that are built on minimum wage workers have been making excellent profits—at the expense of the workers and the taxpayers.
While many people still dream of becoming (or marrying) doctors, there is a shortage of primary care doctors. Folks in government are concerned about this and one of the most recent proposals is to have people pose as patients in order to determine the difficulty of getting care. One critical part of this is to determine if doctors are rejecting patients who belong to government health programs in favor of the more lucrative private insurance patients. This is, obviously enough, based on the mystery shopper model.
One obvious concern about this method is that it can be seen as a form of spying and also as a deception. While such deceit is acceptable in law enforcement and intelligence operations, this is justified by the fact that the targets are potential (or actual) criminals and enemies. However, the doctors are not suspected of acting illegally and hence the use of this method seems to be questionable.
A second obvious concern is that the money used in this program could be better spent in making positive contributions to health care—such as providing support for doctors willing to provide primary care services to people in government programs, or in other ways. It is already well established that we need more primary care doctors, and it seems almost equally obvious that doctors prefer patients who have private insurance. This is, of course, due mainly to money. In a free market system in which the main goal is to maximize profits, doctors have little incentive to pursue the lower paying career paths or to accept patients on government assistance. As such, there seems little reason to conduct a secret survey in order to learn what already seems to be known.
There is certainly merit in investigating the problems that motivated the mystery patient plan. However, this is something that should be done openly rather than with mystery patients.
While it would be nice if people went through medical school and ran their practices solely to help others, that sort of devotion certainly cannot be expected. As such, the most plausible solutions involve providing financial incentives. This can be done by increasing support for medical school students in return for a service commitment and by making the government payouts more appealing to doctors who have money on their minds.