While the scientific evidence for climate change is overwhelming, it has become an ideological matter. For conservatives, climate change denial has become something of a stock position; for liberals, belief in human-caused climate change is standard. Because of the way ideological commitments influence thought, those who are committed to climate change denial tend to become immune to evidence or reasons offered against their view. In fact, they tend to double down in the face of evidence, a standard defense people use to protect their ideological identity. This is not to say that all conservatives deny climate change; many accept it is occurring. However, conservatives who accept the reality of climate change tend to deny that it is caused by humans.
This spectrum of beliefs does tend to match the shifting position on climate change held by influential conservatives such as Charles Koch. The initial position was a denial of climate change. This shifted to the acceptance of climate change, but a rejection of the claim that it is caused by humans. The next shift was to accept that climate change is caused by humans, but that it is either not as significant as the scientists claim or that it is not possible to solve the problem. One obvious concern about this slow shift is that it facilitates the delay of action in response to the perils of climate change. If the delay continues long enough, there really will be nothing that can be done about climate change.
Since many conservatives are moving towards accepting human-caused climate change, one interesting problem is how to convince them to accept the science and to support effective actions to offset the change. As I teach the students in my Critical Inquiry class, using logic and evidence to try to persuade people tends to be a poor option. Fallacies and rhetoric are vastly more effective in convincing people. As such, the best practical approach to winning over conservatives is not to focus on the science and advance rational arguments. Instead, the focus should be on finding the right rhetorical tools to win people over.
This does raise a moral concern about whether it is acceptable to use such tactics to get people to believe in climate change and to persuade them to act. One way to justify this approach is on utilitarian grounds: preventing the harms of climate change morally outweighs the concerns about using rhetoric rather than reason to convince people. Another way to justify this approach is to note that the goal is not to get people to accept an untruth or to do something morally questionable. Quite the contrary, the goal is to get people to accept scientifically established facts and to act in defense of the wellbeing of humans in particular and the ecosystem in general. As such, using rhetoric when reason fails seems warranted in this case. The question is then what sort of rhetoric would work best.
Interestingly, many conservative talking points can be deployed to support acting against climate change. For example, many American conservatives favor energy independence and keeping jobs in America. Developing sustainable energy within the United States, such as wind and solar power, would help with both. After all, while oil can be shipped from Saudi Arabia, shipping solar power is not a viable option (at least not until massive and efficient batteries become economically viable). The trick is, of course, to use rhetorical camouflage to hide the fact that the purpose is to address climate change and environmental issues. As another example, many American conservatives tend to be pro-life, which can be used as a rhetorical angle to argue against pollution that harms fetuses. Of course, this is not likely to be a very effective approach if the main reasons someone is anti-abortion are not based in concern about human life and well-being. As a final example, clean water is a valuable resource for business because industry needs clean water and, of course, humans do as well. Thus, environmental protection of water can be sold with the rhetorical cover of being pro-business rather than pro-environment.
Thanks to a German study, there is evidence that one effective way to persuade conservatives to be concerned about climate change is to appeal to the fact that conservatives value preserving the past. This study showed that conservatives were influenced significantly more by appeals to restoring the earth to the way it was than by appeals to preventing future environmental harms. That is, conservatives were more swayed by appeals to conservation than by appeals to worries about future harms. As such, those wishing to gain conservative support for combating climate change should focus not on preventing the harms that will arise, but on making the earth great again. Many conservatives enjoy hunting, fishing and the outdoors and no doubt the older ones remember (or think they remember) how things were better when they were young. As examples, I’ve heard people talk about how much better the hunting used to be and how the fish were so much bigger, back in the good old days. This provides an excellent narrative for getting conservatives on board with addressing climate change and environmental issues. After all, presenting environmental protection as part of being a hunter and getting back to the memorable hunts of old is far more appealing than an appeal to hippie style tree-hugging.
Having grown up in the golden age of the CB radio, I have many fond memories of movies about truck driving heroes played by the likes of Kurt Russell and Clint Eastwood. While such movies seem to have been a passing phase, real truck drivers are heroes of the American economy. In addition to moving stuff across this great nation, they also earn solid wages and thus also contribute as taxpayers and consumers.
While most of the media attention is on self-driving cars, there are also plans underway to develop self-driving trucks. The steps towards automation will initially be a boon to truck drivers as these technological advances manifest as safety features. This progress will most likely lead first to a truck with a human riding in the cab as a backup (more for the psychological comfort of the public than any actual safety increase) and eventually to a fully automated truck.
Looked at in terms of the consequences of full automation, there will be many positive impacts. While automated trucks will probably be more expensive than manned vehicles initially, not needing to pay drivers will result in considerable savings for the companies. Some of this might even be passed on to consumers, resulting in a tiny decrease in some prices. There is also the fact that automated trucks, unlike human drivers, would not get tired, bored or distracted. While there will still be accidents involving these trucks, it would be reasonable to expect a very significant decrease. Such trucks would also be able to operate around the clock, stopping only to load and unload cargo, to refuel and for maintenance. This could increase the speed of deliveries. One can even imagine an automated truck with its own drones that fly away from the truck as it cruises the highway, making deliveries for companies like Amazon. While these will be good things, there will also be negative consequences.
The most obvious negative consequence of full automation is the elimination of trucker jobs. Currently, there are about 3.5 million drivers in the United States. There are also about 8.7 million other people employed in the trucking industry who do not drive. One must also remember all the people indirectly associated with trucking, ranging from people cooking meals for truckers to folks manufacturing or selling products for truckers. Finally, there are also the other economic impacts from the loss of these jobs, ranging from the loss of tax revenues to lost business. After all, truckers do not just buy truck related goods and services.
While the loss of jobs will be a negative impact, it should be noted that the transition from manned trucks to robot rigs will not occur overnight. There will be a slow transition as the technology is adopted and it is certain that there will be several years in which human truckers and robotruckers share the roads. This can allow for a planned transition that will mitigate the economic shock. That said, there will presumably come a day when drivers are given their pink slips in large numbers and lose their jobs to the rolling robots. Since economic transitions resulting from technological changes are nothing new, it could be hoped that this transition would be managed in a way that mitigated the harm to those impacted.
It is also worth considering that the switch to automated trucking will, as technological changes almost always do, create new jobs and modify old ones. The trucks will still need to be manufactured, managed and maintained. As such, new economic opportunities will be created. That said, it is easy to imagine these jobs becoming automated as well: fleets of robotic trucks cruising America, loaded, unloaded, managed and maintained by robots. To close, I will engage in a bit of sci-fi style speculation.
Oversimplifying things, the automation of jobs could lead to a utopian future in which humans are finally freed from the jobs that are fraught with danger and drudgery. The massive automated productivity could mean plenty for all; thus bringing about the bright future of optimistic fiction. That said, this path could also lead into a dystopia: a world in which everything is done for humans and they settle into a vacuous idleness they attempt to fill with empty calories and frivolous amusements.
There are, of course, many dystopian paths leading away from automation. Laying aside the usual machine takeover in which Google kills us all, it is easy to imagine a new “robo-plantation” style economy in which a few elite owners control their robot slaves, while the masses have little or no employment. A rather more radical thought is to imagine a world in which humans are almost completely replaced: the automated economy hums along, generating numbers that are duly noted by the money machines and the few remaining money masters. The ultimate end might be a single computer that contains a virtual economy, clicking away to itself in electronic joy over its amassing of digital dollars while around it the ruins of human civilization decay and the world awaits the evolution of the next intelligent species to start the game anew.
Martin Shkreli became the villain of drug pricing when he increased the price of a $13.50 pill to $750. While the practice of buying up smaller drug companies and increasing the prices of their products is a standard profit-making venture, the scale of the increase and Shkreli’s attitude drew attention to this incident. Unfortunately, while the Shkreli episode is the best known case, drug pricing is a sweeping problem. The August 2016 issue of Consumer Reports features an article on high drug prices in the United States and provides an excellent analysis of the matter—I am using it as the basis for the numbers I mention.
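To make the scale of that hike concrete, a quick back-of-the-envelope calculation using only the two figures mentioned above:

```python
# Shkreli's reported price hike: $13.50 -> $750.00 per pill.
old_price = 13.50
new_price = 750.00

# How many times the original price the new price is.
multiple = new_price / old_price

# Percent increase over the original price.
pct_increase = (new_price - old_price) / old_price * 100

print(f"{multiple:.1f}x the original price")  # 55.6x the original price
print(f"{pct_increase:.0f}% increase")        # 5456% increase
```

In other words, an increase of roughly 5,000 percent, which is why this case, rather than the industry's routine markups, became the lightning rod.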
From the standpoint of consumers, the main problem is that drugs are priced extremely high—sometimes to a level that literally bankrupts patients. Faced with social pushback, drug companies do provide some attempts to justify the high prices. One standard reason is that the high prices are needed to pay the R&D costs of the drugs. While a company does have the right to pass on the cost of drug development, consideration of the facts tells another story about the pricing of drugs.
First, about 38% of the basic research science is actually funded by taxpayer money—so the public is paying twice: once in taxes and once again for the drugs resulting from the research. This, of course, leaves a significant legitimate area of expenses for companies, but hardly enough to warrant absurdly high prices.
Second, most large drug companies spend almost twice as much on promotion and marketing as they do on R&D. While these are legitimate business expenses, this fact does undercut using R&D expenses to justify excessive drug prices. Obviously, telling the public that pills are pricy because of the cost of marketing pills so people will buy them would not be an effective strategy. There is also the issue of the ethics of advertising drugs, which is another matter entirely.
Third, many “new” drugs are actually slightly tweaked old drugs. Common examples include combining two older drugs to create a “new” drug, changing the delivery method (from an injectable to a pill, for example) or altering the release time. In many cases, the government will grant a new patent for these minor tweaks, giving the company up to a 20-year monopoly on the product and preventing competition. This practice, though obviously legal, is certainly sketchy. To use an analogy, imagine a company held the patent on a wheel and an axle. Then, when those patents expired, it patented wheel + axle as a “new” invention. That would obviously be absurd.
Companies also try other approaches to justify the high cost, such as arguing that the drugs treat serious conditions or can save money by avoiding a more expensive treatment. While these arguments do have some appeal, it seems morally problematic to argue that the price of a drug can be legitimately based on the seriousness of the condition it treats. This smells of a protection scheme or coercion: “pay what we want…or you die.” The money saving argument is less odious, but is still problematic. By this logic, car companies should be able to charge vast sums for safety features since they protect people from very expensive injuries. It is, of course, reasonable to make a profit on products that provide significant benefits—but there need to be moral limits to the profits.
The obvious counter to my approach is to argue that drug prices should be set by the free-market: if people are willing to pay large sums for drugs, then the drug companies should be free to charge those prices. After all, companies like Apple and Porsche sell expensive products without (generally) being demonized for making profits.
The easy response is that luxury cars and iWatches are optional luxuries that a person can easily do without and there are many cheaper (and better) alternatives. However, drug companies sell drugs that are necessary for a person’s health and even survival—they are generally not optional products. There is also the fact that drug companies enjoy patent protection that precludes effective competition. While Apple does hold patents on its devices, there are many competitors. For example, since I would rather not shell out $350 for an iWatch, I use a Pebble Watch. I could also have opted to go with a $10 watch. But, if I had hepatitis C and wanted to be cured, I would be stuck with only one drug option.
While defenders of drug prices laud the free market and decry “government interference”, their ability to charge high prices depends on the interference of the state. As noted above, the United States and other governments issue patents to drug companies that grant them exclusive ownership. Without this protection, a company that wanted to charge $750 for a $13.50 pill would find competitors rushing to sell the pill for far less. After all, it would be easy enough for a competing drug company to analyze a drug and produce it. By accepting the patent system, the drug companies accept that the state has a right to engage in legal regulation of the drug industry, that is, to replace the invisible hand with the very visible hand of the state. Once this is accepted, the door is opened to additional regulation on the grounds that the state will protect the company’s property using taxpayer money in return for the company agreeing not to engage in harmful pricing of drugs. Roughly put, if the drug companies expect people to obey the social contract with the state, they also need to operate within the social contract. Companies could, of course, push for a truly free market: they would be free to charge whatever they want for drugs without state interference, but there would also be no state interference with their competitors when those competitors duplicate the high-priced drugs and start undercutting the prices.
In closing, if the drug companies want to keep the patent protection they need for high drug prices, they must be willing to operate within the social contract. After all, citizens should not be imposed upon to fund the protection of the people who are, some might claim, robbing them.
While asteroid mining is still just science fiction, companies such as Planetary Resources are already preparing to mine the sky. While space mining sounds awesome, lawyers are already hard at work murdering the awesomeness with legalese. President Obama recently signed the U.S. Commercial Space Launch Competitiveness Act, which seems to make asteroid mining legal. The key part of the law is that “Any asteroid resources obtained in outer space are the property of the entity that obtained them, which shall be entitled to all property rights to them, consistent with applicable federal law and existing international obligations.” More concisely, the law makes it so that asteroid mining by U.S. citizens would not violate U.S. law.
While this would seem to open up the legal doors to asteroid mining, there are still legal barriers. The various space treaties, such as the Outer Space Treaty of 1967, do not give states sovereign rights in space. As such, there is no legal foundation for a state conferring space property rights to its citizens on the basis of its sovereignty. However, the treaties do not forbid private ownership in space—as such, any other nation could pass a similar law that allows its citizens to own property in space without violating the laws of that nation.
One obvious concern is that if multiple nations pass such laws and citizens from these nations start mining asteroids, then there will be the very real possibility of conflict over valuable resources. In some ways this will be a repeat of the past: the more technologically advanced nations engaged in a struggle to acquire resources in an area where they lack sovereignty. These past conflicts tended to escalate into actual wars, which is something that must be considered in the final frontier.
One way to try to avoid war over asteroid resources is to work out new treaties governing the use of space resources. This is, obviously enough, a matter that will be handled by space lawyers, governments, and corporations. Unless, of course, the automated killing machines resolve it first.
While the legal aspects of space ownership are interesting, the moral aspects of ownership in space are also of considerable concern. While it might be believed that property rights in space are something entirely new, this is clearly not the case. Although the location is different, the matter of space property matches the state of nature scenarios envisioned by thinkers like Hobbes and Locke. To be specific, there is an abundance of resources and an absence of authority. As it now stands, while no one can hear you scream in space, there is also no one who can arrest you for space thievery.
Using the state of nature model, it can be claimed that there are currently no rightful owners of the asteroids or it could be claimed that we are all the rightful owners (the asteroids are the common property of all of humanity).
If there are currently no rightful owners, then it would seem that the asteroids are there for the taking: an asteroid belongs to whoever can take and hold it. This is on par with Hobbes’ state of nature—practical ownership is a matter of possession. As Hobbes saw it, everyone has the right to all things, but this is effectively a right to nothing—other than what a person can defend from others. As Hobbes noted, in such a scenario profit is the measure of right and who is right is to be settled by the sword.
While this is practical, brutal and realistic, it does seem a bit morally problematic in that it would, as Hobbes also noted, lead to war. His solution, which would presumably work as well in space as on earth, would be to have sovereignty in space. This would shift the war of all against all in space (of the sort that is common in science fiction about asteroid mining) to a war of nations in space (which is also common in science fiction). The war could, of course, be a cold one fought economically and technologically rather than a hot one fought with mass drivers and lasers.
If the asteroids are regarded as the common property of humanity, then Locke’s approach could be taken. As Locke saw it, God gave everything to humans in common, but people have to acquire things from the common property to make use of it. Locke gives the terrestrial example of how a person needs to make an apple her own before she can benefit from it. In the case of space, a person would need to make an asteroid her own in order to benefit from the materials it contains.
Locke sketched out a basic labor theory of ownership—whatever a person mixes her labor with becomes her property. As such, if asteroid miners located an asteroid and started mining it, then the asteroid would belong to them. This does have some appeal: before the miners start extracting the minerals from the asteroid, it is just a rock drifting in space. Now it is a productive mine, improved from its natural state by the labor of the miners. If mining is profitable, then the miners would have a clear incentive to grab as many asteroids as they can, which leads to a rather important moral problem—the limits of ownership.
Locke does set limits on what people can take in his proviso: those who take from the common resources must leave as much and as good for others. When describing this to my students, I always use the analogy of food at a party: since the food is for everyone, everyone has a right to the food. However, taking it all or taking the very best would be wrong (and rude). While this proviso is ignored on earth, the asteroids provide us with a fresh start in regards to dividing up the common property of humanity. After all, no one has any special right to claim the asteroids—so we all have equally good claims to the resources they contain.
As with earth resources, some will probably contend that there is no obligation to leave as much and as good for others in space. Instead, those who get there first will contend that ownership should be on the principle of whoever grabs it first and can keep it is the “rightful” owner.
Those who take this view would probably argue that those who get their equipment into space would have done the work (or put up the money) and hence (as argued above) would be entitled to all they can grab and use or sell. Other people are free to grab what they can, provided that they have access to the resources needed to mine the asteroids. Naturally, the folks who lack the resources to compete will remain poor—their poverty will, in fact, disqualify them from owning any of the space resources much in the way poverty disqualifies people on earth from owning earth resources.
While the selfish approach is certainly appealing, arguments can be made for sharing asteroid resources. One reason is that those who will mine the asteroids did not create the means to do so from nothing on their own. Reaching the asteroids will be the result of centuries of human civilization that made such technology possible. As such, there would seem to be a general debt owed to human civilization and paying this off would involve also contributing to the general good of humanity. Naturally, this line of reasoning can be countered by arguing that the successful miners will benefit humanity when their profits “trickle down” from space.
Another way to argue for sharing the resources is to use an analogy to a buffet line. Suppose I am first in line at a buffet. This does not give me the right to devour everything I can with no regard for the people behind me. It also does not give me the right to grab whatever I cannot eat myself in order to sell it to those who had the misfortune to be behind me in line. As such, these resources should be treated in a similar manner, namely fairly and with some concern for those who are behind the first people in line.
Naturally, these arguments for sharing can be countered by the usual arguments in favor of selfishness. While it is tempting to think that the vastness of space will overcome selfishness (that is, there will be so much that people will realize that not sharing would be absurd and petty), this seems unlikely—the more there is, the greater the disparity between those who have and those who have not. On this pessimistic view we already have all the moral and legal tools we need for space—it is just a matter of changing the wording a bit to include “space.”
Although I like science fiction, I did not see Interstellar until fairly recently—although time is such a subjective sort of thing. One reason I decided to see it is because some have claimed that the movie should be shown in science classes, presumably to help the kids learn science. Because of this, I expected to see a science fiction movie. Since I write science fiction, horror and fantasy stuff, it should not be surprising that I get a bit obsessive about genre classifications. Since I am a professor, it should also not be surprising that I have an interest in teaching methods. As such, I will be considering Interstellar in regards to both genre classifications and its education value in the context of science. There will be spoilers—so if you have not seen it, you might wish to hold off reading this essay.
While there have been numerous attempts to distinguish between science and fantasy, Roger Zelazny presents one of the most brilliant and concise accounts in a dialogue between Yama and Tak in Lord of Light. Tak has inquired of Yama about whether a creature, a Rakshasa, he has seen is a demon or not. Yama responds by saying, “If by ‘demon’ you mean a malefic, supernatural creature, possessed of great powers, life span and the ability to temporarily assume any shape — then the answer is no. This is the generally accepted definition, but it is untrue in one respect. … It is not a supernatural creature.”
Tak, not surprisingly, does not see the importance of this single untruth in the definition. Yama replies with “Ah, but it makes a great deal of difference, you see. It is the difference between the unknown and the unknowable, between science and fantasy — it is a matter of essence. The four points of the compass be logic, knowledge, wisdom, and the unknown. Some do bow in that final direction. Others advance upon it. To bow before the one is to lose sight of the three. I may submit to the unknown, but never to the unknowable.”
In Lord of Light, the Rakshasa play the role of demons, but they are aliens—the original inhabitants of a world conquered by human colonists. As such, they are natural creatures and fall under the domain of science. While I do not completely agree with Zelazny’s distinction, I find it appealing and reasonable enough to use as the foundation for the following discussion of the movie.
Interstellar initially stays safely within the realm of science fiction, keeping to scientific speculation regarding hypersleep, wormholes and black holes. While the script does take some liberties with the science, this is fine for the obvious reason that this is science fiction and not a science lecture. Interstellar also has the interesting bonus of having contributed to real science regarding the appearance of black holes. That aspect would provide some justification for showing it (or some of it) in a science class.
Another part of the movie that would be suitable for a science class is the scenes in which Murph thinks that her room might be haunted by a ghost. Cooper, her father, urges her to apply the scientific method to the phenomenon. Of course, it might be considered bad parenting for a parent to urge his child to study what might be a dangerous phenomenon in her room. Cooper also instantly dismisses the ghost hypothesis—which can be seen as anything from very scientific (since there has been no evidence of ghosts) to not very scientific (since this might be evidence of ghosts).
The story does include the point that the local school is denying that the moon-landings really occurred and the official textbooks support this view. Murph is punished at school for arguing that the moon landings did occur and is rewarded by Cooper. This does make a point about science denial and could thus be of use in the classroom.
Rather ironically, the story presents its own conspiracies and casts two of the main scientists (Brand and Mann) as liars. Brand lies about his failed equation for “good” reasons—to keep people working on a project that has a chance and to keep morale up. Mann lies about the habitability of his world because, despite being built up in the story as the best of the scientists, he cannot take the strain of being alone. As such, the movie sends a mixed-message about conspiracies and lying scientists. While learning that some people are liars has value, this does not add to the movie’s value as a science class film. Now, to get back to the science.
The science core of the movie, however, focuses on holes: the wormhole and the black hole. As noted above, the movie does stick within the realm of speculative science in regards to the wormhole and the black hole—at least until near the end of the movie.
It turns out that all that is needed to fix Brand’s equation is data from inside a black hole. Conveniently, one is present. Also conveniently, Cooper and the cool robot TARS end up piloting their ships into the black hole as part of the plan to save Brand. It is at this point that the movie moves from science to fantasy.
Cooper and TARS manage to survive being dragged into the black hole, which might be scientifically fine. However, they are then rescued by the mysterious “they” (whoever created the wormhole and sent messages to NASA).
Cooper is transported into a tesseract or something. The way it works in the movie is that Cooper is floating “in” what seems to be a massive structure. In “reality” it is a nifty blend of time and space—he can see and interact with all the temporal slices that occurred in Murph’s room. Crudely put, it allows him to move in time as if it were space, while it also remains, in some sense, space. While this is rather weird, it is still within the realm of speculative science fiction.
Cooper is somehow able to interact with the room using weird movie plot rules—he can knock books off the shelves in a Morse code pattern, he can precisely change local gravity to provide the location of the NASA base in binary, and finally he can manipulate the hand of the watch he gave his daughter to convey the data needed to complete the equation. Weirdly, he cannot just manipulate a pen or pencil to just write things out. But, movie. While a bit absurd, this is still science fiction.
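As an aside, the encodings the scene leans on are real and simple ones: Morse code for short text and binary for numbers. A minimal sketch (the Morse table below is truncated to just the letters needed for the example, and the sample values are illustrative, not from the film's script):

```python
# Two simple encodings: Morse code for text (the knocked books) and
# binary for numeric data (the gravity-encoded coordinates).
# The Morse table is truncated to the letters used in the example.
MORSE = {'S': '...', 'T': '-', 'A': '.-', 'Y': '-.--'}

def to_morse(text: str) -> str:
    # Encode each letter; letters are separated by single spaces.
    return ' '.join(MORSE[c] for c in text.upper())

def to_binary(n: int) -> str:
    # Encode a non-negative integer as a plain bit string.
    return format(n, 'b')

print(to_morse("STAY"))  # the word Cooper spells with the books: ... - .- -.--
print(to_binary(42))     # an illustrative coordinate value: 101010
```

The point stands either way: if you can reliably push books and twitch watch hands, you have a channel that could just as easily carry written sentences.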
The main problem lies with the way Cooper solves the problem of locating Murph at the right time. While at this point I would have bought the idea that he figured out the time scale of the room and could rapidly check it, the story has Cooper navigate through the vast time room using love as a “force” that can transcend time. While it is possible that Cooper is wrong about what he is really doing, the movie certainly presents it as if this love force is what serves as his temporal positioning system.
While love is a great thing, there are no even remotely scientific theories that provide a foundation for love having the qualities needed to enable such temporal navigation. There is, of course, scientific research into love and other emotions. The best of current love science indicates that love is a “mechanical” phenomenon (in the philosophical sense) and there is nothing to even suggest that it provides what amounts to supernatural abilities.
It would, of course, be fine to have Cooper keep on trying because he loves his children—love does that. But making love into some sort of trans-dimensional force is clearly fantasy rather than science and certainly not suitable for a science lesson (well, other than to show what is not science).
One last concern I have with using the movie in a science class is the use of what seem to be super beings. While the audience learns little of the beings, the movie does assert to the audience that these beings can obviously manipulate time and space. They create the wormhole, they pull Cooper and TARS from a black hole, they send Cooper back in time and enable him to communicate in stupid ways, and so on. The movie also tells the audience the beings are probably future humans (or what humanity becomes) and that they can “see” all of time. While the movie does not mention this, this is how St. Augustine saw God—He is outside of time. They are also clearly rather benign and demonstrate that they do care about individuals—they save Cooper and TARS. Of course, they also let many people die needlessly.
Given these qualities, it is easy to see these beings (or being) as playing the role of God or even being God—a super powerful, sometimes benign being, that has incredible power over time and space. Yet is fine with letting lots of people die needlessly while miraculously saving a person or two.
Given the wormhole, it is easy to compare this movie to Star Trek: Deep Space Nine. That show had a wormhole populated by powerful beings that existed outside of our normal dimensions. To the people of Bajor, these beings were divine and supernatural Prophets. To Star Fleet, they were the wormhole aliens. While Star Trek is supposed to be science fiction, some episodes involving the Prophets did blur the line into fantasy, perhaps intentionally.
Getting back to Interstellar, it could be argued that the mysterious “they” are like the Rakshasa of Lord of Light in that they (or whatever) have many of the attributes of God, but are not supernatural beings. Being fiction, this could be set by fiat—but this does raise the boundary question. To be specific, does declaring that a being with what appear to be the usual supernatural powers is not supernatural make the story science fiction rather than fantasy? Answering this requires working out a proper theory of the boundary, which goes beyond the scope of this essay. However, I will note that having the day saved by the intervention of mysterious and almost divinely powerful beings does not seem to make the movie suitable for a science class. Rather, it makes the movie seem to be more of a fantasy story masquerading as science fiction.
My overall view is that showing parts of Interstellar, specifically the science parts, could be fine for a science class. However, the movie as a whole is more fantasy than science fiction.
The United States has approved the first 3D-printed drug, Spritam (levetiracetam). This drug is intended to control epilepsy and is an early step on the road to highly customized printed pharmaceuticals.
Since there are already well-established methods of manufacturing pills, it might be wondered what 3D printing brings to the process—other than the obvious fact that 3D printing is hot and great for hype. Fortunately, there is more here than just hype. One advantage of 3D drugs is that specific doses can be custom printed for the patient rather than relying on standard doses—which can easily be too much or too little for the individual.
A second advantage is that custom “mixes” of medication can be easily printed, thus reducing the number of pills a person needs to take. This makes it easier for the patient and caregivers to manage the regimen of medications. For example, a person might only need two custom pills per day rather than six.
A third advantage is that customized shapes can be created for pills. These shapes are not to make the pills look cool (though I am sure that creating cool pill shapes will become a thing). The intent is to change the surface area relative to the pill volume and thus control the time it takes for the drug to be released in the body.
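The release-rate point rests on a bit of geometry: among shapes of equal volume, a sphere has the least surface area, so a flatter pill of the same volume exposes more surface and should dissolve faster. A minimal sketch of the comparison (the pill dimensions are my own illustrative numbers, not anything from an actual product):

```python
import math

def sphere_sa_to_vol(volume):
    """Surface-area-to-volume ratio of a sphere with the given volume."""
    r = (3 * volume / (4 * math.pi)) ** (1 / 3)
    return (4 * math.pi * r ** 2) / volume

def disk_sa_to_vol(volume, height):
    """Surface-area-to-volume ratio of a flat cylinder (disk) of the same volume."""
    r = math.sqrt(volume / (math.pi * height))
    area = 2 * math.pi * r ** 2 + 2 * math.pi * r * height  # two faces plus the rim
    return area / volume

v = 500.0  # pill volume in cubic millimetres (illustrative)
print(f"sphere SA/V:    {sphere_sa_to_vol(v):.3f} per mm")
print(f"thin disk SA/V: {disk_sa_to_vol(v, height=2.0):.3f} per mm")
```

Running this shows the thin disk has roughly double the sphere’s surface-area-to-volume ratio, which is the lever a printer could pull to tune how quickly a dose is released.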
While pills with customized doses and shapes will have an important impact on medicine, what will have a far greater impact is the use of specialized 3D printers that can function as automated chemistry sets. The idea is that just as users of normal 3D printers can download custom designs and print them, users of the chemistry printers would be able to download designs for drugs and print them at home. In short, it would bring small-scale chemical creation to the home.
Boringly enough, I made up just such a device in a Traveller role playing game campaign I ran years ago. The players had “acquired” a ship and were pleased that it had an autodoc (a robotic doctor that looks a bit like a tanning booth) since they, like all space adventurers, had a tendency to accumulate laser burns and alien parasites. One of the players inquired if there was also a machine for making drugs, so I made up the autopharm on the spot: it would dispense pharmaceuticals like a bartender bot dispensed booze. Like all game masters, I like to encourage players to use things in dangerous ways—especially if I can also make up a random chart to roll for effects.
As expected, the players quickly worked out good and (mostly) bad uses for their autopharm. I am confident that people will do the same in the real world. On the good side, an autopharm can allow the user to create highly personalized medicines in terms of dose, composition, and release time. Assuming that the machine worked reasonably quickly, it would also allow the user to acquire drugs rapidly, perhaps even during a medical emergency. Since the device would “mix” the chemicals, users would not need to stock up on specific medications—just raw materials that could be used to create a variety of drugs.
The players did use the device in these good ways, creating medicines to deal with the specific nasty things they encountered or picked up on alien worlds. Responsible people in the real world will certainly use their real autopharms in this way—to create legitimate medicines in accord with the law and their legitimate prescriptions.
On the bad side, the players quickly realized their autopharm could be used to make dangerous substances (“hey, we can synthesize Ceti spider venom and use that in our needlers!”) and, obviously enough, recreational drugs (“dudes, we can make plutonian nyborg!”). While real autopharms will probably be equipped with “safety” features and heavily regulated, people will rather quickly figure out how to overcome these obstacles and use the autopharms to generate recreational drugs. Since the “legitimate” pharmaceutical industry has developed some of the most popular recreational drugs, users will probably stick with such recipes—though more enterprising folks will try creating their own recipes (expect fatalities). As always, an arms race between those trying to ensure the autopharms are used properly and those who want to “misuse” them will occur.
While people have been mixing their own recreational drugs for quite some time, an autopharm would make it much easier to create these drugs. Making high grade pain killers could be as simple as downloading a recipe and pushing the “print” button. On the plus side, this could increase the purity and quality of the drugs, thus reducing the number of people getting sick or dying from contaminated drugs. It could also change the nature of drug crime: instead of murderous cartels, each person could be his own supplier, thus reducing drug violence considerably.
On the minus side, this could make powerful drugs readily available at low costs—an exponential version of the bathtub gin of prohibition. There is also the worry that people will unintentionally create toxic mixes or drugs with awful side-effects. While autopharms will probably have some safety features that would include a list of known poisons, some users will certainly override these features and there will be many harmful substances that will not be on the lists.
Another point of concern is that autopharms will inevitably be connected to the internet and hackers will target them—either out of malice or as a form of prank (which might end up being a fatal prank). Having someone hack your PC can be a serious problem. Having someone hack the autopharm that prints all your medicine could be a fatal problem.
As part of my critical thinking class, I cover the usual topics of credibility and experiments/studies. Since people often find critical thinking a dull subject, I regularly look for real-world examples that might be marginally interesting to students. As such, I was intrigued by John Bohannon’s detailed account of how he “fooled millions into thinking chocolate helps weight loss.”
Bohannon’s con provides an excellent cautionary tale for critical thinkers. First, he lays out in detail how easy it is to rig an experiment to get (apparently) significant results. As I point out to my students, a small experiment or study can generate results that seem significant, but really are not. This is why it is important to have an adequate sample size—as a starter. What is also needed is proper control, proper selection of the groups, and so on.
Second, he provides a clear example of a disgraceful stain on academic publishing, namely “pay to publish” journals that do not engage in legitimate peer review. While some bad science does slip through peer review, these journals apparently publish almost anything—provided that the fee is paid. Since the journals have reputable sounding names and most people do not know which journals are credible and which are not, it is rather easy to generate a credible seeming journal publication. This is why I cover the importance of checking sources in my class.
Third, he details how various news outlets published or posted the story without making even perfunctory efforts to check its credibility. Not surprisingly, I also cover the media in my class both from the standpoint of being a journalist and being a consumer of news. I stress the importance of confirming credibility before accepting claims—especially when doing so is one’s job.
While Bohannon’s con does provide clear evidence of problems in regards to corrupt journals, uncritical reporting and consumer credulity, the situation does raise some points worth considering. One is that while he might have “fooled millions” of people, he seems to have fooled relatively few journalists (13 out of about 5,000 reporters who subscribe to the Newswise feed Bohannon used) and these seem to be more the likes of the Huffington Post and Cosmopolitan as opposed to what might be regarded as more serious health news sources. While it is not known why the other reporters did not run the story, it is worth considering that some of them did look at it critically and rejected it. In any case, the fact that a small number of reporters fell for a dubious story is hardly shocking. It is, in fact, just what would be expected given the long history of journalism.
Another point of concern is the ethics of engaging in such a con. It is possible to argue that Bohannon acted ethically. One way to do this is to note that using deceit to expose a problem can be justified on utilitarian grounds. For example, it seems morally acceptable for a journalist or police officer to use deceit and go undercover to expose criminal activity. As such, Bohannon could contend that his con was effectively an undercover operation—he and his fellows pretended to be the bad guys to expose a problem and thus his deceit was morally justified by the fact that it exposed problems.
One obvious objection to this is that Bohannon’s deceit did not just expose corrupt journals and incautious reporters. It also misinformed the audience who read or saw the stories. To be fair, the harm would certainly be fairly minimal—at worst, people who believed the story would consume dark chocolate and this is not exactly a health hazard. However, intentionally spreading such misinformation seems morally problematic—especially since story retractions or corrections tend to get far less attention than the original story.
One way to counter this objection is to draw an analogy to the exposure of flaws by hackers. These hackers reveal vulnerabilities in software with the stated intent of forcing companies to address the vulnerabilities. Exposing such vulnerabilities can do some harm by informing the bad guys, but the usual argument is that this is outweighed by the good done when the vulnerability is fixed.
While this does have some appeal, there is the concern that the harm done might not outweigh the good done. In Bohannon’s case it could be argued that he has done more harm than good. After all, it is already well-established that “pay to publish” journals are corrupt and that there are incautious journalists and credulous consumers. As such, Bohannon has not exposed anything new—he has merely added more misinformation to the pile.
It could be countered that although these problems are well known, it does help to continue to bring them to the attention of the public. Going back to the analogy of software vulnerabilities, it could be argued that if a vulnerability is exposed, but nothing is done to patch it, then the problem should be brought up until it is fixed, “for it is the doom of men that they forget.” Bohannon has certainly brought these problems into the spotlight and this might do more good than harm. If so, then this con would be morally acceptable—at least on utilitarian grounds.
It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own and these fine lawmakers will make them into laws.
If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.
While the legal aspects will no doubt be fascinating (and expensive) my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?
While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honoré.
Roughly put, this is the “but for” view of causation. X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive) then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.
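The proportionality idea can be put in toy arithmetic form (the helper function and the weights below are my own illustrative inventions, not anything from Hart and Honoré): responsibility is split by normalising each “but for” factor’s rough causal weight.

```python
def responsibility_shares(contributions):
    """Split responsibility in proportion to each factor's causal contribution.

    `contributions` maps each "but for" factor to a rough weight for how large
    a role it played; shares are normalised so they sum to 1. A toy model:
    the weights themselves are a moral judgement, not measurable data.
    """
    total = sum(contributions.values())
    return {factor: weight / total for factor, weight in contributions.items()}

# An example with three jointly necessary factors: a defect, a lack of
# maintenance, and a kick. The weights here are purely illustrative.
shares = responsibility_shares({"defect": 2, "no maintenance": 1, "kick": 1})
print(shares)  # the defect bears half the responsibility under these weights
```

This is only a sketch of the bookkeeping, of course; the hard philosophical work is in assigning the weights, not in the division.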
While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.
The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.
It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.
To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person who lost the foot stupidly kicked the lawnmower and lost a foot, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for his bad code, the person would still have his foot.
As with issues involving non-autonomous machines there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.
Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.
Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person of its own volition. That is, the lawnmower would need to possess a degree of free will.
Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.
Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes (the 1964 original and the 1995 remake) of the Outer Limits, which were based on Eando Binder’s short story of the same name.
The United States recently saw an outbreak of the measles (644 cases in 27 states) with the overwhelming majority of victims being people who had not been vaccinated. Critics of the anti-vaccination movement have pointed to this as clear proof that the movement is not only misinformed but also actually dangerous. Not surprisingly, those who take the anti-vaccination position are often derided as stupid. After all, there is no evidence that vaccines cause the harms that the anti-vaccination people refer to when justifying their position. For example, one common claim is that vaccines cause autism, but this seems to be clearly untrue. There is also the fact that vaccinations have been rather conclusively shown to prevent diseases (though not perfectly, of course).
It is, of course, tempting for those who disagree with the anti-vaccination people to dismiss them uniformly as stupid people who lack the brains to understand science. This, however, is a mistake. One reason it is a mistake is purely pragmatic: those who are pro-vaccination want the anti-vaccination people to change their minds and calling them stupid, mocking and insulting them will merely cause them to entrench. Another reason it is a mistake is that the anti-vaccination people are not, in general, stupid. There are, in fact, grounds for people to be skeptical or concerned about matters of health and science. To show this, I will briefly present some points of concern.
One point of rational concern is the fact that scientific research has been plagued with a disturbing amount of corruption, fraud and errors. For example, the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those driving the pushing of antioxidants and omega-3, have been shown to be riddled with inaccuracies. As such, it is hardly stupid to be concerned that scientific research might not be accurate. Somewhat ironically, the study that started the belief that vaccines cause autism is a paradigm example of bad science. However, it is not stupid to consider that the studies that show vaccines are safe might have flaws as well.
Another matter of concern is the influence of corporate lobbyists on matters relating to health. For example, the dietary guidelines and recommendations set forth by the United States Government should be set on the basis of the best science. However, the reality is that these matters are influenced quite strongly by industry lobbyists, such as the dairy industry. Given the influence of the corporate lobbyists, it is not foolish to think that the recommendations and guidelines given by the state might not be quite right.
A third point of concern is the fact that the dietary and health guidelines and recommendations undergo what seems to be relentless and unwarranted change. For example, the government has warned us of the dangers of cholesterol for decades, but this recommendation is being changed. It would, of course, be one thing if the changes were the result of steady improvements in knowledge. However, the recommendations often seem to lack a proper foundation. John P.A. Ioannidis, a professor of medicine and statistics at Stanford, has noted “Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome. In this literature of epidemic proportions, how many results are correct?” Given such criticism from experts in the field, it hardly seems stupid of people to have doubts and concerns.
There is also the fact that people do suffer adverse drug reactions that can lead to serious medical issues and even death. While the reported numbers vary (one FDA page puts the number of deaths at 100,000 per year) this is certainly a matter of concern. In an interesting coincidence, I was thinking about this essay while watching the Daily Show on Hulu this morning and one of my “ad experiences” was for Januvia, a diabetes drug. As required by law, the ad mentioned all the side effects of the drug and these include some rather serious things, including death. Given that the FDA has approved drugs with dangerous side effects, it is hardly stupid to be concerned about the potential side effects from any medicine or vaccine.
Given the above points, it would certainly not be stupid to be concerned about vaccines. At this point, the reader might suspect that I am about to defend an anti-vaccine position. I will not—in fact, I am a pro-vaccination person. This might seem somewhat surprising given the points I just made. However, I can rationally reconcile these points with my position on vaccines.
The above points do show that there are rational grounds for taking a general critical and skeptical approach to matters of health, medicine and science. However, this general skepticism needs to be properly rational. That is, it should not be a rejection of science but rather the adoption of a critical approach to these matters in which one considers the best available evidence, assesses experts by the proper standards (those of a good argument from authority), and so on. Also, it is rather important to note that the general skepticism does not automatically justify accepting or rejecting specific claims. For example, the fact that there have been flawed studies does not prove that the specific studies about vaccines are flawed. As another example, the fact that lobbyists influence the dietary recommendations does not prove that vaccines are harmful drugs being pushed on Americans by greedy corporations. As a final example, the fact that some medicines have serious and dangerous side effects does not prove that the measles vaccine is dangerous or causes autism. Just as one should be rationally skeptical about pro-vaccination claims one should also be rationally skeptical about anti-vaccination claims.
To use an obvious analogy, it is rational to have a general skepticism about the honesty and goodness of people. After all, people do lie and there are bad people. However, this general skepticism does not automatically prove that a specific person is dishonest or evil—that is a matter that must be addressed on the individual level.
To use another analogy, it is rational to have a general concern about engineering. After all, there have been plenty of engineering disasters. However, this general concern does not warrant believing that a specific engineering project is defective or that engineering itself is defective. The specific project would need to be examined and engineering is, in general, the most rational approach to building stuff.
So, the people who are anti-vaccine are not, in general, stupid. However, they do seem to be making the mistake of not rationally considering the specific vaccines and the evidence for their safety and efficacy. It is quite rational to be concerned about medicine in general, just as it is rational to be concerned about the honesty of people in general. However, just as one should not infer that a friend is a liar because there are people who lie, one should not infer that a vaccine must be bad because there is bad science and bad medicine.
Convincing anti-vaccination people to accept vaccination is certainly challenging. One reason is that the issue has become politicized into a battle of values and identity. This is partially due to the fact that the anti-vaccine people have been mocked and attacked, thus leading them to entrench and double down. Another reason is that, as argued above, they do have well-founded concerns about the trustworthiness of the state, the accuracy of scientific studies, and the goodness of corporations. A third reason is that people tend to give more weight to the negative and also tend to weigh potential loss more than potential gain. As such, people would tend to give more weight to negative reasons against vaccines and fear the alleged dangers of vaccines more than they would value their benefits.
Given the importance of vaccinations, it is rather critical that the anti-vaccination movement be addressed. Calling people stupid, mocking them and attacking them are certainly not effective ways of convincing people that vaccines are generally safe and effective. A more rational and hopefully more effective approach is to address their legitimate concerns and consider their fears. After all, the goal should be the health of people and not scoring points.
Like most people, I have eaten bugs. Also, like most Americans, this consumption has been unintentional and often in ignorance. In some cases, I’ve sucked in a whole bug while running. In most cases, the bugs have been bug parts in foods—the FDA allows a certain percentage of “debris” in our food and some of that is composed of bugs.
While Americans typically do not willingly and knowingly eat insects, about 2 billion people do and there are about 2,000 species that are known to be edible. As might be guessed, many of the people who eat insects live in developing countries. As the countries develop, people tend to switch away from eating insects. This is hardly surprising—eating meat is generally seen as a sign of status while eating insects typically is not. However, there are excellent reasons to utilize insects on a large scale as a food source for humans and animals. Some of these reasons are practical while others are ethical.
One practical reason to utilize insects as a food source is the efficiency of insects. Ten pounds of feed will yield 4.8 pounds of cricket protein, 4.5 pounds of salmon, 2.2 pounds of chicken, 1.1 pounds of pork, or 0.4 pounds of beef. With an ever-growing human population, increased efficiency will be critical to providing people with enough food.
A second practical reason to utilize insects as a food source is that they require less land to produce protein. For example, it takes 269 square feet to produce a pound of pork protein while it requires only 88 square feet to generate one pound of mealworm protein. Given an ever-expanding population and ever-less available land, this is a strong selling point for insect farming as a food source. It is also morally relevant, at least for those who are concerned about the environmental impact of food production.
A third reason, which might be rejected by those who deny climate change, is that producing insect protein generates less greenhouse gas. The above-mentioned pound of pork generates 38 pounds of CO2 while a pound of mealworms produces only 14. For those who believe that CO2 production is a problem, this is clearly both a moral and practical reason in favor of using insects for food. For those who think that CO2 has no impact or does not matter, this would be no advantage.
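For the numerically inclined, the feed-efficiency figures above can be turned around to show how much feed each pound of protein costs. A quick sketch, using only the numbers quoted in this essay (the function name is my own):

```python
# Pounds of protein produced per 10 pounds of feed, as quoted above.
protein_per_10lb_feed = {
    "cricket": 4.8, "salmon": 4.5, "chicken": 2.2, "pork": 1.1, "beef": 0.4,
}

def feed_needed(source, protein_lbs):
    """Pounds of feed needed to produce the given pounds of protein."""
    return protein_lbs * 10 / protein_per_10lb_feed[source]

for source in ("cricket", "pork", "beef"):
    print(f"{source}: {feed_needed(source, 1.0):.1f} lb of feed per lb of protein")
```

On these figures, a pound of cricket protein costs about 2 pounds of feed while a pound of beef protein costs 25, which is the efficiency gap the practical argument turns on.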
A fourth practical reason is that while many food animals are fed using food that humans could also eat (such as grain- and corn-based feed), many insects readily consume organic waste that is unfit for human consumption. As such, insects can transform low-value material (such as garbage) into higher-value feed or food. This would also provide a moral reason, at least for those who favor reducing the waste that ends up in landfills. This could provide some interesting business opportunities and combinations: imagine a waste processing business that “processes” organic waste with insects and then converts the insects into feed, food, or other products (such as medicine, lipstick, and alcoholic beverages).
Perhaps the main moral argument in favor of choosing insect protein over protein from animals such as chickens, pigs, and cows is based on the assumption that insects have a lower moral status than such animals, or at least would suffer less.
In terms of the lower status version, the argument would be a variation on one commonly used to support vegetarianism over eating meat: plants have a lower moral status than animals; therefore, it is preferable to eat plants rather than animals. Assuming that insects have a lower moral status than chickens, pigs, cows, and the like, using insects for food would be morally preferable. This, of course, also rests on the assumption that it is preferable to do wrong (in this case, to kill and eat) to beings with a lesser moral status than to those with a higher status.
In terms of the suffering argument, this would be a stock utilitarian-style argument. The usual calculation involves weighing the harms (in this case, the suffering) against the benefits. Insects are, on the face of it, less able to suffer (and less able to understand their own suffering) than animals like pigs and cows. Also, insects would seem to suffer less under the conditions in which they would be raised. While chickens might be factory farmed with their beaks clipped and confined to tiny cages, mealworms would be doing pretty much what they would do in the “wild” when being raised as food. While the insects would still be killed, it would seem that the overall suffering generated by using insects as food would be far less than that created by using animals like pigs and cows as food. This would seem to be a morally compelling argument.
The most obvious problem with using insects as food is what people call the “yuck factor.” Bugs are generally seen as dirty and gross—things that you do not want to find in food, let alone being the food. Some of the “yuck” is visual—seeing the insect as one eats it. One obvious solution is to process insects into forms that look like “normal” foods, such as powders, pastes, and the classic “mystery meat patty.” People can also learn to overcome the distaste, much as some people have to overcome their initial rejection of foods like lobster and crab.
Another concern is that insects might bear the stigma of being a food suitable for “primitive” cultures and unsuitable for “civilized” people. Insect-based food products might also be regarded as lacking in status, especially in contrast with traditional meats. These are, of course, all matters of social perception. Just as they are created, they can be altered. As such, these problems could be overcome.
Since I grew up eating lobsters and crabs (I’m from Maine), I am already fine with eating “bug-like” creatures. So, I would not have any problem with eating actual bugs, provided that they are safe to eat. I will admit that I probably will not be serving up plates of fried beetles to my friends, but I would have no problem serving up food containing properly processed insects. And not just because it would be, at least initially, funny.