One of the classic moral problems is the issue of whether or not we have moral obligations to people we do not know. If we do have such obligations, then there are also questions about the foundation, nature and extent of these obligations. If we do not have such obligations, then there is the obvious question about why there are no such obligations. I will start by considering some stock arguments regarding our obligations to others.
One approach to the matter of moral obligations to others is to ground them on religion. This requires two main steps. The first is establishing that the religion imposes such obligations. The second is making the transition from the realm of religion to the domain of ethics.
Many religions do impose such obligations on their followers. For example, John 15:12 conveys God’s command: “This is my commandment, That you love one another, as I have loved you.” If love involves obligations (which it seems to), then this would certainly seem to place us under these obligations. Other faiths also include injunctions to assist others.
In terms of transitioning from religion to ethics, one easy way is to appeal to divine command theory—the moral theory that what God commands is right because He commands it. This does raise the classic Euthyphro problem: is something good because God commands it, or is it commanded because it is good? If the former, goodness seems arbitrary. If the latter, then morality would be independent of God and divine command theory would be false.
Using religion as the basis for moral obligation is also problematic because doing so would require proving that the religion is correct—this would be no easy task. There is also the practical problem that people differ in their faiths and this would make a universal grounding for moral obligations difficult.
Another approach is to argue for moral obligations by using the moral method of reversing the situation. This method is based on the Golden Rule (“do unto others as you would have them do unto you”) and the basic idea is that consistency requires that a person treat others as she would wish to be treated.
To make the method work, a person would need to want others to act as if they had obligations to her and this would thus obligate the person to act as if she had obligations to them. For example, if I would want someone to help me if I were struck by a car and bleeding out in the street, then consistency would require that I accept the same obligation on my part. That is, if I accept that I should be helped, then consistency requires that I must accept I should help others.
This approach is somewhat like that taken by Immanuel Kant. He argues that because a person necessarily regards herself as an end (and not just a means to an end), then she must also regard others as ends and not merely as means. He endeavors to use this to argue in favor of various obligations and duties, such as helping others in need.
There are, unfortunately, at least two counters to this sort of approach. The first is that it is easy enough to imagine a person who is willing to forgo the assistance of others and as such can consistently refuse to accept obligations to others. So, for example, a person might be willing to starve rather than accept assistance from other people. While such people might seem a bit crazy, if they are sincere then they cannot be accused of inconsistency.
The second is that a person can argue that there is a relevant difference between himself and others that would justify their obligations to him while freeing him from obligations to them. For example, a person of a high social or economic class might assert that her status obligates people of lesser classes while freeing her from any obligations to them. Naturally, the person must provide reasons in support of this alleged relevant difference.
A third approach is to present a utilitarian argument. For a utilitarian, like John Stuart Mill, morality is assessed in terms of consequences: the correct action is the one that creates the greatest utility (typically happiness) for the greatest number. A utilitarian argument for obligations to people we do not know would be rather straightforward. The first step would be to estimate the utility generated by accepting a specific obligation to people we do not know, such as rendering aid to an intoxicated person who is about to become the victim of sexual assault. The second step is to estimate the disutility generated by imposing that specific obligation. The third step is to weigh the utility against the disutility. If the utility is greater, then such an obligation should be imposed. If the disutility is greater, then it should not.
This approach, obviously enough, rests on the acceptance of utilitarianism. There are numerous arguments against this moral theory and these can be employed against attempts to ground obligations on utility. Even for those who accept utilitarianism, there is the open possibility that there will always be greater utility in not imposing obligations, thus undermining the claim that we have obligations to others.
A fourth approach is to consider the matter in terms of rational self-interest and operate from the assumption that people should act in their self-interest. In terms of a moral theory, this would be ethical egoism: the moral theory that a person should act in her self-interest rather than acting in an altruistic manner.
While accepting that others have obligations to me would certainly be in my self-interest, it initially appears that accepting obligations to others would be contrary to my self-interest. That is, I would be best served if others did unto me as I would like to be done unto, but I was free to do unto them as I wished. If I could get away with this sort of thing, it would be ideal (assuming that I am selfish). However, as a matter of fact people tend to notice and respond negatively to a lack of reciprocation. So, if having others accept that they have some obligations to me were in my self-interest, then it would seem that it would be in my self-interest to pay the price for such obligations by accepting obligations to them.
For those who like evolutionary just-so stories in the context of providing foundations for ethics, the tale is easy to tell: those who accept obligations to others would be more successful than those who do not.
The stock counter to the self-interest argument is the problem of Glaucon’s unjust man and Hume’s sensible knave. While it certainly seems rational to accept obligations to others in return for getting them to accept similar obligations, it seems preferable to exploit their acceptance of obligations while avoiding one’s supposed obligations to others whenever possible. Assuming that a person should act in accord with self-interest, then this is what a person should do.
It can be argued that this approach would be self-defeating: if people exploited others without reciprocation, the system of obligations would eventually fall apart. As such, each person has an interest in ensuring that others hold to their obligations. Humans do, in fact, seem to act this way—those who fail in their obligations often get a bad reputation and are distrusted. From a purely practical standpoint, acting as if one has obligations to others would thus seem to be in a person’s self-interest because the benefits would generally outweigh the costs.
The counter to this is that each person still has an interest in avoiding the cost of fulfilling obligations and there are various practical ways to do this by the use of deceit, power and such. As such, a classic moral question arises once again: why act on your alleged obligations if you can get away with not doing so? Aside from the practical reply given above, there seems to be no answer from self-interest.
A fifth option is to look at obligations to others as a matter of debts. A person is born into an established human civilization built on thousands of years of human effort. Since each person arrives as a helpless infant, each person’s survival is dependent on others. As the person grows up, she also depends on the efforts of countless other people she does not know. These include the soldiers who defend her society, the people who maintain the infrastructure, the firefighters who keep fire from sweeping away the town or city, the taxpayers who pay for all this, and so on for all the many others who make human civilization possible. As such, each member of civilization owes a considerable debt to those who have come before and those who are here now.
If debt imposes an obligation, then each person who did not arise ex nihilo owes a debt to those who have made and continue to make her survival and existence in society possible. At the very least, the person is obligated to make contributions to continue human civilization as a repayment to these others.
One objection to this is for a person to claim that she owes no such debt because her special status obligates others to provide all this for her with nothing owed in return. The obvious challenge is for a person to prove such an exalted status.
Another objection is for a person to claim that all this is a gift that requires no repayment on the part of anyone and hence does not impose any obligation. The challenge is, of course, to prove this implausible claim.
A final option I will consider is that offered by virtue theory. Virtue theory, famously presented by thinkers like Aristotle and Confucius, holds that people should develop their virtues. These classic virtues include generosity, loyalty and other virtues that involve obligations and duties to others. Confucius explicitly argued in favor of duties and obligations as being key components of virtues.
In terms of why a person should have such virtues and accept such obligations, the standard answer is that being virtuous will make a person happy.
Virtue theory is not without its detractors and the criticisms of the theory can be employed to undercut it, thus undermining its role in arguing that we have obligations to people we do not know.
The recent resignation of Eric Shinseki from his position as head of the Department of Veterans Affairs raised, once again, the issue of the responsibilities of a leader. While I will not address the specific case of Shinseki, I will use this opportunity to discuss leadership and responsibility in general terms.
Not surprisingly, people often assign responsibility based on ideology. For example, Democrats would be more inclined to regard a Republican leader as being fully responsible for his subordinates while being more forgiving of fellow Democrats. However, judging responsibility based on political ideology is obviously a poor method of assessment. What is needed, obviously enough, is a set of general principles that can be used to assess the responsibility of leaders in a consistent manner.
Interestingly (or boringly) enough, I usually approach the matter of leadership and responsibility using an analogy to the problem of evil. Oversimplified quite a bit, the problem of evil is the problem of reconciling God being all good, all knowing and all powerful with the existence of evil in the world. If God is all good, then He would tolerate no evil. If God is all powerful, He could prevent all evil. And if God is all knowing, then He would not be ignorant of any evil. Given God’s absolute perfection, He thus has absolute responsibility as a leader: He knows what every subordinate is doing, knows whether it is good or evil and has the power to prevent or cause any behavior. As such, when a subordinate does evil, God has absolute accountability. After all, the responsibility of a leader is a function of what he can know and the extent of his power.
In stark contrast, a human leader (no matter how awesome) falls rather short of God. Such leaders are clearly not perfectly good and they are obviously not all knowing or all powerful. These imperfections thus lower the responsibility of the leader.
In the case of goodness, no human can be expected to be morally perfect. As such, failures of leadership due to moral imperfection can be excusable—within limits. The challenge is, of course, sorting out the extent to which imperfect humans can legitimately be held morally accountable and to what extent our unavoidable moral imperfections provide a legitimate excuse. These standards should be applied consistently to leaders so as to allow for the highest possible degree of objectivity.
In the case of knowledge, no human can be expected to be omniscient—we have extreme limits on our knowledge. The practical challenge is sorting out what a leader can reasonably be expected to know and the responsibility of the leader should be proportional to that extent of knowledge. This is complicated a bit by the fact that there are at least two factors here, namely the capacity to know and what the leader is obligated to know. Obligations to know should not exceed the human capacity to know, but the capacity to know can often exceed the obligation to know. For example, the President could presumably have everyone spied upon (which is apparently what he did do) and thus could, in theory, know a great deal about his subordinates. However, this would seem to exceed what the President is obligated to know (as President) and probably exceeds what he should know.
Obviously enough, what a leader can know and what she is obligated to know will vary greatly based on the leader’s position and responsibilities. For example, as the facilitator of the philosophy & religion unit at my university, my obligation to know about my colleagues is very limited as is my right to know about them. While I have an obligation to know what courses they are teaching, I do not have an obligation or a right to know about their personal lives or whether they are doing their work properly on outside committees. So, if a faculty member skipped out on committee meetings, I would not be responsible for this—it is not something I am obligated to know about.
As another example, the chair of the department has greater obligations and rights in this regard. He has the right and obligation to know if they are teaching their classes, doing their assigned work and so on. Thus, when assessing the responsibility of a leader, sorting out what the leader could know and what she was obligated to know are rather important matters.
In regards to power (taken in a general sense), even the most despotic dictator’s powers are still finite. As such, it is reasonable to consider the extent to which a leader can exercise her authority or expend her power to compel subordinates to obey. As with knowledge, responsibility is proportional to power. After all, if a leader lacks the power (or authority) to compel obedience in regards to certain matters, then the leader cannot be accountable for not making the subordinates do or not do certain actions. Using myself as an example, my facilitator position has no power: I cannot demote, fire, reprimand or even put a mean letter into a person’s permanent record. The extent of my influence is limited to my ability to persuade—with no rewards or punishments to offer. As such, my responsibility for the actions of my colleagues is extremely limited.
There are, however, legitimate concerns about the ability of a leader to make people behave correctly and this raises the question of the degree to which a leader is responsible for not being persuasive enough or not using enough power to make people behave. That is, the concern is whether bad behavior that persists despite applied authority or power is the fault of the leader or the fault of the resister. This is similar to the concern about the extent to which responsibility for failing to learn falls upon the teacher and the extent to which it falls on the student. Obviously, even the best teacher cannot reach all students and it would seem reasonable to believe that even the best leader cannot make everyone do what they should be doing.
Thus, when assessing alleged failures of leadership it is important to determine where the failures lie (morality, knowledge or power) and the extent to which the leader has failed. Obviously, principled standards should be applied consistently—though it can be sorely tempting to damn the other guy while forgiving the offenses of one’s own guy.
In March of 2014 popular astrophysicist and Cosmos host Neil deGrasse Tyson did a Nerdist Podcast. This did not garner much attention until May when some philosophers realized that Tyson was rather critical and dismissive of philosophy. As might be imagined, there was a response from the defenders of philosophy. Some critics went so far as to accuse him of being a philistine.
Tyson presents a not uncommon view of contemporary philosophy, namely that “asking deep questions” can cause a “pointless delay in your progress” in engaging “this whole big world of unknowns out there.” To avoid such pointless delays, Tyson advises scientists to respond to such questioners by saying, “I’m moving on, I’m leaving you behind, and you can’t even cross the street because you’re distracted by deep questions you’ve asked of yourself. I don’t have time for that.”
Since Tyson certainly seems to be a deep question sort of guy, it is tempting to consider that his remarks are not serious—that is, he is being sarcastic. Even if he is serious, it is also reasonable to consider that these remarks are off-the-cuff and might not represent his considered view of philosophy in general.
It is also worth considering that the claims made are his considered and serious position. After all, the idea that a scientist would regard philosophy as useless (or worse) is quite consistent with my own experiences in academia. For example, the politically fueled rise of STEM and the decline of the humanities have led some in STEM to regard this situation as confirmation of their superior status, and on some occasions I have had to defuse conflicts instigated by STEM faculty making their views about the uselessness of non-STEM fields clear.
Whatever the case, the concern that the deep questioning of philosophy can cause pointless delays does actually have some merit and is well worth considering. After all, if philosophy is useless or even detrimental, then this would certainly be worth knowing.
The main bite of this criticism is that philosophical questioning is detrimental to progress: a scientist who gets caught in these deep questions, it seems, would be like a kayaker caught in a strong eddy: she would be spinning around and going nowhere rather than making progress. This concern does have significant practical merit. To use an analogy outside of science, consider a committee meeting aimed at determining the curriculum for state schools. This committee has an objective to achieve and asking questions is a reasonable way to begin. But imagine that people start raising deep questions about the meaning of terms such as “humanities” or “science” and become very interested in sorting out the semantics of various statements. This sort of sidetracking will result in a needlessly long meeting and little or no progress. After all, the goal is to determine the curriculum and deep questions will merely slow down progress towards this practical goal. Likewise, if a scientist is endeavoring to sort out the nature of the cosmos, deep questions can be a similar sort of trap: she will be asking ever deeper questions rather than gathering data and doing math to answer her less deep questions.
Philosophy, as Socrates showed by deploying his Socratic method, can endlessly generate deep questions, such as “what is the nature of the universe?”, “what is time?”, “what is space?” and “what is good?” Also, as Socrates showed, for each answer given, philosophy can generate more questions. It is also often claimed that this shows that philosophy really has no answers, since every alleged answer can be questioned or raises even more questions. Thus, philosophy seems to be rather bad for the scientist.
A key assumption seems to be that science is different from philosophy in at least one key way—while it raises questions, proper science focuses on questions that can be answered or, at the very least, gets down to the business of answering them and (eventually) abandons a question should it turn out to be a distracting deep question. Thus, science provides answers and makes progress. This, obviously enough, ties into another stock criticism of philosophy: philosophy makes no progress and is useless.
One rather obvious reason that philosophy is regarded as not making progress and as being useless is that when enough progress is made on a deep question, it is perceived as being a matter for science rather than philosophy. For example, ancient Greek philosophers, such as Democritus, speculated about the composition of the universe and its size (was it finite or infinite?) and these were considered deep philosophical questions. Even Newton considered himself a natural philosopher. He has, of course, been claimed by the scientists (many of whom conveniently overlook the role of God in his theories). These questions are now claimed by physicists, such as Tyson, who regard them as scientific rather than philosophical questions.
Thus, it is rather unfair to claim that philosophy does not solve problems or make progress—since when excellent progress is made, the discipline is labeled as science and no longer considered philosophy. However, the progress would have obviously been impossible without the deep questions that set people in search of answers and the work done by philosophers before the field was claimed as a science. To use an analogy, to claim that philosophy has made no progress or contributions would be on par with a student taking the work done by another, adding to it and then claiming the whole as his own work and deriding the other student as “useless.”
At this point, some might be willing to grudgingly concede that philosophy did make some valuable contributions (perhaps on par with how the workers who dragged the marble for Michelangelo’s David contributed) in the past, but philosophy is now an eddy rather than the current of progress.
Interestingly enough, philosophy has been here before—back in the days of Socrates the Sophists contended that philosophical speculation was valueless and that people should focus on getting things done—that is, achieving success. Fortunately for contemporary science, philosophy survived and philosophers kept asking those deep questions that seemed so valueless then.
While philosophy’s day might be done, it seems worth considering that some of the deep, distracting philosophical questions being asked are well worth pursuing—if only because they might lead to great things, much as Democritus’ deep questions led to the astrophysics that a fellow named Neil loves so much.
While watching news clips about the debate over cutting the SNAP program (more commonly known as food stamps), I saw Florida Republican Steve Southerland say “work is a blessing.” As he sees it, there should be a work requirement for people to be eligible for food stamps. This claim is certainly an interesting one.
In the United States, there is an entire mythology devoted to the notion of the blessings and value of work. The largest roots dig deep into the stereotypes of the Puritans: dour white folks dressed in penguin colors who scorned play and lived to work and pray. Or so the myths go. The mythology of Calvinism also contributed to this notion: the idea that people are predestined for heaven or hell—though the final destination could be discerned, perhaps, from the worldly success of the individual.
Interestingly, the mythology of work seems to have begun with the expulsion of Adam and Eve from the garden. On a not unreasonable interpretation of the text, God punishes man with a curse that will require him to work to survive: “Cursed is the ground because of you; In toil you will eat of it All the days of your life.” On this view, work is not a blessing, but a curse.
The mythology of capitalism, at least that which is distinct from the mythology of religion, also praises hard work and would seem to cast it as a blessing. This makes sense: the capitalist needs the workers to work hard for him so that they generate his profits. For the capitalist, the work of others is indeed a blessing. For him. Not surprisingly, those critical of the excesses of capitalism have contended that such work is not a blessing for the workers—especially children and those who toil in horrible conditions for pittances.
Since Southerland simply threw out the claim that work is a blessing, he presumably has not given the matter much thought—at least in terms of properly defining work and sorting out what sorts of work (if any) are a blessing. There is also the question of what a blessing is. Perhaps he means that in today’s economic system, it is a blessing to be able to find a decent job. If so, I would agree that he is right. However, his intent seems to be that working itself has a special sort of value.
I would agree that working can have extrinsic value. After all, work is mainly aimed at achieving some end and usually there are other ends beyond that. For example, a person might work to assemble iPads in order to get money in order to buy food and pay the rent so as to avoid starving or dying of exposure. That, I suppose, could be seen as a rough sort of blessing. However, this sort of work seems to lack intrinsic value. That is, it is not something valuable in and of itself. After all, we do such work only because the alternative is worse. Few, if any, people would work most jobs if necessity and need did not drive them to do so, like a whip drives a mule.
I will even agree that work can be good for a person. After all, people seem to grow bored and discontent when they do not have appealing work to perform. Also, as my mother was fond of saying in my childhood, work can build character. She is obviously right—I turned out to be quite a character. However, not all work is of the sort that is good for a person. Working a crushing and demeaning job is work, yet obviously not a blessing for the person. Unless, of course, the alternative is worse.
I even accept that it is good for a person to earn his daily bread, at least when that earning is not destroying the person. After all, it is a matter of integrity to not simply receive but to earn. And even more so to give to those who are in need. Of course, I think a person could have the same or more integrity by living a life of value—and this need not be a life of what would be considered work. Which returns me to the matter of sorting out what is meant by “work.”
People use “work” in many ways, ranging from the toiling of slaves in the field to the creative acts of a free artist to running around a track (speed work). As such, the usual usage slams and jams together horrible things and pleasant things, torments and joys, evils and goods. As such, it is rather hard to say that work is a blessing, given the incredible scope of the term. I would agree that some things that are called work are a blessing. I regard working out as a blessing—it is a gift indeed. I also regard much of my work, mainly teaching and writing, as blessings. However, this might be because, in a way, I do not see these things as work.
After all, work seems to be what is done from necessity in order to achieve some practical end (like not dying of starvation). What is done from choice because of the value of the activity itself seems to be another matter. Looked at this way, a workout is both a necessity and a valued choice: I need to do running work because it is necessary to be a runner. But, I also value running in and of itself—it is a choice I make for the sake of what I am choosing, not just to achieve some other end.
One of the grotesque failings of our civilization is that so many people have to engage in work of the onerous sort: grinding away the hours just to survive and seeing little value in what they do. Those who benefit from this often believe that this is a good thing for them, but they hold to a deranged set of values in which the accumulation of profit is seen as the highest good.
I am, obviously enough, borrowing heavily from Aristotle: the life of wealth and accumulation of wealth is not the proper function of man. Rather, it is the life of virtue and excellence. Sadly, as Wollstonecraft noted, wealth and property are valued more than virtue and poverty is regarded as a worse vice than wickedness.
Work, then, is not really a blessing. At best, it is a necessity.
While it sounds a bit like science fiction, the issue of whether or not human genes can be owned has become a matter of concern. While the legal issue is interesting, my focus will be on the philosophical aspects of the matter. After all, it was once perfectly legal to own human beings—so what is legal is rather different from what is right.
Perhaps the most compelling argument for the ownership of genes is a stock consequentialist argument. If corporations cannot patent and thus profit from genes, then they will have no incentive to engage in expensive genetic research (such as developing tests for specific genes that are linked to cancer). The lack of such research will mean that numerous benefits to individuals and society will not be acquired (such as treatments for specific genetic conditions). As such, not allowing patents on human genes would be wrong.
While this argument does have considerable appeal, it can be countered by another consequentialist argument. If human genes can be patented, then this will allow corporations to take exclusive ownership of these genes, thus allowing them a monopoly. Such patents will allow them to control what research is allowed, even at non-profit institutions such as universities (which sometimes do research for the sake of research), thus restricting the expansion of knowledge and potentially slowing down the development of treatments. This monopoly would also allow the corporation to set the pricing for relevant products or services without any competition. This is likely to result in artificially high prices which could very well deny people needed medical services or products simply because they cannot meet the artificially high prices arising from the lack of competition. As such, allowing patents on human genes would be wrong.
Naturally, this counter argument can be countered. However, the harms of allowing the ownership of human genes would seem to outweigh the benefits—at least when the general good is considered. Obviously, such ownership would be very good for the corporation that owns the patent.
In addition to the moral concerns regarding the consequences, there is also the general matter of whether it is reasonable to regard a gene as something that can be owned. Addressing this properly requires some consideration of the basis of property.
John Locke presents a fairly plausible account of property: a person owns her body and thus her labor. While everything is initially common property, a person makes something her own property by mixing her labor with it. To use a simple example, if Bill and Sally are shipwrecked on an ownerless island and Sally gathers coconuts from the trees and builds a hut for herself, then the coconuts and hut are her property. If Bill wants coconuts or a hut, he’ll have to either do work or ask Sally for access to her property.
On Locke’s account, perhaps researchers could mix their labor with the gene and make it their own. Or perhaps not—I do not, for example, gain ownership of the word “word” in general because I mixed my labor with it by typing it out. I just own the work I have created in particular. That is, I own this essay, not the words making it up.
Sticking with Locke’s account, he also claims that we are owned by God because He created us. Interestingly, for folks who believe that God created the world, it would seem to follow that a corporation cannot own a human gene. After all, God is the creator of the genes and they are thus His property. As such, any attempt to patent a human gene would be an infringement on God’s property rights.
It could be countered that although God created everything, since He allows us to own the stuff He created (like land, gold, and apples), He would be fine with people owning human genes. However, the basis for owning a gene would still seem problematic—it would be a case of someone trying to patent an invention that was invented by another. After all, if God exists, then He invented our genes, so a corporation cannot claim to have invented them. If the corporation claims a right to ownership because it worked hard and spent a lot of money, the obvious reply is that working hard and spending a lot of money to discover what is already owned by another would not transfer ownership. To use an analogy, if a company worked hard and spent a lot to figure out the secret formula to Coke, it would not thus be entitled to own Coca-Cola’s formula.
Naturally, if there is no God, then the matter changes (unless we were created by something else, of course). The gene would then not be the property of a creator, but something that arose naturally. While someone can rightfully claim to be the first to discover a gene, no one could claim to be the inventor of a naturally occurring gene. As such, the idea that ownership would be conferred by mere discovery would seem to be a rather odd one, at least in the case of a gene.
The obvious counter is that people claim ownership of land, oil, gold and other resources by discovering them. One could thus argue that genes are analogous to gold or oil: discovering them turns them into the property of the discoverer. There are, of course, those who claim that the ownership of land and such is unjustified, but this concern will be set aside for the sake of the argument (though not ignored—if discovery does not confer ownership, then gene ownership would be right out in regard to natural genes).
While the analogy is appealing, the obvious reply is that when someone discovers a natural resource, she gains ownership of that specific find and not all instances of what she found. For example, when someone discovers gold, she owns that gold but not gold itself. As another example, if I am the first human to stumble across naturally occurring Unobtanium on an ownerless alien world, I do not thereby gain ownership of all instances of Unobtanium, even if it cost me a great deal of money and work to find it. However, if I artificially create it in my philosophy lab, then it would seem to be rightfully mine. As such, the researchers who found the gene could claim ownership of that particular genetic object, but not the gene in general, on the grounds that they merely found it rather than created it. Also, if they had created a new artificial gene that occurs nowhere in nature, then they would have grounds for a claim of ownership—at least to the degree they created the gene.
While we consider ourselves to be the dominant species on the planet, we do face dangers from other species. While some of these species are large animals such as lions, tigers and bears, our greatest foes tend to be tiny. These include insects, bacteria and viruses.
While we have struggled, with some success, to eliminate various tiny threats, advances in technology and science have given us some new options. One of these is genetically modifying species so they cannot reproduce, thus resulting in their extermination. As might be suspected, insects such as disease-carrying mosquitoes are a prime target. One approach to wiping out mosquitoes is to genetically modify mosquito eggs so that the adults carry “extermination” genes. The adult males are released into the wild and reproduce with native females in the target area. The offspring then bear the modified gene, which renders the female mosquitoes unable to fly (they lack flight muscles). The males can operate normally, and they continue to “infect” the local population until (in theory) it is exterminated. As might be imagined, this approach raises various ethical concerns.
One obvious point of concern is the matter of intentionally exterminating a species. On the face of it, such an action seems to be morally dubious. However, it does seem easy enough to counter this on utilitarian grounds. After all, if an organism (such as a mosquito) is harmful to humans and does not have an important role to play in the ecosystem, then its extermination would seem to be morally justified on the grounds that doing so would create more good than harm. Naturally, if a harmful species were also beneficial in other ways, then the matter would be rather more complicated and such extermination could be wrong on the grounds that it would do more harm than good.
The utilitarian approach can be countered by appealing to an alternative approach to ethics. For example, it could be argued that such extermination is simply wrong regardless of the beneficial consequences to humans. It can, however, be pointed out that species go extinct naturally and, as such, perhaps a case could be made that such exterminations are not inherently wrong. The obvious counter would be to point out that there is a significant moral difference between a species dying of natural causes and being destroyed. The distinction between killing and letting die comes to mind here.
I am inclined to accept that the extermination of a harmful species can be acceptable, provided that the benefits do, in fact, outweigh the damage done by exterminating the species. Getting rid of, for example, HIV would seem to be morally acceptable. In the case of mosquitoes, the main concern would be the role of the mosquito in the ecosystem and the impact that its extermination would have. If, for example, the disease-carrying mosquito were an invasive species and its elimination would not impact the ecosystem in a negative way, then it would seem to be acceptable to exterminate it. Naturally, if the extermination is local and the species remains elsewhere, then the ethics of the situation become far less problematic. After all, I have no moral objection to the extermination of the roaches, termites, fleas and other bugs that attempt to reside in my house—there are plenty that remain in the wild, and they would pose a threat to the well-being of myself and my husky. Naturally, I would only accept the extermination of a species on very serious grounds, such as a clear danger presented to my own species. Even then, it would be preferable to see if the extermination could be avoided.
A second point of concern involves the methodology. While humans have attempted to wipe out species by killing them in the old-fashioned ways (such as poisons), the use of genetic modification could be morally significant.
There is, of course, the usual concern with “playing God” or tampering with nature. However, as is always pointed out, we routinely accept such tampering as morally acceptable in other areas. For example, by using artificial light, vaccines, surgery and such we are “playing God” and tampering with nature. As such, the idea that “playing God” is inherently wrong seems rather dubious. Rather, what is needed is to show that specific acts of “playing God” or tampering are wrong.
There is also the reasonable concern about unintended consequences, something that is not unknown in past attempts to exterminate species. For example, DDT had a host of undesirable effects. I do not, of course, think that modifying mosquitoes will create some sort of 1950s-style mega-mosquitoes that will rampage across the land. However, there are reasonable grounds to be concerned that genetic modification might have unexpected and unpleasant results, and this possibility should be seriously considered.
A final point I will address is a practical one, namely that even if a species is exterminated by genetic modification, another species might simply take its place. In the case of mosquitoes, it seems likely that if one type of mosquito is wiped out, then another will simply move into the vacated niche, and the problem, such as a mosquito-transmitted illness, will return. The concern is, of course, that resources would have been expended and a species exterminated for nothing. Naturally, if there are good grounds to believe that the extermination would be both effective and ethically acceptable, then this would be another matter.