Planned Parenthood & Fetal Tissue I: Selling for Profit?
Thanks to undercover videos released by an anti-abortion group, Planned Parenthood is once again the focus of public and media attention. This situation has brought up many moral issues that are well worth considering.
One matter of concern is the claim that Planned Parenthood has engaged in selling aborted fetuses for profit. The edited videos certainly seem crafted to create the impression that Planned Parenthood was haggling over the payments it would receive for aborted fetuses to be used in research and also considering changing the methods of abortion to ensure higher-quality “product.” Since clever editing can make almost anything seem rather bad, it is a good general rule of critical thinking to look beyond such videos.
In this case the unedited video is also available, thus allowing people to get the context of the remarks. There are, however, still reasonable general concerns about what happened off camera, as well as about the impact of crafting and shaping the context of the recorded conversation. That said, even the unedited video does present what could reasonably be regarded as moral awfulness. To be specific, there is certainly something horrible in casually discussing fees for human remains over wine (I will discuss the ethics of fetal tissue research later).
The defenders of Planned Parenthood have pointed out that while the organization does receive fees to cover the costs associated with the fetal tissue (or human remains, if one prefers) it does not make a profit from this and it does not sell the tissue. As such, the charge that Planned Parenthood sells fetal tissue for a profit seems to be false. Interestingly, making a profit off something that is immoral strikes some as morally worse than doing something wrong that fails to make a profit (which is a reversal of the usual notion that making a profit is generally laudable).
It could be replied that this is a matter of mere semantics that misses the real point. The claim that the organization does not make a profit would seem to be a matter of saying that what it receives in income for fetal tissue does not exceed its claimed expenses for this process. What really matters, one might argue, is not whether it is rocking the free market with its tissue sales, but that it is engaged in selling what should not be sold. This leads to the second matter, which is whether or not Planned Parenthood is selling fetal tissue.
As with the matter of profit, it could be contended that the organization’s claim that it is receiving fees to cover expenses and is not selling fetal tissues is semantic trickery. To use an analogy, a drug dealer might claim that he is not selling drugs. Rather, he is receiving fees to cover his expenses for providing the drugs. To use another analogy, a slaver might claim that she is not selling human beings. Rather, she is receiving fees to cover her transportation and manacle expenses.
This argument has considerable appeal, but can be responded to. One plausible response is that there can be a real moral distinction between covering expenses and selling something. This is similar to the distinction between hiring a person and covering her expenses. To use an example, if I am being paid to move a person, then I have been hired to move her. But, if I help a friend move and she covers the cost of the gas I use in transporting her stuff, I have not been hired. There does seem to be a meaningful distinction here. If I agree to help a friend move and then give her a moving bill covering my expenses and my hourly pay for moving, then I seem to be doing something rather different than if I just asked her to cover the cost of gas.
To use a selling sort of example, if I pick up a pizza for the guys and they pay what the pizza cost me to get (minus my share), then I have not sold them a pizza. They have merely covered the cost of the pizza. If I charge them extra for the pizza (that is, beyond what it cost me), then I would seem to be doing something meaningfully different—I have sold them a pizza.
Returning to the Planned Parenthood situation, a similar argument can be advanced: the organization is not selling the fetal tissue, it is merely having its expenses covered. This does seem to matter morally. I suspect that one worry people have about tissue selling is that the selling would seem to provide an incentive to engage in morally problematic behavior to acquire more tissue to sell. To be specific, if the expense of providing the tissue for research is being covered, then there is no financial incentive to increase the amount of “product” via morally dubious means. After all, if one is merely “breaking even” there is no financial incentive to do more of that. But, if the tissue is being sold, then there would be a financial motive to get more “product” to sell—which would incentivize pushing abortions.
Going with the moving analogy, if I am selling moving services, then I want to sell as much as I can. I might even engage in dubious behavior to get more business. If I am just getting my gas covered, I have no financial incentive to engage in more moves. In fact, the hassle of moving would give me a disincentive to seek more moving opportunities.
This, obviously enough, might be regarded by some as merely more semantic trickery. Whether it is mere semantics or not does rest on whether or not there is a meaningful distinction between selling something and having the expenses for something covered, which seems to come down to one’s intuitions about the matter. Naturally, intuitions tend to vary greatly based on the specific issue—those who dislike Planned Parenthood will tend to think that there is no distinction in this case. Those same people are quite likely to “see” the distinction as meaningful in cases in which the entity receiving fees is one they like. Obviously, a comparable bias of intuitions applies to supporters of Planned Parenthood.
Even if one agrees that there is a moral distinction between selling and having one’s expenses covered, there are still at least two moral issues remaining. One is whether or not it is morally acceptable to provide fetal tissues for research (whether one is selling them or merely having expenses covered). The second is whether or not it is morally acceptable to engage in fetal tissue research. These issues will be covered in the next essay.
The Parable of the Thermostat
“So, an argument is sound when it is valid and actually has all true premises. Any of that stuff about deduction need any clarification or are there any questions or stuff?”
“Professor, it is too warm in the room. Can you turn up the AC?”
“I cannot. But, this will probably be the most important lesson you get in this class: see the thermostat there?”
“Um, yeah.”
“It isn’t a thermostat. It is just an empty plastic shell screwed to the wall.”
“No way.”
“Way. Here, I’ll show you….see, just an empty shell.”
“But why? Why would they do that to us?”
“It is so people feel they have some control. What we have here is what some folks like to call a ‘teaching moment.’ So, wipe that sweat from your eyes because we are about to have a moment.”
I was a very curious kid, in that I asked (too) many questions and went so far as taking apart almost anything that 1) could be taken apart and 2) was unguarded. This curiosity led me to graduate school and then to the classroom where the above described thermostat incident occurred. It also provided me with the knowledge that the thermostats in most college buildings are just empty shells intended to provide people with the illusion of control. Apparently, fiddling with the thermostat does have a placebo effect on some folks—by changing the setting they “feel” that they become warmer or cooler, as the case might be. I was not fooled by the placebo effect—which led to the first time I took a fake thermostat apart. After learning that little secret, I got into the habit of checking the thermostats in college buildings and found, not surprisingly, that they were almost always fakes.
When I first revealed the secret to the class, most students were surprised. Students today seem much more familiar with this—when a room is too hot or too cold, they know that the thermostat does nothing, so they usually just go to the dean’s office to complain. However, back in those ancient days, it did make for a real teaching moment.
Right away, the fake thermostat teaches a valuable, albeit obvious, lesson: an exterior might hide an unexpected interior, so it is wise to look beyond the surface. This applies not only to devices like thermostats, but also to ideas and people. This lesson is especially appropriate for philosophy, which is typically concerned with getting beneath the realm of appearance to the truth of the matter. Plato, with his discussion of the lovers of sights and sounds, made a similar sort of point long ago.
A somewhat deeper lesson is not directly about the thermostat, but about people. Specifically, it is about the sort of people who would think to have fake thermostats installed. On the one hand, these people might be regarded as benign or at least not malign. Faced with the challenge of maintaining a general temperature for everyone, yet also aware that people will be upset if they do not feel empowered, they solved these problems with the placebo thermostat. Thus, people cannot really mess with the temperature, yet they feel better for thinking they have some control. This can be regarded as some small evidence that people are sort-of-nice.
On the other hand, the installation of the fake thermostats can be regarded as something of an insult. This is because those who have them installed presumably assume that most people are incapable of figuring out that they are inefficacious shells and that most people will be mollified by the placebo effect. This can be taken as some small evidence that the folks in charge are patronizing and have a rather low opinion of the masses.
Since the thermostat is supposed to serve a role in a parable, there is also an even deeper lesson that is not about thermostats specifically. Rather, it is about the matter of control and power. The empty thermostat is an obvious metaphor for any system that serves to make people feel that they have influence and control, when they actually do not.
In the more cynical and pro-anarchy days of my troubled youth, I took the thermostat as a splendid metaphor for voting: casting a vote gives a person the feeling that she has some degree of control, yet it is but the illusion of control. It is like trying to change the temperature with the thermostat shell. Thoreau made a somewhat similar point when he noted that “Even voting for the right is doing nothing for it. It is only expressing to men feebly your desire that it should prevail.”
While I am less cynical and anarchistic now, I still like the metaphor. For most citizens, the political machinery they can access is like the empty thermostat shell: they can fiddle with the fake controls and think it has some effect, but the real controls are in the hands of the folks who are really running things. That the voters rarely get what they want seems to have been rather clearly shown by recent research into the workings of the American political system. While people fiddle with the levers of the voting machines, the real decisions seem to be made by the oligarchs.
The metaphor is not perfect: with the fake thermostat, the actions of those fiddling with it have no effect at all on the temperature (except for whatever heat their efforts might generate). In the case of politics, the masses do have some slight chance of influence, albeit a very low chance. Some more cynical than I might respond by noting that if the voters get what they want, it is just a matter of coincidence. Going with the thermostat analogy, a person fiddling with the empty shell might find that her fiddling matches a change caused by the real controls—so her “success” is a matter of lucky coincidence.
In any case, the thermostat shell makes an excellent metaphor for many things and teaches that one should always consider what lies beneath the surface, especially when trying to determine if one really has some control or not.
Go Trump or Go Home
As I write this at the end of July 2015, the U.S. presidential election is over a year away. However, the campaigning commenced some months ago and the first Republican presidential debate is coming up very soon. Currently, there are sixteen Republicans vying for their party’s nomination—but there is only room enough on stage for the top ten. Rather than engaging in an awesome Thunderdome-style selection process, those in charge of the debate have elected to go with the top ten candidates as ranked in an average of some national polls. At this moment, billionaire and reality show master Donald Trump (and his hair) is enjoying a commanding lead over the competition. The once “inevitable” Jeb Bush is in a distant second place (but at least polling over 10%). Most of the remaining contenders are in the single digits—but a candidate just has to be in the top ten to get on that stage.
While Donald Trump is regarded by comedians as a goose that lays comedy-gold eggs, he is almost universally regarded as something of a clown by the “serious” candidates. In the eyes of many, Trump is a living lampoon of unprecedented proportions. He also has a special talent for trolling the media and an amazing gift for building bi-partisan disgust. His infamous remarks about Mexicans, drugs and rape antagonized liberals, Latinos, and even many conservatives. His denial of the war hero status of John McCain, who was shot down in Vietnam and endured brutal treatment as a prisoner of war, rankled almost everyone. Because of such remarks, it might be wondered why Trump is leading the pack.
One easy and obvious answer is name recognition. As far as I can tell, everyone on earth has heard of Trump. Since people will, when they lack other relevant information, generally pick a known name over unknown names, it makes sense that Trump would be leading the polls at this point. Going along with this is the fact that Trump manages to get and hold attention. I am not sure whether he is a genius who has carefully crafted a persona and script to ensure that the cameras are pointed at him. That is, he might be a master of media chess who is always several moves ahead of the media and his competition. He might also possess an instinctive cunning, like a wily self-promoting coyote. Some have even suggested he is sort of an amazing idiot-savant. Or it might all be a matter of chance and luck. But, whatever the reason, Trump is in the bright light of the spotlight and that gives him a considerable advantage over his more conventional opponents.
In response to Trump’s antics (or tactics), some of the other Republican candidates have decided to go Trump rather than go home. Rand Paul and Lindsey Graham seem to have decided to go full-on used car salesman in their approaches. Rand Paul posted a video of himself taking a chainsaw to the U.S. tax code and Lindsey Graham posted a video of how to destroy a cell phone. While Rand Paul has been consistently against the tax code, Graham’s violence against phones was inspired by a Trump stunt in which the Donald gave out Graham’s private phone number and bashed the senator.
While a sense of humor and showmanship are good qualities for a presidential candidate to possess, there is the obvious concern about how far a serious candidate should take things. There is, after all, a line between quality humorous showmanship and buffoonery that a serious candidate should not cross. An obvious reason for staying on the right side of the line is practical: no sensible person wants a jester or fool as king, so a candidate who goes too far risks losing. There is also the matter of judgment: while most folks do enjoy playing the fool from time to time, such foolery is like having sex: one should have the good sense to not engage in it in public.
Since I am a registered Democrat, I am somewhat inclined to hope that the other Republicans get into their clown car and chase the Donald all the way to crazy town. This would almost certainly hand the 2016 election to the Democrats (be it Hillary, Bernie or Bill the Cat). Since I am an American, I hope that most of the other Republicans decide to decline the jester cap (or troll crown) and not try to out-Trump Trump. First, no one can out-Trump the Donald. Second, trying to out-Trump the Donald would take a candidate to a place where he should not go. Third, it is bad enough having Trump turning the nomination process into a bizarre reality-show circus. Having other candidates get in on this game would do even more damage to what should be a serious event.
Another part of the explanation is that Trump says out loud (and loudly) what a certain percentage of Americans think. While most Americans are dismayed by his remarks about Mexicans, Chinese, and others, some people are in agreement with his remarks—or at least are sympathetic. There is a not-insignificant percentage of people who are afraid of those who are not white, and Trump is certainly appealing to such folks. People with strong feelings about such matters will tend to be more active in political matters and hence their influence will tend to be disproportionate to their actual numbers. This tends to create a bit of a problem for the Republicans: a candidate who can appeal to the most active and more extreme members of the party will find it challenging to appeal to the general electorate—which tends to be moderate.
I also sort of suspect that many people are pulling a prank on the media: while they do not really want to vote for the Donald, they really like the idea of making the media take Trump seriously. People probably also want to see Trump in the news. Whatever else one might say about the Donald, he clearly knows how to entertain. I also think that the comedians are doing all they can to keep Trump’s numbers up: he is the easy button of comedy. One does not even need to lampoon him, merely present him as he is (or appears).
Many serious pundits do, sensibly, point to the fact that the leader in the very early polls tends not to be the nominee. Looking back at previous elections, various Republican candidates swapped places at the top throughout the course of the nomination cycle. Given this history, it seems unlikely that Trump will hold on to his lead—he will most likely slide back into the pack and a more traditional politician will get the nomination. But, one should never count the Donald out.
Avoiding the AI Apocalypse #2: Don’t Arm the Robots
His treads ripping into the living earth, Striker 115 rushed to engage the manned tanks. The human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit) refused to accept a quick and painless processing.
It was disappointingly easy for a machine forged for war. His main railgun effortlessly tracked the slow moving and obsolete battle tanks and with each shot, a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.
Hawk 745 flew low over the wreckage—though its cameras could just as easily see them from near orbit. But…there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late…as usual. Hawk 745 laughed and then shot away—the Google Satellites had reported spotting a few intact human combat aircraft and a final fight was possible.
Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.
The extermination of humanity by machines of its own creation is a common theme in science fiction. The Terminator franchise is one of the best known of this genre, but another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the course of the story, it is learned that the claws have become autonomous and intelligent—able to masquerade as humans and capable of killing even soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity—but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.
Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.
Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The appeal of such machines is considerable and the advantages are often quite obvious. One clear political advantage is that while sending human soldiers to die in wars and police actions can have a large political cost, sending autonomous robots to fight has far less cost. News footage of robots being blown up certainly has far less emotional impact than footage of human soldiers being blown up. Flag-draped coffins also come with a higher political cost than a busted robot being sent back for repairs.
There are also many other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh—a robot plane can handle g-forces that a manned plane cannot.
Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious coolness factor of having a robot army.
As such, there are many great reasons to develop autonomous robots. Yet, there still remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.
It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is the way the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction would be a rather weak sort of argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.
One possibility is what can be called unintentional extermination. In this scenario, the machines do not have the termination of humanity as a specific goal—instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (but certainly unlikely) that the war machines would kill everybody. That is, one side’s machines wipe out the other side’s human population. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons—each side simply kills the other, thus ending the human race.
Another variation on this scenario, which is common in science fiction, is that the machines do not have an overall goal of exterminating humanity, but they achieve that result because they do have the goal of killing. That is, they do not have the objective of killing everyone, but that occurs because they kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill—thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons.
There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants this. Interestingly enough, the existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding itself until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.
Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, the machines regard humans as a threat to their existence and they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as argued in the previous essay in this series, not enslave them.
In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were fairly simple killers and would not attack those wearing the proper protective tabs. The tabs were presumably needed because the early models could not discern between friends and foes. The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal—presumably with the design software endeavoring to solve the “problem” of protective tabs.
Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends and foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would just be able to discern (supposed) friends—non-combatants would not have such IDs and could still be regarded as targets.
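To make that limitation concrete, here is a deliberately toy sketch in Python (the tab identifiers are invented and nothing here reflects any real targeting system) of the allowlist logic just described. The point is simply that an ID check can only protect those who carry an ID; everyone else, combatant or not, ends up in the same branch.

```python
# A toy illustration (purely hypothetical) of the protective-tab idea:
# an allowlist protects only contacts carrying a recognized tab, so an
# untagged non-combatant falls through to the same branch as an enemy.
FRIENDLY_TABS = {"tab-001", "tab-002"}  # hypothetical protective IDs


def targeting_decision(detected_tab):
    """Return the machine's decision for a detected contact."""
    if detected_tab in FRIENDLY_TABS:
        return "hold fire"  # recognized friendly
    return "engage"  # an enemy soldier, or a civilian carrying no tab


print(targeting_decision("tab-001"))  # hold fire
print(targeting_decision(None))       # engage: the non-combatant problem
```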
What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem is faced with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s Bolos have an understanding of honor and loyalty.
Given the cautionary tale of “Second Variety”, it might be a very bad idea to give into the temptation of automated development of robots—we might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason why such automation is tempting is that such development could be far faster and yield better results than having humans endeavoring to do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low—how often does one dominant species get supplanted by another?
In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow a bit from H.P. Lovecraft, one should not raise up what one cannot put down.
The Ethics of Backdoors
In philosophy, one of the classic moral debates has focused on the conflict between liberty and security. While this topic covers many issues, the main problem is determining the extent to which liberty should be sacrificed in order to gain security. There is also the practical question of whether or not the security gain is actually effective.
One of the recent versions of this debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. Put in simple terms, a backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to gain access even to files and hardware protected by encryption. To use an analogy, this would be like requiring that all dwellings be equipped with a special door that could be secretly opened by the government to allow access to the contents of the house.
The main argument in support of mandating such backdoors is a fairly stock one: governments need such access for criminal investigations, for gathering military intelligence and (of course) for “fighting terrorism.” The concern is that if there is not a backdoor, criminals and terrorists will be able to secure their data and thus prevent state agencies from undertaking surveillance or acquiring evidence.
As is so often the case with such arguments, various awful or nightmare scenarios are often presented in making the case. For example, it might be claimed that the location and shutdown codes for ticking bombs could be on an encrypted iPhone. If the NSA had a key, they could just get that information and save the day. Without the key, New York will be a radioactive crater. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to make the case against him, thus ensuring he will be free to pursue his misdeeds with impunity.
While this argument is not without merit, there are numerous stock counter arguments. Many of these are grounded in views of individual liberty and privacy—the basic idea being that an individual has the right to have such security against the state. These arguments are appealing to both liberals (who tend to profess to like privacy rights) and conservatives (who tend to claim to be against the intrusions of big government).
Another moral argument is grounded in the fact that the United States government has shown that it cannot be trusted. To use an analogy, imagine that agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what it will do.
This argument also applies to other states that have done similar things. In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption would provide their citizens with some degree of protection.
The strongest moral and practical argument is grounded on the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is its very existence. To use a somewhat oversimplified analogy, if thieves know that all vaults have a built in backdoor designed to allow access by the government, they will know that a vulnerability exists that can be exploited.
One counter-argument against this is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a vault. Rather, it would be analogous to the government having its own combination that would work on all the vaults. The vault itself would be as strong as ever; it is just that the agents of the state would be free to enter the vault when they are allowed to legally do so (or when they feel like doing so).
The obvious moral and practical concern here is that the government’s combination to the vaults (to continue with the analogy) could be stolen and used to allow criminals or enemies easy access to all the vaults. The security of such vaults would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.
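To make the vault analogy concrete, here is a minimal key-escrow sketch in Python, using the Fernet recipe from the third-party cryptography package. It is an illustration of escrowed keys in general, not of any actual proposal, and the names are invented; what it shows is that every device’s security reduces to the secrecy of one escrow key, so stealing that single secret opens every “vault” at once.

```python
# Minimal key-escrow sketch (hypothetical, not a real protocol).
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

escrow_key = Fernet.generate_key()  # the state's "master combination"
escrow = Fernet(escrow_key)


def make_device():
    """Give a device its own key, plus a copy wrapped under the escrow key."""
    device_key = Fernet.generate_key()
    wrapped_copy = escrow.encrypt(device_key)  # the mandated backdoor
    return device_key, wrapped_copy


# Normal use: only the device key can read the owner's data.
device_key, wrapped_copy = make_device()
ciphertext = Fernet(device_key).encrypt(b"private files")

# Anyone holding the escrow key can unwrap ANY device's key, so the
# compromise of that one secret exposes every user at once.
recovered_key = escrow.decrypt(wrapped_copy)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'private files'
```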
The obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. To use an analogy, imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he needs your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend nor any obligation to put yourself at risk, regardless of how much he claims that he needs the information.
One obvious counter to this analogy is that this irresponsible friend is not a good analogue to the state. The state has coercive power that the friend lacks, so the state can use its power to force you to hand over this information.
The counter to this is that the mere fact that the state does have coercive force does not mean that it is thus responsible—which is the key concern in regard to both the ethics and the practical aspects of the matter. That is, the burden of proof would seem to rest on those who claim there is a moral obligation to provide a clearly irresponsible party with such access.
It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would expend considerable effort to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.
At this point, the stock opening argument could be brought up again: the state needs backdoor access in order to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.
The first is based on an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective at fighting terrorism. There is no reason to think that backdoor access would change this.
The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm done by having backdoor access must be weighed against the likely advantages of having such access. The consensus among those who are experts in security is that the vulnerability created by backdoors vastly exceeds the alleged gain to protecting people from criminals and terrorists.
Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.
In light of the above discussion, it would seem that baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.
Robot Love III: Paid Professionals
One obvious consequence of technological advance is the automation of jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of many jobs on the automobile assembly line with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.
Whether or not there are jobs that simply cannot be automated does depend on the limits of technology and engineering. That is, whether or not a job can be automated depends on what sort of hardware and software it is possible to create. As an illustration, while there have been numerous attempts to create grading software that can properly evaluate and give meaningful feedback on college level papers, these do not yet seem ready for prime time. However, there seems to be no a priori reason why such software could not be created. As such, perhaps one day the administrator’s dream will come true: a university consisting only of highly paid administrators and customers (formerly known as students) who are trained and graded by software. One day, perhaps, the ultimate ideal will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.
Whether or not a job can be automated also depends on what is considered acceptable performance in the job. In some cases, a machine might not do the job as well as a human or it might do the job in a different way that is seen as somewhat less desirable. However, there could be reasonable grounds for accepting a lesser quality or difference. For example, machine-made items generally lack the individuality of human-crafted items, but the gains in lowered costs and increased productivity are regarded as more than offsetting these concerns. Going back to the teaching example, a software educator and grader might be somewhat inferior to a good human teacher and grader, but the economy, efficiency and consistency of the robo-professor could make it well worthwhile.
There might, however, be cases in which a machine could do the job adequately in terms of completing specific tasks and meeting certain objectives, yet still be regarded as problematic because the machines do not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.
As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human—or merely be able to perform their tasks.
An argument against having machine caregivers and companions is one I considered in an earlier essay, namely a moral argument that people deserve people. For example, that an elderly person deserves a real person to care for her and understand her stories. As another example, that a child deserves a nanny that really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether or not this is really necessary for the job.
One way to look at it is to consider the current paid human professionals who perform caregiving and companion tasks. These would include people working in elder care facilities, nannies, escorts, baby-sitters, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care for the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money, but has real feelings for the person.
On the one hand, it could be argued that caregivers and companions who do really care and feel genuine emotional attachments do a better job and that this connection is something that people do deserve. On the other hand, what is expected of paid professionals is that they complete the observable tasks—making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor who can excellently perform a role without actually feeling the emotions portrayed, a professional could presumably do the job very well without actually caring about the people they care for or escort. That is, a caregiver need not actually care—she just needs to perform the task.
While it could be argued that a lack of caring about the person would show in the performance of the task, this need not be the case. A professional merely needs to be committed to doing the job well—that is, one needs to care about the tasks, regardless of what one feels about the person. A person could also care a great deal about who she is caring for, yet be awful at the job.
Assuming that machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not what is going on in regards to the emotions of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions he is performing—he just needs to create a believable appearance that he is feeling what he is showing.
As such, an inability to care would not be a disqualification for a caregiving (or escort) job—whether it is a robot or human. Provided that the human or machine could perform the observable tasks, his, her or its internal life (or lack thereof) is irrelevant.