Google’s entry into the computer business has been a mixed one. While certain Chromebooks have been selling quite well, they are still a minute fraction of the laptop market. One of Google’s latest endeavors in the realm of hardware is the famous Google Glasses. While the glasses have been the focus of considerable attention, it remains to be seen whether they will prove to be a success or an interesting failure.
Since I rather like gadgets, the idea of a wearable computer is certainly appealing, if only for the science fiction aspect. After all, the idea of such technology is old news in science fiction. In my own case, I would most likely use such glasses for running and driving. People who know me know how important navigational technology is for me to have a reasonable chance of getting from one point to another. As such, if the Google Glasses can handle this, I might consider getting a pair. Of course, I am also known for being frugal, so the glasses would have to be reasonably priced.
While I like the idea of Google Glasses, there are some practical concerns regarding this technology. One obvious concern is the distraction factor. Mobile phones and other devices are infamous for their distracting power, and it seems reasonable that a device designed to sit right in front of the face would be even more distracting than existing mobile devices. This distraction is of concern primarily for safety, especially in the context of driving. However, there is also the concern that people will be distracted from the other people physically near them.
Another practical concern is the matter of whether or not people will actually accept the glasses. One factor is that people generally prefer not to wear glasses. While my vision is reasonably good, I do have prescription glasses. However, I find wearing glasses annoying enough that I only wear them when I really want or need to see things sharply. As such, I usually only wear them while playing video games and watching movies at the theater. Lest anyone be worried, I can drive just fine without them. People can, of course, get accustomed to glasses, but there is the question of whether people will find the glasses compelling enough to wear.
There is also a somewhat philosophical issue regarding the glasses, namely the concern about privacy. Or, to be more accurate, concern about two types of privacy. These two types are defined by which side of the glasses a person happens to be on.
In one direction, the privacy concerns relate to the people the glasses are pointed towards. Like almost all modern smartphones, the Google Glasses have a camera and, as such, raise the same basic concerns about privacy. However, the Google device broadens the concern. Since the glasses are glasses, people might not notice that they have a camera pointed at them. Also, since the glasses are worn, they are more likely than other cameras to be pointed at people. After all, a person has to take out and hold a mobile phone to use its camera effectively. But, with the glasses, the camera will be easily and automatically pointing at the outer world.
In the case of the public context, it is rather well established that people do not have an expectation of privacy in public. This seems reasonable since the public context is just that, public rather than private. However, it can be contended that many of the notions governing the concepts of privacy have become obsolete because of changing technology. As such, there perhaps needs to be a reconsideration of the expectations in the public context. These expectations might be taken as including an expectation not to be filmed or photographed, even casually as a person saunters by wearing their Google Glasses. In addition to the question of what the person using the glasses might do, there is also the concern about what Google will do, especially in light of past issues involving the Google vehicles cruising neighborhoods and gathering up data.
Obviously, there are also concerns about people using the devices more nefariously in contexts in which people do have an expectation of privacy.
In the other direction, there are the privacy concerns relating to the user. What will Google know about the activities and location of the wearer and how will this information be used? Obviously enough, Google would be able to gather a great deal of information about the user of a pair of Google Glasses and Google is rather well known for being able to use such data.
Interestingly, a person wearing a pair of Google Glasses could end up being both a spy for and spied upon by Google.
The revelations about the once secret Prism program of the National Security Agency have revitalized the old debates about liberty versus security and the individual versus the state. Obviously enough, there are many legal and ethical issues here.
On the face of it, Prism was quite legal, at least in the United States. That is, the program went through all of the legally established procedures for such a program. It remains to be seen, however, whether it is actually constitutional. While questions of legality and constitutionality are interesting, I’ll focus on some of the ethical concerns.
Not surprisingly, the main moral defense of Prism and other programs is based in utilitarianism: any (alleged) wrongs done by intruding into privacy are morally offset by the greater good done by increasing security. The Obama administration has made vague claims that the program has prevented attacks and there is the claim that it will prevent attacks in the future. However, as I have noted before, these claims come from the administration that hid the program behind lies. These past deceits, and the fact that the administration is hardly a disinterested party, clearly make it a dubious source for claims about the efficacy of Prism. However, I do agree that Prism can potentially be morally justified on these grounds, and this would be contingent on it doing more good than harm.
The alleged good of such a program can be assessed in terms of the attacks prevented and estimates of the damage that would have been done if such attacks had succeeded. Naturally, the importance of Prism in such prevention must also be considered. After all, if other means would have prevented the attack anyway, then Prism’s credit should be adjusted appropriately.
There are various ways to argue that Prism and similar programs are wrong. One option is to use the same method as can be used to defend it, namely an assessment of the consequences of the program. In order to show that the program is wrong, what would be needed would be reasons to believe that the harms inflicted by the program exceed the benefits. As noted above, the alleged benefits involve increased security. However, the only evidence I have for the effectiveness of the program is the claims made by the people who are endeavoring to defend it. Regarding the harms done, there seem to be a variety of actual and potential harms.
I know that my view that programs like Prism are wrong stems from purely emotional causes. First, I was quite the WWII buff as a kid and I was taught that only organizations like the Gestapo engaged in such broad spying on the citizens of the state. Second, I grew up during the Cold War and well remember being told that the communist countries were bad because they spied on their citizens, something we would not do in the West. That sort of thing was for the secret police of dictatorships, not democratic states. These are, of course, naive views and based in emotions rather than logic. However, there seems to be something to the notion that a difference between the good guys and the bad guys does involve the willingness to gather intelligence about citizens.
One harm is that the secrecy and nature of the program seem to have increased the distrust of the citizens for the United States government. It has also damaged the United States’ image around the world. Of course, this sort of damage can be considered relatively minor and it can be claimed that the fickle focus of public attention will shift, especially if some celebrity scandal or drama catches the public eye.
Another category of harms arises from the invasion of privacy itself. These harms could include psychological harms regarding the violation of privacy and fears about what the state might do with the information. As was evident in the debate over gun control, people can be quite agitated and dismayed by even the rumor that the state might track firearm purchases. While the Prism program does not (directly) track guns (as far as we know) it certainly gathers a vast amount of information about people.
A third category of harms involves the potential harms. One obvious worry is that even if the information is being used for only legitimate purposes now, there is the possibility that the information could be misused in the future. Or is being misused now. Some people were quite upset by the IRS asking certain groups for more information and with the Department of Justice gathering information about reporters. Obviously, whatever harms occurred in those cases would be vastly multiplied. After all, Prism is getting into everyone’s business.
There are, of course, other harms that can be considered.
A second option is to go with a rights based approach to the matter. If there is a moral right to privacy, then Prism would certainly seem to intrude on that right (if not violate it). Naturally, rights can be limited on moral grounds. The usual example is, of course, that the freedom of speech does not allow anyone to say anything at any time; the right is limited by concerns about harms. Likewise for the right to privacy (if there is such a right).
The obvious challenge with an appeal to a right is to argue that there is such a right. In the case of legal rights, this is easy enough: one can just point to the relevant laws that specify the legal rights. When it comes to moral rights, it is a bit trickier. Classic rights theorists like John Locke argued for rights to life, liberty and property. A case can be made that certain privacy rights fall under the right to property. For example, it can be contended that my communications belong to me and that if the state intercepts and stores them, the state is stealing my property. A case can also be made to put certain privacy rights under the right to liberty. For example, I should have the liberty of communication without the state restricting it by creating the fear that my communications can be intercepted and copied without the justification of legitimate suspicion of wrongdoing on my part.
In any case, it would be interesting to see a full development of privacy rights or at least a clear presentation of what is lost when privacy is intruded upon by programs like Prism.
Revelations of the United States government’s Prism program have brought the matter of privacy into the spotlight. While it should be no surprise that the United States government is scooping up vast quantities of information from communication systems ranging from phones to the internet, the scope and nature of the collection has disturbed many people.
Not surprisingly, the Obama administration has defended Prism on two main grounds. The first is that the program is legal. That is, it went through all the proper secret processes in the dark places of the government. But, of course, mere legality does not make something right. There is also the legitimate worry that this legal program actually violates Constitutional rights.
I do not have any doubts that the program is legal; I am confident that it was properly guided through the dark caverns under the public government and legally set loose upon the world. As for its constitutionality, I am not fully reassured by the assurances that the data scooped up by Prism is being used in strict accordance with the Constitution.
The second is the usual line that it is necessary for national security. The idea is that certain rights need to be infringed upon in order to make us safer. This approach does have its appeal. This is because the limitation of rights can, in fact, make us safer. For example, limiting the right of people to sell contaminated food does make us safer. As another example, limiting the right to own certain weapons (like chemical weapons and grenades) does make us safer. As such, I do not reject the “it makes us safer” argument out of hand.
When considering this justification, there are two main concerns. The first is whether or not the limitation of the rights in question actually makes us safer. After all, while limiting a right can make us safer, this is not always the case. It would, of course, be a bad idea to restrict a right when doing so has no benefit. In the case of Prism, what would be needed would be proof that the program actually made us safer. This might include evidence of foiled plots and arrests of terrorists that resulted specifically from Prism. Naturally, I do not really expect such information to be forthcoming since the effectiveness of the program is no doubt a matter of national security and thus secret. However, I will consider the possibility that Prism did yield some positive results that could be used to justify what are claimed to be privacy violations.
The second concern is whether or not the safety gained is worth the cost resulting from the limitation (or violation) of the right in question. For example, we would be safer if each person had a tracking chip implanted into his body. If a person knows that her location is always known, then she would be less likely to engage in misdeeds and far easier to catch if she chose to act badly anyway. However, such implantation and tracking would seem to be an excessive violation of the right to privacy and hence would not seem to be worth the cost. In the case of Prism, a key question is whether or not the (alleged) gain in security is worth the cost paid in terms of the limitation or violation of rights.
The Obama administration has been quick to claim that the data gathered does not violate the right to privacy of the people that matter. If this is true, then perhaps the security gained is worth the price. However, there is the reasonable concern that this is not the case and it is certainly worrisome when the state engages in such massive data scooping.
For those not familiar with the phrase, “Big Data” is used to describe the acquisition, storage and analysis of large quantities of data. The search giant Google was one of the pioneers in this area and it has developed into an industry worth billions of dollars. Big Data and its uses also raise ethical concerns.
One common use of Big Data is to analyze customer data so as to make predictions that would be useful in conducting targeted ad campaigns. Perhaps the most infamous example of this is Target’s pregnancy targeting. This Big Data adventure was a model of inductive reasoning. First, an analysis was conducted of Target customers who had signed up for Target’s new baby registry. The purchasing history of these women was analyzed to find patterns of buying that corresponded to each stage of pregnancy. For example, pregnant women were found to often buy lots of unscented lotion at the start of the second trimester. Once the analysis revealed the buying patterns of pregnant women, Target then applied this information to the buying patterns of its women customers. Oversimplifying things, they were essentially using an argument by analogy: inferring that women not known to be pregnant who had X, Y, and Z buying patterns were probably pregnant because women known to be pregnant had X, Y, and Z buying patterns. The women who were tagged as probably pregnant were then subjected to targeted ads for baby products, and this proved to be a winner for Target, other than some public relations issues.
One interesting aspect of this method is that it does not follow the usual model of predicting a person’s future buying behavior from his/her past buying behavior. An example of predicting future buying behavior based on past behavior would be predicting that I would buy Gatorade the next time I went grocery shopping because I have bought it consistently in the past. The analysis used by Target and other companies differs from this model by making inferences about the future behavior of customers based on their similarity to customers whose past buying behavior is known. For example, a store might see shifts in someone’s buying behavior that match other data from people starting to get into fitness and thus predict the person was getting into fitness. The store might then send the person (and others like her) targeted ads featuring Gatorade coupons because their models show that such people buy more Gatorade.
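The similarity-based inference described above can be sketched as a toy classifier: compare a customer’s basket to the buying "signature" of customers known to belong to some group, and flag her if the overlap is high enough. To be clear, the categories, weights, and threshold below are all hypothetical placeholders for illustration; real retail models are vastly larger and more sophisticated.

```python
# Toy sketch of similarity-based targeting. All categories, weights, and
# the threshold are hypothetical, not Target's actual model.

# Buying-pattern "signature" derived from customers known to be pregnant:
# the fraction of their basket devoted to each product category.
PREGNANT_PROFILE = {
    "unscented_lotion": 0.30,
    "vitamin_supplements": 0.25,
    "cotton_balls": 0.20,
    "snacks": 0.15,
    "soda": 0.10,
}

def basket_profile(basket):
    """Convert a list of purchased items into category frequencies."""
    total = len(basket)
    profile = {}
    for item in basket:
        profile[item] = profile.get(item, 0.0) + 1.0 / total
    return profile

def similarity(profile_a, profile_b):
    """Overlap score: shared weight in each category, summed (0 to 1)."""
    categories = set(profile_a) | set(profile_b)
    return sum(min(profile_a.get(c, 0.0), profile_b.get(c, 0.0))
               for c in categories)

def probably_pregnant(basket, threshold=0.6):
    """Flag the customer if her basket resembles the known pattern."""
    return similarity(basket_profile(basket), PREGNANT_PROFILE) >= threshold

matching = ["unscented_lotion", "unscented_lotion", "vitamin_supplements",
            "cotton_balls", "snacks"]
unrelated = ["soda", "chips", "beer", "frozen_pizza", "soda"]

print(probably_pregnant(matching))   # True: basket closely matches the profile
print(probably_pregnant(unrelated))  # False: little overlap with the profile
```

Note that nothing in either basket is private on its own; the inference to a private matter comes entirely from the comparison against the learned pattern, which is exactly the ethical point at issue.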
This method also has an interesting Sherlock Holmes aspect to it. The fictional detective was able to use inductive logic (although he was presented as deducing) to make impressive inferences from seemingly innocuous bits of information. Big Data can do this in reality and make reliable inferences based on what appears to be irrelevant information. For example, likely voting behavior might be inferred from factors such as one’s preferred beverage.
Naturally, Big Data can be used to sell a wide variety of products, including politicians and ideology. It also has non-commercial applications, such as law enforcement and political uses. As such, it is hardly surprising that companies and agencies are busily gathering and analyzing data at a relentless and ever growing pace. This certainly is cause for concern.
One ethical concern is that the use of Big Data can impact the outcome of elections. For example, by analyzing massive amounts of data, information can be acquired that would allow ads to be effectively crafted and targeted. Given that Big Data is expensive, the data advantage would tend to go to the side with the most money, thus increasing the influence of money on the outcome of elections. Naturally, the influence of money on elections is already a moral concern. While more spending does not assure victory, there is a clear connection between spending and success. To use but one obvious example, Mitt Romney was able to beat his Republican competitors in part by being able to outlast them financially and outspend them.
In any case, Big Data adds yet another tool and expense to political campaigning, thus making it more costly for people to run for office. This, in turn, means that those running for office will need even more money than before, thus making money an even greater factor than in the past. This, obviously enough, increases the ability of those with more money to influence the candidates and the issues.
On the face of it, it would seem unreasonable to require that campaigns go without Big Data. After all, it could be argued that this would be tantamount to demanding that campaigns operate in ignorance. However, the concerns about big money buying Big Data to influence elections could be addressed by campaign finance reform, which would be another ethical issue.
Perhaps the biggest ethical concern about Big Data is the matter of privacy. First, there is the ethical worry that much of the data used in Big Data is gathered without people knowing how the data will be used (and perhaps that it is even being gathered). For example, the customers at Target seemed to be unaware that Target was gathering such data about them to be analyzed and used to target ads.
While people might know that information is being collected about them, knowing this and knowing that the data will be analyzed for various purposes are two different things. As such, it can be argued that private data is being gathered without proper informed consent and this is morally wrong.
The obvious solution is for data collectors to make clear what the data will be used for, thus allowing people to make an informed choice regarding their private information. Of course, one problem that will remain is that it is rather difficult to know what sort of inferences can be made from seemingly innocuous data. As such, people might think that they are not providing any private data when they are, in fact, handing over data that can be used to make inferences about private matters.
If a business claims that they would be harmed because people would not hand over such information if they knew what it would be used for, the obvious reply is that this hardly gives them the right to deceive to get what they want. However, I do not think that businesses have much to worry about—Facebook has shown that many people are quite willing to hand over private information for little or nothing in return.
A second and perhaps the most important moral concern is that Big Data provides companies and others with the means of making inferences about people that go beyond the available data and into what might be regarded as the private realm. While this sort of reasoning is classic induction, Big Data changes the game because of the massive amount of data and processing power available to make these inferences, such as whether women are pregnant or not. In short, the analysis of seemingly innocuous data can yield inferences about information that people would tend to regard as private—or at the very least, information they would not think would be appropriate for a company to know.
One obvious counter to this is to argue that privacy rights are not being violated. After all, as long as the data used does not violate the privacy of individuals, then the inferences made from this data cannot be regarded as violating people’s privacy, even if the inferences are about matters that people would regard as private (such as pregnancy). To use an analogy, if I were to spy on someone and learn from this that she was an alcoholic, then I would be violating her privacy. However, if I inferred that she is an alcoholic from publicly available information, then I might know something private about her, but I have not violated her privacy.
This counter is certainly appealing. After all, there does seem to be a meaningful and relevant distinction between directly getting private information by violating privacy and inferring private information using public (or at least legitimately provided) data. To use an analogy, if I get the secret ingredient in someone’s prize recipe by sneaking a look at the recipe, then I have acted wrongly. However, if I infer the secret ingredient by tasting the food when I am invited to dinner, then I have not acted wrongly.
A reasonable reply to this counter is that while there is a difference between making an inference that yields private data and getting the data directly, there is also the matter of intent. It is, for example, one thing to infer the secret ingredient simply by tasting it, but it is quite another to arrange to get invited to dinner specifically so I can get that secret ingredient by tasting the food. To use another example, it is one thing to infer that someone is an alcoholic, but quite another to systematically gather public data in order to determine whether or not she is an alcoholic. In the case of Big Data, there is clearly intent to infer data that customers have not already voluntarily provided. After all, if the data had been provided, there would be no need to undertake an analysis in order to get the desired information. Thus, while the means do not involve a direct violation of privacy rights, they do involve an indirect violation—at least in cases in which the data is private (or at least intended to be private).
The solution, which would probably be rather problematic to implement, would involve setting restrictions on what sort of inferences can be made from the data on the grounds that people have a right to keep that information private, even if the means used to acquire it did not involve any direct violations of privacy rights.
A recent case raises questions about the ethics of reading a spouse’s email. The gist of the situation is that Leon Walker of Michigan faces the possibility of up to five years in prison for allegedly “hacking” into his wife’s email account (they are now divorced) and learning that she was having an affair with her second ex-husband. Michigan does have a law against “hacking” computers, programs or networks to get property “without authorization.” Applying this law to accessing a spouse’s email is seen by some legal experts as a stretch, but Leon Walker might very well face trial under this law.
Walker has offered two main defenses for his actions.
His first defense is that his wife had asked him to read her emails before and had given him the password.
If this is true, then it would certainly seem that she had granted him authorization to access her email. As such, he would seem to have acted neither illegally nor wrongly.
Of course, there is the question of whether or not he was acting under her authorization when he learned of her affair. While it is possible, it seems somewhat unlikely that she would be sending and receiving emails related to the affair while still authorizing her husband to read her email. If she did, in fact, remove her authorization, then a case could be made that he did break the law. Ethically, it could also be seen as a wrongful act. After all, being married does not grant a spouse carte blanche access to the other person’s private matters and this would seem to include email. To use an analogy, if someone allowed her husband to open a bill addressed to her, this would not grant him a right to open all her letters and read through them without her explicit permission.
While it seems reasonable to accept a presumption of privacy even with spouses, there is still the question of whether the right to privacy gives spouses a right to hide misdeeds (such as having an affair). This leads to Walker’s second argument.
After getting the emails, Walker passed on the information to his ex-wife’s first ex-husband. This man used the information to justify filing an emergency motion to get custody of his son (whom he had with Clara Walker, the woman in question). The second ex-husband was apparently once arrested on a charge of domestic violence and since Clara Walker was apparently having an affair with him, Leon Walker saw this as a matter of significant concern.
Walker likened his reading his ex-wife’s email to kicking down a door during a house fire. While this would be breaking in, it would be breaking in with the intent of saving people from harm.
This analogy does have a certain degree of appeal. After all, just breaking down someone’s door to steal their stuff would be a criminal (and most likely immoral) action. This would be analogous to hacking into a computer to, for example, steal credit card numbers. In contrast, kicking down a locked door when a house is on fire so as to save people would not be a criminal act nor a wrongful action. If Walker is right, then his reading his ex-wife’s email should not be considered criminal or unethical.
Of course, when a person kicks down the door of a burning house they know that it is on fire and they have to gain access to actually help people. In the case of the email, Walker would need to have clear signs of a “fire” and would need to have reason to believe that he had to “kick down the door” in order to help people. This is, of course, a factual matter. It could be the case that Walker had reason to believe that his wife was having an affair and that crucial information relating to the safety of others was locked behind the password (and could not be acquired via other non-intrusive means).
If this is the case, then Walker would seem to have acted in an acceptable manner. After all, a right to privacy does not seem to give a person a shield behind which they can conceal misdeeds or hide information relating to a possible danger to, for example, a child. In such a case, the person’s right to privacy would be violated and in this they would be wronged. However, the violation could be justified based on the nature of what was being concealed. After all, it would seem odd to say that a married person has a right to conceal evidence of her affair from her husband. He would certainly seem to have a moral right to know that.
In response, it could be argued that the right of a spouse (or ex-spouse) to know about such things does not extend to intruding into certain privacy rights, such as email. After all, while there is a certain appeal to thinking it was okay to get into someone’s email when they were having an affair, one must also consider all the cases in which the spouse is not having an affair. It would be odd to say that spouses should have the right to get into each other’s email, mail, and so on all the time because people have affairs.
Some legal experts and Leon Walker’s attorney are, of course, focusing on the legal aspect of the case. The law in question seems to have been intended to deal with cases in which someone has actually hacked into a computer or network and done damage or has stolen something.
While reading someone else’s email is an intrusion into that person’s privacy, it does not seem to fall under the law, at least as it is worded. After all, nothing seems to have been stolen from the woman and she can hardly claim that she was the damaged party when her affair was exposed.
It will be interesting to see how the case develops and what impact it has on the legality of the no doubt common practice of spousal snooping.
When I discuss various issues relating to safety and security, I generally take the view that we should have the minimum security needed to provide an effective defense. I also take into account the impact of such measures on rights and liberties while also giving positive and negative consequences their just due. This approach, obviously enough, means that my position on specific security/safety measures can be argued against on these various grounds.
Since I am against the use of full body scans and pat downs (at least as they are currently implemented), one way to argue against me is to consider the dire consequences of the dreaded “what if” scenario. “What if”, says the concerned person, “the scans are stopped and a terrorist takes out a plane with an underwear bomb that the scanners would have stopped?” Put in argument form, the idea is that the scanners and pat downs should be used because they have a chance of stopping such an attack. The added safety presumably overrides concerns about privacy rights, government intrusiveness, and potential harms to passengers (such as being humiliated in various ways).
On the face of it, “what if” scenarios are a legitimate consideration when it comes to security and safety. So, for example, when considering whether deep water drilling should be allowed or not, we should consider what would happen if another well failed. As another example, when deciding whether to lock my office door or not, I should consider what would happen if a dishonest person walked by and saw my computer and printer behind an unlocked door.
While “what if” scenarios are worth considering, merely presenting a dire possible consequence does not automatically justify a practice. After all, the likelihood of the dire consequence needs to be considered as does the likely effectiveness of the method and the cost it imposes.
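The weighing just described, likelihood of the dire consequence, effectiveness of the measure, and its cost, can be made concrete as a simple expected-value comparison. Every number below is a hypothetical placeholder chosen for illustration (only the $170 million deployment figure echoes the cost mentioned later for the scanners), not an actual estimate.

```python
# Toy expected-value comparison for a security measure. All figures are
# hypothetical placeholders, not real risk estimates.

def expected_net_benefit(p_attack, p_stopped_by_measure,
                         harm_if_attack, measure_cost):
    """Expected harm avoided by the measure, minus what the measure costs."""
    harm_avoided = p_attack * p_stopped_by_measure * harm_if_attack
    return harm_avoided - measure_cost

# Hypothetical inputs: a very unlikely attack, a measure that would
# sometimes stop it, and a large deployment cost.
net = expected_net_benefit(
    p_attack=1e-6,              # assumed chance of this attack type per year
    p_stopped_by_measure=0.5,   # assumed chance the measure stops it
    harm_if_attack=1e9,         # assumed dollar-valued harm if it succeeds
    measure_cost=170e6,         # assumed cost of deploying the measure
)

print(net)  # negative: on these assumptions, the measure is not justified
```

The point of the sketch is not the particular numbers but the structure: a dire "what if" only justifies a measure when probability times effectiveness times averted harm actually exceeds the measure's full cost, including the cost to rights, which is the hardest term to price.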
Naturally enough, the assessment needs to be done on the basis of a consistently applied principle or set of principles. Of course, the principle of relevant difference can be used to justify differences, but this requires showing how the differences actually make a difference.
In the case of scans, they could possibly prevent an underwear bomb from being brought aboard. There is a chance that such an attack might be tried again. As such, there is a non-zero chance that the scanners could prevent harm being done. However, the odds of such an attempt are most likely extremely low. After all, there has been only one attempt.
The body scans and pat downs clearly infringe on basic privacy rights such as the right/liberty not to be touched and the right/liberty not to have people see one naked. While these rights can be justly violated or set aside, this requires proper justification. There is also the fact that there have been some rather unpleasant incidents (such as the urine bag incident and images being saved from the scans) that indicate that these methods are not without their costs. And, of course, there is the actual cost of the machines used in scanning.
Weighing the harms and benefits seems to lead to the conclusion that the scanners and pat downs are not justified.
However, the “what if” gambit can still be played. “What if the scans stop and a plane gets blown up! What would you say then, Dr. cost-benefit analysis?”
What I would say is, of course, that such an incident would be horrible. However, I must wonder what sort of principle the person making the “what if” gambit is using. If the principle is that we can violate rights and expend $170 million+ to provide some possible security/safety against a specific sort of very unlikely occurrence, then I would hold the person to applying the principle consistently.
After all, there are plenty of likely threats and dangers out there that could be reduced by infringing people’s rights or spending $170 million. For example, the right to drive cars could be taken away in the interest of safety (“What if we didn’t ban cars and someone got run over! What would you say then?”). People would then have to walk or bike (or use other means) which would also make them healthier. Many people die each year from traffic incidents and even more die from poor health. This would address both threats. Also, the economy could be boosted by selling bikes, skates, running gear and other such things. Redoing the infrastructure and creating more public transport would also create jobs.
As another example, we know that oil wells can blow up, kill people and create environmental problems ( “What if an oil well blows up, kills people and pollutes the sea! What would you say then?”). If we are justified in using scanners to try to prevent an underwear bomb attack, then we would seem to be far more justified in banning oil wells and replacing them with alternative energy sources.
As a third example, consider guns. Sure, people have a right to keep and bear arms. However, look at all the gun deaths. “What if someone took a gun and killed some people! What would you say then?” Since no one has been killed by an underwear bomb and lots of people have been killed by guns, if we can infringe on rights to protect people from the incredibly low possibility of an underwear bomb, then we surely can infringe on rights to protect people from guns.
As a final example, we could keep people safer by putting cameras everywhere and on everyone. “What if someone committed a crime that could have been prevented by cameras! What would you say then?” While this would violate the right to privacy, if security trumps privacy then this would seem to be fine.
Naturally, I am willing to tolerate oil wells (for now), I am willing to tolerate cars, I like guns, I’m against body scans, and I’m against living in a panopticon. However, this is because I contend that it is acceptable to tolerate a degree of risk in order to maintain rights.
While I am sometimes accused of being “soft on terror” because of my views of the war on terror (or whatever it is called now) in general and airport security in particular, I consider my approach to be a rational one. Since I am often cast as an “intellectual”, I feel somewhat obligated to do the intellectual thing and present a principle, rather than simply taking a view based on how I feel about one thing while holding an inconsistent view on a similar thing just because I happen to feel differently about it.
My general principle for security is that a security method should be assessed based on the effectiveness of the method, the probability of the threat the method is supposed to counter, the degree to which it violates or infringes on legitimate rights/liberties, the relevant consequences, and the cost of the method. As such, this is a cost-benefit analysis. If a method counters a likely threat effectively and does so without a disproportionate violation of rights/liberties and cost, then the method would seem to be acceptable. Otherwise, there would be reasonable grounds to reject the method.
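Purely as an illustration, the sort of weighing described above can be sketched as a toy calculation. This is my own sketch, not an exact formula from the text, and every weight and input below is hypothetical:

```python
# A toy sketch of the cost-benefit principle described above: score a
# security method on effectiveness, threat probability, rights
# infringement, and monetary cost, and accept it only if the expected
# benefit outweighs the burdens. All numbers here are hypothetical.

def assess_method(effectiveness, threat_probability, rights_infringement, cost):
    """Return True if the method seems acceptable under the principle.

    effectiveness, threat_probability, rights_infringement: 0.0 to 1.0
    cost: normalized 0.0 to 1.0 (1.0 = prohibitively expensive)
    """
    # Expected benefit: how likely the threat is, times how well the
    # method actually counters it.
    expected_benefit = effectiveness * threat_probability
    # Burden: the rights violation plus the cost of the method.
    burden = rights_infringement + cost
    return expected_benefit > burden

# Full body scanners: possibly effective, but the threat is very
# unlikely, the privacy violation serious, and the machines costly.
scanners_ok = assess_method(effectiveness=0.8, threat_probability=0.001,
                            rights_infringement=0.7, cost=0.5)
print(scanners_ok)  # the toy principle rejects the scanners: False
```

Under these (hypothetical) inputs the scanners fail the test, while a method that effectively counters a likely threat at little cost would pass; the point is only that the same weighing is applied to every method, not that these particular numbers are correct.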
Obviously, I do not have an exact formula and specific methods can be subject to reasonable debate. For example, I think that the full body scans could be effective, that the threat they counter is very unlikely, that the method violates privacy rights too much, and the scanners are too expensive. As such, I am against the full body scanners. However, all these points can be argued.
As another example, I am opposed to the employment of 3,000 (or so) “behavior detection officers.” While I suppose that it is good that these folks are employed, they seem to be rather ineffective: of the 266,000 referrals made since 2006, only 0.7% have even led to arrests. Hardly a high success rate for the cost. Given that “behavior detection” is, at best, an infant science, this is hardly surprising. As such, my view is that this is not a wise use of limited resources. Naturally, this is subject to debate as well.
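A quick back-of-the-envelope check of the figures cited above (266,000 referrals since 2006, with 0.7% leading to arrests) shows just how small the yield is:

```python
# Arithmetic check of the "behavior detection" numbers cited above:
# 266,000 referrals since 2006, of which only 0.7% led to arrests.
referrals = 266_000
arrest_rate = 0.007  # 0.7%

arrests = referrals * arrest_rate
print(round(arrests))  # roughly 1,862 arrests out of 266,000 referrals
```

In other words, well over 99% of referrals led to nothing, which is the basis for the low-success-rate complaint.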
One thing I have found rather interesting about security is that many people seem to operate on at least two standards: one is for things like the war on terror and the other is for almost everything else.
For example, someone who might balk at a law that prevents parents from smoking in the car with their kids (thus putting their kids at risk for various serious health problems) might think that full body scans and pat downs are acceptable because they help keep us safe from a threat (however incredibly unlikely the threat might be). This, however, seems inconsistent. After all, if the state has the right to violate rights to counter threats, then this right would seem to apply to both situations.
As another example, someone who is opposed to the state getting involved in health care (even though lack of health insurance leads to many deaths), restricting pollution (even though pollution is harmful), or regulating business (even though many businesses have shown an unrelenting tendency to behave badly, such as acting in ways that wrecked the economy) might be fine with things like enhanced interrogation, secret prisons, and assassinations. This, however, seems inconsistent. After all, if the state is in the business of keeping us safe, then this should apply to keeping us safe not only from terrorists but also from diseases, pollution, and dangerous business practices.
In my own case, I use my principle consistently to assess whether a security method is acceptable or not. So, for example, I assess state regulation of business based on the effectiveness of the method, the likelihood of harm, the possible violation of rights/liberties, and the cost. In light of the catastrophic damage done to the economy that can be causally linked to business practices, it seems reasonable to impose regulations on such behavior. Letting business regulate itself in the hopes that it will act responsibly or be “corrected” by market forces is on par with removing all airport security and hoping that the terrorists will self-regulate or that the invisible hand will sort things out. The fact of the matter is that bad behavior generally requires an active counter.
Of course, the counter has to be weighed against the rights and liberties it infringes upon. So, for example, business folks do have rights and liberties that should be taken into account. Also, there can be relevant consequences in regards to limiting business too much. As some folks argue, business folks need a degree of freedom in order to make profits and keep the economy going. Likewise, the way people who travel by air can be treated should be limited by their legitimate rights.
The latest additions to America’s Security Theater are the full body scan and the full body pat down. The scanners provide images of what is under the passenger’s clothes (including the passenger) and this is regarded by some as an invasion of privacy. Concerns have also been raised about the health effects of being exposed to the radiation emitted by the scanners. Passengers who would prefer to avoid this process can elect to have a TSA agent engage in a full body pat down. Not surprisingly, some people are concerned that this violates privacy rights.
While I am not a constitutional lawyer, this does seem to be a violation of the legal rights spelled out in the Fourth Amendment of the Constitution:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
Of course, the legality of this is something that the lawyers will need to hash out, perhaps before the Supreme Court. As a philosopher, my main concern is with whether these practices are justified or not.
The stock argument for these practices is to contend that they are needed to keep people safe and are thus justified. The basic principle, that rights can be set aside because of security needs, is a sound one. But, of course, whether a specific application of the principle is warranted or not is another matter. Oversimplifying things quite a bit, a good case can be made for suspending (or violating) rights by showing that this suspension is required to avoid or prevent harms. To use the stock example, the right of free expression does not apply to yelling “fire” in a crowded theater. Likewise it could be argued that the right to privacy does not apply when the safety of passengers is at stake. Thus, while having strangers gazing at an image of your body or running their hands over your (once) private regions might seem like violations of your privacy, they are merely legitimate means of keeping you (and others) safe when you fly.
An obvious reply to this argument is that the appeal to security is not a magical trump card that justifies setting aside rights merely because an increase in safety has been claimed. Rather, the burden of proof rests upon the state to show that the suspension of rights is justified. In short, it needs to be shown that the security gained or the harm avoided justifies suspending or violating rights. While people will disagree on this matter, it seems reasonable to expect that the defense of the body scans and pat downs should be able to show that they are likely to deter harms significant and likely enough to warrant such clear intrusions into the privacy of passengers.
This does not seem to be the case. The threats that these methods would counter seem to be rather unlikely to occur and just as likely to be caught by other less intrusive security measures that predate the new procedures. As such, there seems to be a lack of adequate justification for these practices.
When pressed, defenders of the scanners and pat downs tend to point to the infamous underwear bomber. What if, they ask with righteous indignation, an underwear bomber got on board and blew up a plane? Surely, they contend, this possibility justifies these measures.
My first reply is that there was one attempt to use the underwear bomb and it failed. There were no other attempts even before the scanners and pat downs were implemented. As such, they provide a defense against an attack that was tried once and failed. It seems odd to expend so much money and violate privacy rights to defend against such an unlikely and feeble sort of attack.
It might be countered that the procedures are based on the principle that any potential threat must be countered, even if the counter is costly and violates rights. After all, one might say, we are at war.
I have two replies to that counter. First, this principle would justify even more blatant violations of privacy. To use an obvious example, a terrorist could swallow a condom containing explosives or insert one into his rectum (drugs are smuggled this way, so why not explosives?). Since these methods of attack are possible, this principle would justify forcing people to expel the contents of their stomachs and be subject to cavity searches before flying. However, I suspect even Homeland Security would (at least for now) balk at these procedures. Yet, while they are more invasive than the current procedures, they are justified by the same principle that allows privacy rights to be suspended for even minuscule gains in security. This, I think, shows that the scans and pat downs are also unjustified.
Second, consistency would seem to require that this principle be applied across the board. After all, it would be inconsistent to have such strict standards to protect people from underwear bombs while allowing other more likely dangers to remain unchecked. While terrorists do try to kill people, they do not kill people any more dead than anything else that kills people. So, for example, people should lose their right to drive cars. After all, thousands of people are killed each year by vehicles. As another example, air and train travel should be banned. After all, if people must be protected from the incredibly unlikely threat of underwear bombs, then they must surely be protected against the dangers of plane and train crashes. Naturally, environmental threats from companies must also be dealt with on the basis of this principle. For example, since the drilling of oil could result in an explosion and oil leak, all drilling must be stopped to keep people safe.
Of course, it would probably be argued that it would be absurd to ban planes, trains and automobiles. Sure, one might argue, people are killed in accidents and even homicides involving them. But such levels of danger are tolerable because of the right of people to travel and the economic necessity of such transport. Naturally, I would argue that if such dangers can be tolerated on this basis, then it would seem reasonable to tolerate the minuscule increase in risk that discontinuing scans and pat downs might create.
A likely reply to this is to restate that if a single underwear bomb blows up a plane or kills some people, then my argument would be shown to be horribly mistaken. However, this is on par with saying that if a single person is killed in a vehicular homicide, then the argument that cars should not be banned has been shown to be horribly mistaken. Or, to use another example, that if one oil well suffers an accident, then oil should be banned immediately. This seems absurd.
Thus, the full body scans and pat downs do not seem to be adequately justified and should be discontinued.
Since I am not fond of heights or delays I am not a big fan of flying. I am also not fond of being irradiated or groped by strangers. While this was generally not a problem with past flights, the TSA (Theater of Security in America) has added them to our travel routine.
Starting soon, you will get a choice between a full body scan or having your (once) private areas handled by a TSA agent. While I am not overly worried about the radiation levels of the scanners (although this does raise some legitimate medical concerns), my main concern is over how this seems to violate well established rights to privacy.
A full body scan or a thorough pat down seems to violate some basic legal and moral privacy rights, namely the right not to have other people see beneath your clothing or place hands on your body (especially your groin area). As such, these new techniques seem to be morally incorrect and most likely unconstitutional.
It might be replied that these scans and probings are with our consent and we have the option not to fly (after all, people can drive, take the bus, run, walk or bike). Of course, this is a rather forced consent: you can be scanned/groped or miss that crucial business meeting, spend hours driving, or take a ship when you need to cross an ocean. As such, the idea that we consent to this treatment is rather absurd.
It might be argued that the police do search people using more invasive methods and this has stood up legally and morally. Even granting this, the police use of such techniques is justified on the grounds that the person is suspected of a crime that warrants such a search or that the context (such as being in prison) justifies it. In the case of flying, the passengers are not suspected of a crime, nor would it seem that they are in a context that justifies such treatment. In short, the intrusion does not seem to be adequately justified.
But, someone might argue, these searches are justified because they will (drum roll) protect us from the terrorists™. After all, the underwear bomber showed that terrorists are not above hiding bombs in the groin region. Hence, the TSA must be able to go to the groin to keep us safe. Never mind that the underwear bomber failed. Never mind that there is no indication that this sort of strategy has been tried again or ever will be tried again. Never mind that the scanners will cost a lot of money. Never mind that people will be humiliated by this treatment. Never mind that privacy is being tossed aside. What matters is that we will be made safer against the tiny possibility that some terrorist will try to hide a bomb in his/her underwear or bra. Obviously, one might say, the microscopic increase in safety is well worth the cost.
Laying aside the obvious question of whether the cost is worth the gain, there is also the question of whether these methods will make us safer. Interestingly, as many others have pointed out, the Israelis do not use these methods. Of course, the Israelis might be in error. However, I am inclined to think that these methods will not prove effective. One obvious reason is that the existing techniques will tend to find (or not find) the same sorts of things that these invasive methods would find. Another obvious reason is that these methods do nothing in regards to luggage or cargo, which seem to be more likely avenues by which dangerous things might be brought on board. In short, it seems that there will be little if any gain in security. As such, it does not seem worth violating privacy or funding the scanners.
Also, consider this: there are all sorts of places inside the human body where things can be hidden (drug traffickers do this). A terrorist could have an explosive enema device. A female terrorist could have bombs implanted in her breasts. A person could swallow a bomb. To defend against these, we clearly would need cavity searches and penetrating x-rays (perhaps even MRIs). For the folks who insist that we need pat downs to be safe, think about the TSA sticking a finger in your rectum or giving your breasts a thorough work out to make sure they are real. That would make us even safer. Demand it next time you fly.
A final factor worth considering is that people already hate to fly and this will no doubt increase the hate. While people will still travel (because they must), I suspect that some people will rightfully refuse to be subject to these needlessly intrusive techniques. I was already sick of the security theater and adding a peep show and grope fest to it merely makes me hate it even more. I suspect other people think the same way and that at least some of them will decide to stop flying until this stupidity is undone.
I do wonder why these methods are being implemented. After all, there seems to be no need for them and they are needlessly invasive. Jokingly, I might even suggest that the government folks just enjoy screwing with us (“Hey, I got them to take off their shoes. Top that! I will, I’ll have them groped!”). Or maybe they think that we will feel “safer” by having ineffective security measures inflicted on us. Or maybe the Tea Party is right: the state just wants to get in our business (or at least grope it) and impose tyranny.
Perhaps it is a good thing that the Republicans and Tea Party made gains in the recent election. Surely, with their small government views and their cries about getting government off our back, they will leap into action to get the hand of the state off our groins. This would be a clear way for them to prove that they mean what they say and that they are not just blowing tea steam. We can, as the Israelis have shown, have effective security without such invasions. As such, they cannot have recourse to the excuse of security to justify the violations of liberty. So, put that in your teacups and drink it. Or don’t and STFU about tyranny and liberty.
I am willing to admit that if these methods were effective and if there were a sufficiently high probability of a terrorist attack using methods that could be detected by them, then I would be willing to change my mind. However, this is not the case. So, I call on the folks in charge to change these rules. If not, well, I am sure that the airlines do not need my money nor the politicians my vote.
I say that enough is quite enough. Our founders said “don’t tread on me.” Now, I say, “don’t grope me.”