A federal appeals court ruled in May 2015 that the NSA’s bulk collection of domestic calling data is illegal. While such bulk data collection would strike many as blatantly unconstitutional, the constitutional question has not yet been addressed by the courts, though that is perhaps just a matter of time. My intent is to address the general issue of bulk domestic data collection by the state in a principled way.
When it comes to the state (or, more accurately, the people who compose the state) using its coercive force against its citizens, there are three main areas of concern: practicality, morality and legality. I will be addressing this matter within the context of the state using its power to impose on the rights and liberties of the citizens for the purported purpose of protecting them. This is, of course, the stock problem of liberty versus security.
In the case of practicality, the main question is whether or not the law, policy or process is effective in achieving its goals. This, obviously, needs to be balanced against the practical costs in terms of such things as time and resources (such as money).
In the United States, this illegal bulk data collection has been going on for years. To date, there seems to be but one public claim of success involving the program, which certainly indicates that the program is not effective. When the cost of the program is considered, the level of failure is appalling.
In defense of the program, some proponents have claimed that there have been many successes, but these cannot be reported because they must be kept secret. In fairness, it is certainly worth considering that there have been such secret successes that must remain secret for security reasons. However, this defense can easily be countered.
In order to accept this alleged secret evidence, those making the claim that it exists would need to be trustworthy. However, those making the claim have a vested interest in this matter, which certainly lowers their credibility. To use an analogy, if I were receiving huge sums of money for a special teaching program and could show only one success, but said there were many secret successes, you would certainly be wise to be skeptical of my claims. There is also the fact that, thanks to Snowden, it is known that the people involved have no compunctions about lying about this matter, which further lowers their credibility.
One obvious solution would be for credible, trusted people with security clearance to be provided with the secret evidence. These people could then speak in defense of the bulk data collection without mentioning the secret specifics. Of course, given that everyone knows about the bulk data collection, it is not clear what relevant secrets could remain that the public simply cannot know about (except, perhaps, the secret that the program does not work).
Given the available evidence, the reasonable conclusion is that the bulk data collection is ineffective. While it is possible that there is some secret evidence, there is no compelling reason to believe this claim, given the lack of credibility on the part of those making this claim. This alone would suffice as grounds for ceasing this wasteful and ineffective approach.
In the case of morality, there are two main stock approaches. The first is a utilitarian approach in which the harms of achieving the security are weighed against the benefits provided by the security. The basic idea is that the state is warranted in infringing on the rights and liberties of the citizens on the condition that the imposition is outweighed by the wellbeing gained by the citizens—either in terms of positive gains or harms avoided. This principle applies beyond matters of security. For example, people justify such things as government mandated health care and limits on soda sizes on the same grounds that others justify domestic spying: these things are supposed to protect citizens.
Bulk data collection is, obviously enough, an imposition on the moral right to privacy—though it could be argued that this harm is fairly minimal. There are, of course, also the practical costs in terms of resources that could be used elsewhere, such as in health care or other security programs. Weighing the one alleged success against these costs, it seems evident that the bulk data collection is immoral on utilitarian grounds—it does not do enough good to outweigh its moral cost.
Another stock approach to such matters is to forgo utilitarianism and argue the ethics in another manner, such as appealing to rights. In the case of bulk data collection, it can be argued that it violates the right to privacy and is thus wrong—its success or failure in practical terms is irrelevant. In the United States people often argue this way when it comes to gun rights—the right outweighs utilitarian considerations about the well-being of the public.
Rights are, of course, not absolute—everyone knows the example of how the right to free expression does not warrant slander or yelling “fire” in a crowded theater when there is no fire. So, it could be argued that the right of privacy can be imposed upon. Many stock arguments exist to justify such impositions and these typically rest either on utilitarian arguments or arguments showing that the right to privacy does not apply. For example, it is commonly argued that criminals lack a right to privacy in regards to their wicked deeds—that is, there is no moral right to secrecy in order to conceal immoral deeds. While these arguments can be used to morally justify collecting data from specific suspects, they do not seem to justify bulk data collection—unless it can be shown that all Americans have forfeited their right to privacy.
It would thus seem that the bulk data collection cannot be justified on moral grounds. As a general rule, I favor the view that there is a presumption in favor of the citizen: the state needs a moral justification to impose on the citizen and it should not be assumed the state has a right to act unless the citizen can prove differently. This is, obviously enough, analogous to the presumption of innocence in the American legal system.
In regards to the legality of the matter, the specific law in question has been addressed. In terms of bulk data collection in general, the answer seems quite obvious. While I am obviously not a constitutional scholar, bulk data collection seems to be a clear and egregious violation of the 4th Amendment: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
The easy and obvious counter is to point out that I, as I said, am not a constitutional scholar or even a lawyer. As such, my assessment of the 4th Amendment is lacking the needed professional authority. This is, of course, true—which is why this matter needs to be addressed by the Supreme Court.
In sum, there seems to be no practical, moral or legal justification for such bulk data collection by the state and hence it should not be permitted. This is my position as a philosopher and the 2016 Uncandidate.
In February 2014 Twitter made all its tweets available to researchers. As might be suspected, this massive data set is a potential treasure trove for researchers. While one might picture researchers going through the tweets for the obvious content (such as what people eat and drink), this data can be mined in some potentially surprising ways. For example, the spread of infectious diseases can be tracked via an analysis of tweets. This sort of data mining is not new—some years ago I wrote an essay on the ethics of mining data and used Target’s analysis of data to determine when customers were pregnant (so as to send targeted ads). What is new about this is that all the tweets are now available to researchers, thus providing a vast heap of data (and probably a lot of crap).
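The disease-tracking idea mentioned above can be sketched in miniature. The following is a hypothetical illustration, not any researcher’s actual method: it simply counts, per day, the tweets that mention symptom words, the kind of crude signal a real study would refine. The tweets, dates and word list are all invented for the example.

```python
from collections import Counter
from datetime import date

# Hypothetical sample of (date, tweet text) pairs; real research would
# draw on the full Twitter archive, not a handful of made-up tweets.
tweets = [
    (date(2014, 1, 6), "home sick with the flu, fever all night"),
    (date(2014, 1, 6), "great run this morning"),
    (date(2014, 1, 7), "got my flu shot too late, cough and fever"),
    (date(2014, 1, 7), "coffee and grading papers"),
    (date(2014, 1, 8), "the whole office has a cough"),
]

# Illustrative symptom vocabulary; a real study would use a richer list.
SYMPTOM_WORDS = {"flu", "fever", "cough"}

def symptom_mentions_by_day(tweets):
    """Count tweets per day that mention at least one symptom word."""
    counts = Counter()
    for day, text in tweets:
        words = set(text.lower().split())
        if words & SYMPTOM_WORDS:
            counts[day] += 1
    return counts

counts = symptom_mentions_by_day(tweets)
for day in sorted(counts):
    print(day, counts[day])
```

A rising daily count would then be read as a (very rough) proxy for the spread of illness in the population of tweeters.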
As might be imagined, there are some ethical concerns about the use of this data. While some might suspect that this creates a brave new world for ethics, this is not the case. While the availability of all the tweets is new and the scale is certainly large, this scenario is old hat for ethics. First, tweets are public communications that are on par morally with yelling statements in public places, posting statements on physical bulletin boards, putting an announcement in the paper and so on. While the tweets are electronic, this is not a morally relevant distinction. As such, researchers delving into the tweets is morally the same as a researcher looking at a bulletin board for data or spending time in public places to see the number of people who go to a specific store.
Second, tweets can (often) be linked to a specific person and this raises the stock concern about identifying specific people in the research. For example, identifying Jane Doe as being likely to have an STD based on an analysis of her tweets. While Twitter provides another context in which this can occur, identifying specific people in research without their consent seems to be well established as being wrong. For example, while a researcher has every right to count the number of people going to a strip club via public spaces, to publish a list of the specific individuals visiting the club in her research would be morally dubious—at best. As another example, a researcher has every right to count the number of runners observed in public spaces. However, to publish their names without their consent in her research would also be morally dubious at best. Engaging in speculation about why they run and linking that to specific people would be even worse (“based on the algorithm used to analyze the running patterns, Jane Doe is using her running to cover up her affair with John Roe”).
One counter is, of course, that anyone with access to the data and the right sorts of algorithms could find out this information for herself. This would simply be an extension of the oldest method of research: making inferences from sensory data. In this case the data would be massive and the inferences would be handled by computers—but the basic method is the same. Presumably people do not have a privacy right against inferences based on publicly available data (a subject I have written about before). Speculation would presumably not violate privacy rights, but could enter into the realm of slander—which is distinct from a privacy matter.
However, such inferences would seem to fall under privacy rights in regards to the professional ethics governing researchers—that is, researchers should not identify specific people without their consent whether they are making inferences or not. To use an analogy, if I infer that Jane Doe and John Roe’s public running patterns indicate they are having an affair, I have not violated their right to privacy (assuming this also covers affairs). However, if I were engaged in running research and published this in a journal article without their permission, then I would presumably be acting in violation of research ethics.
The obvious counter is that as long as a researcher is not engaged in slander (that is, intentionally saying untrue things that harm a person), then there would be little grounds for moral condemnation. After all, as long as the data was publicly gathered and the link between the data and the specific person is also in the public realm, then nothing wrong has been done. To use an analogy, if someone is in a public park wearing a nametag and engages in specific behavior, then it seems morally acceptable to report that. To use the obvious analogy, this would be similar to the ethics governing journalism: public behavior by identified individuals is fair game. Inferences are also fair game—provided that they do not constitute slander.
In closing, while Twitter has given researchers a new pile of data, the company has not created any new moral territory.
Google’s entry into the computer business has been a mixed one. While certain Chromebooks have been selling quite well, they are still a minute fraction of the laptop market. One of Google’s latest endeavors in the realm of hardware is the famous Google Glasses. While the glasses have been the focus of considerable attention, it remains to be seen whether they will prove to be a success or an interesting failure.
Since I rather like gadgets, the idea of a wearable computer is certainly appealing, if only for the science fiction aspect. After all, the idea of such technology is old news in science fiction. In my own case, I would most likely use such glasses for running and driving. People who know me know how important navigational technology is for me to have a reasonable chance of getting from one point to another. As such, if the Google glasses can handle this, I might consider getting a pair. Of course, I am also known for being frugal, so the glasses would have to be reasonably priced.
While I like the idea of Google Glasses, there are some practical concerns regarding this technology. One obvious concern is the distraction factor. Mobile phones and other devices are infamous for their distracting power and it seems reasonable that a device designed to sit right in front of the face would have even more distracting power than existing mobile devices. This distracting power is of concern primarily for safety, especially in the context of driving. However, there is also the concern that people will be distracted from the other people physically near them.
Another practical concern is the matter of whether or not people will actually accept the glasses. One factor is that people generally prefer to not wear glasses. While my vision is reasonably good, I do have prescription glasses. However, I find wearing glasses annoying enough that I only wear them when I really want or need to see things sharply. As such, I usually only wear them while playing video games and watching movies at the theater. Lest anyone be worried, I can drive just fine without them. People can, of course, get accustomed to glasses, but there is the question of whether or not people will find the glasses compelling enough to wear.
There is also a somewhat philosophical issue in regards to the glasses, namely the concern about privacy. Or, to be more accurate, concern about two types of privacy. These two types are defined by which side of the glasses a person happens to be on.
In one direction, the privacy concerns relate to the folks that the glasses are pointing towards. Like almost all modern smart phones, the Google Glasses have a camera and, as such, raise the same basic concerns about privacy. However, the Google device broadens the concern. Since the glasses are glasses, people might not notice that they have a camera pointed at them. Also, since the glasses are worn, it is more likely for the glasses to be pointing at people relative to other cameras. After all, a person has to take out and hold a mobile phone to use the camera effectively. But, with the glasses, the camera will be easily and automatically pointing at the outer world.
In the case of the public context, it is rather well established that people do not have an expectation of privacy in public. This seems reasonable since the public context is just that, public rather than private. However, it can be contended that many of the notions governing the concepts of privacy have become obsolete because of changing technology. As such, there perhaps needs to be a reconsideration of the expectations in the public context. These expectations might be taken as including an expectation not to be filmed or photographed, even casually as a person saunters by wearing their Google Glasses. In addition to the question of what the person using the glasses might do, there is also the concern about what Google will do, especially in light of past issues involving the Google vehicles cruising neighborhoods and gathering up data.
Obviously, there are also concerns about people using the devices more nefariously in contexts in which people do have an expectation of privacy.
In the other direction, there are the privacy concerns relating to the user. What will Google know about the activities and location of the wearer and how will this information be used? Obviously enough, Google would be able to gather a great deal of information about the user of a pair of Google Glasses and Google is rather well known for being able to use such data.
Interestingly, a person wearing a pair of Google glasses could end up being both a spy for and spied upon by Google.
The revelations about the once secret Prism program of the National Security Agency have revitalized the old debates about liberty versus security and the individual versus the state. Obviously enough, there are many legal and ethical issues here.
On the face of it, Prism was quite legal, at least in the United States. That is, the program went through all of the legally established procedures for such a program. It remains to be seen, however, whether it is actually constitutional. While questions of legality and constitutionality are interesting, I’ll focus on some of the ethical concerns.
Not surprisingly, the main moral defense of Prism and other programs is based in utilitarianism: any (alleged) wrongs done by intruding into privacy are morally offset by the greater good done by increasing security. The Obama administration has made vague claims that the program has prevented attacks and there is the claim that it will prevent attacks in the future. However, as I have noted before, these claims are coming from an administration that hid the program behind lies. These past deceits, along with the fact that they are interested parties, clearly make the administration a dubious source for claims about the efficacy of Prism. However, I do agree that Prism can potentially be morally justified on these grounds and this would be contingent on it doing more good than harm.
The alleged good of such a program can be assessed in terms of the attacks prevented and estimates of the damage that would have been done if such attacks had succeeded. Naturally, the importance of Prism in such prevention must also be considered. After all, if other means would have prevented the attack anyway, then Prism’s credit should be adjusted appropriately.
There are various ways to argue that Prism and similar programs are wrong. One option is to use the same method as can be used to defend it, namely an assessment of the consequences of the program. In order to show that the program is wrong, what would be needed would be reasons to believe that the harms inflicted by the program exceed the benefits. As noted above, the alleged benefits involve increased security. However, the only evidence I have for the effectiveness of the program is the claims made by the people who are endeavoring to defend it. In regards to the harms done, there seem to be a variety of actual and potential harms.
I know that my view that programs like Prism are wrong stems from purely emotional causes. First, I was quite the WWII buff as a kid and I was taught that only organizations like the Gestapo engaged in such broad spying on the citizens of the state. Second, I grew up during the Cold War and well remember being told that the communist countries were bad because they spied on their citizens, something we would not do in the West. That sort of thing was for the secret police of dictatorships, not democratic states. These are, of course, naive views and based in emotions rather than logic. However, there seems to be something to the notion that a difference between the good guys and the bad guys does involve the willingness to gather intelligence about citizens.
One harm is that the secrecy and nature of the program seems to have increased the distrust of the citizens for the United States government. It has also damaged the United States’ image around the world. Of course, this sort of damage can be considered relatively minor and it can be claimed that the fickle focus of public attention will shift, especially if some celebrity scandal or drama catches the public eye.
Another category of harms arises from the invasion of privacy itself. These harms could include psychological harms regarding the violation of privacy and fears about what the state might do with the information. As was evident in the debate over gun control, people can be quite agitated and dismayed by even the rumor that the state might track firearm purchases. While the Prism program does not (directly) track guns (as far as we know) it certainly gathers a vast amount of information about people.
A third category of harms involves the potential harms. One obvious worry is that even if the information is being used for only legitimate purposes now, there is the possibility that the information could be misused in the future. Or is being misused now. Some people were quite upset by the IRS asking certain groups for more information and with the Department of Justice gathering information about reporters. Obviously, whatever harms occurred in those cases would be vastly multiplied by a program like Prism. After all, Prism is getting into everyone’s business.
There are, of course, other harms that can be considered.
A second option is to go with a rights based approach to the matter. If there is a moral right to privacy, then Prism would certainly seem to intrude on that right (if not violate it). Naturally, rights can be limited on moral grounds. The usual example is, of course, that the freedom of speech does not allow anyone to say anything at any time; the right is limited by concerns about harms. Likewise for the right to privacy (if there is such a right).
The obvious challenge with an appeal to a right is to argue that there is such a right. In the case of legal rights, this is easy enough: one can just point to the relevant laws that specify the legal rights. When it comes to moral rights, it is a bit trickier. Classic rights theorists like John Locke argued for rights to life, liberty and property. A case can be made that certain privacy rights fall under the right to property. For example, it can be contended that my communications belong to me and if the state intercepts and stores them, the state is stealing my property. A case can also be made to put certain privacy rights under the right to liberty. For example, I should have the liberty of communication without the state restricting it by creating the fear that my communications can be intercepted and copied without the justification of legitimate suspicion of wrongdoing on my part.
In any case, it would be interesting to see a full development of privacy rights or at least a clear presentation of what is lost when privacy is intruded upon by programs like Prism.
Revelations of the United States government’s Prism Program have brought the matter of privacy into the spotlight. While it should be no surprise that the United States government is scooping up vast quantities of information from communication systems ranging from phones to the internet, the scope and nature of the collection has disturbed many people.
Not surprisingly, the Obama administration has defended Prism on two main grounds. The first is that the program is legal. That is, it went through all the proper secret processes in the dark places of the government. But, of course, mere legality does not make something right. There is also the legitimate worry that this legal program actually violates Constitutional rights.
I do not have any doubts that the program is legal; I am confident that it was properly guided through the dark caverns under the public government and legally set loose upon the world. As for its constitutionality, I am not fully reassured by the assurances that the data scooped up by Prism is being used in strict accordance with the Constitution.
The second is the usual line that it is necessary for national security. The idea is that certain rights need to be infringed upon in order to make us safer. This approach does have its appeal. This is because the limitation of rights can, in fact, make us safer. For example, limiting the right of people to sell contaminated food does make us safer. As another example, limiting the right to own certain weapons (like chemical weapons and grenades) does make us safer. As such, I do not reject the “it makes us safer” argument out of hand.
When considering this justification, there are two main concerns. The first is whether or not the limitation of the rights in question actually makes us safer. After all, while limiting a right can make us safer, this is not always the case. It would, of course, be a bad idea to restrict a right when doing so has no benefit. In the case of Prism, what would be needed would be proof that the program actually made us safer. This might include evidence of foiled plots and arrests of terrorists that resulted specifically from Prism. Naturally, I do not really expect such information to be forthcoming since the effectiveness of the program is no doubt a matter of national security and thus secret. However, I will consider the possibility that Prism did yield some positive results that could be used to justify what are claimed to be privacy violations.
The second concern is whether or not the safety gained is worth the cost resulting from the limitation (or violation) of the right in question. For example, we would be safer if each person had a tracking chip implanted into his body. If a person knows that her location is always known, then she would be less likely to engage in misdeeds and far easier to catch if she chose to act badly anyway. However, such implantation and tracking would seem to be an excessive violation of the right to privacy and hence would not seem to be worth the cost. In the case of Prism, a key question is whether or not the (alleged) gain in security is worth the cost paid in terms of the limitation or violation of rights.
The Obama administration has been quick to claim that the data gathered does not violate the right to privacy of the people that matter. If this is true, then perhaps the security gained is worth the price. However, there is the reasonable concern that this is not the case and it is certainly worrisome when the state engages in such massive data scooping.
For those not familiar with the phrase, “Big Data” is used to describe the acquisition, storage and analysis of large quantities of data. The search giant Google was one of the pioneers in this area, and Big Data has since developed into an industry worth billions of dollars. Big Data and its uses also raise ethical concerns.
One common use of Big Data is to analyze customer data so as to make predictions that would be useful in conducting targeted ad campaigns. Perhaps the most infamous example of this is Target’s pregnancy targeting. This Big Data adventure was a model of inductive reasoning. First, an analysis was conducted of Target customers who had signed up for Target’s new baby registry. The purchasing history of these women was analyzed to find patterns of buying that corresponded to each stage of pregnancy. For example, pregnant women were found to often buy lots of unscented lotion at the start of the second trimester. Once the analysis revealed the buying patterns of pregnant women, Target then applied this information to the buying patterns of its women customers. Oversimplifying things, they were essentially using an argument by analogy: inferring that women not known to be pregnant who had X, Y, and Z buying patterns were probably pregnant because women known to be pregnant had X, Y, and Z buying patterns. The women who were tagged as probably pregnant were then subject to targeted ads for baby products and this proved to be a winner for Target, other than some public relations issues.
One interesting aspect of this method is that it does not follow the usual model of predicting a person’s future buying behavior from his/her past buying behavior. An example of predicting future buying behavior based on past behavior would be predicting that I would buy Gatorade the next time I went grocery shopping because I have bought it consistently in the past. The analysis used by Target and other companies differs from this model by making inferences about the future behavior of customers based on their similarity to customers whose past buying behavior is known. For example, a store might see shifts in someone’s buying behavior that match other data from people starting to get into fitness and thus predict the person was getting into fitness. The store might then send the person (and others like her) targeted ads featuring Gatorade coupons because their models show that such people buy more Gatorade.
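The analogy-style inference described above can be sketched very simply: score a customer’s purchase history against a pattern learned from customers whose status is already known, and flag those who resemble it. This is a hypothetical illustration only; the item names, weights and threshold are invented for the example and are not Target’s actual model.

```python
# Invented weights standing in for a pattern learned from customers
# whose status (e.g., pregnancy) was already known.
PATTERN_WEIGHTS = {
    "unscented lotion": 0.4,
    "vitamin supplements": 0.3,
    "cotton balls": 0.2,
    "hand soap": 0.1,
}

def similarity_score(purchases):
    """Sum the weights of pattern items present in a purchase history."""
    return sum(PATTERN_WEIGHTS.get(item, 0.0) for item in set(purchases))

def flag_customers(histories, threshold=0.5):
    """Flag customers whose purchases resemble the known pattern."""
    return [name for name, items in histories.items()
            if similarity_score(items) >= threshold]

# Hypothetical purchase histories for two customers.
histories = {
    "customer_a": ["unscented lotion", "vitamin supplements", "bread"],
    "customer_b": ["hand soap", "bread", "milk"],
}
print(flag_customers(histories))
```

Here customer_a scores 0.7 and is flagged, while customer_b scores 0.1 and is not; the real systems use far more data and far more sophisticated statistics, but the underlying inference by analogy is the same.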
This method also has an interesting Sherlock Holmes aspect to it. The fictional detective was able to use inductive logic (although he was presented as deducing) to make impressive inferences from seemingly innocuous bits of information. Big Data can do this in reality and make reliable inferences based on what appears to be irrelevant information. For example, likely voting behavior might be inferred from factors such as one’s preferred beverage.
Naturally, Big Data can be used to sell a wide variety of products, including politicians and ideology. It also has non-commercial applications, such as law enforcement and political uses. As such, it is hardly surprising that companies and agencies are busily gathering and analyzing data at a relentless and ever growing pace. This certainly is cause for concern.
One ethical concern is that the use of Big Data can impact the outcome of elections. For example, by analyzing massive amounts of data, information can be acquired that would allow ads to be effectively crafted and targeted. Given that Big Data is expensive, the data advantage would tend to go to the side with the most money, thus increasing the influence of money on the outcome of elections. Naturally, the influence of money on elections is already a moral concern. While more spending does not assure victory, there is a clear connection between spending and success. To use but one obvious example, Mitt Romney was able to beat his Republican competitors in part by being able to outlast them financially and outspend them.
In any case, Big Data adds yet another tool and expense to political campaigning, thus making it more costly for people to run for office. This, in turn, means that those running for office will need even more money than before, thus making money an even greater factor than in the past. This, obviously enough, increases the ability of those with more money to influence the candidates and the issues.
On the face of it, it would seem unreasonable to require that campaigns go without Big Data. After all, it could be argued that this would be tantamount to demanding that campaigns operate in ignorance. However, the concerns about big money buying Big Data to influence elections could be addressed by campaign finance reform, which would be another ethical issue.
Perhaps the biggest ethical concern about Big Data is the matter of privacy. First, there is the ethical worry that much of the data used in Big Data is gathered without people knowing how the data will be used (and perhaps that it is even being gathered). For example, the customers at Target seemed to be unaware that Target was gathering such data about them to be analyzed and used to target ads.
While people might know that information is being collected about them, knowing this and knowing that the data will be analyzed for various purposes are two different things. As such, it can be argued that private data is being gathered without proper informed consent and this is morally wrong.
The obvious solution is for data collectors to make clear what the data will be used for, thus allowing people to make an informed choice regarding their private information. Of course, one problem that will remain is that it is rather difficult to know what sort of inferences can be made from seemingly innocuous data. As such, people might think that they are not providing any private data when they are, in fact, handing over data that can be used to make inferences about private matters.
If a business claims that they would be harmed because people would not hand over such information if they knew what it would be used for, the obvious reply is that this hardly gives them the right to deceive to get what they want. However, I do not think that businesses have much to worry about—Facebook has shown that many people are quite willing to hand over private information for little or nothing in return.
A second and perhaps the most important moral concern is that Big Data provides companies and others with the means of making inferences about people that go beyond the available data and into what might be regarded as the private realm. While this sort of reasoning is classic induction, Big Data changes the game because of the massive amount of data and processing power available to make these inferences, such as whether women are pregnant or not. In short, the analysis of seemingly innocuous data can yield inferences about information that people would tend to regard as private—or at the very least, information they would not think would be appropriate for a company to know.
One obvious counter to this is to argue that privacy rights are not being violated. After all, as long as the data used does not violate the privacy of individuals, then the inferences made from this data cannot be regarded as violating people’s privacy, even if the inferences are about matters that people would regard as private (such as pregnancy). To use an analogy, if I were to spy on someone and learn from this that she was an alcoholic, then I would be violating her privacy. However, if I inferred that she is an alcoholic from publicly available information, then I might know something private about her, but I have not violated her privacy.
This counter is certainly appealing. After all, there does seem to be a meaningful and relevant distinction between directly getting private information by violating privacy and inferring private information using public (or at least legitimately provided) data. To use an analogy, if I get the secret ingredient in someone’s prize recipe by sneaking a look at the recipe, then I have acted wrongly. However, if I infer the secret ingredient by tasting the food when I am invited to dinner, then I have not acted wrongly.
A reasonable reply to this counter is that while there is a difference between making an inference that yields private data and getting the data directly, there is also the matter of intent. It is, for example, one thing to infer the secret ingredient simply by tasting it, but it is quite another to arrange to get invited to dinner specifically so I can get that secret ingredient by tasting the food. To use another example, it is one thing to infer that someone is an alcoholic, but quite another to systematically gather public data in order to determine whether or not she is an alcoholic. In the case of Big Data, there is clearly intent to infer data that customers have not already voluntarily provided. After all, if the data had been provided, there would be no need to undertake an analysis in order to get the desired information. Thus, while the means do not involve a direct violation of privacy rights, they do involve an indirect violation—at least in cases in which the data is private (or at least intended to be private).
The solution, which would probably be rather problematic to implement, would involve setting restrictions on what sort of inferences can be made from the data on the grounds that people have a right to keep that information private, even if the means used to acquire it did not involve any direct violations of privacy rights.
A recent case raises questions about the ethics of reading a spouse’s email. The gist of the situation is that Leon Walker of Michigan faces the possibility of up to five years in prison for allegedly “hacking” into his wife’s email account (they are now divorced) and learning that she was having an affair with her second ex-husband. Michigan does have a law against “hacking” computers, programs or networks to get property “without authorization.” Applying this law to accessing a spouse’s email is seen by some legal experts as a stretch, but Leon Walker might very well face trial under this law.
Walker has offered two main defenses for his actions.
His first defense is that his wife had asked him to read her emails before and had given him the password.
If this is true, then it would certainly seem that she had granted him authorization to access her email. As such, he would seem to have acted neither illegally nor wrongly.
Of course, there is the question of whether or not he was acting under her authorization when he learned of her affair. While it is possible, it seems somewhat unlikely that she would be sending and receiving emails related to the affair while still authorizing her husband to read her email. If she did, in fact, remove her authorization, then a case could be made that he did break the law. Ethically, it could also be seen as a wrongful act. After all, being married does not grant a spouse carte blanche access to the other person’s private matters and this would seem to include email. To use an analogy, if someone allowed her husband to open a bill addressed to her, this would not grant him a right to open all her letters and read through them without her explicit permission.
While it seems reasonable to accept a presumption of privacy even with spouses, there is still the question of whether the right to privacy gives spouses a right to hide misdeeds (such as having an affair). This leads to Walker’s second argument.
After getting the emails, Walker passed on the information to his ex-wife’s first ex-husband. This man used the information to justify filing an emergency motion to get custody of his son (whom he had with Clara Walker, the woman in question). The second ex-husband had apparently once been arrested on a charge of domestic violence, and since Clara Walker was apparently having an affair with him, Leon Walker saw this as a matter of significant concern.
Walker likened his reading his ex-wife’s email to kicking down a door during a house fire. While this would be breaking in, it would be breaking in with the intent of saving people from harm.
This analogy does have a certain degree of appeal. After all, just breaking down someone’s door to steal their stuff would be a criminal (and most likely immoral) action. This would be analogous to hacking into a computer to, for example, steal credit card numbers. In contrast, kicking down a locked door when a house is on fire so as to save people would not be a criminal act nor a wrongful action. If Walker is right, then his reading his ex-wife’s email should not be considered criminal or unethical.
Of course, when a person kicks down the door of a burning house they know that it is on fire and they have to gain access to actually help people. In the case of the email, Walker would need to have clear signs of a “fire” and would need to have reason to believe that he had to “kick down the door” in order to help people. This is, of course, a factual matter. It could be the case that Walker had reason to believe that his wife was having an affair and that crucial information relating to the safety of others was locked behind the password (and could not be acquired via other non-intrusive means).
If this is the case, then Walker would seem to have acted in an acceptable manner. After all, a right to privacy does not seem to give a person a shield behind which they can conceal misdeeds or hide information relating to a possible danger to, for example, a child. In such a case, the person’s right to privacy would be violated and in this they would be wronged. However, the violation could be justified based on the nature of what was being concealed. After all, it would seem odd to say that a married person has a right to conceal evidence of her affair from her husband. He would certainly seem to have a moral right to know that.
In response, it could be argued that the right of a spouse (or ex-spouse) to know about such things does not extend to intruding into certain privacy rights, such as email. After all, while there is a certain appeal to thinking it was okay to get into someone’s email when they were having an affair, one must also consider all the cases in which the spouse is not having an affair. It would be odd to say that spouses should have the right to get into each other’s email, mail, and so on all the time because people have affairs.
Some legal experts and Leon Walker’s attorney are, of course, focusing on the legal aspect of the case. The law in question seems to have been intended to deal with cases in which someone has actually hacked into a computer or network and done damage or has stolen something.
While reading someone else’s email is an intrusion into that person’s privacy, it does not seem to fall under the law, at least as it is worded. After all, nothing seems to have been stolen from the woman and she can hardly claim that she was the damaged party when her affair was exposed.
It will be interesting to see how the case develops and what impact it has on the legality of the no doubt common practice of spousal snooping.
When I discuss various issues relating to safety and security, I generally take the view that we should have the minimum security needed to provide an effective defense. I also take into account the impact of such measures on rights and liberties while also giving positive and negative consequences their just due. This approach, obviously enough, means that my position on specific security/safety measures can be argued against on these various grounds.
Since I am against the use of full body scans and pat downs (at least as they are currently implemented), one way to argue against me is to consider the dire consequences of the dreaded “what if” scenario. “What if”, says the concerned person, “the scans are stopped and a terrorist takes out a plane with an underwear bomb that the scanners would have stopped?” Put in argument form, the idea is that the scanners and pat downs should be used because they have a chance of stopping such an attack. The added safety presumably overrides concerns about privacy rights, government intrusiveness, and potential harms to passengers (such as being humiliated in various ways).
On the face of it, “what if” scenarios are a legitimate consideration when it comes to security and safety. So, for example, when considering whether deep water drilling should be allowed or not, we should consider what would happen if another well failed. As another example, when deciding whether to lock my office door or not, I should consider what would happen if a dishonest person walked by and saw my computer and printer behind an unlocked door.
While “what if” scenarios are worth considering, merely presenting a dire possible consequence does not automatically justify a practice. After all, the likelihood of the dire consequence needs to be considered as does the likely effectiveness of the method and the cost it imposes.
Naturally enough, the assessment needs to be done on the basis of a consistently applied principle or set of principles. Of course, the principle of relevant difference can be used to justify differences, but this requires showing how the differences actually make a difference.
In the case of scans, they could possibly prevent an underwear bomb from being brought aboard. There is a chance that such an attack might be tried again. As such, there is a non-zero chance that the scanners could prevent harm being done. However, the odds of such an attempt are most likely extremely low. After all, there has been only one attempt.
The body scans and pat downs clearly infringe on basic privacy rights such as the right/liberty not to be touched and the right/liberty not to have people see one naked. While these rights can be justly violated or set aside, this requires proper justification. There is also the fact that there have been some rather unpleasant incidents (such as the urine bag incident and images being saved from the scans) that indicate that these methods are not without their costs. And, of course, there is the actual cost of the machines used in scanning.
Weighing the harms and benefits seems to lead to the conclusion that the scanners and pat downs are not justified.
However, the “what if” gambit can still be played. “What if the scans stop and a plane gets blown up! What would you say then, Dr. cost-benefit analysis?”
What I would say is, of course, that such an incident would be horrible. However, I must wonder what sort of principle the person making the “what if” gambit is using. If the principle is that we can violate rights and expend $170 million+ to provide some possible security/safety against a specific sort of very unlikely occurrence, then I would hold the person to applying the principle consistently.
After all, there are plenty of likely threats and dangers out there that could be reduced by infringing people’s rights or spending $170 million. For example, the right to drive cars could be taken away in the interest of safety (“What if we didn’t ban cars and someone got run over! What would you say then?”). People would then have to walk or bike (or use other means) which would also make them healthier. Many people die each year from traffic incidents and even more die from poor health. This would address both threats. Also, the economy could be boosted by selling bikes, skates, running gear and other such things. Redoing the infrastructure and creating more public transport would also create jobs.
As another example, we know that oil wells can blow up, kill people and create environmental problems ( “What if an oil well blows up, kills people and pollutes the sea! What would you say then?”). If we are justified in using scanners to try to prevent an underwear bomb attack, then we would seem to be far more justified in banning oil wells and replacing them with alternative energy sources.
As a third example, consider guns. Sure, people have a right to keep and bear arms. However, look at all the gun deaths. “What if someone took a gun and killed some people! What would you say then?” Since no one has been killed by an underwear bomb and lots of people have been killed by guns, if we can infringe on rights to protect people from the incredibly low possibility of an underwear bomb, then we surely can infringe on rights to protect people from guns.
As a final example, we could keep people safer by putting cameras everywhere and on everyone. “What if someone committed a crime that could have been prevented by cameras! What would you say then?” While this would violate the right to privacy, if security trumps privacy then this would seem to be fine.
Naturally, I am willing to tolerate oil wells (for now), I am willing to tolerate cars, I like guns, I’m against body scans, and I’m against living in a panopticon. However, this is because I contend that it is acceptable to tolerate a degree of risk in order to maintain rights.
While I am sometimes accused of being “soft on terror” because of my views of the war on terror (or whatever it is called now) in general and airport security in particular, I consider my approach to be a rational one. Since I am often cast as an “intellectual”, I feel somewhat obligated to do the intellectual thing and present a principle rather than just taking a view based on how I feel about one thing and then holding an inconsistent view on a similar thing just because I happen to feel differently about that.
My general principle for security is that a security method should be assessed based on the effectiveness of the method, the probability of the threat the method is supposed to counter and the degree to which it violates or infringes on legitimate rights/liberties, the relevant consequences, and the cost of the method. As such, this is a cost benefit analysis. If a method counters a likely threat effectively and does so without a disproportionate violation of rights/liberties and cost, then the method would seem to be acceptable. Otherwise, there would be reasonable grounds to reject the method.
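The principle just described can be illustrated as a rough cost-benefit calculation. The sketch below is not the author's formula; the function, the weighting scheme, and all the numeric inputs are hypothetical illustrations of how the listed factors (effectiveness, threat probability, rights infringement, cost) might be weighed against one another.

```python
def assess_method(effectiveness, threat_probability, rights_infringement, cost, budget):
    """Return True if the expected benefit of a security method outweighs its costs.

    effectiveness, threat_probability, rights_infringement: values in [0, 1].
    cost, budget: monetary amounts; cost is normalized against the budget.
    All weights here are illustrative assumptions, not a settled formula.
    """
    # Expected benefit: how well the method works, discounted by how
    # likely the threat it counters actually is.
    expected_benefit = effectiveness * threat_probability
    # Combined cost: the rights infringement plus the (capped) share of
    # the budget the method consumes.
    combined_cost = rights_infringement + min(cost / budget, 1.0)
    return expected_benefit > combined_cost

# Full-body scanners, using the essay's rough judgments as hypothetical inputs:
# possibly effective, countering a very unlikely threat, a serious privacy
# infringement, at a cost of $170 million+.
scanners_justified = assess_method(
    effectiveness=0.8,
    threat_probability=0.001,
    rights_infringement=0.6,
    cost=170_000_000,
    budget=170_000_000,
)
print(scanners_justified)
```

On these (debatable) inputs the scanners fail the test, matching the essay's conclusion; a method countering a likely threat at low cost and with little infringement would pass. The point of the sketch is only that the verdict follows from weighing all four factors, not from any one of them.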
Obviously, I do not have an exact formula and specific methods can be subject to reasonable debate. For example, I think that the full body scans could be effective, that the threat they counter is very unlikely, that the method violates privacy rights too much, and the scanners are too expensive. As such, I am against the full body scanners. However, all these points can be argued.
As another example, I am opposed to the employment of 3,000 (or so) “behavior detection officers.” While I suppose that it is good that these folks are employed, they seem to be rather ineffective: of the 266,000 referrals made since 2006, only 0.7% (fewer than 2,000) have even led to arrests. That is hardly a high success rate for the cost. Given that “behavior detection” is, at best, an infant science, this is hardly surprising. As such, my view is that this is not a wise use of limited resources. Naturally, this is subject to debate as well.
One thing I have found rather interesting about security is that many people seem to operate on at least two standards: one is for things like the war on terror and the other is for almost everything else.
For example, someone who might balk at a law that prevents parents from smoking in the car with their kids (thus putting their kids at risk for various serious health problems) might think that full body scans and pat downs are acceptable because they help keep us safe from a threat (however incredibly unlikely the threat might be). This, however, seems inconsistent. After all, if the state has the right to violate rights to counter threats, then this right would seem to apply to both situations.
As another example, someone who is opposed to the state getting involved in health care (even though lack of health insurance leads to many deaths), restricting pollution (even though pollution is harmful), or regulating business (even though many business have shown an unrelenting tendency to behave badly, such as acting in ways that wrecked the economy) might be fine with things like enhanced interrogation, secret prisons, and assassinations. This, however, seems inconsistent. After all, if the state is in the business of keeping us safe, then this should apply to keeping us safe from not only terrorists but also diseases, pollution, and dangerous business practices.
In my own case, I use my principle consistently to assess whether a security method is acceptable or not. So, for example, I assess state regulation of business based on the effectiveness of the method, the likelihood of harm, the possible violation of rights/liberties and the cost. In the light of the catastrophic damage done to the economy that can be causally linked to business practices, it seems reasonable to impose regulations on such behavior. Letting business regulate itself in the hopes that it will act responsibly or be “corrected” by market forces is on par with removing all airport security and hoping that the terrorists will self-regulate or that the invisible hand will sort things out. The fact of the matter is that bad behavior generally requires an active counter.
Of course, the counter has to be weighed against the rights and liberties it infringes upon. So, for example, business folks do have rights and liberties that should be taken into account. Also, there can be relevant consequences in regards to limiting business too much. As some folks argue, business folks need a degree of freedom in order to make profits and keep the economy going. Likewise, the way people who travel by air can be treated should be limited by their legitimate rights.