Privacy Rights: Moral and Legal Foundations By Adam D. Moore
We all know that Google stores huge amounts of information about everyone who uses its search tools, that Amazon can recommend new books to us based on our past purchases, and that the U.S. government engaged in many data-mining activities during the Bush administration to acquire information about us, including enlisting telecommunications companies to monitor our phone calls (currently the subject of a bill in Congress). Control over access to our bodies and to special places, like our homes, has traditionally been the focus of concerns about privacy, but access to information about us is raising new challenges for those anxious to protect our privacy. In Privacy Rights, Adam Moore adds informational privacy to physical and spatial privacy as fundamental to developing a general theory of privacy that is well grounded morally and legally.
Adam D. Moore is Associate Professor of Philosophy at the University of Washington.
Introduction
Individuals have privacy rights. We each have the right to control access to our bodies, to specific places, and to personal information. Beyond controlling access, individuals should, in large part, also determine how their own personal information is used. While control over access to bodies and places appears to be on firm ground, informational privacy is everywhere under siege. Corporations large and small engage in data-mining activities that capture massive amounts of information. Much of this information is about our daily activities—what was purchased, when, where, and for how much. Our mailboxes and e-mail accounts are then stuffed with an endless stream of advertisements and solicitations. Even more alarming are telemarketers, who intrude upon our solitude. Financial information, phone numbers, and personal addresses of all sorts, whether accurate or not, are captured in databases and bought and sold to individuals, corporations, and government agencies. Beyond data mining, video surveillance, facial recognition technology, spyware, and a host of other invasive tools are opening up private life for public consumption.
Physical or locational privacy, on the other hand, seems to enjoy stronger protection. Forcibly entering someone’s house or touching that person’s body without just cause is considered a serious moral and legal violation. Strengthened by property rights to houses and land and by walls, fences, and security systems, our physical selves appear quite secure—at least compared to our personal information. In any case, individuals are more willing to trade informational privacy for security or economic benefits than physical privacy.
The underlying assumption is that in relation to human well-being or flourishing, control over our bodies, capacities, and powers is more important than controlling who has access to facts about ourselves. I do not deny this—to be able to “create” the facts of my life I must first be in exclusive control of my body, capacities, or private locations. It does not follow from any of this, however, that personal information control does not deserve protection. Similarly, a right to life, crudely defined as the right not to be killed unjustly, may be more essential or fundamental than physical property rights, yet we should not hastily conclude that lives deserve robust protection while property does not. In fact, without the moral authority to control physical objects in arrangements that support our lifelong goals and projects, a life may not be worth living. A similar point can be made with respect to our physical and informational selves. Being captured on video, audio, and financial data streams is almost as threatening to individual autonomy and well-being as losing physical control of bodies, capacities, and private spaces.
Consider the following thought experiment. One day we are told that the earth is dying—in a few weeks some event will deprive us of oxygen or sunlight. However, we are not to worry. Two advanced humanoid societies have been discovered and are willing to take us in. We have a once-and-for-all choice to go live with the Fencers or the Watchers.
In Fencer society technological improvements, including advances in cryptography and outerwear, have produced near-perfect privacy protections. Each individual may wear an antimonitoring suit that completely shields him or her from the prying eyes and ears of others. Court-enforced contracts along with near-unbreakable encryption algorithms protect informational privacy. The separation between one’s physical self and one’s informational self is nearly unbreachable.
But one might ask, “What of security, criminals, and terrorists?” Not a problem. The antimonitoring suits do not make one invisible—suspected criminals can still be questioned and taken off to jail if necessary. Moreover, court-issued warrants override secrecy agreements and allow for encrypted codes to be broken and information gathered. Informants, incarceration, plea bargains, as well as other law enforcement tools are still available. Taken from a Fencer Immigration Advertisement: “Come join the Fencer society—a society of privacy and security. Feel free to engage in new experiments in living without becoming someone else’s news story. Relax, think, and meditate without an endless stream of solicitations pursuing your every free moment. . . . With privacy enshrined, solitude, tranquility, and peace of mind are secured.”
In the Watcher society technological improvements, such as advances in facial recognition technology, video miniaturization, data mining, and nanotechnology, have opened up all private domains for public consumption. Every movement and sound made by each individual is recorded, stored, and uploaded to the Watcher database for public consumption. Security is complete, and criminal activity is nonexistent. Total information awareness has been achieved.
But what about pressures to conform, restrictions of autonomy, and Big Brother worries? In the Watcher society individuals have learned to cast aside shyness and enjoy total openness. Moreover, given that transparency is universal, governmental officials are truly accountable to public concerns. At the same time, information overload or superabundance assures anonymity for average law-abiding citizens. Taken from a Watcher Immigration Advertisement: “Come and be a part of the open society. Cast aside your inhibitions to conform when being watched and embrace total information access . . . What do you have to hide? Transparency is liberating! Accountability ensures security!”
Thinking about these two fictitious societies raises multiple questions. Would the bonds of community break apart in the Fencer society? Would total information access in the Watcher world include real time access to one’s thoughts, and would such surveillance lead to social conformity of thought and action? Could average citizens really hide—unmonitored—in the mountain of data being captured by the Watchers? If we had to choose one of these alternatives, which would be better, all things considered?
While we are not faced with these stark possibilities, our policy choices may lead us toward them. For example, according to one estimate there are over four million surveillance cameras in Britain. One concern is that the rich, powerful, and connected will live in a world closer to Fencer society, while the poor and disenfranchised will inhabit something close to the Watcher society. Privacy has always been a commodity secured, more or less, on the basis of wealth, power, and privilege. Nevertheless, a hundred years ago privacy concerns were less pressing because technological barriers limited information gathering and control. The digital and computer revolution, along with numerous other advances in technology, has changed the game.
Overview of a Theory
In the most general terms, my goal is to provide a philosophically rigorous defense of privacy rights while addressing numerous important applied issues that surround privacy such as free speech, drug testing, hackers, public accountability, and national security. While other applications and issues could have been taken up, such as the feminist critique of privacy, I think that the issues discussed herein are important and worthy of consideration. Informational privacy is typically viewed as less important than free speech and a free press. Physical or bodily privacy is too easily traded for national security and for economic values such as workplace productivity. Many digital natives, those who have grown up with digital technology, have been advocating “free access” views that would undermine legal protections for privacy. I would like to reverse these trends.
In Chapters 2, 3, 4, and 5 I provide the theoretical foundations for the analysis and conclusions contained in Chapters 6 through 10. Many, perhaps most, books and articles on privacy simply assume that privacy is valuable and that individuals have privacy rights. Another often-used strategy is to hold that privacy is entailed by some other, higher-level concept or theory like autonomy. Without an analysis or justification of the background theory that is supposed to entail privacy, we are left with little means to adjudicate between different moral claims and interests. My goal in Chapters 2 through 5 is to establish moral claims to privacy without relying on some overarching moral theory or set of principles lurking as hidden assumptions. At appropriate places I will indicate how the theoretical framework provided in the first half of this work applies to an issue being considered in Chapters 6 through 10. If we view privacy, not as some mere preference or interest, but as a fundamental moral claim, then it will be easier to strike an appropriate balance with other important values such as speech and security.
Chapter 2 begins with several attempts to define privacy. After analyzing several competing conceptions, I offer and defend my own: Privacy may be understood as the right to control access to and use of physical items, like bodies and houses, and information, like medical and financial facts. Physical or locational privacy affords individuals the right to control access to specific bodies, objects, and places. Informational privacy, on the other hand, allows individuals to control access to and uses of personal information no matter how it is codified. Medical information about someone, for example, could be instantiated in a database, recorded on an audio cassette, or carved into stone.
If privacy is defined as a right of control over access to and uses of places and information and if the nature of “places” and “information” forces slightly different forms of justification and legal considerations, then this may undermine any attempt at a unified definition—with privacy over bodies and places being considered one thing, and privacy over information being considered another. Decisional privacy, defined as the right to make certain sorts of fundamental decisions, would then be a third area. Nevertheless, I believe that a “control over access and use” definition is coherent and there is nothing unsettling about distinguishing between and focusing on physical privacy and informational privacy. Similarly, there is nothing incoherent about the notion of “property” as including physical property and intellectual property. Intellectual property and information are, in the typical case, nonrivalrous in a way that tangible property and physical privacy are not. This feature alone would sanction treating these domains separately and lead one to suspect that slightly different arguments will be needed to justify control in each domain. Chapter 2 ends with an analysis of the “right to control access and use” view of privacy in light of several now-classic cases and illustrations.
My primary purpose in Chapter 3 is to demonstrate the moral value of privacy. Providing an argument or reasons in support of this claim will require several digressions into metaethics. Many theorists simply assume a specific and contentious account of value is correct or that some other principle, like freedom, is valuable, and then explain how privacy is implied by the assumed account or principle. In many instances the entire view rests on intuitions. But against those who doubt the moral value of privacy, this gets us nowhere—alas, all the detractor has to do is challenge the assumption or intuition with a contrary view. My hope is that by offering a plausible and compelling account of moral value and then grounding privacy in this view, I can provide a firm foundation for the value of privacy. Included in the objectivist and relationalist perspective offered is an account of moral bettering and worsening.
Privacy, it is argued, is a core human value—the right to control access to oneself is an essential part of human well-being or flourishing. I explore several historical and cultural understandings of privacy to support this claim. The ability to control access to oneself and to engage in patterns of association and disassociation is a cultural universal. Moreover and more important, individuals who lack this control typically exhibit increased levels of physical and emotional impairment. This claim is true of numerous nonhuman mammals as well.
Chapter 4 centers on the justification of privacy rights to bodies and locations. Establishing the value of privacy does not, by itself, establish privacy rights. The goal is to derive privacy rights from the relatively uncontroversial moral principle that actions that do no harm are not immoral—a “no harm, no foul” rule. Briefly put, the argument for physical privacy rights runs as follows. In using his own body, capacities, and powers Fred does not morally worsen Ginger relative to how she would have been were Fred absent or had Fred not possessed his own body (whatever this means). When Ginger uses Fred’s body, she will almost always interfere with his use and worsen him relative to how he would have been in her absence or had she not possessed the object in question. With numerous qualifications and clarifications, I conclude that Fred’s moral claim to control access to his body, capacities, and powers is undefeated and bodily privacy rights emerge.
Since the arguments for bodily or locational privacy and for informational privacy depend on a version of a “no harm, no foul” principle, some care is taken in establishing the moral weightiness of this rule. Actions that pass this requirement are collectively rational and appropriately respect the moral worth of individuals and their goals and projects. Such a commitment reflects our minimum and, I hope, uncontroversial obligations to each other.
In Chapter 5, “Providing for Informational Privacy Rights,” I build on the argument offered in Chapter 4. By possessing and using information about himself, Fred does not worsen Ginger relative to how she would be in his absence or had Fred not possessed the information in question. Fred’s use and possession are thus warranted. Unlike the case where Ginger tries to use Fred’s body, when Ginger possesses and uses information about Fred, she does not necessarily worsen him relative to how he would be were she absent or if she did not possess the information in question. After all, information may be nonrivalrously possessed and consumed. Thus further arguments are required to secure informational privacy rights. I present two. I maintain, first, that gathering, possessing, and using information about someone else, especially if that information is sensitive, personal, and easily disseminated, creates risks that are morally relevant. More important, many of these risks are not chosen—they are imposed. A second strand of argument links physical privacy rights with property rights. Through the use of walls, fences, disguises, strong encryption, legitimate deception, trade secrets, and contracts we may be able to justifiably restrict access to information about ourselves.
My primary focus in Chapter 6, “Strengthening Legal Privacy Rights,” is to determine the nature and scope of legal protections for informational and locational privacy. If legal systems are to reflect important moral norms, then privacy protections must be codified in the law. In recent times, however, informational privacy protections have not fared well. Privacy-based torts have been undermined through various legal cases and statutes. Moreover, decisional privacy, crudely understood as the right to make private choices in private places, and Fourth Amendment privacy, which protects citizens from unwarranted searches, have been threatened in the name of national security or relegated to fairly narrow areas. I argue that by strengthening the tort of intrusion we may move toward a more robust protection of privacy rights. My goal is to provide a workable model of privacy protection within the legal framework already in place.
In Chapter 7, “Privacy, Speech, and the Law,” I examine the tensions between privacy, free speech, and a free press. Judicial hostility toward protecting privacy, especially when free speech issues are present, is widespread. In fact, across hundreds of cases speech nearly always trumps privacy. I argue that privacy concerns should not be so easily sacrificed on the altar of “the public’s right to know.” The often-noted tension between privacy and speech in the legal realm is due to an expansive but unfounded view of expression. I argue that, once the value of privacy is recognized and we place ourselves in an unbiased position from which to view these issues, “right to know” considerations should be recast in terms of the kinds of information necessary for the continued existence and stability of democratic institutions. Moreover, information that is both invasive and clearly publicly important can almost always be modified so that the invasive properties are diminished or nonexistent. As with other content-based restrictions on expression—such as hate speech or sexual harassment—I will argue for a privacy restriction.
In Chapter 8, “Drug Testing and Privacy in the Workplace,” I consider numerous arguments for and against workplace drug testing. If the account of privacy offered in earlier chapters is correct, then there is a fairly strong presumption in favor of individual privacy rights—even in the workplace. Many claim this presumption is overridden by employee consent, public safety arguments, or workplace productivity concerns. I consider each of these arguments and dismiss them as having insufficient merit to undermine individual privacy rights. I give special attention to the consent argument against employee privacy.
On this view, if an employee consents to give up privacy, then there can be no legitimate objections. Employees can waive their rights in exchange for a job and a paycheck. But consent or agreement is only binding if certain conditions have been met. If someone agrees to relinquish privacy while under duress—for example, he or she needs a job and jobs are in short supply—then the agreement seems suspect. If, on the other hand, someone agrees to relinquish privacy in conditions that are fair, it would seem that the agreement would be morally binding. In this chapter I develop a procedure to test whether agreements related to workplace drug testing and privacy are morally binding. I argue that in most fields drug testing is not warranted and unjustifiably invades private domains.
In the most general terms, in Chapter 9 I focus on the tensions between free access and privacy related to digitally stored information. Free access to information, whether stored on networks or in software packages, has long been championed by hackers and, more recently, digital natives. Proponents of this view argue that information should be free because it is a social product, because information is nonrivalrous, and because hacking information provides for better security. Advocates for privacy and intellectual property would disagree with these views. The central question of this chapter is whether the free access arguments are strong enough to override a presumption in favor of privacy and intellectual property.
I consider two major strands of argument when establishing a presumption in favor of intellectual property. First, the utilitarian incentives-based view holds that rights to restrict information flow are based on a bargain between content creators and society. Society grants limited rights to control access to intellectual works as an inducement to bring forth new knowledge. The second strand of argument presented is inspired by John Locke and runs parallel to the arguments for privacy offered in earlier chapters. On this view, authors and inventors who produce content and do not worsen their fellows—relative to the appropriate baseline of comparison and measure of value—generate moral claims to their creations. I then present three arguments in favor of free access and find none of them strong enough to override these arguments supporting privacy and intellectual property.
The question of when privacy rights may be justifiably overridden in the name of public security is considered in Chapter 10—“Privacy, Security, and Public Accountability.” Balancing privacy and security may require that we trade some of the former for some of the latter. Nevertheless, in many cases security arguments cut in the other direction. It is only through the implementation of strong privacy protections, sunlight provisions, and judicial oversight that we obtain an appropriate level of security against government abuse of power, industrial espionage, unwarranted invasions into private domains, and information warfare or terrorism.
In the aftermath of the terrorist attacks of September 11, 2001, there were numerous calls for suppressing civil liberties in the name of national security. A few years after the attacks, General Patrick M. Hughes, Department of Homeland Security intelligence chief, noted: “We have to abridge individual rights, change the societal conditions, and act in ways that heretofore were not in accordance with our values and traditions, like giving a police officer or security official the right to search you without a judicial finding of probable cause.”
Passage and implementation of the USA Patriot Act in October 2001 greatly expanded government surveillance powers. In short, the act, in my opinion, enables the government to operate without public oversight, to ignore strict probable cause, and to avoid accountability for illegal or unwarranted intrusions into private domains. Policies that create secret courts, allow covert searches and seizures, and suppress information with no public oversight or “sunlight” provisions unjustifiably violate individual privacy rights and have no place in a liberal democracy.
Conclusion
John Stuart Mill once said, “It is sometimes both possible and useful to point out the way, though without being . . . prepared to adventure far into it.” In this book I will try to do better than merely point the way. Rather than assume several moral principles and derive privacy rights from the top down, I begin with several simple claims about value and a version of a “no harm, no foul” rule. While I argue for both of these starting points at length, my goal is not to convince all comers that these claims are beyond dispute, but rather to demonstrate that they are plausible and warranted. From these weak and hopefully widely shared views, I derive a theory of privacy. My hope is to provide a compelling theory of privacy—compelling in the sense of being suitably justified given the subject matter.
It is my belief that technological advances over the next few decades will continue to highlight issues of privacy. Increasing use of video surveillance, facial-recognition technology, data mining, genetic profiling, e-mail surveillance, and the like, along with government-sponsored programs such as Total Information Awareness, indicates that privacy will continue to be threatened. We may indeed have a once-and-for-all choice to travel toward a Watcher society or a Fencer society, and if we are to make this choice between the Watchers and the Fencers, better—far better—the latter.
© 2010 Penn State University