December 10, 2024 Feature

Privacy and Democracy

Michael Aisenberg, Alex Joel, Charles Mok, and Robert Gellman

This article is derived from the keynote panel of the same title presented in Washington, D.C., at the April 2024 ABA Science & Technology Law Section Privacy Institute. The panel provided a thematic anchor for the Institute, addressing the elements and threats at large in modern America that challenge the important role of privacy as a cornerstone of civil liberty. It also addressed the crucial impact of information technology on privacy—especially the posture of personal identity data, state responsibility for data protection and the threat of state surveillance, and the relationship of privacy protections to core democratic principles: the rule of law, free and fair elections with broad ballot access, and the function of an independent judiciary and judicial process.

The tension between “security” and America’s civil liberties, and its impact on the fabric of our democracy, has a long history. During the siege of Philadelphia in 1777, when the stressed colonials were being pressured to suspend the writ of habeas corpus and keep Tories in the Philadelphia jails, Benjamin Franklin wrote in support of their release that “he who would sacrifice our civil liberties for momentary security deserves neither, and will soon find himself without both!”

Much later, in the years prior to World War II, the IBM corporation had begun to sell its punch card–based data management systems to western European governments, particularly France and the Netherlands, to support census modernization. The American IBM executives behind this effort could not foresee how their highly successful, innovative data-handling technology would soon be recognized by Reinhard Heydrich and exploited by the Nazi regime as it sought to implement the Final Solution. Indeed, the cliché “may I see your papers, please?”—an icon of that evil era—may lie at the root of the modern European Union’s (EU’s) General Data Protection Regulation (GDPR) privacy regime.

Today innovative applications in personal and financial data management continue to transform modern life—in every sphere from healthcare and transportation to energy, public functions like elections, and the judicial process. And with each major technology transformation have come judicial and legislative efforts to adapt the associated personal information and privacy frameworks, here in the United States and in other democracies.

Even as we witness in the 21st century such “counter-civil” output from the military-industrial complex as ubiquitous video surveillance and weapons of mass destruction (WMDs), the same technological enterprise also offers us life-extending treatments for infectious diseases and cancer, birth control, and wireless communications, all of which transform our individual and societal behaviors. Yet, to many, these technologies also challenge assumptions about how we and our institutions relate to and perpetuate our increasingly fragile rule of law–based democracies.

Former Federal Trade Commission (FTC) chair Bob Pitofsky famously told Congress, “You may have security without privacy, but you cannot have privacy without security.” Indeed, the adaptations and innovations in information technology that are changing societal behavior now provide the means to improve the security of our fundamental democratic processes, like the courts and our electoral systems; but they also—especially with the emergence of artificial intelligence (AI) and machine learning (ML)—may permit us to divert, corrupt, or otherwise adversely impact the legitimacy, conduct, and conclusions of those and other core democratic institutions.

For many, our conception of privacy—both individual and institutional—has developed in the shadow of Supreme Court opinions like Griswold v. Connecticut that integrate and extend the Fourth Amendment protections of individuals from state action to the core First Amendment “civil liberty” freedoms.

This article is a primer on questions of the present posture and future viability of privacy as a civil liberty and the impacts that technologies such as ubiquitous personal computing, networking, massive cloud data storage, and AI/ML are having on privacy norms circa 2024.

Hopefully, this may assist us in understanding—again, to paraphrase Ben Franklin—what sort of government we and our sister democracies will have. What might the relationship of the individual and the state be under a fully AI/ML-mediated rule of law environment? What challenges for the protection of individual and institutional privacy must we in the legal system address to maintain the legitimacy and efficacy of those institutions and our democracies?

We first posit privacy as a civil liberty and set out the structure of Fair Information Practices—the FIPs—as a common framework for national statutes to address the rights of individuals and the responsibilities of data-collecting institutions and governments under a formalized privacy regime. We then explore the important role of personal information in a democracy and the structural tension between the privacy civil liberty and legitimate structures to protect national security against adversaries (both foreign and domestic) who would acquire and abuse personal data. Finally, we explore recent experiences of privacy legislation being turned from a support for citizen rights in a democracy into a means of controlling individual rights and behavior.

Privacy as a Civil Liberty

Privacy is a right, a value, or a concept that can be distressingly hard to define broadly. In the past five decades or so, attention to privacy has intensified in parallel with the growth of computers. The specific focus of much of that attention has been the part of privacy concerned with the processing of personal information. The European term—data protection—is a much better descriptor for these issues, but that term is little used in the United States.

Early on, FIPs became the key to understanding personal information privacy. In a 1973 report to the U.S. Department of Health, Education, and Welfare, an American advisory committee proposed FIPs in response to its charge to investigate the growing use of automated data systems containing information about individuals. That committee’s discussions about mainframe computers seem charmingly antiquated today, but the fundamental privacy concerns are the same even as the technology has expanded wildly. At a time when there was much uncertainty about the meaning of information privacy, FIPs provided basic answers and a menu for responding to the concerns.

In 1980, the Organisation for Economic Co-operation and Development (OECD) expanded the five elements originally proposed into eight:

  • Collection Limitation Principle
  • Data Quality Principle
  • Purpose Specification Principle
  • Use Limitation Principle
  • Security Safeguards Principle
  • Openness Principle
  • Individual Participation Principle
  • Accountability Principle

The modestly expanded OECD version of FIPs further explained the elements of the privacy of personal information to the world of policymakers and legislators. FIPs provided the specificity that was essential to finding ways to protect privacy in the face of continual assaults from technology and from those who sought to use personal information in new and often unforeseen ways that changed the balance of power between data processors and data subjects.

A wave of national privacy laws in Europe started in the late 1970s. Ultimately, the EU adopted a Data Protection Directive in 1995 and the General Data Protection Regulation in 2016. All the legislation in Europe and ultimately elsewhere around the world relies on basic notions of FIPs. Today, virtually every country in the world—with the United States notably excepted—has a national data protection law, and those laws have a FIPs backbone.

As essential as FIPs are to an understanding of the complicated issues of the privacy of personal information, they did not solve all the policy problems. For example, the Purpose Specification Principle provides:

The purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose.

This policy prescription balances the interests of data subjects in limiting the use and disclosure of their personal information against the legitimate interests of data processors in accomplishing the goals of processing. The Purpose Specification Principle calls for defined purposes together with a process for adjusting those purposes when there is a need for change.

Forty years ago, when the now-familiar core elements of information privacy were uncertain, the Purpose Specification Principle was a major contribution. Today, however, with decades more experience and a dramatically different technology environment, the difficulties of translating that principle into enforceable rules are more apparent. Just what does “not incompatible with” mean? Is it different from “compatible with”? Who decides when the standard is met or not met? How does technological change affect the way that the principle should be applied? Should the rules for the data that result from online activity be different from the rules that apply to offline activity? Who enforces the standard?

Appropriately, perhaps, a rough parallel here can be found with the Fourth Amendment to the U.S. Constitution. The wording of the amendment prohibits unreasonable searches and seizures and includes a standard (probable cause) and a proper procedure (a warrant) for deciding when that standard is met. In the centuries since the Fourth Amendment was approved, courts and legislatures have found it necessary to interpret the words of the amendment and apply them to new technologies and new circumstances, including the telegraph, telephone, and internet, as well as to the development of multiple types of third-party data holders.

For both FIPs and the Fourth Amendment, the fundamental principles reflected in each text remain vital. However, in both cases, it takes additional work to apply those principles to current and changing contexts. In both cases, the core statements no longer offer the guidance and the processes necessary to respond to new circumstances and to new understandings of the right way to balance all the relevant interests.

Another important point about FIPs is that they are not complete. As the world gained experience with writing rules for privacy, legislators and policymakers recognized that other principles, practices, and institutions were necessary to address the needs of information privacy and to oversee their implementation. The clearest example is the need for an agency to oversee, interpret, and enforce privacy laws. The structure and powers of privacy agencies vary widely around the globe, but a privacy agency (and, typically, an independent privacy agency) is found everywhere. There are other important elements not part of FIPs that also are part of the modern privacy regulatory apparatus.

The fading of FIPs into the background of current privacy discussions is not evidence of failure. It is, perhaps, an inevitable consequence of the success and widespread adoption of FIPs as a tool for understanding privacy and for directing suitable policy responses. FIPs were never supposed to be the ending point for privacy debates and discussions. FIPs remain necessary even as they are no longer sufficient for addressing the privacy concerns that confront us today and in the future.

Privacy and Security

As noted, one of the FIPs is the Security Safeguards Principle; maintaining the security of personal data is thus a fundamental component of protecting privacy. Protecting the nation’s security, however, often is seen as a threat to individual privacy. Governments seeking to protect their nation from threats such as terrorism, cyberattacks, and transnational crime often seek access to personal data in order to uncover evidence of hostile activity. Harkening back to Dr. Franklin in 1777, how can governments today protect both individual privacy and national security?

In a democracy, protecting both individual privacy and national security is not a zero-sum game. Government must do two things equally well: It must authorize agencies to protect the nation’s security, and it must constrain those same agencies from going too far. Failing at either means failing as a democracy.

Achieving these dual objectives is challenging. Intelligence activities are, by their nature, secretive and intrusive. They are secretive because disclosing to foreign adversaries the sources and methods underlying intelligence collection will put those sources at grave risk and will render those methods ineffective, as targets take steps to avoid detection. They are intrusive because intelligence collection focuses on gathering information that others do not choose to make available.

In many democracies, the resulting legal framework governing national security activities is highly specialized and complex and can be difficult for nonexperts to fully comprehend. This is certainly the case in the United States, but it is also true in other countries. The Foreign Intelligence Surveillance Act (FISA) is a good example of a statute that both authorizes and constrains. It is a complicated statute with technical terminology, and it is implemented through classified measures by intelligence agencies like the National Security Agency and the Federal Bureau of Investigation. Applications are made and hearings are held in secure facilities, where cleared personnel pore over classified filings. The Foreign Intelligence Surveillance Court (FISC) has the power to authorize—or refuse to authorize—specified foreign intelligence surveillance activities and to oversee compliance with its orders. A great deal has been made public about how FISA authorities are used. That said, much necessarily remains classified.

Given the secrecy involved with intelligence activities, oversight plays a central role. “Secret oversight” sounds sinister, but, for a democracy, it is essential. Oversight bodies must have the ability to access and review the highly sensitive secrets of intelligence agencies to ensure agencies are complying with applicable laws. To be able to handle classified information, oversight personnel must themselves abide by accepted security protocols to protect intelligence sources and methods from unauthorized disclosure.

In the United States, oversight is provided by a system of many layers with many players. Within the executive branch, a range of offices work alongside one another to provide advice, ensure compliance, and exercise internal oversight; this includes general counsels, independent inspectors general, privacy and civil liberties officers, and compliance officials. For FISA matters, the FISC plays the crucial oversight role. And all intelligence agencies are overseen by Congress, which has the power to authorize and fund—or refuse to authorize and fund—intelligence activities.

The tension between secrecy and transparency cuts to the very heart of the way legal systems ordinarily operate in a democracy. For most government actions that impact individuals, one would expect the ability to access information the government has on its citizens (e.g., through the Freedom of Information Act) and to challenge the legality of government measures in court. However, protecting classified information necessarily requires special exceptions and privileges, making it much more difficult for individuals to obtain national security information that might relate to them, and to challenge intelligence activities in court.

This problem is not unique to the United States. Democracies around the world have developed similar exceptions and privileges. Indeed, when trying to understand how other countries protect privacy, it is important to look beyond their privacy and data protection laws. Unlike the United States, the European Union now has in place a comprehensive privacy framework—the GDPR. Many other countries have taken their lead from the EU and have enacted their own versions of comprehensive privacy legislation. While these laws differ in their specifics, they tend to share key principles in common with the GDPR, which in turn traces its core principles back to the FIPs.

However, comprehensive privacy legislation does not necessarily apply with equal force to a country’s intelligence activities. It is important to always look at two sections of privacy laws: definitions and exceptions. The definitions of key terms may well exclude certain categories of government actors and activities, thus, in effect, putting intelligence out of scope. Perhaps more commonly, the exceptions section of a privacy protection law will likely exclude national security entirely (or in significant part) from the reach of that law. Recourse must be made to the country’s specialized national security legal framework to understand how the government is authorized—and constrained—in protecting national security.

Another challenge for national security legal frameworks is the rapid pace of technological change. Many of the principles and precedents governing intelligence activities were developed years ago. Technology develops rapidly and unpredictably, while law and policy develop much more slowly and incrementally. As a result, there is often a gap between law and technology. To fill the gap, the legal system ends up applying old rules to new tools.

These new technologies present opportunities, risks, and threats to democracies. For example, authoritarian regimes can exploit such technologies to carry out mass surveillance, engage in transnational repression, carry out disinformation campaigns, and penetrate the cyber defenses of democracies. In response, like-minded democracies need to come together to align their approaches to how their legal frameworks will both authorize and constrain their agencies as they seek to protect democracies in a world of rapidly evolving, technology-enabled threats.

Privacy and Democracy: Today and Into the Future

How are the two concepts of democracy and privacy related? Dean Pitofsky posited a dependency of privacy on security. Is there also a fundamental dependency of democracy on privacy—on the persistence of a “safe zone” of individual liberty of thought and expression that is essential to the functioning of a viable democratic system?

Privacy underpins personal autonomy, political association, and free expression, including the right to vote one’s conscience. Undemocratic regimes, on the other hand, tend to use surveillance and limits on personal privacy to advance their autocratic control. In an era many see as a period of democratic backsliding, privacy protection can be seen as the front line in preserving democratic institutions.

Indeed, privacy can be conceived of as a “gateway” right: Without it, basic human rights—such as freedom of thought, the right against discrimination, and other democratic rights such as the secret ballot—cannot be protected. But in the United States, particularly in recent years, discussions over privacy protection are increasingly focused on blaming Big Tech, which is somewhat misguided. Today every human function is impacted, if not dominated, by technology in some manner. Platforms collect, profile, use, and abuse our information. But instead of forcefully legislating to protect personal data and the other elements of privacy as basic civil rights, we have a decades-old impasse of policy perspectives at the federal level and piecemeal, fragmented legislative action at the state level. So it is easy for those in government—whose own inaction arguably is as much to blame—to point fingers at Big Tech (not that Big Tech doesn’t deserve a share of the blame).

The example of Hong Kong is instructive: Hong Kong is a jurisdiction with a comprehensive privacy protection law, passed 27 years ago—for better or worse, just before the internet became so ubiquitous. Even as the jurisdiction has passed to the PRC, there are obvious advantages to having such a law, even if the law itself is sometimes flawed, and even under an imperfect and sometimes undemocratic regime.

First, the concept of a comprehensive privacy statute is important because, in the presence of a law and its protections, people will over time develop an awareness that privacy is truly a fundamental right, will readily notice when those rights have been violated, and will even know where to go to complain.

Second, the law must be updated often because things change: technology, business models, emerging issues such as data localization, and the national context of a putative “great power” as the administrator of the privacy regime. Hong Kong did not do so well here: the UK set up the Hong Kong privacy law only on its way out, right before the 1997 handover to China, and the incoming Chinese authorities never really updated or strengthened that law.

Third, privacy regulations will not normally inhibit innovation directly, or make compliance by tech innovators or users too difficult, as many observers believe. Businesses can adapt, if they have to. Here in Silicon Valley, many embrace the argument that privacy regulations are bad for innovation and the tech industry, and they point to Europe as the counterexample. There is no simple causal relationship there; there are many other reasons why Europe lags behind in tech innovation. And it is clearly not that U.S. legislators don’t express an appetite to regulate. Consider efforts to ban TikTok in the United States, or all the discussion (including hearings) about “not falling behind” when it comes to AI regulation. But how does one meaningfully regulate AI/ML, or TikTok—especially the privacy-relevant aspects—without first having a comprehensive national privacy statutory framework in place?

Meanwhile, authoritarian regimes are conducting indiscriminate mass surveillance on their own people, and increasingly transnationally on those outside their borders, including within U.S. borders. China has recently built the largest embassy in its history in the Bahamas, less than 50 miles off the shore of Florida. Its purpose is unlikely to be tourism. And cyber exploits—or, indeed, all types of criminal activity—make it easy for governments—not just China—to justify more and more restrictions on individual privacy.

Private American firms like Clearview AI are marketing massive troves of personal data collected from the internet to almost anyone, from domestic law enforcement to other countries. Without a U.S. privacy statute, data brokers like Clearview AI are largely beyond the control of the U.S. national security and law enforcement communities, and so we become dependent on the individual policies and practices of data collectors to assure the appropriate use of our personal information and the other sensitive data in the privacy rights bundle.

And then there are newly enacted and proposed laws in the UK, Australia, and even Europe that would limit end-to-end encryption in the services and apps offered by internet providers. The declared motive is always to protect children and the vulnerable, but—even setting aside the infringement on everyone’s personal privacy—the argument lacks empirical evidence that giving law enforcement such broad powers to intercept messages will actually be effective in preventing those crimes.

No one should completely trust and rely on the platforms, but the same goes for trusting governments. If anything, laws that limit our rights and our access to technologies for encrypting or otherwise protecting sensitive data will make all digital systems less secure. The hackers of the world will have the last laugh, with government-mandated backdoors left wide open to be exploited.

And what country was first to ban encryption, before it was even widely adopted by platforms for user safety and privacy? China. Now its “forward thinking” can finally be justified by Western democracies following its example.

These emerging collisions of desirable privacy policy and appetites for monetization or surveillance are evidence of why policymakers must go back to basics: to empower the people and give individuals the legal protection of their fundamental right to privacy, first.

Europe’s continuous experimentation—from the GDPR to the Digital Services Act (DSA), the Digital Markets Act (DMA), and now the AI Act—is a good reference, building from fundamental human rights protection as the foundation first, then regulating for a safer online environment and fair competition for all platforms, before zeroing in on AI in particular.

If politicians address only opportunistically whatever data and privacy problems they perceive to be the headline threat of the day, their regulatory solutions will not only be ineffective or unworkable; unintended consequences will inevitably follow. Those consequences will further undermine the national and individual security and privacy environment for everyone and, as a result, will threaten our democracy.


Michael Aisenberg


Michael A. Aisenberg is an assistant editor for The SciTech Lawyer.

Alex Joel


Alex Joel is a senior project director and resident adjunct professor at the American University Washington College of Law, where he leads the Privacy Across Borders initiative for the school’s Tech, Law and Security Program. Before that, he served for 14 years as the Civil Liberties Protection Officer for the Office of the Director of National Intelligence.

Charles Mok


Charles Mok is a research scholar at the Global Digital Policy Incubator, Cyber Policy Center, Stanford University, and a board trustee of the Internet Society. He was a member of the Legislative Council of Hong Kong, representing the Information Technology sector, from 2012 to 2020.

Robert Gellman


Robert Gellman is a privacy and information policy consultant in Washington, D.C.