November 01, 2017

Why Changes in Data Science Are Driving a Need for Quantum Law and Policy, and How We Get There

By April F. Doss

The growing complexity of issues at the intersection of technology, privacy, security, and law has brought us to a turning point similar to the revolution in scientific thinking that has taken place over the past 50 years. For centuries, scientists believed that the laws of physics described by Isaac Newton were adequate to explain the workings of the universe. By and large, for most purposes they still are. But at the edges of our understanding—at the subatomic scale and across the vastness of the universe—Newtonian physics broke down. Something in those laws didn’t hold true. The evidence we were collecting made clear that those theories were inadequate to explain the new scientific questions we were facing.

At the intersection of law and technology today we are facing a similar revolution. It is now possible to collect data that is so granular—subatomic particles of information, if you will—on such a massive, grand, and continuous scale—data collection and analysis that matches the scale of the universe—that our traditional approaches to law and policy struggle to make sense of what these advances mean for privacy and technology, and leave real doubts about whether law and policy can keep up. In the face of these challenges, we need a new approach, something I think of as “quantum policy.” Allow me to relate a few examples of these current challenges, and explain what quantum policy could mean.

Quantum physics asks us to believe two apparently contradictory things: that light is both a particle and a wave at the same time, or that quantum bits in a computer can register both one and zero at the same time. In the days of Newtonian physics, propositions like these would have felt like something out of Alice in Wonderland, an exercise in believing impossible things. Today, though, we know that Newtonian physics cannot explain subatomic behavior or the cosmos. And that realization led to the development of a new field of physics. It helps to remember that these new theories were driven by necessity: quantum physics came about because the laws of physics we had relied on before were no longer sufficient to explain the way the universe worked.

We are at a similar turning point when it comes to applying traditional law and policy to the questions raised by algorithms, big data, and cybersecurity and privacy law. Law and policy are straining under the pressures imposed by technological advances, and a Newtonian approach is no longer sufficient to answer the questions pressing for resolution, or to keep pace with the broad scope of rapid and hard-to-predict change.

What do I mean by a Newtonian approach to law and policy? It is one in which we are content with gradual, incremental change, where we continue to rely on the slow accretion of precedent, where we are content to have critical legal issues decided by disparate cases that take years to wend their way through multiple jurisdictions before a critical mass of new law emerges through organic evolution. It means continuing to require that there be a case or controversy, refusing to allow courts to offer advisory opinions. It means that once a precedent has been established, it is extremely hard to overturn—the inertia becomes almost insurmountable.

Technology development works differently. Beta versions, user acceptance testing, and minimum viable products are the watchwords of the day, and failing fast to support rapid improvement in iterations is key.

A simple example is one that my intellectual property (IP) colleagues often point to: the timeline for obtaining traditional patent protection for a new invention has made patents on software increasingly obsolete. If the window for a new software product or technique to be cutting-edge shrinks to a mere 12 to 18 months, then it may no longer make sense to wait for a patent to issue before bringing the product to market. By the time patent protection is obtained, the product itself would be obsolete. Of course, this does not apply to all IP issues, and not all the time. But it is a genuine concern among technology developers and business owners who are struggling to fit their more agile business model into a framework of traditional, and often slow, legal processes.

It isn’t possible, of course, to map a precise one-for-one comparison between the evolution of hard science and law and policy. Instead, I am offering up quantum physics as a conceptual analogy for the ways in which we might approach the challenge of modernizing law and policy.

According to quantum mechanics, light can be a wave and a particle at the same time, and both properties can be leveraged. Relying on duality, rather than resisting it, leads to quantum gains. Quantum mechanics allows us to zoom in, to understand the behavior of things at almost unfathomably micro levels, leading to advances in miniaturization that were previously unimaginable. Quantum mechanics also explains the behavior of the universe at macro levels. It addresses questions left open by Einstein’s theories of relativity, and allows us to zoom out and take stock of the heavens in ways that we could not have predicted only a short time ago.

Law and policy need to embrace some analogous approaches. One of these is the ability to embrace, rather than resist, duality: not just balancing rights or ideals that seem to conflict, but encouraging them to thrive at the same time, and understanding that sometimes outcomes are probabilistic.

Law and policy need the ability to zoom in, to tackle legal and policy questions at a level of detail that many practitioners resist. How often have you heard, or perhaps said, “I’m not technical” when talking about a complex question with a client? Lawyers should no longer allow themselves the intellectual sloppiness of saying that. It doesn’t take a computer science degree to embrace the intellectual challenge of understanding what data objects are, or how analytics work, or to consider the territoriality and jurisdictional questions raised by actions and information that traverse the world’s communications networks. We shirk our duties when we shy away from trying to understand, at a layman’s level, the technologies that are shaping our world.

Law and policy also need to zoom out, to know when it’s useful to describe black holes—to look at the macro effects of an analytic, for example—instead of focusing on the bits of data like grains of sand.

We need to recognize the limits of traditional approaches to law and policy, and to look for appropriate ways to bring to bear concepts like minimum viable product, rapid iteration, and failing fast and improving often, which have been key to technological advances in the private sector. The idea of failing at anything is anathema to lawyers: partly because so many of us are ego-driven (often to unhealthy degrees), and also because we value the stability that comes with predictability in the law, and we want to avoid actions that could bring about unintended societal injustice, whether to an individual or group.

However, the slow pace of legal and policy developments in key areas such as big data and data analytics, cybersecurity, and privacy means that in fact we are already failing to provide the certainty, guidance, and resolution of issues that justice demands. If we are honest, it shouldn’t be hard to recognize the enduring wisdom in the words of William Gladstone or William Penn that “justice delayed is justice denied.”1

A more creative, rapid, and innovative approach to these issues could allow the law to move forward more quickly—but only if it is done in a way that has appropriate safeguards built in to counterbalance the risks of uncertainty in outcomes, unintended consequences, and erroneous judgments.

Subatomic Particles and the Universe of Conclusions: Quantum Policy for Big Data Analytics

A number of years ago, when I was at the National Security Agency (NSA), I had the privilege of leading the group that was charged with developing a new approach to vetting the legal and policy ramifications of big data analytics. I would like to be able to say that a team of smart lawyers and intelligence oversight officers came up with innovative solutions on its own, but that wouldn’t be accurate. Instead, what happened was that a cross-disciplinary team emerged: one that included software developers, intelligence analysts, people with expertise in data tagging and in platforms, and yes, people who were steeped in the specifics of the policy, legal, and oversight regimes.

We knew that we needed to create a framework that would be flexible enough to accommodate the wide range of analytics that would undoubtedly emerge over time. It needed to be agile enough to make decisions quickly—speed is often critical in intelligence operations, and we wanted to give people confidence that new ideas could be run through the vetting process quickly enough to meet mission needs. It needed to be adaptable over time, able to incorporate best practices as we learned them. It needed to include a definitive registry of decisions, allow periodic re-review when analytics or legal or policy requirements changed, and be scalable so that the review process could keep pace with a demand signal that was likely to grow.

With all of these requirements in mind, we assumed that risk-based tiering made sense. We knew we would need a decision matrix, something that could be translated into codable yeses and nos. We would need an iterative approach: evaluating analytics to create the matrix, training people on the matrix once it existed, and adding new rules to the matrix every time a new analytic review prompted a new rule. This cycle of constant updates would allow our decision-making processes to continue to grow and mature. And all of this would only be possible if we created a standing framework of human interaction, documentation, and thresholds for decision making that could be repeatable, scalable, and timely, and could grow.
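To make the idea of a matrix of “codable yeses and nos” concrete, here is a minimal sketch in Python. It is purely illustrative: the attribute names, rules, and tiers are hypothetical stand-ins, not the NSA’s actual criteria. The shape is what matters: rules map an analytic’s attributes to a risk tier, every decision lands in a registry, and new rules can be appended whenever a review prompts one.

```python
# Hypothetical sketch of a risk-tiered decision matrix for analytic vetting.
# All attribute names, rules, and tiers below are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import date
from typing import Callable


@dataclass
class Analytic:
    name: str
    data_sources: set[str]        # e.g., {"open_source", "regulated_source"}
    touches_protected_data: bool  # hypothetical attribute for illustration
    output_is_aggregate: bool     # aggregate statistics vs. individual-level output


@dataclass
class Rule:
    description: str
    applies: Callable[[Analytic], bool]
    tier: str                     # "approve", "review", or "escalate"


@dataclass
class Decision:
    analytic: str
    tier: str
    basis: str
    decided_on: date = field(default_factory=date.today)


class DecisionMatrix:
    def __init__(self) -> None:
        self.rules: list[Rule] = []
        self.registry: list[Decision] = []  # the definitive registry of decisions

    def add_rule(self, rule: Rule) -> None:
        # Each new analytic review can prompt a new rule; the matrix grows iteratively.
        self.rules.append(rule)

    def vet(self, analytic: Analytic) -> Decision:
        # First matching rule wins; anything unmatched escalates to human reviewers.
        for rule in self.rules:
            if rule.applies(analytic):
                decision = Decision(analytic.name, rule.tier, rule.description)
                break
        else:
            decision = Decision(analytic.name, "escalate", "no rule matched")
        self.registry.append(decision)  # every decision is recorded for re-review
        return decision


# Usage: seed the matrix with a few illustrative rules, then vet an analytic.
matrix = DecisionMatrix()
matrix.add_rule(Rule(
    "aggregate output drawn only from open sources",
    lambda a: a.output_is_aggregate and a.data_sources <= {"open_source"},
    "approve",
))
matrix.add_rule(Rule(
    "touches protected data",
    lambda a: a.touches_protected_data,
    "escalate",
))

result = matrix.vet(Analytic("trend-counter", {"open_source"}, False, True))
print(result.tier)  # -> "approve"
```

Most of the real work, of course, lies in the human deliberation that produces each rule; the code simply makes the accumulated judgments repeatable, auditable, and fast to apply.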

The traditional approach of having one of our in-house clients bring us a problem would barely have worked for vetting a single cloud analytic—the intelligence analysts could have articulated the intended mission outcome, but probably could not have described with precision the ways the data sets would interact with one another or provided a comprehensive overview of the legal and privacy protections built into the computing platform where the analytics would run. The technology team could tell us how the data would interact, but was not as well positioned to gauge the impact on intelligence analysis if an analytic were tweaked in a particular way at our suggestion. Relatively few lawyers have the technical and operational expertise to fully evaluate the legal implications of a complex analytic involving data governed by many different sets of rules. And analytic decision making required a repeatable approach that could tackle not just the legal advice but other dimensions of the decision as well: policy approval, resource commitment, and integrating legal advice with technical review to ensure that an analytic—even one given a green light by the lawyers—wouldn’t be run until someone else had made sure it wouldn’t end up crashing its platform.

In other words, Newtonian approaches to legal review were not enough to deal quickly or well with assessing how complex analytics would operate on small bits of data, or to address the outcomes they would drive when they were run at a large scale.

With all of that in mind, the analytics vetting team crafted a novel, interdisciplinary way of reviewing analytics, codifying the results, creating a repeatable framework, and ensuring that the ecosystem for analytic vetting could continue to evolve and change as the people using the framework learned more. In creating this analytic vetting framework, this small, interdisciplinary team took a quantum leap forward into new ways of managing complexity, volume, scale, iteration, and a host of other issues that arose with the challenge of big data analytics. In other words, quantum policy had just taken hold.

Simultaneous Belief in Two Seemingly Contradictory Things

Anyone watching national security and privacy debates can see the ways in which black-and-white thinking obscures the complexity of the issues we face. On the national security side, zealous advocates with mental blind spots sometimes fail to acknowledge that many law enforcement and electronic surveillance programs have genuine privacy impacts. While many people might believe that some privacy intrusions are worth the increased security benefit (I count myself among that group, having testified publicly in favor of the renewal of Foreign Intelligence Surveillance Act (FISA) section 702),2 it would be wrong to pretend that no privacy impacts exist; the better question is whether those privacy impacts are justified by the national security gain. Similarly, some privacy advocates disregard the real benefits of national security, going so far as to praise leaks of sensitive information and lionize the leakers. This could not be more short-sighted. When the privacy community celebrates the theft of data from government systems and the unauthorized release of classified information, it undermines the credibility of the important privacy values it is trying to speak for.

When it comes to private sector activities, many privacy advocates applaud companies that refuse to cooperate with the government for law enforcement or national security purposes. National and multinational companies use their opposition to judicial warrants as a marketing tool. While multinational companies face genuinely complex dilemmas over whether and how to comply with the vast array of laws across all of the jurisdictions where they do business, privacy advocates are mistaken to think of these companies as acting purely from altruism, when the same companies carry out data collection and analysis schemes that are more comprehensive, intrusive, and unfettered than anything that is typically lawful for the government to do in Western democracies.

Cybersecurity puts the paradox in stark relief. Most privacy advocates want to see a high level of security for systems holding personal data. But in order to achieve a high level of cybersecurity, it is often necessary to implement some degree of network and user activity monitoring—which has the effect of being privacy-intrusive. Today, many companies capture detailed system, network, and user log information and run complex analytics on it, perhaps combining that data with other behavioral indicators, in order to detect and assess insider threat. Yet doing so necessarily comes at a privacy cost.

Modern technology provides us with a nearly endless supply of seemingly contradictory positions, of situations in which the on-the-one-hand and on-the-other discussion seems to go on without end. We need approaches to law and policy that will help us harmonize those tensions, rather than simply thinking of one thing versus another; we need to be able to embrace them both.

What to Do When, under Existing Laws, the Behavior of the Universe Is Hard to Predict

Although cybersecurity is a challenge for everyone, what you see depends on where you sit, and cybersecurity legal risk looks different, and less definitive, from the private sector perspective than from the government perspective.

First, in the private sector, you can get sued. It doesn’t matter whether you are a multinational corporation or a small business that happens to hold personal information—like the Social Security numbers of your employees, or the credit card numbers of your customers. Most government entities do not share that concern. In the classic government case, there is a premium on confidentiality of information, and there is high concern over integrity and availability. While both government and private entities can face devastating impacts from the loss of secrets, the government can rarely be sued. Private entities have to think about all of the cybersecurity strategies that government entities use, and they also need to incorporate an additional set of risk mitigation tools in order to manage their cybersecurity risk—tools like insurance coverage, contractual language, limitations on liability, and representations and warranties.

Second, in cybersecurity lawsuits there is no clearly defined standard of care. Frequently, a company cannot be confident that the cybersecurity measures it has taken will be deemed to have been “enough” in the face of a breach. As a result, cybersecurity risk assessments now permeate every aspect of commercial life, from mergers and acquisitions, to cross-border data transfers of personnel and customer information, to complying with the patchwork of data breach notification laws in this country alone.3

Further complicating matters, companies that have been hacked often feel as though they are victimized twice: first, as the victim of a computer crime; and second, as the target of a federal or state enforcement action or regulatory probe seeking, with perfect 20/20 hindsight, to determine whether their cybersecurity preparedness had been “reasonable,” despite the lack of clearly defined standards. Private sector entities know it is unlikely that law enforcement will be able to attribute a cyberattack—much less secure indictments or make arrests. And so they are reluctant to provide the government with information that could help identify cybersecurity threat trends. I often encouraged my clients to report cyberattacks to law enforcement. But there is a kernel of truth in the reservations my clients expressed: we treat cybercrime differently from other crimes. We expect the police to keep the city streets generally safe; we know that a bank that reports a theft to the police is unlikely to be held responsible for failing to stop the thief. At the same time, we know that the government cannot ensure cybersecurity for the private sector; we are not even sure that it should try. What we are left with is an odd tension in which hacking is a crime, but the victim shoulders part of the blame. (And if the victim is fined, those penalties are likely to be paid to the state, rather than as restitution to the second order of victims, such as customers or patients whose information has been breached.) We suspect that some kind of public-private partnership has to be central to solving the cybersecurity conundrum, but we are having an awfully hard time figuring out how to get there.

What Will the Future Bring?

The trends in technology and networking are driving us toward more widespread harvesting of information and increasingly complex ways of using it that range from consequential to comical. In just the past year in the private sector, here are a few examples of things we have seen:

  • In California, an employee sued her company for requiring her to install an app on her phone that would track her location even when she was off work.
  • Complex data analytics are being used to influence decisions by parole boards, and also to prioritize the text messages that come into a crisis hotline.
  • A committee in the U.S. House of Representatives approved legislation that would allow employers to require employees to undergo genetic testing and grant employer access to those results along with other health information.4
  • Litigation was brought against the manufacturer of an app-controlled sex toy, alleging invasion of privacy because the company did not tell its customers that it was harvesting data from the app about what settings were used, when, and how often.
  • We know that there are widespread insecurities in the Internet of Things, along with widespread adoption of in-home devices from smart refrigerators to audio-powered personal in-home assistants. In Germany, children’s toys with cameras and artificial intelligence (AI) were recalled because they were capable of spying.
  • Biometrics are increasingly used for identification, and people are volunteering to have implanted microchips that can be used for everything from tracking the time they spend at work to opening security doors and presenting their mass transportation passes.

Private sector litigation over data breaches keeps growing: suits against companies by individual customers and patients; derivative lawsuits by shareholders against directors and officers for failing to ensure effective cybersecurity measures were in place; and regulatory or enforcement actions taken by governments. Insurance companies have started underwriting policies for property damage and personal injury that result from a cyberattack with impacts in the physical realm: when a car’s steering system is hijacked, or a dialysis machine is shut down, or traffic lights are switched to show green in all directions, or an app that’s supposed to control a home cooktop is hijacked by a malicious user to turn on a stove, in turn causing a fire that burns down a house.

Consumers still say they want privacy, and they also still value the benefits that come from commercial products that harvest their information: digital personal assistants, in-home security systems, more accurate driving directions, the fun of locating friends online, interactive children’s toys, and other AI. Many consumers do not read privacy notices or understand them, and governments struggle to decide where the line is between appropriate consumer protection and economy-strangling paternalism.

Through it all, law and policy will struggle to keep up, because the pace of technological change is limited chiefly by ingenuity and imagination, and only to a much lesser extent by the laws of electrical engineering and physics. In other words, recent years have shown us a world in which the challenges of big data, privacy, and information security grow more complicated, more multidimensional, and more interconnected. And those challenges are greatest when dealing with the extraordinarily detailed types of information now available about nearly everyone, and with the macro-level conclusions being drawn from the analytics assessing that information.

How Quantum Policy Can Help

Just as quantum physics paved the way for technologies from which the entire world benefits, quantum policy can help support the effort to democratize data science, privacy, and related technology.

From its inception, the Internet has been a great democratizing force, making unprecedented volumes of information available to millions of people all over the world, often for free (or included in the price of a cell phone service plan). Free webmail services made global communication cheap, easy, and practically instant for billions of people who would never previously have considered sending international letters or telegrams. Voice over Internet Protocol (VoIP) connections made real-time voice and video conversations available to billions of people who never could have afforded international long-distance phone calls. Web hosting, blogs, YouTube, you name it—nearly everything necessary for the exchange of ideas and commerce has been put into place by the innovation engine of the Internet, at a price that makes these tools accessible on a scale that could never have been imagined before. Data has never been more ubiquitous or available. At the same time, it is impossible to escape the potential negative impacts: the same AI that promises rapid innovation, the same genetic information that promises new medical treatments, and the same convenience of personalized traffic recommendations can also be used to bring about unprecedented invasions of privacy. And the same access to information that supports democracy can also be subverted to cement the iron rule of authoritarian regimes.

The profusion of data and ways to manipulate it, the paradox of privacy in an era of living online, the questions of who can misuse information and how, and of how to think about consumer choice in a time when we freely give away data but do not always understand how it is used—all of these dilemmas are forcing us to acknowledge the limits of our current legal and policy approaches. If we do not modernize our approaches to law and policy, they will continue to lag sorely behind the pace of technological change, with real-life consequences—some of them unintended—for individuals, organizations, and governments. Here are a few examples of what quantum policy could look like:

  • We need to admit that we cannot put the tech genie back in the bottle or reseal Pandora’s box; legal originalism has limited effectiveness in judicial review of these kinds of matters.
  • We need to avoid the temptation to use scaremongering, or else we will miss the opportunity for clear and rational discussions about the ways technology can be privacy-protective.
  • We need to educate the public without paternalism or condescension.
  • We need to teach technology in law schools, teach privacy in technology schools, and increase the use of cross-disciplinary teams.
  • We need to take rigorous approaches to data risk, being sure we understand where the biggest data really is, and assign accountability to both the government and the private sector accordingly.
  • We need to pursue cost-effective data security, realigning penalties to harm.
  • We need to consider duality: For example, if the encryption and going-dark debate falters because of black-and-white thinking, we should consider alternatives outside of that binary box. Perhaps encryption and back doors are not the only ways to achieve the goals of keeping data private and secure, while allowing the government to access it for a legitimate purpose.
  • We need to focus privacy policy on the end goals and sensitivity of the information, rather than focusing on how it was acquired (commercial purchase, government warrant, etc.), and adopt policies that avoid unnecessarily placing the United States at a disadvantage against other nations, either in national security or economic terms.
  • We should consider ways to implement test cases and incubator environments for legal and policy evolution.

When it comes to dealing with legal and policy issues for emerging technologies, we are still largely living in a Newtonian age. If we do not make the leap to quantum policy, our entire ecosystem of jurisprudence, litigation, legislation, intellectual property, and privacy rights will suffer as a result. Quantum physics did not evolve overnight, and neither will quantum policy. But by looking at examples of innovations that have worked, and by assembling cross-disciplinary teams to think in creative ways about the challenges that face us, we can move toward solutions that allow the law, and lawyers, to keep up. 

Endnotes

1. The quote has been variously attributed to both.

2. Section 702 of the Foreign Intelligence Surveillance Act: Hearing Before the H. Comm. on the Judiciary, 115th Cong. 20 (2017) (statement of April F. Doss, Partner, Saul Ewing LLP).

3. At last count, 48 states had their own data breach laws. These laws have different definitions of an actionable breach; they impose different timelines for notifying victims; some require notifying state government agencies and others do not; and they have different requirements for the information to be included in consumer and regulator notifications.

4. Sharon Begley, House Republicans Would Let Employers Demand Workers’ Genetic Test Results, PBS Newshour (Mar. 11, 2017), http://www.pbs.org/newshour/rundown/house-republicans-let-employers-demand-workers-genetic-test-results/.



April F. Doss ([email protected]) was formerly the head of intelligence law for the National Security Agency (NSA), and the chair of the cybersecurity and privacy practice at the law firm Saul Ewing. She currently serves as senior minority counsel for the Senate Select Committee on Intelligence (SSCI). The views expressed here are her own, and not those of the NSA, SSCI, or any other organization.