This is a longer form version of “Big Question: How Does Digital Privacy Matter for Democracy and its Advocates?” published by the National Endowment for Democracy on January 22, 2024.
Privacy is often referred to as a ‘gateway’ right, fundamental to achieving human rights such as freedom of expression, thought, belief, association, assembly, and non-discrimination. At its core, privacy establishes the boundaries that give us space to develop our personalities and shape how we interact with the world around us. Privacy is not a static concept, however, but one shaped by cultural, social, and individual norms and contexts, including technological advancement and modern innovations: you share different information about yourself with your friends than you do with your boss. Our definition of privacy is also influenced by how members of our communities and networks use information about us. The Cambridge Analytica scandal, in which an app used by roughly 300,000 Facebook users harvested data not only from those users but also from their contacts, ultimately affecting about 87 million people, highlights the networked nature of privacy in the digital age. Our privacy can be compromised not only by the information we choose to share about ourselves, but also by others sharing or even gathering information about us, such as contact information or pictures. Violations of privacy like this can lead to identity theft, reputational damage, and harassment, both online and offline. As Cambridge Analytica demonstrated, privacy breaches can also be used to manipulate public opinion, threatening key government institutions and processes such as elections.
The speed of technological innovation made possible by large data sets and AI exacerbates the challenge of protecting privacy online. This article explores three critical challenges to our right to privacy in the digital world:
The predominant business model of most technology companies, based on harvesting and analyzing massive amounts of personal and non-personal information;
The impact of emerging technologies blurring the lines between online and offline harms; and
Cybersecurity threats posed by spyware such as Pegasus, specifically targeted at undermining our digital privacy.
Business Model
The primary business model for tech companies, which relies on collecting, analyzing, and selling massive amounts of user data, conflicts with our right to privacy, enshrined in documents such as the Universal Declaration of Human Rights (Art. 12) and the International Covenant on Civil and Political Rights (Art. 17). This is not only a safety risk for human rights defenders (HRDs), but also forces them into a Faustian bargain: HRDs must use a system founded on the violation of privacy rights to communicate with the world about this same abuse and others. The scarce avenues for engaging tech companies on rights abuses stemming from their products are overwhelmed by demand and staffed by representatives who often lack the authority to enact wide-ranging policy changes, limiting the prospect of the system-wide reforms necessary for the private sector to protect digital privacy rights.
Current efforts to address digital privacy challenges seem to accept this business model, and the necessity of data collection, as a given. There is an overemphasis on data protection, which concentrates solely on ensuring that data remains secure during processing, is protected from unauthorized access, and retains its integrity, while neglecting other crucial aspects of digital privacy. “Security” carve-outs prevalent in most data protection laws grant authorities extensive discretion and create a security-centric approach that compromises privacy rights. This can be seen in legislation like the UK Online Safety Act, which undermines end-to-end encryption, or real-name registration requirements in places like Australia, India, Germany, France, and Sweden, all of which can compromise communication with HRDs in repressive states.
Furthermore, while a robust body of human rights law for online privacy exists, the global technology discourse that shapes privacy standards often fails to resonate in global majority countries because of its Eurocentric bent, a predictable outcome given the underrepresentation of these countries at the forums, such as the UK’s recent AI Safety Summit, where these issues are discussed. This homogenization results in a misalignment of values and creates space for authoritarian governments to promote repressive tech regulations that further threaten privacy rights.
Emerging Technologies
Emerging technologies such as the Internet of Things (IoT) and augmented and virtual reality (AR/VR), along with the increasingly sophisticated algorithms behind AI and automated decision making, will only compound the challenges to privacy in the digital world. Take, for example, facial recognition technology (FRT), used here to refer to both one-to-one and one-to-many recognition systems. In real life (IRL), even though our faces can be seen by everyone we encounter in our daily lives, most people, apart from celebrities and public figures, have a reasonable expectation of anonymity when walking down the street or entering a shop, hotel, or house.
The increasing use of FRT, however, equips governments and other actors with the power to track individuals’ movements IRL through CCTV, body cameras, and other means. This capability has been used to arrest protesters not only by authoritarian regimes such as Russia, but also by democracies from India to the United States. While protesting has always carried risks, with governments able to identify protesters through traditional means, emerging technology such as AI-enabled FRT and GPS tracking greatly amplifies those risks. These technologies also pose serious risks to our private lives. Consider the impact that tracking via FRT can have on LGBTQ+ people seeking a private liaison at a hotel, or on political dissidents who fear putting family and friends at risk simply by meeting for coffee.
Faceprints – digital maps of a face created by analyzing multiple images – further complicate the issue of facial anonymity. Faceprints can be used in data sets to train AI systems, or to track individuals after the original images have been deleted, thus providing a means to circumvent existing data protection laws. While recent cases have found companies such as Meta (formerly Facebook) and Clearview AI liable for violating privacy laws in Illinois and Italy, the impact of those cases is limited to their jurisdictions. Exact data deletion – removing all traces of a specific piece of personal data – is nearly impossible. Moreover, given the dynamic nature of privacy and the potential for individuals to withdraw their consent at any time, data sets would need to be constantly monitored and updated, making such a solution costly and overwhelming.
Spyware
The development of modern spyware poses one of the greatest challenges to digital privacy, one that is often hard to track and hard to prevent. Highly invasive spyware has become a bargaining chip between states and a tool for states or private entities to intrude into your personal domain, steal your personal data, or even take control of your digital devices. This is a major challenge especially when spyware is used against political activists, political dissidents, or HRDs. The blurred line between the protection of privacy and its derogation in the name of national security or intelligence, along with the lack of comprehensive regulation, only exacerbates the situation.
The highly invasive Pegasus spyware, for example, presents a real threat to digital privacy across the globe. Pegasus can infect a phone without the owner’s knowledge, or any action on their part, through “zero-click” exploits. It can then access almost all of the data on the phone, including passwords, bank accounts, personal chats and messages, photos, and location, and it can take control of the microphone and camera.
Further complicating efforts to prevent privacy violations via Pegasus is the fact that it is sold only to governments or state entities. According to reports by organizations such as Amnesty International, Citizen Lab, and iLaw, many governments have used Pegasus against HRDs, political dissidents, political activists, and journalists who criticize them, which can lead to chilling effects and self-censorship, restrictions on the exercise of rights, or even death. As technology evolves rapidly, this rising threat to privacy will only become more complicated and harder to address. In Thailand, victims of Pegasus are challenging the government in court for violating their privacy. Two cases are currently pending: the first, an administrative case against the government authorities who purchased and used Pegasus against a political activist and a human rights lawyer; the second, a civil case against NSO Group Technologies, the Israeli company that designs, develops, and sells Pegasus to states. The Administrative Court dismissed the first claim on the basis that it formed part of the criminal justice process and thus fell outside the court’s jurisdiction; that ruling is now under appeal. The civil case remains ongoing, with a pre-trial conference set for February 5, 2024, and is hoped to establish a benchmark providing a higher standard of protection for digital privacy.