Climate change, with its need for global monitoring and response, has introduced new demands for surveillance and data collection, potentially infringing on privacy rights. For example, digital emissions tracking can be helpful for climate change risk mitigation but raises privacy concerns. Similarly, responsible AI has created a need for algorithms that are both secure and ethical, but at the same time, these systems may undermine privacy if not implemented properly. Furthermore, AI systems require massive infrastructures that may have cybersecurity vulnerabilities and require sizeable electric power sources. These developments require a fundamental shift in legislating, regulating, and managing digital and environmental risks.
While previous regulatory approaches have sought to address security and privacy concerns within narrowly defined frameworks, they often have overlooked the broader consequences that regulations in one area might have on others. Some well-intended environmental regulations, for instance, have inadvertently exposed cybersecurity vulnerabilities, leading to increased risks, significant damage, and financial loss. Most U.S. states have environmental laws regarding the disposal of electronic equipment. Much of this equipment is exported to African and Asian countries, where hackers can harvest it to create botnets that launch damaging cyberattacks.
This article proposes a holistic approach that integrates the domains of climate change, responsible AI, cybersecurity, and digital privacy to mitigate unintended consequences and foster a sustainable and secure digital ecosystem.
Definitions and Impacts
Defining each term and examining implications for digital privacy and security are essential to understanding the relationships among these four domains.
Climate change refers to long-term shifts in temperature, precipitation, and weather patterns. Its impacts include rising sea levels, more severe natural disasters, ecosystem changes, and disruptions to agricultural and water resources.
Responsible AI involves developing and deploying artificial intelligence systems ethically, transparently, and fairly. This includes ensuring AI systems are designed to avoid bias, promote accountability, and respect user privacy. While AI has the potential to revolutionize industries and improve lives, it also presents risks to privacy, security, and fairness. AI models also require vast computational resources and electric power; this demand is straining the U.S. power grid and producing significant greenhouse gas emissions, exacerbating climate change. Bloomberg News reports, “The grid has never faced the kinds of strain that comes with data centers.” Moreover, AI models that rely on vast amounts of personal data may inadvertently infringe on individual privacy, while autonomous systems may create new security vulnerabilities.
Cybersecurity encompasses the processes and systems that protect computer systems, networks, and data from unauthorized access, theft, or damage. Robust cybersecurity measures can conflict with other societal goals: for example, strong encryption may impede lawful access to data by authorities, and security controls may undermine transparency in AI systems.
Digital privacy involves protecting individuals’ personal data and encompasses the right to control how one’s data are collected, used, and shared. However, digital privacy must be balanced against the need for data-driven innovation and security. As technology evolves, finding the right balance between protecting individuals’ privacy and ensuring the functionality and security of digital systems has become increasingly complex.
Long-Term Conflicts Between Cybersecurity and Digital Privacy
The conflicts between digital security and privacy are not new, and they have been the subject of various legal and technological debates over the years. Some of the most significant examples of these tensions are discussed in the following sections.
Encryption vs. Law Enforcement Access
The debate over encryption has long been a flashpoint between cybersecurity and privacy advocates. Encryption is a key tool for protecting data, but law enforcement agencies often argue that encrypted data impede investigations, especially in cases involving terrorism or organized crime. The Clipper Chip, introduced in the early 1990s, was a government initiative to embed encryption in communications devices with a “backdoor” for law enforcement access. However, the public backlash over the potential for government surveillance led to its failure, illustrating the delicate balance between privacy and security. In 2016, the Federal Bureau of Investigation (FBI) filed a legal action in a California court against Apple to require it to create a custom operating system that would disable key security features on the iPhone. Apple opposed the order, arguing that granting it would undermine the security of all Apple devices and set a dangerous precedent for future cases. The case ultimately became moot when the FBI found another solution to the immediate situation. However, the central issue was not resolved.
The debate continues: lawmakers and law enforcement organizations in the United States and abroad have proposed ways to bypass encryption, while other lawmakers and privacy advocates oppose such measures. A recent proposal is “client-side scanning,” in which material would be scanned for illegal content before it is encrypted and leaves the device. This concept was part of the proposed EARN IT Act introduced in the Senate in 2020, 2022, and 2023 but never enacted. Although the act aims to prevent online child abuse, its critics say its approach to encryption is deeply flawed.
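To make the mechanism concrete, the following is a minimal Python sketch of client-side scanning as its proponents describe it. The blocklist contents, function names, and exact-hash matching are illustrative assumptions: deployed proposals rely on perceptual hashes supplied by clearinghouses, which also match near-duplicate content.

```python
import hashlib
from typing import Callable

# Hypothetical blocklist of digests of known prohibited content.
# Deployed proposals use perceptual hashes from clearinghouses, which
# also match near-duplicates; exact SHA-256 matching is used here
# only to keep the sketch self-contained.
BLOCKLIST: set = {"0" * 64}  # placeholder digest

def client_side_scan(plaintext: bytes) -> bool:
    """Check content against the blocklist BEFORE encryption;
    that ordering is the crux of the privacy debate."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

def send(plaintext: bytes,
         encrypt: Callable[[bytes], bytes],
         transmit: Callable[[bytes], None],
         report: Callable[[bytes], None]) -> None:
    """Route a message: flagged content is reported, never encrypted."""
    if client_side_scan(plaintext):
        report(plaintext)             # diverted for human review
    else:
        transmit(encrypt(plaintext))  # normal end-to-end encrypted path
```

The design choice critics object to is visible in the control flow: inspection happens before encryption, so end-to-end encryption no longer guarantees that only the recipient sees the content.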
Morrison Foerster provides a current survey of cybersecurity and digital privacy issues. The survey includes global regulations, U.S. state privacy laws, new cyber threats, applications of AI for data protection, and other relevant topics.
Data Collection vs. Individual Privacy
The explosion of data collection by tech companies and governments has created a paradox: While data are essential for improving services, making decisions, and advancing technologies, data also raise serious privacy concerns. The collection of vast amounts of personal information can lead to profiling, discrimination, and breaches of confidentiality. Laws like the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act aim to protect individuals’ privacy rights. Still, their implementation has raised concerns about compliance costs and the potential for overregulation.
Mass Surveillance vs. Privacy Rights
Mass surveillance, used by governments and corporations for national security or marketing purposes, is another area where digital security and privacy collide. For example, the USA PATRIOT Act in the United States allowed for the expansion of surveillance activities following the September 11 attacks, raising significant concerns about the erosion of individual privacy rights. Similarly, surveillance technologies that monitor environmental changes to combat climate change could infringe on privacy if not correctly managed.
How Climate Change and Responsible AI Are Changing and Extending These Conflicts
Climate change and responsible AI complicate the problematic relationships between cybersecurity and digital privacy. These developments introduce new risks and regulatory demands that are not fully addressed by existing frameworks.
Increased Reliance on Surveillance for Climate Monitoring
As climate change leads to more severe natural disasters, governments and enterprises increasingly rely on surveillance technologies to monitor environmental conditions and provide early warnings. However, these technologies have security vulnerabilities and may involve collecting personal data, such as location traces or biometric data. This raises significant concerns about data availability, integrity, and privacy, as individuals may not be aware that their data are being collected or may not have consented to their use.
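One mitigation is to aggregate and perturb such data before they leave the collection layer. The following minimal Python sketch illustrates the idea with differential-privacy-style noise; the grid size, epsilon value, and the assumption that each person contributes exactly one reading are illustrative, not a production design.

```python
import random
from collections import Counter

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale): the difference of two i.i.d. exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def coarsen(lat: float, lon: float, cell_deg: float = 0.1) -> tuple:
    """Snap a coordinate to a coarse grid cell (~11 km at the equator)
    so exact positions never leave the aggregation layer."""
    return (round((lat // cell_deg) * cell_deg, 4),
            round((lon // cell_deg) * cell_deg, 4))

def private_density(readings, epsilon: float = 1.0) -> dict:
    """Per-cell counts with Laplace noise calibrated for sensitivity 1,
    assuming each individual contributes exactly one reading."""
    counts = Counter(coarsen(lat, lon) for lat, lon in readings)
    return {cell: count + laplace_noise(1.0 / epsilon)
            for cell, count in counts.items()}

# Usage: hypothetical flood-sensor check-ins, published without exact positions.
readings = [(38.9072, -77.0369), (38.9101, -77.0410), (38.8951, -77.0364)]
print(private_density(readings, epsilon=0.5))
```

Coarsening removes exact positions, and the Laplace noise bounds what any single individual’s presence can reveal through the published counts.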
Cybersecurity of Climate-Critical Infrastructure
The cybersecurity of critical infrastructure—such as power grids, water supply systems, and telecommunications—has become a focal point in the era of climate change. Disruptions to these systems, caused by cyberattacks, excessive power demands from information systems, or environmental factors, can have devastating consequences. Securing these systems often involves creating significant networks of sensors and data collection systems, which can introduce new privacy risks. Furthermore, the increasing costs of these infrastructures mean that financial resources must be focused on expansion and environmental resilience, which can crowd out security expenditures.
Disaster Management and Emergency Responses
Disaster management systems, which rely on real-time data to coordinate responses to climate-related events, can save lives and property, mitigate public health issues, speed recovery, and otherwise improve outcomes. A recent U.S. Chamber of Commerce study shows that “every $1 spent on climate resilience and preparedness saves communities $13 in damages, cleanup costs, and economic impact.” However, these systems also may expose sensitive personal information. In the aftermath of a disaster, the use of AI and surveillance to coordinate emergency responses may infringe on individuals’ privacy, particularly in marginalized communities. New products and services based on AI technologies are emerging to address these challenges.
Impact of AI on Digital Privacy and Cybersecurity Conflicts
Enhanced Surveillance Capabilities
AI technologies, such as facial recognition and predictive analytics, offer enhanced surveillance capabilities. While these technologies can improve public safety and security, they also raise significant privacy concerns. The ability to track individuals across different environments and contexts may undermine personal privacy, potentially leading to intrusive monitoring practices. Last year, the U.S. Government Accountability Office testified to Congress about the use of facial recognition technology by seven federal law enforcement agencies and provided recommendations to address civil rights concerns. While facial recognition systems alone cannot provide legal identification, they can significantly impact law enforcement investigations by providing leads quickly. Their use remains controversial.
Maryland is, to date, the only U.S. state to pass legislation comprehensively governing law enforcement’s use of facial recognition technology; the law went into effect on October 1, 2024. The Maryland State Police published a model policy to assist agencies in incorporating the new requirements into their policies and procedures. Some states have tried to ban facial recognition technology but reconsidered when they saw the benefits of providing leads in investigations. There is no federal law or regulation for facial recognition, so the issue is addressed at the state level.
Big Data AI Models
AI models, particularly generative models, require vast volumes of training data. Such models may have billions of parameters and use petabytes of information for training. They are security and privacy targets because of their scale and growing importance. The collection and use of these data, often without individuals’ explicit consent, raise significant privacy concerns. For example, large language models may inadvertently ingest personal or sensitive data scraped from the Internet, putting privacy at risk.
While generative AI models can create highly personalized content, they may inadvertently reveal sensitive information about individuals or groups. Moreover, model training processes sometimes violate copyright protections. In 2023, Google researchers revealed a privacy attack on ChatGPT (GPT-3.5 Turbo) that exposed more than 10,000 unique verbatim memorized training examples containing names, addresses, and phone numbers. While this problem has been fixed in later versions of ChatGPT, it illustrates potential cybersecurity and digital privacy threats to large language models (LLMs). Responsible AI initiatives include ethical uses of generative AI and LLMs as significant priorities.
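One common mitigation layer is to screen model output for personally identifiable information before it reaches the user. The Python sketch below illustrates the idea with simple regular expressions; the patterns and function names are assumptions for illustration, and production systems pair trained named-entity recognizers with rules like these rather than relying on regexes alone.

```python
import re

# Illustrative, U.S.-centric PII patterns; deliberately simple.
PII_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str):
    """Replace detected PII with placeholders and report what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

# Usage: screen a model response before returning it to the user.
safe_text, hits = redact_pii("Call me at 555-867-5309 or jane@example.com.")
print(safe_text)  # Call me at [PHONE REDACTED] or [EMAIL REDACTED].
print(hits)       # ['phone', 'email']
```

An output filter like this does not remove memorized data from the model; it only reduces the chance that memorized PII is surfaced, which is why training-data curation and deduplication remain the primary defenses.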
Improved Regulations and Enterprise Risk Management Can Mitigate These Conflicts
The growing complexity of the risks posed by climate change, responsible AI, digital security, and digital privacy calls for a new approach to legislation, regulation, and enterprise risk management.
Interdisciplinary Impact Assessments
A key aspect of managing these conflicts is conducting interdisciplinary impact assessments. These assessments should consider the environmental, privacy, and security implications of new technologies and regulatory approaches. By integrating expertise from diverse fields, organizations can anticipate and mitigate potential risks before they materialize. Interdisciplinary impact assessments for risk mitigation have been performed or advocated in diverse areas such as pesticide management, consumer marketing, and environmental regulation. A principles-based practice will be essential in these areas due to their economic scale and rapid changes.
Transparency and Flexibility
Regulations must be transparent and flexible, allowing rapid adaptation to new technologies and evolving threats. This means creating regulations that are not overly prescriptive but provide clear guidelines and principles that can be applied across different industries and contexts. The classification of regulatory approaches into “prescriptive rules-based” and “principles-based” is particularly relevant here due to the highly dynamic nature of climate change impacts and AI technology development.
Regional Economic Standards
Regional economic standards significantly impact digital privacy, cybersecurity, responsible AI, and climate change risk mitigation. Europe emphasizes comprehensive, precautionary regulations (e.g., GDPR, the NIS Directive and Cybersecurity Act, the European Climate Law, and the EU AI Act). The United States takes a distributed approach that is often market-driven and comparatively innovation-friendly, with some federal laws and regulations and many sectoral and state-level variations. Regulatory maturity across Asia varies widely, with some countries (e.g., China) prioritizing state control and others (e.g., Japan) trying to balance regulation and innovation. Establishing common standards might create a more level playing field while ensuring that all parties adhere to the same security and privacy expectations. This looks like a long road, but there may be opportunities for progress through United Nations action.
Improving Enterprise Risk Management
To address these challenges at the enterprise level, enterprise risk management (ERM) processes need to be adapted to include the risks associated with climate change, responsible AI, cybersecurity, and digital privacy. This will be a growing challenge for corporate chief risk officers, and it includes:
Creating an interdisciplinary process for risk assessment: Engaging experts from cybersecurity, privacy, AI, and environmental fields to assess risks from multiple angles.
Establishing a unified privacy-security framework: This framework should account for climate change and AI developments while aligning with business objectives and compliance requirements. Many frameworks exist for individual domains (e.g., the NIST frameworks for cybersecurity, privacy, and risk management). However, creating an interdisciplinary framework that addresses all four domains will require effort from the enterprise risk team; a minimal data-model sketch follows this list. For example, a recent paper by an IBM subsidiary provides helpful content on responsible AI, cybersecurity, and privacy but does not include the word “climate.” As the impacts of extreme weather and climate change continue to grow, more enterprise risk teams will integrate their climate risk management plans with other aspects of their work on privacy and cybersecurity.
Implementing risk mitigation strategies: New developments in ERM include broader use of risk maturity models; expanded governance, risk, and compliance platforms with enhanced data analytics capabilities; and the integration of risk management across supply chains. As litigation and financial risks for corporations grow, boards will place more emphasis on implementing integrated ERM.
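As a concrete starting point for the unified framework described above, the following minimal Python sketch models a shared risk-register entry tagged across all four domains. The fields, scoring scales, and example risk are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Domain(Enum):
    CLIMATE = "climate change"
    AI = "responsible AI"
    CYBER = "cybersecurity"
    PRIVACY = "digital privacy"

@dataclass
class Risk:
    """One entry in an interdisciplinary risk register."""
    title: str
    domains: set            # every Domain the risk touches
    likelihood: int         # 1 (rare) .. 5 (near certain)
    impact: int             # 1 (minor) .. 5 (severe)
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Example: a single risk that crosses three of the four domains.
register = [
    Risk(
        title="Flood-exposed data center hosting AI training workloads",
        domains={Domain.CLIMATE, Domain.AI, Domain.CYBER},
        likelihood=3,
        impact=4,
        mitigations=["site resilience assessment", "cross-region failover"],
    ),
]

# Cross-domain view: surface every risk touching two or more domains,
# highest score first, for chief risk officer review.
for risk in sorted((r for r in register if len(r.domains) >= 2),
                   key=lambda r: r.score, reverse=True):
    print(risk.score, risk.title, sorted(d.value for d in risk.domains))
```

Tagging each risk with every domain it touches lets the risk team query for cross-domain exposures, such as the flood-prone data center above, that single-domain registers tend to miss.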
Concluding Remarks
Managing the evolving conflicts between digital privacy and digital security in the context of climate change and responsible AI requires a comprehensive, proactive, and adaptable approach. Chief risk officers must stay ahead of regulatory changes, promote a culture of responsibility within AI and cybersecurity, and leverage cutting-edge privacy technologies. Additionally, fostering cross-disciplinary collaboration will be key to addressing these complex challenges to avoid unintended consequences. Organizations can safeguard their assets by adopting a holistic, interdisciplinary approach while meeting privacy and security obligations in an increasingly complex digital landscape.