
ARTICLE

Regulating Artificial Intelligence: Global Trends and Regional Approaches Shaping the Future

Nandini Pathak and Vaideh Balvally

Summary

  • The United Nations’ Resolution on AI, the European Parliament’s Artificial Intelligence Act, and the launch of the IndiaAI mission are building toward a global consensus on AI regulation.
  • In the United States, while AI regulation currently falls largely to the states, more than 120 bills are being considered by Congress, covering education, copyright disclosure, national security, and biological risks, among other areas.
  • Regional regulations contribute to fragmentation, making it difficult to develop global approaches to AI regulation.

Introduction

Artificial intelligence (AI) is developing at a rapid pace. From generative language models like ChatGPT to advances in medical diagnostics technology, policymakers acknowledge that AI has the potential to deliver significant and transformative changes in almost all aspects of our lives. While AI has enormous potential, it also poses significant risks that demand careful regulation and governance.

AI experts and entrepreneurs such as Sam Altman, Elon Musk, and Mustafa Suleyman have warned of the dangers of AI, suggesting that the technology could put an end to civilization. Thought leaders such as Yuval Noah Harari argue that AI poses an unprecedented threat to humanity because, for the first time in human history, a technology can make decisions and create new ideas on its own.

Regulation of AI is important to mitigate the risks posed by the technology, ensure its ethical use, promote human rights and safety, and reduce the adverse social and economic impacts of its use. The debate is about how best to regulate this innovative technology.

In recent years, the AI space has seen significant regulatory developments. With the United Nations’ Resolution on AI, the Artificial Intelligence Act (“AI Act”) passed by the European Parliament, and the launch of the IndiaAI mission, there is a growing consensus in favor of formalizing AI regulation. However, the introduction of regional regulations by individual governments is contributing to fragmentation and regionalization, making it harder to develop a global approach to AI regulation.

This piece discusses the global trends, regional approaches, and the challenges in the regulation of AI.

Global Trends in AI Regulation

Six trends in AI regulation emerging globally are as follows:

  1. Core principles: In some jurisdictions, AI regulation and guidance align with the core principles defined by the OECD and recommended by the G-20. These principles include respect for human rights, transparency, sustainability, and strong risk management.
  2. Risk-based approach: Some jurisdictions take a risk-based approach to AI regulation, tailoring their rules to the perceived risks AI poses to core values, including privacy, non-discrimination, transparency, and security. The guiding principle is that compliance obligations should be proportionate to the level of risk.
  3. Sector-specific rules: Given AI’s widely varying use cases, some jurisdictions have adopted sector-specific rules for its regulation.
  4. Policy alignment: Some jurisdictions are aligning AI-related rulemaking with other digital policy concerns and priorities, such as intellectual property protection, data privacy, and cybersecurity.
  5. Private-sector collaboration: Some jurisdictions are using regulatory sandboxes to encourage collaboration between the private sector and policymakers in developing rules that promote the safe and ethical use of AI.
  6. International collaboration: Some jurisdictions are encouraging international collaboration to understand and address the risks posed by rapidly evolving generative and general-purpose AI systems, driven by a shared concern over the fundamental uncertainties surrounding those risks.

Regional Approaches to AI Regulation

In the absence of a global regime for regulating AI, several jurisdictions have adopted regional approaches, and the evidence suggests that global regulatory practices are diverging.

(i) EU’s Framework

In June 2024, the European Parliament passed the AI Act, the world’s first comprehensive legal framework regulating AI systems. The Act aims to make Europe a global hub for trustworthy AI and lays down harmonized rules governing the development, marketing, and use of AI in the EU. Its objectives are to foster investment and innovation in AI, enhance governance and enforcement, and encourage a single EU market for AI.

The AI Act classifies AI systems into four categories according to risk: (i) unacceptable risk; (ii) high risk; (iii) limited risk; and (iv) minimal risk. AI systems posing unacceptable risk, such as those enabling cognitive behavioural manipulation of people or vulnerable groups, are prohibited. The ‘high risk’ category covers AI systems, such as those used in critical infrastructure and education, that may have a detrimental impact on the health, safety, and fundamental rights of people.

AI systems posing ‘high risk’ are subject to strict obligations, such as adequate risk assessment and mitigation systems and appropriate human oversight to minimize risk. AI systems posing limited risk, such as chatbots, are subject to information and transparency requirements. AI systems posing minimal risk, such as AI-enabled video games and spam filters, are not subject to any specific obligations.

Furthermore, the AI Act includes several provisions aimed at reducing the financial and administrative burden on small and medium-sized enterprises, which play a significant role in driving innovation. These provisions include free access to regulatory sandboxes, simplified documentation requirements, representation in governance, communication and training, and proportionate compliance costs.

(ii) US’s Approach

Several US states have promulgated regulations to address the rapid advancement of AI technologies, but there is currently no comprehensive federal legislation regulating the development and use of AI in the US. However, more than 120 AI bills are being considered by the US Congress, dealing with AI education, copyright disclosure, national security, biological risks, and other areas.

These bills take a cautious approach to regulation, relying more on the development of voluntary guidelines and best practices for AI systems than on strict mandates. This caution is intended to avoid stifling technological innovation and losing competitiveness in the global market.

In October 2023, the US government issued an executive order on ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,’ which adopts a comprehensive approach to AI regulation. The proposed Federal AI Governance and Transparency Act emphasizes creating a uniform standard for AI regulation based on the principles of transparency and accountability. Among other things, the bill aims to define federal standards for responsible AI, establish AI governance charters, and create a clear, cohesive, and consistent federal AI policy.

Most recently, in January 2025, President Donald Trump signed an executive order on AI revoking past government policies seen as barriers to AI innovation in America. The order also calls for the development of an AI action plan within 180 days.

Overall, the US has adopted a risk-based, sector-specific approach driven by high-level agency action, one that focuses more on damage control than on damage prevention.

(iii) U.K.’s Standpoint on AI

On February 6, 2024, the U.K. Government released its response to the White Paper consultation on regulating AI, which was published in August 2023. As expected, the Government adopted a “pro-innovation” approach to AI, spearheaded by the Department for Science, Innovation, and Technology.

The U.K. has adopted a principled, context-based, cross-sectoral, decentralized, and outcome-based approach to the regulation of AI. It is based on the principles of safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The U.K.’s regulatory approach to AI includes mandatory consultations with regulatory bodies, expansion of technical know-how, and the development of expertise to better understand and regulate complex technologies.

(iv) China’s Approach

China’s approach to regulating AI is based on identifying risks. While the development of AI technologies is encouraged, safeguards against potential harm to the nation’s social and economic goals are also put in place. China’s regulatory framework addresses three key issues: (i) content moderation; (ii) personal data protection; and (iii) algorithmic governance.

China has published draft regulations for generative AI that call for alignment with “socialist core values.” The draft regulations hold developers responsible for the output their AI systems create and impose several restrictions, such as on the sourcing of training data. Developers are legally liable if their training data infringes the intellectual property rights of others.

The Chinese government is optimistic about the future of technology and AI. The “Next Generation Artificial Intelligence Development Plan,” a document released by the Chinese government in 2017, states that “China’s AI theories, technologies, and applications should achieve world-leading levels by the year 2030.”

(v) India’s Position

India has adopted a “pro-innovation” approach to AI regulation. The Indian government is determined to unlock the potential of AI while also taking into account the risks posed by AI technologies. The G-20 Ministerial Declaration made during India’s presidency, along with a statement in Parliament in April 2023, suggests that the Indian government is not considering legislation to regulate AI.

However, around the same time, the Ministry of Electronics and Information Technology (MeitY) published a blueprint for a new Digital India Act that acknowledges the need to regulate high-risk AI systems. In March 2024, the Indian government issued an advisory, with compliance mandated with immediate effect, directing companies to obtain permission before deploying certain AI models in India. The advisory was, however, later withdrawn and replaced.

On March 6, 2024, the Cabinet approved the comprehensive national-level IndiaAI mission with a budget allocation of Rs. 10,371.92 crore. The mission aims to establish a comprehensive ecosystem that catalyzes AI innovation through strategic programs and partnerships across the public and private sectors.

India’s approach to AI regulation remains fragmented, partly because responsibility is spread across multiple stakeholders. Overall, the Indian government has adopted a cautious approach to regulating AI.

Challenges

A 2023 Brookings report, “The three challenges of AI regulation,” identifies the key obstacles that policymakers and governments must overcome to regulate AI holistically: (i) keeping up with the velocity of change in the AI space; (ii) deciding what to regulate; and (iii) determining who regulates and how.

Focus and agility are required to keep pace with the velocity of change in the AI landscape. The regulatory needs of the AI revolution are not the same as those of the industrial revolution era. Governments have to be innovative; self-regulation by technology companies alone does not seem adequate.

The regulation of AI should be risk-based and targeted, because AI’s capabilities are vast and varied. The use of AI in video games stands in stark contrast to AI that could threaten the security of critical infrastructure; the two deserve to be treated differently.

So far, when it comes to the regulation of AI, innovators have made the rules, and governments in most jurisdictions have lagged behind the technology’s rapid advancement. There is general agreement on the need to regulate AI; who should regulate it, and how, remains up for debate. Governments across jurisdictions may choose to regulate AI through licensing or risk-based mechanisms.

Conclusion

Unsurprisingly, governments, regulators, and policymakers have struggled to keep up with the rapid advancement of AI technologies. The lag in regulation creates critical gaps in accountability, making it difficult to manage AI’s broader societal impact and to ensure its responsible use.

Governments across the world are in an unprecedented and difficult position as they grapple with the complex issue of regulating AI. Regulation of AI is urgently needed, yet its effects are unpredictable; if implemented incorrectly, it may even become counterproductive.

However, governments cannot afford to wait to gain access to perfect and complete information before taking necessary action. By delaying action, governments risk failing to intervene in time to prevent the trajectory of technological development from resulting in existential or unacceptable risks.

Given the global nature of the issue of regulating AI, an international regulatory response is necessary. Agreement on the first principles to regulate AI across jurisdictions would be a good start.

For regulatory approaches to strike the right balance between fostering innovation and allowing for government oversight, it is necessary for individuals, companies, policymakers, governments, and other stakeholders to collaborate, cooperate, and engage in open conversations.

A robust legal and regulatory framework, supported by comprehensive policies, serves as the backbone for harnessing the transformative potential of AI. Lawmakers must consider aligning such a framework with human values like transparency. Such a framework would balance innovation with government oversight, mitigate the risks posed by the use of AI, and provide clear guidelines for the ethical, responsible, and sustainable development and deployment of AI for the benefit of all in society.

Until a robust legal and regulatory framework is in place, companies must adopt a proactive approach to communication, internal governance, and risk assessment to ensure accountability for their use of AI.
