(iii) U.K.’s Standpoint on AI
On February 6, 2024, the U.K. Government published its response to the consultation on its AI White Paper, which had been released in March 2023. As expected, the Government adopted a “pro-innovation” approach to AI, spearheaded by the Department for Science, Innovation and Technology.
The U.K. has adopted a principled, context-based, cross-sectoral, decentralized, and outcome-based approach to the regulation of AI, built on five principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The U.K.’s regulatory approach to AI also includes mandatory consultations with regulatory bodies, expansion of technical know-how, and the development of expertise to better understand and regulate complex technologies.
(iv) China’s Approach
China’s approach to regulating AI is grounded in identifying risks. While the development of AI technologies is encouraged, safeguards against potential harm to the nation’s social and economic goals are also built in. China’s regulatory framework addresses three key issues: (i) content moderation; (ii) personal data protection; and (iii) algorithmic governance.
China has published draft regulations for generative AI that require alignment with “socialist core values.” The draft regulations hold developers responsible for the output generated through the use of their AI services and impose several restrictions, including on the sourcing of training data: developers are legally liable if their training data infringes the intellectual property rights of others.
The Chinese government is optimistic about the future of technology and AI. In its 2017 “Next Generation Artificial Intelligence Development Plan,” the government set the goal that “China’s AI theories, technologies, and applications should achieve world-leading levels by the year 2030.”
(v) India’s Position
India has adopted a “pro-innovation” approach to AI regulation. The Indian government is determined to unlock the potential of AI while also taking into account the risks posed by the use of AI technologies. The G-20 Ministerial Declaration adopted during India’s presidency, along with a statement made in Parliament in April 2023, suggests that the Indian government is not considering legislation to regulate AI.
However, around the same time, the Ministry of Electronics and Information Technology (MeitY) published a blueprint for a new Digital India Act that acknowledges the need to regulate high-risk AI systems. In March 2024, the Indian government issued an advisory, mandating compliance with immediate effect, that directed companies to obtain permission before deploying certain AI models in India. The advisory was, however, subsequently withdrawn and replaced with a revised version.
On March 6, 2024, the Cabinet approved the comprehensive national-level IndiaAI Mission with a budget allocation of Rs. 10,371.92 crore. The mission aims to establish an ecosystem that catalyzes AI innovation through strategic programs and partnerships across both the public and private sectors.
India’s approach to AI regulation remains fragmented, in part because responsibility is spread across multiple stakeholders. Overall, the Indian government has adopted a cautious approach to the regulation of AI.
Challenges
Brookings, in its 2023 commentary “The three challenges of AI regulation,” identifies the key obstacles that policymakers and governments must overcome for holistic regulation of AI: (i) keeping up with the velocity of change in the AI space; (ii) deciding what to regulate; and (iii) determining who regulates and how.
Keeping pace with the velocity of change in the AI landscape requires focus and agility. The regulatory needs of the industrial-revolution era are not the same as those of the AI revolution: governments have to be innovative, and self-regulation by technology companies does not seem an adequate substitute.
The regulation of AI should be risk-based and targeted, because AI’s capabilities are vast: the use of AI in video games stands in stark contrast to AI that could threaten the security of critical infrastructure, and the two deserve different regulatory treatment.
So far, when it comes to the regulation of AI, innovators have made the rules; governments in most jurisdictions have lagged behind the rapid advancement of AI. There is general agreement on the need to regulate AI, but who should regulate it, and how, remains open to debate. Governments across jurisdictions may choose to regulate AI through licensing or risk-based mechanisms.
Conclusion
Unsurprisingly, governments, regulators, and policymakers have struggled to keep up with the rapid advancement of AI technologies. The lag in regulation creates critical gaps in accountability, making it difficult to manage AI’s broader societal impact and to ensure its responsible use.
Governments across the world are in an unprecedented and difficult position as they grapple with the complex task of regulating AI. Regulation of AI is urgently needed, yet its effects are hard to predict; if implemented incorrectly, it may even prove counterproductive.
However, governments cannot afford to wait for perfect and complete information before taking necessary action. By delaying, they risk failing to intervene in time to prevent the trajectory of technological development from producing existential or otherwise unacceptable risks.
Given the global nature of the issue of regulating AI, an international regulatory response is necessary. Agreement on the first principles to regulate AI across jurisdictions would be a good start.
For regulatory approaches to strike the right balance between fostering innovation and enabling government oversight, individuals, companies, policymakers, governments, and other stakeholders must collaborate, cooperate, and engage in open conversation.
A robust legal and regulatory framework, supported by comprehensive policies, serves as the backbone for harnessing the transformative potential of AI. Lawmakers must consider aligning such a framework with human values, including transparency. This framework would balance innovation with government oversight, mitigate the risks posed by the use of AI, and provide clear guidelines for the ethical, responsible, and sustainable development and deployment of AI for the benefit of all in society.
Until a robust legal and regulatory framework is in place, companies must adopt a proactive approach to communication, internal governance, and risk assessment to ensure accountability for their use of AI.