The journey of the AI Act began with a proposal by the European Commission, the executive branch of the EU, which was then subjected to thorough examination and debate within the EU institutions. The proposal passed through the European Parliament and the Council of the European Union, involving extensive discussions, amendments, and negotiations. Final approval came after a careful balancing of different interests and perspectives, reflecting the EU's commitment to fostering the adoption of AI technologies while ensuring their responsible and ethical use.
As mentioned above, this comprehensive regulation is strategically structured around a risk-based approach, categorizing AI applications into four discernible levels according to their potential societal impact and inherent risks. At the forefront of this regulatory framework is the recognition that not all AI applications carry the same level of risk. Consequently, the risk-based approach within the AI Act serves as a sophisticated tool to tailor regulatory measures to the nature and potential consequences of these applications. This nuanced stratification into four levels - Unacceptable Risk, High Risk, Limited Risk and Minimal Risk - forms the cornerstone of the regulatory strategy.
The first level, deemed an “Unacceptable Risk”, encompasses AI applications considered so potentially detrimental to fundamental rights that they cannot be tolerated: in particular, all AI systems regarded as a clear threat to the security, livelihoods, and rights of individuals fall within this category and are, therefore, strictly prohibited.
The second level, characterized as “High Risk”, includes AI applications that, while not deemed unacceptable, still carry substantial risks. Here, regulatory measures aim to strike a balance between fostering innovation and ensuring the responsible deployment of AI. Entities developing or deploying high-risk AI systems are required to undergo thorough conformity assessments, ensuring compliance with established standards. This level acknowledges the potential impact of such technologies and seeks to mitigate risks through targeted regulatory interventions.
The third level, classified as “Limited Risk”, covers systems with a limited level of potential harm, exemplified by chatbots and deepfake technologies, for which only a baseline of transparency is required. In such instances, the AI Act imposes a duty to notify individuals whenever they interact with an AI system and mandates the disclosure of AI-generated content. This disclosure obligation aims to help differentiate manipulated outputs from authentic material.
The fourth level, identified as “Minimal Risk”, encompasses AI applications with lower inherent risks. While these applications are subject to fewer regulatory burdens than the higher-risk categories, they still fall within the scope of the AI Act. This tier recognizes the varying degrees of risk associated with different AI applications and calibrates regulatory requirements accordingly, promoting flexibility while maintaining oversight. For these AI systems, the AI Act recommends – without imposing – the adoption of a code of conduct establishing rules on data handling and retention.
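To make the tiered structure concrete, the following is a purely illustrative Python sketch of how hypothetical use cases might map to the four tiers and their corresponding obligations. The example use cases and the one-line obligation summaries are simplifications for exposition only; the actual classification is set out in the Act's annexes and legal text, not in this table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # conformity assessment required
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # voluntary codes of conduct

# Hypothetical use cases chosen only for illustration.
EXAMPLE_TIERS = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "cv_screening_for_recruitment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

# Highly condensed summaries of the obligations attached to each tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "strictly prohibited",
    RiskTier.HIGH: "conformity assessment and compliance with established standards",
    RiskTier.LIMITED: "notify users and disclose AI-generated content",
    RiskTier.MINIMAL: "voluntary code of conduct recommended",
}

def obligations_for(use_case: str) -> str:
    """Return the (simplified) obligation attached to a hypothetical use case."""
    tier = EXAMPLE_TIERS[use_case]
    return f"{use_case}: {tier.value} risk -> {OBLIGATIONS[tier]}"

if __name__ == "__main__":
    for case in EXAMPLE_TIERS:
        print(obligations_for(case))
```

The point of the sketch is simply that the regulatory burden is a function of the assigned tier, not of the technology as such: the same underlying model may fall into different tiers depending on its intended use.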
An area of particular scrutiny within the AI Act is that of generative AI systems, which, last May, had already been the focus of several regulatory interventions, aimed in particular at ensuring greater transparency on various aspects. Such systems have garnered heightened attention due to their unique capabilities and potential implications. Indeed, generative AI, exemplified by models like OpenAI's GPT, has the capacity to create content, be it text or multimedia, that closely emulates human-generated output. The operational mechanism of generative AI involves training the model on large datasets containing a diverse array of examples of the desired output. Throughout this training process, the model learns intricate patterns, associations, and structures within the data, enabling it to generate content that aligns with the learned patterns.
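As a deliberately simplified illustration of the "learn patterns, then generate" mechanism just described, the following Python sketch trains a toy word-level Markov chain on a miniature corpus and then produces new text that mimics its statistical patterns. Real generative models such as GPT rely on large neural networks trained on vast datasets rather than on a lookup table of this kind, so the sketch is only an analogy for the underlying idea.

```python
import random
from collections import defaultdict

def train(corpus: str, model=None):
    """Learn surface patterns: count which word tends to follow which."""
    model = model or defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, seed: str, length: int = 10) -> str:
    """Emit new text that follows the statistical patterns of the training data."""
    word, output = seed, [seed]
    for _ in range(length):
        candidates = model.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    corpus = ("the model learns patterns in the data "
              "and the model generates text like the data")
    model = train(corpus)
    print(generate(model, seed="the"))
```

Even this toy example makes the regulatory concern visible: the output is entirely determined by whatever the training data happened to contain, which is precisely why transparency about training and about the synthetic nature of the output matters.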
While generative AI is acknowledged for its potential to generate creative and contextually fitting content, concerns arise over its possible misuse and unintended consequences. One of the primary apprehensions involves the generation of misinformation and deepfakes, where the technology can produce convincingly fake content, such as videos or text, contributing to the proliferation of misleading information. Another aspect of concern is the potential for
malicious use, where individuals with harmful intent could employ generative AI to craft sophisticated cyber threats, such as deceptive phishing emails or social engineering attacks, leveraging the technology's capability to generate content that appears realistic and contextually appropriate. Additionally, issues of bias and unintended outputs arise, as the data used to train generative AI models may inherently contain biases that can be reflected in the generated content.
For all the reasons outlined above, it is essential to recognize that while generative AI holds promise for both positive and negative applications, its designation as "the most dangerous one" is subjective and contingent on the specific context of use, as well as on the ethical considerations surrounding its deployment. The AI Act proves to be aware of these aspects, as it strives to establish responsible practices, ethical guidelines, and regulatory frameworks to navigate and mitigate the potential risks associated with the application of generative AI technologies.
Within this framework there are undoubtedly several risks of criminal liability specifically associated with the actions of artificial intelligence, which raise numerous complex and evolving issues. Central to these concerns is the challenge of attributing responsibility in circumstances where AI systems are implicated in criminal activities. Indeed, determining liability through fault or wilfulness becomes intricate given the lack of conscious intent on the part of AI, which operates on the basis of patterns and training data. Unlike human actors, AI lacks the capacity for intentional wrongdoing, posing a fundamental obstacle to applying conventional notions of guilt.
Moreover, the opaque nature of decision-making processes in many AI implementations, grounded in sophisticated algorithms and deep neural networks, further complicates the task of ascertaining why a specific decision was made. This opacity challenges the conventional legal principles that underpin the assignment of responsibility. The existing legal framework may therefore prove inadequate in addressing criminal liability associated with AI, necessitating a comprehensive review and adaptation of laws to effectively address the novel challenges posed by this technology. Concerns also extend to the potential for AI to be manipulated or exploited for criminal purposes, for instance through hacking or the manipulation of training data: this raises questions not only about the actions of AI but also about the ethical and legal obligations of those who design, deploy, and maintain such systems.
In navigating these complexities, the role of corporations and AI developers must come under scrutiny. The question of whether and to what extent they should be held accountable for damages caused by the systems they create underscores the need for clear legal frameworks and preventive measures. As the discourse on criminal liability tied to AI continues to evolve, national legal systems are grappling with the imperative to adapt regulations to effectively address the intricate challenges posed by this transformative technology in contemporary society.
However, the AI Act, as proposed by the European Union, still does not tackle these issues of a criminal nature, focusing instead on the ethical, transparent, and secure use of artificial intelligence: it establishes specific rules for high-risk artificial intelligence systems, sets transparency requirements, and imposes control mechanisms.
In conclusion, the European Union's AI Act represents a significant step towards establishing a framework in the realm of artificial intelligence. Nonetheless, the realization of a robust system of criminal responsibility linked to AI necessitates the resolution of intricate challenges pertaining to the subjectivity of criminal conduct and the nuanced profiles of subjective liability, encompassing intent and fault. Addressing these complexities is paramount to ensuring a fair and effective legal framework that appropriately holds both AI systems and their human operators accountable for any potential criminal consequences. Achieving a harmonious balance in defining responsibility within the context of AI-driven activities will be crucial for fostering trust, ethical development, and the responsible deployment of artificial intelligence technologies across the European landscape.