chevron-down Created with Sketch Beta.

GPSolo Magazine

GPSolo November/December 2024 (41:6): Hybrid Law

AI Decision-Making: Legal and Ethical Boundaries and the Mens Rea Dilemma

Daniel Gall

Summary

  • The rapid advancement of artificial intelligence (AI) raises complex questions about applying legal concepts, such as mens rea, to AI systems.
  • Applying the concept of mens rea to AI systems presents several challenges—these systems lack consciousness, emotion, or intention in the human sense.
  • We must develop new frameworks that shift from the mental state of intention or knowledge of wrongdoing and focus instead on risk, control, and foreseeability.
AI Decision-Making: Legal and Ethical Boundaries and the Mens Rea Dilemma
krisanapong detraphiphat/Moment via Getty Images

Jump to:

A computer can never be held accountable. Therefore, a computer must never make a management decision.
—1979 IBM presentation.

The above statement from IBM might still be valuable, but we are, indeed, about to witness something new for which we might need more time to be ready. The rapid advancement of artificial intelligence (AI) has sparked a heated debate about the role of computers in management decisions and the legal implications of AI systems making potentially harmful choices. As some argue that the principle of “no responsibility without authority” remains relevant, others contend that the changing times could lead to a significant shift toward computers taking on more decision-making roles in management. This shift raises complex questions about applying legal concepts, such as mens rea, to AI systems.

Mens rea, a fundamental concept in criminal law, refers to the mental state of intention or knowledge of wrongdoing when committing a crime. Traditionally, determining an individual’s guilt requires an analysis of their mental state at the time of the offense, considering factors such as intent, recklessness, knowledge, or negligence. However, applying the concept of mens rea to AI systems presents several challenges—these systems lack consciousness, emotion, or intention in the human sense. As AI becomes more sophisticated and autonomous, the legal system must develop new frameworks that account for the distinct nature of artificial intelligence.

Legal and Theoretical Challenges

Lack of Conscious Intent in AI Systems

AI systems operate based on algorithms, data inputs, and programmed instructions rather than conscious decision-making or an awareness of right and wrong. As a result, it is difficult to argue that AI can possess the intent or even negligence required for mens rea. AI systems act based on machine learning, probabilistic models, or programmed rules, meaning they don’t have a subjective understanding of their actions.

Delegating Accountability

When AI systems cause harm, determining who should bear responsibility is tricky. Should it be the developer, the company that deployed the AI, or the user? We may argue whether responsibility should be attributed based on foreseeability, whether the harm is predictable, or the level and type of control exerted over the AI system. However, this doesn’t align neatly with traditional mens rea categories, which focus on mental state.

New Models of Responsibility

Some scholars propose moving away from human-centric concepts such as mens rea and instead adopting new frameworks for AI. These might include:

  • Strict liability. AI developers or users could be held liable regardless of their mental state simply because they deploy the system.
  • Risk-based liability. Parties could be held responsible based on the risk level they introduce using AI systems.
  • Corporate or vicarious liability. AI systems could be treated more like agents for which a company or individual might bear responsibility without needing to prove intent.

Levels of Autonomy and Mens Rea

AI systems operate across different levels of autonomy. For low-autonomy systems (e.g., those directly controlled by humans), it may be easier to attribute the human operator’s mental state to the system. In contrast, for highly autonomous systems that act independently (e.g., self-driving cars), mens rea becomes more challenging to apply, as human control is minimal.

Ethical and Philosophical Considerations

Legal theorists also grapple with the ethical implications of applying human legal principles such as mens rea to non-human entities. Is it appropriate or even possible to attribute responsibility to machines as we do to people? Some argue that focusing on the intent behind the human designers and users of AI might be more practical than trying to apply mens rea directly to the AI itself.

Legal Frameworks for AI Liability

Electronic Personhood

Some proposals suggest that highly autonomous AI systems could be given a form of legal personhood. This would allow them to be held “responsible” for specific actions, though in a different way than human individuals are. Under this model, AI systems could theoretically have a legal identity, including rights and responsibilities, influencing how we think about mens rea.

AI Actus Reus

An AI system’s actus reus (the physical act of committing a crime) could still be evaluated, even if mens rea is problematic. Legal systems might focus on whether the AI’s actions resulted in harm and whether those actions were foreseeable, even if intent or knowledge is not applicable.

Risks of Cyber Crimes in Determining Mens Rea

An additional layer of complexity for these issues is introduced by the threat of cyber-attacks. Hackers, predominantly in Russia and North Korea, have been leveraging AI to enhance their cyber-operations. They use AI to refine their tactics, such as improving phishing emails, gathering information on vulnerabilities, and troubleshooting technical issues.

This evolution in their methods poses significant risks, including the risks associated with modifying AI model prompts:

  • Prompt injection. This involves manipulating the inputs to AI models to produce harmful outputs, potentially leading to misinformation or malicious actions.
  • Model poisoning. Hackers can alter the training data or the model itself to introduce vulnerabilities, which can be exploited later.

The use of AI in malicious cyber operations raises ethical issues and increases the complexity of defending against such attacks.

The Need to Shift Focus

The application of mens rea in artificial intelligence represents an evolving area of legal theory. Rather than applying conventional human concepts such as mens rea, we should create new responsibility frameworks tailored to AI systems. This perspective shifts focus to risk, control, and foreseeability, emphasizing that the primary responsibility lies with the human agents who design, deploy, or monitor AI systems rather than trying to ascertain the AI’s intent or mental state. By proactively addressing these issues, the legal community can help ensure that the integration of AI into decision-making processes is done responsibly and with appropriate safeguards in place.

    Author