The SciTech Lawyer

Global AI

Beyond Innovation: The Legal and Ethical Challenges of AI

Ericka Watson

Summary

  • AI advancements, such as Agentic AI and AGI, offer opportunities but require ethical safeguards to avoid biases and societal inequities.
  • Legal frameworks, like the EU’s AI Act and NIST’s AI Risk Management Framework, aim to mitigate risks of algorithmic discrimination and privacy violations.
  • The rise of open-source AI models like DeepSeek promotes transparency, but governance remains critical for fairness and accountability.
  • AI’s future depends on the collaboration of technologists, policymakers, and legal professionals to balance innovation with ethical responsibility.

Artificial intelligence (AI) has evolved from a novelty into a foundational technology available to everyone, shaping industries, influencing policy frameworks, and potentially redefining global power structures. Rapid advancements in AI, including Agentic AI's enhanced automation and decision-making capabilities and DeepSeek's improvements in formal reasoning tasks, highlight the increasing usefulness and accessibility of AI-driven solutions. Simultaneously, development and research on artificial general intelligence (AGI) are growing in pursuit of AI systems capable of human-equivalent cognitive abilities, including reasoning, problem-solving, and cross-domain adaptability. While these developments present significant opportunities for economic growth, scientific discovery, and operational efficiency, they amplify the critical need for rigorous governance frameworks and ethical safeguards to mitigate risks associated with bias, misinformation, and societal inequities. The direction of AI will be determined by the policies and strategies implemented today, making it imperative for researchers, policymakers, and industry leaders to establish frameworks that serve as a catalyst for equitable progress rather than an amplifier of existing disparities.

A Transformative Era for AI

January 20, 2025, marked a significant shift in AI policy under the second Trump administration, characterized by a series of executive orders, one of which redefined the country's approach to AI development. The Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, introduced by the Biden administration, was replaced by the Executive Order on Removing Barriers to American Leadership in Artificial Intelligence. This shift represents a strategic emphasis on competitiveness, scalability, and dominance, reinforcing the United States' goal of continuing to lead the global AI race. However, these policy realignments raise critical questions about the balance between fostering innovation and ensuring responsible development.

At this very moment, the legal community can play a critical role in shaping the future of AI governance. By fostering collaboration between policymakers, technologists, lawyers, and ethicists, we have the opportunity to create regulatory frameworks that empower innovation while safeguarding fundamental rights and ethical principles. The question is no longer whether the United States can lead in AI innovation, but whether it can do so responsibly. The path forward is not about erecting walls but about building bridges so that AI serves as a force for collective progress rather than deepening existing inequalities.

Legal and Ethical Challenges in AI Development

For legal professionals, the evolving AI landscape introduces complex challenges at the intersection of AI innovation, privacy, intellectual property, and data governance. The rise of generative AI, large language models, and autonomous systems has introduced new risks to fundamental human rights. Article 5 of the European Union's AI Act recently entered into force, explicitly prohibiting AI systems that pose unacceptable risks, including real-time biometric surveillance in public spaces, social scoring systems, and AI-driven manipulation that exploits vulnerabilities based on age or disability. Regulatory frameworks like the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the Colorado Artificial Intelligence Act (CAIA) also offer guidance on responsible AI deployment by outlining best practices, risk assessment methodologies, and compliance requirements aimed at ensuring fairness, transparency, accountability, and the protection of individuals' rights in AI systems. However, the rapid pace of AI innovation demands proactive and adaptive legal strategies, such as dynamic contracting, AI ethics committees, and stakeholder engagement, to support compliance and mitigate ethical and societal risks. Although these regulations aim to protect individuals from algorithmic discrimination, the potential for misuse and the ethical dilemmas it raises remain real. To balance openness to innovation with the protection of human rights, a multidisciplinary approach that integrates legal, technical, and societal perspectives into AI development is imperative.

AGI research, if not conducted thoughtfully, can present significant risks related to autonomous decision-making and the societal consequences of highly advanced AI systems. Notably, Nobel laureate and AI pioneer Geoffrey Hinton has warned of the risks posed by AGI's potential acceleration, emphasizing the difficulty of preventing bad actors from exploiting AI for harmful purposes, including creating autonomous systems that could be weaponized or used maliciously for cyberattacks, surveillance, or the spread of disinformation. As AI capabilities expand, multidisciplinary collaboration between legal experts, policymakers, technologists, and ethicists will be essential to ensure AI serves humanity; without such guidance, there is a risk of undermining fundamental rights.

For the legal community, this means AI regulation is no longer a distant issue; it's happening now. Corporate counsel, privacy officers, and compliance teams must navigate a complex, multijurisdictional AI regulatory environment. Lawyers should be proactive and advise their clients and businesses on how to comply with and be good stewards of AI transparency and accountability, what governance structures to implement in order to mitigate AI-related risks, and how to future-proof AI policies to align with emerging global regulations.

The Rise of Agentic AI

Agentic AI describes advanced AI systems capable of autonomous decision-making and task execution. While still in its early, transformative stage, this technology is already reshaping how professionals in law, health care, finance, and customer service interact with AI. By interpreting complex queries, analyzing vast datasets, and generating actionable insights in real time, Agentic AI is driving efficiency, accessibility, and innovation across sectors.

In the legal field, AI-powered tools are automating tasks such as document review, legal research, and compliance monitoring. Platforms like Lexis+ AI assist lawyers in identifying relevant case law, while Zuva streamlines contract review and due diligence processes. A solo practitioner handling complex litigation could leverage these AI tools to quickly cross-reference statutes, draft legal arguments, and determine compliance with jurisdictional requirements. Similarly, in health care, AI-driven systems like IBM Watson Health and Google's DeepMind are transforming patient care by assisting in diagnosing diseases, predicting patient outcomes, and optimizing hospital operations. A primary care physician, for instance, could use AI to review the latest clinical guidelines for a rare condition and deliver evidence-based care without delay.

Despite its potential, the widespread adoption of Agentic AI raises critical concerns around privacy, bias, and accountability. In health care, compliance with patient privacy laws such as HIPAA remains essential, while financial institutions must address algorithmic bias in lending and investment decisions to prevent discriminatory outcomes. In one high-profile example, Wells Fargo was found to charge community college borrowers $1,134 more on a $10,000 loan than four-year students. Similarly, Upstart, a lending platform that uses AI and machine learning to assess creditworthiness and automate loan approvals, assigned higher APRs and fees to graduates of HBCUs and Hispanic-serving institutions, with a typical Howard University graduate paying $3,499 more over five years than a similarly situated NYU graduate with identical credentials. As AI systems take on more complex decision-making responsibilities, questions of fairness, transparency, and oversight in AI-powered solutions must be addressed: Who is accountable when an AI-generated recommendation leads to an adverse outcome? What mechanisms must be in place so AI systems remain fair, transparent, and aligned with legal and ethical standards? What assurances are needed to ensure unbiased data inputs and algorithmic transparency? These concerns should be addressed collaboratively to balance innovation with ethical responsibility.

As Agentic AI continues to evolve, its influence will continue to expand across sectors, reshaping the future of work, decision-making, and industry operations. However, its success will be defined not just by its technical capabilities but by the strength of the governance frameworks that accompany it. Prioritizing transparency, accountability, and compliance will be essential in decision-making while upholding ethical standards and maintaining trust. The challenge ahead is not just harnessing AI’s power but making sure it benefits society while mitigating risks, requiring diligence, adaptability, and commitment to innovation without compromising trust or integrity.

Introducing Open-Source AI

In January 2025, the AI landscape experienced a significant shift with the accelerated adoption of DeepSeek, a Chinese-developed AI model reportedly notable for its efficiency and cost-effectiveness. DeepSeek's emergence has democratized access to advanced AI capabilities, offering the technology at a fraction of the cost of its competitors. It has brought attention to open-source AI models, offering an alternative to the dominance of proprietary systems. DeepSeek's commitment to open-source frameworks challenges the industry's preference for secrecy, providing businesses, researchers, and developers with greater visibility into model architecture, training data considerations, and operational mechanics. While open-source models don't guarantee complete transparency in AI decision-making, they promote greater accessibility and encourage collaborative innovation. DeepSeek itself is not fully open: aspects of its training data and internal workings remain undisclosed. However, its emphasis on transparency in how it communicates with users makes it more accessible than completely opaque systems.

The movement toward open-source AI models and more transparent decision-making frameworks represents a fundamental shift. As regulators tighten their focus on AI accountability, companies must be proactive in developing AI governance strategies and building transparency into AI adoption from the start. The accelerated adoption of DeepSeek underscores a larger inflection point in AI governance. Whether AI models are fully proprietary, partially transparent, or open-source, one thing is certain: Governance and accountability must remain a priority.

Commitment to Responsible AI

The acceleration of AI innovation presents both extraordinary opportunities and significant responsibilities. Tools like AGI, Agentic AI, and DeepSeek illustrate how AI can drive progress across industries, but their adoption must be guided by strong governance, ethical oversight, and accountability.

The legal and regulatory community has a pivotal role to play in AI development as a force for collective progress rather than as a driver of systemic inequities. As the United States redefines its AI leadership strategy, the focus should balance technological advancement with responsible, inclusive, and transparent AI governance. AI innovation will not slow down, and neither should our commitment to responsible AI. The question is whether the United States will lead the AI race responsibly and harness AI’s transformative power to benefit all of society.
