Navigating Legal Frontiers: The Evolution of Agentic AI and New Regulatory Challenges

As agentic artificial intelligence (AI) systems—capable of autonomous decision-making and action—become increasingly integrated into various sectors, existing legal frameworks are being tested in unprecedented ways. Two notable statutes, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act and the Colorado AI Act, are at the forefront of this legal evolution.

The NO FAKES Act, reintroduced in April 2025, would establish a federal right of publicity, giving individuals control over digital replicas of their voice and visual likeness, including replicas generated by AI. The legislation defines a digital replica as a “newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual,” whether living or deceased. The Act would create a legal framework for licensing such replicas, with provisions addressing liability, safe harbors, and statutory exceptions. Notably, it introduces a notice-and-takedown mechanism obligating online service providers to promptly remove unauthorized digital replicas upon notification, an approach modeled on existing copyright law and adapted to the challenges posed by AI-generated content. ([en.wikipedia.org](https://en.wikipedia.org/wiki/No_Fakes_Act?utm_source=openai))

At the state level, Colorado’s AI Act, enacted in May 2024, targets developers and deployers of “high-risk AI systems”: those that make, or are a substantial factor in making, consequential decisions in areas such as employment, housing, and healthcare. The Act imposes a duty of reasonable care to protect consumers from algorithmic discrimination and requires deployers to implement risk management programs and conduct impact assessments of their high-risk systems. However, the political landscape has introduced additional complexities. The U.S. House of Representatives passed, as part of a 2025 budget reconciliation bill, a proposed 10-year moratorium on state-level AI regulations that, if enacted, could stall state-driven initiatives like Colorado’s. This federal intervention underscores the tension between fostering innovation and ensuring ethical AI deployment, highlighting the need for a cohesive regulatory approach. ([joneswalker.com](https://www.joneswalker.com/en/insights/blogs/ai-law-blog/when-ai-acts-independently-legal-considerations-for-agentic-ai-systems.html?id=102kdl4&utm_source=openai))

The application of these statutes to agentic AI systems raises several legal considerations. For instance, in July 2024, a federal district court in California allowed a discrimination case to proceed against Workday, a provider of HR and finance software, under an agency theory of liability. The court reasoned that because Workday’s AI screening tools perform functions traditionally carried out by an employer, such as evaluating and rejecting job applicants, Workday could plausibly be treated as an agent of its employer customers and thereby share liability for the AI’s decisions. This case exemplifies the evolving legal interpretations as AI systems assume more autonomous roles within organizations. ([thetmca.com](https://www.thetmca.com/artificial-intelligence-launching-agentic-ai-in-an-uncertain-u-s-regulatory-landscape/?utm_source=openai))

Furthermore, the autonomous nature of agentic AI challenges traditional notions of authorship and liability. A 2025 study highlighted the difficulty of attributing specific creative elements to either human or machine when their contributions are irreducibly entangled. This blurring complicates the application of existing intellectual property law, which presumes a clear distinction between human and non-human creators. ([arxiv.org](https://arxiv.org/abs/2504.04058?utm_source=openai))

In conclusion, as agentic AI systems continue to evolve, they expose gaps and ambiguities in current legal frameworks. The NO FAKES Act and the Colorado AI Act represent proactive efforts to address these challenges, yet their effectiveness will depend on ongoing legal interpretations and potential federal interventions. Legal professionals must remain vigilant, adapting to the dynamic interplay between technological advancements and regulatory responses.