Implications of the EU AI Act: Challenges and Opportunities for US Businesses

The European Parliament, the Council of the European Union, and the European Commission have recently agreed on the critical components of what could become the world's first comprehensive regulation of artificial intelligence: the EU AI Act. Although the final text has not yet been made public, early indications are that it could substantially affect a broad array of organizations, including U.S.-based businesses with little or no presence in the EU.

Together with existing EU regulations such as the General Data Protection Regulation, the AI Act (AIA) appears poised to bring significant changes for U.S. businesses across a range of sectors and industries. One distinguishing feature of the AIA is its risk-based approach: certain AI systems deemed to infringe fundamental rights will be banned outright within the EU, while others will be classified as presenting either limited risk or high risk.

Several open questions and takeaways remain for U.S. businesses. Although the Act may not be fully implemented until sometime between 2025 and 2026, it is paramount that organizations understand the duties that could be placed upon them. Systems presenting limited risk would be subject to certain transparency requirements, while high-risk systems would be obligated to carry out risk assessments, adopt specified governance frameworks, and maintain an appropriate level of cybersecurity.

The rapid evolution of AI brings a raft of challenges. For instance, the AIA framework could become outdated and fail to keep pace with the speed of AI development. It is also unclear how far the law's extraterritorial reach will extend to U.S. entities with no operations in the EU. Initial drafts suggest, however, that the law will apply to providers placing AI systems on the market or putting them into service within the EU, whether or not those providers are established in the EU.

U.S. businesses in particular should closely monitor the AI Act's rules governing general-purpose AI. The Act will likely require enterprises to carefully manage the risks associated with general-purpose AI, which may include providing technical documentation and detailed summaries of the content used to train these systems, a task that could prove difficult for many.

The final scope and impact of the AIA on U.S. businesses remain uncertain. Businesses can nonetheless begin to prepare by implementing and maintaining an AI governance framework, promoting a culture of responsible AI deployment, adopting robust policies, and maintaining transparency mechanisms.

The author of the article is Peter Stockburger, managing partner of Dentons' San Diego office, a member of the venture technology and emerging growth companies group, and co-lead of the autonomous vehicle practice. He advises that businesses focused on responsible, safe, and ethical AI development and deployment stand a better chance of capturing market share and meeting the evolving needs of customers, partners, and regulators.