Navigating the Global Landscape of Evolving AI Regulations and Risk Management

Law.com International reports that general counsel are increasingly under pressure to manage AI risk within their organisations, even as regulators globally race to define the rules of engagement.

In the U.S., AI regulation remains at an early stage. Lawmakers have held dialogues with tech executives, and several bills aimed at controlling the technology are under preliminary discussion, but no definitive AI-specific legislation is imminent.

Meanwhile, the European Union (EU) has been proactive, having begun drafting AI regulation several years ago. That head start makes the EU's AI Act one of the most advanced pieces of AI regulation currently under negotiation. The proposed Act seeks to ensure that AI systems are transparent, reliable, and safe, and that they respect fundamental rights and values. It also holds companies accountable for ensuring their AI tools operate within established rules. The Act is currently awaiting agreement from the European Council and Commission.

While some businesses see regulation as a potential tool to stave off a “race to the bottom,” others worry that compliance costs could stifle innovation and discourage startups. More than 150 companies, including Renault and Siemens, signed a letter urging the EU to reconsider its draft AI Act, contending that it could harm innovation and job creation.

The United Kingdom (UK), however, is taking a distinct path. Rather than crafting new AI-specific laws, the UK government intends to expand the powers of existing industry regulators to oversee AI.

This summary is based on an article available via Law.com International. Please note that full access to this article may be subject to a paywall.