Governments worldwide are investing considerable effort in exploring how to harness Artificial Intelligence (AI) while mitigating its potential risks. Working out how to regulate AI effectively has been likened to a complex game of chess, shaped by policymakers’ views across six dimensions: potential risks, risk mitigants, the targets of policies, safeguarding AI opportunities, perceived urgency, and regulatory strategies. Businesses that develop or use AI need to monitor how AI regulation unfolds, focusing on the key themes emerging from different countries’ strategies. Because AI regulation is constantly evolving, they will often need to comply with the rules of the most prescriptive country in which they operate, and they will need to bring on board the right experts to oversee this ongoing effort.
In 2023, 28 countries and the EU signed the Bletchley Declaration, which outlines a dozen potential AI risks, many of which intersect with existing laws. Some AI systems may collect or output data in ways that engage intellectual property, privacy, and contractual rights, but in the AI context there can be disagreement and uncertainty over how existing laws apply and how adequate they are.
Policymakers typically address these risks through a common set of mitigants. These include disclosing and explaining the use of AI; ensuring that AI systems are fair, robust, and safe; providing oversight of how AI is used and accountability for its outcomes; imposing additional governance on AI models with many possible uses; and allowing individuals to challenge harmful outcomes or decisions caused by AI.
Deciding which organizations policies should target is another key dimension. Policymakers may target developers of AI systems, organizations that deploy them, or others in the supply chain. Each actor has different responsibilities, degrees of control, capabilities, incentives, and challenges. Policymaking often distinguishes between developers and deployers of AI systems, with laws such as China’s generative AI rules and the draft EU AI Act imposing more obligations on developers.
The potential of AI to enhance societies is widely accepted, and countries are increasingly encouraging AI-related innovation, industries, and jobs through measures such as financial support, innovation forums, and reskilling initiatives.
States are also aware of the AI opportunities within their reach and are likely to tread cautiously to avoid actions that could undermine them, as seen when France and Germany lobbied for changes to the draft EU AI Act to protect their AI start-ups. Perceived urgency heavily influences strategy, and approaches differ even between close neighbors. EU institutions, for example, consider that certain AI poses significant risks requiring immediate action, and have pursued comprehensive AI-specific legislation. By contrast, the UK government takes the view that premature binding measures could fail to address risks at a pace that keeps up with the evolving AI landscape, or could simply stifle innovation.
The chess-like dynamic also appears in the different strategies for regulating AI: general, wide-ranging AI-specific laws (such as the draft EU AI Act); laws targeting specific AI uses or technologies (such as China’s rules on generative AI); updating existing technology- and sector-neutral laws; voluntary codes and standards; and international action. Policymakers broadly recognize the need to monitor, scrutinize, and learn from each other’s strategies to strengthen their own approaches.
Lastly, diverging national approaches to AI regulation risk burdening the AI economy and stunting innovation. As a result, global businesses developing or deploying AI will need a well-coordinated team of experts to play this intricate game of six-dimensional AI regulatory chess.
The original article was written by Giles Pratt, Natasha Good, and Beth George, partners at Freshfields, and can be read here.