Just over a year after generative AI entered common use, in-house counsel are still adapting to its deployment within their companies. At the same time, they are preparing for a wave of regulations and requirements aimed at the technology.
Last December, the European Union set the pace when it reached political agreement on its comprehensive AI Act. Although full details have yet to be published, the legislation will impose wide-ranging regulations and obligations on developers and users of AI, keyed largely to the potential risks of specific use cases – a development that will affect many US corporations. Bloomberg Government News has reported further details on the agreement.
While the United States has not yet enacted sweeping federal AI legislation, President Joe Biden issued a far-reaching executive order last October. The order calls for standards and assessments for AI models, highlighting safety concerns such as cybersecurity and biosecurity. To ensure that AI technologies are safe, companies will need to focus on “red-teaming” – proactively probing systems to identify and address potential vulnerabilities.
Within the United States, AI laws are springing up at both the state and local levels. State bar associations are drafting ethical guidelines for lawyers who use AI, and courts are weighing the technology's intellectual property implications. Meanwhile, in-house counsel are still grappling with questions that emerged in 2023: who will lead their internal AI governance initiatives, and how the technology should be used appropriately. They expect 2024 to bring more clarity on these open questions of AI law.
For a more comprehensive analysis of this topic, refer to the original Bloomberg Law report.