Navigating Corporate Legal Challenges in the Era of Generative AI

The advent of ChatGPT in late 2022 and the subsequent acceleration of generative AI technologies have swept through the legal departments of corporations worldwide. In-house lawyers have found themselves tasked with devising appropriate governance for the use of this new technology, with primary concerns centered on the protection of confidential business and customer data and the need for human checks and balances to counter flawed information output by AI systems. This concern was notably expressed by Dan Felz, a partner at Alston & Bird in Atlanta, who highlighted that from a company’s perspective, generative AI is the first technology that “can violate all our policies at once.”

Strategies for AI Oversight

Amber Ezell, policy counsel at the Future of Privacy Forum, suggests that organizations assign a dedicated individual or team to handle AI governance and compliance. This responsibility often falls under the purview of the chief privacy officer, but Ezell notes that AI issues extend beyond the realm of privacy. In line with this suggestion, corporations like Toyota Motor North America have set up AI oversight groups composed of specialists in various domains, ensuring that each use of generative AI is handled judiciously and that business benefits remain balanced against risks.

For companies seeking guidance in writing their own generative AI policies, the Future of Privacy Forum has published a checklist. Companies such as Salesforce have also been proactive in pursuing the ethical use and governance of AI technologies. Paula Goldman, chief ethical and humane use officer at Salesforce, acknowledged that while the firm has been using AI for several years, generative AI has introduced new areas of concern.

Policies & Regulations in Formation

The explosive growth and swift adoption of generative AI have pressured corporations to draft and enforce policies before they could fully comprehend the implications of the technology. Katelyn Canning, head of legal at fintech startup Ocrolus, described the introduction of ChatGPT as an abrupt development that demanded urgent policymaking.

According to a survey by the McKinsey Global Institute, only 21% of organizations employing AI reportedly had pre-existing policies governing the use of generative AI. As companies continue to draft and update their policies, they must weigh multiple potential legal issues, including security, data privacy, employment law, and copyright law.

Corporations are also preparing for targeted AI regulation under review in jurisdictions such as the EU and Canada. By monitoring regulatory conversations, they are shaping AI governance policies that anticipate regulators’ questions. As Caitlin Fennessy, vice president and chief knowledge officer at the International Association of Privacy Professionals, points out, organizations are drawing on existing privacy and anti-discrimination frameworks while building their AI governance programs.

Companies are also aware of the serious potential for a security breach or data privacy violation arising from the improper handling of sensitive information by generative AI. In addition, concerns about the accuracy of AI outputs and their tendency to “hallucinate,” or produce incorrect information, necessitate corporate checks and measures to ensure accuracy and accountability.

This article highlights not only the potential risks and challenges linked with the deployment of AI, but also the ongoing efforts and strategies companies are employing to mitigate them. AI policy promises to be an area of rapid development as the technology continues to evolve.