Generative artificial intelligence (AI) is becoming an increasingly significant tool in the legal profession, particularly in litigation discovery. However, this innovation poses risks that must be carefully managed to keep client documents protected. Lawyers have used AI-based tools for more than a decade, but the scope and power of those tools have grown substantially with the advent of generative AI technologies such as ChatGPT.
Traditionally, legal professionals have relied on confidentiality stipulations or protective orders to govern the use and dissemination of documents designated as confidential. These stipulations typically specify who may access the documents, how they may be used during litigation, and what happens to them after the litigation concludes. Given generative AI's ability to process and summarize large document sets efficiently, practitioners need to consider carefully how AI tools fit within these confidentiality agreements. Bloomberg Law discusses this in detail, urging attorneys to update their strategies to safeguard client data.
One of the primary concerns is that information fed into public generative AI models may surface in responses generated for other users or be incorporated into the datasets used to train those models. The risk is significant enough that several financial institutions have banned the use of public AI programs for work-related tasks to prevent inadvertent exposure of sensitive information; Tech.co and Forbes report that firms such as Goldman Sachs and Citigroup have imposed such bans.
For litigants, a critical question is whether to adapt existing confidentiality stipulations to account for generative AI. One approach is to include provisions that explicitly restrict the use of AI to summarize or otherwise process confidential documents. Another is to specify which AI vendors and programs are secure enough to be trusted with confidential information. Companies and legal professionals should also consider segregating confidential documents from AI systems entirely to avoid unintended exposure of sensitive data.
As AI technologies continue to evolve, legal professionals must regularly reassess their confidentiality agreements and other protective measures. This proactive approach will help ensure that the benefits of AI can be harnessed without compromising the security of sensitive information.
For more detailed analysis, refer to the full article by Bloomberg Law.