Artificial Intelligence (AI) technology has the potential to revolutionize the cumbersome and often costly process of responding to federal government subpoenas or civil investigative demands (CIDs). However, before companies dive headfirst into these promising advancements, it’s essential that they weigh the potential benefits against the potential risks of using AI in such sensitive legal contexts.
Benefits
Currently, responding to a subpoena or a CID can consume substantial resources. Relevant custodians must be identified, their materials gathered and reviewed for responsiveness, and responsive documents produced to the government. At the same time, companies are identifying possible areas of exposure and strategizing about whether and how to address them with the government.
AI could significantly streamline these tasks. AI search tools could help identify responsive documents more efficiently, potentially reducing the need for employee interviews and extensive manual review. Chatbots could highlight exposure areas based on the subpoena or CID requests and even assist in drafting substantive responses. By completing certain tasks efficiently, AI can free legal teams and other personnel to focus on the most critical issues.
Downsides
There are, however, potential downsides to employing AI in responding to government subpoenas or CIDs. One significant concern is the impact of replacing human actors with AI on attorney-client privilege and work product protections.
Questions arise, such as whether the input prompts entered into an AI tool should be considered attorney work product. Arguably they would be, if an attorney or someone acting under an attorney's direction crafts the prompts. But what about the AI outputs? The government might argue for access to the outputs on the ground that they are factual data, which typically is not privileged.
The government might also contend that AI outputs aren't protected because work product protections apply only to humans. These emerging debates echo similar discussions currently occurring in the Copyright Office as it determines how much human input is necessary to grant copyright protection to works largely or solely created by AI.
The government could also seek greater involvement in company responses to subpoenas or CIDs when AI tools are available. If the government learns of a company's AI capabilities, it might attempt to use those tools to run its own queries and prompts. This could lead to uncomfortable situations in which companies are pressured to give the government access to their AI tools.
Managing Risks
Companies must therefore take action to manage these potential risks. While forgoing AI tools altogether might seem a solution, it likely isn't sustainable in the long run, or even the short term. Instead, companies could limit AI use to specific departments or functions so that the legal department isn't directly leveraging AI. This could help avoid some of the issues discussed above while still allowing the rest of the company to benefit from AI.
Another strategy is to engage outside counsel to handle the bulk of responding to government requests. Such counsel could be granted temporary access to internal AI tools, enabling the company to harness the advantages of AI in responding to subpoenas or CIDs while standing in a stronger position to push back against aggressive AI-related demands from the government.
Structuring the use of AI wisely is clearly essential. Companies must ensure that AI usage is adequately supervised and its outputs closely monitored by humans, including attorneys, to mitigate the associated risks.