AI Misuse in Legal Research: The Perils of Relying on Generative Tools

The question remains: when will legal professionals realize that commercially available generative AI tools are not suitable for conducting legal research? Despite their many uses, these tools are not designed for that purpose, and relying on them can lead the software to invent fictitious cases to satisfy a query, often to the user's detriment.

Courts are unlikely to look kindly on citations to made-up cases or authorities, and there are now numerous examples of exactly that. Echoing this sentiment, the Second Circuit referred attorney Jae S. Lee to a grievance panel for citing a fake case produced by ChatGPT in a complaint without verifying its authenticity.

A Law360 report details the circumstances that led Lee to cite a non-existent case in a legal filing. In the panel's view, attorneys have, at a minimum, an obligation to read and verify the cases they cite. The panel found that Lee had presented a false representation of law to the court without making a reasonable inquiry into its validity.

While specific rules governing AI use in a legal context might be beneficial, the absence of such rules does not give lawyers a free pass. According to the panel, licensed attorneys are expected to ensure the accuracy of their court submissions, with or without rules directly addressing AI.

As the fallout from the misuse of AI tools in legal research continues, it is high time for attorneys to reconsider their tools and strategies. The warnings have been sounded repeatedly, and the profession is watching to see how this unfolds. The full account of this issue can be found in Kathryn Rubino's article at Above the Law.