Judge Paul Engelmayer, sitting in the Southern District of New York, recently made public his skepticism towards the application of artificial intelligence in legal matters. Engelmayer's sentiments followed an unsuccessful fee request that was calculated, in part, using the AI tool ChatGPT. The criticized fee submission was made by the Cuddy Law Firm on completion of a case tied to the Individuals with Disabilities Education Act (IDEA). The judge's opinion described the firm's reliance on the tool as "misbegotten" from the start.
The law firm used a variety of reference points to argue that awards traditionally allocated in IDEA cases severely underestimate the compensation warranted within their practice area. The substantiating data included reports such as the Real Rate Report by Wolters Kluwer, the 2022 Litigation Hourly Rate Survey and Report by the National Association of Legal Fee Analysis (NALFA), the 50th Annual Survey of Law Firm Economics (ASLFE), and the Laffey Matrix. Despite these references, the judge found flawed the assumption that administrative actions under IDEA should be compensated on par with representing corporate giants such as Goldman Sachs.
In conjunction with the aforementioned reports, the firm also used output from a ChatGPT search as auxiliary data to "cross-check" its fee request. The firm defended the relevance of this AI-generated data, arguing that potential clients, such as parents seeking representation in IDEA cases, would likely use a tool like ChatGPT to gauge the market price for hiring a lawyer – a notion the judge found unconvincing.
In support of this stance, the judge further referenced two recent cases from the Second Circuit. Criticizing the use of ChatGPT, he highlighted that the AI tool was unable to distinguish between real and fictitious case citations. This led to a broader discussion of whether AI's faults lie solely in fabricated data (hallucinations) or whether they also include the tool producing inaccurate interpretations of genuine data.
Although AI's role was rejected in this particular case, the episode raises broader questions about the implications, benefits, and limitations of AI applications in the legal world.
You can learn more about other instances of ChatGPT’s challenges in similar settings by reading here and here.