As excitement and trepidation surround generative AI (GenAI) tools such as Generative Pre-trained Transformers (GPT), legal professionals remain poised to harness their potential. Despite all the hype, it's crucial to separate what is actually working from what is not in AI's ever-changing landscape.
Common pitfalls surround GenAI, including inputting confidential firm or client information into ChatGPT, hallucinations, and the absence of links to authoritative sources for verification. However, organizations can license a private instance of ChatGPT through Microsoft's Azure cloud service and apply settings that protect confidential information, a potentially excellent answer to the often-raised confidentiality concern.
Increasingly, teams are finding that Large Language Models (LLMs), such as the models behind ChatGPT, excel at language processing and translation. The technology can effectively reword documents to facilitate better communication, summarize information, manipulate documents, and even extract data. In fact, when given prompts for tasks like summarizing legal briefs or outlining key legal issues, GenAI performs admirably. Innovative techniques such as RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval) are emerging, allowing for large-scale summarization by creating hierarchies of summaries that a system can search through and summarize across documents.
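To make the hierarchy-of-summaries idea concrete, here is a minimal sketch of RAPTOR-style recursive summarization. The `summarize` function is a stand-in for a real LLM call (here it just truncates and joins text), so only the tree-building structure should be taken as illustrative:

```python
# A minimal sketch of RAPTOR-style hierarchical summarization.
# `summarize` is a placeholder for an LLM call; it truncates and
# joins text so the shape of the summary tree stays visible.

def summarize(texts: list[str]) -> str:
    """Placeholder for an LLM summarization call."""
    return " / ".join(t[:40] for t in texts)

def build_summary_tree(chunks: list[str], group_size: int = 2) -> list[list[str]]:
    """Recursively summarize groups of chunks until one root summary remains.

    Returns every level of the tree, leaves first, so a retrieval
    system can search at any level of granularity.
    """
    levels = [chunks]
    while len(levels[-1]) > 1:
        current = levels[-1]
        next_level = [
            summarize(current[i:i + group_size])
            for i in range(0, len(current), group_size)
        ]
        levels.append(next_level)
    return levels

briefs = ["Brief A: breach of contract claim...",
          "Brief B: motion to dismiss...",
          "Brief C: discovery dispute...",
          "Brief D: summary judgment standard..."]
tree = build_summary_tree(briefs)
```

With four briefs and a group size of two, the tree has three levels: the original chunks, two intermediate summaries, and a single root summary spanning all documents.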
Equally promising is the Retrieval Augmented Generation (RAG) approach, which challenges the common misconception of GenAI as a database. It involves accessing or searching a controlled data set or database and combines GenAI's mastery of language and summarization with that controlled data to produce better, more reliable results. This technique minimizes hallucinations and improves the technology's reliability.
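A bare-bones sketch of the RAG flow looks like this. The retriever below uses naive keyword overlap and `generate` merely echoes its prompt; both are hypothetical placeholders standing in for a real vector search and a real LLM call:

```python
# A minimal RAG sketch: retrieve passages from a controlled data set,
# then hand only those passages to the model as context. `generate`
# is a placeholder for a real LLM call.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def generate(prompt: str) -> str:
    """Placeholder for the LLM; echoes the prompt so the flow is visible."""
    return f"ANSWER BASED ON:\n{prompt}"

def rag_answer(query: str, corpus: dict[str, str]) -> str:
    sources = retrieve(query, corpus)
    context = "\n".join(f"[{s}] {corpus[s]}" for s in sources)
    # Grounding the prompt in retrieved text is what curbs hallucination:
    # the model is asked to answer only from the supplied passages.
    prompt = (f"Answer using only these sources:\n{context}\n"
              f"Question: {query}")
    return generate(prompt)

corpus = {
    "case-1": "the statute of limitations for contract claims is six years",
    "case-2": "negligence requires duty breach causation and damages",
}
answer = rag_answer("what is the statute of limitations for contract claims", corpus)
```

The key design point is that the model never answers from its training data alone; every response is anchored to passages pulled from the trusted corpus, and those source identifiers can be surfaced to the user for verification.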
One technique, the multi-turn approach, coupled with a low temperature setting, helps keep a RAG application's conversations on track and free of imaginative hallucinations. Lawyers, especially, can greatly benefit from RAG applications that operate off accurate, trusted data.
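The multi-turn pattern can be sketched as below. The `client_call` function is a hypothetical stand-in for a chat-completion API; the two points being illustrated are that the full message history is resent each turn, which keeps the conversation anchored, and that the temperature is pinned low so the model stays literal rather than creative:

```python
# A minimal sketch of a multi-turn chat loop with a low temperature.
# `client_call` is a placeholder for a real chat-completion API call.

def client_call(messages: list[dict], temperature: float) -> str:
    """Placeholder for a chat API; echoes the latest user message."""
    return f"reply to: {messages[-1]['content']} (temperature={temperature})"

class Conversation:
    def __init__(self, system_prompt: str, temperature: float = 0.1):
        self.temperature = temperature  # low = fewer imaginative leaps
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text: str) -> str:
        # Append the new question, send the whole history, record the reply.
        self.messages.append({"role": "user", "content": user_text})
        reply = client_call(self.messages, self.temperature)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation("Answer only from the retrieved documents.")
chat.ask("Summarize the lease term.")
chat.ask("And the renewal clause?")
```

Because every turn carries the system prompt and prior exchanges, a follow-up like "And the renewal clause?" is interpreted in context rather than as a fresh, ungrounded question.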
Worth noting here is that hallucinations (the AI producing false information) are not necessarily a flaw. Harnessed correctly, they can be viewed as a feature: they can lead to wild ideas that spark fresh legal arguments or novel thought processes for lawyers. And while the hallucination trait requires a human counterpart's accuracy review, innovative arguments can indeed be surfaced.
Even with these advancements and possibilities, it's clear that we are still in the early days of GenAI applications. Legal professionals are learning new ways to use the technology, and measuring return on investment remains a work in progress. Over time, AI is expected to change the way lawyers work, and as the journey evolves, the technology continues to create a wealth of new possibilities in the field.
This and more was discussed in an article titled “So What Is Actually Working With GenAI?” appearing on Above the Law, written by Ken Crutchfield, Vice President and General Manager of Legal Markets at Wolters Kluwer Legal & Regulatory U.S.