The future role of large language models and broader foundation models in legal practice remains uncertain. Outlooks on the safety and reliability of these models vary widely, most notably regarding their place in final work products. The primary anxieties concern the role these tools will play as auxiliary mechanisms for legal operations and processes, and the subsequent effect on the professional conduct and responsibility of practitioners.
The California State Bar has recently addressed these questions. Cases like Mata v. Avianca and People v. Zachariah C. Crabill show that existing rules of professional conduct can, in principle, account for the responsible use of generative AI. Yet these technologies remain a source of both captivation and trepidation, and the indistinct nature of behavioral expectations fosters a gap between perceived and actual usage.
Leading law firms and other legal organizations that have deployed generative AI firm-wide report little agreement on when the tools should be used or what performance standard they should meet. At the same time, ongoing pressure to bring efficiency to the profession makes these tools feel necessary. Reconciling these opposing forces is a challenging task.
The optimal response may lie in the future education and training of legal professionals. Rigid guidelines on usage, and on practitioners' responsibilities when employing these tools, seem to stem from the perception that the technologies can aid in executing legal tasks.
However, there are no clear definitions, metrics, or quality assurance requirements that determine what constitutes a legal task. Explanations of what it means to deliver quality legal services, such as what differentiates a good contract from a great one, are often vague and implicit. They may rest on years of experience in a practice area, or on the number of deals or transactions executed.
The ambiguity of these answers stems from one of the legal industry's complexities: much of the work's value-add is implicit, embedded in personal experience and in specific knowledge of the client and industry.
In our paper earlier this year, 12 senior associates and partners at DLA Piper demonstrated this by identifying potentially conflicting clauses across a set of five contracts. The findings varied widely: only two lawyers converged on the same set of clauses deemed contradictory, highlighting the inherent value of the advisory component of legal practice.
While past efforts toward standardization, such as guideline construction, have been significant, personalization carries equally immense value. The weight that legal institutions place on the wisdom of experience and on mentorship reflects the individuality of the practice.
Embracing AI tools that allow expertise to be tailored at the individual level should be the goal. The advent of custom GPTs is enabling, and even encouraging, this kind of personalization. Here at the Stanford Center for Legal Informatics, we're moving in this direction by experimenting with contract negotiation simulation and implicit contract redlining, drawing inspiration from 2023 research on generative and communicative agents built on large language models.
We are collaborating with mergers-and-acquisitions partners to develop tools that let lawyers rehearse negotiations based on starting positions, legal complexity, and impact. Working with law firms' specific contract redlines also helps surface the unique voices of individual lawyers, allowing a junior associate to comprehend a senior lawyer's review method more clearly, and potentially to compare it against the voices and perspectives of other lawyers.
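To picture what a negotiation rehearsal built on communicative agents might look like, here is a minimal sketch of two language-model-driven negotiators exchanging offers until one accepts. Every name in it (`Negotiator`, `scripted_model`, the indemnity-cap scenario) is a hypothetical illustration rather than the Center's actual tooling, and the scripted stand-in would be replaced by a real LLM API call in practice.

```python
# A minimal sketch of a two-agent negotiation rehearsal loop.
# All names here are illustrative; a real system would call an LLM
# in place of the scripted stand-in model.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Negotiator:
    """One side of the negotiation, defined by its starting position."""
    name: str
    position: str                     # e.g. "cap indemnity at 10% of price"
    model: Callable[[str], str]       # text-in / text-out language model
    transcript: List[str] = field(default_factory=list)

    def respond(self, last_message: str) -> str:
        # The prompt combines the fixed starting position with the
        # latest message, mirroring the communicative-agents pattern.
        prompt = (
            f"You are {self.name}. Your position: {self.position}\n"
            f"Counterparty said: {last_message}\n"
            "Reply with your next offer, or ACCEPT to close."
        )
        reply = self.model(prompt)
        self.transcript.append(reply)
        return reply

def simulate(a: Negotiator, b: Negotiator, opening: str, max_rounds: int = 6):
    """Alternate turns until one side says ACCEPT or rounds run out."""
    message = opening
    for _ in range(max_rounds):
        for side in (a, b):
            message = side.respond(message)
            if "ACCEPT" in message:
                return side.name, message
    return None, message  # no agreement within the round limit

def scripted_model(replies):
    """Deterministic stand-in for an LLM so the sketch runs offline."""
    it = iter(replies)
    return lambda prompt: next(it)

buyer = Negotiator("Buyer", "cap indemnity at 10% of price",
                   scripted_model(["Offer: 10% cap", "Offer: 12% cap"]))
seller = Negotiator("Seller", "seek a 15% indemnity cap",
                    scripted_model(["Counter: 15% cap", "ACCEPT 12% cap"]))

winner, outcome = simulate(buyer, seller, "Let's discuss the indemnity cap.")
```

Because each agent's position and transcript are isolated, the same loop could replay a negotiation from different starting positions, which is the kind of rehearsal the tools described above aim to support.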
While technical and legal hurdles remain in determining how these models should be integrated into end products, it is important to use them now as robust training tools. Preparing law students and junior associates to understand and respond to the industry's nuances is a critical consideration, and existing mentorship models can be digitized to teach and cultivate the legal domain more dynamically.