The rise of artificial intelligence (AI) technologies has opened a new frontier in the legal realm, sparking debates over data privacy, security, and ethics. As these complex tools become increasingly prominent and capable, the law firm's role in managing their adoption and use is fast becoming a focal point of those debates. Following the recent ILTA Evolve conference – a focused, deliberately paced event dealing with trending legal tech issues – the spotlight is on generative AI (GenAI) and on how the adoption moment now facing law firms, if not handled adeptly, could lead to a host of problems.
During a key kickoff session titled “Privacy v. Security – How GenAI Creates Challenges to Both,” two legal tech experts examined this very issue. Reanna Martinez, Solutions Manager at Munger, Tolles & Olson LLP, and Kenny Leckie, Senior Technology & Change Management Consultant at Traveling Coaches, walked participants through the spectrum of available AI tools and the varying levels of risk each carries. From consumer-facing free products to enterprise-level versions and legal-specific offerings, each tool presents its own challenges – with public GenAI cited as the “opposite” of data security.
This rapid evolution of technology raises significant concerns for law firm IT staff. As lawyers become more reliant on tools like ChatGPT or its enterprise equivalent, IT departments must navigate the ensuing challenges. Over the coming years, multiple instances are likely to arise in which the line between different AI products is blurred or disregarded – a lawyer pasting client data into a free consumer tool rather than the vetted enterprise version, for example – creating a potential nightmare scenario for tech teams.
Onboarding new technology within a firm is a multi-step process that all stakeholders should understand and respect. It begins with a detailed evaluation of the product's data privacy, security, vendor reliability, risk of bias, and legal and ethical considerations. Preparing the internal environment then means building robust permissions, security measures, audit trails, and crisis response strategies.
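To make the permissions-and-audit-trail preparation concrete, here is a minimal, hypothetical sketch in Python. The tool classes, roles, `APPROVED_TOOLS` matrix, and `AuditTrail` class are all illustrative assumptions, not any specific product's API; a real firm would implement this inside its identity and access management stack.

```python
from datetime import datetime, timezone

# Hypothetical approval matrix: which roles may use which class of GenAI tool.
# Mirrors the session's spectrum: public consumer tools carry the most risk.
APPROVED_TOOLS = {
    "consumer-free": set(),                        # public GenAI: blocked for everyone
    "enterprise": {"attorney", "paralegal", "it"},
    "legal-specific": {"attorney", "paralegal"},
}

class AuditTrail:
    """Append-only record of GenAI access decisions, kept for later review."""
    def __init__(self):
        self.entries = []

    def record(self, user, role, tool_class, allowed):
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "role": role,
            "tool_class": tool_class,
            "allowed": allowed,
        })

def check_access(user, role, tool_class, trail):
    """Permission gate: allow only vetted role/tool pairs, logging every attempt."""
    allowed = role in APPROVED_TOOLS.get(tool_class, set())
    trail.record(user, role, tool_class, allowed)
    return allowed

trail = AuditTrail()
print(check_access("jdoe", "attorney", "legal-specific", trail))  # True
print(check_access("jdoe", "attorney", "consumer-free", trail))   # False
```

Note that even denied attempts are logged: the audit trail exists precisely to surface the moments when users try to reach around the approved tools.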
One of the major challenges lies in training users and convincing them not to circumvent the established safety measures. It is equally important to persuade lawyers to use the product effectively and to understand their place in the larger adoption process: rather than privileged actors brought in early to evaluate the product, they are late-stage participants who should respect the intricacies of the work that preceded them.
The current climate cautions law firms against building their own large language models (LLMs) from scratch unless they possess the necessary resources and competency. Once everything is up and running, the technology team must remain vigilant against data poisoning, IP theft, privacy breaches, and misuse of generated content.
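One piece of that ongoing vigilance can be sketched in code: a simple outbound screen that flags likely confidential material before a prompt ever reaches an external GenAI service. This is a toy illustration only – the pattern names and regexes below are invented for the example, and a production deployment would rely on a dedicated data loss prevention product rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for content that should never leave the firm's environment.
CONFIDENTIAL_PATTERNS = {
    "client_matter_number": re.compile(r"\b\d{5}-\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "privilege_marker": re.compile(r"attorney[- ]client privilege", re.IGNORECASE),
}

def screen_prompt(text):
    """Return the names of all patterns found; an empty list means the prompt may pass."""
    return [name for name, pat in CONFIDENTIAL_PATTERNS.items() if pat.search(text)]

hits = screen_prompt("Summarize matter 12345-6789, marked Attorney-Client Privilege.")
print(hits)  # ['client_matter_number', 'privilege_marker']
```

A screen like this would sit alongside the audit trail: blocked prompts get logged, and repeated hits become a training conversation rather than a silent failure.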
Adopting AI within a law firm means more than simply “buying some AI.” It is a deliberate, detailed, higher-risk process, and everyone has a part to play in successfully moving the firm into the 21st century.