In the complex legal terrain of patent prosecution, artificial intelligence (AI) offers increasingly valuable tools for practitioners. AI language models, known as large language models (LLMs), generate human-like text responses to user prompts using deep learning. Initially, the use of LLMs in patent prosecution was cloaked in skepticism over accuracy, legal standing and privacy, but it is now becoming acknowledged that these models can bring a multitude of benefits when used appropriately, according to Nicholas Martin and George Zalepa of Greenberg Traurig.
LLMs offer the potential to aid patent practitioners in researching initial concepts related to an invention, providing valuable understanding of a technological area before beginning a patent application. From in-depth research to formulating questions that elicit engagement from inventors, these tools have applications across multiple aspects of patent prosecution. They can even assist with drafting background materials or boilerplate sections.
Despite their undeniable potential, LLMs must be used judiciously, as the output they generate cannot be assumed to be correct without thorough review. It’s important to confirm the accuracy of generated answers, both in isolation and in the context of the whole document. Certain risks also exist around misuse of these tools, as public LLM usage or usage in a non-secure workspace could lead to unintentional disclosure of confidential or non-public information.
Unique to patent prosecution, LLMs raise distinctive issues regarding prior art and inventorship. Because LLMs are trained on existing data, their output could be considered an assembly of potential prior art, and a patent practitioner incorporating such output into an application might be seen as incorporating known prior art. A further issue arises because LLMs may be capable of generating potentially novel and non-obvious answers to prompts. This creates gray areas around inventorship, with some jurisdictions having ruled that AI cannot be considered an inventor.
While LLMs offer the benefits of speed and broad knowledge, they require careful handling to ensure ethical and privacy standards are upheld and their contributions are thoroughly verified. Hence, while it’s clear that AI in the form of large language models can provide practitioners with valuable assistance, it’s equally important to remain cognizant of the practical and ethical considerations these tools bring to the table.