In a notable development in intellectual property litigation, Anthropic PBC has clarified an error involving an AI-generated citation in an expert report submitted in a high-stakes IP suit. The company recently informed a federal judge that the expert witness involved, Olivia Chen, did not base her report on a non-existent academic article generated by artificial intelligence. Instead, Anthropic attributes the mishap to a citation error produced by its AI chatbot, Claude.
The error was detailed in a filing submitted to the US District Court for the Northern District of California, which identified a mistake in Chen's declaration of April 30. The document contained incorrect information about the author and title of a cited publication, although the link to the actual material was accurate, indicating that the underlying reference itself was valid.
The situation came to light when music publishers, apparently on the opposing side of the suit, alleged that the expert had relied on a fabricated citation in her report. Ivana Dukanovic, an associate at Latham & Watkins LLP and a member of Anthropic's legal team, explained that she had asked Claude to generate a properly formatted legal citation, which inadvertently introduced the error.
This admission adds to ongoing debates about the reliability of AI in complex legal tasks and the scrutiny required to ensure its outputs do not inadvertently affect legal proceedings.