In today’s tech-centric world, the need for legal professionals to adapt to changing trends is greater than ever, especially given the rapid advancement of artificial intelligence (AI) applications. This advancement has forced our courts into unfamiliar territory, dealing with AI-altered evidence such as deepfake videos. For instance, a King County Superior Court judge in the state of Washington recently ruled on the admissibility of AI-enhanced video in a triple murder trial.
The lack of transparency in AI editing tools’ algorithms raised concerns for Judge Leroy McCullough, leading to the preclusion of the altered video in question. This case underscores the rapidly emerging issue for our trial courts — determining the admissibility of videos created with AI tools.
A recent report by the New York State Bar Association’s Task Force on Artificial Intelligence provides some much-needed guidance on this matter, addressing a broad range of issues including the evolution of AI, its risks and benefits, and the implications for the legal profession.
The threat of AI-created deepfake evidence is significant: the output of generative AI tools is increasingly sophisticated and deceptive, making it challenging to distinguish truth from fabrication. Efforts to manage these concerns are ongoing both nationally and at the state level.
The Advisory Committee on the Federal Rules of Evidence is considering a proposal to revise the standard for admissible evidence from “accurate” to “reliable.” The proposed amendment to Rule 901(b)(9) would require a demonstration that a result produced by AI is valid and reliable. The Committee is also recommending a new Rule 901(c) to address the threat of potential fabrication or alteration of electronic evidence.
In New York, changes to the Criminal Procedure Law and CPLR have been proposed to address the admissibility of AI-created or processed evidence. These proposals aim to differentiate between evidence “created” by AI (which produces new information from existing information) and evidence “processed” by AI (which produces a conclusion based on the existing information).
While amending the law is one way to deal with advances in AI, another path is for legal professionals to enhance their technological competence. This involves understanding how existing laws apply to AI and whether new regulations are needed to address emerging issues that could impede the judicial process. As AI continues to advance, a balanced strategy combining legal reform, technological literacy, and a commitment to continuous learning in the legal profession is necessary to maintain a fair and equitable legal system in the age of AI. Full details on this topic can be found in Above the Law’s article.