EU-Inspired Liability Models: A Solution to Battling AI Bias in American Courts

Challenging artificial intelligence (AI) bias in court is riddled with procedural roadblocks, a frustrating reality for individuals who face racial discrimination from police facial recognition algorithms or from discriminatory job-applicant screening tools, systems that closely reflect the biased human behaviour they replicate [Scientific American] [Nature].

A promising solution may come from the EU Liability Directive, which could give those negatively impacted by AI a genuine opportunity to have their cases heard in court [Axios]. The significance of the EU liability model lies in its inversion of the burden of proof: defendants must demonstrate either that their system operated lawfully or that it did not harm the plaintiff.

AI’s inherent opacity can complicate legal battles, as illustrated by Carmen Arroyo, whose fight has been underway for over five years after her application for an apartment for herself and her severely disabled son was allegedly evaluated by a CoreLogic algorithmic tool [National Housing Law Project] [TechEquity].

Corporate self-regulation of AI is becoming increasingly common, and lawmakers in Washington, D.C. appear limited in their ability to effect change, as underlined by President Joe Biden’s executive order, which articulates ideals rather than actual policy [Politico] [Reuters] [whitehouse.gov].

Fortunately, the EU AI Act again offers a path forward [Artificial Intelligence Act]. Though perhaps not entirely transferable to the American system, the requirement that defendants prove the legality of their systems, or their lack of impact on plaintiffs, could overhaul American civil procedure rules and help pave the legal road for AI victims [Brookings].

Importantly, adopting this EU model could realign economic incentives: companies would deploy AI only when it is effective and transparent, and the legal exposure created by opaque systems would deter the use of unlawful ones. Safeguards against frivolous litigation would also be crucial, confining initial discovery solely to the algorithm and permitting further discovery only when the evidence is sufficient to survive expedited summary judgement.

The litigation implications of AI may pose one of the most significant legal challenges since the Industrial Revolution. However, major reforms to civil procedure rules, drawing inspiration from the EU, hold promise not only for the current generation adversely impacted by AI, but for generations beyond [Bloomberg Law].