The increasing reliance on artificial intelligence tools across various sectors brings significant potential benefits, including enhanced efficiency and improved accuracy. However, the inherent risks associated with these technologies cannot be overlooked. As highlighted in a 2023 study by the Stanford Institute for Economic Policy Research, the IRS's use of a predictive algorithm for audit selection has produced unintended racial disparities, offering a cautionary tale for employers venturing into AI-driven solutions.
According to the study, although the IRS does not collect racial data, Black taxpayers claiming the earned income tax credit were disproportionately audited compared to non-Black taxpayers. The underlying cause appears to be a predictive algorithm designed to identify potential errors, which inadvertently resulted in racial biases in audit selection.
The IRS acknowledged these disparities in its 2024 annual report, committing to review its compliance processes and dedicate resources to identifying and addressing biases across various demographic dimensions. The episode echoes findings from an earlier report by the National Institute of Standards and Technology, which highlighted the potential for bias in AI systems.
For employers, the primary legal risk associated with AI tools is the potential for disparate impact claims. The landmark case Griggs v. Duke Power Company established that disparate impact claims do not require evidence of intentional discrimination; a plaintiff need only show that a facially neutral practice produces statistically significant discriminatory outcomes. Essentially, employers could face liability if AI tools inadvertently lead to disproportionate impacts on protected groups.
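A common first-pass screen for this kind of disproportionate outcome is the EEOC's "four-fifths" guideline, under which a selection rate for a protected group below 80% of the rate for the most-favored group is treated as evidence of adverse impact. The sketch below illustrates the arithmetic with hypothetical applicant numbers; the function name and the figures are illustrative assumptions, not data from the study discussed above.

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the selection rate for group A to the selection rate
    for group B (the comparison group with the higher rate), per the
    EEOC four-fifths guideline."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical screening outcomes: 30 of 100 group-A applicants advanced,
# versus 50 of 100 group-B applicants.
ratio = adverse_impact_ratio(30, 100, 50, 100)
print(f"Impact ratio: {ratio:.2f}")  # 0.60, below the 0.80 guideline
if ratio < 0.8:
    print("Potential adverse impact: review the screening tool")
```

A ratio below 0.8 is not itself proof of discrimination, but it is the kind of statistical signal that routinely triggers closer legal and technical scrutiny.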
In response to these risks, both federal and state entities are taking steps to regulate the use of AI in employment practices. President Biden's executive order on AI usage outlines directives for tackling hiring discrimination and other AI-related issues. Furthermore, cases such as Mobley v. Workday, Inc. set precedents for holding AI software providers accountable for employment discrimination.
Mitigating these risks requires proactive measures such as employing robust risk management strategies and working closely with legal experts. The National Institute of Standards and Technology’s risk management framework offers a flexible model for identifying and addressing AI risks. Employers are advised to continuously monitor AI systems, have remediation plans ready, and keep open channels of privileged communication with their counsel.
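Continuous monitoring of the kind recommended above can be reduced to a periodic statistical check on selection rates, with a remediation plan triggered when a gap is statistically significant. The sketch below uses a pooled two-proportion z-test on hypothetical quarterly hiring data; the function names, the 1.96 critical value (a two-sided 5% threshold), and the figures are all illustrative assumptions, not part of the NIST framework itself.

```python
import math

def two_proportion_z(s1, n1, s2, n2):
    """Pooled two-proportion z-statistic for the difference in
    selection rates between two applicant groups."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def monitor(periods, z_crit=1.96):
    """Return the labels of periods whose selection-rate gap is
    statistically significant, as a trigger for remediation review."""
    return [
        label
        for label, (s1, n1, s2, n2) in periods.items()
        if abs(two_proportion_z(s1, n1, s2, n2)) > z_crit
    ]

# Hypothetical quarterly outcomes: (group A selected, A total,
#                                   group B selected, B total)
quarters = {
    "Q1": (45, 100, 50, 100),  # small gap, not significant
    "Q2": (30, 100, 50, 100),  # large gap, significant
}
print(monitor(quarters))  # ['Q2']
```

In practice such checks would run on each model's production logs, with flagged periods routed through the privileged channels with counsel that the framework discussion recommends.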
It's clear that while AI tools offer notable advantages, they carry significant potential liability risks. With diligent oversight and strategic planning, organizations can leverage AI's benefits while safeguarding against unintended biases and legal challenges. The IRS's experience serves as a potent reminder of the critical balance between technological advancement and equitable application, and employers must ensure their own practices do not repeat it.