After years of anticipation, the final text of the Artificial Intelligence Act (‘the Act’) was approved by the Council on 21 May 2024. As the first regulation of its kind, the Act aims to safeguard fundamental rights while promoting safe and trustworthy AI through a risk-based approach. Yet while the Act prohibits specific instances of AI predictive policing, serious concerns have been raised about whether the ban will be effective or merely symbolic.
The Act does not explicitly define predictive policing. A leading definition, provided by Perry et al., describes it as ‘the use of analytical techniques to identify promising targets’ for forecasting criminal activity. Its scope includes predictive mapping of potential crime locations and predictive identification of individuals likely to commit, or become victims of, crime. Such applications have faced significant criticism, notably for the human rights implications of their extensive data collection and processing, as well as for their potential biases.
Although predictive identification was initially classified as a high-risk application, it is now prohibited under Article 5(1)(d) of the Act. However, the effectiveness of this provision has been questioned because of potential workarounds, such as the “human in the loop” defence and the exemption for national security purposes, as highlighted in this analysis by Jessie Levano.
Potential loopholes arise from the lack of clear definitions for terms such as “profiling” and “meaningful human intervention”, the latter being essential to distinguish solely automated processes from those involving human input. Moreover, the Act’s exemption for AI used for national security purposes could further weaken the prohibition’s effectiveness. Concerns raised by groups such as Article 19 and Access Now suggest that invoking national security could allow continued use of predictive identification despite its potential to infringe fundamental rights.
The Act’s goals of safeguarding fundamental rights and fostering AI that benefits all may be undermined by these exemptions and by the ambiguity surrounding “human oversight”. Clearer definitions, stricter guidelines on human involvement, and a more nuanced approach to national security exceptions are therefore paramount. Without such changes, the ban risks being merely symbolic, failing to address the real challenges and potential harms posed by AI in law enforcement.