Indiana Officer Resigns After Misuse of Clearview AI Facial Recognition

An Indiana police officer has stepped down following revelations that he made unauthorized use of Clearview AI’s facial recognition technology to identify people on social media who were not involved in any criminal activity.

The Evansville Police Department labeled this a clear “misuse” of the technology, according to its press release. Clearview AI, which has been banned in several US cities over privacy concerns, operates a database of more than 40 billion images drawn from various sources, including public social media. The officer concealed his activity by attaching case numbers from real, legitimate incidents to his searches, said Evansville Police Chief Philip Smith.

A routine audit in March uncovered the officer’s misconduct. The review showed unusually heavy use of the software, prompting an investigation that found the officer’s scans were mainly of social media images, rather than the live or CCTV footage typically run for ongoing investigations. “This officer was using Clearview AI for personal purposes,” Chief Smith said, and he recommended termination. The officer resigned before any formal decision could be made.

Clearview AI maintains that its technology is a valuable public safety tool and says it requires safeguards such as logging a case number and crime type for every search. Despite these measures, the officer managed to conduct unauthorized scans, calling into question the efficacy of these compliance features. The incident could have wider implications, given the extent of Clearview AI’s use by police nationwide: nearly 1 million searches have been conducted, Clearview AI CEO Hoan Ton-That told the BBC last year.

The company’s practices have faced scrutiny and legal challenges, including a settlement that barred Clearview AI from selling access to most private companies. However, law enforcement agencies in the US, such as the Miami Police, continue to use the software for various types of crimes.

Kashmir Hill, the journalist who first reported on Clearview AI’s capabilities for The New York Times and has described the tool as a kind of “Shazam for people,” said this case reflects broader concerns about privacy and misuse. Advocates underscore the need for stronger privacy regulations to prevent such abuses.

Chief Smith emphasized that the officer ignored departmental guidelines that explicitly state Clearview AI should only be used for official purposes. “To ensure the software is used correctly, we have internal operational guidelines and adhere to Clearview AI’s terms of service, both of which clearly state this tool is not for personal use,” he said.