Countering Deepfakes with Critical Thinking: The Power of Slow Thinking in the Age of AI

Realistic AI-generated images and voice recordings are emerging as new threats to democracy, according to a recent article by F.D. Flam. These deceptive materials, part of a long-standing tradition of disinformation tactics, have lately evolved into a more advanced form labelled ‘deepfakes’.

Despite rising concern about the political impact of deepfakes, the solution may not lie in building counter-AI tools or training the public to spot fake images. The author argues that well-established critical thinking methods could provide a more effective line of defence: refocusing our attention, re-evaluating our sources, and questioning our own perceptions.

The keys to countering deepfakes lie in the domain of ‘system 2’, or slow thinking, which favours a careful, contemplative approach to accepting information. The concept is explored at length in Daniel Kahneman’s book “Thinking, Fast and Slow”. The author suggests that AI often succeeds by duping our ‘fast’ thinking — the spontaneous, intuitive processing we rely on most of the time. Slow, deliberate thinking, by contrast, helps people scrutinize information more carefully and stand up to the wave of AI-powered disinformation.

For legal professionals, this call to slow thinking underlines the importance of discretion and discernment, not only in evaluating media content but also in analysing cases and forming legal judgments. An understanding of deepfakes and how they spread could be instrumental in identifying and managing the risks digital disinformation poses to the legal landscape.

This discussion also opens avenues for the development of AI policy and law, given the mounting need for regulation and public awareness to curb the impact of deepfakes.