In a move reflecting growing concern over the potential misuse of artificial intelligence (AI) in political spaces, the Federal Election Commission (FEC) has unanimously agreed to proceed with rulemaking on deceptive campaign advertisements, focusing particularly on the threat posed by “deepfake” content. The vote, taken on August 10, puts the issue at the forefront of ongoing debates concerning the integrity of democratic processes.
The prospect of candidate “deepfakes” in the 2024 elections — media manipulated using artificial intelligence until it is virtually indistinguishable from authentic footage — could significantly disrupt the democratic process. As deepfakes grow more sophisticated, their potential for misuse grows with them, posing a daunting challenge for regulatory bodies like the FEC. The decision to move toward rulemaking reflects the intensity of that concern.
The FEC’s decision is significant for the legal community, as it marks a shift toward formal legal regulation of AI-generated content. To navigate this complex, technologically infused legal terrain, legal professionals must understand the intersections of law, politics, and AI.
As is customary before such decisions, the FEC is seeking public comment on the rulemaking. Given the multifaceted nature of the issue, lawyers, legal scholars, technologists, and political scientists are all likely to be active contributors to this crucial dialogue.
The FEC’s unanimous decision to move forward with rulemaking reflects an emerging trend among regulatory bodies worldwide that are grappling with the technological advances of the 21st century. Legal professionals should monitor this area closely to stay current with developments at the intersection of law and technology.