Navigating the Legal Wild West of AI-Generated Political Ads

As the next election cycle approaches, political lawyers are preparing to defend their clients against a new generation of unregulated, AI-assisted attack ads known as ‘deepfakes.’ Deepfakes are sophisticated, AI-generated video or audio clips that portray a person doing or saying something that never actually happened.

Several factors keep the law in flux on this issue: a lack of federal regulation, limited litigation strategies, and the possibility of action by the Federal Election Commission. As a result, lawyers such as Caleb Burns of Wiley Rein, who represents Republican clients, liken the situation to the ‘Wild West.’

Deepfakes have already surfaced in political campaigning. A video that circulated widely on social media appeared to show Senator Elizabeth Warren (D-Mass.) claiming that GOP votes “could threaten the integrity of the election.” Similarly, Governor Ron DeSantis’ presidential campaign released an attack ad featuring a deepfake of rival Donald Trump in a compromising position with former White House medical adviser Anthony Fauci. Like several other such clips, the fabricated imagery carried no clear authorship, complicating any legal response.

Recognizing the potential harm of unregulated deepfakes in political advertising, the bipartisan board of the American Association of Political Consultants voted unanimously to condemn the practice and urged media outlets to refuse to broadcast or distribute ads that use deepfake technology.

However, not all campaigns have the resources to fight deepfakes. Kate Belinski, a partner in Ballard Spahr’s political law practice, points out that “Campaigns don’t have the money to litigate these cases.” Claire Rajan, who leads Allen & Overy’s political law group, agrees, predicting that the Federal Election Commission is unlikely to adopt new rules targeting deepfakes in the near term, although future legislation may address the issue.

Meanwhile, privacy, copyright, and defamation claims offer potential avenues for litigation, but those claims become far harder to pursue when the original source of a deepfake cannot be traced, according to Adam Bonin, a lawyer who represents Democrats in state and federal campaigns. And while both major political parties express concern about AI-assisted interference, any substantial regulation may not take effect until after the next election.

Efforts to regulate AI-generated political ads are already underway at the federal and state levels. On Aug. 10, the Federal Election Commission requested public comment on possible regulation of political deepfakes, and a group of 50 lawmakers sent a letter urging the agency to act. A bill requiring disclosure when political ads contain AI-generated content has been introduced, but it is expected to stall in the House. States including California, Texas, and Minnesota have enacted their own AI regulations.

Beyond the potential damage to individual candidates and campaigns, deepfakes threaten democratic governance itself, warns Catherine Powell, a professor at Fordham University School of Law. Stuart Gerson, a former acting US attorney general now with Epstein Becker & Green’s litigation practice, likens the use of deepfakes in political advertising to a kind of “cyber war,” with foreign state actors intent on disrupting the democratic process.

As it stands, any new agency rule or law on deepfakes will likely be prompted by a particularly harmful AI-generated political ad, and it is unlikely to arrive before the next election. Kenneth Gross, senior political counsel at Akin Gump, doubts that either Congress or the FEC will act quickly.