Attorneys anticipate a surge in litigation and the emergence of a new liability framework for deepfakes, drawing parallels to the legal strategies used against manufacturers of products such as talcum powder and asbestos. This shift reflects growing recognition of the harms deepfake technology can cause and the need for robust legal mechanisms to address them.
Deepfakes—AI-generated or manipulated media that can convincingly depict individuals saying or doing things they never did—have raised significant concerns across various sectors. The potential for misuse ranges from personal defamation to broader societal impacts, such as the spread of misinformation and fraud. In response, legal professionals are exploring avenues to hold creators and disseminators of deepfake content accountable.
One approach under consideration involves applying product liability principles to deepfake technology. This strategy would treat deepfakes as defective products that cause harm to individuals, thereby holding creators and platforms responsible for damages. Such a framework mirrors the legal actions taken against companies that produced harmful products like asbestos and talcum powder, where manufacturers were held liable for failing to warn consumers about the risks associated with their products.
Legislative efforts are also underway to address the challenges posed by deepfakes. In December 2025, U.S. Representatives Jake Auchincloss and Celeste Maloy introduced the Deepfake Liability Act, aiming to amend Section 230 of the Communications Act of 1934. The proposed bill seeks to impose a duty of care on online platforms, requiring them to implement processes to prevent and address cyberstalking and intimate privacy violations facilitated by deepfake technology. This includes establishing clear procedures for the removal of harmful content and maintaining data logs necessary for legal proceedings. The bill emphasizes that platforms failing to meet these obligations could lose their liability protections under Section 230.
State-level initiatives have also emerged. For instance, Pennsylvania enacted Act 35 in July 2025, criminalizing the creation or dissemination of deepfakes with fraudulent or injurious intent. The law imposes penalties ranging from first-degree misdemeanors to third-degree felonies, depending on the severity of the offense. Similarly, Washington State’s House Bill 1205, effective July 2025, targets the intentional use of “forged digital likenesses” for purposes such as defrauding, harassing, or intimidating individuals.
Legal experts suggest that victims of deepfakes may pursue claims under existing tort laws, including defamation, invasion of privacy, and intentional infliction of emotional distress. Each claim has distinct elements: a defamation claim requires showing that the deepfake conveyed a false statement of fact that harmed the individual's reputation, while an emotional-distress claim requires showing extreme and outrageous conduct that caused severe distress. Additionally, if a deepfake is used to promote a product or service without consent, the victim may invoke the privacy tort of misappropriation or the right of publicity, seeking damages for the unauthorized commercial use of their likeness.
As deepfake technology continues to evolve, the legal landscape is adapting to the risks it poses. The combination of new legislation and the application of existing legal doctrines aims to provide a comprehensive framework for holding responsible parties accountable and protecting individuals from the harms of deepfakes.