Grammarly Faces Legal Battle Over Unauthorized Use of Public Figures in AI Feature

Grammarly, the widely used AI-powered writing assistant, is embroiled in a legal dispute over its “Expert Review” feature, which used the names and identities of prominent writers and public figures without their consent. The case underscores the emerging legal challenges surrounding the right of publicity in the context of artificial intelligence.

Introduced in August 2025, the “Expert Review” feature offered users writing feedback purportedly inspired by renowned authors and journalists, including Stephen King, Carl Sagan, and Kara Swisher. However, many of these individuals had not authorized the use of their names or likenesses. The feature was available to subscribers for $12 per month, positioned as a premium service that traded on these experts’ reputations.

Investigative journalist Julia Angwin discovered that her name was being used without permission and subsequently filed a class-action lawsuit against Superhuman, Grammarly’s parent company. The lawsuit alleges violations of privacy and publicity rights, contending that the company profited by misrepresenting these individuals as endorsers of the feature. Angwin expressed her distress, stating, “I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise.” ([techcrunch.com](https://techcrunch.com/2026/03/12/a-writer-is-suing-grammarly-for-turning-her-and-other-authors-into-ai-editors-without-consent/?utm_source=openai))

In response to the backlash, Superhuman disabled the “Expert Review” feature and issued an apology. CEO Shishir Mehrotra acknowledged the misstep, admitting, “The feature was not a good feature. It wasn’t good for experts, it wasn’t good for users.” ([techradar.com](https://www.techradar.com/ai-platforms-assistants/the-feature-was-not-a-good-feature-grammarly-ceo-admits-experts-review-didnt-work-but-you-may-not-like-what-replaces-it?utm_source=openai)) Despite the apology, the company maintains that the legal claims are without merit and intends to defend against them vigorously.

This incident highlights the complex legal terrain AI companies navigate when utilizing individuals’ identities. The right of publicity, which protects against unauthorized commercial use of a person’s name or likeness, is at the forefront of this case. As AI technologies become more sophisticated, the potential for misappropriation of personal identities increases, raising significant ethical and legal questions.

Legal experts suggest that this lawsuit could set a precedent for how AI companies handle the use of real individuals’ identities. The outcome may influence future regulations and guidelines, emphasizing the necessity for explicit consent when incorporating personal attributes into AI-driven products.

As the case progresses, it serves as a critical reminder for AI developers and companies to prioritize ethical considerations and legal compliance, particularly concerning individuals’ rights to their own identities. The balance between innovation and respect for personal rights remains a pivotal issue in the evolving landscape of artificial intelligence.
