Grammarly Faces Lawsuit Over Unauthorized Use of Experts’ Names in AI Tool

Grammarly, the popular writing assistance platform owned by Superhuman, is embroiled in a class action lawsuit alleging the unauthorized use of prominent figures’ names and identities in its new AI-powered “Expert Review” feature. The suit, filed in the Southern District of New York, claims the company misappropriated the likenesses of journalists, authors, and other professionals—including Julia Angwin, the lead plaintiff and founder of the nonprofit news organization The Markup—to lend credibility to its AI editing suggestions.

The Core of the Dispute

The lawsuit centers on Grammarly’s decision to present AI-generated feedback as if it came directly from well-known experts without their consent. This included using names like Stephen King and Neil deGrasse Tyson as virtual editors, a practice that drew immediate criticism once revealed. Despite a disclaimer stating that these experts did not endorse the tool, the implication was clear: users were receiving input from trusted voices.

Superhuman has since discontinued the feature following public backlash, stating it will “reimagine” the concept to give experts greater control over how they are represented. The lawsuit argues, however, that the damage is already done, asserting that damages for the plaintiff class exceed $5 million.

Legal and Ethical Concerns

The legal basis for the suit rests on long-standing laws in New York and California that prohibit the commercial use of a person’s name and likeness without permission. According to Peter Romer-Friedman, Angwin’s attorney, the case is legally straightforward. More broadly, the lawsuit raises critical questions about the ethics of AI-driven platforms leveraging individuals’ reputations without their consent.

This isn’t simply about celebrity endorsements; it’s about the appropriation of years of hard-earned expertise and credibility. As Angwin herself noted, this feels akin to a “deepfake” scenario, where one’s identity is cloned for commercial gain. The case highlights how rapidly AI tools can blur the lines between real authority and simulated expertise.

Broader Implications

The lawsuit comes at a time when AI-powered tools are increasingly being used to mimic human skills and expertise. This trend raises concerns about intellectual property, professional integrity, and the potential for widespread misinformation. If companies can freely exploit reputations without accountability, it undermines trust in both the technology and the individuals whose likenesses are misused.

The outcome of this case will likely set a precedent for how AI platforms navigate the ethical and legal boundaries of leveraging human expertise, especially as these tools become more integrated into everyday workflows.

Ultimately, the suit underscores the need for stricter regulations and greater transparency in how AI companies use and represent human identities in their products. The future of AI-driven tools may depend on whether they can operate ethically without relying on unauthorized appropriation.