A recent class action complaint filed in the Southern District of New York, Angwin v. Superhuman Platform, Inc., No. 26 Civ. 02005, 2026 WL 704131 (S.D.N.Y. Mar. 11, 2026), highlights an evolving issue in artificial intelligence (AI) product design: what happens when an AI feature uses a real person’s name or identity as part of the user experience and that identity becomes part of what is being sold?
In the Angwin complaint, the plaintiff (a journalist and editor) alleges that Superhuman (the parent company of the writing assistant tool Grammarly) misappropriated the names and identities of hundreds of journalists, authors, writers, and editors for profit. The complaint focuses on Grammarly’s now-disabled “Expert Review” feature, which allegedly let subscribers pay for comments attributed, without consent, to well-known writers, including Angwin herself, Stephen King, and Carl Sagan.
The complaint underscores that AI risk is not limited to the content an AI system produces; it also extends to how the product attributes that output to real people. According to the complaint, the Grammarly app told users it was selecting experts to review their drafts and then delivered feedback as if it came from those named individuals, complete with short biographies for each expert. The pleading further describes inline, editing-style comments that appeared next to passages of the user’s text under an expert’s name, as well as a deeper view in which the product explained that a particular recommendation was “inspired” by the selected expert. The complaint also alleges that the system drew on the experts’ publicly available writing to generate advice the experts never actually provided. The concern is not only the unauthorized use of a name in marketing, but also the possibility that ordinary users could reasonably come away believing they received guidance from, or endorsed by, the named person, even though that person was not involved in the review and might disagree with what was attributed to them.
Much of the conversation surrounding developing or deploying AI products centers on concerns about AI output accuracy and intellectual property infringement. Angwin, however, is a lesson that if an AI tool uses a person’s name as part of the product’s value proposition, especially in a way that reads like participation or endorsement, likeness claims could also be in play. Companies should be mindful of what their AI interface suggests about the source of its output. If the product uses a well-known identity to make a feature seem more credible or relatable, it is advisable to obtain permission before using that person’s name in this way. A recognizable name can certainly add value, but Angwin is a reminder that it can also add legal risk.