Grammarly, the writing assistance software, has introduced a new feature that allows users to receive feedback on their work from AI simulations of prominent authors and academics – including those who have passed away. This expansion into generative AI, part of a broader rebranding effort under the new name Superhuman, raises serious ethical questions about intellectual property, consent, and the commodification of expertise.
The Rise of AI-Powered Writing Tools
Grammarly has evolved from a simple grammar checker into a comprehensive AI writing partner. The platform now includes chatbots, paraphrasing tools, “humanizers” that mimic specific writing styles, and even AI graders that predict academic performance. However, the most controversial addition is the “Expert Review” option, which offers critiques purportedly inspired by real individuals, both living and deceased.
Simulated Expertise: Living and Dead
Users can now request feedback from virtual versions of authors like Stephen King and Neil deGrasse Tyson, as well as from the late William Zinsser and Carl Sagan. Grammarly explicitly states that these experts have no affiliation with the product, clarifying that the simulations are “for informational purposes only.” The AI agents are trained on the works of these figures, but the legality of this content harvesting remains uncertain.
Ethical Concerns and Reactions
The practice has sparked outrage among academics and writers. Vanessa Heggie, a professor at the University of Birmingham, condemned Superhuman for “creating little LLMs” based on scraped work, trading on names and reputations without consent. The availability of feedback attributed to historians such as David Abulafia further fuels the controversy.
How It Works: Inspiration vs. Endorsement
Grammarly claims the AI provides suggestions merely inspired by the works of these experts, rather than direct endorsements. Jen Dakin, a Superhuman communications manager, says the tool aims to point users toward influential voices for further exploration. Independent testing, however, shows the AI actively drawing on “ideas” and “concepts” from dead authors like William Strunk Jr. and Margaret Mitchell.
Academic Mistrust and Exploitation
Historian C.E. Aubin argues that this system reinforces the deep mistrust of AI within the humanities. She emphasizes that the real experts play no part in producing these reviews, and that reducing a scholar’s output to mere “work” ignores the personhood behind it. The practice is particularly egregious, she argues, at a time when the humanities face ongoing attacks and funding cuts.
Effectiveness and Detection
The new AI tools are not without flaws. Grammarly’s plagiarism checker failed to detect a direct quote from The Simpsons, highlighting the limitations of its detection capabilities. Despite these shortcomings, the feature may encourage students to rely on AI for academic work, potentially blurring the lines between assistance and cheating.
The Future of AI in Education
The expansion of AI-powered writing tools raises concerns about the future of education. Some fear that these technologies could ultimately replace teachers altogether. Whether this is a realistic outcome remains to be seen, but the trend points toward a growing reliance on artificial intelligence in academic settings.
Grammarly’s latest feature represents a concerning step toward commodifying intellectual labor and exploiting the legacies of writers and scholars without their consent. The ethical implications of simulating expertise – especially that of the deceased – are profound, and the long-term consequences for academia remain uncertain.