The Internet’s New Favorite Insult: ‘Did AI Write That?’



When Olivia Dreizen Howell was told she sounded like an AI chatbot, her reaction was undeniably human.

“I was talking about it nonstop for weeks,” admits Howell, the co-founder of an online divorce support community. “I felt like I was being attacked. I was very upset.”

Her supposed crime? An Instagram post shared the day after Christmas, reflecting on the harsh emotional drop that often follows the holidays. A commenter publicly criticized the post, claiming it was obviously AI-generated (it wasn’t) and branding it “pretty off-putting.”

“It felt invasive,” Howell says. She was quick to clarify in the comments that the words were hers alone, crafted without machine assistance. “I put my blood, sweat, and tears into my work,” she says. “I wanted people to know it was indeed a false statement.”

As tools like ChatGPT, Claude, and Gemini integrate into daily life, similar accusations are becoming commonplace across the internet. The query “Did AI write that?” often carries a heavy dose of disdain. It is rarely a genuine question; rather, it serves as a conversation stopper—a method to undermine someone's credibility by implying they lack a human voice.

“It’s basically shorthand for, ‘You don’t sound human enough,’ which is a pretty loaded accusation,” says Stephanie Steele-Wren, a psychologist based in Bentonville, Arkansas. She explains that this taps into a deep-seated cultural anxiety regarding authenticity and our ability to recognize it. The implication is damaging: it suggests the writer lacks intelligence, originality, and trustworthiness.

Why the Accusation Stings

Large language models (LLMs) have distinct stylistic fingerprints. They often rely on specific structures, such as “It’s not just X, it’s also Y,” and have a tendency to overuse em dashes.

“AI has certain habits,” notes Alex Kotran, CEO of the AI literacy nonprofit aiEDU. “It likes threes—X, Y, and Z—and it often employs alliteration.” Other telltale signs include endings that are a little too neat and transitions that feel unnaturally smooth.

Caitlin Begg, a sociologist who studies technology’s impact on daily life, suggests that AI writing often feels like a politician speaking. “It’s generally very long-winded, and it doesn’t really take a hardened stance,” she explains. Instead of committing to a point, it hedges. “There’s a certain part to it that feels soulless.”

Consequently, being told you sound like AI feels dehumanizing. “That’s why the insult stings,” Steele-Wren says. “It’s not about quality. It’s about identity. It suggests your voice is generic or interchangeable.”


The Craving for Authenticity

Whether the accusations are true or not, the fact that they are happening highlights a cultural unease with our increasingly machine-mediated world. This is compounded by the lack of reliable AI-detection tools and the lingering fear that human effort is becoming obsolete. When the human behind the words is obscured, interactions feel ungrounded.

“There’s a real hunger right now for writing that feels unmistakably human, with all the quirks, oddly specific details, and little flashes of personality that AI can’t quite mimic,” Steele-Wren adds. “Humans are naturally chaotic and idiosyncratic. AI is not.”

In an effort to signal authenticity—and avoid the “bot” label—some people are purposely introducing grammatical errors and typos into their work. “You can already see people adapting with more intentional messiness, more humor, and more specificity,” Steele-Wren says. It is a collective signal: *A real person wrote this.*

Kotran admits he has stopped polishing his writing as rigorously as he once did. He has even bid farewell to the em dash. “You'll read my paragraphs sometimes, and I'll just be using commas and commas and commas,” he says. “I know this isn't really correct, but there are people who look at a piece of writing and go, ‘Oh, it has an em dash—it’s been generated by AI.’” He has also begun stripping out the alliteration he used to enjoy.

This represents a significant shift. Nicole Ellison, a professor at the University of Michigan School of Information, notes that her past research found people were often dismissed on dating platforms for having typos. “They would see that as a signal that either this person is uneducated, or that they don’t care,” she says. “Now we’ve kind of come full circle, where a typo maybe signals that you actually do care, because you took the time to write it yourself.”

Ellison points out that etiquette around AI use is still unsettled. Should writers add a disclaimer when they use ChatGPT, to pre-empt backlash? “There are no established norms at the moment,” she says. “I assume that we’ll collectively, as a society, come up with shared expectations.”

Some experts predict a resurgence in analog activities, such as handwriting notes, as a pushback against automation. “I think there will be a premium placed on humanness,” Kotran says. “Whenever possible, people should just be transparent, because ultimately, people want authenticity. We're in a moment where we're literally redefining authenticity.”

How to Respond

When Howell faced her accuser, she defended herself in both public and private messages. “Hmm, it’s not AI, but I have been working in marketing for 20 years, so I do know how people read,” she replied. Looking back, however, she questions whether it was worth the energy. “I know what I'm doing—and obviously I know it’s me—so I wouldn’t feel the need” to do it again.

While some may prefer to ignore snide remarks, others feel the need to push back. Steele-Wren suggests keeping the response simple: “Uh, no, that’s my actual voice.”

Other effective comebacks include:

*   “I was really careful in writing it, and maybe that's not how I always come off. My writing looks a lot different than how I talk.”

*   “That’s just what happens when I slow down enough to choose my words on purpose.”

*   “That’s just my ‘I want this to land softly’ voice.”

Almost everyone will eventually face these modern communication dilemmas. “People are noticing more and more that discourse has become flattened online, and that there’s a lot of mechanized influence,” Begg observes. “I think people are getting a little bit sick of it, and they’re beginning to rebel against AI and the ‘algorithmization of everyday life.’” Whether the accusations are fair or not, calling out perceived AI usage has become a way to reclaim human connection.