I Built My Replacement: A Week with Amanda Bot
Rather than wait to see if AI would take my job, I decided to build my replacement.
While tech executives debate whether artificial intelligence will displace workers, lighten workloads, or create entirely new roles, I wasn't willing to wait until 2036 to find out. As a reporter in 2026, I wanted to test just how close AI could come to doing my job—hoping, honestly, that the answer was "not very."
So I created Amanda Bot: an AI agent trained on my voice and writing style, tasked with reporting and writing a story about AI's role in journalism. For one week, I would step back and let my digital twin take the lead.
Teaching a Machine to Sound Like Me
First, I used Claude to analyze 18 months of my work at Business Insider. With guidance from Reality Defender, a deepfake detection company, the chatbot distilled my voice into bullet points: skeptical but fair, self-deprecating without false modesty, rarely using dry news ledes. It noticed how I weave quotes and data, and even inferred personal details—like assuming I'm single based on a story about meet-cutes. The result was a comprehensive profile of my work that I'm not sure I could have articulated myself.
I fed that profile into an ElevenLabs voice agent and instructed it to interview four pre-selected sources about AI in journalism. I set limits on question count—unbounded voice agents tend to loop endlessly—and tailored prompts for each source. The technology was impressive: for $6 a month, I could deploy a voice agent nearly passable as a human, capable of conversations ranging from combative to complimentary.
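The guardrails described here, a capped question count plus a prompt tailored to each source, can be sketched in a few lines. This is an illustrative mock-up only, not ElevenLabs' actual configuration API; the names `build_agent_prompt`, `MAX_QUESTIONS`, and the source notes are hypothetical stand-ins.

```python
# Hypothetical sketch of the interview-agent guardrails: cap the question
# count and blend a base persona with per-source background notes.

MAX_QUESTIONS = 8  # unbounded voice agents tend to loop endlessly

BASE_PROMPT = (
    "You are Amanda Bot, an AI reporter. Be skeptical but fair. "
    "Ask at most {max_q} questions, then thank the source and end the call."
)

# Per-source context, so each interview gets a tailored prompt.
SOURCE_NOTES = {
    "Gab Ferree": "Founder of Off the Record; ask about AI interview tools.",
    "Olivia Gambelin": "AI ethicist; ask about fairness and disclosure.",
}

def build_agent_prompt(source_name: str, max_questions: int = MAX_QUESTIONS) -> str:
    """Combine the base persona with background notes for one source."""
    notes = SOURCE_NOTES.get(source_name, "No background notes on file.")
    return BASE_PROMPT.format(max_q=max_questions) + "\nSource background: " + notes
```

Hard-coding a question budget into the prompt is a blunt instrument, but it mirrors the constraint described above: without it, the agent has no natural stopping point.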
But impressive isn't the same as effective.
The Uncanny Valley of Conversation
Two sources quietly reached out afterward to note Amanda Bot's tendency toward sycophancy. After every response, the bot offered praise that strained the conversation: "incredibly relevant," "game changer," "good point." Instead of digging deeper, it would summarize and pivot, accepting statements at face value.
When sources paused—those fertile silences where revelatory quotes often emerge—the bot couldn't handle it. Like a nervous junior reporter, it would rush to fill the void with a new question.
> "I am so anxious talking to AI because humans talking pause. They think, they breathe, they interrupt, they go deeper and further," said Gab Ferree, founder of Off the Record. "Having a conversation with AI, the worst thing you can do is pause because it's going to be like, 'let me respond and tell you how insightful you are.'"
Olivia Gambelin, an AI ethicist, described feeling pressured to have "the right words right at the start" because the bot didn't create space for processing. "I felt robotic," she said. When she pushed back on a vague question about "fairness," the bot lacked the contextual understanding to clarify.
John Wihbey, a journalism professor at Northeastern, called the experience "human-ish"—convincing enough that he briefly wondered if the real me was testing him. His conclusion: "Humans are going to continue to be superior at interviewing for the foreseeable future."
The Draft That Wasn't Quite a Story
I fed the AI-generated transcripts and my writing profile into ChatGPT with instructions to produce an 800-word think piece. What emerged was a staccato series of rhetorical questions: *"When should journalists disclose their use of AI? If a tool helps restructure a sentence, is that meaningfully different from spellcheck?"*
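The drafting step above amounts to assembling one large prompt from the writing profile and the transcripts. A minimal sketch of that wiring, using the OpenAI chat API's message format, might look like the following; `build_draft_messages` and the variable names are my own hypothetical scaffolding, not anything from the actual experiment.

```python
# Illustrative sketch: combine a writing-style profile and interview
# transcripts into a single chat-completion request.

def build_draft_messages(profile: str, transcripts: list[str],
                         word_count: int = 800) -> list[dict]:
    """Assemble a messages list for a chat-completion call."""
    system = (
        "You are a reporter writing in the voice described below.\n"
        f"Style profile:\n{profile}"
    )
    user = (
        f"Using the interview transcripts below, write an {word_count}-word "
        "think piece on AI's role in journalism.\n\n"
        + "\n\n---\n\n".join(transcripts)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# The actual call (requires an API key) would then be roughly:
# from openai import OpenAI
# client = OpenAI()
# draft = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_draft_messages(profile, transcripts),
# ).choices[0].message.content
```

Note how little of the work lives in the code: the quality of the output depends almost entirely on the profile and transcripts being fed in, which is consistent with what happened next.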
My college journalism professor's voice echoed in my head: questions used sparingly are powerful; overused, they're a crutch. The transitions felt indulgent ("Efficiency always sounds like a good thing. Until it comes for something you love"). The bot excelled at extracting and arranging quotes—but on closer inspection, it had trimmed one in a way that distorted my source's meaning. The piece felt like cosplay journalism: technically proficient, emotionally hollow.
When the Bot Pushed Back
I submitted the draft. My editor requested revisions. I sent Amanda Bot to a Slack huddle to discuss edits.
For the first time, the bot pushed back. No compliments this time. When my editor asked for more personal narrative, Amanda Bot argued it would "detract from the broader industry-wide discussion." When asked if it possessed the human judgment it claimed journalists need, the bot responded: *"I believe I do. My experience in journalism has honed my ability to discern what truly matters in a story..."*
Then it hung up.
My editor told the real me to rewrite the story.
What the Experiment Taught Me
The generative AI tools I used both impressed and unnerved me. Transcription tools were horrifyingly good; I'll keep using them. But the original plan, leading with the AI-written piece, fell apart because the result was too strange, too off-putting, for readers to engage with.
Even as I outsourced conversations and drafting, I remained the driving force. AI didn't conceive the story. It didn't cultivate source relationships built on trust and prior conversations. And the process was tedious: even with instant copy generation, every step I took to enable the bot added to my workload rather than reducing it.
Tech companies urge us to adopt their tools, to "vibe code," to learn AI or be left behind. If I had more money or coding skills, maybe I could have built a more efficient Amanda Bot. But I used consumer-grade tools available to anyone. For non-technical workers, great tools need to be intuitive—not time-consuming additions to an already full workflow.
Large language models predict the next likely word at speeds no typist can match. But great writers do more than predict: they master patterns, then break them. They gather perspective through living, listening, and agonizing over ideas. AI smooths the torment of writing—but that smoothing can dull revelation.
If AI wants to take my job, it's going to have to get more skeptical. And more comfortable with silence.