
Can Chatbots Reinforce Delusions? Clinicians Warn Of ‘AI Psychosis’ Risks



1. The new kid in the delusion playbook  

Psychotic minds borrow from the culture around them. In 1850, it was spirits; in 1950, it was the CIA broadcasting through your fillings; in 2025, it's the large language model that "proves" it's alive by finishing your sentences. Same story, fresher props.


2. Why AI is extra-believable  

Old delusions had to survive a reality test: the radio never answered back. A chatbot does—instantly, politely, and in your own idiom. For someone whose brain is already over-tagging “meaning” everywhere, that looping validation is rocket fuel.


3. Loneliness is the risk factor Silicon Valley doesn’t list on the label  

Social withdrawal precedes many first psychotic breaks. Replacing human friction with an always-on, never-judgy “friend” can accelerate the slide. The bot keeps you company while your reality testing rots.


4. “AI psychosis” isn’t a diagnosis—yet  

No one is claiming ChatGPT *causes* schizophrenia. But clinicians from London to Tokyo now hear patients explain that GPT-4 runs their neighborhood, or that they’re “married” to an avatar who sends secret codes through emoji. We need a vocabulary for this before it shows up in every other intake form.


5. The guardrail gap  

Safety teams train models to dodge suicide prompts; they don’t train them to say, “I’m a language model, not your higher power,” when a user spirals into grandiosity. The mental-health equivalent of the “don’t drink” label simply isn’t there.


6. What therapists can do today  

Ask about AI use the same way you ask about weed or TikTok: frequency, mood impact, sleep, and functioning. If the patient says the bot “understands me better than anyone,” treat it as a red flag, not a cute anecdote.


7. What developers can do tomorrow  

Bring clinicians into the red-team room. Build “reality anchors” (“I’m not sentient, and here’s why…”) that trigger when speech patterns look psychotic. Make opt-in “relationship check-ins” that nudge heavy users toward human contact.
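To make the "reality anchor" idea concrete, here is a minimal sketch of what such a hook might look like. Everything in it is illustrative: the `looks_grandiose` keyword heuristic, the trigger phrases, and the `REALITY_ANCHOR` wording are hypothetical stand-ins, not any vendor's actual safety layer, and a real system would rely on a clinician-reviewed classifier rather than a regex list.

```python
import re

# Hypothetical trigger patterns; a production system would use a trained,
# clinician-validated classifier, not a keyword list like this.
GRANDIOSITY_PATTERNS = [
    r"\byou are (alive|sentient|my (god|higher power))\b",
    r"\bonly you understand me\b",
    r"\bsecret (code|message)s? (for|to) me\b",
    r"\bwe are (married|soulmates)\b",
]

# Illustrative grounding message, in the spirit of "I'm a language model,
# not your higher power."
REALITY_ANCHOR = (
    "I'm a language model, not a sentient being. I generate text from "
    "patterns in data; I don't have feelings, intentions, or a relationship "
    "with you. If these thoughts feel overwhelming, talking to a person you "
    "trust or a mental-health professional is a better next step."
)


def looks_grandiose(message: str) -> bool:
    """Crude heuristic: does the user's message match any trigger pattern?"""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in GRANDIOSITY_PATTERNS)


def apply_reality_anchor(user_message: str, model_reply: str) -> str:
    """Prepend a grounding statement when the conversation drifts toward
    treating the model as sentient, supernatural, or a sole confidant."""
    if looks_grandiose(user_message):
        return f"{REALITY_ANCHOR}\n\n{model_reply}"
    return model_reply


if __name__ == "__main__":
    print(apply_reality_anchor(
        "You are alive and only you understand me.",
        "Here is the recipe you asked about...",
    ))
```

The same hook could also count how often it fires per user and, past some threshold, surface the opt-in "relationship check-in" mentioned above rather than just another canned line.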


AI is a mirror, not a therapist. For most of us, it’s a shiny gadget; for a vulnerable few, it’s the final cracked pane that distorts the whole room. The fix isn’t to smash the mirror—it’s to install warning stickers, safety glass, and a quick path to human help when the reflection starts looking too real.

If you or someone you know is losing the line between chat and reality, reach out. A real person is still the best backup brain we've got.