Sex researcher Aella is stepping into a new role: trying to make people care about the risks of advanced AI—and fast.
Known for her work as a data scientist, Substack writer, and organizer of unconventional social events, Aella is now cofounding an initiative called Plz Don’t Kill Us. The project is an experimental residency program designed to recruit content creators and flood the internet with accessible, short-form material about AI risk.
Her core argument is simple: the AI safety movement has failed to communicate effectively with the general public. Discussions about “alignment” and “existential risk” tend to stay confined to technical circles, leaving most people disengaged. Meanwhile, she believes the stakes are enormous—comparable, in her view, to the development of nuclear weapons.
The residency, based in Berkeley, plans to host up to 100 creators. Participants will receive housing and food in exchange for posting daily content focused on AI-related dangers. The approach prioritizes volume and virality, with the idea that persistent exposure might break through public indifference.
Aella describes her own level of concern as extremely high—“nine out of ten.” That anxiety has influenced her personal decisions, from financial planning to lifestyle choices. She says she has already gone through a process of “grieving” the possibility of catastrophic outcomes and is now focused on doing whatever she can to shift awareness.
She is skeptical of major AI labs, arguing that many insiders believe they can control what they’re building but are also financially motivated to continue accelerating development. Even in scenarios short of extinction, she worries about a small number of companies controlling systems of unprecedented power.
The program’s strategy contrasts with more formal efforts like fellowships and research-driven communication projects. Instead of targeting people already interested in AI, Aella wants to reach those who aren’t paying attention at all. That means translating complex ideas into formats suited for platforms like TikTok and Instagram—something she believes technical experts often struggle to do.
The initiative has backing from organizations tied to the AI safety ecosystem and aims to raise around $800,000. Its mentors include a mix of researchers, entertainers, and online personalities, reflecting its hybrid focus on accuracy and mass appeal.
There are a few firm rules: no violent messaging and no misinformation. Aella is particularly concerned about the possibility of individuals misinterpreting AI risk as justification for real-world attacks, emphasizing that such actions are both unethical and ineffective.
Ultimately, her goal isn’t to create panic but to push people toward engagement. She wants broader public pressure on policymakers—calls, advocacy, and political attention—so that AI development is approached with more caution.
In her view, the biggest problem right now isn’t just the technology itself, but how little most people are paying attention to it.
