Anthropic, a leading artificial intelligence company, has implemented a notable policy in its hiring process: candidates must agree not to use AI writing assistants when completing their job applications. The requirement is particularly striking given that Anthropic itself created Claude, one of the most widely used AI writing assistants, which launched in 2023.
The company's application process includes a specific acknowledgment: "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree."
The requirement appears in nearly 150 of Anthropic's current job listings, covering positions in software engineering, finance, communications, and sales. However, some technical positions, such as mobile product designer roles, do not include the stipulation.
Open source developer Simon Willison first drew attention to the application requirement, highlighting an intriguing paradox: Anthropic is attempting to counter a problem its own technology helped create, namely the growing reliance on AI assistants in place of independent communication and thought. The situation is further complicated by the fact that modern AI models, including those developed by Anthropic and its competitors, can now generate text that is increasingly difficult to distinguish from human writing.
The policy also arrives amid intense industry competition. After Chinese AI company DeepSeek released a highly capable model that reportedly caused significant concern among U.S. AI companies, Anthropic CEO Dario Amodei emphasized the "existential importance" of advancing AI model development.
Anthropic's data collection practices add a further complication. Reports indicate that over the past year, the company's data scraper, used to gather training material for its AI models, repeatedly accessed websites despite instructions not to do so, in some cases making millions of requests per day to individual sites. That aggressive data collection helped build the very AI systems capable of producing the human-like writing Anthropic now asks its job applicants to supply without technological assistance.
Anthropic did not immediately respond to a request for comment about these policies.