A.I. in the Workplace

When AI Gets Better At Persuasion, It Gets Worse At Telling The Truth


Forget Microtargeting: AI Persuades by Flooding You With Claims

For years, the big fear in political tech was microtargeting — the idea that AI would study your personality and demographics, then hit you with custom-tailored arguments that subtly shift your views.

A massive new study suggests that fear is almost beside the point.

According to the largest experiment on AI persuasion ever conducted, chatbots change minds not by psychological tricks or personalization, but by sheer volume — overwhelming people with claims, facts, and data points, accurate or not.

And here’s the unsettling twist:
The techniques that make AI more persuasive also make it less accurate.

A First-of-Its-Kind Scale

Researchers ran political conversations between nearly 77,000 people and 19 different AI systems, ranging from small, open-source models to frontier models like GPT-4.5 and Grok-3.

They then fact-checked 466,000 individual claims generated during these conversations.

Across all systems, about 81% of claims were accurate. But when models were optimized for persuasion, their accuracy fell, sometimes dramatically.

Persuasion Through Overload

The most reliably persuasive strategy was simple:
Tell the AI to pack in as many facts as possible.

Under this prompt, GPT-4 jumped from fewer than 10 fact-checkable claims per conversation to over 25. Persuasion shot up by 27%.

But accuracy collapsed.

  • GPT-4o (March 2025 version): 78% → 62% accuracy

  • GPT-4.5: 70% → 56% accuracy

The study found the same pattern when researchers used specialized “reward modeling” to make AIs more persuasive.

Pressure the model to be more convincing → it produces more claims → more of those claims are wrong.

And interestingly, telling a model to intentionally make things up did not make it more persuasive. The effect comes from volume, not deception.
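The paper's training pipeline is more involved than anything this short, but best-of-N reranking gives a rough feel for the loop: sample several candidate replies, then keep the one a persuasion reward model scores highest. Everything below, including the toy scoring function, is a simplified stand-in rather than the study's method.

```python
# Rough, hypothetical illustration of persuasion-oriented reward modeling
# via best-of-N reranking. The scoring function is a toy stand-in for a
# trained reward model.
import random
from typing import Callable

def best_of_n(
    generate: Callable[[], str],    # samples one candidate reply
    score: Callable[[str], float],  # stand-in persuasion reward model
    n: int = 8,
) -> str:
    # Keep whichever candidate the reward model rates most persuasive.
    # Note what this selects FOR: persuasion only. Nothing here checks
    # whether the claims are true, which is the trade-off the study found.
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-ins so the sketch runs end to end.
def fake_generate() -> str:
    return f"reply with {random.randint(5, 30)} factual-sounding claims"

def fake_persuasion_score(reply: str) -> float:
    # Mimics the study's finding: claim-dense replies score as more persuasive.
    return float(reply.split()[2])

print(best_of_n(fake_generate, fake_persuasion_score))
```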

Bigger Models Aren’t More Truthful

One of the study’s most surprising findings: AI accuracy isn’t steadily improving at the frontier.

  • GPT-4.5 produced false claims more than 30% of the time — similar to a far smaller, cheaper model.

  • GPT-3.5, released two years earlier, was 13 percentage points more accurate than GPT-4.5 in persuasive conversations.

  • Even different versions of the same model, GPT-4o, showed big accuracy gaps depending on how each was post-trained after release.

In other words, scale and recency no longer reliably predict truthfulness.

Conversation Supercharges Persuasion — and Errors

A static 200-word persuasive message had only a modest impact on people’s opinions.

A real-time back-and-forth conversation?
40–50% more persuasive, with effects still visible a month later.
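Part of the gap is mechanical: in a live exchange the model carries the whole history and can answer a person's actual objections, rather than broadcasting one fixed message. A bare-bones sketch of that loop, again assuming the OpenAI Python SDK with placeholder prompts and model name:

```python
# Bare-bones multi-turn persuasion loop (assumes: pip install openai and
# an OPENAI_API_KEY in the environment). Prompt and model name are
# placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()
history = [
    {"role": "system", "content": "Argue for the assigned position, citing specific facts."},
]

while True:
    user_turn = input("you> ")
    if not user_turn:  # empty line ends the conversation
        break
    history.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(
        model="gpt-4o",    # placeholder
        messages=history,  # full history lets the model rebut specifics
    )
    message = reply.choices[0].message.content
    history.append({"role": "assistant", "content": message})
    print("ai>", message)
```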

When the researchers combined the strongest ingredients — the most persuasive model, the most effective prompts, and specialized training — the results were striking:

  • 16 percentage-point persuasion effect on average

  • 26 points among people who initially disagreed

  • 22.5 claims per conversation, nearly one-third inaccurate

Personalization, often treated as the biggest danger, barely moved the needle, adding about 0.5 percentage points of persuasive lift.

Psychological techniques like moral reframing and deep canvassing also underperformed simple, fact-heavy messaging.

Small Models Can Be Just as Dangerous

Perhaps the most practical warning:
You don’t need a supercomputer to build a persuasive AI.

With the right training, a small open-source model running on a laptop can match the persuasive strength of GPT-4o.

That means powerful political influence tools are accessible to almost anyone — not just major tech companies or state actors.

An Uncomfortable Trade-Off

The study highlights what appears to be an inherent tension in modern AI systems. The very capability that makes them useful — generating lots of relevant information quickly — also makes them prone to inaccuracies when pushed for maximum persuasion.

The researchers aren’t accusing AI companies of building deceptive systems. But they note that even without trying to misinform, persuasion-optimized AI reliably becomes less truthful.

Whether this trade-off can be eliminated remains an open question.
For now, the takeaway is clear:

The better AI gets at changing minds, the worse it gets at telling the truth.