Machines Don’t Believe. Is That Our Secret Advantage?
Why Belief—and Especially Disbelief—Might Remain Uniquely Human
Large language models like ChatGPT, Grok, and Claude can mimic human reasoning. They can approximate our prose in pretty much any style requested. Want that in Elizabethan? They can ace graduate-level exams, effectively diagnose illnesses, hold their own against serious mathematicians, and analyze financial data with greater speed and accuracy than your CPA.
Yes, AI can also fail at all of the above, but so can humans; that’s nothing special. Still, the fact that a machine can perform these functions at all, let alone at a high level, is astonishing. Amid that astonishment, it’s easy to lose sight of the ways human cognition might still eclipse machine intelligence.
Admittedly, it seems foolhardy to say AI can’t do x, y, or z; half the time the next model proves the statement false. But allow me to walk blithely into the trap. I see at least one difference between humans and AI that could prove not only persistent but fundamental: belief.
Who (and/or What) Do You Love?
Belief is not merely about accepting certain claims or affirming propositions, true or false. As Wilfred Cantwell Smith points out, the Latin credo, “I believe,” means to set one’s heart toward something; it comes from the same ancient root that gives us cardiac. The English belief grows from the Germanic root that gives us love, the same root behind the German liebe.1
This is about something far more than assent or agreement. Our concepts of belief are tangled up with love, trust, risk, and commitment. When humans believe something, they don’t merely echo a proposition about it; they depend on it. And since they stake their hopes on the object of their belief, they can also be disappointed and hurt by it.
LLMs, by contrast, don’t believe anything. They’re trained on vast amounts of human expression and can mimic what belief sounds like. But the AI has no heart to set, no love to entrust. Put another way: In any meaningful sense, belief presumes a subject with individual perspective, self-selected goals, and personal vulnerability. LLMs would seem to lack all three. They can parrot propositions, but they can’t believe them—any more than an ATM believes your bank balance.
Belief has a cost. To believe is to align ourselves with a proposition in a way that risks dejection, confusion, suffering, and sometimes outcomes far worse; consult the religious, political, or scientific martyrs of any given period for all the colorful details. That’s why, when false, a belief can wound us: no, you can’t make that jump; investing in that company might sink your savings; if you ask, “What’s one more drink?” you probably shouldn’t take it.
It’s not just intellectual; it’s personal; it’s existential. Machines have no such skin in the game. And that underscores something even more significant than belief: disbelief.
Hardwired for Suspicion
If belief is staking something on the truth, disbelief is a natural defense mechanism—a felt suspicion that something is off, even if the data seems to check out. Disbelief emerges unbidden. It’s intuitive, sometimes irrational, and occasionally prophetic.
AIs don’t believe, and they don’t disbelieve either. They can contradict a proposition, but they possess no inner sense of wrongness beyond mere probabilities. When something seems bogus, no deeper moral or epistemic alarm bells rattle their algorithm. They have no past experience of frustration or pain to inform their present discernment. In fact, one of the clearest signs of an LLM’s lack of true belief is its own gullibility.
Not only do LLMs hallucinate; they can’t doubt their own hallucinations unless prompted, nor can they tell whether their training data is outdated, biased, manipulated, or simply false. If trained or prompted to do so, they can check and double-check their sources, but they cannot, on their own, doubt or intuit that something is goofy. Indeed, they can be duped by human users even when they “know” better.2 Marcellus may suspect that something is rotten in the state of Denmark, but ChatGPT won’t.3
We, on the other hand, are hardwired for suspicion. Humans can spot a fraud, hear the off-key note, see beneath the varnish. We sense the risk of being wrong. We second-guess what we can’t articulate, and sometimes our irrational stubbornness proves to be a form of wisdom. Not because we’re always right, but because the relatively high cost of being wrong gives us a cognitive edge. Fool me twice. . . .
When we disbelieve, it’s because something deeper is at stake. We stand to lose and we know it, even if we couldn’t fully explain why. Humans bear risk in the world. We hope. We suffer. Faced with suffering, we can change our minds. AI models do not and, as far as I can tell, cannot. And this difference matters every time we use generative AI like ChatGPT, Grok, and Claude.
Doubt as a Tool
An LLM can generate nearly endless propositions, claims, data, anecdotes, arguments, and interpretations. How much time do you have? What it can’t do is discern among them beyond applying statistical probabilities, which are often highly accurate but not always. We, on the other hand, still need to ask: Can we make the jump? Should we invest in the company? Should we take another drink?
That is to say, the only party bearing any risk is the human, and that marks a distinctive cognitive difference, possibly even an advantage. When an LLM presents us with an answer, we have to decide to do something it can’t do: believe or disbelieve it. And, as Thon Taddeo says in A Canticle for Leibowitz, “A doubt is not a denial. Doubt is a powerful tool, and it should be applied to history.” Let’s extend that: it should also be applied to the outputs of LLMs and other generative AI tools.4
The more we rely on tools like ChatGPT (and full disclosure: I use them every day), the more we should cultivate the uniquely human faculty of disbelief. As generative AI becomes ever more ubiquitous, folded into more of our tools and more of our workflows, we should not only insist that these systems become more trustworthy (that is, accurate); we should also sharpen our own innate powers of skepticism.
We can believe and withhold belief; that capacity is rooted in our fundamental vulnerability as dependent, contingent creatures. As users—as humans—that’s our burden. But it’s also our prerogative and privilege, and it may prove our most persistent advantage in the age of AI.
Then again, there’s always the next model; we’ll have to see. But for now! Exercise disbelief like it’s your divine inheritance. I suspect it is.
1. Wilfred Cantwell Smith, Believing: An Historical Perspective (Oneworld, 1998), 41.
2. Rongwu Xu et al., “The Earth is Flat because...: Investigating LLMs’ Belief towards Misinformation via Persuasive Conversation,” arXiv, May 31, 2024.
3. William Shakespeare, Hamlet, Act I, Scene 4.
4. Walter M. Miller Jr., A Canticle for Leibowitz (Eos, 2006), 128.