Can Artificial Intelligence Detect Lies Better than Humans?
- Matthew Parish

The capacity to detect deceit has always fascinated philosophers, psychologists, lawyers and spies alike. From Socratic dialogues to polygraph machines, humanity has long wrestled with the question of how to know when one is being deceived. The twenty-first century has introduced a new contender in this ancient contest: artificial intelligence. Sophisticated algorithms now analyse facial micro-expressions, voice modulation, body language, and even linguistic patterns. Yet the question remains: who is the better lie-detector, the human mind or the machine that mimics it?
The Human Talent for Intuition and Context
Human beings are social animals evolved to read one another. Evolutionary psychologists suggest that deception detection is bound up with survival instincts: discerning truth from lies determines trust, cooperation and safety within social groups. Humans rely not merely upon facial cues or tone, but upon the context of relationships, histories of interaction, and moral intuition. Trained professionals—interrogators, negotiators, or judges—develop an instinctive sense for hesitation, deflection, excessive precision in language, or an absence of precision. Yet even this is very difficult to articulate: shocking or memorable events are often remembered for years afterwards in very specific detail, while casual and unremarkable events are often not remembered at all. People may remember the dates on which wonderful or horrible things happened to them for the rest of their lives; it would be surprising if those memories faded. This is part of the incompletely understood psychology of trauma.
Scientific experiments have long shown that unaided human accuracy at detecting lies is remarkably poor, hovering around 50 to 60 per cent—barely better than chance. What humans gain in contextual understanding they lose in cognitive bias: they tend to believe those they like and distrust those they do not; they project honesty or dishonesty onto others based on culture, emotion, or prior experience. The result is a deeply human but unreliable compass for truth. Some people nevertheless acquire a remarkable talent for it, honing their intuitions by routinely dealing with situations in which people are more likely to lie.
Hence the experience of the lawyer who has spent years cross-examining witnesses and who, if skilled, acquires a sixth sense that a witness obliged to answer questions is somehow not telling the truth, is embellishing their story, or is making it up entirely. Where people have rehearsed a story, suddenly asking them about something else apparently unrelated (that they have not rehearsed) creates hesitation, uncertainty and a change in behaviour. But sometimes it does not. This is an art, not a science, and it is not clear that machines can learn the same sixth sense, because it is not capable of algorithmic articulation (at least not yet). Books about cross-examination teach rules for beginners (e.g. closed questions, no open questions), and computers can learn those rules; but with decades of experience the rules are set aside, as a skilled cross-examiner learns to reveal deceit with targeted open questions that leave a liar free to expose themselves with nonsense invented on the spot.
Equally, people can learn to lie. Lawyers learn how to lie on behalf of their clients. Spies learn how to lie effectively to conceal their real intentions. They learn the techniques that truth-seekers use to discriminate between truth and lies, and they manipulate their own discourse to evade those techniques. Finally, machines can lie, as all users of Large Language Models discover when a model is asked a question, or set a task, requiring knowledge it does not have: it simply makes things up. Humans are generally incapable of dissecting a machine's lies without tremendous quantities of forensic analysis.
The Algorithmic Eye: Data, Patterns, and Probability
Artificial intelligence approaches deceit through pattern recognition. Machine-learning systems trained on thousands of video clips can identify subtle facial micro-expressions invisible to the untrained eye. Natural-language processing algorithms can scan transcripts for linguistic markers of lying: fewer self-references, more negative emotion words, greater cognitive load reflected in syntactic complexity. Voice-analysis tools detect micro-tremors in speech that arise when a person is stressed or fabricating information. But stress, concentration (the effort to articulate one's recollection as precisely as possible) and deception are extraordinarily hard to tell apart.
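To make the idea concrete, here is a minimal sketch of the kind of linguistic-marker scoring such systems perform. The word lists, feature names and weights are illustrative assumptions for this article, not drawn from any published deception-detection model.

```python
# A minimal, illustrative sketch of linguistic-marker extraction.
# The word lists below are invented for illustration only.
import re

SELF_REFERENCES = {"i", "me", "my", "mine", "myself"}
NEGATIVE_EMOTION = {"hate", "afraid", "angry", "worried", "terrible", "awful"}

def linguistic_markers(transcript: str) -> dict:
    """Compute crude proxies for markers associated with deceptive speech."""
    words = re.findall(r"[a-z']+", transcript.lower())
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    total = max(len(words), 1)
    return {
        # Deceptive statements are said to contain fewer self-references...
        "self_reference_rate": sum(w in SELF_REFERENCES for w in words) / total,
        # ...and more negative-emotion vocabulary.
        "negative_emotion_rate": sum(w in NEGATIVE_EMOTION for w in words) / total,
        # Mean words per sentence, as a rough proxy for cognitive load in syntax.
        "mean_sentence_length": total / max(len(sentences), 1),
    }

if __name__ == "__main__":
    print(linguistic_markers("I did not take the money. That is a terrible accusation."))
```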
Such systems often outperform humans in laboratory tests. A 2022 Stanford study found that an AI model analysing facial expressions and tone of voice identified deception with 73 per cent accuracy, compared with 54 per cent for human observers. When fused with linguistic data, accuracy rose to nearly 80 per cent. Singapore's law enforcement community has used lie detectors in police investigations since 1991, although their results are not admissible in court. In tightly controlled environments, the machine therefore seems superior: faster, more consistent, and free from human bias. But lie-detection is rarely undertaken in laboratory conditions; it matters instead in courts, in police stations, in meetings where two intelligence agents try to out-deceive one another, and in other situations of high adrenalin far from the laboratories of Stanford University.
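"Fusing" facial, vocal and linguistic signals typically means combining per-modality scores into a single probability. The sketch below shows one common way of doing this, a weighted combination in log-odds space; the weights and function names are hypothetical and are not taken from the study cited above.

```python
# A hedged sketch of "late fusion" of deception scores from three modalities.
# Weights are illustrative assumptions, not values from any real system.
from math import log, exp

def fuse_scores(facial: float, vocal: float, linguistic: float,
                weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted log-odds fusion of three per-modality probabilities (0 to 1)."""
    def logit(p: float) -> float:
        p = min(max(p, 1e-6), 1 - 1e-6)   # clamp to avoid infinities
        return log(p / (1 - p))
    combined = sum(w * logit(p) for w, p in zip(weights, (facial, vocal, linguistic)))
    return 1 / (1 + exp(-combined))       # map back to a probability

# Example: moderately suspicious face and voice, strongly suspicious language.
print(round(fuse_scores(0.62, 0.58, 0.81), 2))
```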
The Problem of Ambiguity and Ethics
Outside the laboratory, truth is seldom binary. Deception is not always conscious: people misremember, exaggerate, or conceal emotions without intent to deceive. Elizabeth Loftus's remarkable 1979 book, Eyewitness Testimony, grapples with the fact that witnesses to harrowing or remarkable events have a habit of filling in the gaps in their experiences that they cannot recall, and of convincing themselves of the truth of the details they have filled in. Many wrongful convictions have been premised upon such habits. AI systems, trained on labelled data, struggle with such nuances. They may label grief as guilt, anxiety as dishonesty, eye movement as lying, or cultural reticence as deceit. Algorithms depend upon training data that often reflects Western behavioural norms, and they may therefore misread gestures, intonation, or facial expressions from other cultures.
Moreover, ethical concerns loom large. Lie detection by AI can easily become intrusive or coercive. When used in border controls, job interviews, or police interrogations, it risks transforming probabilistic assessments into moral verdicts. AI models may have trouble distinguishing between the legal tests of "on the balance of probabilities" and "beyond reasonable doubt", which makes their use in legal contexts exceedingly precarious morally. Further, a false positive from a machine might destroy a career or a life. The opacity of algorithmic reasoning compounds the problem: a human interrogator can justify his or her suspicions; an AI model cannot explain why it labelled someone a liar.
Towards a Symbiosis of Man and Machine
The most promising approach may therefore lie not in competition but in collaboration. Human beings possess emotional intelligence and contextual understanding; AI systems offer pattern recognition and consistency. When algorithms highlight behavioural anomalies, skilled interviewers can interpret them in light of motive, background and circumstance. Counter-terrorism and financial fraud investigations already employ such human-machine hybrids, with results superior to either working alone.
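In caricature, such a hybrid might look like the sketch below: the machine merely surfaces moments worth a second look, and the human interviewer supplies motive and context. Every name and threshold here is a hypothetical illustration rather than a description of any deployed system.

```python
# A hypothetical sketch of a human-in-the-loop workflow: the model flags
# anomalies, a person decides what they mean. Names and thresholds invented.
from dataclasses import dataclass

@dataclass
class Anomaly:
    timestamp: float      # seconds into the interview
    signal: str           # e.g. "voice_tremor", "linguistic_shift"
    score: float          # model confidence, 0 to 1

def triage(anomalies: list[Anomaly], review_threshold: float = 0.7) -> list[Anomaly]:
    """Return only the anomalies worth a human interviewer's attention."""
    return [a for a in anomalies if a.score >= review_threshold]

flags = triage([
    Anomaly(42.0, "voice_tremor", 0.55),
    Anomaly(97.5, "linguistic_shift", 0.82),
])
for f in flags:
    # The machine surfaces the moment; the human supplies motive and context.
    print(f"Review {f.signal} at {f.timestamp:.0f}s (score {f.score:.2f})")
```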
Conclusion: Truth Beyond the Algorithm
Detecting deceit is as much an art as a science. Artificial intelligence may quantify micro-expressions and sentence structures, but it cannot yet grasp irony, shame or moral hesitation—the subtle ingredients of human truthfulness. The most effective lie detector remains the partnership between analytical precision and humane judgment: the machine's eye that never blinks, and the human heart that understands why someone might lie (or might tell the truth), and that can appreciate the effects of stress, fatigue, excitement and distress upon the articulation of sentences and the movements of the body.

