
Deep fake fraud: how can you trust anything on the internet?


NOTE: The photo above is a deepfake. Its reuse may amount to a criminal offence.


Monday 23 February 2026


In the early days of cinema, fakery was a craft—prosthetics, lighting, dubbing, the quick cut. Audiences were meant to believe, at least for the length of a scene, that a stuntman was an actor, that a studio set was Paris, that a voice recorded in a booth belonged naturally to the lips on screen. None of this was new, and none of it was chiefly criminal. It was storytelling, advertising, persuasion—sometimes propaganda—conducted with tools that were expensive, specialised and, crucially, scarce.


Deep fake fraud is what happens when the scarcity disappears.


The phrase ‘deepfake’ entered public life not through a university or a government laboratory but through the rough democracy of the internet. In late 2017 a Reddit user popularised the term and the practice—faces swapped into videos by enthusiasts with consumer hardware and borrowed code. What began as a transgressive hobby quickly became a template. The techniques improved, the tools became easier and, before long, the question ceased to be whether a convincing counterfeit could be made. The question became how cheaply, how quickly and at what scale. The Russian intelligence services were key in scaling up this kind of fraud, and using it to discredit or humiliate people. Now the technology is available to everyone.


This shift—from capability to industrialisation—is the real story of deep fake fraud.


The technical roots—before the word existed


Deep fakes did not arrive ex nihilo in 2017. Academic researchers had been exploring ways of reanimating faces and synchronising mouths to audio for decades, often for benign purposes such as dubbing, accessibility and film production.


A landmark example is a 1997 paper, ‘Video Rewrite’, which demonstrated how existing footage could be modified so that a person appeared to mouth words he or she did not speak—an early form of automated lip re-synthesis built from machine-learning techniques available at the time. The point was not fraud; it was a technical proof and, on its surface, a convenience for production.


Two further milestones arrived as computing power and data availability expanded.


In 2016, the ‘Face2Face’ line of research demonstrated real-time facial reenactment—altering expressions in video in a way that could be performed live, rather than as a painstaking post-production exercise. 


In 2017, ‘Synthesizing Obama’ showed how audio alone could drive plausible lip movements in video of a public figure—again presented as research, and again plainly dual-use. 


These projects mattered because they helped convert the manipulation of faces from a craft into a pipeline—data in, synthetic likeness out. Meanwhile the broader deep-learning revolution supplied a general-purpose engine for generating and refining images, voices and video. The mid-2010s rise of generative methods, widely discussed as a turning point in the deepfake story, accelerated what could be produced and reduced the skill required to produce it. 


By the time the word ‘deepfake’ became popular, the underlying direction of travel was already set—more realism, less friction, lower cost.


From mischief to monetisation


Early deepfakes were often circulated as spectacle—proof that the trick could be done. Much of the initial notoriety clustered around non-consensual sexual imagery, which established a grim pattern that persists—synthetic media used not merely to deceive but to degrade and coerce. Yet the criminal economy is pragmatic. Where attention goes, money follows; where money follows, organised methods arrive.


The essential insight for fraudsters is simple—identity checks are social rituals. They rely upon familiar cues: a face on a screen, a voice on a phone, a small set of verbal mannerisms. Corporate life, particularly since the pandemic years, has multiplied the number of decisions made at a distance. A transfer is authorised over a call; a supplier is ‘verified’ through an email thread; a senior executive ‘approves’ an urgent request while travelling. In this environment, synthetic media is not merely a novelty. It is an accelerant for social engineering.


Deep fake fraud did not replace older scams—rather it made them more persuasive.


Business email compromise already exploited hierarchy and urgency. Add a cloned voice and the instruction to pay becomes harder to resist. Romance fraud already exploited emotion. Add a short, convincing video clip and the emotional leverage increases. ‘Proof of life’ scams have long relied on fragmentary evidence; synthetic media makes such proof easier to fabricate, and therefore harder to trust. 


The decisive period—cheap tools, abundant data, remote trust


Three forces have brought deep fake fraud from curiosity to commonplace.


First, tool commoditisation. What required expertise now comes packaged—voice cloning, face swapping, lip synchronisation—assembled behind simple interfaces. The barrier to entry has fallen, and so the pool of potential perpetrators has expanded.


Secondly, the abundance of training material. Ordinary people now publish high-quality audio and video of themselves as a matter of routine—social networks, voice notes, video calls, public speaking clips, podcasts. The raw material for impersonation is everywhere, and it is often linked to names, job titles and personal networks.


Thirdly, institutional drift towards remote verification. Banks, firms and public bodies have moved processes online—customer onboarding, internal approvals, procurement, recruitment. Convenience has become a competitive advantage, and friction is treated as a defect. Deep fakes exploit that preference.


Law enforcement has been warning publicly that criminals are using artificial intelligence to craft convincing voice and video messages in support of fraud against individuals and businesses.  European policing assessments have likewise argued that artificial intelligence is accelerating organised crime and enabling synthetic media for fraud and blackmail, complicating enforcement and attribution. 


When such warnings sound abstract, one case has become emblematic. In February 2024, Hong Kong police described a scam in which an employee of a multinational company was induced, via a deepfake video conference, to make multiple transfers totalling HK$200 million—around US$25 million.  The significance was not only the sum. It was the method. The deception did not rely on a single forged email or a spoofed phone number; it relied on a synthetic meeting—an imitation of the social reality in which corporate decisions are normally made.


This is the future-facing danger of deep fakes—fraud that imitates process, not merely identity.


The present day—industrial scale, normalised suspicion


By early 2026, the argument is no longer about whether deep fake fraud exists but about its scale and its effect on trust. Reporting on recent research has described deepfake fraud as taking place on an ‘industrial scale’—cheap impersonation tools, rapid execution, a widening range of targets. A single photograph of a person can now be enough to generate an entire fake video. The phrase matters because it captures the uncomfortable truth—this is not an artisanal crime committed by a few technically gifted enthusiasts. It is becoming a production line.


Two features of current deep fake fraud are particularly corrosive.


The first is the shift from video to voice. Humans are surprisingly tolerant of imperfect video—glitches can be blamed on bandwidth, camera quality, compression. Audio, by contrast, is intimate and authoritative; a familiar voice carries a presumption of authenticity. If a finance officer believes the voice on the line is that of the chief executive, the decision may be made before doubt catches up.


The second is the emergence of what might be called plausible deniability attacks. When deception is cheap, the victim is not only defrauded but then burdened with proving he or she was deceived. Organisations can be embarrassed into silence—share prices, reputations, regulatory scrutiny. Meanwhile genuine recordings can be dismissed as synthetic. The result is a general fog in which truth becomes expensive.


This is why deep fake fraud is not merely a cybercrime trend. It is a stress test for the social infrastructure of trust.


Where this goes next—probable trajectories


We should expect deep fake fraud to evolve along three lines.


One is scale—more attempts, lower per-attempt value, automation of targeting. Fraudsters already behave like marketers, testing scripts and refining conversion rates. Synthetic media fits that logic.


Another is precision—tailored impersonations built from stolen personal data and public content. The most effective deception is not a perfect face but a believable context: a recognisable voice, the right internal jargon, a reference to a real project, the correct anxiety-inducing urgency.


The third is hybridisation—deep fakes paired with ordinary compromise. A stolen inbox provides real email history; synthetic voice supplies the authority; a spoofed caller ID supplies the final nudge. Each element covers the weaknesses of the others.


What can be done—without magical thinking


There is no single ‘deepfake detector’ that will restore innocence to the internet. Detection will improve, then evasion will follow. The more reliable defences are procedural and cultural—designed around the assumption that audio and video can lie.


Useful measures tend to look boring, which is precisely why they work:


  • Separate identity from instruction—no high-risk payment, credential reset or data disclosure should be authorised solely because a familiar face or voice requested it.

  • Build friction into high-risk workflows—call-backs to known numbers, two-person approval for large transfers, waiting periods for changes to supplier bank details.

  • Treat internal video calls as untrusted environments—particularly where money, access or sensitive information is involved.

  • Train staff using real examples—people do not learn caution from slogans; they learn from scenarios that resemble their daily work.
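The first two measures above—separating identity from instruction and building friction into high-risk workflows—can be sketched as a simple approval gate. This is a minimal illustration only, not any particular firm's system; the threshold, the registry of known numbers and all names are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical policy values -- illustrative only.
CALLBACK_REQUIRED_ABOVE = 10_000  # payments above this need an out-of-band call-back
KNOWN_NUMBERS = {"finance_director": "+44 20 7946 0000"}  # numbers held on file,
                                                          # never taken from the request

@dataclass
class PaymentRequest:
    requester: str          # who appears to be asking (the face or voice on the call)
    amount: float
    new_beneficiary: bool   # have the supplier's bank details recently changed?
    approvals: set = field(default_factory=set)
    callback_verified: bool = False

def verify_by_callback(req: PaymentRequest, number_dialled: str) -> None:
    """Mark the request verified only if staff dialled the number already on
    file -- never a number supplied by the caller or the request itself."""
    if number_dialled == KNOWN_NUMBERS.get(req.requester):
        req.callback_verified = True

def may_execute(req: PaymentRequest) -> bool:
    # A familiar face or voice alone authorises nothing: identity is
    # never sufficient to carry an instruction.
    if len(req.approvals) < 2:          # two-person approval for every transfer
        return False
    if req.amount > CALLBACK_REQUIRED_ABOVE and not req.callback_verified:
        return False                    # large sums need the call-back
    if req.new_beneficiary and not req.callback_verified:
        return False                    # changed bank details always need it
    return True

req = PaymentRequest(requester="finance_director", amount=250_000, new_beneficiary=True)
req.approvals.update({"alice", "bob"})
print(may_execute(req))                      # still blocked: no call-back yet
verify_by_callback(req, "+44 20 7946 0000")  # dialled the number on file
print(may_execute(req))                      # now permitted
```

The deliberate point of the design is that nothing the impersonated executive says or shows during the call can flip the outcome; only actions taken through independent channels (a second approver, a call-back to a pre-registered number) move the request forward.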


Insurers, regulators and law enforcement are already nudging firms towards these disciplines, in part because the losses are no longer hypothetical. 


Yet the broader societal question remains. As synthetic media becomes normal, we may re-learn habits that the early internet taught us to abandon—verifying by secondary channels, valuing in-person confirmation, distrusting the emotional immediacy of the screen. This will feel, at first, like cultural regression. In fact it may be adaptation.


Deep fake fraud is not, at root, about technology. It is about the exploitation of human shortcuts—our desire to believe what looks and sounds familiar, our willingness to obey authority, our fear of delaying an urgent request. The technology merely industrialises those instincts.


The old Chicago newsroom phrase was ‘if your mother says she loves you, check it out’. Deep fakes take that cynicism and make it a survival skill.


Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, and all the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either with the buttons at the top of this page or become a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved. Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please see the "About" page.
