Police use of facial recognition
- Matthew Parish

Tuesday 3 February 2026
The modern Police service is no longer defined only by boots on pavements, radios in patrol cars and the slow accumulation of witness statements. It is increasingly defined by data: by the ability to find a single person inside a crowd, or a single face inside millions of images. Facial recognition technology has become one of the most consequential instruments in that shift, because it offers something policing has always craved: speed, scale and an appearance of certainty.
Yet the same qualities that make facial recognition attractive to law enforcement make it dangerous to a free society. It can turn suspicion into infrastructure. It can convert public space into a perpetual identity checkpoint. The question is not whether the Police can use it. Across much of the world, she can. The question is whether she should, under what limits, and with what consequences.
How facial recognition enters policing
Facial recognition in policing usually falls into three broad categories.
First is retrospective identification. A face is extracted from an image or video, perhaps from a shop camera, a doorbell camera, a station camera, or a bystander’s mobile telephone, and it is compared against a database. This is often framed as the least controversial form, because the matching is done after an event, and it can be linked to a defined investigation. It is still powerful, however, because it changes the economics of detection: what once required hours of manual review and local knowledge can become an automated search.
Second is live facial recognition, sometimes called real time remote biometric identification. Cameras observe a public space and compare faces passing the lens to a watchlist. The emphasis here is not on solving yesterday’s crime but on intervening today, by locating a person sought by the Police. The Metropolitan Police Service describes this family of tools as a means to identify people who have broken the law and sets out public-facing explanations of how it is used. The UK Government has also published a public factsheet describing police use of facial recognition and the general mechanics involved.
Third is identity checking in the field. A Police officer photographs a person and queries databases, sometimes alongside other biometrics. In the United States, reporting and commentary around immigration enforcement has described an application used by US Immigration and Customs Enforcement to search large image holdings and link the result to records. This is not the same as live surveillance of a crowd, but it still relocates power from a magistrate’s warrant and a controlled interview room to an officer with a device in the street.
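All three patterns rest on the same mechanical core: a face is reduced to a numerical embedding and compared against stored embeddings, with a similarity score deciding whether the pair counts as a candidate match. The following is a minimal sketch of that comparison step; the gallery, the threshold and the embedding vectors are all hypothetical stand-ins, since real systems depend on a trained face-embedding model that is not shown here.

```python
# Minimal sketch of the matching step shared by retrospective search,
# live recognition and field identity checks. Embeddings here are plain
# vectors; a real deployment would produce them with a trained model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe: np.ndarray,
                   gallery: dict[str, np.ndarray],
                   threshold: float = 0.6,
                   top_k: int = 5) -> list[tuple[str, float]]:
    """Rank gallery identities against a probe face.

    Returns at most top_k (identity, score) pairs above the threshold.
    The output is a list of investigative leads, not identifications:
    each candidate still needs independent corroboration.
    """
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    candidates = [s for s in scored if s[1] >= threshold]
    return sorted(candidates, key=lambda s: s[1], reverse=True)[:top_k]
```

The threshold is the critical policy lever hidden inside the engineering: lower it and more true matches surface alongside more false ones; raise it and the reverse.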
These categories blur. A system introduced for one purpose can drift into another. A retrospective tool becomes live. A watchlist system becomes a general screening system. That drift is where governance fails.
Capacity across the world
Policing capacity is not evenly distributed, but facial recognition has a peculiar logic that pushes it outward. Once a jurisdiction has cameras, a database and a vendor, it has the basic ingredients.
In the United Kingdom, live facial recognition has been used by some forces, with public explanation and policy documents, but also amidst recurring legal and political dispute about proportionality and safeguards. The UK Parliament’s research service has noted that there is no single dedicated statute for live facial recognition, and that the framework instead arises from a mixture of common law, human rights, equality duties and data protection legislation. Individual forces, such as South Wales Police, have published their own explanations of facial recognition technology and its role in prevention and detection.
In the United States, capacity is fragmented but widespread. Federal, state and local agencies may have different rules, and some municipalities restrict or ban certain uses, but vendors and databases allow facial matching to proliferate through procurement and informal adoption. One emblematic case is Clearview AI, which has been widely reported as providing law enforcement with a vast image index and rapid search capability, alongside sustained civil liberties controversy.
In China, facial recognition capacity sits inside a broader environment of large-scale digital surveillance and state-directed data integration. That does not mean there are no rules. China has introduced regulations focused on the use of facial recognition in certain contexts, including requirements and restrictions aimed largely at commercial use, although the relationship between regulation and state practice remains central to the debate.
In the European Union, the centre of gravity is regulation, because it is attempting to draw a bright line around the most intrusive forms. The European Parliament has described the approach of the EU AI Act as generally prohibiting real time remote biometric identification in public spaces, with limited exceptions for serious cases, while post-incident uses may be allowed under tighter conditions and, in some descriptions, with court involvement. The European Commission’s own guidance similarly stresses prohibition with narrow exceptions and safeguards.
The capacity question, therefore, is not only technical. It is legal, administrative and cultural. The same algorithm deployed in different constitutional traditions becomes a different instrument.
The advantages the Police see
Facial recognition offers a genuine operational gain, and it is best to acknowledge this frankly.
It can identify suspects and wanted persons more quickly, particularly when the only available evidence is visual. This matters in serious crimes where speed affects public safety. It can locate missing persons or vulnerable individuals if used within a narrowly defined remit, because it turns passive camera footage into an active search mechanism.
It can also reduce reliance upon fallible human recognition. A witness who saw an offender briefly at night may be mistaken. An officer may not know every face in a city. Algorithms can, in theory, offer consistent comparison against a defined dataset.
It can also make policing cheaper in labour terms, which is one reason governments are tempted. When budgets are tight, the promise that software can do in minutes what teams did in days is politically alluring. The risk is that savings become the hidden driver of expanding surveillance, because efficiency rarely arrives without new appetite.
The dangers that follow it
The first danger is misidentification. Facial recognition systems can produce false matches. In policing, a false match is not a mere error in a spreadsheet. It can become a stop, a search, a detention, an arrest, or the beginning of a case that never should have existed. The harm is magnified when officers treat algorithmic output as evidence rather than as a lead requiring independent corroboration.
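The scale of the problem is easiest to see with simple arithmetic. The figures below are illustrative assumptions, not measured rates for any real system, but they show why even a highly accurate system produces mostly false alerts when the people actually sought are rare in the crowd:

```python
# Back-of-envelope arithmetic for why a match is a lead, not proof.
# Every number below is an illustrative assumption, not a measured rate.
faces_scanned = 50_000        # passers-by seen during one live deployment
on_watchlist = 5              # watchlisted people who actually walk past
true_positive_rate = 0.90     # chance a watchlisted person is flagged
false_alert_rate = 0.001      # chance an ordinary passer-by is flagged

true_alerts = on_watchlist * true_positive_rate                    # 4.5
false_alerts = (faces_scanned - on_watchlist) * false_alert_rate   # ~50.0
precision = true_alerts / (true_alerts + false_alerts)             # ~0.083

print(f"expected true alerts:  {true_alerts:.1f}")
print(f"expected false alerts: {false_alerts:.1f}")
print(f"chance a given alert is genuine: {precision:.1%}")
```

Under these assumptions, more than nine out of ten alerts point at the wrong person, which is precisely why a match must be treated as a lead requiring corroboration rather than as evidence.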
The second danger is unequal impact. If accuracy differs across demographic groups, or if enforcement attention is already uneven, facial recognition can intensify that inequality. Even where a system’s raw accuracy improves, the social pattern of its use may remain biased because the watchlists and deployment sites reflect historical policing priorities.
The third danger is chilling effect. A society in which attending a protest, walking into a mosque, entering a clinic, or meeting a journalist may be algorithmically logged is a society that quietly changes its behaviour. The reporting on surveillance tools used by immigration enforcement in the United States illustrates how identification technologies can be repurposed beyond the narrow frame of violent crime into broader social control, including activities adjacent to protest and political speech.
The fourth danger is function creep. A tool introduced to find a small number of dangerous fugitives can be steadily expanded: more cameras, bigger watchlists, looser authorisations, broader offence categories. The creep is often bureaucratic rather than conspiratorial. Each extension appears modest. The cumulative result is transformation.
The fifth danger is database legitimacy. Facial recognition is only as lawful and ethical as the images it relies upon. If a database is compiled from sources that individuals never consented to, or that were not collected for policing, then the matching process imports that illegitimacy into law enforcement decisions. The controversy around commercial vendors and scraped image repositories sits squarely here.
The sixth danger is accountability failure. Many systems operate as vendor products: opaque, proprietary and difficult to audit. If an arrest is justified by a match score, a defendant may struggle to test the system’s reliability in court. Without transparency, the Police asks the public to trust a machine she cannot independently scrutinise.
What sensible safeguards look like
If facial recognition is to be used at all, it should be treated as an exceptional power, closer to interception than to routine CCTV.
A credible framework typically includes, at minimum, the following elements (a sketch of how such limits might be encoded in software follows the list):
Clear statutory authority, not merely guidance, because the core question is about power in a democratic society.
Narrow purposes, limited to defined serious offences or safeguarding circumstances, and written in a way that prevents mission creep.
Independent prior authorisation for live deployments, with strict limits on time, place and watchlist composition, which aligns with the direction of travel visible in EU-level rules and guidance.
Mandatory auditing, including bias and accuracy testing in the context where the system is actually deployed, not only in laboratory conditions.
A requirement that facial recognition outputs are investigative leads, not grounds by themselves for arrest or sanction, with a duty to corroborate.
Transparency to the public: where it is used, how often, for what outcomes, with publication of error rates and governance documents, subject to operational security constraints.
Strong data retention limits and deletion rules, because a face is not a password. It is an inescapable identifier.
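To make the flavour of these limits concrete, here is one way they could be expressed as code rather than guidance, so that a live deployment cannot technically begin unless its authorisation satisfies them. The field names, offence categories and numeric limits are hypothetical illustrations, not drawn from any actual statute or force policy:

```python
# A sketch of "policy as code": deployment limits checked by the system
# itself before a live run can start. All names and limits are
# hypothetical illustrations, not any real legal framework.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class DeploymentAuthorisation:
    authorised_by_independent_body: bool
    offence_category: str         # must be on the permitted list
    duration: timedelta           # bounded in time
    watchlist_size: int           # bounded in scope
    retention: timedelta          # deadline for deleting non-match data

PERMITTED_OFFENCES = {"terrorism", "serious violence", "missing person"}

def may_deploy(auth: DeploymentAuthorisation) -> bool:
    """Refuse any live deployment that exceeds the encoded limits."""
    return (auth.authorised_by_independent_body
            and auth.offence_category in PERMITTED_OFFENCES
            and auth.duration <= timedelta(hours=12)
            and auth.watchlist_size <= 1_000
            and auth.retention <= timedelta(hours=24))
```

The point of encoding limits this way is not the particular numbers but the design choice: restraint becomes a precondition the system checks before it runs, rather than a policy document consulted after the fact.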
The deeper question
Facial recognition is attractive because it promises an answer to an old policing problem: how to identify the unknown. But it also poses an old constitutional problem: how to prevent the state from treating every citizen as a potential suspect.
In some countries, the Police will be constrained by law and courts. In others, she will be constrained mainly by politics, which is less reliable. And in others still, she may not be constrained at all.
The world is moving towards a future where the technical capacity exists almost everywhere. The decisive variable will be restraint. The Police does not merely enforce a society’s rules. In a quiet way, she teaches them. If facial recognition becomes normal, the lesson will be that anonymity in public was a historical interlude rather than a civic condition.
---
Annex: Three regulatory models for police use of facial recognition
This annex sketches three broad regulatory models that shape how the Police uses facial recognition in practice. They are not pure types, but they help to explain why the same technology produces very different policing cultures.
The European regulatory prohibition-with-exceptions model
In the European Union, the organising principle is restraint. Facial recognition, particularly live or real time remote biometric identification in public space, is treated as an exceptional intrusion rather than a routine investigative aid. The underlying assumption is that anonymity in public is a civil condition that should not be removed by default.
Under this model, the Police’s capacity is deliberately constrained. Live deployments are generally prohibited, subject only to narrow exceptions for defined serious threats, such as terrorism or the search for victims of serious crime. Even then, the direction of travel is towards prior authorisation, strict geographic and temporal limits, and narrow watchlists. Retrospective facial recognition is more likely to be permitted, but under data protection rules that emphasise necessity, proportionality and purpose limitation.
In day-to-day policing, this produces caution. Facial recognition is planned rather than improvised. It is used sparingly, often with legal review, and it generates paper trails. The practical consequence is that the Police retains the tool, but she cannot easily normalise it. Operational convenience yields to constitutional design. This approach reflects the broader regulatory culture of the European Union, in which rights protection is built into administrative machinery rather than left to after-the-fact correction.
The Anglo-American permissive but contested model
In the United Kingdom and the United States, the dominant model is permissive but fragmented. There is no single, comprehensive facial recognition statute in most jurisdictions. Instead, legality is assembled from existing police powers, human rights or constitutional principles, equality duties and data protection rules, combined with local policy.
This gives the Police comparatively wide operational room. Forces and agencies can adopt facial recognition systems, pilot them and expand their use, provided they can articulate a lawful basis and respond to challenge if one arises. Courts tend to intervene reactively, assessing specific deployments after they occur rather than licensing the technology in advance.
The practical effect is unevenness. Some forces develop relatively mature governance, with public notices, impact assessments and published policies. Others adopt tools quietly, driven by procurement opportunities or operational pressure. Political controversy and litigation become the main brakes, rather than statute.
In everyday policing, facial recognition under this model can drift. Retrospective use becomes common. Live trials recur. The same system may be acceptable in one city and prohibited in another. The Police gains flexibility, but at the cost of clarity and public confidence. This pattern is visible, with national variations, across the United Kingdom and the United States.
The state-integrated surveillance model
In China, facial recognition operates within a different logic altogether. The technology is embedded in a broader architecture of digital governance, public security and state-directed data integration. The central question is not whether facial recognition should exist, but how it should be optimised and regulated within a system that already assumes extensive state visibility.
Regulation does exist, including rules on commercial use and data handling, but these rules sit alongside, rather than above, security practice. The Police’s capacity is correspondingly extensive. Large-scale deployment, integration across databases and routine use in public space are structurally easier, because the legal and political environment does not treat public anonymity as a baseline condition.
In practical terms, this produces scale. Facial recognition can be used continuously rather than episodically, and it can support not only crime control but broader administrative objectives. The risk of misidentification or chilling effect is managed primarily as a technical or managerial issue, not as a constitutional one. This reflects the governing assumptions of the Chinese state rather than a simple absence of rules.
Comparative consequences
These models shape behaviour as much as they shape law. Where prohibition with exceptions dominates, the Police learns to ask first whether she should use facial recognition at all. Where permissive but contested rules prevail, she asks whether she can defend its use if challenged. Where state integration prevails, she asks how to deploy it most efficiently.
The technology is the same. The outcomes are not. The difference lies in what each system treats as the default: liberty, discretion, or visibility.