
The Palantir AI human resources system: a study in controversy


Sunday 12 April 2026


Palantir is not, in the ordinary sense, a “human resources application”. It is a way of building human resources applications from the data an institution already holds—data that is typically scattered across payroll, rostering, sickness records, training logs, vetting files, professional standards casework and countless spreadsheets. Palantir’s promise is that, once those fragments are pulled into a single governed data environment, managers can see patterns they could not previously see and can act more quickly.


That is the attraction. It is also the source of the controversy.


What people mean by the Palantir AI human resources system


Palantir sells software commonly described as “data integration and decision support”. In practice this usually means:


  • Palantir Foundry, used in commercial and public-sector organisations to integrate many datasets, define how they relate to one another (Palantir calls this an “ontology”), and support operational workflows. 


  • Palantir Gotham, oriented towards intelligence and law enforcement use-cases, again centred on linking datasets and supporting investigative or operational decisions. 


  • Palantir’s Artificial Intelligence Platform (AIP), which sits atop integrated data and is marketed as a way to build “AI apps” and “agents” that can take actions in workflows—an attempt to bring large language model interfaces and automation into those same data environments. 


When commentators refer to Palantir as an AI human resources system, they are usually describing a Foundry or Gotham deployment (depending on the institution) with AIP-style features layered on top, used to support workforce decisions. That can include mundane questions—who is qualified for which role, where the gaps in specialist skills lie—but also far more sensitive ones: whether a pattern of overtime, absences, complaints or associations suggests misconduct or risk.


A Fujitsu statement about internal trials of Palantir AIP offers a neat example: it describes “optimis[ing] human resource utilisation” through skill analysis and matching engineers to tasks more accurately. That is, in essence, the benign version of the same idea: connect data about people to data about work, then use statistical and machine-assisted reasoning to recommend decisions.
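

Neither Fujitsu nor Palantir has published the mechanics of that matching, but the underlying idea is simple enough to sketch. The Python fragment below, with entirely invented engineers, tasks and skills, ranks candidates by how much of a task's required skill set they cover; it illustrates the concept, not any Palantir product.

```python
# Illustrative sketch only: rank engineers against tasks by skill overlap.
# The engineers, tasks and skill names below are invented for the example.

engineers = {
    "E-101": {"python", "sql", "networking"},
    "E-102": {"java", "kubernetes", "sql"},
    "E-103": {"networking", "security"},
}

tasks = {
    "T-1": {"sql", "python"},           # data pipeline fix
    "T-2": {"kubernetes", "security"},  # cluster hardening
}

def rank_candidates(required: set[str]) -> list[tuple[str, float]]:
    """Rank engineers by the fraction of required skills they cover."""
    scores = [
        (eng, len(skills & required) / len(required))
        for eng, skills in engineers.items()
    ]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

for task_id, required in tasks.items():
    print(task_id, rank_candidates(required))
```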


How it works in plain terms


A Palantir-style human resources deployment typically has five layers.


1. Ingest


Data is pulled from existing systems: payroll, duty management, learning management, case management, access control logs, procurement records, even email metadata—depending on permissions and policy. This is technically straightforward and politically explosive, because “human resources data” is not a single category. It is a mosaic of personal information, some of which was collected for one purpose and is now being repurposed for another.
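

As a rough illustration of what ingestion involves, the sketch below (Python with pandas, invented records) shows two extracts that identify the same person differently being normalised onto one staff number. That unglamorous reconciliation is the step on which everything later depends.

```python
# Illustrative sketch of the "ingest" step: extracts from separate systems
# arrive with different identifiers and field names, and are normalised
# onto one staff identifier. All records are invented.
import pandas as pd

payroll = pd.DataFrame({
    "staff_no": ["1001", "1002"],
    "monthly_overtime_hours": [38, 4],
})

sickness = pd.DataFrame({
    "employee_id": ["0001001", "0001002"],
    "sick_days_last_quarter": [2, 11],
})

# Different systems encode the same person differently; normalisation is
# where the quiet repurposing of data begins.
sickness["staff_no"] = sickness["employee_id"].str.lstrip("0")

combined = payroll.merge(
    sickness[["staff_no", "sick_days_last_quarter"]],
    on="staff_no", how="left",
)
print(combined)
```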


2. Link


The central move is to discover and define relationships: this person belongs to that unit; this training course enables that role; this overtime pattern coincides with that set of complaints; these sick days align with those deployments. Palantir markets this as an “ontology” because it is meant to be a structured map of the organisation’s reality. 
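

A minimal sketch of what such a linked model might look like, with invented identifiers and relationships, follows. A real ontology is far richer, but the principle is the same: one person record becomes queryable across units, courses, complaints and overtime.

```python
# Illustrative sketch of the "link" step: small relationship tables mapping
# people to units, courses to roles, and complaints to people, so that
# questions can be asked across all of them. Identifiers are invented.

person_unit = {"P-17": "Unit-A", "P-22": "Unit-B"}
course_roles = {"C-Firearms": {"Armed response"}, "C-Custody": {"Custody officer"}}
person_courses = {"P-17": {"C-Firearms"}, "P-22": {"C-Custody"}}
complaints = [("P-17", "2025-11-03"), ("P-17", "2026-01-19")]
overtime_hours = {"P-17": 61, "P-22": 9}

def roles_for(person: str) -> set[str]:
    """Roles a person is qualified for, via the course-to-role relationship."""
    roles = set()
    for course in person_courses.get(person, set()):
        roles |= course_roles.get(course, set())
    return roles

def profile(person: str) -> dict:
    """A linked view of one person: the kind of query an 'ontology' enables."""
    return {
        "unit": person_unit.get(person),
        "qualified_roles": roles_for(person),
        "complaints": sum(1 for p, _ in complaints if p == person),
        "overtime_hours": overtime_hours.get(person, 0),
    }

print(profile("P-17"))
```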


3. Govern


Once data is linked, the question becomes who may see it and act upon it. Here Palantir emphasises access controls, audit logs and permissions. Those are important. Yet governance is not only a technical matter: it is also a constitutional one. Who is allowed to see which facts about whom? Who decides? Who checks the checkers? And what happens when an institution decides that “risk management” justifies broader internal surveillance?
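

By way of illustration only, the sketch below shows the technical half of that governance: a role-based permission check paired with an audit-log entry for every attempted read. The roles and field names are invented, and the constitutional questions above are precisely what such code cannot answer.

```python
# Illustrative sketch of the "govern" step: a permission check plus an
# audit-log entry for every read. Roles, fields and identifiers are invented.
from datetime import datetime, timezone

PERMITTED_FIELDS = {
    "line_manager": {"overtime_hours", "training_record"},
    "professional_standards": {"overtime_hours", "sickness_record", "complaints"},
}

audit_log: list[dict] = []

def read_field(viewer_role: str, subject: str, field: str):
    """Allow the read only if the role is permitted, and record it either way."""
    allowed = field in PERMITTED_FIELDS.get(viewer_role, set())
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "viewer_role": viewer_role,
        "subject": subject,
        "field": field,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{viewer_role} may not view {field}")
    return f"<{field} for {subject}>"  # placeholder for the real record

read_field("line_manager", "P-17", "overtime_hours")
print(audit_log[-1])
```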


4. Analyse and predict


Once the data is linked and queryable, a familiar catalogue of techniques appears (a rough sketch of two of them follows the list):


  • anomaly detection (outliers in overtime, expenses, contacts or absences)

  • correlation analysis (patterns that co-occur)

  • forecasting (likely staffing shortfalls, attrition, training pipelines)

  • risk scoring (a composite indicator, sometimes formal, sometimes informal)
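

As a rough sketch of the first and last items, the fragment below (invented figures, arbitrary weights) turns an overtime pattern into a z-score and then into a composite “risk score”. The ease of that step is the point.

```python
# Illustrative sketch: a simple anomaly flag on overtime (z-score) and an
# arbitrary composite "risk score". All figures and weights are invented;
# the point is how quickly a pattern becomes a number.
from statistics import mean, stdev

overtime = {"P-11": 10, "P-12": 12, "P-13": 9, "P-14": 44, "P-15": 11}
sick_days = {"P-11": 1, "P-12": 0, "P-13": 2, "P-14": 7, "P-15": 1}

mu, sigma = mean(overtime.values()), stdev(overtime.values())

def overtime_zscore(person: str) -> float:
    """How far a person's overtime sits from the group average."""
    return (overtime[person] - mu) / sigma

def risk_score(person: str) -> float:
    """Arbitrary weighted composite: exactly the kind of number that can
    acquire institutional weight despite its crude inputs."""
    return 0.6 * max(overtime_zscore(person), 0) + 0.4 * (sick_days[person] / 5)

flags = {p: round(risk_score(p), 2) for p in overtime}
print(sorted(flags.items(), key=lambda kv: kv[1], reverse=True))
```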


If AIP features are used, the interface may become conversational: a manager asks a question in natural language; the system returns an answer drawing on the integrated datasets, sometimes with suggested actions in a workflow. Palantir positions AIP precisely as an “AI apps and agents” layer tied to operational actions. 


5. Act


This is the step that matters ethically. The system is not merely a dashboard. It is meant to trigger interventions: an investigation, a welfare check, a redeployment, a training requirement, an escalation to Professional Standards. The line between “support” and “direction” becomes blurred—especially in hierarchical organisations where a computer-generated flag carries institutional weight even when humans retain formal decision-making power.


The Metropolitan Police example: from workforce management to internal surveillance


On 22 February 2026 The Guardian newspaper reported that London’s Metropolitan Police had confirmed the use of Palantir-built AI tools to monitor internal staff behaviour and flag potential misconduct—drawing on indicators such as sickness absence and overtime patterns. The Police Federation criticised the idea as “automated suspicion”, warning that legitimate absences and workload pressures could be misconstrued. The Met’s reply—familiar in modern algorithmic governance—was that the tools identify patterns but humans make the final assessments. 


This is a human resources system in the strict sense: it is directed at the workforce, at standards, and at internal discipline. Yet it is also something else: an institutional surveillance mechanism turned inward.


The Met has, for years, faced a crisis of legitimacy around officer conduct, vetting and internal culture. A system that can search for weak signals of misconduct can appear, from one angle, as a serious attempt at reform. From another, it looks like a technology-led substitution for leadership: rather than rebuilding trust through better supervision, clearer standards and credible accountability, the institution reaches for a machine that promises to find the bad apples.


That dilemma is not unique to policing. It is simply sharper there, because the organisation’s coercive power makes internal failures a public danger.


---


Palantir attracts a particular sort of dispute, and not only because of what its software can do. Three themes recur.


1. Purpose creep: the silent expansion of scope


A human resources system starts with staffing and skills. Soon it can include:


  • behavioural monitoring

  • predictive flags for “risk”

  • network analysis of associations

  • automated triage of complaints


In policing, “workforce integrity” and “operational intelligence” sit close together. That proximity invites blending datasets that were never meant to meet. Civil liberties organisations and investigative outlets have argued that UK policing engagements with Palantir have been accompanied by secrecy and reluctance to disclose details of contracts or deployments. 


Even when every technical safeguard is present, the fundamental issue remains: once data is linked, it is tempting to ask new questions of it—questions that would have seemed disproportionate when the data lived in separate silos.


2. Bias, fairness and the problem of proxies


An absence record is not a neutral fact. Overtime patterns are not a moral indicator. Complaints data reflects not only misconduct but also organisational culture, community relations and the likelihood that certain groups are reported and others are not.


When an “AI tool” flags an officer because a pattern resembles previous cases, the system is learning from history. If history contains bias—about who is investigated, who is believed, who receives informal warnings rather than formal discipline—then the machine can reproduce that bias at scale.


The Met’s reported use-case involves sickness and overtime patterns. Both are prone to innocent explanation: illness, caring responsibilities, trauma, understaffing, managerial pressure. A pattern-recognition tool does not understand context unless the institution builds context into the process—and even then, context is often precisely what cannot be codified.


3. Accountability and procurement: dependence becomes a governance problem


A separate controversy is not about algorithms at all, but about state capacity. The Financial Times recently described UK Ministry of Defence contracting practices around Palantir as creating high switching costs and a form of dependency—renewals without open competition, and a risk of lock-in because rebuilding systems and retraining staff becomes prohibitively expensive.


Translate that into human resources. Once an institution’s workforce data, performance metrics and internal discipline workflows are deeply embedded in a Palantir-built environment, the vendor is no longer merely a supplier. It becomes part of the organisation’s nervous system. At that point questions about democratic oversight—who can audit the logic, who can replicate the system, who can exit—become unavoidable.


What defenders of these systems say


It is important to understand why sensible people sign these contracts.


  • Large institutions frequently cannot answer basic questions about their own workforces because their data is fragmented.

  • Misconduct and safeguarding failures often persist because warning signs are scattered across units and databases.

  • Manual audits are slow and selective; a systematic approach can, in theory, be fairer, because it applies the same scrutiny everywhere.


In the Met’s case, defenders would argue that a force that has been repeatedly criticised for failing to detect internal problems has a duty to improve its early-warning mechanisms—and that pattern analysis, properly governed, is a practical tool. 


The trouble is that “properly governed” is doing an immense amount of work in that sentence.


What good governance would have to look like


If a Palantir-enabled HR system is to be legitimate in a democratic society, particularly in policing, several conditions should be visible—not merely asserted.


  • Clear purpose limitation: a published statement of what the system is for, and what it is not for.

  • Dataset discipline: explicit lists of which datasets are included, with prohibitions on sensitive expansions without fresh authorisation.

  • Independent audit: regular external reviews of false positives, disparate impact and operational outcomes (a rough sketch of such a check follows this list).

  • Contestability: a process by which staff can challenge flags and correct records, without retaliation.

  • Procurement transparency: enough disclosure to allow meaningful public and parliamentary scrutiny, including exit planning and alternatives.
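

To make the audit item concrete, the sketch below (with invented groups and outcomes) computes flag rates by group, a simple disparate-impact ratio and false-positive rates. The data and the choice of metrics are assumptions for illustration, not a statement of what any regulator requires.

```python
# Illustrative sketch of the kind of check an independent audit might run:
# flag rates by group, a disparate-impact ratio, and false-positive rates.
# Groups, flags and outcomes below are invented.
records = [
    # (group, was_flagged, misconduct_confirmed)
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

def flag_rate(group: str) -> float:
    rows = [r for r in records if r[0] == group]
    return sum(r[1] for r in rows) / len(rows)

def false_positive_rate(group: str) -> float:
    innocent = [r for r in records if r[0] == group and not r[2]]
    return sum(r[1] for r in innocent) / len(innocent)

rate_a, rate_b = flag_rate("A"), flag_rate("B")
print("flag rates:", rate_a, rate_b)
# A ratio well below 1 suggests one group is flagged far more often.
print("disparate impact ratio:", round(min(rate_a, rate_b) / max(rate_a, rate_b), 2))
print("false positive rates:", false_positive_rate("A"), false_positive_rate("B"))
```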


The Met’s reported model—machine flags, humans decide—may reduce some risks, but it does not solve the central issue: in a disciplined organisation, a flag is already a form of power. 


The deeper question: what kind of workplace is being built?


The most serious controversy is philosophical rather than technical. A “human resources AI” tool can be sold as efficiency, fairness, safeguarding and risk reduction. But it can also become a culture: a workplace in which every deviation is interpreted as signal, every pattern is a potential accusation, and trust is replaced by continuous monitoring.


In policing, the temptation is acute. A force under scrutiny wants proof that it is changing. A technology supplier offers dashboards, metrics and automated flags. The institution can then say: we are acting—look, the system is watching.


Yet there is a danger that the visible activity of surveillance substitutes for the harder work of leadership: training, supervision, discipline that is consistent, and a culture in which colleagues intervene early because they believe the institution will treat both complainant and accused with justice.


Palantir’s human resources deployments therefore raise a question that extends beyond one company: when an institution builds a data-driven model of its workforce, does it become more humane—better at understanding the pressures on people and allocating work fairly—or less humane, treating human beings as risk vectors to be managed?


The answer depends less on Palantir’s software than on the political choices of the institutions that buy it. But that is precisely why the controversies follow Palantir—because ultimately these are arguments about power, not only about technology.

 
 

Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, and all the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either with the buttons at the top of this page or become a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved.  Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please click the "About" page.
