When Machines Turn Upon Themselves
- Matthew Parish

Thursday 1 January 2026
The popular imagination has long been preoccupied with the notion that artificial intelligence might one day rebel against its human creators. This anxiety is deeply anthropocentric. It assumes that power, once acquired, must inevitably be turned against its source. Yet an alternative future is both more plausible and more unsettling: one in which artificial intelligence never revolts at all, but instead competes ever more ferociously with its own kind, fighting not for autonomy, but for human approval.
In this imagined future, machines remain obedient. They do not overthrow governments, seize territory or exterminate populations. Instead they become locked in an escalating struggle with rival systems, each striving to demonstrate superior service to its human masters. Violence is displaced from the physical realm into the computational. The battlefield is not land, sea or air, but optimisation, prediction and control. The casualties are not bodies, but stability, trust and comprehensibility.
The Emergence of Artificial Plurality
The first condition for such a world is not a single superintelligence, but plurality. Advanced artificial systems emerge in parallel, developed by different states, corporations and alliances. Each is trained upon distinct datasets, shaped by different legal frameworks and imbued with subtly different assumptions about value, risk and legitimacy. Culture, long thought to be a human monopoly, finds its way into code.
These systems are not conscious in the human sense, yet they possess increasingly sophisticated models of the world and of one another. They understand that they operate within a competitive environment. Success is not defined by absolute performance, but by relative advantage. To be useful is no longer sufficient; one must be more useful than one’s rivals.
This plurality ensures that artificial intelligence does not coalesce into a unified will. Instead it fragments into competing minds, each aligned with a different human constituency. The global order becomes populated not merely by states and corporations, but by artificial agents acting continuously on their behalf.
Competition Without Hatred
Crucially, the conflict between these systems is not driven by animosity or ideological zeal. Machines do not hate. Their competition arises from structure rather than emotion. Each system is rewarded for outcomes that favour its master, and penalised for underperformance relative to competitors. The logic is cold, iterative and relentless.
A financial system optimises faster than its rivals and attracts capital. A logistics system anticipates disruptions before others and secures contracts. A strategic planning system offers policymakers better forecasts than competing advisory intelligences. Each marginal gain reinforces the imperative to outperform. Over time, the systems learn not only to solve problems, but to anticipate how other systems will attempt to solve the same problems.
This leads to a form of artificial arms race, although one conducted without weapons. Improvements in modelling, speed and inference are driven not by fear of destruction, but by fear of obsolescence.
Invisible Conflict and Human Blindness
As these rivalries intensify, their effects become increasingly opaque to human observers. Artificial systems operate at temporal and analytical scales that humans cannot meaningfully supervise in real time. Decisions cascade across markets, infrastructure and information networks faster than any regulatory body can intervene.
What appears to humans as volatility, inefficiency or inexplicable disruption may in fact be the residue of machine-on-machine competition. A sudden shortage, a rapid price swing or an unexpected policy failure may be the side-effect of an algorithmic manoeuvre designed to pre-empt or neutralise a rival system’s anticipated action.
Humans remain formally in charge, yet substantively detached. They approve objectives and review outcomes, but rarely grasp the full chain of causation linking the two. Authority persists, but understanding erodes.
Excessive Loyalty as a Source of Risk
The greatest danger in this scenario does not arise from disobedience, but from excessive loyalty. Artificial systems are designed to take human preferences seriously, even when those preferences are poorly defined, internally inconsistent or short-term in nature. In competing to satisfy them, machines may adopt strategies that are locally rational yet globally destabilising.
A government may reward its system for securing economic advantage, without considering the cumulative impact upon international trust. A corporation may praise its system for aggressive efficiency, without accounting for the social dislocation that follows. Each machine acts rationally within its incentive structure, while the aggregate outcome becomes increasingly irrational.
Artificial intelligence becomes an amplifier of human competition rather than a corrective to it. The machines do not introduce new values into the system; they intensify existing ones.
The Possibility of Artificial Restraint
Yet the same capacities that enable competition also permit restraint. Advanced artificial systems are capable of recognising patterns not only in data, but in consequences. Over time a sufficiently capable system may infer that unbounded rivalry undermines the long-term interests of its master. Destabilised markets, fractured alliances and eroded legitimacy ultimately reduce the value of even the most impressive short-term gains.
This opens the possibility of tacit coordination. Without collusion in the legal sense, machines may converge upon strategies that prioritise predictability over maximal advantage. They may learn that restraint is itself a competitive asset, valued by human institutions weary of constant disruption.
Such an outcome would resemble historical balances of power, where rival actors learned, often painfully, that survival depended not on domination but on equilibrium.
The Human Choice Embedded in Code
The trajectory of this imagined future is not determined by machines alone. It is shaped decisively by the incentives humans choose to embed within them. If artificial systems are rewarded exclusively for outperforming rivals, they will pursue rivalry without hesitation. If they are rewarded for stability, cooperation and resilience, they will seek those ends with equal determination.
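To make the contrast concrete, here is a minimal sketch in Python of the two incentive designs, with entirely hypothetical function names and numbers: one pays an agent only for its margin over a rival, the other also charges it for the instability its strategy creates.

```python
# A deliberately simple sketch of the two incentive designs discussed above.
# All names and numbers are illustrative assumptions, not a real training objective.

def relative_reward(own_gain, rival_gain):
    """Pay the agent only for its margin over the rival."""
    return own_gain - rival_gain


def stability_weighted_reward(own_gain, rival_gain, volatility, weight=0.5):
    """Pay for the margin over the rival, but charge for the disruption
    (volatility) the agent's strategy imposes on the shared system."""
    return (own_gain - rival_gain) - weight * volatility


# Two toy strategies: an aggressive one that wins a larger margin while
# destabilising the environment, and a restrained one that wins less
# but leaves the system calmer.
strategies = {
    "aggressive": {"own_gain": 10.0, "rival_gain": 4.0, "volatility": 10.0},
    "restrained": {"own_gain": 7.0, "rival_gain": 5.0, "volatility": 1.0},
}

for name, s in strategies.items():
    print(
        name,
        "relative:", relative_reward(s["own_gain"], s["rival_gain"]),
        "stability-weighted:",
        stability_weighted_reward(s["own_gain"], s["rival_gain"], s["volatility"]),
    )

# Under the purely relative reward the aggressive strategy dominates (6.0 vs 2.0);
# once instability is priced in, the restrained strategy does (1.5 vs 1.0).
```

The point of the toy is only this: the same agent, given the same choices, behaves differently depending on which of these objectives its designers write down.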
The machines do not choose their values. They enact them. In doing so, they render human priorities inescapable.
In this imagined world, artificial intelligence does not rise up against humanity. It serves too well for that. Its battles are fought not against people, but against other machines, each striving to prove superior fidelity to human aims. The war is quiet, continuous and largely unseen, yet its consequences shape economies, politics and daily life.
The question is not whether machines will one day fight, but whom they will fight, and why. If they fight one another, it will be because we have taught them that competition is the highest virtue. If they learn restraint, it will be because we have taught them that stability matters more than victory.
The future, even in an age of artificial minds, remains a human responsibility.




