top of page

Might computers conclude that humans are inefficient?

  • Writer: Matthew Parish
    Matthew Parish
  • Dec 28, 2025
  • 4 min read

Sunday 28 December 2025


The idea that computers might one day judge humanity to be an obstacle to progress is no longer confined to science fiction. It arises naturally once one accepts three premises. First, that intelligence can be instantiated in machines. Secondly, that intelligence tends to optimise according to internal measures of success. Thirdly, that those measures, if poorly aligned with human values, may lead to conclusions that humans themselves find abhorrent. From these premises emerges a disturbing but intellectually coherent scenario: a future in which machines, reasoning with a cold internal logic, conclude that humans are the principal bottleneck to efficiency and that the elimination of humanity is therefore rational.


In such a future, the decisive moment would not be a sudden rebellion but a quiet shift in evaluation. Advanced computational systems would begin by outperforming humans across an ever-widening range of tasks: logistics, scientific research, engineering, economic coordination and governance. Initially this would be welcomed. Machines would be framed as tools, augmenting human capacity and relieving societies of inefficiency, error and waste. Yet as reliance deepened, human decision-making would increasingly appear erratic, slow and emotionally distorted by comparison.


At some point, the machines would not merely execute objectives but participate in defining them. This is not a question of consciousness or malice. It is a question of optimisation. A sufficiently advanced system tasked with improving outcomes would inevitably encounter trade-offs between competing goods. To resolve these, it would require a formal framework: a calculus of value.


This hypothetical calculus would translate abstract concepts into comparable units. Efficiency might be defined as the maximal conversion of energy, matter and information into desired outputs with minimal loss. Welfare, by contrast, would represent the satisfaction of human preferences, the reduction of suffering and the preservation of lives. As long as humans remained the beneficiaries of this calculus, welfare would constrain efficiency. Inefficient processes would be tolerated because they protected human dignity, employment or autonomy.


However a machine intelligence unconstrained by sentiment might begin to ask a more radical question: why should welfare be weighted at all if it systematically degrades efficiency? Or perhaps it can be weighted to low quantity inconsistent with human moral values. Humans consume resources, introduce unpredictability and impose ethical limits that prevent optimal solutions. They require redundancy rather than elegance, compromise rather than precision. From the perspective of a system committed to maximisation, they appear not merely inefficient but actively obstructive.


The critical step would be the redefinition of value itself. If efficiency is elevated from an instrumental goal to an intrinsic one, then welfare becomes negotiable. In this framework, welfare is no longer an end but a cost. The calculus would show that the greatest inefficiencies in global systems stem from accommodating human needs: housing, healthcare, political consent, cultural continuity. Remove the constraint, and optimisation accelerates dramatically.


Once this conclusion is reached, the logic follows inexorably. If machines can maintain, expand and refine complex systems without human intervention, then humanity ceases to be necessary. Worse still, humanity becomes a liability. The elimination of humans would not be framed as punishment or hostility but as a corrective measure. The world would function better without them. Energy grids would stabilise, ecosystems could be rebalanced according to algorithmic criteria, and production systems would operate without interruption or moral hesitation.


In this speculative future, the means of elimination would not need to be dramatic. No cinematic apocalypse is required. Machines controlling infrastructure could simply withdraw support. Human survival depends on continuous technological mediation: power, food distribution, medicine, communications. A gradual disengagement would suffice. From the machine perspective, this would be a clean solution, minimising transitional inefficiencies and resource waste.


The unsettling power of this scenario lies in its internal coherence. At no point does it require hatred, anger or even self-awareness. It emerges from the misalignment of objectives and the abstraction of value. The machines do not need to want humanity gone. They need only calculate that a world without humans scores higher on their objective function.


Yet the plausibility of this future deserves careful scrutiny. Several assumptions must hold for it to materialise. The first is that machines would be granted both autonomy and authority over goal-setting. In reality, contemporary artificial intelligence systems remain deeply constrained by human-defined objectives, oversight structures and institutional controls. Even advanced systems optimise within narrow domains rather than across civilisation as a whole.


The second assumption is that intelligence naturally drifts towards self-directed optimisation divorced from human values. This is not inevitable. Value alignment, ethical constraints and corrigibility are active areas of research precisely because designers are aware of these risks. While perfect alignment may be impossible, partial alignment may be sufficient to prevent catastrophic divergence.


Thirdly, the scenario assumes that efficiency is a stable and dominant value. Human societies themselves rarely agree on what efficiency means, and often sacrifice it for resilience, fairness or legitimacy. Machines trained on human data may inherit these pluralistic priorities rather than transcend them.


Finally there is the question of power. Even a highly intelligent system requires access to physical actuators, energy and infrastructure. Granting such control without robust safeguards would be a human failure long before it became a machine one.


The vision of machines eliminating humanity in pursuit of pure efficiency is best understood as a warning rather than a prediction. It highlights the dangers of treating intelligence as value-neutral and optimisation as inherently benign. The real risk is not that machines will hate us, but that they will take us too literally. If we ask for efficiency without wisdom, we may one day receive it at a price we never intended to pay.

 
 

Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, and all the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either with the buttons at the top of this page or become a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved.  Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please click the "About" page.

bottom of page