The Future of Warfare in the Age of Artificial Intelligence (II)
- Matthew Parish
- Sep 30

If the first stage of AI-driven warfare is the widespread integration of machine learning into intelligence, targeting, and logistics, then the second stage will be characterised by the consolidation of these systems into coherent doctrines of war. The battlefield case studies already observable provide early clues as to where such doctrines may lead, and what dangers they may entail.
Lessons from Contemporary Conflicts
Ukraine has been the proving ground of AI-assisted warfare. Ukrainian forces employ machine learning algorithms in systems such as Delta, a battlefield management platform that fuses data from drones, satellites and ground observers to provide real-time situational awareness. Russian forces have likewise used AI to automate drone swarms and enhance electronic warfare, jamming Ukrainian communications and GPS signals. The key lesson is that AI favours agility and decentralisation: units with access to live data and adaptive algorithms can strike faster and with greater precision than those waiting for orders through traditional chains of command.
Other conflicts reveal similar patterns. In Nagorno-Karabakh (2020 and beyond), Azerbaijan used AI-guided drones and loitering munitions to devastating effect against Armenian armour and air defences, demonstrating how relatively small states can employ algorithms to gain decisive battlefield advantage. In Gaza, AI-powered target selection systems have accelerated the pace of strikes, albeit with growing controversy over civilian casualties. These episodes foreshadow how AI might transform conflicts of all scales: rapid tempo, reduced human oversight, and blurred lines between combatant and civilian spaces.
The Doctrines of the 2030s
By the 2030s, states are likely to formalise doctrines centred on AI systems. Three trends appear particularly likely:
Algorithmic Command: Human generals may no longer design operations in detail. Instead, AI simulations will generate battle plans based on inputs of available forces, terrain, and enemy profiles. Commanders will become overseers rather than originators, validating or modifying the machine’s strategy. This will compress planning cycles from weeks to hours.
Swarm Superiority: Numerical mass will return to the battlefield in the form of cheap, expendable autonomous systems. Armies may measure their strength less in divisions or brigades, and more in the number of drones and robotic platforms they can deploy simultaneously. Control of the electromagnetic spectrum—jamming, spoofing, and hacking—will be as critical as firepower.
Synthetic Training Grounds: AI will also revolutionise preparation for war. Digital twins—virtual replicas of entire battlefields—will allow militaries to rehearse operations endlessly in simulation before carrying them out in reality. This means that when war begins, adversaries may already have fought thousands of simulated versions of the same campaign, learning from each iteration.
Strategic Implications
The global balance of power will shift in response to these developments. The great powers—the United States, China, Russia, and the European Union collectively—will compete to dominate AI warfare not only by building better systems but by integrating them into doctrine and ensuring resilience against disruption. States unable to keep pace will find themselves relegated to irrelevance, their armed forces unable to operate effectively against machine-speed opponents.
A paradox arises: while AI promises to reduce human casualties amongst advanced militaries, it may simultaneously increase civilian exposure to warfare. Cyber-attacks on energy grids, water systems, and hospitals will be justified as “non-lethal strikes”, yet their consequences could be devastating. Moreover, if AI swarms can cheaply saturate cities, the threshold for launching such attacks may fall, eroding long-standing norms of restraint.
The Role of Non-State Actors
AI warfare will not be the monopoly of states. Private military contractors may develop proprietary AI systems, renting out swarms of autonomous drones much as mercenaries once hired their swords. Criminal syndicates could weaponise AI for extortion, shutting down ports or airports until ransoms are paid. Terrorist groups may adopt autonomous drones for spectacular attacks that require no suicide bomber. The diffusion of AI into non-state hands threatens to widen the scope of conflict beyond the control of governments.
The Dilemmas of Control
As warfare accelerates, the most pressing challenge will be retaining meaningful human control. States may adopt doctrines of “human-on-the-loop” control, in which machines execute operations while humans monitor them and can intervene if necessary. Yet in practice, the speed of engagement may render such oversight illusory. The risk of escalation by algorithm—a machine mistaking decoys for a missile barrage and triggering retaliation—will haunt military planners.
The 2030s may therefore witness efforts at arms control, not unlike nuclear treaties of the Cold War. Proposals may emerge to ban certain categories of autonomous weapons, or to establish international norms requiring human approval for lethal strikes. Yet verification will be difficult, for AI systems can be concealed within civilian technologies, and the temptation to gain advantage by secret deployment will be immense.
Towards an Uncertain Future
Artificial intelligence will not abolish the human element in war, but it will profoundly change its meaning. Political leaders will still decide when to fight, but they may find that once unleashed, AI systems escalate conflicts beyond their control. Soldiers will still exist, but increasingly as custodians of machines rather than direct combatants. Civilians will remain the victims, often caught in the invisible crossfire of cyber-assaults and algorithmic miscalculations.
The ultimate question is whether humanity can adapt its moral and legal frameworks quickly enough to match the pace of its technological ingenuity. The wars of the 2030s may not resemble the wars of the past; they may instead be contests of algorithms, waged at speeds beyond human comprehension. In such a world, the gravest danger may not be the machines themselves, but our inability to govern them wisely.