Artificial Intelligence in Warfare: Lessons from the Ukrainian Battlefield

  • Writer: Matthew Parish
  • 3 days ago
  • 4 min read


As the full-scale war in Ukraine enters its third year, it has become a proving ground not only for conventional military tactics, but also for emerging technologies — none more consequential than artificial intelligence (AI). AI is a poorly defined term; at its most basic it refers simply to the latest methods by which machines can learn, for example by drawing inferences from data sets. Even the earliest personal computers of the mid-1970s could do this in rudimentary form, although the degree to which machines can learn has increased exponentially year on year since then.


Warfare used to be a purely human and mechanical affair, but since the invention of computers it has become steadily less so. In the contemporary era, from target acquisition and battlefield surveillance to logistics optimisation and information warfare, AI is reshaping the conduct and tempo of combat in ways that echo, and at times surpass, prior revolutions in military affairs.


Here we explore how AI is being deployed in the war in Ukraine, what it reveals about the future of warfare, and the ethical and strategic implications for democracies confronting technologically enabled authoritarian adversaries.


Intelligence Amplified: Real-Time Targeting and Situational Awareness


One of the most transformative uses of AI in Ukraine has been in real-time intelligence processing. Ukrainian and allied forces, faced with a numerically superior enemy, have turned to AI-enhanced tools to fuse drone feeds, satellite imagery, mobile phone metadata and open-source intelligence (OSINT) into a coherent battlefield picture.


Key developments include:


  • AI-powered drone swarms that autonomously scan terrain for enemy positions or vehicle movements. These are often guided by machine vision algorithms trained on vast image datasets of Russian equipment.


  • Edge computing (a model that involves decentralising data processing towards the source of the data rather than in a central system), deployed in drones and mobile devices, allowing those devices to identify targets and transmit coordinates for artillery or HIMARS strikes within seconds — even without permanent internet connectivity.


  • Platforms from Palantir (a data analytics and AI software company of global repute) have reportedly been used by Ukraine to integrate data from multiple sensors, offering decision-makers a dynamic picture of front-line activity and probable enemy intentions.
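The fusion step behind these tools can be illustrated in miniature. The sketch below is a toy, not any system fielded in Ukraine: real fusion engines use trained models and far richer sensor data. It simply shows the basic pattern of grouping detections from different sources into map cells and combining their confidences (here with a simple noisy-OR heuristic, an assumption for illustration).

```python
from collections import defaultdict

def fuse_detections(detections):
    """Group sensor detections into ~1 km grid cells and combine confidences.

    Each detection is (lat, lon, source, confidence). Confidences from
    independent sources are combined as 1 - prod(1 - c), so corroboration
    by a second sensor raises overall confidence in that cell.
    """
    cells = defaultdict(list)
    for lat, lon, source, conf in detections:
        # 0.01 degrees is roughly 1 km; crude but adequate for a sketch
        key = (round(lat / 0.01), round(lon / 0.01))
        cells[key].append((source, conf))

    picture = {}
    for key, hits in cells.items():
        p_miss = 1.0
        for _, conf in hits:
            p_miss *= (1.0 - conf)
        picture[key] = {"confidence": 1.0 - p_miss,
                        "sources": sorted({s for s, _ in hits})}
    return picture
```

Two nearby detections from a drone and a satellite fall into the same cell and reinforce each other; a lone open-source report elsewhere stays at its original confidence.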


Russia too is employing AI, although with greater emphasis on centralised data processing and battlefield automation — for example, in automated counter-battery radar analysis (using radar to track projectiles fired by multiple weapons or from multiple positions) or cyber-intrusion algorithms (seeking unauthorised access to Ukrainian military computer systems).


Logistics, Repair, and Resupply Optimisation


AI is not only influencing the battlefield directly, but also the logistical spine that keeps armies moving. Ukraine, dealing with the vast geography of her defence lines, has leaned on AI-assisted systems to:


  • Optimise vehicle repair cycles, using predictive analytics to identify failing parts before breakdowns.


  • Coordinate delivery routes for ammunition and food, minimising exposure to drone reconnaissance or artillery fire.


  • Manage casualty evacuation, with algorithms predicting the safest and fastest medevac corridors under current fire conditions.
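The route-planning problems in this list share a common core: finding the path that minimises a blend of distance and danger. A minimal sketch, assuming a hand-built road graph with invented exposure scores (none of this reflects actual Ukrainian software), is classic Dijkstra search with risk-weighted edge costs:

```python
import heapq

def safest_route(graph, start, goal):
    """Dijkstra over a road graph whose edge costs mix distance and
    estimated exposure, so the 'shortest' path is also the least risky.

    graph: {node: [(neighbour, km, risk)]}; edge cost = km * (1 + risk).
    Returns (total_cost, path), or (inf, []) if the goal is unreachable.
    """
    pq = [(0.0, start, [start])]
    settled = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in settled and settled[node] <= cost:
            continue  # already reached this node more cheaply
        settled[node] = cost
        for nxt, km, risk in graph.get(node, []):
            heapq.heappush(pq, (cost + km * (1 + risk), nxt, path + [nxt]))
    return float("inf"), []
```

With a short, heavily observed road scored at high risk and a longer but quiet detour scored at zero, the planner picks the detour even though it covers more kilometres — the same trade-off a convoy commander makes by hand.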


By contrast, Russia’s heavily centralised and often rigid logistics systems appear slower to adopt flexible, AI-assisted models — though they have made gains in areas like supply route protection using electronic warfare (EW) and automated drones.


AI on the Frontlines: Autonomous Weapons and Loitering Munitions


Perhaps the most controversial development has been the use of semi-autonomous or autonomous weapon systems, particularly so-called loitering munitions — drones that wait over a battlefield, scan for targets, and attack when a match is detected.


Examples in Ukraine include:


  • AI-assisted targeting in FPV (First-Person View) drones, where onboard image recognition identifies tanks or artillery tubes.


  • Autonomous flight path generation, where drones avoid EW zones or known air defences using AI-generated risk maps.


  • Turret stabilisation systems that allow mounted weapons to automatically lock on and track targets, especially in urban fighting or vehicle-based combat.
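The "AI-generated risk map" idea in the second bullet is, at heart, a path-search problem over a grid of threat estimates. The following is an illustrative sketch only, with an invented grid and threshold: A* search that treats high-risk cells (say, known EW or air-defence coverage) as no-fly and makes the remaining cells more expensive the riskier they are.

```python
from heapq import heappush, heappop

def plan_path(risk, start, goal, max_risk=0.7):
    """A* over a 2-D risk grid. Cells with risk above max_risk are
    excluded outright; traversable cells cost 1 + risk, so the planner
    prefers low-risk corridors even when they are longer.
    """
    rows, cols = len(risk), len(risk[0])

    def h(cell):
        # Manhattan distance: admissible since every step costs >= 1
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0.0, start, [start])]
    visited = set()
    while open_set:
        _, g, cell, path = heappop(open_set)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and risk[nr][nc] <= max_risk:
                ng = g + 1 + risk[nr][nc]
                heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None  # no route avoids the excluded zones
```

On a grid with a high-risk column between launch point and target, the planner routes around it rather than through it — the behaviour the article describes for drones skirting EW zones.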


Though current systems still require human authorisation to engage — the so-called “human-in-the-loop” model — the pace at which targeting and firing decisions are made is shortening. Debates are intensifying over whether “human-on-the-loop” or even fully autonomous kill chains are already being field-tested.


Cyber and Cognitive Warfare: Information and Perception as Weapons


AI is a force multiplier in the information domain as well. Both Ukraine and Russia use AI tools to:


  • Generate deepfakes — Russia in particular has used these to sow disinformation (e.g. fake surrender announcements).


  • Amplify or debunk narratives on social media through bot networks or automated content analysis.


  • Detect and trace digital influence operations, using natural language processing (NLP) to flag hostile messaging in real time.
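To make the last bullet concrete, here is the flag-and-score pattern such monitoring follows, reduced to a toy. The marker list and weights are entirely hypothetical; real systems use trained language models rather than keyword matching, and flagged items go to human analysts, not automatic action.

```python
# Hypothetical watch-list for illustration only; production NLP systems
# classify with trained models, not hand-written phrase lists.
HOSTILE_MARKERS = {
    "surrender": 2.0,
    "lay down your arms": 3.0,
    "your commanders have fled": 3.0,
}

def flag_message(text, threshold=2.5):
    """Score a message against the marker list and flag it for human
    review when the combined score crosses the threshold."""
    lowered = text.lower()
    score = sum(weight for phrase, weight in HOSTILE_MARKERS.items()
                if phrase in lowered)
    return {"score": score, "flagged": score >= threshold}
```

A message hitting several markers at once is flagged; routine traffic passes through untouched. The interesting engineering is in keeping the false-positive rate low enough that analysts trust the flags.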


Ukraine’s civil society and technological communities have proven unusually agile here, launching platforms like “Find Your Own” (to identify Russian POWs via face recognition) and chatbots for citizens to report enemy troop movements.


Ethical and Strategic Dilemmas


The rapid adoption of AI in warfare raises a host of dilemmas:


  • Accountability: Who is responsible if an autonomous drone kills civilians due to algorithmic error?


  • Escalation: Does faster AI-enabled decision-making raise the risk of accidental conflict escalation or reduced human deliberation in lethal actions?


  • Proliferation: How do countries prevent non-state actors or rogue states from acquiring autonomous weapons cheaply modelled on systems seen in Ukraine?


The war has served as an unintended “open laboratory” for the world to witness both the power and peril of AI in conflict. Unlike in the laboratories of Silicon Valley or the think tanks of Washington DC, in Ukraine these questions are not theoretical.


Strategic Implications for NATO and Beyond


NATO and EU states are watching Ukraine’s AI warfighting innovations closely. Lessons include:


  • Resilience through decentralisation: AI tools that operate "at the tactical edge" (effectively meaning without constant internet connectivity to a central command) offer major advantages in degraded communications environments.


  • Civil-military innovation partnerships: Ukraine’s integration of civilian technology experts, hackers, and entrepreneurs into her war effort points to a new model of hybrid defence.


  • Urgency in regulatory policy: The war is outpacing Geneva Conventions-based norms on weapons and targeting. New treaties — or at least protocols — are needed fast, or international law is at risk of being left behind.


Conclusion: A Glimpse Into Future War


Ukraine’s battlefield reveals that AI is not some distant threat — it is already a core component of modern war. While its full potential and risks are still unfolding, the war has shown that the side that learns faster, integrates more flexibly and manages human-AI teaming more effectively may hold a decisive edge.


In this sense the war in Ukraine is not only about territory; it is about the future of war itself. AI is now a permanent combatant.


Copyright (c) Lviv Herald 2024-25. All rights reserved.  Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine.