
The Algorithm at War: Anthropic, the Pentagon and the Law of Artificial Intelligence


Friday 13 March 2026


In the early decades of the twenty-first century a profound transformation has begun to unfold in the relationship between private technology companies and the machinery of war. Artificial intelligence systems, developed primarily for commercial applications such as language processing, data analysis and automated assistance, have rapidly demonstrated capabilities of immediate interest to military planners. Amongst the firms at the centre of this transformation is Anthropic, a San Francisco-based artificial intelligence company known for developing large language models with a particular emphasis upon safety and alignment.


Anthropic’s legal struggle with the United States Department of Defense over the potential deployment of its artificial intelligence systems in military operations illustrates a broader question that will likely define the next generation of warfare: who controls advanced algorithms, and under what legal and ethical constraints may they be used in combat?


Origins of Anthropic’s cautious approach to military use


Anthropic was founded in 2021 by a group of researchers who had previously worked at OpenAI. Their central premise was that increasingly powerful artificial intelligence systems required deliberate safeguards to ensure that they behaved in ways aligned with human values. This philosophy, often described as “constitutional AI”, involves training models to evaluate and revise their own outputs against a written set of guiding principles, with the aim of reducing harmful outputs and discouraging dangerous applications.


From the beginning Anthropic positioned itself as a safety-focused developer rather than a conventional defence contractor. Its models, including the Claude series of language systems, were designed primarily for research, enterprise productivity and information analysis. Unlike traditional military technology firms, the company did not originate within the network of defence procurement.


Yet the capabilities of such systems quickly attracted the attention of governments. Large language models can analyse vast quantities of data, summarise intelligence reports, translate intercepted communications, assist in mission planning and generate simulations of adversary behaviour. In modern warfare, where speed of decision-making can determine the outcome of engagements, these capacities are extremely valuable.


The United States Department of Defense has for several years pursued a strategy of integrating artificial intelligence across the armed forces. Initiatives such as the Joint Artificial Intelligence Center, later absorbed into the Chief Digital and Artificial Intelligence Office, aim to embed machine learning in everything from logistics and maintenance to intelligence analysis and battlefield decision support.


The Pentagon therefore regards advanced artificial intelligence companies as essential partners in maintaining technological superiority over strategic rivals such as China and Russia.


The emergence of legal tensions


The conflict between Anthropic and the Department of Defense arose as military agencies sought to procure or access the company’s systems for operational use. Reports suggest that defence officials explored arrangements under which large language models could assist with intelligence synthesis, cyber operations planning and the analysis of battlefield communications.


Anthropic, however, had publicly articulated policies restricting the use of its systems in certain categories of activity, including autonomous weapons development and the direct facilitation of lethal military operations. These restrictions were not unusual. Several artificial intelligence firms, conscious of reputational risk and ethical concerns, have adopted internal policies limiting the military applications of their technologies.


The tension therefore emerged from a fundamental disagreement about whether a company that develops dual-use artificial intelligence technologies can legitimately prevent their use by the armed forces of its own country.


From the perspective of the Department of Defense, artificial intelligence models are simply another form of advanced software that can support national security. Defence officials argue that the United States cannot afford to allow private corporations to dictate whether critical capabilities may be used in wartime. In an era of great-power competition, technological advantage is seen as essential to deterrence.


Anthropic, by contrast, has maintained that the company has both legal rights and ethical obligations regarding the deployment of its systems. As a privately developed technology, the firm asserts control over licensing terms and acceptable uses. The company also fears that military deployment could create reputational harm, regulatory backlash or unintended consequences if the systems are misused.


The legal framework governing government access to technology


The legal questions arising from this dispute are complex and draw upon several areas of American law.


First, there is the matter of intellectual property and licensing. Software companies typically license their systems to users under contractual terms that restrict certain categories of activity. Anthropic has argued that such terms remain legally enforceable even when the customer is a government agency. If a government agency violates the licence, the company may pursue civil remedies.


The government, however, possesses powers that complicate this framework. Under the Defense Production Act and related authorities, the federal government can compel companies to prioritise defence contracts or to provide materials considered essential to national security. Although historically used in manufacturing contexts, the law could theoretically be invoked to require the provision of technological services.


There is also the possibility of national security exemptions overriding certain contractual restrictions. Governments have long argued that, in times of emergency, private property rights may yield to the imperatives of defence. This principle has appeared in numerous historical contexts, from wartime industrial mobilisation to telecommunications surveillance.


Anthropic’s legal challenge has therefore raised the question of whether advanced algorithms can be treated as strategic resources comparable to physical infrastructure or industrial production.


Ethical concerns surrounding AI in warfare


Behind the legal dispute lies a deeper ethical argument about the role of artificial intelligence in lethal decision-making.


Critics of military AI warn that the integration of advanced algorithms into warfare could accelerate conflicts, obscure accountability and increase the risk of catastrophic errors. Machine learning systems often behave in ways that are difficult for their creators to predict or fully understand. When such systems are applied to intelligence interpretation or targeting analysis, mistakes could have severe humanitarian consequences.


Anthropic’s leadership has repeatedly emphasised that powerful artificial intelligence systems must be developed with extreme caution. Their concern is not merely reputational. If language models are used in operational contexts such as intelligence assessment or targeting assistance, errors could influence life-and-death decisions.


There is also the risk of algorithmic escalation. If rival states integrate artificial intelligence into military command structures, decision cycles may accelerate beyond human comprehension. Conflicts could unfold with unprecedented speed, potentially reducing opportunities for diplomatic intervention or strategic restraint.


These concerns echo broader international debates about autonomous weapons and the future of warfare. Numerous governments and civil society organisations have argued for restrictions on “killer robots”, a term often used to describe weapons capable of selecting and engaging targets without human oversight.


Although large language models are not themselves weapons systems, their integration into military decision processes raises related questions about accountability and control.


Precedents in the relationship between technology firms and the military


Anthropic’s dispute with the Pentagon is not without precedent. Over the past decade several technology companies have experienced internal revolts or public controversy over military contracts.


Perhaps the most prominent example occurred in 2018 when Google employees protested the company’s participation in Project Maven, a Department of Defense programme that used machine learning to analyse drone surveillance footage. Thousands of employees signed petitions arguing that the technology could be used to facilitate lethal operations.


Following the protests Google declined to renew the contract and published guidelines restricting the use of its artificial intelligence technologies in certain military contexts.


Other firms have taken different approaches. Companies such as Palantir and Anduril have embraced defence work, arguing that democratic societies must actively support their armed forces with advanced technologies.


Anthropic occupies an intermediate position. While not wholly opposed to government collaboration, the company has sought to impose limits on how its systems may be used.


Strategic implications for national security


For the Department of Defense, the stakes of this dispute extend far beyond a single company. Artificial intelligence is widely regarded as a decisive technology in twenty-first-century military competition. Nations that successfully integrate advanced machine learning into intelligence, logistics and command systems may gain significant advantages on the battlefield.


China in particular has invested heavily in military artificial intelligence, with state-supported research institutions developing systems for surveillance, cyber operations and autonomous weapons. American defence planners therefore fear that self-imposed restrictions by Western technology firms could create strategic asymmetries.


The Pentagon’s argument is straightforward. If democratic governments are denied access to advanced artificial intelligence tools developed within their own economies, authoritarian rivals may gain relative advantage.


Anthropic’s response is more cautious. The company emphasises that the long-term risks of uncontrolled artificial intelligence deployment may outweigh short-term military benefits. In its view, the international community must develop norms governing how such technologies are used in conflict.


Litigation as a battleground for technological governance


The US Department of Defense has sought to blacklist Anthropic from US Government contracts for imposing limitations upon the use of its software that are not to the government's liking. Anthropic is challenging this conduct before the courts as a violation of its constitutional rights to freedom of speech and to petition the government without fear of retribution. The legal dispute between Anthropic and the Department of Defense has therefore become a test case for how societies govern powerful private technologies with strategic military implications.


If courts ultimately affirm the government’s authority to compel access to artificial intelligence systems, the precedent could reshape the relationship between Silicon Valley and the national security establishment. Technology firms might find themselves treated increasingly as components of national infrastructure rather than purely commercial enterprises.


Conversely, if companies succeed in restricting military uses through licensing and litigation, governments may be forced to develop their own artificial intelligence capabilities within state-controlled institutions or through defence-focused contractors.


The outcome will also influence international debates about the regulation of artificial intelligence in warfare. Democratic societies are struggling to reconcile two competing priorities: maintaining technological superiority over adversaries while preserving ethical constraints on how new technologies are used.


The broader transformation of warfare


Regardless of the legal outcome, the dispute reveals how profoundly warfare is changing. In earlier eras the decisive instruments of war were ships, artillery and aircraft. Today they increasingly include algorithms, data and computational infrastructure.


Private companies now control many of the most advanced digital technologies. As a result, the boundaries between civilian innovation and military capability are becoming blurred. Governments must negotiate with corporations that possess capabilities once associated exclusively with state laboratories or defence contractors.


Anthropic’s confrontation with the Pentagon is therefore not merely a legal disagreement between a company and a government agency. It represents a moment in a much larger transformation, in which artificial intelligence is becoming an integral component of military power.


The central question facing policymakers, engineers and jurists alike is whether this transformation can be governed responsibly. As algorithms begin to influence the conduct of war, societies must determine who decides how such systems are used, and under what principles.


The courtroom battles between Anthropic and the United States Department of Defense may provide one of the earliest answers. But they will almost certainly not be the last.

 
 
