
The European Union's Artificial Intelligence Act


Sunday 22 February 2026


The European Union’s Artificial Intelligence Act is not merely a gesture towards technological caution. It is a dense and highly structured regulatory instrument, drafted in the idiom of internal market law yet animated by a constitutional philosophy. If the General Data Protection Regulation sought to discipline the exploitation of personal data, the AI Act seeks to discipline the deployment of systems that increasingly mediate human judgment itself.


The measure proceeds under Article 114 of the Treaty on the Functioning of the European Union, the familiar legal basis for harmonising rules that affect the functioning of the single market. Its architects were conscious that divergent national regimes for artificial intelligence would fracture cross-border commerce. The Act therefore takes the form of a Regulation — directly applicable in all Member States without the need for national transposition. Uniformity is not an incidental feature; it is the premise.


At the heart of the Act lies a taxonomy of risk. This classification is not rhetorical but juridical: obligations attach according to category, ranging from outright prohibition at the top, through dense compliance duties for high-risk systems, down to light transparency rules and, for minimal-risk applications, no specific regulation at all.


First are prohibited practices. These include certain forms of subliminal manipulation that materially distort a person’s behaviour, exploitation of vulnerabilities arising from age, disability or a specific social or economic situation, and social scoring systems that evaluate individuals’ trustworthiness across unrelated contexts, whether operated by public or private actors. The prohibition is absolute: such systems may not be placed on the market, put into service or used within the Union. The legal technique is blunt because the perceived threat to fundamental rights is acute.


Secondly come high-risk systems. These are defined both by sector and by function. Annex III to the Regulation enumerates areas such as critical infrastructure, education and vocational training, employment and worker management, access to essential public and private services, law enforcement, migration and border control, and the administration of justice. In these domains artificial intelligence may significantly affect life chances: access to employment, credit, asylum or liberty itself.


For such systems the Act establishes a regime that resembles product safety law combined with administrative constitutionalism. Providers must establish a risk management system and maintain it throughout the lifecycle of the high-risk system. They must ensure that training, validation and testing data are relevant, representative and, to the best extent possible, free of errors and complete. Technical documentation must be drawn up demonstrating compliance. Automatic logging capabilities are required to permit traceability. Human oversight mechanisms must be embedded so that operators can intervene or override outputs where necessary.


Before a high-risk system is placed on the market, it must undergo a conformity assessment. In some cases this may be conducted internally by the provider; in others, particularly where harmonised standards are absent, a notified body — an independent conformity assessment organisation designated by a Member State — must review compliance. Once conformity is established, the system bears the CE marking familiar from other regulated products within the single market.


The Act also imposes post-market monitoring obligations. Providers must operate a system for collecting, documenting and analysing data on the performance of high-risk systems. Serious incidents, including malfunctions that breach obligations under Union law protecting fundamental rights, must be reported to national supervisory authorities.


Enforcement is entrusted primarily to Member States, which must designate one or more competent authorities. At Union level an AI Office within the European Commission coordinates supervision of the most advanced general-purpose models. Administrative fines may reach up to seven per cent of global annual turnover for the gravest infringements, echoing the deterrent architecture of the GDPR.


A distinct and politically sensitive component concerns general-purpose artificial intelligence models, including large language models capable of a wide range of downstream applications. The Regulation distinguishes between ordinary general-purpose models and those deemed to present “systemic risk” by virtue of scale and capability; it presumes such risk where the cumulative computation used to train a model exceeds 10²⁵ floating-point operations. Providers of such models must conduct model evaluations, perform adversarial testing, assess and mitigate systemic risks, document energy consumption, and report serious incidents. Transparency obligations require disclosure that content has been generated or manipulated by artificial intelligence, particularly in contexts susceptible to deception.


In this respect the Act grapples with frontier technology. Unlike traditional sectoral regulation, it must anticipate models that can be fine-tuned for unpredictable uses. The legal solution is layered responsibility: upstream model developers bear baseline obligations, while downstream deployers assume duties corresponding to their specific application.


Underlying these detailed provisions is a philosophical commitment. The Regulation repeatedly invokes fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union. Human dignity, non-discrimination, data protection and effective judicial remedy are not decorative references; they are interpretative guides. The Act presumes that algorithmic systems are not neutral artefacts but socio-technical constructs capable of entrenching bias or diffusing accountability.


The insistence upon human oversight is emblematic. It does not demand that humans manually replicate machine analysis. Rather, it requires that responsibility remain attributable. There must be a natural or legal person who can explain, justify and, if necessary, correct the system’s operation. The law resists the notion of autonomous authority detached from human command.


Critics contend that the regime is excessively complex. Compliance burdens may fall disproportionately upon smaller enterprises lacking legal departments. There is concern that innovation may migrate towards jurisdictions with lighter-touch regulation. Moreover, the pace of technical development may render static annexes obsolete, necessitating frequent delegated acts by the Commission to update classifications.


Yet the European wager is strategic. By embedding trust into the regulatory environment, the Union hopes to create a market in which consumers, public authorities and international partners can rely upon certified standards. Just as the GDPR influenced global data governance, the AI Act aspires to generate a “Brussels effect” in artificial intelligence.


For Ukraine, whose aspiration to European Union membership is now constitutional doctrine, the implications are substantial. Alignment with the acquis communautaire in digital regulation will be a prerequisite for accession. Ukrainian developers of artificial intelligence systems will increasingly need to document training data integrity, risk mitigation processes and human command structures if their products are to circulate within the European market. This holds even for firms engaged in defence work: the Regulation excludes systems used exclusively for military purposes, but dual-use products remain within its scope. In a wartime environment where machine learning assists reconnaissance, targeting and logistics, the interplay between innovation and accountability becomes especially delicate.


The Act therefore operates on two planes. Legally it harmonises market conditions through a granular architecture of obligations, assessments and sanctions. Philosophically it asserts that technological progress must remain subordinate to democratic values. It affirms that efficiency does not eclipse dignity, nor prediction extinguish responsibility.


Whether the system will achieve its objectives depends upon enforcement capacity and adaptive governance. Supervisory authorities must acquire technical expertise commensurate with the systems they regulate. The Commission must calibrate delegated powers carefully to avoid regulatory drift. Courts will inevitably be called upon to interpret concepts such as systemic risk and sufficient human oversight.


Artificial intelligence evolves rapidly; legislation advances deliberately. The EU AI Act is an attempt to narrow that temporal gap through structure, precaution and institutional coordination. It is a statement that Europe intends not merely to consume intelligent systems but to constitutionalise them.


In doing so, Europe places law at the centre of technological modernity — not as a brake upon innovation, but as its frame.

 
 

Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, all the geopolitical and diplomatic consequences of the war, and the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either using the buttons at the top of this page or by becoming a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved. Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please see the "About" page.
