What is the relationship between emerging technology, politics and ethics?

Wednesday 13 May 2026


Throughout modern history there has been a persistent argument about the relationship between ideas and machines. Do human beings first formulate moral, political and social doctrines, and then build technologies that express those doctrines? Or does the process run in the opposite direction — with technological innovation reshaping the possibilities of society, and with political philosophies emerging afterwards as rationalisations of material change?


The second proposition is often regarded as unsettling because it diminishes the apparent autonomy of human morality. It suggests that people do not so much choose their values as inherit them from the technological conditions in which they live. Moral systems, under this view, supervene upon technology. New inventions alter the structure of economic life, communication and warfare, and societies then reconstruct their ethics and politics accordingly.


There is considerable evidence for this proposition across centuries of human civilisation.


The printing press is perhaps the classical example. Prior to Johannes Gutenberg’s movable type, literacy was constrained by the extraordinary expense of manuscript reproduction. Religious authority therefore became naturally centralised. The Roman Catholic Church’s monopoly over written scripture was not merely theological; it was technological. Once printing enabled mass replication of texts, the conditions for the Protestant Reformation emerged almost automatically. Martin Luther’s theological rebellion succeeded not solely because of the intrinsic persuasiveness of his arguments but because pamphlets could suddenly circulate cheaply and rapidly across the German principalities.


The moral consequence followed the technological transformation. Individual conscience became elevated over institutional hierarchy because printing made individual access to scripture materially possible. Protestantism was therefore not simply an ideological event but also a technological one.


The same may be said of the Industrial Revolution. Before industrialisation, aristocratic and monarchical political structures corresponded closely to agrarian economies. Land ownership determined power because land was the principal productive asset. Social hierarchy appeared morally natural because economic production itself depended upon rigid, inherited relationships.


Industrial machinery transformed this equilibrium. Capital became mobile. Urban labour markets emerged. Mass factory employment created concentrations of workers with shared grievances and identities. Liberal democracy, socialism and trade unionism did not arise in a vacuum of philosophical speculation. They emerged because industrial technology altered the structure of production and therefore changed the feasible forms of political organisation.


Karl Marx himself partially recognised this dynamic. Historical materialism was essentially a theory that productive technology determines social consciousness. Marx argued that economic relations — themselves dependent upon modes of production — form the “base” upon which political and moral “superstructures” are erected. While Marx overstated economic determinism, the broader insight retains force. Moral philosophy often follows changes in material capability.


Even modern conceptions of privacy illustrate the same phenomenon. Medieval societies possessed comparatively little expectation of privacy because surveillance technologies scarcely existed. Communities were small, interpersonal and physically transparent. Modern liberal notions of individual privacy emerged alongside bureaucratic states, urbanisation, photography, telephones and electronic data systems. One cannot meaningfully advocate a right to digital privacy before the existence of digital technology.


The relationship becomes even clearer in warfare.


For centuries, aristocratic military leadership appeared morally justified because warfare depended heavily upon mounted elites possessing expensive equipment and hereditary martial training. Firearms disrupted this arrangement. Mass conscription armies became feasible because industrial weapons reduced the military importance of noble horsemanship. Democratic politics expanded partly because industrialised warfare required mobilisation of entire populations. Governments increasingly needed the consent, or at least acquiescence, of masses who were now indispensable to national defence.


Twentieth-century nuclear weapons transformed moral and political philosophy yet again. Prior to atomic weapons, total war remained theoretically winnable. After Hiroshima and Nagasaki, the possibility of mutual annihilation altered doctrines of sovereignty, deterrence and diplomacy. Entire schools of strategic ethics — including modern theories of arms control and collective security — emerged because the destructive power of technology had fundamentally changed the stakes of interstate conflict.


Artificial intelligence may now represent another such civilisational turning point.


Much contemporary discussion about artificial intelligence assumes that societies will consciously choose ethical frameworks governing machine systems. Yet history suggests the reverse may occur. Artificial intelligence systems may gradually restructure labour markets, education, communication and military affairs, after which political ideologies will adapt themselves to the new technological landscape.


Already this process is visible. Traditional educational models presupposed scarcity of knowledge and the importance of memorisation. Large language models undermine that assumption. When information synthesis becomes instantaneous, educational morality begins shifting away from rote learning towards judgment, verification and interpretation. What was once considered “cheating” may eventually become regarded as efficient technological augmentation, just as calculators transformed attitudes towards arithmetic.


Similarly, artificial intelligence may destabilise liberal assumptions about human uniqueness. Many moral systems implicitly rest upon the notion that human cognition possesses a special status. If machines become capable of sophisticated reasoning, persuasion and creativity, societies may gradually alter their conception of personhood and value. This would not necessarily occur because philosophers persuade populations through abstract argument. Rather, everyday interaction with intelligent systems may slowly normalise new assumptions about agency and intelligence.


Social media provides a recent illustration of how rapidly technological architecture reshapes political behaviour. Before algorithmic recommendation systems, political discourse was constrained by editorial gatekeepers and geographic communities. Social media dissolved these limitations. Political identities became increasingly performative, polarised and emotionally accelerated because the technology rewarded outrage, tribal affiliation and constant engagement. Political morality adapted accordingly. Concepts such as “misinformation”, “online radicalisation” and “cancel culture” emerged not from timeless philosophical disputes but from technological environments optimised for virality.


One might object that human beings still exercise agency. Technologies do not invent themselves spontaneously. Engineers, corporations and governments make choices about what to build. Yet even these choices often arise within competitive pressures generated by earlier technologies. Nuclear weapons research accelerated because rival states feared strategic inferiority. Artificial intelligence development now proceeds partly because governments and corporations believe that abstention would produce geopolitical or economic disadvantage.


Technology therefore creates self-reinforcing evolutionary pressures. Once a technological capability becomes feasible, societies frequently reorganise themselves around it regardless of prior moral hesitation.


This does not mean morality is entirely illusory. Human beings still interpret technological conditions through culture, religion and philosophy. Different societies respond differently to identical inventions. China’s integration of digital surveillance differs substantially from Europe’s regulatory emphasis upon privacy rights. Yet even these differences occur within technological constraints already imposed by the existence of digital systems themselves.


The deeper implication of technological supervenience is therefore profoundly humbling. It suggests that many political disputes may concern not eternal moral truths but temporary adaptations to changing material realities.


Debates over censorship, labour rights, family structures, nationalism and democracy often appear intensely ideological. Yet beneath them may lie shifts in communication networks, industrial systems, transportation infrastructure and military technology. Human beings then construct moral narratives explaining why the new arrangements are virtuous or inevitable.


In wartime Ukraine, this phenomenon is particularly visible. The widespread deployment of cheap drones has altered military hierarchy, battlefield tactics and even cultural perceptions of heroism. Small units equipped with commercially adapted technology can now inflict disproportionate damage upon mechanised formations. Traditional military prestige attached to heavy armour and massed artillery increasingly competes with admiration for agile drone operators and software engineers. A society under technological transformation inevitably begins re-evaluating status, expertise and national identity itself.


The theory that moral and political systems supervene upon technology is not wholly deterministic. Human creativity, religion and philosophy retain importance. Nevertheless, technological change appears repeatedly to set the parameters within which moral reasoning subsequently operates.


Civilisations may therefore flatter themselves when they imagine they consciously choose their values independently of their tools. More often, they inherit new tools first — and only later invent the ethical language needed to explain the worlds those tools have already created.


Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, and all the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either with the buttons at the top of this page or become a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved. Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please see the "About" page.
