
The UK Online Safety Act: Regulation and the Boundaries of Free Expression

  • Writer: Matthew Parish
  • Sep 5
  • 5 min read

In October 2023 the United Kingdom enacted the Online Safety Act, the culmination of a long and controversial legislative process that began with a White Paper in 2019. The law reflects growing public and political concern about the harms associated with online communication, from cyber-bullying and pornography to terrorist propaganda. Its central premise is that social media companies, search engines and other large online platforms must take greater responsibility for the content they host and distribute. Yet it has also provoked unease amongst civil liberties advocates, journalists and technology companies, who fear that the measures it mandates could restrict freedom of speech.


The Core Provisions


At its heart the Act imposes duties of care upon companies that provide user-to-user services or search engines accessible to UK users. These duties vary according to the size of the service and the risks posed by its functions. Ofcom, the UK’s communications regulator, has been given broad oversight powers to enforce compliance. Companies that fail to fulfil their duties face fines of up to £18 million or ten per cent of global annual turnover, whichever is greater, or blocking orders against their services.


The law requires platforms to remove illegal content expeditiously. This encompasses material relating to child sexual abuse, terrorism, revenge pornography, fraud and threats of violence. Services likely to be accessed by children must also address content that is legal but harmful to them, such as content promoting self-harm or eating disorders. The largest services, designated as “Category 1”, must provide clear terms and conditions explaining how they moderate other content that might be harmful to adults, and they are obliged to apply those rules consistently.


The Act contains specific provisions concerning pornography, requiring sites to implement robust age-verification systems. It also grants Ofcom powers to demand information from companies, conduct audits and compel changes to content-moderation systems. Senior managers may be held personally liable in some circumstances for failures to comply.


The Case for the Act


Proponents argue that the Online Safety Act represents a necessary intervention in a digital environment where voluntary self-regulation has failed. Scandals such as the dissemination of livestreamed terrorist attacks, the circulation of child sexual abuse material, and the suicides of young people exposed to harmful content have persuaded Parliament that stronger state regulation is indispensable. Supporters emphasise that the Act does not create new categories of illegal speech but rather compels companies to enforce the laws that already exist. They also stress that the focus on child protection reflects overwhelming public concern, and that obligations for adult-facing content are limited to transparency and consistency rather than mandated censorship.


Concerns about Free Expression


Nevertheless critics warn that the Act could have a chilling effect upon free expression. One area of anxiety lies in the requirement to address content that is “legal but harmful” to children. Because this category is open-ended, companies may err on the side of over-removal to avoid sanctions, leading to the suppression of lawful speech. Civil liberties groups have warned that political discussion, satire or artistic expression could be caught by risk-averse moderation policies.


Another contentious issue is Ofcom’s ability to compel companies to deploy technologies to detect prohibited material, including in messages sent via private messaging services. Technology companies argue that this could undermine end-to-end encryption, a cornerstone of online privacy and secure communication. Human rights advocates echo these concerns, warning that mandating the scanning of encrypted messages would erode both privacy and free expression, and could set a precedent emulated by authoritarian governments.


Journalists and campaigners also point to the potential chilling effect upon platforms’ terms of service. Because services are bound to apply their rules consistently, they may adopt stricter moderation than necessary, removing lawful but controversial material rather than risk scrutiny for lax enforcement. In this way, the Act may indirectly incentivise censorship.


Comparisons with the European Union and the United States


The Online Safety Act sits within a wider global debate about how best to regulate online spaces, and it is instructive to compare it with the European Union’s Digital Services Act (DSA) and the prevailing American approach.


The EU’s DSA, which came into effect in 2024, is framed explicitly around fundamental rights. It emphasises systemic risk management, transparency obligations and independent audits, particularly for very large online platforms and search engines. It requires clear notice-and-action procedures and provides structured access for researchers, but it maintains the European prohibition against imposing a general monitoring obligation. Unlike the UK Act, the DSA does not mandate age-verification across the board, nor does it contemplate compelled scanning of encrypted private messages. Its centre of gravity lies in processes and accountability rather than substantive prohibitions, with the aim of preserving pluralism while ensuring systemic oversight.


The United States remains an outlier. The constitutional protection of the First Amendment and the liability shield of Section 230 of the Communications Decency Act (a general protection from liability for online platforms that host third-party material) together limit the ability of lawmakers to impose obligations on platforms. While state-level initiatives such as social media moderation laws in Texas and Florida have tested these boundaries, they have faced significant constitutional challenges. At the federal level, debates over child safety continue, but no comprehensive liability regime comparable to the UK or EU models exists. The US therefore retains the freest environment for online speech, albeit one often criticised for failing to protect users from harm.


Placed side by side, the three approaches reveal important contrasts. The EU prioritises transparency and systemic safeguards, seeking to minimise arbitrary censorship while addressing risks. The United States relies heavily upon constitutional rights and market forces, intervening only at the margins. The United Kingdom has chosen the most muscular model, combining broad regulatory oversight, enforceable duties of care and the possibility of intrusive measures such as age-verification and scanning of encrypted channels. From the standpoint of free speech, the UK approach carries the highest risk of incentivising precautionary takedowns and creating a culture of surveillance, while the EU’s framework is more process-driven and the US remains highly permissive.


A Question of Balance


The Online Safety Act exemplifies the difficult balance between protecting vulnerable users and safeguarding free speech. Few dispute the need to confront online harms, particularly where children are concerned. Yet the law entrusts significant discretion to both Ofcom and private companies to decide what should or should not remain online. This delegation of authority blurs the boundary between state regulation and corporate content moderation, raising questions about accountability and democratic oversight.


The Act is likely to be refined through implementation, secondary legislation and regulatory guidance. How Ofcom interprets its mandate, and how vigorously companies enforce their duties, will determine whether the law becomes a model of proportionate regulation or a framework that stifles legitimate expression.


Totalitarian methods within a democracy?


The UK Online Safety Act represents one of the most ambitious attempts by a democratic state to regulate the digital sphere. Its provisions on illegal content, child protection and corporate transparency are intended to make the internet safer. However, the breadth of its obligations, the potential encroachment upon encryption, and the risk of over-zealous enforcement mean that it also sits uneasily with traditional principles of free expression and privacy.


When viewed in comparative perspective, the UK has opted for the most interventionist model among Western democracies. The real test will come not from the letter of the law but from its practical application: whether Ofcom and the platforms under its jurisdiction can enforce safety without extinguishing the lively, pluralistic debate upon which democracy depends, and without eroding the privacy that most of us expect in our digital communications.


Further, the Online Safety Act might serve as a precedent for countries with less robust rule-of-law traditions, which could justify totalitarian means of surveillance on the ground that the United Kingdom, the cradle of free speech, privacy and democracy, has chosen to do the same. The effects of the statute may resonate not just in the United Kingdom but across the globe. In time, this sort of government surveillance of what people publish and read on the internet and in private messages may become the new norm for us all.


Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, and all the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either with the buttons at the top of this page or become a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved.  Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please click the "About" page.
