
The history and future of the World Wide Web


Monday 16 February 2026


The World Wide Web was born from a very small problem with very large implications: how to help scientists share documents across borders, institutions and computer systems without first agreeing upon a single master catalogue. In 1989 at CERN, Tim Berners-Lee proposed a system that would let information live in many places but be connected by links. What made this idea politically, not merely technically, disruptive was its refusal of central permission. If you could publish a document and point to another document, you did not need a gatekeeper. That architectural choice, modest as it sounded, became the defining wager of the modern information age.


In April 1993 CERN placed the web software into the public domain, a decision that helped the web spread quickly because no single entity owned the basic machinery. In 1994 Berners-Lee founded the World Wide Web Consortium (W3C) to steward open standards so that the web would remain interoperable rather than splinter into incompatible corporate fiefdoms. These early choices were ethical choices disguised as engineering. They expressed a belief that universality, openness and low barriers to entry would, in the long run, produce more human flourishing than any curated alternative.


The web then acquired, in fairly rapid succession, three distinct identities.


First came the web as library. In the 1990s a page was often literally a page: static text, a handful of images, a link list maintained by a person. It was messy, slow and astonishingly fertile. Much of its cultural impact came from the fact that ordinary individuals could publish to a global audience with no more than curiosity and patience.


Then came the web as marketplace. Commercial browsers, advertising, secure transactions and the gradual professionalisation of web publishing turned hobby pages into businesses. E-commerce did not merely sell goods; it altered expectations. People began to treat the web as a place where one could accomplish life tasks, not merely read about them.


Finally came the web as social arena. The shift often labelled “Web 2.0” bundled creation, conversation and community into platforms whose value increased with every new participant. The web ceased to feel like a collection of documents and began to feel like a set of places. It also began to concentrate power. The ability to host content, distribute it at scale and monetise attention created new gatekeepers, even though the underlying architecture had been designed to avoid gatekeepers.


This concentration is where the moral argument over the web’s balance sheet becomes genuinely difficult. If the web were only a library, its harms would resemble those of any library: bad books on shelves, propaganda in pamphlets, libels in newsletters. But the web is not only a library. It is also a set of behaviour-shaping machines, optimised in many corners for engagement, and engagement is not the same as truth, kindness or civic health.


Berners-Lee has been unusually clear about this. In 2017 he described three broad challenges: loss of control over personal data, the ease with which misinformation spreads and the distortion of democratic politics through online political advertising. The list is telling because it treats the web’s sickness as structural. It is not merely that some people behave badly online. It is that the economic and institutional arrangements of much of the modern web reward the worst incentives: harvest attention, harvest data, target persuasion, repeat.


In his more recent commentary he has sharpened the point that design is never neutral. A web service built to maximise time-on-site can be built to inflame, addict or polarise, even if nobody writes a line of code explicitly instructing it to do so. In an interview published on 29 January 2026, he rejected the older mantra that technology is neutral, stressing that design choices can be “explicitly good” or “explicitly bad”. That is not a rhetorical flourish. It is a claim about responsibility. If harm emerges predictably from a design, then the designers, funders and regulators cannot pretend surprise.


And yet it would be intellectually dishonest to deny what the web has delivered.


It has expanded access to knowledge on a scale that earlier generations would have found miraculous. It has enabled small businesses to reach customers without owning a high-street shopfront. It has helped diasporas maintain language and identity. It has supported investigative journalism, open-source collaboration and the rapid sharing of scientific results. In war, it has been both a lifeline and a battleground. Ukrainians, for example, have used online tools to document atrocities, to raise funds, to coordinate humanitarian relief and to tell their own story to audiences who might otherwise only hear the Kremlin’s narrative. At the same time Ukraine has also been subjected to relentless online manipulation, hacking, doxxing and propaganda, precisely because the web’s openness can be turned into a weapon.


So has the web been a force for good or for bad?


A fair answer is that it has been, on balance, a force for good at the level of capability, and a force for instability at the level of governance. It gives ordinary people powers that once belonged only to states and large corporations: publication, organisation, persuasion, fundraising, surveillance, recruitment. When that power is used to educate, connect and protect, it is profoundly good. When it is used to exploit, radicalise and deceive, it can corrode trust so deeply that society loses the ability to agree on what is real.


The dividing line is not simply content moderation. It is political economy.


Much of the modern web is financed by advertising models that reward profiling. “Free” services are often paid for with personal data, extracted at scale and repurposed to predict and influence behaviour. This is why Berners-Lee’s critique so often returns to data ownership and agency. His proposed remedy is not nostalgia for the 1990s, but a rebalancing of power away from platforms and back towards users.


One expression of this is the Contract for the Web, launched in 2019 through the World Wide Web Foundation as a set of principles for governments, companies and citizens aimed at protecting privacy, making the web accessible and preventing abuse. Critics have noted that principles without enforcement can become branding exercises, but the attempt matters because it frames the web as civic infrastructure rather than a mere consumer product.


Another expression is Solid, a project associated with Berners-Lee’s attempt to “re-decentralise” the web by separating data from applications. Under the Solid model, individuals store information in personal data “pods” and grant applications permission to read or write specific data, rather than surrendering everything to each new platform. The ambition is straightforward: if switching social networks did not require abandoning one’s social graph, history and identity, then platforms would have to compete on quality rather than captivity.
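The pod model can be illustrated with a short conceptual sketch. This is not the real Solid protocol or any solid-client API; the `Pod` class, its methods and the permission strings are illustrative inventions, intended only to show the separation of data from applications and the owner's power to grant and revoke access.

```python
class Pod:
    """A toy personal data store owned by one user (illustrative, not Solid itself)."""

    def __init__(self, owner: str):
        self.owner = owner
        self._data: dict[str, str] = {}          # resource path -> content
        self._grants: dict[str, set[str]] = {}   # app id -> permissions like "profile:read"

    def grant(self, app: str, resource: str, mode: str) -> None:
        """Owner allows `app` to 'read' or 'write' a single resource."""
        self._grants.setdefault(app, set()).add(f"{resource}:{mode}")

    def revoke(self, app: str) -> None:
        """Owner withdraws all of an app's permissions; the data itself stays in the pod."""
        self._grants.pop(app, None)

    def read(self, app: str, resource: str) -> str:
        if f"{resource}:read" not in self._grants.get(app, set()):
            raise PermissionError(f"{app} may not read {resource}")
        return self._data[resource]

    def write(self, app: str, resource: str, content: str) -> None:
        if f"{resource}:write" not in self._grants.get(app, set()):
            raise PermissionError(f"{app} may not write {resource}")
        self._data[resource] = content


pod = Pod(owner="alice")
pod.grant("photo-app", "profile", "write")
pod.grant("photo-app", "profile", "read")
pod.write("photo-app", "profile", "name: Alice")

# A second application can be granted read-only access to the same data,
# so switching apps does not mean abandoning one's identity or history:
pod.grant("new-social-app", "profile", "read")
print(pod.read("new-social-app", "profile"))

# Revoking the first app's access leaves the data, and the second app, untouched:
pod.revoke("photo-app")
print(pod.read("new-social-app", "profile"))
```

The point of the sketch is the asymmetry it encodes: applications hold revocable permissions, not copies of the data, which is precisely the rebalancing of power the Solid project aims at.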


Whether Solid, or any similar approach, can scale is an open question. Even sympathetic observers note that adoption frictions remain and that the current web’s incumbents benefit from lock-in. But the broader idea, that interoperability and user-controlled data are prerequisites for a healthier web, is hard to dismiss. Email remains useful precisely because it is not owned by a single company. The web’s future may depend on rediscovering that spirit.


What, then, might the next twenty years look like?


One plausible future is the web as regulated public square. Governments, particularly in Europe, are already moving towards stronger rules on platform responsibility, political advertising transparency and data protection. Regulation will not make human beings wise, but it can make certain business models more expensive and therefore less attractive. If the law punishes reckless data extraction and manipulative design, the web’s incentives shift.


A second future is the web as fragmented territory. Under pressure from geopolitics, content controls and national security fears, states may push for more sovereign control of networks, standards and platforms. The result could be a balkanised web where information flows are filtered, commerce is compartmentalised and cross-border civil society becomes harder.


A third future is the web as interface for machine agents. As personal assistants powered by artificial intelligence become more capable, more of “using the web” may involve software acting on our behalf, reading, filtering, booking, buying and negotiating. That could reduce overload and make the web feel calmer. It could also intensify the struggle over data ownership, because an assistant that serves you needs access to your information, while an assistant that serves an advertiser needs access to you. Berners-Lee’s insistence on user control of data speaks directly to this fork in the road. 


The web’s future, in other words, is not pre-written by technology. It will be written by choices: legal choices, commercial choices and, unavoidably, moral choices. Berners-Lee’s enduring optimism is that it is “not too late” to fix what has gone wrong, but his recent remarks also underline that hope is not a substitute for design and governance. 


On balance the web has expanded human possibility more than it has diminished it. But the ledger is not closed. The question is no longer whether the web can connect the world. It already can. The question is whether the world can build institutions, laws and norms worthy of a tool that connects everyone to everyone else, instantly, at scale.


If we fail, the web will remain a brilliant machine for commerce and coercion, while becoming steadily worse at truth and community. If we succeed, it may yet resemble Berners-Lee’s original promise: a universal space where knowledge is shared, power is contested and human beings, not algorithms, remain sovereign.


Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, and all the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either with the buttons at the top of this page or become a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved.  Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please click the "About" page.
