
Have large language models discredited journalism?

Writer: Matthew Parish

Sunday 1 February 2026


A quiet question now hangs over almost every article we read online: was this written by a person, or by a machine, or by some collaboration of the two?


If we mean ‘written’ in the strict sense, with a large language model producing the first complete draft, then most journalism from reputable outlets is unlikely to have crossed that threshold. What is far more plausible, and increasingly documented, is that large language models are already becoming routine editorial infrastructure: tools for transcription, translation, summarisation, headline variants, search-friendly rewrites, grammar and style checking, background research prompts and the rapid production of social media copy. In other words, the model often does not replace the journalist so much as it sits beside her, invisibly, shaping the text that reaches the reader.


A survey of UK journalists by the Reuters Institute for the Study of Journalism found widespread and regular use of artificial intelligence tools in newsroom work, including copy editing and grammar checks, alongside even more common uses such as transcription and captioning. This is not a fringe practice confined to a few enthusiasts. It is becoming normalised, particularly where it saves time on labour that used to be done by junior staff, freelancers or overworked reporters.


Major news agencies have also tried to draw clear lines around what is, and is not, acceptable. The Associated Press, for example, has published standards that treat generative output as unvetted material and place responsibility back onto human editorial judgement. The significance is not merely that rules exist. It is that large institutions have found it necessary to say, in public, that these tools are already close enough to the production process to require boundaries.


So the more accurate framing is this: while much journalism is still reported and drafted by humans, an increasing proportion is being proofread, polished or operationally accelerated by large language models, often without disclosure to the reader. That change matters for the integrity of journalism, not because it automatically produces lies, but because it alters responsibility, incentives and ultimately trust.


The first integrity problem is provenance


Journalism has always relied on a chain of custody. A reporter speaks to sources, checks documents, visits places, cross-checks claims and then writes. Editors query gaps and insist upon corroboration. Lawyers assess risk. The final copy has a lineage, even if the public cannot see it.


A large language model interrupts that lineage in two ways.


First, it can introduce text whose origin is fundamentally opaque. If a model proposes a paragraph, a metaphor or a framing, neither the journalist nor the reader can easily tell what it is drawing upon. The prose may sound plausible, but plausibility is not evidence. This is why news organisations that permit experimentation typically insist that a journalist must still begin, edit and vet the work. 


Secondly, even when a model is used only for ‘editing’, it can drift from form into substance. A grammar pass can quietly change meaning. A ‘tightening’ rewrite can remove caveats. A summary can omit the one sentence that supplied essential context. Over time the reader receives not simply a corrected human text, but a text partly co-authored by a system whose priorities are statistical fluency rather than truth.


This is where integrity begins to fray. Journalism is not merely a sequence of words. It is a claim to reliable knowledge about the world. If the provenance of those words becomes uncertain, the claim weakens.


The second integrity problem is accountability


When journalism is wrong, there is supposed to be someone answerable. The editor signs off. The publication corrects. The reporter explains. In serious failures, careers end.


Large language models muddy that moral clarity. If a fabricated detail enters a piece because a model suggested it and a human did not notice, who is responsible? In principle the answer is easy: the newsroom is responsible, because it published. In practice the temptation arises to treat model error as a kind of weather, regrettable but nobody’s fault. The older disciplines of verification can then soften.


That is not hypothetical. Across many sectors, regulators are beginning to insist on human verification precisely because so-called hallucinations are not rare edge cases. A recent example in another professional field is legislation designed to require lawyers to verify any artificial intelligence output used in court filings, prompted by repeated incidents of fabricated citations. Journalism is not law, but it shares the same vulnerability: a fluent sentence can smuggle in falsehood with very little friction.


The more newsrooms rely on models for speed, the more they need explicit internal doctrines of responsibility: what may be automated, what must never be automated and who personally checks what. Without that, accountability becomes a fog.


The third integrity problem is homogenisation


Even when a model produces ‘accurate’ prose, it tends to produce prose that sounds like everything else: smooth, balanced, politely hedged, rhetorically familiar. That is useful for routine tasks and deadly for distinctive voice.


This matters in two ways.


One is cultural. Journalism is partly a civic craft, shaped by local idiom and national sensibility. If the same handful of models and the same editing prompts are used across continents, the result is not simply standardisation of grammar, but standardisation of thought patterns. The language becomes international, frictionless and forgettable. Readers may not be able to name what is missing, but they feel it as a loss of human presence.


The other is epistemic. Diversity of voice is not only aesthetic. It is a defence against groupthink. If fifty reporters phrase uncertainty in the same way, frame conflict with the same templates and summarise policy with the same stock abstractions, journalism becomes easier to manipulate because it becomes predictable.


The risk is amplified by how people now encounter news. Many readers do not read full articles. They read summaries, snippets and machine-generated overviews. Recent research on Microsoft Copilot found that its news summaries could sideline local journalism in favour of larger foreign sources, with implications for diversity and democratic attention.  The more the public consumes journalism through machine intermediaries, the more editorial diversity becomes both harder to sustain and more vital.


The fourth integrity problem is the economics of substitution


Journalism is expensive because reporting is expensive. If an industry can replace a portion of its labour with cheaper machine output, it will. The danger is not simply job loss, though that is real. The deeper danger is that editorial budgets shift away from reporting towards distribution, optimisation and volume.


A model can produce ten variants of a headline in seconds. It cannot sit in a cold municipal office for six hours waiting for a document, cultivate a frightened source over months, or endure the social pressure that comes with asking hostile questions in public. If newsroom leadership becomes accustomed to the productivity gains of model-assisted copy production, there is a structural temptation to invest in content throughput rather than in reporting depth.


This is the path towards ‘journalism’ that is actually content: quick takes, reheated summaries, low-risk aggregation and endless explainers built from other explainers. The copy becomes abundant precisely where knowledge is thin.


What would integrity look like in an LLM-shaped newsroom?


None of this requires panic. It requires clarity.


First, newsrooms should treat model output as they would treat an anonymous tip: potentially useful, never publishable without verification. The Associated Press’s principle that generative output should be treated as unvetted material is a good foundation. 


Secondly, they should practise transparent disclosure, not as a marketing gimmick but as an editorial norm. If a piece was translated by a model and checked by a human, say so. If a data-heavy story used automated summarisation of documents that were then reviewed by a reporter, say so. Disclosure does not weaken trust. Hidden dependence does.


Thirdly, they should maintain a firewall between models and confidential sources. A newsroom that casually pastes sensitive material into third-party systems risks betraying people who already take extraordinary risks to speak. Integrity includes operational security.


Fourthly, editors should protect voice. If every sentence is smoothed into the same ‘professional’ register, journalism loses the texture that tells the reader a human mind is present. Style is not ornament. It is a signature of responsibility.


Finally, publishers should distinguish between two kinds of speed. There is the speed of production, which models can increase, and there is the speed of truth, which cannot be rushed without loss. If journalism sacrifices the second for the first, it will discover that a fluent industry can still become an untrusted one.


The central question


Are many or most media articles written by large language models, or edited by them? For the reputable parts of the press, the best-supported answer is that full machine authorship is not yet the dominant mode, but model-assisted editing and production workflows are rapidly becoming common. 


What that means for integrity is straightforward.


Journalism’s legitimacy rests on the reader believing that someone took responsibility for truth. Large language models can be useful servants, but they are poor bearers of responsibility. The future integrity of journalism therefore depends upon whether editors keep the moral centre of the craft intact: named humans who verify, who correct, who can be questioned and who accept blame when the work fails.


If newsrooms do that, models may become like spellcheck, only more powerful: an instrument that improves the surface while the human mind guards the substance.


If they do not, journalism risks becoming an industry of plausible paragraphs, published at scale, with nobody quite sure where the words came from and nobody able to explain why the public should trust them.


Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, including the geopolitical and diplomatic consequences of the war and the tremendous advances in military technology it has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either using the buttons at the top of this page or by becoming a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved. Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please see the "About" page.
