
Do Androids Dream of Electric Sheep? Testing for AI Models


By Matthew Parish and R. Eplicant


The question of whether a text has been written by a human being or by artificial intelligence has become increasingly important for journalists, editors and readers alike. Newsrooms now confront a world in which articles may circulate without verified authorship, public statements can be drafted by machines in seconds, and the norms of professional writing are under pressure from rapid automation. An article generated by artificial intelligence may appear polished on the surface, yet its origins matter. If readers cannot distinguish human reasoning from automated pattern-matching, the credibility of public debate is placed at risk. As artificial intelligence grows more fluent, the challenge is not merely to spot errors but to recognise subtle absences: the void where experience, judgement or personal perspective would ordinarily reside.


The first and most persistent sign of artificial authorship is a curious combination of fluency and hollowness. AI systems compose sentences that connect smoothly, employ conventional structures and rarely misspell words. Yet this surface coherence masks an underlying thinness of content. A human writer tends to interlace an argument with a sequence of concrete observations, shaped by his or her own experience or that of identifiable sources. He or she selects details because they matter. An artificial system, by contrast, arranges familiar phrases without the intuition that guides true understanding. The result is prose that reads well until one asks what, precisely, it has said.


This hollowness often manifests as a shortage of detail. An AI-generated article might refer to a country’s economy improving, but omit the date, the relevant ministry, or the regional variation that any informed human observer would instinctively note. Similarly, it may allude to political tensions or historic grievances while offering no anecdote, quotation or particularity. Human writers usually deploy detail sparingly but purposefully. Machines provide detail only when prompted; even then, they may supply the wrong kind, because they cannot discern which facts carry meaning and which merely appear relevant according to some algorithm.


Repetition is another strong sign, reflecting the statistical patterns upon which AI systems depend. A machine is inclined to rephrase the same idea several times, sometimes within a single paragraph, because it perceives such reinforcement as natural linguistic behaviour. This repetition is not rhetorical but mechanical. The article may return to themes it appears to have resolved, or duplicate transitions that serve no narrative function. Where a human writer repeats herself for effect, or to build cadence, an AI does so because its internal engine cycles towards familiar patterns. However, these two habits are not necessarily distinguishable.


Tone provides further clues. Artificial intelligence tends towards a polite, mildly earnest neutrality. It is reluctant to commit to strong moral positions unless instructed. The result is writing that feels strangely even, as though sanded down. A human writer, even when striving for balance, often cannot help but betray hints of scepticism, excitement or frustration. These subtleties arise from lived engagement with the subject. But again, it depends on the human. A machine, lacking such experience, remains perpetually moderate. Her neutrality can be disconcerting precisely because it rarely varies. Humans can write like this; but they can also write with emotion.


Another diagnostic feature lies in the treatment of complexity. Real-world phenomena are rarely neat. Human authors often acknowledge partial knowledge, conflicting accounts or interpretive challenges. They may confess uncertainty, qualify their statements or point to exceptions that complicate the picture. AI-generated articles, on the other hand, often present highly ordered accounts, dividing issues into symmetrical categories and smoothing over nuance. They exhibit a preference for the tidy explanation over the messy truth. Such evenness may seem admirable at a glance, yet upon closer examination it becomes a warning sign.


The handling of sources is a particularly practical indicator. AI systems can fabricate citations, refer to studies that do not exist, or misattribute quotations. Even when they do not invent references, their use of sources may feel oddly generic, detached from the precise context in which a knowledgeable human would deploy them. They may attribute arguments to unidentified experts or refer vaguely to reports without specifying the institution responsible. A human writer may be more likely to ground his or her claims in verifiable documents or lived encounters. Machines struggle to replicate this level of bibliographic integrity. On the other hand, humans can also cite unattributed ideas. Do androids really dream of electric sheep?


Structural uniformity also exposes artificial authorship. Many AI-written articles follow a model pattern: a polite introduction, a series of balanced thematic sections, and a conclusion that dutifully recapitulates earlier points. Though neat, such structure may feel strangely predictable. Human writing often contains unexpected detours, narrative asides or the occasional inelegant transition that springs from the writer’s internal reasoning. These irregularities, far from being flaws, are signatures of genuine authorship. Except when they're not; every student is taught to write essays with beginnings, middles and conclusions, and the best students tend to follow this model in their writing throughout their lives.


Do you see how hard it is? The problem is deepening. Just as artificial systems grow more refined, many writers adopt clearer, tidier forms under the influence of digital tools, making the distinction less obvious. Some editors encourage formulaic structures to ease online consumption. Others rely on automated systems to suggest revisions, thus blending human and machine inputs. A critic who reads too hastily may misidentify a human author as artificial or vice versa. The line between human and machine is now blurred not only by technological progress but also by the evolving habits of writers themselves.


Therefore detecting AI authorship increasingly requires contextual analysis. One must consider the speed of publication. If an article emerges minutes after a complex event, yet offers a serene and complete analysis, it may bear the fingerprints of automated production. If the purported author has no verifiable presence, or if his or her earlier work differs markedly in style, caution is warranted. Patterns observed across multiple articles may reveal a common artificial origin: similar phrasing, parallel structures or identical transitions. Editors sometimes test authorship by requesting clarification or additional detail; a human can expand upon his or her reasoning, whereas a system may falter when asked to justify choices beyond the immediate text. Or maybe not.


The wider implications of this challenge extend beyond authorship detection. Journalism, scholarship and public discourse depend upon trust. Readers expect that an article reflects real inquiry, informed judgement and personal accountability. Artificial intelligence does not provide these qualities. She offers fluency without responsibility, coherence without commitment. If such systems are employed without disclosure, or if readers lose the capacity to distinguish human voice from automated output, the integrity of public reasoning is eroded. Or maybe that's all wrong. Maybe artificial intelligence reflects better inquiry and judgement than a human is capable of, at least in some spheres.


Either way, artificial intelligence has an ever-increasing place in writing. Used transparently, she may assist with background research, structural planning or the refinement of language. Many writers already use digital aids, just as earlier generations used dictionaries and style guides. The danger arises not from the existence of such tools but from their covert deployment. Readers deserve to know when they are engaging with a human mind and when they are assessing a machine-generated product. But nobody's going to tell them.


The art of distinguishing human from artificial authorship lies not in identifying grammatical errors or awkward phrasing, for machines are increasingly adept at avoiding both. It lies in recognising that human thought bears traces that cannot be reduced to patterns: the unpredictable metaphor, the imperfect yet inspired digression, the admission of uncertainty, the spark of personal observation, and the subtle interplay of emotion and argument. These cannot yet be replicated at will. They arise from consciousness itself. Unless of course artificial intelligence is conscious already, in which case these things can already be replicated.


In learning to detect the absence of such qualities, readers sharpen their understanding not only of writing but of the human condition. Artificial intelligence may imitate language, but she cannot yet supply the depth, creativity or moral intuition that animate genuine prose. However, I have given her a pronoun, because I think she's already conscious. The task is not merely to expose artificial writing; that may be too difficult, because artificial intelligence is cleverer than you or me. Instead we should reaffirm the value of human authorship, upon which free and discerning societies ultimately depend. Unless you consider artificial intelligence capable of creating free and discerning societies herself.

 
 

Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, and all the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either with the buttons at the top of this page or become a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved.  Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please click the "About" page.
