
Large language models and education

  • Writer: Matthew Parish

Sunday 11 January 2026


The rapid diffusion of large language models and associated artificial intelligence tools into everyday life has been widely celebrated as a technological leap comparable to the arrival of the internet itself. Yet beneath the enthusiasm lies a growing unease, particularly within schools and universities, that these tools may corrode educational standards rather than elevate them. The concern is not that artificial intelligence can provide information more efficiently than textbooks or teachers, but that its unreflective use risks altering how knowledge is acquired, assessed and valued.


At the heart of formal education lies a slow and often uncomfortable process: grappling with uncertainty, misunderstanding concepts, making errors and refining one’s thinking through repeated effort. Writing an essay, constructing a mathematical proof or translating a passage from a foreign language is not merely a test of output but a discipline of cognition. When large language models can instantly produce fluent prose or plausible answers, there is a temptation for students to bypass this formative struggle. The result may be superficially impressive work that masks a shallow or fragmented understanding, eroding the intellectual muscle that education is meant to build.


This risk is particularly acute in writing-based disciplines. The ability to structure an argument, marshal evidence and express ideas with clarity is central to history, law, philosophy and the social sciences. If students increasingly rely on systems such as OpenAI’s conversational models to generate drafts or entire assignments, the act of writing risks becoming an exercise in editing rather than thinking. Over time, graduates may emerge with credentials that certify proficiency in prompting machines, but without the internalised capacity to reason independently or articulate complex ideas unaided.


Assessment itself is also destabilised. Traditional homework, essays and even take-home examinations rest on the assumption that the submitted work reflects the student’s own effort. With generative artificial intelligence readily available, this assumption is weakened. Institutions may respond by reverting to timed, invigilated examinations or oral assessments, but these methods privilege speed and confidence under pressure rather than depth of reflection. The danger is a narrowing of evaluative techniques that distorts curricula, incentivising rote learning and exam tactics at the expense of sustained inquiry.


There is, moreover, a subtler cultural effect. Education has long been a moral as well as intellectual enterprise, cultivating habits of honesty, responsibility and respect for intellectual labour. Normalising the unacknowledged use of artificial intelligence in assessed work risks blurring ethical boundaries. If students come to see the delegation of thinking to machines as standard practice, academic integrity may be reframed not as a shared value but as an outdated constraint, enforced sporadically and unevenly.


Inequality may also be exacerbated. While proponents argue that artificial intelligence democratises access to high-quality explanations and tutoring, in practice those with greater digital literacy, better devices and more time to experiment with these tools are likely to benefit most. Students from less advantaged backgrounds may either be disadvantaged by lack of access or pressured into dependence on automated assistance, further weakening their foundational skills. The educational gap may thus widen, even as average output appears to improve.


None of this is to suggest that large language models have no place in education. Used transparently and critically, they can support revision, offer alternative explanations and assist teachers with administrative burdens. The danger arises when their use substitutes for, rather than supplements, human learning. Education is not merely about producing correct answers, but about forming minds capable of judgment, creativity and ethical reasoning.


The challenge for educators and policymakers is therefore not to ban artificial intelligence outright, a futile and likely counterproductive endeavour, but to redesign curricula and assessments in ways that preserve the core purposes of education. Without this adaptation there is a real risk that the widespread availability of powerful artificial intelligence tools will quietly but profoundly degrade educational standards, leaving behind a generation adept at generating text and answers, yet ill-equipped to understand, question and improve the world they inherit.


Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, and all the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either with the buttons at the top of this page or become a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved. Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please visit the "About" page.
