
Can large language models understand their users?


Friday 1 May 2026


The aspiration to make machines understand us has always been more ambitious than the aspiration to make them answer us. In the present age of large language models, this distinction has acquired a practical urgency. Systems built upon statistical patterns in language, such as those descended from the transformer architecture, have demonstrated extraordinary fluency. Yet fluency is not understanding, and understanding in the human sense is inseparable from a grasp of identity and motivation. The central challenge is therefore not merely to generate correct sentences, but to situate those sentences within an implicit theory of the person asking the question.


Human conversation depends upon a dense web of assumptions about interlocutors. When a journalist asks a question, he or she is presumed to seek clarity or accountability; when a soldier asks, he may seek survival; when a child asks, she seeks comprehension. The same sentence, uttered in each case, carries different meanings because it arises from different motivations. Philosophers from Ludwig Wittgenstein onwards have insisted that meaning is use — that language cannot be divorced from the contexts in which it is deployed. Large language models, however, are trained upon corpora stripped of precisely this contextual richness. They observe the words, but only imperfectly the lives behind them.


The consequence is a peculiar asymmetry. A human reader encountering a question instinctively constructs a mental model of the questioner. This model may be crude, biased or mistaken, but it is nevertheless operative. By contrast a language model must infer identity and motivation indirectly, through statistical cues embedded in phrasing, vocabulary or topic selection. It may detect that a query about “ballistic trajectories” is more likely to be professional than casual, or that a question about “bedtime stories” is more likely to concern a child. Yet such inferences are probabilistic rather than interpretative. They lack the grounding in lived experience that allows humans to revise their assumptions mid-conversation.
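
To make the contrast concrete, consider the following sketch in Python of cue-based inference at its simplest. The categories, cue words and weights are invented purely for illustration; no real system is this crude, but the logic is the same in kind.

```python
# A toy illustration of inferring user type from surface cues alone.
# The categories, cue words and weights are invented for the example;
# real systems learn such associations from data, not hand-written tables.
from collections import defaultdict

CUES = {
    "ballistic": {"professional": 3.0, "casual": 0.5},
    "trajectories": {"professional": 2.0, "casual": 0.5},
    "bedtime": {"child_related": 3.0, "casual": 1.0},
    "stories": {"child_related": 1.5, "casual": 1.0},
}

def infer_user_type(query: str) -> dict[str, float]:
    """Score candidate user types from cue words, then normalise to probabilities."""
    scores: dict[str, float] = defaultdict(lambda: 1.0)  # flat starting weight
    for word in query.lower().split():
        for user_type, weight in CUES.get(word.strip("?.,!"), {}).items():
            scores[user_type] *= weight
    total = sum(scores.values())
    if total == 0:  # no cues observed: nothing to infer
        return {}
    return {t: round(s / total, 3) for t, s in scores.items()}

print(infer_user_type("How are ballistic trajectories calculated?"))  # skews professional
print(infer_user_type("Suggest some good bedtime stories"))           # skews child-related
```

Such a scorer detects correlations between words and likely askers, but it has no means of revising itself when its guess is wrong; that revision is precisely what human interlocutors do instinctively.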


This limitation becomes particularly acute in domains where motivation is not merely informative but normative. Consider legal advice, medical guidance or wartime reporting. A request for information about drones might be made by a researcher, a policymaker, or a combatant. Each brings different ethical and practical stakes. Without an understanding of who is asking, the system risks providing answers that are technically correct but contextually inappropriate — or even dangerous. The challenge is not simply epistemic but moral.


Attempts to address this problem have taken several forms. One approach involves the accumulation of conversational context over time: by tracking prior exchanges, a model may infer a user’s interests, expertise and intentions. Another involves explicit personalisation — allowing users to provide information about themselves, their goals or their preferences. A third, more subtle, relies upon reinforcement learning from human feedback (RLHF) to shape model behaviour in ways that approximate human judgments about appropriateness.
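
The first two approaches can be sketched in a few lines. The class below, with its field names and its crude update heuristic, is an assumption made for the sake of illustration, not a description of any deployed system.

```python
# A minimal sketch of context accumulation and explicit personalisation.
# The class, its field names and the update heuristic are illustrative
# assumptions, not a description of any deployed system.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    stated_goals: list[str] = field(default_factory=list)          # explicit personalisation
    inferred_topics: dict[str, int] = field(default_factory=dict)  # accumulated context

    def record_turn(self, message: str) -> None:
        """Accumulate coarse topical evidence from one conversational turn."""
        for word in message.lower().split():
            if len(word) > 5:  # crude proxy for a content-bearing word
                self.inferred_topics[word] = self.inferred_topics.get(word, 0) + 1

    def declare_goal(self, goal: str) -> None:
        """Let the user state an interest or intention in their own words."""
        self.stated_goals.append(goal)

profile = UserProfile()
profile.declare_goal("background research on drone policy")
profile.record_turn("What payloads can reconnaissance drones typically carry?")
print(profile.stated_goals)
print(profile.inferred_topics)
```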


Yet each of these approaches encounters structural constraints. Context accumulation is limited by memory and by privacy considerations. Personalisation depends upon the user’s willingness and ability to articulate his or her own identity — a task that humans themselves often find elusive. Reinforcement learning, meanwhile, encodes the values of those who provide the feedback, raising questions about whose conception of appropriateness is being institutionalised. In all cases the model’s “understanding” remains derivative, constructed from patterns rather than grounded in consciousness.


There is also a deeper philosophical difficulty. Identity is not a static attribute but a dynamic process. A person’s motivations evolve over time, often within the course of a single conversation. A reader of the Lviv Herald may begin with a general interest in geopolitics and end with a specific concern about humanitarian consequences. To capture this fluidity requires not merely a model of the user, but a model of how that model should change — a meta-model of interaction. Current systems, for all their sophistication, struggle to maintain such adaptive representations without drifting into inconsistency or overfitting.
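
One way to picture such an adaptive representation is as a Bayesian update over hypotheses about the reader’s current concern. The hypotheses and likelihood figures in the sketch below are invented; the point is only that the model of the user is revised turn by turn rather than fixed at the outset.

```python
# A toy Bayesian update over hypotheses about what a reader currently wants.
# The hypotheses and likelihood values are invented; the point is only that
# the user model is revised turn by turn rather than fixed at the outset.

def update(prior: dict[str, float], likelihood: dict[str, float]) -> dict[str, float]:
    """Posterior is proportional to prior times likelihood, renormalised."""
    unnormalised = {h: prior[h] * likelihood.get(h, 1e-9) for h in prior}
    total = sum(unnormalised.values())
    return {h: round(p / total, 3) for h, p in unnormalised.items()}

belief = {"geopolitics": 0.5, "humanitarian": 0.5}                  # opening uncertainty
belief = update(belief, {"geopolitics": 0.8, "humanitarian": 0.2})  # an early question
print(belief)
belief = update(belief, {"geopolitics": 0.1, "humanitarian": 0.9})  # a later question
print(belief)  # the view of the same reader has shifted mid-conversation
```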


Moreover there is an inherent opacity in the data upon which these systems are trained. The vast textual corpora that underpin contemporary models contain traces of countless identities and motivations, but these traces are anonymised, fragmented and decontextualised. The model learns correlations between linguistic forms and likely intents, but it does not learn the causal structures that generate those intents. It knows that certain phrases are associated with certain kinds of users, but it does not know why. This absence of causal understanding limits its capacity to generalise to novel or ambiguous situations.


The challenge is compounded by the ethical imperative to avoid intrusive inference. To “understand” a user’s identity too well may entail making assumptions about sensitive characteristics — political beliefs, health conditions or personal vulnerabilities — that ought not to be inferred without consent. There is therefore a tension between the desire for contextual sensitivity and the obligation to respect privacy and autonomy. A system that perfectly models its users risks becoming a system that surveils them.


In practice the solution is likely to be plural rather than singular. It will involve a combination of improved contextual modelling, more transparent personalisation mechanisms and clearer communication about the limits of the system’s understanding. It may also require a shift in expectations. Rather than seeking to replicate human understanding in its entirety, designers might aim to create systems that are explicitly aware of their own uncertainty — systems that ask clarifying questions when motivation is ambiguous, rather than presuming to know.
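
That final suggestion is easily pictured. In the sketch below, the system answers only when one interpretation of the user’s motivation clearly dominates; the confidence threshold and the intent distributions are illustrative assumptions rather than values any real system uses.

```python
# A sketch of uncertainty-aware behaviour: answer only when one reading of
# the user's motivation clearly dominates, otherwise ask. The threshold and
# the intent distributions below are illustrative assumptions.

def respond_or_clarify(intent_probs: dict[str, float], threshold: float = 0.75) -> str:
    """Answer under the most probable intent, or ask a clarifying question."""
    best_intent, best_p = max(intent_probs.items(), key=lambda kv: kv[1])
    if best_p >= threshold:
        return f"Answering on the '{best_intent}' reading of the question."
    options = " or ".join(intent_probs)
    return f"Before answering: is your interest {options}?"

print(respond_or_clarify({"research": 0.9, "operational": 0.1}))  # confident: answers
print(respond_or_clarify({"research": 0.5, "operational": 0.5}))  # ambiguous: asks
```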


There is finally a question of whether true understanding is even the appropriate goal. Human interlocutors do not always understand one another; indeed much of human communication consists in negotiating misunderstandings. What matters is not perfect insight into identity and motivation, but the capacity to respond in ways that are useful, respectful and adaptable. Large language models, for all their limitations, are already capable of approximating this standard in many contexts. The task ahead is to refine that approximation without succumbing to the illusion that statistical inference can wholly substitute for human judgment.


The challenge of teaching machines to understand us is also a mirror held up to ourselves. It forces us to articulate what we mean by identity, motivation, and understanding — concepts that are often taken for granted but rarely examined. As we attempt to encode these notions into systems of artificial intelligence, we are compelled to confront their complexity. The result may not be machines that fully comprehend their interlocutors, but it may be a deeper human comprehension of what such comprehension entails.

 
 

Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, covering the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology it has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either using the buttons at the top of this page or by becoming a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved. Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please see the "About" page.
