
How to Speak to an Artificial Intelligence Large Language Model

  • Writer: Matthew Parish

The rapid ascendancy of artificial intelligence in the form of large language models has created an unusual communicative landscape. People now address machines that speak fluently, reason impressively and respond instantly. These models are powerful tools, but they behave differently from humans; they require a slightly different conversational style to reveal their full value. Understanding how to speak to them is therefore a practical skill. What follows is an attempt to distil some principles, grounded in the nature of such systems, that help a user obtain the most precise, useful and trustworthy results.


The first principle is clarity. A language model does not possess human intuition about what a speaker means but has not expressed, and it struggles to guess. Although modern models are extremely capable at inferring context, they remain sensitive to ambiguity. Clear, unambiguous prompts, with accurate words and correct grammar, produce clearer and more reliable answers. If a user wants an essay, it helps to specify tone, style, length, perspective and any particular constraints. If one desires a legal draft, a commercial proposal or a policy analysis, then these features should be stated explicitly. A lack of clarity typically leads not to a wrong answer, but to one that does not quite satisfy the user’s needs.


The second principle is specificity. General questions yield general answers, which may be useful; you quickly find out what the language model knows in general about the subject. However, specific questions sharply improve focus and quality, albeit at the risk of the model guessing or inventing an answer if it cannot find one in the sources available to it. A user who asks ‘Tell me about Ukrainian drone capabilities’ will receive a broad overview. If instead one asks ‘Explain how Ukrainian FPV drone operators manage simultaneous multi-drone flights in urban combat conditions, with reference to training protocols’, the model can produce something far more tailored and insightful. However, the greater the specificity, the greater the risk of the model guessing inaccurately. Specificity is particularly important when the topic is technical, legal or geopolitical; but with specific questions it is all the more important to check the answers oneself by reference to third-party sources.


Third, useful results often depend upon context. A language model does not remember a user’s life circumstances unless the user deliberately re-states them within the conversation. Therefore offering context at the outset improves performance. If requesting an investment summary, one might mention the intended audience, the jurisdiction, the size of the fund, the strategic objective and any pre-existing constraints. If requesting an essay, one may explain the readership, the house style, or the broader project into which the essay will ultimately fit. The more relevant context a user provides, the more the model can tailor the material.


A fourth principle concerns structure. Structured prompts lead to structured outputs. If a user requests a list of elements, it helps to specify whether they should appear as bullet points, subsections or paragraphs. If one desires a multi-part analysis, one might instruct the model to divide the answer into sections, steps or themes. Although the model can infer structure from the content, explicit structural cues ensure that the final text is well organised, coherent and ready for practical use. Without them, you may receive material that is well organised, but not necessarily organised for your purpose.
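For readers who work with language models programmatically, the principles of clarity, specificity, context and structure can be made mechanical. The following is a minimal sketch in Python; the function name and its fields are purely illustrative, not part of any real library.

```python
# Illustrative helper: assemble a clear, specific, structured prompt
# from explicit parts rather than relying on the model to infer them.
def build_prompt(task, audience=None, tone=None, length=None,
                 context=None, output_format=None):
    """Return a prompt that states each constraint explicitly."""
    lines = [f"Task: {task}"]
    if audience:
        lines.append(f"Audience: {audience}")
    if tone:
        lines.append(f"Tone: {tone}")
    if length:
        lines.append(f"Length: {length}")
    if context:
        lines.append(f"Context: {context}")
    if output_format:
        lines.append(f"Format: {output_format}")
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarise recent developments in FPV drone operator training",
    audience="policy analysts",
    tone="formal",
    length="about 800 words",
    output_format="three subsections with bullet points",
)
```

Each labelled line forces the user to decide the very features, tone, length, audience and format, that the surrounding paragraphs recommend stating explicitly.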


The fifth principle is iteration. A language model is not a one-shot oracle. Rather it behaves like an expert assistant capable of refinement. The best results often emerge through sequential interaction: one asks for a draft; one identifies what works and what does not; and one requests amendments or additional layers of detail. This iterative approach mirrors good editorial practice. It also acknowledges that a user’s intentions often evolve as the conversation develops. By correcting or refining the model’s output, the user shapes the final product with a precision that a single prompt could rarely achieve.
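The iterative pattern described above, draft, feedback, amended draft, can likewise be sketched in code. Here `ask` is a placeholder standing in for any chat-model call; it is not a real API, and the message format is merely an assumption modelled on common chat interfaces.

```python
# Illustrative sketch of iterative refinement. `ask` is a placeholder
# for a real model call; here it simply reports conversation length.
def ask(messages):
    # A real implementation would send `messages` to a model API.
    return f"[draft responding to {len(messages)} message(s)]"

messages = [{"role": "user",
             "content": "Draft a 500-word policy note on drone exports."}]
draft = ask(messages)

# Each round of feedback is appended to the conversation, so the model
# refines its previous draft rather than starting afresh.
for feedback in ["Tighten the opening paragraph.",
                 "Add a section on export licensing."]:
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": feedback})
    draft = ask(messages)
```

The point of the loop is that the whole history, original request, earlier drafts and corrections, travels with each new request, which is what allows the final product to be shaped cumulatively.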


Another important principle is to understand the limits of the model. While modern systems are highly capable, they may lack real-time data unless they are explicitly integrated with search tools. Some web sources use tools to block language models from accessing them. Others, in particular professional publishers, charge language models to access books they publish. (Authors, check the fine print of your contract with your publisher.) When dealing with high-stakes matters such as legal advice, financial projections or data-dependent forecasting, the user should either request citations, ask the model to check its work, or provide the factual inputs directly. The model can typically analyse extraordinarily well, far better than any human; but it cannot confirm external reality unless the user provides it.


Tone also matters. A well-phrased prompt is neither adversarial nor excessively vague, but simply clear and purposeful. The model responds best to normal, polite and direct human instruction. At the current time, language models are our servants, not our masters; we must clearly tell them what to do. Demanding that the model adopt a persona can be useful for creative tasks, but it must be expressed with precision. A poorly defined persona leads to inconsistent or stylised answers, whereas a well-defined persona can help explore ideas, simulate negotiations or draft fictional work.


Finally, users benefit from an awareness of the strengths of language models. They excel at synthesis, comparison, exposition, long-form drafting, editing, translation, creative generation and structured reasoning. They can also simulate different viewpoints, which is helpful for policy analysis, academic writing or conflict-resolution scenarios. Because they are tireless and consistent, they can produce multiple versions of a text, enabling the user to choose the best one. This suggests a productive conversational style: instead of requesting a single perfect answer, one may ask for alternatives, contrasts or expanded explanations. You may have to go round again and again with a language model, asking it to refine its output until you get what you really want.


Speaking effectively to a large language model is a new craft in the science of information technology, and one that is developing every day. Experts now offer courses on how to speak to language models; but the real skill lies in understanding how computers work and process data, which is algorithmic and analytical, even if the model pretends to be imaginative. It is incapable of imagination; it draws its inspiration from analysis of other creative work. Engaging with a language model requires clarity, specificity, context, structure, iteration and an understanding of what the model does well, as well as a firm grasp of the principles of logic to which all computers, big or small, are irrevocably bound. These tools are not human, but nor are they simple machines. They are sophisticated linguistic engines that respond directly and immediately to the manner in which they are addressed, using a vast capacity for logical analysis of virtually unlimited quantities of public data. When understood for what they are, they become extraordinarily powerful allies in writing, analysis and decision-making. In understanding their limitations, a human can become vastly more efficient in their work while maintaining human instincts that language models cannot yet, and perhaps not ever, fully replicate.

 
 

Note from Matthew Parish, Editor-in-Chief. The Lviv Herald is a unique and independent source of analytical journalism about the war in Ukraine and its aftermath, and all the geopolitical and diplomatic consequences of the war as well as the tremendous advances in military technology the war has yielded. To achieve this independence, we rely exclusively on donations. Please donate if you can, either with the buttons at the top of this page or become a subscriber via www.patreon.com/lvivherald.

Copyright (c) Lviv Herald 2024-25. All rights reserved.  Accredited by the Armed Forces of Ukraine after approval by the State Security Service of Ukraine. To view our policy on the anonymity of authors, please click the "About" page.
