
Sylvie Delacroix is the inaugural Jeff Price Chair in Digital Law and the director of the Centre for Data Futures (King’s College London).
In the excitement following ChatGPT’s release, much of the discussion has focused on these systems’ purported ‘intelligence’: a source of unhelpful hype that distracts from what truly matters about these tools. As a counter to that hype, Alison Gopnik suggests we should simply view large language models (LLMs) as yet another cultural technology, like writing, print, or libraries.1 But this assessment goes too far in the opposite direction, missing what makes these tools different from their predecessors.
To understand this difference, we need to take a step back and reflect on the role of conversation in our lives. If conversation were merely a conduit for information exchange, Gopnik’s comparison might hold. But we converse for myriad reasons, many of which have little to do with filling gaps in our knowledge. Sometimes, we chat simply because we share a physical space: be it a train compartment, a kitchen table, or a garden bench. At other times, we talk to uplift others, or to coordinate joint endeavors. Crucially, we also find ourselves engaging in dialogue as a means of making sense of the world and those around us.
Consider Emily, a general practitioner in the UK. After a long day of seeing patients, she’s troubled by a phone consultation from the previous day. A patient and his family reacted angrily to her advice to wait for further test results. She knows this patient has struggled to get support for his children with special needs, and she worries her defensive reaction may have deepened his sense of disenfranchisement. With her colleagues caught up in their own heavy caseloads, Emily turns to ChatGPT, describing her concerns about potentially mishandling the sensitive situation. Through this exchange, she finds herself better able to articulate what went wrong in the conversation and feels more confident about moving forward.
Some might argue that Emily should have waited to discuss her concerns with a human colleague; that turning to an LLM for such a sensitive matter is inappropriate. Yet in Emily’s situation, like many others, the choice isn’t between an LLM and an ideal human conversation partner, but between an LLM and no immediate conversation at all. Her colleagues are overwhelmed with their own caseloads, and her need to process and understand the interaction is immediate and pressing. This practical reality makes it crucial to understand exactly what happens when we engage with these tools as conversation partners.
These sense-making conversations are delicate constructions that require special kinds of interlocutors. Consider Emily’s situation: What if ChatGPT had responded to her concerns with absolute certainty, without acknowledging the nuances of the situation or the possibility of alternative interpretations? She might have been misled into thinking there was a single ‘right’ way to handle the interaction, potentially hindering her ability to learn and grow from the experience. This highlights a critical challenge: how should LLMs communicate uncertainty, especially when the stakes are high?
The current focus on quantifying uncertainty in LLM outputs (such as measuring the dispersion of possible responses), while valuable, misses crucial qualitative aspects of uncertainty communication. When a human expresses uncertainty in a sensitive conversation, like one about end-of-life care, the goal isn’t merely to signal the potential inaccuracy of their statements. Rather, it serves as a humility marker that creates space for others to share their views, however tentative they might be.
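To make that contrast concrete, here is a minimal sketch of what the quantitative approach amounts to in practice. It assumes nothing beyond the ability to sample several answers to the same prompt (the `sample_responses` function below is a hypothetical stand-in for any such LLM call, and its canned answers are purely illustrative), and it reduces ‘uncertainty’ to a single dispersion score.

```python
# A minimal, illustrative sketch (not any particular system's method): quantify
# uncertainty by sampling several responses to the same prompt and measuring
# how dispersed the answers are.

import math
from collections import Counter


def sample_responses(prompt: str, n: int = 10) -> list[str]:
    """Hypothetical stand-in: in practice this would call an LLM n times
    with temperature > 0 and return the n sampled answers."""
    return ["wait for results", "wait for results", "refer now",
            "wait for results", "refer now", "wait for results",
            "wait for results", "seek a second opinion", "wait for results",
            "wait for results"]


def dispersion_score(responses: list[str]) -> float:
    """Shannon entropy over the distinct answers, normalised to [0, 1].
    0 means every sample agrees; 1 means the samples are maximally spread."""
    counts = Counter(r.strip().lower() for r in responses)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy


answers = sample_responses("Should I revisit yesterday's consultation with the family?")
print(f"dispersion score: {dispersion_score(answers):.2f}")
```

A score like this can tell a system that its own samples disagree; it cannot tell it how to hold that disagreement open for someone like Emily, which is precisely the qualitative work that humility markers do.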
Emily’s interaction with ChatGPT proved valuable precisely because it helped her explore the ethical dimensions of her conversation with the patient. Some clinical decisions involve ethical complexities that are less immediately visible. A GP who recognizes that suspected domestic abuse raises questions about documentation has already done crucial perceptual work. Should this suspicion be noted in patient records? Might it protect or endanger the patient, help or harm children in the household? The underlying uncertainty stems from evolving professional norms and the way ethical standards apply to this particular, often opaque situation.
An LLM that effectively communicates such ethical uncertainty works in the same way as human humility markers,2 encouraging users to consider different perspectives and engage in more thoughtful deliberation. This capacity to support rather than foreclose ethical reflection matters increasingly as LLMs are deployed in fields ranging from healthcare3 to justice and education. In these domains, the qualitative nature of conversations shapes our efforts to navigate evolving professional and ethical values.4
Looking ahead, we face both opportunities and challenges. Can we develop LLMs that communicate uncertainty in ways that enhance our ability to create space for diverse viewpoints? The human feedback-based refinement methods used by many large language models offer an opportunity to build truly participatory interfaces,5 distinct from the often exploitative, profile-based optimization processes of search engines.
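As a sketch of what such a participatory interface might gather, consider the kind of preference record that human-feedback-based refinement typically relies on. The field names below are illustrative assumptions rather than any particular system’s schema; the point is that the optional free-text rationale carries exactly the qualitative signal that profile-based optimisation tends to discard.

```python
# An illustrative sketch of a preference record a participatory feedback
# interface might collect, mirroring the preference pairs used in
# human-feedback-based refinement. Field names are assumptions, not a vendor schema.

from dataclasses import dataclass


@dataclass
class PreferenceRecord:
    prompt: str          # the user's question or concern
    response_a: str      # first candidate answer shown
    response_b: str      # second candidate answer shown
    preferred: str       # "a" or "b", as chosen by the user
    rationale: str = ""  # optional free-text reason: the qualitative signal
                         # that a purely quantitative score misses


record = PreferenceRecord(
    prompt="How should I follow up after a tense consultation?",
    response_a="There is one correct way to handle this: ...",
    response_b="There are several reasonable readings of what happened: ...",
    preferred="b",
    rationale="Acknowledges uncertainty and invites reflection.",
)
```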
As these tools become increasingly integrated into our professional and personal lives, we must ensure they’re developed in ways that support meaningful dialogue and preserve the ‘human element’ in medicine.6 We need to move beyond simply quantifying uncertainty7 and focus on getting LLMs to communicate it in a way that fosters empathy, understanding, and ethical awareness. Just as Emily benefited from a conversation that helped her navigate a complex situation, we can all benefit from LLMs that encourage us to think critically and engage in open, honest conversations. The challenge lies not just in making these tools more accurate, but in ensuring they support rather than undermine the art of human conversation and understanding.
References
1. https://www.alisongopnik.com/Papers_Alison/science.adt9819.pdf [accessed 21/11/25]
2. S. Delacroix, ‘Designing with uncertainty: LLM interfaces as transitional spaces for democratic revival’, Minds and Machines, 35 (41), 2025.
3. D. Fraile Navarro, M. Lewis, C. Blease, R. Shah, S. Riggare, S. Delacroix and R. Lehman, ‘GenAI and the changing dynamics of clinical consultations’, British Medical Journal, 2025, https://doi.org/10.1136/bmj-2025-085325
4. M. Lewis and B. Hayhoe, ‘The digital Balint: using AI in reflective practice’, Education for Primary Care, 35 (6), 2024, 198–202, https://doi.org/10.1080/14739879.2024.2372606
5. S. Delacroix, ‘Moral Perception and Uncertainty Expression in LLM-Augmented Judicial Practice’, Minds and Machines, 35 (44), 2025.
6. M. Lewis, S. Delacroix, D. Fraile Navarro and R. Lehman, ‘The human element in the age of AI: Balancing technology and meaning in medicine’, in R. Shah and R. Clarke (eds), Finding meaning in healthcare: Looking through the hermeneutic window (Routledge, 2025), https://doi.org/10.4324/9781003517665
7. S. Delacroix, D. Robinson, U. Bhatt, J. Domenicucci, J. Montgomery, G. Varoquaux, C. H. Ek, V. Fortuin, Y. He, T. Diethe, N. Campbell, M. El-Assady, S. Hauberg, I. Dusparic and N. Lawrence, ‘Beyond Quantification: Navigating Uncertainty in Professional AI Systems’, RSS Data Science and Artificial Intelligence, 1 (1), 2025.