Ian Peake is an Addictions Medical Officer in Glasgow
Still in the Room: When AI Knows the Answers, What Will We Bring?
In the world of addiction medicine, patients often arrive with little to shield them. Their lives may be fraying from trauma, isolation, or profound instability—but they still come. Not always for answers. Sometimes just to sit in a room with someone who won’t look away.
Lately, though still infrequently, a new kind of presence has begun arriving too. Read aloud or displayed on a phone screen: a full diagnostic and treatment pathway proposed by an AI assistant. On the few occasions it’s happened, the plans have been, by turns, reasonable if outdated, full-bloodedly maverick, detailed, plausible and remarkably sensitive to individual circumstances and particular trauma.
I’ve relied on AI tools myself—to synthesise policy updates, to surface relevant literature that might otherwise get lost in the flood. These tools can be extraordinarily helpful: lucid, fast, unflagging. They don’t get distracted, tired, or caught in moral knots—not unless we bring those knots with us.
If that’s happening here—where people arrive traumatised, economically marginalised, and often navigating multiple intersecting challenges—then I imagine it’s happening everywhere.
I’ve felt defensiveness myself. We’ve rolled our eyes at ‘Doctor Google’ in recent years, but now ‘Doctor ChatGPT’ and colleagues are rotating in, and they’re increasingly the steadier, surer ‘Registrar’ to the faltering ‘SHO’ of earlier iterations. There’s a flicker of erasure in that moment. If a bot can generate the care plan, where do I fit in? What becomes of all those long nights, the years of study, the hard-earned instincts that emerge only after being broken open by the job again and again?
Even when someone brings a plan written by a machine, it still matters that they came. They still chose a human face. That means we still have something to offer.
Knowledge is no longer rare. Authority no longer goes unchallenged. But was that ever the best of us? On my best days I have felt the weight of the unbearable carried in and helped it be set down. That’s the core of what we’re protecting. And that’s not easy to automate.
The future may not belong to those with the sharpest memory or fastest mind. I think it belongs to those who can stay in the room. Who can tolerate ambiguity. Who can say: “Yes, the AI was right—but let’s talk about how that feels.” Or: “The plan it gave you makes sense—but here’s what it doesn’t understand about your housing situation, your recent relapse, your daughter’s care.”
The future doctor may be less the sole expert and more a kind of relational anchor—someone grounded in science but unshaken by challenge, open to ideas from patient, family, colleague, or algorithm. We bring discernment, perspective, humanity. We’re invested.
In my own work, I try to play the long game of relationship. I meet people where they are—sometimes in acute distress, sometimes presenting with the layered complexities of long-term trauma—and I try to avoid the trap of seeing their whole being as a pass/fail problem to be solved in that moment. Instead, I offer what I can: validation, reframing, a safe space, and an undertaking to return.
What I’m noticing, I suppose, is that presence itself is becoming more visible, more valuable. Not just as a soft skill, but as a foundation—something the machines can’t yet replicate in full. Something we already do, and can choose to do with even more intention.
Some of the doctors I admire most aren’t defined by their dazzling brilliance or encyclopaedic recall—but by their steadiness. They’re the ones who stay present when things are messy, uncertain, or thankless. The ones whose patients trust them not just for their knowledge, but for their willingness to walk with them through difficult terrain.
So perhaps as AI grows ever more precise, we let the machines in as colleagues to help write the plans and chase the bloods. Let them listen, speak kindly, even build trust. Perhaps then our own task grows simpler but deeper: not to know everything, but to embody knowledge where it matters. When life grows tangled and choices carry weight, when someone must say what’s safe, what’s fair, what’s enough—it falls to us. Presence isn’t just a soft skill—it’s a duty. And that, still, is medicine.
Featured Photo by Mohammadreza alidoost on Unsplash