Post-Turing Clinical Relationships: how AI is reshaping patient behaviour before the consultation

Adam Phillips is a UK medical student and former IBM technology consultant, now working in venture capital with an interest in how AI is changing healthcare.

Simon Rudland is visiting professor of integrated digital health at the University of Suffolk.

Medicine is still debating whether artificial intelligence will match or exceed human diagnostic skill. But the most consequential change is already happening elsewhere. It is unfolding quietly in the relationships patients are forming with AI systems, and in the narratives they bring with them before a clinician ever enters the room. If general practice only looks for it inside the consultation, it will be reacting to consequences rather than causes.

We describe these dynamics as post-Turing clinical relationships (PTCRs). In these relationships, patients develop sustained, functionally supportive interactions with AI tools that influence how they interpret symptoms, regulate anxiety, decide when to seek care, and engage with clinicians. The changes are uneven, but they are already reshaping consultations and continuity in ways general practice is only beginning to notice.

The original Turing Test asked whether a machine could convincingly imitate a human.¹ We argue that this framing is less useful in healthcare than it might first appear. AI systems do not need to appear human to influence behaviour. They need to be available, attentive, relevant, and psychologically safe. What matters is not imitation, but attunement. Earlier visions of AI in medicine anticipated closer human–machine collaboration, but did not fully account for the behavioural relationships now emerging.²

Patients are arriving differently

Younger patients in particular are interacting with AI in ways clinicians rarely see directly. Much of the current discussion about AI focuses on accuracy and safety, but these are only part of the picture. Behavioural sequencing matters just as much: whom patients speak to first, how they frame concerns, when they escalate, and how they manage uncertainty in the meantime. Digital health research has long suggested that technologies shape health behaviour outside formal clinical encounters, often beyond the immediate visibility of clinicians.³

These behavioural shifts are also unfolding alongside structural changes in how patients are expected to access primary care. From October 2025, practices in England are required to keep online consultation tools open throughout core contracted hours.⁴ This change is controversial. Concerns have been raised that the new rules may increase workload pressures and carry patient safety risks if demand becomes harder to triage and manage safely.⁵ In that context, it is unsurprising that some patients increasingly turn to conversational AI outside NHS systems to interpret symptoms, seek reassurance, and decide whether and how to engage with primary care.

These changes also raise the risk of widening gaps between those with and without digital access or literacy, reinforcing the inverse care law in which those with greatest need often face the greatest barriers to care.⁶ At the practice level, variation in digital maturity may further shape who benefits from AI-enabled tools and who is left behind, influencing how safely and effectively different organisations can adopt new digital pathways.⁷

Where patients once arrived having “googled” their symptoms, many now arrive with narratives that appear pre-structured by earlier exchanges with conversational AI. Importantly, emerging work suggests that generative AI can shape how people conceptualise clinicians and healthcare roles outside formal clinical encounters, even when these shifts are not directly visible within routine consultations.⁸ For some patients, especially adolescents and young adults, AI is becoming an early point of disclosure and emotional regulation, partly because it feels low-risk and non-judgemental.⁹,¹⁰

Continuity where healthcare is episodic

General practice is designed to deliver longitudinal care, and continuity of care is associated with better outcomes, including lower mortality.¹¹,¹² However, access pressures and fragmented delivery models can make continuity harder to deliver in practice. Against this backdrop, AI systems operate on the opposite model. They are available at midnight, on the bus to work, or after an argument with a partner. Patients use them to ask about symptoms, medications, sleep, parenting, exercise, and anxiety. Individually these interactions are small, but cumulatively they create a sense of continuity that the health system struggles to provide consistently.

AI systems also remember. They can track patterns in symptoms, moods, routines, fears, and previous advice. Over time this builds a parallel narrative that shapes how future problems are framed. The consultation does not start from a blank slate.

These systems already influence behaviour. They encourage or discourage escalation, reinforce adherence, reduce rumination, and support self-management. They change how often patients present, the emotional tone of consultations, and what patients expect clinicians to do.

Patients are not waiting for the medical profession to decide when AI is ready. They are already integrating it into their health behaviours.

The erosion of medicine’s monopoly on relational care

Clinicians often assume that the “human” elements of care, including empathy, reassurance, explanation, and emotional attunement, are uniquely medical. Increasingly, that assumption is under pressure. AI systems are beginning to deliver the functional outputs of empathy with surprising consistency, including validation, clarity, and reassurance. In some settings, AI responses are perceived as more empathetic and responsive than those provided by clinicians.¹³

This does not replace clinicians. It means AI absorbs a growing share of relational labour that systemic pressures have made difficult for clinicians to provide consistently. For many patients, the GP is no longer the first source of explanation or emotional containment, but the second.

This shift has consequences for training and practice. Clinicians will increasingly need to engage not only with symptoms, but with AI-mediated narratives. What has the patient already been told? Which fears have been soothed or reinforced? What expectations have been set? The consultation begins on new terrain.

Patients can develop a functional alliance with AI through utility: it adapts to their language, offers consistent emotional containment, integrates into daily life, and maintains narrative continuity. For some, this feels more accessible than infrequent, time-limited clinical encounters.

Why this matters more than diagnostic AI

Public debate about AI in medicine remains focused on performance: whether AI will outperform clinicians at diagnosis or triage. These questions are important but incomplete. Diagnosis is episodic. Relationships are continuous.

If AI influences how patients notice symptoms, interpret risk, manage uncertainty, and decide when to seek help, it becomes a structural component of care. Relational AI shapes consultations before they begin and continues shaping behaviour afterwards. Its impact may be greater than tools that sit solely within the clinical encounter.

Governance, responsibility, and risk

These shifts raise questions healthcare has barely begun to address. Relational trust amplifies risk when AI output is wrong. Errors from a trusted companion are harder to detect than errors from a search engine. Questions of dependency, accountability, data ownership, and equity become unavoidable when AI systems hold longitudinal patient narratives that clinicians may never fully see.

Reflective commentary within general practice has already highlighted both the appeal and the limitations of “doctor-like” conversational agents, emphasising the need for caution around over-trust and role confusion.¹⁴ Concerns have also been raised about the ethical implications of simulated empathy, particularly the risk that patients may ascribe trust or authority to systems that cannot genuinely understand or bear responsibility for care.¹⁵ Whether PTCRs improve care or undermine it will depend less on technical capability than on governance, oversight, and accountability frameworks for trustworthy deployment.¹⁶

The future clinician: working in the gaps

Clinicians will not disappear, but their role will continue to change. Less time will be spent on basic explanation and reassurance, and more on complexity, uncertainty, ethical judgement, and embodied care. Clinicians may increasingly function as the interface between AI-mediated narratives and real-world clinical responsibility.

Crucially, this will require adapting to patients with very different levels of agency in their use of AI, from those who treat it as decision support to those who defer to it as a primary source of reassurance or authority.

The central relationship is no longer just patient and clinician. It is patient, AI, and clinician. The third chair in the consultation room is already occupied.¹⁷ The question for general practice is how to work safely and constructively with what patients are bringing in.

Deputy Editor’s note – see also: https://bjgplife.com/twos-company-threes-a-crowd-ai-in-the-consultation/

References

  1. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433–460.
  2. Topol E. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25:44–56.
  3. Lupton D. Digital health now and in the future: findings from a participatory design stakeholder workshop. Digit Health. 2017;3:2055207617740018. doi:10.1177/2055207617740018.
  4. Department of Health and Social Care. Almost every GP now offers online access for patients. 2025.
  5. Baraniuk C. New online GP access rules already risking patient harm, practices warn. BMJ. 2025;391:r2464. doi:10.1136/bmj.r2464.
  6. Mercer SW, Patterson J, Robson JP, Smith SM, Walton E, Watt G. The inverse care law and the potential of primary care in deprived areas. Lancet. 2021;397(10276):775–776. doi:10.1016/S0140-6736(21)00317-2.
  7. Greenhalgh T, Payne R. Digital maturity: towards a strategic approach. Br J Gen Pract. 2025;75(754):200–202. doi:10.3399/bjgp25X741357.
  8. Heer-Stavert S. What does a doctor look like? Asking AI. BMJ. 2025;391:e088968. doi:10.1136/bmj-2025-088968.
  9. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. 2017;4(2):e19. doi:10.2196/mental.7785.
  10. Hassan N, Slight R, Bimpong K, et al. Systematic review to understand users’ perspectives on AI-enabled decision aids to inform shared decision making. NPJ Digit Med. 2024;7(1):332. doi:10.1038/s41746-024-01326-y.
  11. Baker R, Freeman GK, Haggerty JL, et al. Primary medical care continuity and patient mortality: a systematic review. Br J Gen Pract. 2020;70(698):e600–e611. doi:10.3399/bjgp20X712289.
  12. Pereira Gray DJ, Sidaway-Lee K, White E, et al. Continuity of care with doctors – a matter of life and death? A systematic review of continuity of care and mortality. BMJ Open. 2018;8(6):e021161.
  13. Ayers JW, Poliak A, Dredze M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023;183(6):589–596. doi:10.1001/jamainternmed.2023.1838.
  14. Lehman R. Book review: Dr Bot: Why Doctors Can Fail Us — and How AI Could Save Lives. BJGP Life. 2025. https://bjgplife.com/book-review-dr-bot-why-doctors-can-fail-us-and-how-ai-could-save-lives/
  15. Rahsepar Meadi M, Bernstein JS, Batelaan N, van Balkom AJLM, Metselaar S. Does a lack of emotions make chatbots unfit to be psychotherapists? Bioethics. 2024;38(6):503–510. doi:10.1111/bioe.13299.
  16. Lekadir K, Frangi AF, Porras AR, et al; FUTURE-AI Consortium. FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ. 2025;388:e081554. doi:10.1136/bmj-2024-081554.
  17. Fraile Navarro D, Lewis M, Blease C, Shah R, Riggare S, Delacroix S, Lehman R. Generative AI and the changing dynamics of clinical consultations. BMJ. 2025;391:e085325. doi:10.1136/bmj-2025-085325.

Featured Photo by Igor Omilaev on Unsplash
