How LLMs will transform general practice

Richard Armitage is a GP and Honorary Assistant Professor at the University of Nottingham’s Academic Unit of Population and Lifespan Sciences. He is on X: @drricharmitage

 

In Learning to Imagine,1 Andrew Shtulman argues that the inevitable decline of the brain’s capacity for imagination with age is a myth.  To the contrary, the cognitive developmental psychologist makes a compelling case that imagination must draw upon experience, meaning it can be learned and developed with advancing age.  This is an encouraging message when applied to the growing list of crises that beset our health service, as it suggests that innovation – which requires large doses of imagination – might hold the answer.  The solution to these problems is widely believed to lie in the development and deployment of digital health technologies, such that ‘digital transformation’ constitutes Chapter 5 of the NHS Long Term Plan,2 and was the focus of a recent Health and Social Care Committee report.3  And yet, despite this streak of apparent techno-optimism, the term ‘artificial intelligence’ (AI) appears only once in the Long Term Plan, and not at all in the report.  This glaring oversight appears to be a failure of imagination that only a few minutes spent with ChatGPT, or any of the leading large language models, would immediately correct.

ChatGPT is a large language model (LLM) developed by OpenAI that uses deep learning techniques to generate human-like text.  At the time of writing (18 December 2023), GPT-4 (the model that powers the leading iteration of ChatGPT) accepts input data across text, speech and image modalities, extracts text from PDF and other files, has direct access to the live internet, analyses non-textual visual information such as graphs and charts, writes original computer code, and generates novel images through the integrated DALL·E 3 model.  While GPT-4 is no longer the only competitive model on the LLM landscape,4 the power of these tools (particularly that of GPT-4) is evident and rapidly increasing.  As such, the extent to which they could transform general practice is only limited by one’s imagination (be that human or AI).  This article will lay out how I predict this will occur, specifically in the domain of the clinician-patient consultation.

Booking appointments and clinical triage

Today, booking an appointment generally requires the patient or carer to make repeated phone calls to the surgery at 8am on the day they wish to be seen.  The chaos, frustration and inefficiency of this system are well known to patients, reception staff and clinicians alike.  LLMs will transform this process.  Once these technologies have been fully deployed, the patient or carer will call the surgery at any time they wish and, rather than waiting endlessly on the line to briefly speak with a non-clinically trained member of staff, they will immediately speak for as long as they wish with an LLM that has been subjected to specialist training and fine-tuning.  The LLM will prompt the patient with questions about their complaint, such as the nature, duration and severity of their symptoms, the success of any attempts at self-management, and their availability for attending an appointment.  The LLM will record this conversation, perform a speech-to-text conversion to generate a written transcript, produce a summarised version that includes key phrases in the patient’s own words, and book an appointment for the patient.  This appointment will be with an appropriate clinician (GP, practice nurse, HCA, practice pharmacist, ANP, etc) at an appropriate time (taking into account the patient’s clinical urgency, their availability, and the needs of the other patients wishing to be seen that day) and via the appropriate modality.  Through this process, the LLM will promote patient convenience, prioritise patients with the greatest clinical need, and enhance clinical practicality and patient safety, for example by booking an in-person appointment when a DRE is likely to be required, and for members of high-risk groups (including extremes of age, language non-concordance, and conditions that may complicate communication such as autism).5  This automated triage will save the patient significant time and frustration by avoiding long telephone queues (since the LLM can run on multiple telephone lines simultaneously, while a receptionist cannot), free reception staff to attend to other tasks, and enable the right patient to be seen by the right clinician at the right time via the right modality, thereby boosting the efficiency and safety of the appointment booking process.
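To make the shape of this pipeline concrete, the sketch below (in Python, purely for illustration) walks through the steps described above: transcribe the call, extract key phrases, judge urgency and modality, and book with an appropriate clinician.  Every name in it — TriageSummary, triage_call, book_appointment and the keyword rules — is a hypothetical placeholder for what a fine-tuned LLM and a real booking system would actually do.

```python
# Hypothetical sketch only: a fine-tuned LLM would replace the crude keyword
# rules used here, and a real booking system would replace the printed output.
from dataclasses import dataclass

URGENCY_TO_CLINICIAN = {
    "urgent": "GP (same-day)",
    "routine": "GP or ANP",
}

@dataclass
class TriageSummary:
    transcript: str          # full speech-to-text transcript of the call
    key_phrases: list[str]   # the patient's own words, extracted by the LLM
    urgency: str             # "urgent" or "routine"
    needs_in_person: bool    # e.g. an examination such as a DRE is likely

def triage_call(transcript: str) -> TriageSummary:
    """Stand-in for the LLM step: crude keyword rules for illustration."""
    lowered = transcript.lower()
    return TriageSummary(
        transcript=transcript,
        key_phrases=[s.strip() for s in transcript.split(".") if s.strip()][:3],
        urgency="urgent" if "chest pain" in lowered or "breathless" in lowered else "routine",
        needs_in_person="lump" in lowered or "bleeding" in lowered,
    )

def book_appointment(summary: TriageSummary) -> str:
    """Stand-in for the booking step: choose clinician and modality."""
    clinician = URGENCY_TO_CLINICIAN[summary.urgency]
    modality = "in person" if summary.needs_in_person else "by telephone"
    return f"Booked {modality} with {clinician}; key phrases: {summary.key_phrases}"

if __name__ == "__main__":
    call = "I have had a cough for a week. Paracetamol has not helped. I can still do the school run."
    print(book_appointment(triage_call(call)))
```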

Before the consultation

Prior to seeing the patient or speaking with them via telephone or video, the LLM will present the clinician with a summary of their medical history and recent events.  It will do this by scanning the medical record for all recent clinical encounters (including GP and ANP appointments, investigation results, and letters from outpatient appointments, A&E attendances, ambulance call-outs, 111 consultations, and inpatient stays) and summarising the story in succinct yet precise natural language for the clinician to read.  This will save the clinician substantial time that would otherwise be spent piecing together the patient’s narrative across various digital platforms to gain some idea of the background of the patient they are about to encounter (a process that can take several minutes, particularly for complex patients whom the clinician has not seen before).
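A minimal sketch of this pre-consultation step is given below, assuming a hypothetical build_summary_prompt helper and a placeholder llm_summarise function standing in for the practice’s LLM; the encounter data are invented for illustration.

```python
# Hypothetical sketch: build_summary_prompt and llm_summarise are invented
# placeholders; a real deployment would send the prompt to an LLM integrated
# with the clinical record system.
from datetime import date

recent_encounters = [
    (date(2023, 11, 2), "A&E attendance", "Fall at home; no fracture on X-ray."),
    (date(2023, 11, 20), "GP appointment", "Medication review; ramipril dose increased."),
    (date(2023, 12, 10), "Outpatient letter", "Cardiology: stable, repeat echo in six months."),
]

def build_summary_prompt(encounters) -> str:
    # Collate the recent encounters into a single prompt for the LLM.
    lines = [f"{d.isoformat()} | {source}: {detail}" for d, source, detail in encounters]
    return (
        "Summarise the following recent clinical encounters in two or three "
        "succinct sentences for the GP about to see this patient:\n" + "\n".join(lines)
    )

def llm_summarise(prompt: str) -> str:
    # Placeholder for the LLM call itself.
    return ("Three recent contacts: a fall with no fracture, an up-titration "
            "of ramipril, and a stable cardiology review.")

print(llm_summarise(build_summary_prompt(recent_encounters)))
```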

During the consultation

The LLM will then present the summarised patient triage to the clinician, including the exact key phrases used by the patient during the consultation with the LLM.  This will be in bulleted form and located in a new consultation window in the history section of the clinical record system (such as SystmOne or EMIS).  This process will save the clinician significant amounts of time, as the framework of the history will have already been established and prepopulated in the relevant section of the written notes.  The clinician can then elaborate on the history by asking clarifying questions, which could be suggested by the LLM if the clinician wants to take advantage of this clinical support.  The clinical support feature could be turned off according to the clinician’s preference, although the ethical permissibility of doing so will weaken as the diagnostic capabilities of these technologies increase beyond those of human clinicians.6

As the conversation between the clinician and the patient proceeds, the LLM will ‘listen’ to the consultation and use its speech-to-text capabilities to generate real-time written notes that are updated and summarised.  The multi-lingual capabilities of these tools will facilitate this even when there is language non-concordance between the clinician and the patient, thereby saving the time required to arrange and deal with telephone interpreters (the LLM will also ‘speak’ aloud the patient’s interpreted speech in real time for the clinician to respond to).  At the end of the consultation, the clinician will be able to review the notes generated by the LLM and make any necessary edits before they are saved.
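The loop below is a hedged illustration of that real-time note-taking process; transcribe_chunk and llm_update_note are placeholders for a speech-to-text service and an LLM rather than real APIs.

```python
# Hypothetical sketch: transcribe_chunk and llm_update_note are placeholders
# for a speech-to-text service and an LLM; no real API is used.
def transcribe_chunk(audio_chunk: str) -> str:
    # A real system would convert an audio segment to text here.
    return audio_chunk

def llm_update_note(current_note: str, new_text: str) -> str:
    # A real system would ask the LLM to fold the new utterance into a
    # running, summarised consultation note.
    return (current_note + " " + new_text).strip()

consultation_note = ""
for chunk in [
    "Patient reports two weeks of productive cough.",
    "No haemoptysis, no weight loss, never smoked.",
    "Examination: chest clear, saturations 98% on air.",
]:
    consultation_note = llm_update_note(consultation_note, transcribe_chunk(chunk))

print(consultation_note)  # the clinician reviews and edits before saving
```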

In a similar manner to the prompts offered by the LLM to facilitate history taking, the LLM will also suggest differential diagnoses as the consultation progresses.  These will be updated in real time as more information is elicited through further questioning and examination findings (which the clinician speaks aloud to be ‘heard’ by the LLM), and presented in descending order of likelihood.  The model will also present suggested management options for the clinician to consider, including investigations, medications, signposting and referrals.  Once again, these suggestions will be updated according to the differential diagnoses as the consultation advances.  As the technology becomes increasingly integrated into the clinical system, putting these management steps into place will become progressively more automated, such that the AI requests the investigations, generates the prescriptions and writes the referrals for the clinician to approve and authorise.
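As a toy illustration of how such a ranked differential might be re-ordered as new findings arrive, the sketch below uses an invented lookup table of evidence weights; a real system would derive these likelihoods from the LLM itself.

```python
# Hypothetical sketch: the conditions and evidence weights are invented for
# illustration; a real system would derive likelihoods from the LLM.
from collections import defaultdict

EVIDENCE_WEIGHTS = {
    "productive cough": {"chest infection": 2, "asthma": 1},
    "fever": {"chest infection": 2},
    "wheeze on examination": {"asthma": 2, "chest infection": 1},
}

def rank_differentials(findings):
    # Score each candidate diagnosis and return them in descending likelihood.
    scores = defaultdict(int)
    for finding in findings:
        for condition, weight in EVIDENCE_WEIGHTS.get(finding, {}).items():
            scores[condition] += weight
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

findings = ["productive cough"]
print(rank_differentials(findings))   # initial suggestions
findings.append("fever")              # new information elicited mid-consultation
print(rank_differentials(findings))   # the ranked list is updated in 'real time'
```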

This combined impact of digital automation and clinical assistance will save clinicians significant amounts of time per patient encounter.  As such, the quality of care provided by clinicians will improve as they will be empowered to spend more of their time on the elements of clinical care that AI will always perform to sub-human standards, such as active listening, the development of doctor-patient rapport and clinical examination (poor communication and feelings of not being listened to by clinicians are major sources of patient dissatisfaction).7

After the consultation

The LLM will follow up the patient at the clinically relevant point after the consultation.  For example, the technology might call the patient 48 hours after the initiation of antibiotics for a chest infection to ensure there has been no clinical deterioration, and shortly after the completion of the treatment to ensure there has been complete resolution of the symptoms.  During these calls, the LLM will use the details of the recent consultation to ask the patient relevant questions about their condition and to identify any concerning features, deterioration or poor compliance.  What happens next could be predetermined by the clinician.  For example, the LLM could autonomously book the patient another appointment by using the process of triage discussed earlier, or present the information to the clinician for them to act on independently.  While this might generate extra work for the clinician (currently we do not actively follow up all our patients routinely, but rather provide safety-netting such that the patient seeks medical attention if necessary), the clinician time saved by the LLM’s assistance both before and during the consultation will more than make up for this, and will enhance patient safety to a level beyond that made possible by current safety-netting practices.
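The follow-up rules a clinician might predetermine could be expressed as simply as the sketch below, in which the rule table and the schedule_follow_up helper are hypothetical examples rather than any existing system.

```python
# Hypothetical sketch: the rule table and schedule_follow_up helper are
# invented examples of clinician-predetermined follow-up, not a real system.
from datetime import datetime, timedelta

FOLLOW_UP_RULES = {
    # diagnosis: (days until the LLM calls, question for it to ask)
    "chest infection": (2, "Any worsening breathlessness, fever or chest pain?"),
    "new antihypertensive": (14, "Any dizziness, and have you had a repeat blood pressure check?"),
}

def schedule_follow_up(diagnosis: str, consultation_time: datetime):
    """Return when the LLM should call and what it should ask, or None if
    standard safety-netting advice applies instead."""
    if diagnosis not in FOLLOW_UP_RULES:
        return None
    days, question = FOLLOW_UP_RULES[diagnosis]
    return consultation_time + timedelta(days=days), question

follow_up = schedule_follow_up("chest infection", datetime(2023, 12, 18, 10, 30))
if follow_up is not None:
    when, question = follow_up
    print(f"Call at {when}: {question}")
```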

The integration of AI into the infrastructure of general practice, and the automation and delegation of routine tasks within healthcare more broadly, are both inevitable and essential to surmount the growing crises faced by our health system.  This digital transformation is accompanied by multiple inherent ethical concerns, which must be established, understood and navigated for the roll-out of AI to go well in healthcare.  These issues include clinical accountability (who is ultimately responsible for the clinical decisions made or facilitated by these technologies – the clinician, the writer of the algorithm, the owner of the LLM, or even the AI itself should it be assigned personhood?),8 problems with knowledge derived from large datasets (are data sufficiently representative for results to be generalisable, or does algorithmic bias reduce the quality and safety of care in certain groups?), and data privacy and patient confidentiality (who owns the data collected by LLMs, what rights do patients have over these data, and who could access them and for what purpose?).

With the recent explosion of user-friendly AI, the digital transformation of general practice, specifically harnessing the power of LLMs, is on the horizon.  Those who are resistant to these innovative digital tools must recognise and accept their power and utility.  For the sceptical, the tech startup landscape is already racing to provide these digital solutions to the growing number of crises in healthcare, such as clinicassist.ai and Abridge, which automatically capture insights from clinician-patient conversations and produce summarised clinical notes.  In light of this rapid innovation, while Shtulman reassures us that our imagination can be improved, foreseeing the transformative potential of LLMs in general practice doesn’t require much imagination at all.

References

  1. A Shtulman. Learning to Imagine: The Science of Discovering New Possibilities. Harvard University Press, 2023.
  2. NHS Long Term Plan. Chapter 5: Digitally-enabled care will go mainstream across the NHS. https://www.longtermplan.nhs.uk/online-version/chapter-5-digitally-enabled-care-will-go-mainstream-across-the-nhs/ [accessed 20 December 2023]
  3. House of Commons Health and Social Care Committee. Digital transformation in the NHS. Eighth Report of Session 2022-23. 30 June 2023. https://committees.parliament.uk/publications/40637/documents/198145/default/ [accessed 20 December 2023]
  4. R Armitage. Three AIs sit the GP SelfTest. BJGP Life 06 November 2023. https://bjgplife.com/three-ais-sit-the-gp-selftest/
  5. R Payne, A Clarke, N Swann, et al. Patient safety in remote primary care encounters: multimethod qualitative study combining Safety I and Safety II analysis. BMJ Quality & Safety 28 November 2023. DOI: 10.1136/bmjqs-2023-016674
  6. R Armitage. The utilitarian case for AI-mediated clinical decision-making. BJGP Life 16 July 2023. https://bjgplife.com/the-utilitarian-case-for-ai-mediated-clinical-decision-making/ [accessed 20 December 2023] 
  7. JD Boudreau, E Cassell, and A Fuks. Preparing medical students to become attentive listeners. Medical Teacher 2009; 31(1): 22-29. DOI: 10.1080/01421590802350776
  8. VAJ Kurki. ‘The Legal Personhood of Artificial Intelligences’ in A Theory of Legal Personhood. Oxford, 2019; online edn, Oxford Academic, 19 Sept. 2019. DOI: 10.1093/oso/9780198844037.003.0007

Featured photo by Steve Johnson on Unsplash.
