Marcus Lewis is a salaried GP and medical educator in London
For years, the promise of artificial intelligence (AI) transforming the way we work often exceeded its real impact. Proprietary AI models, like DeepMind’s model for helping to diagnose eye disease,1 were confined to specific tasks. These models rely on highly specialized data – in DeepMind’s case, thousands of optical coherence tomography scans – and demand significant computational power and technical expertise, so they remained inaccessible to most people. The recent rise of large language model AIs, like ChatGPT, has changed this. Unlike task-specific AI, large language models are trained on massive, diverse datasets encompassing a wide range of text sources, which allows them to perform a vast array of functions. They are intuitive to use through a simple, natural language interface, and their application is limited only by the user’s willingness to experiment. Over the past year, I have explored tools such as ChatGPT, Claude, Microsoft Copilot, and Perplexity in my work as a GP and medical educator. Their ability to engage in open-ended dialogue, assist in creative thinking, summarize information, and explain complex topics is useful to me throughout a typical working day.
One of the most immediate and practical applications of AIs in my practice is handling routine, repetitive tasks. AIs can help me condense and simplify patient information leaflets to the length of a text message to send to my patients (and, when necessary, translate them idiomatically into any language). They can significantly speed up the writing of housing letters, parking badge letters, and other medical reports. AIs can help me summarize or reformat some of the many complicated documents and spreadsheets I receive by email. Additionally, they can condense anonymized medical notes for complaints handling or case reviews. I have even used these tools to quickly create multiple-choice questions for teaching from journal articles, or to provide IT solutions when my computer or printer has been playing up.
AIs are also useful for creative tasks. Their ability to generate interesting or unusual ideas can be helpful when I am trying to freshen up my teaching. For example, developing lesson plans for group teaching can be time-consuming. Using AIs, I can swiftly generate structured, engaging lesson plans, which I then refine to meet specific educational goals or formats, such as learning needs assessment questions or creative ways to structure a tutorial. This not only saves time but also inspires new ideas and teaching approaches. I have even used AIs to co-author detailed, challenging scripts for medical actors on a consultation simulation day for GP trainees.
AIs play an increasingly useful role in my reflective practice, particularly in my appraisal. I use the voice function of the ChatGPT app to narrate brief reflections on educational activities and then prompt it to ask me searching questions that help me think more deeply. The rote act of preparing for my appraisal can thus be transformed from essay writing into storytelling. I have used a similar approach to help me unpack challenging cases: AI-generated questions encourage me to think actively about the situation from multiple angles and to consider a broader context, leading to more meaningful reflection.
I am aware of talk that some GP trainees are using ChatGPT to bypass the reflective process altogether and write plausible but superficial “pseudo-reflections” for their learning logs. This overreliance on AIs poses a risk to the development of genuine reflective skills. Educators must emphasize that AIs should complement, not replace, the reflective process. I therefore ask my trainees about their AI use and, where they are using AIs, encourage them to treat these tools as an assistant that guides and deepens their reflections. This ensures that AIs enhance the depth and authenticity of reflections rather than detract from them. Encouraging AI use in this manner has proven particularly useful for GP trainees who struggle with reflective thinking.
Although large language models hold promise for medical diagnosis,2 I do not yet find them a useful substitute for my clinical expertise. The reality is that AI’s abilities are inconsistent, and I need to be the one who decides when it is best utilized and when it is wiser to rely on my own judgment. As the technology continues to develop, its role in diagnosis may shift. For now, I believe the “Centaur model”3 strikes the right balance: AI can provide valuable support for a range of tasks, but human intuition and expertise remain essential for others – especially the complex art of diagnosis and patient care.
The use of AIs brings many benefits but also raises important ethical concerns about data privacy and patient confidentiality. It is difficult to guarantee data privacy when the methods companies use to train and update these models are not fully transparent, and because these models are trained on such large datasets, the fate of any individual contribution is hard to trace. One key issue is the lack of clarity about how AI systems handle personal data: while some companies offer opt-out options, the specifics and true extent of user control are often unclear.4 Given these uncertainties, I anonymize data when working with AIs. Following General Medical Council (GMC) guidance, I ensure that no identifiable patient details are included in my reflections. When writing notes or letters, I avoid giving the AI any patient-identifiable information and ask it to write in the third person without referencing the patient’s name. These steps help ensure patient confidentiality and the responsible use of AIs as the technology continues to develop.
The integration of AIs into my daily practice is still in its early stages, and it is crucial to approach these tools with a healthy dose of caution and critical thinking. However, by embracing AIs as collaborative partners and focusing on their strengths in specific areas, I believe they can free up time and mental energy, allowing me to focus on the uniquely human aspects of general practice that truly benefit my patients.
As AI tools become increasingly sophisticated and accessible, it is important for GPs to engage with this technology. Experimenting with different applications and sharing experiences, both positive and negative, will be essential in harnessing the power of AIs responsibly within general practice. Open dialogue and collaboration between clinicians, regulators, and commissioners will be vital in shaping the future of AIs in primary care, ensuring their benefits reach all corners of our profession.
Deputy Editor’s note: see also Richard Armitage on the many potential uses of AI here: https://bjgplife.com/using-ai-in-the-gp-consultation-present-and-future/
References
1. De Fauw J, Ledsam JR, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24(9):1342-1350. DOI: 10.1038/s41591-018-0107-6.
2. Eriksen AV, Möller S, Ryg J. Use of GPT-4 to diagnose complex clinical cases. NEJM AI. 2023;1(1). DOI: 10.1056/AIp2300031.
3. Mollick E. Centaurs and Cyborgs on the Jagged Frontier. One Useful Thing. Available from: https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged [Accessed 15 Jun 2024].
4. The Washington Post. Opting out of AI training: Meta, ChatGPT. Technology section. 2024 May 31. Available from: https://www.washingtonpost.com/technology/2024/05/31/opt-out-ai-training-meta-chatgpt/ [Accessed 15 Jun 2024].
Photo by Matt Jones on Unsplash
Thank you for mentioning how generative AI use can enhance reflections. As GP educators, we have written about several ways this could be helpful, including to address educational inequalities, if used in the context of a long-term educational relationship: https://bit.ly/3zp1lL0
Thank you for sharing, Camille.
I have written about this in more depth in an article which is pending publication.
A preprint version is available at https://www.researchgate.net/publication/381638932_The_Digital_Balint_Using_AI_in_Reflective_Practice – with the associated resources (transcripts/prompts) – https://www.researchgate.net/publication/382114612_The_Digital_Balint_Resources_Transcripts_and_Promptspdf
Best wishes,
Marcus