Kaso Osman is a Foundation Year 2 doctor in general practice.
Paul McNamara is a GP and Honorary Clinical Lecturer at the University of Glasgow.
Artificial intelligence (AI) is no longer a futuristic concept. It is already woven into health care: transcribing notes in general practice, generating clinic letters in hospitals, and scanning through thousands of radiology images in seconds. Its presence is growing, and so are the questions it raises.
We come to this topic from different points in our careers and specialties. One of us is a resident doctor with aspirations for a career in interventional radiology — a specialty often described as the frontline for AI integration. The other is a GP and educator, navigating the reality of overloaded consultations, endless documentation, and the constant pressure to maintain safe and human-centred care. What we share is a mixture of curiosity, optimism, and unease.
Different specialties, common challenges
The most striking feature of AI in health care is how it touches each specialty differently. In general practice, we wrestle with dictation software that can confuse amoxicillin with amitriptyline. In radiology, the promise is early detection, quicker reporting, and reduced fatigue — but also anxiety about training and de-skilling. Dermatologists are already trialling image recognition, while pathologists are exploring automated slide analysis. Each has its own opportunities and pitfalls, but the themes overlap: risk of error, bias in training data, and the ethical fog of accountability.
Our concern is that these conversations too often happen in silos. Radiologists talk about image recognition, GPs about transcription, and dermatologists about lesion detection. Yet these challenges are not isolated. All of us share the same fundamental questions: Who is responsible if AI makes a mistake? How do we train the next generation of doctors without hollowing out clinical skills? And what safeguards do we need to keep the human heart of medicine alive?
Listening to colleagues
We asked colleagues what they thought about AI and how it might affect their work. The responses weren’t statistics, but stories and impressions — cautious optimism on one hand, unease on the other.
AI was seen as a helpful tool rather than a replacement for doctors. Yet few colleagues felt ready or trained to use it well. Some were excited about efficiency gains; others feared a creep towards overinvestigation and erosion of clinical judgement. Many worried about the impact on training — if a machine flags the lesion every time, how do juniors learn to recognise it themselves?
These reflections echo what has been found in wider research. A study conducted before the COVID-19 pandemic suggested that GPs were sceptical of AI’s clinical capabilities.1 A more recent review shows attitudes are shifting, with clinicians increasingly open to its potential while still highlighting risks of overinvestigation, workload creep, and lack of clarity over accountability.2
What unites us
If there was a thread running through these conversations, it was the need for clinicians to remain interpreters, validators, and ethical stewards of machine outputs. We do not all need to become coders, but we do need digital literacy: enough understanding to question, critique, and contextualise what AI produces.
This demands changes to curricula and training. Just as we once learned to interpret X-rays or electrocardiograms, we may soon need to learn how to interpret an algorithm’s probability score or transcription output. Future clinicians will require the confidence to use AI critically, not passively.
Human with machine, not versus machine
Our conclusion is simple: the future of medicine should not be human versus machine, but human with machine. AI can and should take on the repetitive, mundane tasks that drain our time and energy. But the core values of medicine — empathy, communication, trust, and ethical judgement — remain irreducibly human. Machines may detect a suspicious shadow on a scan, but only a clinician can explain its meaning to a worried patient with compassion and nuance.
If we embrace AI with care, transparency, and humility, it can help us. But if we hand over the reins uncritically, we risk de-skilling, widening inequalities, and eroding trust. The challenge is not simply to adopt the technology, but to ensure it serves patients, clinicians, and the integrity of our profession.
References
1. Blease C, Kaptchuk TJ, Bernstein MH, et al. Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners’ views. J Med Internet Res 2019; 21(3): e12802.
2. Razai MS, Al-bedaery R, Bowen L, et al. Implementation challenges of artificial intelligence (AI) in primary care: perspectives of general practitioners in London, UK. PLoS One 2024; 19(11): e0314196.