Richard Armitage is a GP and Honorary Assistant Professor at the University of Nottingham’s Academic Unit of Population and Lifespan Sciences. He is on twitter: @drricharmitage
Much has been written about AI for medical audiences that simultaneously forewarns of AI’s arrival and reassures that it will not take doctors’ jobs.1–3 This apparent contradiction is largely predicated on the Moravec paradox — the observation by AI and robotics researchers that tasks that are challenging for humans are simple for machines, while those that are simple for humans are challenging for machines.4
The argument usually goes as follows: AI will, and in some cases already does, vastly out-perform human doctors in domains such as image recognition — interpreting CT scans and X-rays, and diagnosing skin lesions, for example — but will never replicate the social and communication skills that are vital to the doctor–patient relationship, ergo doctors will never be replaced by machines.
“… all individuals will soon have their own enormously powerful ‘personal AI’.”
While it may alleviate some professional anxiety among the medical workforce, the comfort generated by this appeal to the Moravec paradox is unfortunately misguided. This is because the Moravec paradox applies only in our current technological context — one in which AI systems are held and deployed exclusively by doctors. This context will soon undergo profound change, and the Moravec paradox will cease to apply in health care.
In the current context, the ‘doctor AI’ often exceeds the human doctor’s ability to interpret clinical information, such as in diagnosing skin lesions,5 blood chemistry,6 and radiological imaging,7–9 and in predicting an individual’s risk of disease. The human doctor, however, far exceeds the doctor AI’s ability to establish the relevant social, environmental, and clinical factors that contextualise these diagnoses and predictions, and to effectively communicate sensitive clinical information through compassionate, empathetic, and trusting therapeutic relationships. Crucially, in this context — and for the Moravec paradox to successfully protect doctors from job automation — the patient has no ‘personal AI’, while patients’ trust in clinical decisions made without the input of human doctors is limited.10
In the near future, however, the context will be radically different. Doctors will no longer have unilateral access to AI systems. Due to the recent and ongoing rapid acceleration of AI capabilities — driven by the increased availability and reduced price of data, computational power, and algorithmic ability — all individuals will soon have their own enormously powerful ‘personal AI’. Through wearable (for example, watches) and implantable (for example, intravascular or brain–computer interface11) sensors, personal AIs will have real-time access to the individual’s physiological state, which they will contextualise according to the individual’s baseline physiology, medical history, and other relevant factors including social, emotional, environmental, relational, and economic influences.
“… machines will soon become superior to doctors in all domains of health care … “
Knowledge of these factors will be obtained through regular dialogue between the individual and their personal AI (via large language models, text-to-speech, and speech-to-text technologies), through which the personal AI becomes deeply knowledgeable of, and specifically tailored to, that individual.
As it learns more about the individual, the personal AI will increasingly ask the ‘right’ questions at the right time in the right manner — all according to its deepening understanding of the individual’s physical health, psychological state, personality characteristics, and social situation — to establish an increasingly nuanced understanding of that individual, such that the personal AI comes to constitute an extension or ‘proxy’ of the individual. Crucially, the individual will develop a deep trust in their personal AI, which will reveal insights into the individual that neither the individual nor their human doctor could establish alone.
In this near-future context, personal AIs will recognise any deviation from the individual’s baseline physiology, establish the relevant contextual factors (including social, emotional, and environmental influences) through dialogue with the individual, and make contact with the doctor AI only when it is medically appropriate to do so. Simultaneously, the same acceleration of AI capabilities will enable doctor AIs to out-perform human doctors in clinical information collection (by knowing which questions to ask individuals’ personal AIs), in the interpretation of clinical information using contextual insights provided by personal AIs, and, thus, in the formulation of safe and effective clinical decisions. In addition, since personal AI-to-individual communication is exquisitely tailored to the individual’s characteristics, and individuals have deep trust in their personal AIs, the effective communication of sensitive clinical information to patients will be accomplished more successfully, reliably, and promptly via personal AIs than via human doctors.
As such, the domains in which the Moravec paradox holds in the current context — human doctors out-performing AIs in clinical information collection, contextualised clinical information interpretation, clinical decision making, and communication with patients — will soon cease to be dominated by human doctors. This transition will bring about both improved health outcomes (through the superior clinical abilities, speed, and indefatigability of AI) and a diminishing role for human doctors, including myself. Until robotic capabilities match those of AI,12 and ethical and legal considerations are adequately encoded, this role is likely to concentrate in practical procedures and ethico-legal practice. Accordingly, machines will soon become superior to doctors in all domains of health care, and the Moravec paradox will cease to apply.
References
1. Buch VH, Ahmed I, Maruthappu M. Artificial intelligence in medicine: current trends and future possibilities. Br J Gen Pract 2018; DOI: https://doi.org/10.3399/bjgp18X695213.
2. Mistry P. Artificial intelligence in primary care. Br J Gen Pract 2019; DOI: https://doi.org/10.3399/bjgp19X705137.
3. Arora A. Moravec’s paradox and the fear of job automation in health care. Lancet 2023; 402(10397): 180–181.
4. Agrawal K. To study the phenomenon of the Moravec’s Paradox. arXiv 2010; DOI: https://doi.org/10.48550/arXiv.1012.3148.
5. Pham TC, Luong CM, Hoang VD, Doucet A. AI outperformed every dermatologist in dermoscopic melanoma diagnosis, using an optimized deep-CNN architecture with custom mini-batch logic and loss function. Sci Rep 2021; 11(1): 17485.
6. Walter W, Haferlach C, Nadarajah N, et al. How artificial intelligence might disrupt diagnostics in hematology in the near future. Oncogene 2021; 40(25): 4271–4280.
7. Chen J, Wu L, Zhang J, et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Sci Rep 2020; 10(1): 19196.
8. Plesner LL, Müller FC, Nybing JD, et al. Autonomous chest radiograph reporting using AI: estimation of clinical impact. Radiology 2023; 307(3): e222268.
9. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016; 316(22): 2402–2410.
10. Hatherley JJ. Limits of trust in medical AI. J Med Ethics 2020; 46(7): 478–481.
11. Hramov AE, Maksimenko VA, Pisarchik AN. Physical principles of brain–computer interfaces and their applications for rehabilitation, robotics and control of human brain states. Physics Reports 2021; 918: 1–133.
12. Morgan AA, Abdi J, Syed MAQ, et al. Robots in healthcare: a scoping review. Curr Robot Rep 2022; 3(4): 271–280.
Featured photo by Steve Johnson on Unsplash.