The utilitarian case for AI-mediated clinical decision-making

Richard Armitage is a GP and Public Health Specialty Registrar, and Honorary Assistant Professor at the University of Nottingham’s Academic Unit of Population and Lifespan Sciences. He is on Twitter: @drricharmitage

The explosive emergence of generative AI systems – algorithms that can be used to create new content including text (e.g. ChatGPT), images (e.g. Stable Diffusion), computer code (e.g. GitHub Copilot), audio, simulations and videos – has recently brought artificial intelligence into the centre of the public’s attention. Only a few minutes spent with GPT-4, Google Bard, or DALL-E is sufficient to demonstrate the incredible power of even these ‘sandbox’ (pre-application) iterations, which are substantially more intuitive and accessible to non-technical users than their similarly powerful predecessors, such as AlphaGo and AlphaFold. However, the speed with which this technology has recently developed has exceeded the predictions of many technologists, and suggests that the dawn of artificial general intelligence (AGI) – autonomous systems that surpass human capabilities in the majority of economically valuable tasks – may soon be upon us.1 While this has triggered many to raise the alarm about the potential dangers of advanced AI to employment, trust and verifiability, and even humanity’s survival,2 the economic potential of generative AI alone has been estimated to reach $8.8 trillion annually by unlocking astonishing productivity gains across all industry sectors, including medicine and healthcare.3

I have previously explored what generative AI means for the present and future of general practice,4 how it will impact medical education,5 and its potential to reveal new insights from the founders of our profession.6 However, I have so far remained silent on the ethical implications of advanced AI in medicine, and have not ventured to make normative (action-guiding/moral) claims regarding its deployment in clinical practice. Now, I will argue that, in three well-defined contexts, clinical decision-making should be delegated to AI systems either today or in the very near future. I shall mount this argument from the position of consequentialism, a forward-looking ethical theory which holds that the permissibility of an action is determined by its predicted consequences. More precisely, I will use utilitarianism – the consequentialist theory founded by Jeremy Bentham and John Stuart Mill, which holds that the correct action is the one expected to bring about the greatest good for the greatest number – to put forward this argument.

Context 1: where healthcare would otherwise not exist

Access to healthcare in low-resource contexts such as refugee camps, humanitarian emergencies (including earthquake, hurricane, and conflict zones), and remote and rural parts of low-income countries is often either highly limited or entirely non-existent. Simultaneously, two-thirds of the world’s population is expected to be online by the end of 2023,7 while an estimated 6.9 billion smartphones are currently in use globally,8 meaning that a large proportion of individuals residing in low-resource settings are connected to the internet. As AI systems become increasingly available to non-technical users, including via smartphone applications (the ChatGPT app is now available on iOS),9 ‘digital doctors’ in the form of generative healthcare AI systems will become increasingly available to those who currently lack access to universal health coverage. Users will be able to consult their digital doctor as they would a human doctor – if they had access to one – in a two-way discussion via natural language (in the individual’s mother tongue, via text or speech) regarding their symptoms and relevant medical history. After taking a history, the digital doctor would present the patient with a list of differential diagnoses, the likelihood of each being the correct diagnosis, and recommended actions, such as to carefully watch and wait, attend an emergency department, or even purchase over-the-counter medications to treat the problem without human medical supervision.

Scaling AI in this manner would provide healthcare to those who would otherwise have none. For the utilitarian, therefore, the only ethically relevant question is whether this scaling would generate greater total utility. Accordingly, from the utilitarian perspective, the democratisation of healthcare in these settings would be not only ethically permissible but ethically obligatory, if the digital doctor’s advice generates outcomes that are superior to those that would otherwise occur had patients continued to have limited or no access to healthcare. In other words, if the consequences for patients of digital doctors are, on average, better than those of no doctors, then digital doctors ought to be made available to those without access to doctors. Given the competence of current generative AI systems, and the speed with which they are improving, digital doctors that are sufficiently competent to meet this consequentialist requirement either already exist or will soon arrive.
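
To make the utilitarian calculus concrete, the deployment test in this context reduces to a simple expected-utility comparison. The sketch below is purely illustrative: the outcome categories, probabilities, utility values, and the expected_utility helper are hypothetical assumptions introduced for exposition, not clinical data.

```python
# Minimal sketch of the utilitarian deployment test for a 'digital doctor'.
# All names and numbers are hypothetical, for illustration only.

def expected_utility(outcome_probabilities: dict[str, float],
                     outcome_utilities: dict[str, float]) -> float:
    """Expected utility = sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * outcome_utilities[o] for o, p in outcome_probabilities.items())

# Hypothetical outcome distributions for a population with
# (a) no access to any doctor, and (b) access to a digital doctor.
utilities = {"recovered": 1.0, "harmed": -1.0, "unchanged": 0.0}

no_doctor = {"recovered": 0.40, "harmed": 0.25, "unchanged": 0.35}
digital_doctor = {"recovered": 0.55, "harmed": 0.15, "unchanged": 0.30}

if expected_utility(digital_doctor, utilities) > expected_utility(no_doctor, utilities):
    print("Utilitarian verdict: deployment is ethically obligatory.")
else:
    print("Utilitarian verdict: deployment is not justified on these numbers.")
```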

Context 2: where existing healthcare could be improved

AI systems are already matching, and in some cases reliably and significantly outperforming, their human counterparts in some repetitive medical tasks, including dermoscopic melanoma diagnosis,10 CT interpretation,11 chest x-ray interpretation,12 and detection of diabetic retinopathy in retinal fundus photography.13 For the utilitarian, once patient outcomes can be predictably enhanced by handing over clinical decision-making from humans to AI systems, such delegation is ethically required. In other words, it would be unethical to subject patients to the error-prone, human factor-susceptible, inconsistent decisions of human doctors while a more accurate and reliable digital alternative – one that never becomes tired, hungry, or frustrated by institutional politics – is concurrently available. While this does not entirely remove the need for dermatologists and radiologists, it requires that their primary clinical role evolves from diagnostician into user, supervisor, and interpreter of powerful AI tools. Simultaneously, the portion of their role that serves as the compassionate human interface between disease pathology, medical science, and sick patients will become increasingly important, as they begin to translate AI-mediated clinical insights into the embodied, social, physical world of the doctor-patient relationship – an element of the profession that even fully-realised AGI could never entirely replace.
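
One way to operationalise this supervisory role is selective deferral: the AI system reports autonomously only when its confidence is high, and otherwise refers the case for human review. The following is a minimal sketch under stated assumptions; the classify stub, the Prediction type, and the confidence threshold are hypothetical, not any vendor’s API.

```python
# Sketch of AI-first diagnosis with clinician oversight via selective deferral.
# The model, threshold, and case data are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g. "melanoma" or "benign naevus"
    confidence: float  # model's probability for that label

def classify(image_id: str) -> Prediction:
    """Stand-in for a real dermoscopic classifier; returns a dummy prediction."""
    return Prediction(label="benign naevus", confidence=0.97)

DEFER_THRESHOLD = 0.95  # below this, the case goes to a dermatologist

def route(image_id: str) -> str:
    pred = classify(image_id)
    if pred.confidence >= DEFER_THRESHOLD:
        return f"AI report issued: {pred.label} (confidence {pred.confidence:.2f})"
    return "Referred to dermatologist for human review"

print(route("lesion_0001"))
```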

Context 3: where delivery of healthcare could become more efficient

While access to, and quality of, healthcare in prosperous societies (such as the UK) is relatively high, the efficiency with which that healthcare is delivered is far from perfect. Triage – the prioritisation and allocation of scarce healthcare resources according to clinical urgency – is the means by which these resources are distributed, and includes emergency and same-day appointments in primary care, five-level triage systems in emergency departments,14 and 2-week wait pathways for suspected cancer. While GPs in out-of-hours primary care, nurses in emergency departments, and non-clinical staff in GP surgeries can all be ‘triage trained’, there is substantial variability in the reliability and validity of clinical triage systems, which are all inherently imperfect and suffused with multiple vulnerabilities.15 One vulnerability in primary care triage is the ‘8am madness’, in which patients are largely allocated appointments according to their fortune in the phone queue rather than their clinical urgency relative to everyone else in that queue. Instead, an advanced AI clinical triage system could assess the clinical details submitted (before 8am) by all patients seeking medical attention, prioritise those patients with respect to one another, and allocate them to clinician appointments according to clinical need. While this would necessitate further improvements in digital access and online proficiency, this trend is already well underway as the ‘digital divide’ continues to close.16 Similarly powerful AI-mediated triage systems – each with reliability and validity that far exceed those of existing human-mediated systems – could be deployed across all clinical settings, substantially improving the efficiency with which healthcare is delivered. Since prioritising the provision of clinical care according to clinical need leads to better clinical outcomes (the greatest good) for the greatest number of patients, the consequentialist would consider the deployment of such systems ethically required.
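
At heart, the proposed replacement for the ‘8am madness’ is a ranking problem: collect every request submitted before the cut-off, score each by clinical urgency, and allocate the day’s scarce appointments in order of need rather than phone-queue luck. A minimal sketch follows, with a hypothetical keyword-based urgency_score standing in for the AI triage model, and made-up patient data.

```python
# Sketch of AI-mediated primary care triage: allocate the day's appointments
# by clinical urgency rather than phone-queue position.
# The scoring function and patient data are hypothetical.

def urgency_score(symptoms: str) -> float:
    """Stand-in for an AI triage model scoring urgency on a 0-1 scale."""
    keywords = {"chest pain": 0.9, "rash": 0.3, "med review": 0.1}
    return max((s for k, s in keywords.items() if k in symptoms), default=0.2)

requests = [  # all submissions received before the 8am cut-off
    {"patient": "A", "symptoms": "itchy rash on arm"},
    {"patient": "B", "symptoms": "central chest pain on exertion"},
    {"patient": "C", "symptoms": "routine med review"},
]

APPOINTMENTS_TODAY = 2  # scarce same-day slots

ranked = sorted(requests, key=lambda r: urgency_score(r["symptoms"]), reverse=True)
for r in ranked[:APPOINTMENTS_TODAY]:
    print(f"Book same-day appointment: patient {r['patient']}")
for r in ranked[APPOINTMENTS_TODAY:]:
    print(f"Offer routine slot: patient {r['patient']}")
```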

The use of AI-mediated clinical decision-making in these contexts raises numerous ethical and legal problems, including accessibility, confidentiality, data privacy, and professional accountability (is the clinician, the tech company, the individual coder, or the AI itself ultimately responsible when things go wrong?). Additionally, non-consequentialist theories of ethics would offer normative claims on this evolution of healthcare that differ from the utilitarian perspective put forward above. However, since the delivery of healthcare around the world – particularly in socialist-originated health systems such as the NHS17 – is largely predicated on a consequentialist maximisation of the good (which, in this case, amounts to health), the utilitarian case for AI-mediated clinical decision-making in the three aforementioned contexts may prove to be persuasive. Accordingly, clinicians should be prepared to adapt and evolve their practice to facilitate the enormous patient benefits that this technology will enable.

 

References

  1. M Roser. AI timelines: What do experts in artificial intelligence expect for the future? Our World in Data. 07 February 2023. https://ourworldindata.org/ai-timelines [accessed 26 June 2023]
  2. Future of Life Institute. Pause Giant AI Experiments: An Open Letter. 22 March 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ [accessed 26 June 2023]
  3. M Chui, E Hazan, R Roberts, et al. McKinsey report: The economic potential of generative AI: The next productivity frontier. 14 June 2023. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier [accessed 26 June 2023]
  4. R Armitage. Using AI in the GP consultation: present and future. BJGP Life 29 May 2023. https://bjgplife.com/using-ai-in-the-gp-consultation-present-and-future/ [accessed 26 June 2023]
  5. R Armitage. ChatGPT: a threat to medical education? BJGP Life 11 May 2023. https://bjgplife.com/chatgpt-a-threat-to-medical-education/ [accessed 26 June 2023]
  6. R Armitage. Interviewing Hippocrates: a conversation with the father of Western medicine. BJGP Life 04 May 2023. https://bjgplife.com/interviewing-hippocrates-a-conversation-with-the-father-of-western-medicine/ [accessed 26 June 2023]
  7. Digital Around the World. Data Reportal. https://datareportal.com/global-digital-overview [accessed 26 June 2023]
  8. S Kemp. Digital 2023 April Global Statshot Report. Data Reportal. 27 April 2023. https://datareportal.com/global-digital-overview [accessed 26 June 2023]
  9. OpenAI. Introducing the ChatGPT app for iOS. 18 May 2023. https://openai.com/blog/introducing-the-chatgpt-app-for-ios [accessed 26 June 2023]
  10. TC Pham, CM Luong, VD Hoang, et al. AI outperformed every dermatologist in dermoscopic melanoma diagnosis, using an optimized deep-CNN architecture with custom mini-batch logic and loss function. Scientific Reports 2021; 11: 17485. DOI: 10.1038/s41598-021-96707-8
  11. J Chen, L Wu, J Zhang, et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Scientific Reports 2020; 10: 19196. DOI: 10.1038/s41598-020-76282-0
  12. LL Plesner, FC Müller, JD Nybing, et al. Autonomous Chest Radiograph Reporting Using AI: Estimation of Clinical Impact. Radiology March 2023; 307(3). DOI: 10.1148/radiol.222268
  13. V Gulshan, L Peng, M Coram, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016; 316(22): 2402–2410. DOI:10.1001/jama.2016.17216
  14. M Christ, F Grossmann, D Winter, et al. Modern triage in the emergency department. Deutsches Ärzteblatt International December 2010; 107(50): 892-898. DOI: 10.3238/arztebl.2010.0892
  15. M van Veen and HA Moll. Reliability and validity of triage systems in paediatric emergency care. Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine 27 August 2009; 17(38). DOI: 10.1186/1757-7241-17-38
  16. Office for National Statistics. Exploring the UK’s digital divide. 04 March 2019. https://www.ons.gov.uk/peoplepopulationandcommunity/householdcharacteristics/homeinternetandsocialmediausage/articles/exploringtheuksdigitaldivide/2019-03-04 [accessed 26 June 2023]
  17. M Powell. Socialism and the British National Health Service. Health Care Analysis September 1997; 5(3): 187-194. DOI: 10.1007/BF02678377

Featured image: Photo by Kevin Ku on Unsplash

Ethics of the Ordinary is a regular column on BJGP Life that explores ethical and moral concerns relevant to general practice and primary care.
