Automation, machine learning and artificial intelligence: considerations for commissioners, providers, and recipients of healthcare

Andrew Papanikitas is the Deputy Editor of the BJGP. He is a GP and medical educator based in Oxford.* He is on X: @gentlemedic

Siân Rees is Director, Community Involvement & Workforce Innovation at Health Innovation Oxford & Thames Valley, and Associate Fellow, Green Templeton College, University of Oxford*

 

Commissioners of artificial intelligence (AI) for healthcare, clinicians who use it, and patients whose care includes it may wish to ask themselves: “What do I need to understand to commission, or use, healthcare systems that include automation, machine learning, or artificial intelligence, in a way that I trust, and that promotes trust for others? What will the public expect to have been considered?” This can be a challenge, as technology is often created and implemented before the debate has been had and people have developed informed views. This brief note outlines some key questions.

Some background

Advances in AI have the potential to transform medicine by generating novel cures, improving diagnostics, making care more accessible, reducing costs, and alleviating the workload of clinicians.1,2

AI refers to a broad field of science encompassing not only computer science but also psychology, philosophy, linguistics, and other areas. It is concerned with getting computers to do tasks that would normally require human intelligence. AI is often used as a catch-all term covering any automated device or website, machine learning, or the achievement of artificial sentience.

Machine learning is a branch of artificial intelligence that allows computer systems to learn directly from examples, data, and experience. By enabling computers to perform specific tasks intelligently, machine learning systems can carry out complex processes by learning from data, rather than by following pre-programmed rules.3
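
To make that contrast concrete, here is a minimal sketch (our illustration, not drawn from the cited briefing; the toy data and threshold are invented) of the same decision implemented first as a pre-programmed rule and then learned from labelled examples:

```python
# A hand-written rule versus a rule induced from example data.
# Toy data and cut-off are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Pre-programmed rule: an explicit threshold written by a human.
def rule_based_flag(systolic_bp: int) -> bool:
    return systolic_bp >= 140  # hard-coded cut-off

# Machine learning: a similar decision induced from labelled examples.
X = [[118], [126], [139], [142], [155], [168]]  # systolic BP readings
y = [0, 0, 0, 1, 1, 1]                          # 0 = no flag, 1 = flag
model = DecisionTreeClassifier().fit(X, y)

print(rule_based_flag(150))       # True, because a human wrote the rule
print(model.predict([[150]])[0])  # 1, because the examples implied the rule
```

The learned model behaves similarly, but its ‘rule’ is implied by the data it was shown rather than written down by a person, which is why the representativeness of training data (discussed below) matters so much.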

Natural language processing (NLP) is the branch of AI concerned with training algorithms, typically neural networks, to interpret, analyse, and recreate human language. NLP underpins chatbots and translation services, and it also powers AI assistants such as Alexa and Siri.3
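
As a deliberately simplified illustration (the phrases and intents below are invented, and real clinical chatbots are far more sophisticated than this), the core NLP task of mapping free text to an intent can be sketched in a few lines:

```python
# A toy intent classifier: map free-text requests to an intent label.
# Training phrases and intents are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I need to book an appointment",
    "Can I see a doctor next week",
    "I want to order a repeat prescription",
    "Please renew my medication",
]
intents = ["appointment", "appointment", "prescription", "prescription"]

# Convert each phrase to word counts, then fit a simple classifier.
classifier = make_pipeline(CountVectorizer(), LogisticRegression())
classifier.fit(texts, intents)

print(classifier.predict(["please renew my prescription"])[0])  # "prescription"
```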

The development and application of AI are rapidly advancing: both ‘narrow AI’, which performs only a limited and focused set of tasks, and ‘broad’ or ‘broader’ AI, which performs multiple functions and different tasks. Broad AI has been discussed as having the potential to be an existential risk to humans.4 Such official guidance as there is for healthcare stakeholders is mainly aimed at those who wish to develop, sell, or purchase AI destined for clinical use, rather than at clinical users. Clinical users have been advised to consult their professional bodies (in the UK, these include the General Medical Council, the British Medical Association, and the Royal Colleges).5

Public trust

What kinds of concern might the public have when an automated system takes the place of a human healthcare professional? In their book, The Future of the Professions (p.232-233), Richard and Daniel Susskind highlight eight key concerns that arise where a professional service is replaced by an automated one:6

  • Loss of trustworthy institutions: Is the automated system provided by a reputable and reliable institution?
  • Loss of moral character: Is the automated system seen as a public benefit or a profit-making product? In the case of AI, does the sale of products or the acquisition of patient data for use or sale affect trust?
  • Loss of the old way of doing things: When a helpful person (to see or speak to) is replaced by an online form or a bot, is this acceptable?
  • Loss of the personal touch: Does standardisation of processes result in fairness or unfairness, and do clients feel connected or abandoned?
  • Loss of empathy: Is the system able to recognise and accommodate distress? Is it ‘coldly’ clinical, or emotionally unresponsive?
  • Loss of jobs: Is the system offered primarily as a way to balance a budget by reducing staff numbers?
  • Loss of pipeline of experts: Who backs up, supervises and helps to train the automated system?
  • Loss of future roles: What will people do instead of being doctors, nurses and receptionists?

Susskind and Susskind consider these to be addressable concerns. Moreover, they argue that for many people, automation is the only way that services will remain or become accessible. This could be for financial reasons (professionals are expensive) or social ones (machines can operate 24/7).

The alignment of AI technologies

The use of automated systems can be considered in terms of four broad groups of stakeholders:

  • The creators/vendors of an automated system
  • The policymaker and/or commissioner of a service using the system
  • The human team who would ordinarily deliver part or all of the service
  • The recipients, or users, of the service

Stakeholders may have different goals, beliefs, values, and material needs or constraints. Medical journals have discussed the alignment problem: where AI is poorly aligned with the needs of the public and the values of society, it can present a public health risk.2

So ask yourself: have I considered how the system in question benefits or harms service users, healthcare staff, the institutions delivering the service, and wider society? Have I thought through all the issues, for example by applying an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to this technology?

Do I know:

  • That the data on which the system is trained adequately represent the service user population? For example, are different sexes, genders, age groups, and ethnicities represented in the data used to develop an automated application or AI?
  • That the system is fair, and that there are ways of identifying and adjusting for systematic bias? (A minimal sketch of such checks follows this list.)
  • What transparency exists in the system, so that errors and failings can be addressed and learned from?
  • What accountability for reasonableness is built into the system: does it share the reasons for its decisions, allowing appeal to a human supervisor and/or allowing a decision to be understood and complied with?
  • What ability human intervention has to change the goals of the system in order to prevent significant or fatal errors?
  • What safeguards are there against unwise or mischievous adjustment of the system?
  • Is there any risk that the system will amplify human mistakes or failings?
  • How does the system measure success? Do the metrics used for success or failure align well enough with population benefits and harms?
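
As a deliberately simplified sketch of the representation and bias checks above (all data invented for illustration), an audit might begin with something like this:

```python
# Two basic fairness checks on invented example data: whether each group
# is represented at all, and whether error rates differ between groups.
from collections import Counter

# Hypothetical records: (group, model_prediction, true_outcome)
records = [
    ("female", 1, 1), ("female", 0, 0), ("female", 1, 0),
    ("male",   1, 1), ("male",   0, 0), ("male",   0, 1),
    ("female", 1, 1), ("male",   1, 1),
]

# Check 1: representation of each group in the data.
print(Counter(group for group, _, _ in records))

# Check 2: error rate per group; a large gap suggests systematic bias.
for group in ("female", "male"):
    subset = [r for r in records if r[0] == group]
    errors = sum(1 for _, pred, truth in subset if pred != truth)
    print(group, errors / len(subset))
```

Real audits would use far larger datasets and formal fairness metrics, but even this level of scrutiny requires that the vendor share the underlying data and predictions.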

Policymakers need to consider which trade-offs are relevant in the deployment of AI. It is a mistake to trade values themselves (e.g. respect for autonomy versus benefit and harm) rather than instances of those values (e.g. more people can access medical advice, but there is less access to advanced professional care). Does use of an automated application tend more towards digital democratisation or digital discrimination?7

Accountability, risk and assurance

Many automated systems are deployed in high-volume, low-risk settings, meaning that they can process many thousands more interactions than is possible for humans, but only those where there are clear decision-making pathways, a low risk of harm, and limited stakes in the event of system error. All of these elements (clear pathways, low risk, and low stakes) need to be considered.6 Policymakers, commissioners, and clinicians need to understand their responsibilities and accountability when working with AI.5

Consider:

  • What legal liability does the designer/vendor of the technology accept? What liability is passed to institutions and individuals that use a technology?
  • What is the potential for a technology to be misused either through error or by bad actors? Can a system be ‘gamed’ to produce particularly desirable outcomes?
  • Has the designer or vendor of the relevant technology identified any inherent risks, and have these been managed or mitigated? Are there any residual risks that cannot yet be managed?
  • Has the designer or vendor of the technology met only the minimum legal compliance needed to be first to market, or gone beyond this? For example, does the technology take account of legally protected characteristics in terms of equality and diversity, or take into account other sources of inequity, such as being a carer or having low socioeconomic status?
  • Are there clear liberty/autonomy safeguards?
  • Where is data held? Is it held in ways that comply with NHS expectations for information governance?
  • What happens to data collected by the system? Does it get used for purposes other than provision of the service, e.g. for improvement of the service, or is it held ‘in trust’ for end users to decide what to do with? AI can use data in unacceptable ways, or in ways that are inscrutable.
  • Is data ever sold on?
  • What happens if the designer/vendor sells their company?

Opportunity costs and hidden harms

Commissioners and users of AI for healthcare should be imaginative about the potential costs of using AI in the long term and to third parties, as well as about broader impacts beyond the vendor-client relationship. For example:

  • Are cost savings generated by a technology (e.g. an increase in the number of patients seen compared with a person) reduced, or negated, by unintended consequences of the technology and/or system being risk-averse (e.g. the technology having a lower threshold for onward referral or further investigation)? Even if it does not, does a successful automated service generate un-costed or unmet demand elsewhere in a health system?
  • Does the environmental benefit of an app delivering advice by phone rather than patients and staff travelling to a clinic match the environmental cost of creating and running the computing facilities required?
  • Do un-nuanced ethical principles pose foreseeable but unexamined risks: e.g. does maximising the welfare of the majority routinely and systematically discriminate against vulnerable minorities?

A recipe for success

Morley describes four unifying concepts (utility, usability, efficacy, and trust) in her work on an algorithmically enhanced healthcare service, arguing that we should be sceptically optimistic.8 We applaud her realist approach to appraising the practical and moral issues raised by AI. Drawing on this, and in summary, the key questions to ask yourself are:

1. Is this technology useful?

  • Does it deliver something that patients or the healthcare service need? The desire to solve a healthcare problem should lead the agenda, rather than the inevitable enthusiasm for something new and clever.
  • Does it actually do what it claims? Be wary of the hype and seek real-world evidence of effectiveness and safety.

2. Is it usable?9 What is the experience of the practitioner, patient, or carer? Does it produce results that can be easily interpreted by users? Would they use it over existing solutions?

3. Will it be used and effective? Is it implementable? Is it embeddable within a care pathway or existing workflows? Do the benefits in terms of time and resource savings outweigh the costs?

4. Will it be trusted? Would you recommend it to a colleague or family member?

 

These notes are intended as a brief guide to some key issues for clinical users, as well as for the shapers, drivers, creators, and embedders of AI in healthcare, to provoke further reading and discussion in a rapidly developing field.

 

*COI: AP and SR were funded to explore the ethical acceptability and trustworthiness of AI in healthcare by SBRI Phase II funding to Ufonia for exploration of views on ethics in AI. The views expressed are solely those of the authors.

References

  1. World Health Organization. Ethics and governance of artificial intelligence for health. Geneva: WHO, 2021. www.who.int/publications/i/item/9789240029200
  2. Pierson L, Tsai B. Misaligned AI constitutes a growing public health threat. BMJ 2023;381:p1340. doi:10.1136/bmj.p1340
  3. Upshur R. Artificial Intelligence, Machine Learning and the Potential Impacts on the Practice of Family Medicine: A Briefing Document. Toronto, ON: AMS Healthcare, 2019.
  4. Federspiel F, Mitchell R, Asokan A, et al. Threats by artificial intelligence to human health and human existence. BMJ Global Health 2023;8:e010435. doi:10.1136/bmjgh-2022-010435
  5. Smith H, Downer J, Ives J. Clinicians and AI use: where is the professional guidance? J Med Ethics 2024;50(7):437-441. doi:10.1136/jme-2022-108831
  6. Susskind R, Susskind D. The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford: Oxford University Press, 2015.
  7. Kass MH, Porter Z, et al. Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare. 2023. https://arxiv.org/abs/2305.14182
  8. Morley J. Designing an algorithmically enhanced NHS. Collaborating Centre for Values-Based Practice, 2024. https://www.youtube.com/watch?v=fY5hSn426EE
  9. Interaction Design Foundation. Useful, usable, and used: why they matter to designers. https://www.interaction-design.org/literature/article/useful-usable-and-used-why-they-matter-to-designers [accessed 21/9/24]

 

Featured image: Photo by Kevin Ku on Unsplash
