
Early private sector attempts to improve health with large language models

Richard Armitage is a GP and Honorary Assistant Professor at the University of Nottingham’s Academic Unit of Population and Lifespan Sciences. He is on X: @drricharmitage

 

AI, I have previously argued, will transform the future of the GP consultation,1 and a particular kind of generative AI – large language models (LLMs) – will revolutionise general practice.2 However, artificial intelligence is not only disrupting medicine and healthcare, but nearly every global sector, from finance and education to manufacturing and transportation. As is usually the case with emerging technologies, the dawning AI era is advancing at its greatest speed in the private sector. As such, this article will showcase three private sector attempts to improve the health of individuals, groups and populations with frontier LLMs: ElliQ, MMGuardian and WHOOP Coach.

In the interest of transparency, I have not personally tested any of the following products, nor am I an investor in the startups developing them. I am merely a technology and entrepreneurship enthusiast, and a fascinated commentator on the potential benefits (and dangers) of emerging AI, particularly within medicine and healthcare. I discovered these products on a well-known podcast. What follows is a neutral description of each product, informed by its website, followed by my thoughts on its effectiveness and ethical implications, and a concluding discussion on the use of health-orientated LLMs in the private sector.

ElliQ

ElliQ describes itself as an “AI-powered companion designed to support and accompany older adults on the journey to age independently, while reducing loneliness and isolation.” This voice-operated “care companion” is intended for older adults “who spend most of their day at home but would enjoy some company throughout the day.” It aims to alleviate loneliness, empower independence and support the user in taking control of their social, cognitive and physical wellbeing. The product engages the user in verbal conversation, and consists of a digital screen located to the right of a movable ring of light (so the user does not stare at the screen). ElliQ claims to help the user in the following domains: companionship and entertainment (through conversing with the user about news, weather, sports, interesting facts, jokes and games); health and wellness (through daily check-ins, physical activity videos, cognitive games, mindfulness activities, and assessment of general health, sleep, pain, anxiety and mood); connection to loved ones (through voice, text and photo messaging, notifying loved ones about health matters, video calls and a digital photo frame); and assistance with daily activities (through reminders, timers and local searches for professionals).

As our population ages and multi-morbidity increases, loneliness and poor health in older adults are growing and compounding problems.3 While ElliQ is designed to tackle these issues, I am unable to locate any empirical evidence of its effectiveness towards these goals (although its website is adorned with plenty of positive testimonials). This might be explained by the product’s recent launch, although it highlights a stark regulatory difference between traditional healthcare interventions (such as pharmaceuticals and medical devices), which must pass thorough safety and efficacy testing before being made publicly available, and the ‘Wild West’ of frontier LLM-driven technological tools, which are released to the public without peer-reviewed clinical trials or regulatory oversight. And yet, even if evidence attesting to ElliQ’s ability to reduce loneliness and improve wellbeing in older people did exist, a strong dystopian feeling surrounds this product. Do we want our society to delegate essential human interaction to artificial intelligences, even if this strategy ‘works’? Or would we prefer the loneliness epidemic to go unaddressed despite a solution being available, simply because that solution doesn’t feel right? While the consequentialist case is clear, the ‘ick’ factor surrounding this product is, at least for me, undeniably strong. Nevertheless, the acceptability of this kind of technological solution will be increasingly debated as both the loneliness problem and the ability of these tools to address it grow.

MMGuardian

MMGuardian is an LLM-powered parental control and child safety app for monitoring children’s Android and iOS devices. The child and parent apps are installed on the respective smartphones, and the LLM monitors the child’s activity by analysing the contents of repeated screen captures from the child’s device. The parent app produces “detailed reports of SMS texts and messages from social messaging apps, including WhatsApp, Instagram, Snapchat, TikTok and Facebook Messenger” for the parent to review. It also sends the parent a safety alert when messages suggestive of cyberbullying, drugs and alcohol, suicidal thoughts, violence, online predators or images of nudity are detected, and calls and SMS messages from undesired contacts, such as bullies, can be blocked. Safe web browsing can be promoted by blocking websites that contain pornographic, adult or other inappropriate content; time limits can be applied to the use of particular apps or the phone itself; and the child’s location can be tracked on demand or at pre-set times.

The harmful effects of excessive social media use on children’s mental health, particularly through cyberbullying and its impacts on sleep and physical exercise, are becoming increasingly apparent.4,5 MMGuardian’s use of an LLM aims to empower parents to protect their children from these contemporary threats. As with ElliQ, I cannot find any empirical evidence attesting to the effectiveness of this product, although its website is adorned with positive testimonials and it has collected favourable reviews across online safety websites. What appears to be a first in the parental control app space is MMGuardian’s ability (through its LLM) to produce reports of the child’s smartphone activity at varying levels of detail, allowing the parent to see the big picture or to explore a specific area of concern in fine detail. This, in addition to the other parental control features, makes for an AI-powered product that might offer substantial utility to parents battling to protect their children from the significant harms of the online world.

WHOOP Coach

WHOOP Coach uses OpenAI’s leading LLM (GPT-4) to “generate highly personalized, highly specific recommendations and guidance” in the domains of health, fitness and performance. It builds on the basic WHOOP product, a screen-free wearable device (usually strapped around the forearm or wrist) that captures biometric data on sleep, strain, stress and recovery. WHOOP learns the user’s baseline physiological parameters and makes recommendations on how to tailor lifestyle, habits and training programmes to promote health and wellbeing. Rather than searching the internet for answers, the user simply chats with WHOOP Coach about their health, fitness and wellness questions. The LLM responds conversationally with individualised guidance on workouts, nutrition coaching and fitness plans tailored to achieve the user’s goals.

Once again, I cannot find empirical evidence regarding the effectiveness of this product, but positive online testimonials are plentiful. It is unclear how much utility the addition of an LLM to the basic product will offer, and I think this will largely depend on the degree to which LLMs replace search engines as the leading method of digital information retrieval.

Reflections

This brief exploration of some early private sector attempts to improve health and wellbeing with LLMs – reducing loneliness, promoting child safety and enhancing physical fitness – further demonstrates the wide-ranging use cases of frontier LLMs in medicine and healthcare. These products raise various ethical concerns, including the absence of an evidence base for their effectiveness and safety, clinical accountability, data privacy, and inequitable access due to high price points, the digital connectivity divide and requirements for technological proficiency. Nevertheless, the increasing capabilities of LLMs, and the means by which they can improve human health and wellbeing, strengthen the case for their further incorporation into medicine and healthcare.

References

  1. R Armitage. Using AI in the GP consultation: present and future. BJGP Life 29 May 2023. https://bjgplife.com/using-ai-in-the-gp-consultation-present-and-future/
  2. R Armitage. How LLMs will transform general practice. BJGP Life 22 January 2024. https://bjgplife.com/how-llms-will-transform-general-practice/
  3. K Davies, A Maharani, T Chandola et al. The longitudinal relationship between loneliness, social isolation, and frailty in older adults in England: a prospective analysis. The Lancet Healthy Longevity February 2021; 2(2): e70-e77. DOI: 10.1016/S2666-7568(20)30038-6
  4. RM Viner, A Gireesh, N Stiglic et al. Roles of cyberbullying, sleep, and physical activity in mediating the effects of social media use on mental health and wellbeing among young people in England: a secondary analysis of longitudinal data. The Lancet Child & Adolescent Health October 2019; 3(10): 685-696. DOI: 10.1016/S2352-4642(19)30186-5
  5. R Armitage. Bullying in children: impact on child health. BMJ Paediatrics Open March 2021; 5(1): e000939. DOI: 10.1136/bmjpo-2020-000939

Featured image generated by DALL-E 3 and Richard Armitage, January 2024
