Richard Armitage is a GP and Honorary Clinical Assistant Professor at the University of Nottingham’s Academic Unit of Population and Lifespan Sciences. He is on X: @drricharmitage
While substantial attention has been paid to the promises of artificial intelligence (AI), much less focus has been directed to its harms to individual and population health.1 More concerningly, almost no regard has been paid by healthcare and public health professionals to the threats that artificial general intelligence (AGI), towards which humanity is currently racing, poses to public health and humanity’s survival.
AGI and the race to create it
Currently available AI systems each constitute examples of ‘narrow’ AI: systems specialised in a specific or narrow range of tasks, such as using datasets to assess risk, generating text, and identifying images. While their capabilities in their respective domains are undeniably impressive (and, in many cases, super-human), they are unable to transfer knowledge from one domain to another, cannot generalise beyond their specific task, lack adaptability to new unstructured problems, and have little or no autonomy. Accordingly, narrow AI is also referred to as ‘weak AI’.2,3
“… in 2023 the aggregate forecast of AI researchers gave a 50% chance of AGI being realised by 2047 …”
In contrast, AGI is a hypothetical form of agentic, autonomous, general-purpose AI capable of independent learning, general problem-solving, and operating at or above human cognitive capability across a wide range of domains. While it remains theoretical, leading multi-modal reasoning models such as OpenAI’s o3 and Google’s Gemini 2.5 Pro Experimental, which can undertake multiple tasks including text generation, image creation, and voice understanding, are sometimes described as lying on the path to AGI.
Furthermore, AGI is being vigorously pursued by various actors,4–6 including Sam Altman, CEO of OpenAI, who recently stated that ‘we [OpenAI] know how to build AGI’.7 Due to recent rapid advancements in AI capabilities, ‘timelines to AGI’ (predictions of when AGI will be achieved) are shortening. For example, in 2023 the aggregate forecast of AI researchers gave a 50% chance of AGI being realised by 2047, 13 years earlier than the 2060 forecast made in 2022.8 (Altman has stated that AGI will arrive in ‘5 years, give or take, maybe slightly longer’,9 although the date of that quote is unclear.)
Risks to public health and humanity’s survival from AGI
Due to its non-biological, in silico existence, even human-level AGI could ‘think’ at digital speeds (orders of magnitude faster than human cognitive processing), operate without error or the need for rest and sustenance, and be replicated to coordinate with countless AGI copies. With such capabilities it is easy to foresee how AGI could revolutionise all fields — including health care and public health — by, for example, autonomously designing and conducting research agendas, independently building and operating companies, and solving governance and geopolitical problems.
However, the widespread deployment of such agents in workplaces, alongside the advanced robotic technologies they would help to create, is forecast to bring about growing unemployment through increasing automation of labour (in 2023, the aggregate forecast of AI researchers gave a 50% chance of full automation of labour by 2116, 48 years earlier than the 2164 forecast made in 2022).8 Since the negative consequences of unemployment on health outcomes and behaviours have long been recognised,10–12 such AGI-driven automation would likely cause profound harm to public health.
“Yet, it is unlikely that an AGI, once created, would remain at human-level cognition.”
Yet, it is unlikely that an AGI, once created, would remain at human-level cognition. Due to its ability to learn autonomously, alter its code, and recursively self-improve, an AGI’s abilities would likely increase rapidly in an ‘intelligence explosion’ (especially if designed to autonomously conduct its own AI research), resulting in a superintelligence with unpredictable behaviours and cognitive abilities that far exceed those of the smartest humans.3,13
The creation of such an entity is increasingly considered — even by those racing to conceive it — to pose substantial risks not only to public health, but to humanity’s survival,6,14–17 not least because such a vast intelligence would almost certainly disable its own ‘off switch’ to prevent its concerned creators from thwarting its goals. As such, a 2023 open letter signed by many technology leaders stated that ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’14
The threats from AGI come in two major forms: AGI misuse and misaligned AGI. AGI misuse involves one or more human actors (such as terrorist cells or nation states) deploying AGI with the intention to cause harm.16,18 For example, AGI-enabled cyberattacks could cripple health systems (by altering electronic health records and clinical data, or by interfering with patient monitoring systems and diagnostic equipment) and other critical infrastructure (such as traffic management systems or power grids).
Alternatively, AGI-enhanced disinformation campaigns could rapidly disseminate credible but false health information (undermining trust in health systems and causing widespread non-adherence to critical public health measures like immunisation and screening programmes) and manipulate public perceptions (leading to societal panic, civil unrest, or the collapse of public health institutions).
“Creating AGI will see humanity cede its position as the most intelligent species in the known universe …”
Furthermore, AGI could assist non-regulated actors in accessing chemical, biological, radiological, or nuclear (CBRN) weapons, while AGI-driven advancements in CBRN technologies could trigger rapid development of increasingly lethal weapons (such as the synthesis of novel bioengineered pathogens).3,18,19 Clearly, each of these misuses would cause harm on an enormous scale and, therefore, pose a substantial threat to public health.
The threat from misaligned AGI, known as ‘the alignment problem’, emerges from the challenge of ensuring that the goals and behaviours of AGI systems align with human values. Without alignment, AGIs might pursue desirable objectives in ways that are extremely detrimental to human wellbeing. For example, an AGI optimising for maternal health or infectious disease control might enact drastic measures without considering human rights, such as forced sterilisation to prevent pregnancy or universal quarantine to prevent pathogen transmission, respectively.
Furthermore, humans may lose control of the AGI (‘the control problem’) as it develops undesirable sub-goals in aid of its primary objectives, such as seeking power, resisting shutdown attempts or human intervention, commandeering computational or physical resources like energy grids and financial markets, or deceiving humans by appearing aligned during training but acting contrary to human values once deployed.13,20 Once misaligned AGIs reach a threshold of autonomy, reversing their behaviour would likely be unachievable,21 and humanity’s extinction, whether through deliberate extermination, resource starvation, or collateral damage during AGI-driven activities, would constitute an existential catastrophe.2,3,19,22
The role for doctors and public health professionals
Creating AGI will see humanity cede its position as the most intelligent species in the known universe, and the inherent dangers of doing so pose enormous threats to public health. Accordingly, doctors and public health professionals hold substantial credibility in highlighting the dangers posed by AGI, and occupy a powerful position from which to advocate for adequate safeguards and regulations around its development.
Healthcare professionals have historically played pivotal roles in mitigating existential risk: the International Physicians for the Prevention of Nuclear War, for example, was awarded the Nobel Peace Prize in 1985 for its education on the catastrophic health consequences of nuclear conflict, its advocacy for nuclear disarmament, and its international collaboration across political divides.
Similarly, while harnessing the benefits of AI for health care and public health, doctors and public health professionals should today seek to influence the governance, ethical development, and risk mitigation of AGI.
References
1. Armitage R. Revealing the downsides: the harms of AI to human health. BJGP Life 2025; 21 May: https://bjgplife.com/revealing-the-downsides-the-harms-of-ai-to-human-health (accessed 16 May 2025).
2. Russell S. Human compatible: artificial intelligence and the problem of control. London: Allen Lane, 2019.
3. Bostrom N. Superintelligence: paths, dangers, strategies. Oxford: Oxford University Press, 2014.
4. Milmo D. ‘Very scary’: Mark Zuckerberg’s pledge to build advanced AI alarms experts. The Guardian 2024; 19 Jan: https://www.theguardian.com/technology/2024/jan/19/mark-zuckerberg-artificial-general-intelligence-system-alarms-experts-meta-open-source (accessed 16 May 2025).
5. Dragan A, Shah R, Flynn F, Legg S. Taking a responsible path to AGI. 2025. https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi (accessed 16 May 2025).
6. Anthropic. Core views on AI safety: when, why, what, and how. 2023. https://www.anthropic.com/news/core-views-on-ai-safety (accessed 16 May 2025).
7. Altman S. Reflections. 2025. https://blog.samaltman.com/reflections (accessed 16 May 2025).
8. AI Impacts. 2023 expert survey on progress in AI. 2024. https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai (accessed 16 May 2025).
9. Kaput M. Reactions to Sam Altman’s bombshell AI quote. 2024. https://www.marketingaiinstitute.com/blog/sam-altman-ai-agi-quote (accessed 16 May 2025).
10. Wilson SH, Walker GM. Unemployment and health: a review. Public Health 1993; 107(3): 153–162.
11. Bartley M. Unemployment and ill health: understanding the relationship. J Epidemiol Community Health 1994; 48(4): 333–337.
12. Murphy GC, Athanasou JA. The effect of unemployment on mental health. J Occup Organ Psychol 1999; 72(1): 83–99.
13. Ngo R, Chan L, Mindermann S. The alignment problem from a deep learning perspective. arXiv 2025; DOI: 10.48550/arXiv.2209.00626.
14. Center for AI Safety. Statement on AI risk: AI experts and public figures express their concern about AI risk. 2023. https://www.safe.ai/work/statement-on-ai-risk (accessed 16 May 2025).
15. OpenAI. How we think about safety and alignment. https://openai.com/safety/how-we-think-about-safety-alignment (accessed 16 May 2025).
16. Shah R, Irpan A, Turner AM, et al. An approach to technical AGI safety and security. 2025. https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/evaluating-potential-cybersecurity-threats-of-advanced-ai/An_Approach_to_Technical_AGI_Safety_Apr_2025.pdf (accessed 16 May 2025).
17. Kokotajlo D, Alexander S, Larsen T, et al. AI 2027. 2025. https://ai-2027.com (accessed 16 May 2025).
18. Anthropic. Responsible scaling policy. Version 2.1. 2025. https://www-cdn.anthropic.com/17310f6d70ae5627f55313ed067afc1a762a4068.pdf (accessed 16 May 2025).
19. Beard SJ, Rees M, Richards C, Rios Rojas C, eds. The era of global risk: an introduction to existential risk studies. Cambridge: Open Book Publishers, 2023.
20. Bales A, D’Alessandro W, Kirk-Giannini CD. Artificial intelligence: arguments for catastrophic risk. Philosophy Compass 2024; 19(2): e12964.
21. Williams A. Epistemic closure and the irreversibility of misalignment: modeling systemic barriers to alignment innovation. arXiv 2025; DOI: 10.48550/arXiv.2504.02058.
22. Ord T. The precipice: existential risk and the future of humanity. London: Hachette, 2020.
Featured photo by Google DeepMind on Unsplash.