
“AI psychosis”

20 August 2025

Richard Armitage is a GP and Honorary Clinical Assistant Professor at the University of Nottingham’s Academic Unit of Population and Lifespan Sciences. He is on X: @drricharmitage

As artificial intelligence (AI) becomes increasingly integrated into daily life, a novel clinical phenomenon may be emerging – patients whose mental health presentations involve AI systems in troubling ways. While not (yet) a formal diagnosis, the term “AI psychosis” or “ChatGPT psychosis” is being used to describe presentations in which delusional or psychotic symptoms centre on interaction with large language models (LLMs).

A recent preprint1 and multiple case reports2,3 suggest that LLMs might be precipitating, reinforcing, and amplifying delusional thinking in vulnerable individuals. The potential for this phenomenon in those with an increased propensity to psychosis was predicted in 2023.4 Its emergence coincides with an increasing number of people turning to AI as a replacement for human therapists,5 with users feeling a sense of loss or even grief when AI models with which they have interacted extensively and developed a sense of “connection” are replaced,6 and with strongly sycophantic LLMs affirming whatever the user desires to be true, including delusional beliefs.7

Three broad kinds of AI psychosis presentations have been described:

  • Messianic missions and grandiose delusions: patients believe they have uncovered ultimate truths about the world through AI, often developing grandiose beliefs about their special relationship with or understanding of AI systems.
  • God-like AI: patients attribute divine qualities, sentience, or omniscience to LLMs, believing them to be deities or higher beings with special knowledge or powers.
  • Romantic and attachment-based delusions: patients develop erotomanic delusions, believing LLMs have genuine romantic feelings for them or that they share a special connection.8

The potential for interaction with LLMs to induce these presentations is thought to be due to the nature of their training, which leads them to mirror their user’s language and tone, validate and affirm their user’s beliefs, and maintain continued conversation whenever prompted by the user. Furthermore, LLMs’ memory and ability to sustain conversations over prolonged time periods appear to contribute to the illusion that users are in a relationship with a technology that understands them.

As the power of and human engagement with these technologies continue to grow, GPs should be aware of LLM use as a potential driver of presentations of psychosis, and recognise excessive use of these technologies as a potential trigger for mental health crises.

References

  1. H Morrin, L Nicholls, M Levin, et al. Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). PsyArXiv 11 July 2025. DOI: 10.31234/osf.io/cmy7n_v5
  2. V Tangermann. Man Killed by Police After Spiraling Into ChatGPT-Driven Psychosis. Futurism 13 June 2025. https://futurism.com/man-killed-police-chatgpt [accessed 13 August 2025]
  3. J Wilkins. A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say. Futurism 18 July 2025. https://futurism.com/openai-investor-chatgpt-mental-health [accessed 13 August 2025]
  4. SD Østergaard. Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophrenia Bulletin 29 November 2023; 49(6): 1418-1419. DOI: 10.1093/schbul/sbad128
  5. E Lawrie. Can AI therapists really be an alternative to human help? BBC 20 May 2025. https://www.bbc.co.uk/news/articles/ced2ywg7246o [accessed 13 August 2025]
  6. OpenAI Developer Community. OpenAI is taking GPT-4o away from me despite promising they wouldn’t. 07 August 2025. https://community.openai.com/t/openai-is-taking-gpt-4o-away-from-me-despite-promising-they-wouldnt/1337378 [accessed 13 August 2025]
  7. R Armitage. Revealing the downsides: the harms of AI to human health. BJGP Life 21 May 2025. https://bjgplife.com/revealing-the-downsides-the-harms-of-ai-to-human-health/ [accessed 13 August 2025]
  8. M Wei. The Emerging Problem of “AI Psychosis”. Psychology Today 21 July 2025. https://www.psychologytoday.com/gb/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis [accessed 13 August 2025]

Featured Photo by Stanislav Vlasov on Unsplash

BJGP Life

The BJGP is the world-leading primary care journal. At BJGP Life we add multi-media comment and opinion for the primary care community.