Clinicians and researchers should remain abreast of developments in artificial intelligence (AI), as the emergence of ‘AI psychosis’ marks the first time people are creating delusions together with technology, an expert has said.

Speaking on an episode of GlobalData’s Instant Insights podcast, Dr. Hamilton Morrin, a psychiatrist and doctoral fellow at King’s College London, said: “This technology is developing at quite a rapid pace. In some areas, it hasn’t necessarily fully delivered yet on the promises made, but that doesn’t change the fact that it is very much different from what we’ve seen before. In the past, you could describe people as having delusions about technology; now, for the very first time, people are having delusions with technology – this co-creation of delusional beliefs for a kind of echo-chamber-of-one effect. Some people have even used the term a ‘digital folie à deux’.

“So it really is important, I think, for clinicians and researchers to remain abreast of developments in this area and to understand how people are using these tools.”

Morrin appeared on the podcast after so-called AI psychosis made the news last week, when Mustafa Suleyman, the CEO of Microsoft’s consumer AI division, raised concerns about the growing number of cases being reported.

Suleyman wrote: “I’m growing more and more concerned about what is becoming known as the ‘psychosis risk’, and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.”

Referencing Suleyman’s article, Morrin, who co-authored the recent paper Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it), noted that, while AI may not yet be at the point of seeming conscious, depending on what measure is used, for many people it already appears to be.

“Many people already feel that they’re interacting with something conscious,” he said. “And, even if that’s not the case, if they feel that way, that’s going to have an impact on their mental state and their emotional dependence.”

Presentation of AI psychosis

Morrin was quick to note that the term ‘AI psychosis’ may be a misnomer, with observations so far really only indicating delusions – “fixed, firm, false beliefs”. Psychosis, meanwhile, is a broader syndrome that can include delusions but also hallucinations and other symptoms, and can occur in schizophrenia, mood disorders and a range of other mental and medical conditions.

“In these cases, at least from what we could see, from what was reported anecdotally, we only really saw evidence of delusions – none of the other kind of hallmark symptoms that you might see in a more classic psychotic disorder,” he explained. “Specifically, the delusions we saw had three main flavours, one of which was people believing that they’d had an awakening of sorts to the true nature of reality in a metaphysical sense. Another theme was people believing that they had formed contact with a sentient, powerful, all-knowing artificial intelligence. And the third theme was that of people developing intense emotional bonds and attachments with the AI chatbot in question.”

In addition to noting that there has yet to be a comprehensive study of AI psychosis, Morrin made clear that cases do not appear to be especially widespread. He added, though, that the rapid advancement of the technology and the new territory being charted make it a cause for concern.

“I want to emphasise that we don’t necessarily know how common this issue is,” he said. “And I want to emphasise that if this was something causing psychosis out of nowhere – completely de novo psychosis – we’d be seeing massive increases in presentations to A&E departments across the country and worldwide. I can say, at least for now, that certainly isn’t the case, so we’re not dealing with a new epidemic.

“But, given what we know about just how debilitating psychosis can be and how life-destroying it can be for the person suffering it and those around them, it’s certainly something that companies should take notice of and do as much as possible to address – and, even outside the realm of psychosis, this issue of emotional dependence is a growing matter that merits attention and consideration in terms of safeguards and collaboration with experts in the field.”

AI psychosis safeguards

Of potential safeguards that could be put in place, he commented: “There was a short piece in Nature in which four safeguards were proposed by Ben-Zion: that AI should continually reaffirm its non-human status; that chatbots should flag patterns of language in prompts indicative of psychological distress; that there should be conversational boundaries (i.e. no emotional intimacy or discussion of certain risky topics such as suicide); and that AI platforms must start involving clinicians, ethicists and human-AI specialists in auditing emotionally responsive AI systems for unsafe behaviours.

“Beyond that, we suggest that there may also be some further safeguards: limiting the types of personal information that one can share, in order to protect privacy; companies communicating clear and transparent guidelines for acceptable behaviour and use; and the provision of accessible tools for users to report concerns, with prompt and responsive follow-up to ensure trust and accountability.”

Morrin added: “I think it’s incumbent on us as clinicians and researchers to also meet people where they’re at and try to help them on a day-to-day basis if they are using these models. So we propose that all clinicians should have a decent understanding of current LLMs [large language models] and how they’re used, and that they should be comfortable asking their patients how much they use them and in what capacities they do so.”
