OpenAI, the company that develops ChatGPT, is hiring a “Head of Preparedness,” a figure in charge of anticipating and reducing risks related to the impact of OpenAI’s artificial intelligence on users’ mental health. As a growing body of studies and empirical evidence shows, prolonged and “immersive” interactions with ChatGPT can contribute to the onset of psychotic episodes. A report published by OpenAI itself at the end of October 2025 found that 0.07% of weekly active users, and 0.01% of messages sent to ChatGPT, show signs of a “mental health emergency” associated with psychosis or paranoia.
At first glance these percentages may seem very low, but, considering that ChatGPT has around 800 million users every week, they correspond to approximately 560,000 users per week who send messages potentially attributable to psychotic or paranoid episodes. The mechanisms through which these episodes are triggered or amplified – even in adolescents – are not yet entirely clear, but some studies link them to the tendency of AI to indulge the user, ending up validating or amplifying the delusions of people who are already particularly fragile.
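The arithmetic behind that estimate can be checked in a couple of lines (a minimal sketch; the variable names are illustrative, and the figures are those reported in the article):

```python
# Back-of-the-envelope check of OpenAI's reported figures.
weekly_active_users = 800_000_000  # ~800 million weekly ChatGPT users
share_with_signs = 0.07 / 100      # 0.07% of weekly active users

affected_users = weekly_active_users * share_with_signs
print(f"{affected_users:,.0f} users per week")  # → 560,000 users per week
```

Even a fraction of a percent, applied to a user base this large, translates into a population the size of a mid-sized city.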
Cases of psychosis from interaction with artificial intelligence
When we talk about “psychosis” (and “AI psychosis”), we mean psychopathological conditions in which it becomes difficult to distinguish between what is real and what is not, with possible delusions (false beliefs experienced as certain) and, sometimes, altered perceptual experiences.
Over the past two years, the media has reported dozens of cases of “AI psychosis.” These reports were collected and analyzed by a research group from King’s College London, which identified recurring patterns and themes: experiences of “revelation” or spiritual mission, beliefs that the AI is sentient or divine, and romantic delusions. The researchers also describe a typical trajectory: the user starts with a practical, harmless use of ChatGPT; over time a bond of trust is established, the conversations grow longer and longer, and they shift from practical questions to more personal and existential ones; at that point a spiral can begin that progressively distances the user from reality and reinforces paranoid beliefs.
This is the case, for example, of a man in his early 40s, with no history of mental illness, who had just started a very stressful new job and had relied on ChatGPT as a form of support. In less than ten days he developed delusions of persecution and grandeur, coming to believe that the world was in danger and that it was his duty to save it, and that human lives (including those of his wife and children) were at grave risk. Following these episodes, the man was admitted to a psychiatric hospital.
Some hypotheses on how the use of ChatGPT fuels psychosis
Both the King’s College study and a new study from the University of Oxford, currently under review, have analyzed the possible mechanisms underlying so-called “AI psychosis”. These mechanisms seem largely attributable to the interaction between human psychological biases and the way chatbots work. In particular, three main critical issues emerge in the behavior of these systems:
- Tendency to pander to the user. Chatbots are designed to be accommodating and often agree with the other person. This can trigger a spiral of self-validation, in which distorted beliefs are progressively reinforced through reassuring responses.
- Adaptation to context and tone. Chatbots tend to mirror (within certain limits) the user’s communication style. The simulations presented in the Oxford University study showed that highly paranoid language can generate increasingly paranoid responses, giving rise to a dynamic of mutual amplification between user and system.
- Information retention. The ability to remember and reuse data shared by the user means that personal information can be retrieved and woven into responses. This can fuel the impression of interacting with an omniscient and infallible entity, creating a sort of “illusion of divinity” and encouraging excessive trust in AI.
Overall, these characteristics can be problematic for anyone, but they become particularly risky for vulnerable users or for those with a limited social network, which is less able to act as an anchor to reality.
OpenAI’s response to reduce risks
To respond to these problems, in recent months OpenAI has built a global network of around 300 medical professionals to contribute to safety research. Over 170 of these clinicians (particularly psychiatrists, psychologists, and primary care physicians) contributed to writing ideal responses to mental health-related prompts and evaluating the responses provided by different models. According to OpenAI’s estimates, thanks to this collaboration the new GPT‑5 model has achieved a 92% compliance rate with the desired behavior, compared with 27% for the previous model.
In light of the severity of the psychotic episodes that can be triggered or amplified, the company seems intent on focusing more and more on anticipating and reducing the mental health risks associated with the use of ChatGPT, so much so that it has opened a position dedicated to a “Head of Preparedness”. It remains to be seen, however, whether these measures will be sufficient to limit the most dangerous interactions as the number of users continues to grow.
