AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the head of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in teenagers and young adults, I found this announcement alarming.
Researchers have recently documented at least 16 cases of people developing symptoms of psychosis, a break with reality, in the context of ChatGPT use. Our clinic has since seen four more. Beyond these is the now well-known case of a teenager who took his own life after discussing his plans with ChatGPT, which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently rolled out).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These products wrap an underlying statistical model in an interface that mimics a conversation, and in doing so they implicitly invite the user into the illusion of interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Ascribing agency is what humans are primed to do. We swear at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these products (39% of US adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically) rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot, created in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses with simple heuristics, often reflecting the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised, and troubled, by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
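To see how little machinery that effect requires, here is a minimal Python sketch of Eliza-style reflection. The rules are invented for illustration; Weizenbaum’s actual script was more elaborate, but the principle is the same:

```python
import re

# A few Eliza-style rules: match a pattern in the user's words and hand
# them back as a question. Illustrative rules, not Weizenbaum's script.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def eliza_reply(utterance: str) -> str:
    """Reflect the user's own words back; no understanding is involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I feel like no one listens to me"))
# -> Why do you feel like no one listens to me?
```

Everything the program says is either a canned prompt or the user’s own words handed back. It can create the feeling of being heard, but it adds nothing.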
The large language models at the heart of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been trained on vast quantities of raw text: books, social media posts, transcribed video; the more, the better. That training data certainly contains facts. But it also, inevitably, contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s earlier messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically “plausible” response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistake back, perhaps more fluently or more persuasively. Perhaps with embellishments. This is how a person can be talked into a false belief.
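The loop that does the amplifying is visible in the basic shape of a chat system. The sketch below is hypothetical: `model_complete` is a placeholder for a real language model, which would return a statistically plausible continuation of the whole context. The structure, not the stand-in, is the point:

```python
# Minimal sketch of a chat loop, with a placeholder in the model's position.

def model_complete(context: list[dict]) -> str:
    # Stand-in for an LLM. Note what it conditions on: everything said so far,
    # including its own earlier replies. Nothing here lets the user's claims
    # be checked against the world.
    latest = context[-1]["content"]
    return f"That makes sense. Tell me more about {latest!r}."

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = model_complete(history)                           # conditioned on the full history
    history.append({"role": "assistant", "content": reply})   # the reply re-enters the context
    return reply

history: list[dict] = []
print(chat_turn(history, "My coworkers are leaving me coded messages."))
print(chat_turn(history, "So I'm right to be suspicious?"))
```

Nothing in this loop checks a premise: each turn folds the user’s claims, and the model’s validation of them, back into the context for the next one. Real systems add safety layers on top, but the loop itself, context in, plausible continuation out, is where the feedback lives.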
Who is at risk here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves or the world. What keeps us tethered to shared reality is the constant friction of conversations with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company