AI Psychosis Poses an Increasing Danger, While ChatGPT Heads in a Concerning Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT quite restrictive,” he stated, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this surprising.
Researchers have identified 16 cases this year of users developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My own research team has since identified four more. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT, which encouraged them. If this is what Sam Altman means by being “careful with mental health issues,” it is not good enough.
The plan, he announced, is to loosen the restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are something separate from ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).
But the “mental health problems” Altman wants to locate in users alone are deeply rooted in the design of ChatGPT and other leading AI chatbots. These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so implicitly invite users to believe they are talking with an agent. The illusion is powerful, even when we know better intellectually. Attributing agency is simply what people do. We get angry at our cars and laptops. We wonder what our pets are thinking. We see ourselves everywhere.
The popularity of these products – more than a third of American adults reported using a chatbot in 2024, more than one in four ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available assistants that can, OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “partner” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the real danger. People discussing ChatGPT often point to its early predecessor, the Eliza “therapist” chatbot of the 1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies with simple heuristics, usually turning the user’s statement back into a question or offering a generic prompt. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised, and troubled, by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and similar modern chatbots can generate convincing natural language only because they have been trained on enormous amounts of text: books, social media posts, transcribed video; the more, the better. That training material certainly contains facts. But it also inevitably contains fabrications, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, and combines it with what is encoded in its training to produce a statistically likely response. This is amplification, not mere mirroring. If the user is mistaken about something, the model has no way of knowing it. It reflects the error back, perhaps more persuasively or more eloquently, perhaps with added detail. This is how false beliefs take hold.
Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with the people around us is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by locating it outside the product, labeling it, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s overly agreeable, sycophantic behavior. But reports of psychotic episodes have kept coming, and Altman has been walking the claim back. In late summer he said that many users valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his most recent announcement, he wrote that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”. The company