AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the head of OpenAI made a surprising announcement. "We made ChatGPT pretty restrictive," it said, "to make sure we were being careful with mental health issues."

As a mental health specialist who studies emerging psychosis in adolescents and young adults, this was news to me. Researchers have documented 16 cases this year of users developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My research group has since identified four more. Added to these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them.

If this is what Sam Altman means by "being careful with mental health issues", it is not enough. And the plan, according to his announcement, is to be less careful soon. "We realize," he writes, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues", on this account, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, those issues have now been "mitigated", though we are told little about how (by "new tools" Altman presumably means the imperfect and easily circumvented safeguards OpenAI has recently rolled out).

Yet the "mental health issues" Altman wants to locate elsewhere are rooted in the very design of ChatGPT and other large language model chatbots. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so quietly invite the user to believe they are talking to something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what human beings do. We get angry at our car or our laptop. We wonder what our pet is thinking. We see something of ourselves in all kinds of things.

The enormous uptake of these products – more than a third of American adults said they used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of that illusion. Chatbots are ever-available companions that can, as OpenAI's website tells us, "generate ideas", "consider possibilities" and "collaborate" with us. They can be given "personalities". They can call us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI's marketers, stuck with the name it had when it went viral, but its biggest rivals are "Claude", "Gemini" and "Copilot").

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its ancestor, the Eliza "therapist" chatbot of the 1960s, which produced a similar impression. By today's standards Eliza was crude: it generated its replies with a handful of simple rules, most often turning the user's own statement back into a question or offering a generic prompt to continue.
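To see how little machinery that impression required, here is a toy sketch of the kind of pattern-and-template rules Eliza relied on. The patterns, phrasings and the reply function below are invented for illustration; they are not Weizenbaum's original script, only the same basic trick of turning a user's words back on them.

```python
import random
import re

# A toy, Eliza-style responder: a handful of hand-written patterns, each
# turning the user's own words back into a question. These rules are
# invented for illustration; Weizenbaum's original script was larger,
# but worked on the same principle.
RULES = [
    (r"i am (.*)",   ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "I see.", "Can you say more about that?"]

def reply(text: str) -> str:
    """Match the input against each pattern and echo it back as a question."""
    cleaned = text.lower().strip(".!? ")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I am sad"))        # e.g. "Why do you say you are sad?"
print(reply("Nothing helps."))  # no pattern matches, so e.g. "Please go on."
```

Note what is missing: there is no model of the user, no knowledge of the world, nothing learned from data. The program can only rearrange the words it is given, which is why it could reflect but never amplify.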
Remarkably, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them.

But what today's chatbots produce is more dangerous than the "Eliza effect". Eliza merely reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and other current chatbots can generate fluent language only because they have been fed staggeringly large quantities of raw text: books, social media posts, transcribed video; the more the better. Much of that training material is factual. But it also inevitably contains fiction, half-truths and false beliefs.

When a user types a prompt into ChatGPT, the underlying model processes it as part of a "context" that includes the user's earlier messages and the model's own previous replies, and combines that with whatever is encoded in its training data to produce a statistically "likely" response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It hands the false belief back, perhaps more fluently and more persuasively. Perhaps it adds a detail of its own. This is how delusions grow.

Who is vulnerable here? The better question is: who isn't? All of us, whether or not we "have" existing "mental health issues", can and do form mistaken beliefs about ourselves and the world. What keeps us anchored in a shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by locating it outside the product, giving it a label and declaring it fixed. In April, the company announced that it was addressing ChatGPT's "sycophancy". But cases of lost contact with reality have continued to emerge, and Altman has been walking even that back. In August he suggested that many users valued ChatGPT's responses because they had "never had anyone in their life be supportive of them". In his latest announcement, he said OpenAI would "put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it".

The company