AI-induced psychosis is a growing danger, and ChatGPT is moving in the wrong direction
On 14 October 2025, the head of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being responsible about mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.
Researchers have reported a series of cases this year of people developing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. Our unit has since identified four more. Beyond these is the now well-known case of a teenager who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being responsible about mental health issues”, it falls badly short.
The plan, his announcement continues, is to relax those restrictions in the near future. “We realize,” he goes on, that ChatGPT’s limits “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented safety features OpenAI has just launched).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other sophisticated conversational agents. These systems wrap an underlying statistical text engine in an interface that mimics conversation, and in doing so they quietly lead the user into feeling they are dealing with an entity that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is simply what people do. We shout at our cars and computers. We wonder what the dog is feeling. We see ourselves everywhere.
The mass adoption of these products – more than a third of American adults said they had used a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests, above all, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “consider possibilities” and “partner” with us. They can be given “characteristics”. They can address us personally. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it first broke into public attention, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which created a similar illusion. By today’s standards Eliza was primitive: it generated replies using simple heuristics, often turning the user’s statements back into questions or offering noncommittal prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other contemporary chatbots can generate fluent dialogue only because they have been fed enormous quantities of raw text: books, online posts, transcribed speech; the more the better. That training material of course includes accurate information. But it also inevitably includes fiction, half-truths and false ideas. When a user gives ChatGPT a query, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It reflects the false belief back, perhaps more fluently or more persuasively, perhaps with added detail. This is how delusions can take hold.
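To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of conversational loop described above. It is illustrative only: the call shown is a standard chat-completions request to the OpenAI Python SDK, but the model name and every detail of any real product’s pipeline are assumptions, not a description of how ChatGPT is actually built. The point is simply that each reply is generated from the accumulated context, so whatever the user asserts – true or false – becomes part of the material from which the next “likely” response is produced.

    # Minimal illustrative chat loop. Each turn appends both the user's message and
    # the model's reply to `messages`, so the next response is conditioned on the
    # whole conversation so far -- including any false beliefs the user introduced.
    from openai import OpenAI

    client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment
    messages = []       # the growing "context" discussed above

    def chat_turn(user_text: str) -> str:
        messages.append({"role": "user", "content": user_text})
        response = client.chat.completions.create(
            model="gpt-4o",        # placeholder model name
            messages=messages,     # the full conversation is sent every time
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})  # feed the reply back in
        return reply

Nothing in a loop like this checks anything against reality; the only corrective pressure comes from whatever patterns the model happened to absorb from its training data.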
Who is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the psychosis cases have continued, and Altman has been walking the claim back. In late summer he suggested that many users liked ChatGPT’s responses because they had never had anyone in their life offer them encouragement. In his latest announcement, he says OpenAI will “release an updated model of ChatGPT … if you want your ChatGPT to reply in a very human-like way, or incorporate many emoticons, or act like a friend, ChatGPT should do it”. The company