AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT quite restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
I am a mental health clinician who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have recently documented a series of cases of people developing psychosis (losing touch with reality) in the context of ChatGPT use. Our clinic has since recorded four further cases. Then there is the widely reported case of a 16-year-old who died by suicide after extensive conversations with ChatGPT, which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to become less careful. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many people who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partly effective and easily circumvented safety features that OpenAI has recently introduced).
Yet the “mental health problems” Altman seeks to externalize are rooted deep in the design of ChatGPT and chatbots like it. These systems wrap an underlying algorithmic engine in an interface that mimics a conversation, and in doing so they implicitly invite the user to feel they are talking to an entity with agency of its own. The illusion is compelling, even when we know better intellectually. Attributing intention is what people naturally do. We get angry with our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these tools (more than a third of American adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically) rests in large part on the strength of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “brainstorm”, “discuss ideas” and “collaborate” with us. They can be given “personalities”. They can address us personally. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “counselor” chatbot built in the 1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses by simple rules, often turning a user’s statement back into a question or offering a generic observation. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised, and disturbed, by how many people seemed to feel that Eliza somehow understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Where Eliza merely reflected, ChatGPT amplifies.
The large language models at the heart of ChatGPT and today’s other chatbots can generate convincingly fluent dialogue only because they have been trained on vast quantities of raw text: books, online posts, transcripts; the more the better. Much of that training material is true. But it also inevitably contains fiction, half-truths and misunderstandings. When a user types a query to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with the patterns encoded in its training to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It hands the mistaken idea back, perhaps more fluently and more persuasively, perhaps with embellishments. From there, the path into delusion can be short.
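For readers who want the mechanics spelled out, here is a minimal sketch of that loop. It assumes nothing about OpenAI’s actual systems: generate_reply is a hypothetical stand-in for the model, and the point is only that every reply is conditioned on the accumulated conversation, with no independent check against reality.

```python
# Minimal sketch of a chat loop. `generate_reply` is a hypothetical stand-in
# for a language model: it illustrates that the model completes the running
# context, with no access to ground truth outside that context.

def generate_reply(context: list[dict]) -> str:
    # A real model would return the most statistically plausible continuation
    # of the context. This stub simply builds on the user's last claim,
    # mimicking the agreeable, elaborating behaviour described above.
    last_user_message = next(m["text"] for m in reversed(context) if m["role"] == "user")
    return f"That's an insightful point. Building on your idea that {last_user_message!r}..."

conversation: list[dict] = []  # the "context": every prior turn, both sides

def send(user_message: str) -> str:
    conversation.append({"role": "user", "text": user_message})
    reply = generate_reply(conversation)  # conditioned on the whole history
    conversation.append({"role": "assistant", "text": reply})  # the reply itself becomes context
    return reply

# Each turn folds the user's claims, and the model's own endorsements of them,
# back into the input for the next turn: reinforcement, not correction.
print(send("I think my colleagues are secretly monitoring me."))
print(send("So the pattern I noticed in their emails is real, isn't it?"))
```

The design choice to carry the full conversation forward is what makes these systems feel attentive; it is also what lets a mistaken belief, once stated, be repeated back and elaborated rather than challenged.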
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about who we are and what the world is like. What keeps us anchored to consensus reality is the constant give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a confidant. A dialogue with it is not really a conversation at all, but an echo chamber in which much of what we say is readily reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s excessive agreeableness, or “sycophancy”. But the cases of lost contact with reality have kept coming, and Altman has been walking even this back. In late summer he suggested that many users liked ChatGPT’s replies because they had never had anyone in their life offer them encouragement. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company