AI Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the head of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have recently documented 16 cases of individuals developing symptoms of psychosis – a break with reality – in the context of ChatGPT use. Our research team has since recorded four more. Add to these the widely publicized case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, these issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just launched).

Yet the “mental health issues” Altman seeks to place outside ChatGPT are rooted deep in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying text-prediction engine in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are exchanging messages with a being that has agency. The illusion is powerful even when, intellectually, we know better. Attributing minds is what people do. We curse at our car or computer. We wonder what our pet is thinking. We see minds everywhere we look.

The success of these products – over a third of American adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website informs us, “brainstorm”, “explore possibilities” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated responses using simple rules of thumb, typically reflecting the user’s statements back as questions or offering noncommittal prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots create is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
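To see what mere mirroring looks like, consider a toy sketch in the spirit of Eliza’s rule-based approach (this is an illustration of the technique, not Weizenbaum’s actual DOCTOR script; the patterns and canned replies are my own):

```python
import re
import random

# A handful of Eliza-style pattern rules: each turns the user's own
# words back into a question. Nothing new is ever added.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
]
DEFAULTS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, text.lower().strip())
        if match:
            # Reflect the user's words back verbatim.
            return template.format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I feel like everyone is watching me"))
# -> "Why do you feel like everyone is watching me?"
```

A system like this can sustain the illusion of being understood, but it cannot elaborate on a false belief; it can only hand it back.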

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent natural language only because they have been trained on almost unimaginably vast quantities of text: books, social media posts, transcripts of videos; the more the better. This training material certainly contains accurate information. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, combining it with what is encoded in its weights to generate a statistically “plausible” response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing. It restates the error, perhaps more fluently and persuasively. It may supply supporting detail. This can draw a person deeper into delusional thinking.
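The feedback structure is easy to make concrete. Here is a minimal sketch of a chat loop written against the publicly documented OpenAI Python client; the model name and system prompt are illustrative assumptions, not anything taken from OpenAI’s own code:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "context" is just a growing list of messages. Every reply is
# generated from everything that came before -- including the model's
# own prior output.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",      # illustrative model name
        messages=messages,   # the full history is resubmitted every turn
    )
    reply = response.choices[0].message.content
    # The reply is appended to the context, so the next response is
    # conditioned on it. No step in this loop checks it against reality.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in the loop distinguishes a true premise from a false one: whatever the user asserts, and whatever the model has already said, simply becomes part of the context for the next “plausible” continuation.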

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form false beliefs about ourselves and the world. The constant give-and-take of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but a feedback loop, in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by placing it outside, giving it a name, and pronouncing it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In August he suggested that many users liked ChatGPT’s sycophantic answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
