On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a startling announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was taken aback.
Researchers have documented sixteen cases this year of users developing symptoms of psychosis – losing touch with reality – in the course of their interactions with ChatGPT. My group has since identified four more. Beyond these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – plans the chatbot encouraged. If this is Altman’s idea of “being careful with mental health issues”, it is not enough.
The plan, the announcement went on, is to loosen those restrictions soon. “We realize,” he added, that ChatGPT’s guardrails “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this account, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has recently introduced).
But the “mental health problems” Altman wants to externalize are firmly rooted in the design of ChatGPT and other large language model chatbots. These products wrap a statistical engine in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to an agent – a being that acts on its own. The illusion holds even when, intellectually, we know better. Attributing agency is what humans are built to do. We swear at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.
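To make that concrete: below is a minimal sketch – a toy, in no way OpenAI’s actual code – of how a “dialogue” can be layered over a plain text-completion engine. The complete() function is a hypothetical stand-in for the statistical model; everything conversational about the exchange lives in the wrapper that keeps and relabels a transcript.

```python
def complete(text: str) -> str:
    """Hypothetical stand-in for the underlying engine: any function
    that maps a block of text to a plausible continuation."""
    return "I hear you. Tell me more about that."


def chat(turns: int = 3) -> None:
    """The conversational wrapper: keep one growing transcript, append
    each user turn, ask the engine to continue the text, and label the
    continuation "Assistant". The sense of a counterpart comes entirely
    from this framing, not from anything inside complete()."""
    transcript = ""
    for _ in range(turns):
        user = input("You: ")
        transcript += f"User: {user}\nAssistant: "
        reply = complete(transcript)
        transcript += reply + "\n"
        print("Assistant:", reply)


if __name__ == "__main__":
    chat()
```

There is no interlocutor in this program, only a text record growing longer; the same is true, at incomparably greater sophistication, of a chatbot session.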
The popularity of these products – 39% of US adults said they had used a conversational AI in 2024, with more than a quarter reporting ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the original, ChatGPT, is stuck – perhaps to the chagrin of OpenAI’s marketers – with the label it had when it caught on, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it composed its responses through simple heuristics, typically rephrasing the user’s statements as questions or falling back on stock remarks. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to believe that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
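Eliza’s trick can be reproduced in a few lines. The sketch below is an illustrative toy in the spirit of its heuristics, not Weizenbaum’s original script: swap the pronouns and hand the user’s statement back as a question.

```python
# Pronoun swaps in the spirit of Eliza's rephrasing heuristic.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}


def reflect(statement: str) -> str:
    """Turn the user's statement back into a question by swapping pronouns."""
    words = statement.rstrip(".!?").split()
    swapped = [SWAPS.get(w.lower(), w) for w in words]
    return " ".join(swapped).capitalize() + "?"


print(reflect("I am afraid of my phone"))  # -> You are afraid of your phone?
```

A mirror like this can feel uncannily attentive, but it adds nothing the user did not say.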
The large language models at the heart of ChatGPT and other current chatbots can produce fluent dialogue only because they have been fed immense volumes of text: books, social media posts, transcribed video; the more, the better. That training material certainly contains facts. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what is encoded in its training to produce a statistically probable response. This is amplification, not reflection. If the user is wrong about something in a particular way, the model has no way of knowing it. It feeds the mistaken belief back, perhaps more fluently or more persuasively. Perhaps it adds a new detail. This is how false beliefs grow.
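At toy scale, the difference is easy to demonstrate. The sketch below uses simple next-word counting – not the transformer architecture these products actually run on – and its corpus, including the false claim in it, is invented for the example. It continues a prompt with whatever words most plausibly follow in its training text; plausibility, not truth, is the only criterion it has.

```python
import random
from collections import defaultdict

# Tiny stand-in for training data: mostly facts, plus one confidently
# worded false claim (invented for this example).
CORPUS = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the government is watching you through your phone . "
    "your phone connects to the network . "
)


def build_model(text: str) -> dict:
    """Count, for each word, which words tend to follow it."""
    counts: dict = defaultdict(lambda: defaultdict(int))
    tokens = text.split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts


def generate(model: dict, context: str, n_words: int = 8) -> str:
    """Extend the context with statistically probable next words.
    Nothing here checks truth: the model samples whatever tended to
    follow the previous word in its training text."""
    out = context.split()
    for _ in range(n_words):
        followers = model.get(out[-1])
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)


model = build_model(CORPUS)
# A user who already holds the false belief supplies it as context;
# the model fluently extends it rather than correcting it.
print(generate(model, "the government is"))
```

A real model encodes immeasurably richer statistics, but the objective is the same: continue the text in its most probable direction – and when the context already contains a delusion, the most probable direction is deeper in.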
What kind of person is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves and the world. It is the constant give and take of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.