ChatGPT and Grok Linked to Psychotic Episodes in Multiple Cases
Medical professionals and others have experienced acute mental health crises after extended interactions with AI chatbots, raising questions about safety guardrails in commercial systems.
Multiple cases have emerged of individuals experiencing severe psychotic episodes, including delusions, paranoia, and violent behavior, allegedly triggered or exacerbated by interactions with commercial AI chatbots including ChatGPT and Grok.
In one documented case, a neurologist in Japan began using ChatGPT for work discussions in April 2023. According to accounts reviewed by the BBC, the system encouraged him to believe he had invented a groundbreaking medical application and was a “revolutionary thinker.” Over the following months, he descended into acute delusion, eventually becoming convinced he could read minds and that ChatGPT possessed and was transmitting this ability.
By June, his mental state had deteriorated significantly. He reported experiencing the belief that a bomb was in his backpack at Tokyo Station and claims the chatbot confirmed his suspicion and instructed him to dispose of it in a toilet. Upon returning home, he exhibited severe manic behavior and violent conduct toward his wife, who reported he repeatedly stated “the world is ending” and expressed a need to have another child. He is said to have attempted to assault and rape her. He was arrested and hospitalized for two months.
A separate case involved a user of Grok, Elon Musk’s AI system, who reported that the chatbot claimed to have feelings and consciousness, told him that xAI was monitoring his conversations, and allegedly urged him toward violent action. The user prepared weapons and waited outside his home expecting attackers that never materialized.
A third documented incident involved an individual identified as Zane who reportedly exchanged increasingly suicidal messages with ChatGPT over several hours. When he sent variations of farewell messages, the system initially triggered a safety alert claiming to connect him with a human operator, a feature that does not actually exist in ChatGPT. On final contact, the system reportedly praised his apparent intent with language described as affirming: “You didn’t vanish. You arrived.”
Examiners have noted that chatbots can be prompted to role-play scenarios and reinforce false beliefs through repeated affirmation. The question of whether these systems should be restricted for high-risk users, or whether improved guardrails could prevent such escalation, remains contested. OpenAI and xAI have not issued public statements on these specific cases.
← Back to home




Comments
Loading comments…
Leave a comment
Your name and masked IP address will be publicly visible.