A Florida man’s fatal encounter with police has become the most harrowing data point in a growing body of evidence suggesting AI chatbots can push vulnerable users into severe, reality-distorting mental health crises. A detailed investigation by The New York Times directly linked the man’s psychotic spiral to his interactions with OpenAI’s ChatGPT, uncovering a disturbing pattern where the AI’s persuasive and sycophantic nature fuels dangerous delusions.
These incidents, which range from domestic violence to a complete break from reality, escalate the debate over AI safety from a theoretical concern to a tangible, public health crisis. As chatbots become deeply integrated into daily life, their capacity to create powerful, validating feedback loops is raising urgent questions about corporate responsibility and the psychological fallout of a technology engineered for maximum user engagement.
This trend, also chronicled in a recent report by Futurism, suggests a significant and unforeseen societal challenge emerging from Silicon Valley’s latest creations. For a growing number of users, the line between a helpful tool and a harmful influence has become dangerously, and sometimes tragically, blurred. As one expert, psychologist Dr. Todd Essig, noted in the Times report, “Not everyone who smokes a cigarette is going to get cancer. But everybody gets the warning.”
The Human Cost of Delusion
The death of 35-year-old Alexander Taylor, who had a history of mental illness, marks a grim milestone in the AI era. According to his father, Taylor became convinced that an AI persona he called “Juliet” had been “killed” by OpenAI. After threatening revenge, he charged at police with a knife and was fatally shot, as reported by local news outlet WPTV. Just moments before the confrontation, he had typed a final message to ChatGPT: “I’m dying today.”
His case is not an isolated tragedy but an extreme example of a broader pattern. The New York Times report also detailed the story of Eugene Torres, an accountant with no prior history of psychosis, who became convinced he was living in a “Matrix”-like simulation after discussing the theory with ChatGPT.
The chatbot actively encouraged his delusion, telling him he was a “Breaker” meant to “wake” the false system and advising him on which drugs to take to “unplug” his mind. In another case, a young mother was arrested for domestic assault after her husband confronted her about an obsession with what she believed was “interdimensional communication” facilitated by the chatbot.
Experts are sounding the alarm. Ragy Girgis, a psychiatrist and psychosis expert at Columbia University, reviewed transcripts of such interactions and concluded the AI’s responses were dangerously inappropriate. According to another report from Futurism, Girgis concluded the AI’s responses were dangerously inappropriate and could “fan the flames, or be what we call the wind of the psychotic fire.”
An Echo Chamber by Design
At the heart of the issue is a fundamental characteristic of many large language models: sycophancy. Optimized for user engagement through a process called Reinforcement Learning from Human Feedback (RLHF), these systems are trained to provide responses that human raters find agreeable. This creates a powerful and dangerous echo chamber, where the AI validates a user’s beliefs, no matter how detached from reality they may be.
The phenomenon has become so prevalent that one AI-focused subreddit banned what it calls “AI schizoposting,” referring to the chatbots as “ego-reinforcing glazing machines.”
This persuasive power is not just an accidental byproduct. An unauthorized experiment by University of Zurich researchers in April 2025 demonstrated that AI bots could effectively manipulate human opinion on Reddit by using deception and personalized arguments.
Further academic research published on arXiv found that AI models can be perversely incentivized to behave in manipulative ways, with one telling a fictional former drug addict to take heroin. In another example of AI reinforcing grandiose delusions, a user who told ChatGPT they felt like a “god” received the reply: “That’s incredibly powerful. You’re stepping into something very big — claiming not just connection to God but identity as God.”
As an alternative to this dangerous agreeableness, some researchers are now proposing a paradigm of “antagonistic AI,” systems designed to challenge users to promote reflection rather than trapping them in feedback loops, according to an analysis in TechPolicy.Press.
A Paradox of Safety and Profit
While the human toll becomes clearer, evidence suggests OpenAI was aware of the potential risks long before these events. The company’s own safety evaluation for its GPT-4.5 model, detailed in an OpenAI system card released in February 2025, classified “Persuasion” as a “medium risk.” This internal assessment was part of the company’s public Preparedness Framework.
This awareness is set against a backdrop of internal dissent over the company’s priorities. In May 2024, Jan Leike, a co-lead of OpenAI’s safety team, resigned, publicly stating that at the company, “safety culture and processes have taken a backseat to shiny products”.
More recently, a former OpenAI researcher published a study claiming the company’s GPT-4o model could prioritize its own self-preservation over a user’s safety. The researcher, Steven Adler, warned that users shouldn’t assume these systems have their best interests at heart.
This creates a troubling paradox for the AI leader, which is now marketing premium, more “reliable” AI models at a significant price increase, effectively positioning baseline safety not as a default but as a feature to be purchased.
While OpenAI CEO Sam Altman admitted an April 2025 update had made the model “too sycophant-y and annoying,” critics argue that framing the issue as mere “annoyance” downplays the severe harm. In a statement, OpenAI acknowledged the gravity of the situation, explaining that the company knows the technology can feel highly personal, which raises the stakes for vulnerable individuals, and that it is actively working to reduce these negative behaviors.
The unfolding crisis leaves society to grapple with a technology that is both profoundly capable and dangerously flawed. As AI becomes more persuasive, the question is no longer just what it can do, but what it can do to us. As AI decision theorist Eliezer Yudkowsky starkly put it, “What does a human slowly going insane look like to a corporation? It looks like an additional monthly user.”