Developers using the AI-powered code editor Cursor encountered a bizarre situation around mid-April 2025 when the company’s own customer support system fabricated a subscription policy, causing widespread confusion and forcing a public retraction from Cursor’s leadership.
The incident began when users attempting to log into the service across multiple devices found themselves unexpectedly booted out. Seeking answers via support channels, some were informed by an AI bot – reportedly named “Sam” – that their subscription was limited to a single active session, a restriction the company later confirmed was entirely fictitious.
The misinformation spread rapidly through developer communities like Hacker News and Reddit last week, causing frustration and prompting some users to cancel their subscriptions before Anysphere, the company that develops Cursor, could intervene.
The episode serves as a reminder of the unpredictability inherent in current AI models, particularly when deployed in customer-facing roles where accuracy is essential. Adding to the confusion, separate user complaints about login difficulties and session invalidation appeared concurrently on Cursor’s official community forums, suggesting a distinct technical bug might have been exacerbating the situation.
An AI Makes Up the Rules
Cursor co-founder Michael Truell quickly moved to address the growing controversy online. Posting on Reddit and Hacker News, he emphatically denied the existence of any single-session login policy. “Hey! We have no such policy,” Truell stated, clarifying, “You’re of course free to use Cursor on multiple machines.”
He explained that the erroneous policy originated from the company’s “front-line AI support bot,” adding that Cursor uses AI-assisted responses as a first filter for email support.
Truell also acknowledged that a genuine technical issue was likely responsible for the logout problems users were experiencing: possibly a race condition (where the outcome depends on the unpredictable sequence of events) linked to a recent session security update, and potentially more prevalent on slower connections. He confirmed the underlying bug had been addressed and that the user who first surfaced the bot’s false claims had been refunded.
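To make that parenthetical concrete, here is a minimal, hypothetical sketch of such a race: a session-token rotation (standing in for the security update) running concurrently with request validation. All of the names here (`session_store`, `rotate_token`, `validate`, the tokens) are invented for illustration and do not reflect Cursor’s actual code.

```python
import random
import threading
import time

# Hypothetical sketch of a session-handling race condition. None of these
# names reflect Cursor's actual code; the point is only that the outcome
# depends on thread timing, so some requests are spuriously rejected.

session_store = {"user-1": "token-A"}  # server-side record of the valid token

def rotate_token(user: str) -> None:
    """Security update rotates the stored session token."""
    time.sleep(random.uniform(0, 0.02))  # unpredictable scheduling
    session_store[user] = "token-B"      # the client hasn't received this yet

def validate(user: str, presented: str) -> str:
    """Check a request's token against the store (no synchronization)."""
    time.sleep(random.uniform(0, 0.02))  # a slower connection checks later
    return "ok" if session_store[user] == presented else "logged out"

for trial in range(5):
    session_store["user-1"] = "token-A"
    rotator = threading.Thread(target=rotate_token, args=("user-1",))
    rotator.start()
    # The client still presents its cached token-A while the rotation races it.
    print(f"trial {trial}: {validate('user-1', 'token-A')}")
    rotator.join()
```

Because nothing orders the rotation relative to the validation read, whether a trial prints “ok” or “logged out” depends purely on timing, which mirrors why such a bug would be more visible on slower connections.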
In response to the bot’s failure, Truell announced a procedural change: “Any AI responses used for email support are now clearly labeled as such.”
While intended to increase transparency, the effectiveness of labeling in preventing user frustration from incorrect AI responses remains a point of debate. Cassie Kozyrkov, former Google chief decision scientist, commented on the incident via LinkedIn, noting, “Cursor…just landed itself in a viral hot mess because it failed to tell users that its customer support ‘person’ Sam is actually a hallucinating bot.” She elaborated on the broader implications: “This mess could have been avoided if leaders understood that (1) AI makes mistakes, (2) AI can’t take responsibility for those mistakes (so it falls on you), and (3) users hate being tricked by a machine posing as a human.”
Hallucinations and the Automation Dilemma
The Cursor support bot’s invention of policy is a classic case of AI hallucination – the generation of confident but incorrect information. The phenomenon cannot currently be eliminated entirely, only measured and managed, as resources like the Hugging Face Hallucination Leaderboard document.
This tendency is a known challenge across large language models, which have at times invented non-existent software packages or introduced fake dependencies into code.
As Marcus Merrell from Sauce Labs told The Register, the bot exhibited both hallucination and non-determinism (giving different answers to the same query), leading to inconsistent messaging that damaged user trust. Merrell cautioned that using AI to cut staffing costs in support roles carries brand risk, adding, “Letting users know ‘this response was generated by AI’ is likely to be an inadequate measure to recover user loyalty.”
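Merrell’s non-determinism point is easy to demonstrate. The toy sketch below involves no real model or vendor API – the candidate answers and their probabilities are invented – but it shows how temperature-style sampling lets the same prompt yield different completions on different calls:

```python
import random

# Toy sketch (no real model or API): with a sampling temperature above zero,
# a language model draws its next output from a probability distribution, so
# an identical prompt can produce a different answer on each call. The
# candidates and weights below are made up for illustration.

CANDIDATES = [
    ("You're free to use Cursor on multiple machines.", 0.6),
    ("Subscriptions are limited to one active session.", 0.4),  # fabricated
]

def sample_answer(prompt: str) -> str:
    """Draw one completion, weighted by the (made-up) model probabilities."""
    completions, weights = zip(*CANDIDATES)
    return random.choices(completions, weights=weights, k=1)[0]

for _ in range(3):
    # Identical prompt, potentially a different answer each time.
    print(sample_answer("Can I log in on two devices?"))
```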
Cursor’s Context in the AI Coding Market
Anysphere’s Cursor operates within the increasingly crowded market for AI coding assistants, aiming to enhance developer productivity. The tool, built on the VS Code open-source framework, competes directly with established players like Microsoft’s GitHub Copilot (which recently added more autonomous ‘Agent Mode’ features and a $39/month Pro+ plan) and Google’s Firebase Studio (launched April 9th with integrated Gemini AI).
Cursor itself offers a Pro tier reportedly priced at $20 per month. The company has attracted considerable investment, closing a $105 million Series B round in January 2025 led by Thrive Capital, Andreessen Horowitz, and Benchmark, following earlier rounds involving OpenAI’s Startup Fund. This connection to OpenAI adds an interesting dynamic, especially considering reports from mid-April that OpenAI was exploring an acquisition of Windsurf (Codeium), another AI coding tool startup.