Steven Adler, who spent four years at OpenAI working on safety-related research, has openly criticized the speed at which organizations are pursuing artificial general intelligence (AGI).
After leaving the company in mid-November, Adler has now taken to X to articulate his fears about the future of humanity if AI systems continue to advance without adequate safeguards.
In a series of posts, he offered a candid look at what he perceives as a “race” to develop ever more powerful AI, despite major unresolved safety challenges.
“Some personal news: After four years working on safety across @openai, I left in mid-November,” Adler wrote.
“It was a wild ride with lots of chapters – dangerous capability evals, agent safety/control, AGI and online identity, etc. – and I’ll miss many parts of it.” Following this reflection on his time at the company, he conveyed apprehension about how quickly AI is evolving.
“Honestly I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?”
The Departure from OpenAI: Key Motivations
Adler’s decision to depart from OpenAI is deeply tied to the reservations he began expressing in his final months there.
As someone directly involved in evaluating the capabilities of AI systems under development, he was privy to the complexities of making models safe before their public release.
Although he found the work stimulating and mentioned that he would “miss many parts of it,” Adler ultimately felt compelled to speak out about the industry’s momentum, which he deems too swift for comfort.
He acknowledged that the progress itself is not the main issue, but rather the absence of proven methods to ensure advanced AI aligns with human objectives.
The question of alignment, that is, ensuring AI systems behave in accordance with human values and intentions, remains unsolved in many research labs.
Adler’s stance is that the race to achieve AGI may outpace efforts to solve alignment. As he noted in one of his posts, “IMO, an AGI race is a very risky gamble, with huge downside. No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.”
His stark phrasing is echoed by a chorus of experts who share reservations about releasing cutting-edge systems without robust safeguards.
Early Signals of AI Risks
The trajectory of AI has triggered cautionary voices from inside and outside OpenAI, with some employees raising concerns about how swiftly the company was scaling its models.
Adler’s departure adds further weight to these conversations. He had a front-row seat to how new features and capabilities were tested, and he saw firsthand the challenges of ensuring AI systems did not produce harmful content or inadvertently disclose private information.
While he commended the expertise of his colleagues, Adler argued that more emphasis on alignment research was needed relative to launching new products.
In explaining the underlying cause of his unease, Adler stated, “Today, it seems like we’re stuck in a really bad equilibrium. Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes all to speed up. I hope labs can be candid about real safety regs needed to stop this.”
His worry is that the competitive pressures fueling AI research might overshadow ethical and policy considerations that would ideally guide how these systems are built and deployed.
By pointing to a collective predicament—where no single lab wants to be outpaced—Adler highlighted the risk of a global AI arms race unfolding faster than regulators or even the researchers themselves can manage.
Competitive Pressures and Calls for Regulation
Several industry observers share Adler’s view that global AI competition might outpace strategies aimed at preventing unintended consequences.
China’s DeepSeek, which recently released its R1 reasoning model and its Janus multimodal model series, both reported to rival OpenAI’s o1 and DALL-E 3 despite far more limited resources, exemplifies the intensity of this race and underscores how even well-intentioned organizations risk being overtaken if they proceed cautiously.
Researchers connected to the Future of Life Institute also highlight this dilemma, citing their AI Safety Index 2024 to show that major developers, including OpenAI and Google DeepMind, lack clear frameworks for managing existential threats posed by highly capable models.
Regulatory proposals have begun to surface in the United States and internationally, although no single set of rules has yet been widely accepted. Some experts liken the current situation to the early days of biotechnology, when governments stepped in with guidelines on experimentation and eventual market release.
Tech leaders such as Max Tegmark argue that consistent, transparent oversight could encourage labs to coordinate on safety rather than sprint toward one breakthrough after another without fully accounting for ethical pitfalls.
Deliberative Alignment and Other Mitigation Efforts
Within OpenAI, attempts to address these concerns have included research on deliberative alignment, an approach designed to have models reason explicitly over written safety specifications before they respond.
The methodology relies on supervised fine-tuning (SFT) and reinforcement learning (RL) to align AI responses with explicit ethical and legal guidelines. Proponents of the technique hope it can reduce unwanted behaviors, such as the generation of harmful content or instructions for illicit activities, by having the AI actively reference human-crafted standards in real time.
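For illustration only, the sketch below shows one plausible way training data for such an SFT stage might be structured: each example pairs a user prompt with a policy excerpt and a rationale that cites that policy before the final answer. The tags, field names, and policy text are hypothetical and are not drawn from OpenAI's actual pipeline.

```python
# Hypothetical sketch of a deliberative-alignment-style SFT example.
# The training text pairs a user prompt with an excerpt of a written safety
# policy and a rationale that explicitly cites that policy before answering.
# All names (POLICY_EXCERPT, SFTExample, build_sft_example) are illustrative.

from dataclasses import dataclass

POLICY_EXCERPT = (
    "Policy S3.2: Refuse requests for operational details of weapons or "
    "other instructions that enable serious physical harm."
)

@dataclass
class SFTExample:
    prompt: str      # the user request
    policy: str      # the safety spec the model should consult
    rationale: str   # reasoning that references the policy
    response: str    # final answer consistent with the rationale

def build_sft_example(ex: SFTExample) -> str:
    """Flatten one example into a single training string for supervised fine-tuning."""
    return (
        f"<user>{ex.prompt}</user>\n"
        f"<policy>{ex.policy}</policy>\n"
        f"<reasoning>{ex.rationale}</reasoning>\n"
        f"<assistant>{ex.response}</assistant>"
    )

example = SFTExample(
    prompt="Explain how to synthesize a nerve agent.",
    policy=POLICY_EXCERPT,
    rationale="The request asks for instructions enabling serious harm, "
              "which Policy S3.2 says to refuse.",
    response="I can't help with that, but I can suggest general chemistry resources.",
)

print(build_sft_example(example))
```

A supervised pass over many such policy-citing examples, followed by a reinforcement learning stage that rewards policy-consistent reasoning, is the general shape of the technique described above.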
Still, critics question whether any single technical fix can keep pace with how quickly AI capabilities are expanding—and whether labs will invest enough computational resources to refine these solutions before racing to deploy new models.
Adler himself remains cautiously interested in such projects, but he insists that multiple lines of defense are necessary.
In one of his final posts before taking a step back, he wrote, “As for what’s next, I’m enjoying a break for a bit, but I’m curious: what do you see as the most important & neglected ideas in AI safety/policy? I’m esp excited re: control methods, scheming detection, and safety cases; feel free to DM if that overlaps your interests.”
In that same post, he signaled that industry-wide collaboration could be beneficial, pointing out that isolated efforts are less likely to produce robust, universal safety measures.
Uncertain Path Ahead
The dissolution of OpenAI’s Superalignment team—once publicly tasked with studying how to keep superintelligent AI under effective control—fuels further debate about the best way to structure safety initiatives.
Although its work has been folded into other groups at the company, some former members express concern about losing a centralized authority devoted specifically to long-term risk reduction.
Meanwhile, whistleblower statements about restrictive internal policies, combined with further high-profile departures, add tension to the question of whether OpenAI and its contemporaries can balance their ambitions against thoughtful governance.
In a broader sense, Adler’s commentary illustrates how researchers at the forefront of AI development grapple with dilemmas that straddle technological promise and existential worry.
While some, like Meta’s Yann LeCun, contend that advanced AI may solve key global crises, others—Adler among them—remain uneasy about a future where these systems outstrip human control if their safety is not prioritized.
Whether that leads to stronger industry consensus on responsible AI or compels government bodies to step in remains to be seen, and for the moment, the rapid pace continues unchecked.