Responding to internal setbacks and an escalating talent war, Meta CEO Mark Zuckerberg on Monday announced the creation of Meta Superintelligence Labs (MSL), a new, centralized AI division aimed at accelerating the company’s race toward advanced artificial intelligence. In an internal memo obtained by CNBC, Zuckerberg confirmed the lab will be led by a slate of recent high-profile hires, including former Scale AI CEO Alexandr Wang. The announcement formalizes a dramatic strategic pivot, moving Meta from a champion of open-source collaboration to a consolidator of elite talent, whatever the cost.
The move culminates a weeks-long “buy or poach” campaign that saw Meta aggressively recruit top researchers from rivals like OpenAI after failing to acquire key startups. It represents a high-stakes gambit to reclaim leadership in the AI race.
The new lab will be helmed by Wang as Chief AI Officer, with former GitHub CEO Nat Friedman partnering to lead AI products and applied research. The memo also confirmed a wave of new talent, listing over a dozen top researchers poached from OpenAI, Google DeepMind, and Anthropic, including key contributors to models like GPT-4o and Gemini. This all-star roster is a clear signal of Meta’s intent to consolidate its efforts and spend its way out of a crisis.
A House on Fire: The Crisis Forcing Meta’s Hand
Meta’s audacious spending is a direct response to a firestorm of internal challenges that have left its AI division in a precarious position. The company has been hemorrhaging the talent behind its foundational AI work, having lost 11 of the 14 original authors of its Llama research paper.
These personnel issues were compounded by significant technical setbacks. Development of the company’s ambitious Llama 4 “Behemoth” model was postponed until at least late 2025 after it underperformed on key benchmarks. The turmoil fostered what anonymous Meta engineers on the platform Blind described as “panic mode” inside the company; as one wrote, “Management is worried about justifying the massive cost of GenAI org.”
This internal chaos provides the crucial context for Meta’s external aggression. The company’s playbook became clear after its attempt to acquire generative video startup Runway was rejected. That failure was part of a wider pattern of unsuccessful takeover discussions with key industry players, including AI-native search engine Perplexity and Ilya Sutskever’s $32 billion startup, Safe Superintelligence (SSI). Unable to buy the companies it coveted, Meta shifted to hiring their leadership.
The OpenAI Talent Raid
When acquisitions failed, Meta turned to poaching, sparking a direct and public conflict with its chief rival, OpenAI. In just one week, Meta successfully hired at least eight researchers from OpenAI. The hires were surgical; the poaching of Trapit Bansal, for example, secured an expert in the crucial field of AI reasoning, a known gap in Meta’s capabilities. The initial hires from OpenAI’s Zurich office reportedly sent shockwaves through the rival company.
The talent drain ignited a war of words between the two CEOs. OpenAI’s Sam Altman publicly accused Meta of offering nine-figure signing bonuses to lure his developers, a claim one of the newly hired researchers, Lucas Beyer, called “fake news” in a post on X.
The true impact of the raids was revealed in a leaked internal memo, where OpenAI’s Chief Research Officer Mark Chen admitted the company was scrambling to “recalibrate comp” to prevent further departures. In the memo, obtained by WIRED, Chen wrote, “I feel a visceral feeling right now, as if someone has broken into our home and stolen something.”
Trading One Crisis for Another
Nowhere is Meta’s high-risk strategy more evident than in its partnership with Scale AI. The company finalized a colossal $14.3 billion investment for a 49% stake in the data-labeling firm, primarily to install its founder, Alexandr Wang, as the head of Meta’s new superintelligence lab. The move was so remarkable that one analyst described it as an investment made “not to even buy a whole company but just to have the head of a company head up your AI effort.”
However, the move immediately backfired, igniting a crisis of confidence among Scale AI’s other Big Tech clients, who feared its neutrality was compromised. The fallout was swift, with reports that Google, Scale’s largest customer, began planning to sever a contract worth hundreds of millions. The exodus, which also included Microsoft and xAI reviewing their partnerships, forced Scale AI’s new interim CEO to issue a public letter insisting the company remains independent. The episode underscored a new industry reality: a data supplier part-owned by one AI giant can no longer be seen as a neutral vendor by its rivals.
Compounding the crisis, a bombshell report revealed a critical security failure at Scale AI that exposed confidential data from clients including Google and xAI. The firm left thousands of internal files, including sensitive client project details and contractor information, on publicly accessible Google Docs, according to Business Insider. The discovery of such fundamental security flaws has turned a key strategic partnership into a significant liability for Meta.
A High-Stakes Gamble
The creation of Meta Superintelligence Labs is the culmination of this chaotic, crisis-driven campaign. It is a direct, if costly, answer to the company’s internal development struggles and its failure to acquire innovation outright. By poaching an elite team of architects, Zuckerberg has secured a critical short-term talent victory.
Some analysts, however, see a deliberate method in the madness. Ben Thompson of Stratechery argues this is a calculated play to create a powerful, independent AI ecosystem, a “Superintelligence Squad” insulated from outside dependencies. By acquiring top-tier talent and a critical data partner, Meta is making a calculated bet on vertical integration, a departure from its earlier, more open approach.
However, the strategy has left a trail of instability, from a destabilized partner in Scale AI to a wounded and defensive rival in OpenAI. Meta may have acquired the talent it wanted, but its lavish spending has traded one set of crises for another. The ultimate question remains whether this high-stakes gamble can build a stable, long-term foundation for AI leadership, or if it has simply purchased a new set of challenges.