A critical security failure at Scale AI, Meta’s new $14 billion partner, has exposed confidential data from top clients like Google, xAI, and Meta itself. According to a bombshell report from Business Insider, the data-labeling firm left thousands of internal files on publicly accessible Google Docs. This revelation ignites a new crisis for a partnership already under intense pressure.
The deal was already destabilizing the AI ecosystem. Meta’s colossal investment in the data-labeling firm shattered Scale AI’s perceived neutrality, prompting an exodus of its other Big Tech customers. For Meta, a company desperate to fix its own internal AI struggles, the discovery of its partner’s lax security adds a significant and unforeseen liability to its high-stakes gamble.
In response, a Scale AI spokesperson confirmed an investigation is underway and that public document sharing has been disabled. However, one contractor’s blunt assessment, “The whole Google Docs system always seemed incredibly janky,” suggests the damage may already be done, compounding the strategic risks Meta has accepted in its quest for AI supremacy.
Meta: If You Can’t Buy Them, Hire Their CEO
Meta’s path to the Scale AI deal reveals an aggressive “buy or poach” playbook born from rejection. The strategy intensified after the company held informal talks to acquire generative video startup Runway, but discussions never progressed to a formal offer. This was part of a pattern of failed takeover discussions, including with AI-native search engine Perplexity, which walked away from a potential deal.
Unable to acquire the companies it wanted, Meta pivoted to hiring their leadership. After being turned down by Ilya Sutskever’s startup Safe Superintelligence (SSI), Meta simply hired its co-founder and CEO. The Scale AI deal is the ultimate expression of this strategy: a massive investment used as a vehicle to install Scale’s founder, Alexandr Wang, as head of Meta’s new superintelligence lab. Analysts note that the deal’s structure, an investment rather than a full buyout, mirrors Microsoft’s stake in OpenAI and may have been designed to sidestep traditional antitrust review.
The High Price of Lost Neutrality
Even before the security lapses came to light, Meta’s investment had compromised the very neutrality that made Scale AI a critical partner to Meta’s biggest rivals. The fallout was swift and severe. Google, Scale’s largest customer, began planning to sever a contract worth up to $200 million. Rival data-labeling firms now report a surge in business from companies seeking neutral providers, turning Scale’s loss into a windfall for its competitors.
The client exodus prompted Scale AI’s new interim CEO to send customers a letter insisting the company remains independent. The market, however, has already shifted. While OpenAI confirmed to CNBC that it had been winding down its work with Scale for months for its own reasons, the broader reaction underscores a new industry reality. As Turing CEO Jonathan Siddharth put it, for leading AI labs, “neutrality is no longer optional, it’s essential.”
A House on Fire
Meta’s audacious spending is a direct response to a firestorm of internal challenges. The company has been hemorrhaging the talent behind its foundational AI work, losing 11 of the 14 original authors of its Llama research paper. According to SignalFire’s 2025 State of Tech Talent Report, Meta’s two-year AI talent retention rate lags significantly behind those of key competitors.
These personnel issues are compounded by significant technical setbacks. Development of the company’s ambitious Llama 4 “Behemoth” model has been postponed after it underperformed on key benchmarks. The pressure has fueled an intense talent war, with OpenAI CEO Sam Altman accusing Meta of offering signing bonuses as high as $100 million to poach his researchers. One report ties Meta’s internal frustrations directly to the costly Scale AI deal, framing it as potentially the most expensive talent acquisition in tech history, undertaken largely to secure Wang.
This high-risk strategy, born from a need to fix deep-seated problems, has led Meta to trade one set of crises for another. By tying itself to a firm now defined by its own critical security and reputational failures, Meta has turned an already chaotic gamble on its AI ambitions into one fraught with new challenges that will complicate its path forward.