Some AI researchers at Google DeepMind are being offered a full year of paid leave—on the condition they don’t take a job at a competing firm. The arrangement, detailed in a report by Business Insider, is part of the company’s wider strategy to retain critical AI talent as competition escalates between top research labs.
The restrictive noncompete clauses reportedly last up to 12 months and are applied even in jurisdictions where such agreements are unlikely to be enforceable. In the United Kingdom, where DeepMind is based, the legality of these agreements is under increased scrutiny, but that hasn’t stopped the company from including them in employee contracts. The clauses, according to the report, are enforceable only when staff members accept the company’s offer of paid leave—which is conditional on not joining a competitor.
Nando de Freitas, a vice president at Microsoft AI and a former senior DeepMind researcher, publicly criticized the approach. In a post on X, he wrote, “Every week one of you reaches out to me in despair to ask me how to escape your notice periods and noncompetes.” He warned prospective hires against signing such contracts, framing them as efforts to suppress mobility and innovation within the field.
“Dear @GoogDeepMind ers, First, congrats on the new impressive models. Every week one of you reaches out to me in despair to ask me how to escape your notice periods and noncompetes. Also asking me for a job because your manager has explained this is the way to get promoted, but…”
— Nando de Freitas (@NandoDF) March 26, 2025
Personal Outreach and Legal Gray Zones
DeepMind’s retention strategy appears to go beyond legal agreements. In one case, Google co-founder Sergey Brin personally intervened to keep a departing researcher from joining OpenAI. As reported by Business Insider, Brin offered direct compensation and other incentives during a one-on-one call that persuaded the employee to stay.
The competition for AI researchers isn’t limited to Silicon Valley: the race for top talent is also intensifying in Europe, where a wave of new startups has increased demand for researchers. Established players like DeepMind must now weigh high compensation packages and legal containment strategies against the risk of losing their most valuable people to newer rivals.
Noncompetes Follow Gemini 2.5 Pro Launch
The enforcement of noncompete clauses comes just as Google has unveiled Gemini 2.5 Pro, a new AI model designed to push the boundaries of reasoning, multimodal understanding, and long-context processing. The timing suggests a connection between the release of more advanced models and the company’s intensified efforts to prevent knowledge spillover to rivals.
Gemini 2.5 Pro features what Google calls “structured reasoning,” allowing the model to verify multi-step logic during generation. The model supports a one million-token context window—with two million promised soon—which enables it to handle vast volumes of data without losing coherence. On the AIME 2024 benchmark for mathematical reasoning, Gemini scored 92.0%, far ahead of OpenAI’s GPT-4.5 (36.7%) and just behind models like xAI’s Grok 3 Beta and DeepSeek R1 when given multiple attempts.
In multimodal tasks—where models interpret both text and images—Gemini 2.5 Pro led with an 81.7% score on the MMMU benchmark, outpacing Claude 3.7 Sonnet and GPT-4.5. Its performance on the MRCR 128K long-context benchmark reached 91.5%, with 83.1% accuracy sustained at a million-token input length. Such figures suggest Gemini is well-suited for demanding applications like enterprise analytics, document processing, and research assistance.
Performance Gaps Still Visible in Key Areas
Despite Gemini 2.5 Pro’s advances, it isn’t dominant across every benchmark. In factual accuracy, OpenAI’s GPT-4.5 leads with a 62.5% score on the SimpleQA dataset, compared to Gemini’s 52.9%. When tested on autonomous, multi-step software engineering tasks—an area known as agentic coding—Anthropic’s Claude 3.7 Sonnet performed best at 70.3%, while Gemini trailed at 63.8%.
OpenAI’s O3-Mini High outperformed Gemini in LiveCodeBench’s code generation test, scoring 74.1% to Gemini’s 70.4%. However, Gemini did take the lead in code editing tasks using the Aider Polyglot benchmark. These variations highlight the fragmented nature of current model performance—no single model has emerged as the leader in all categories.
Gemini is the Backbone of Google’s AI Strategy
Alongside its technical performance, Gemini 2.5 Pro has become the backbone of Google’s AI rollout across consumer and productivity tools. In mid-March, the company confirmed that Gemini would replace Google Assistant on Android devices, enabling real-time assistance via screen analysis and live camera input through Gemini Live.
Gemini’s influence now extends into Google’s productivity suite. A March update to Google Drive introduced smart file suggestions and automated document summaries. Gmail’s new AI-powered search helps users find key emails faster, and NotebookLM’s new mind map tool offers a visual method to organize AI-generated research insights.
These integrations show that DeepMind’s research isn’t just theoretical. Its output is deeply embedded in commercial products that millions of users interact with every day. The risk of losing top staff isn’t just about model quality—it’s about strategic control over Google’s AI-powered future.
Regulatory Uncertainty Clouds Noncompete Strategy
The legality of DeepMind’s approach is far from settled. In the UK, regulators are actively reviewing the use of noncompete clauses. Meanwhile, the U.S. Federal Trade Commission has proposed a nationwide ban, which, if implemented, could invalidate many such agreements.
As Business Insider noted, “The threat alone may be enough to discourage people from leaving for a competitor, especially when paired with a comfortable salary to wait things out.” For now, that threat appears to be working—but whether it can withstand legal and public scrutiny is another matter.