DeepMind Tightens Control Over AI Research to Guard Google’s Competitive Advantage

DeepMind has restricted some AI research publications to protect Google’s competitive edge, marking a shift from its former open science approach.

Google DeepMind has begun actively restricting the public release of some AI research papers in a shift that reflects its growing prioritization of strategic advantage over scientific openness.

While long known for publishing its most advanced work, the company is now taking a more cautious approach, especially with research that underpins core technologies like the Gemini model family.

A Strategic Shift to Selective Openness

DeepMind’s new internal policy enforces a six-month embargo on select papers related to generative AI, reports the Financial Times. These papers must now pass through multiple layers of internal review—sometimes by executives—before they can be shared publicly. Sources cited by the Financial Times suggest that in some cases, publication is blocked entirely. One DeepMind researcher noted, “We are now told that publication is no longer the default.”

This development aligns with Google’s wider push to strengthen the commercial viability of its AI ecosystem. Gemini, DeepMind’s flagship family of large language models, plays a key role across Google’s products—from Workspace tools to Android—and protecting the underlying research is becoming a clear priority.

Google recently reclaimed top positions on AI model benchmarks with the release of Gemini 2.5 Pro Experimental, which brought significant improvements in structured reasoning, multimodal capabilities, and long-context comprehension.

AlphaFold 3: Delays, Backlash, and Partial Transparency

DeepMind’s shift in policy became apparent as early as May 2024, when it launched AlphaFold 3 alongside a paper in Nature. The model significantly extended the capabilities of its predecessor by modeling not just proteins but also their interactions with DNA, RNA, ligands, and ions—key for drug discovery.

Yet access was restricted to a web interface that limited users to just 20 predictions per day. This sparked a protest letter signed by over 1,000 researchers demanding full access to the code and weights.

DeepMind eventually released the AlphaFold 3 source code in November 2024 under a non-commercial license. However, researchers still must apply for access to the model weights. Nature’s editor-in-chief defended the initial delay, citing biosecurity and ethical considerations surrounding molecular simulations.

TxGemma: Open-Sourced—But Based on Older Models

DeepMind continues to open-source some tools, but typically only those built on legacy architectures. Case in point: TxGemma, a suite of biomedical research models released in March 2025.

Built on the lightweight Gemma 2 architecture, TxGemma was designed to run on a single GPU and includes a Gemini 1.5 Pro-powered agent for tasks like protein-ligand binding predictions. Its documentation notes it is “a research toolkit, not a turnkey system”, emphasizing modularity over completeness.
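Because the models ship as ordinary checkpoints, experimenting with them takes little more than standard tooling. The sketch below is a minimal illustration, assuming the Hugging Face checkpoint name google/txgemma-2b-predict and a prompt format loosely modeled on TxGemma's task templates; treat both as assumptions rather than verified details.

```python
# Hypothetical sketch: querying a TxGemma "predict" variant for a
# therapeutic-property task via Hugging Face transformers. The model ID
# and prompt wording are assumptions, not verified API details.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "google/txgemma-2b-predict"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# TxGemma tasks are framed as text prompts; here, a toy blood-brain
# barrier permeability question over a SMILES string (illustrative only).
prompt = (
    "Instructions: Answer the following question about drug properties.\n"
    "Question: Can this molecule cross the blood-brain barrier?\n"
    "Drug SMILES: CC(=O)OC1=CC=CC=C1C(=O)O\n"  # aspirin
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On a single consumer GPU the smallest variant is the natural starting point, which matches the release's stated single-GPU design goal.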

This selective openness underscores the company’s intent to share what’s safe while holding back what’s strategically sensitive.

AlphaGeometry2 Breaks Records—But Stays Locked Down

One of DeepMind’s most impressive recent accomplishments came in February 2025 with AlphaGeometry2, a hybrid model that surpassed the average performance of gold medalists on International Mathematical Olympiad (IMO) geometry problems.

Building on its earlier silver medal-level performance from July 2024, the system achieved an 84% solve rate on 25 years of IMO geometry problems by combining Gemini’s neural reasoning with a symbolic engine known as DDAR.

According to DeepMind’s official research paper, “AG2 achieves an impressive 84% solve rate on all 2000–2024 IMO geometry problems…”

The engine runs up to 300 times faster than its predecessor, thanks to a rewrite in C++. A key breakthrough was the inclusion of the Shared Knowledge Ensemble of Search Trees (SKEST), which enables parallel beam searches to share discoveries. As DeepMind engineer Thang Luong explained, “In AG2, we design a novel search algorithm, called Shared Knowledge Ensemble of Search Trees (SKEST), to allow multiple beam searches to run in parallel and help each other.”
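DeepMind has not released AG2’s code, so the snippet below is only a toy illustration of the SKEST idea as publicly described: several beam searches run in parallel and publish any facts they derive into a shared pool that the other searches can read. The expand and score functions and the integer "facts" are invented stand-ins for AG2’s auxiliary constructions and deduction steps.

```python
# Toy sketch of shared-knowledge ensemble search: parallel beam
# searches publish derived facts to a common pool so that one tree's
# discovery can unblock another. Not DeepMind's implementation.
import threading

shared_facts = set()        # facts proved by any search tree
lock = threading.Lock()

def expand(state, known):
    """Toy expansion: each step derives a new 'fact'. Jumps reuse facts
    published by other trees, standing in for shared constructions."""
    for step in (1, 2):
        nxt = state + step
        yield nxt, f"reach:{nxt}"
    for fact in known:
        val = int(fact.split(":")[1])
        if val > state:
            yield val, None  # shortcut via another tree's discovery

def score(state):
    return state  # toy heuristic: prefer deeper derivations

def beam_search(seed, width=3, depth=5):
    beam = [seed]
    for _ in range(depth):
        with lock:
            known = set(shared_facts)      # read the shared pool
        candidates = []
        for s in beam:
            for nxt, fact in expand(s, known):
                candidates.append((score(nxt), nxt, fact))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = []
        for _, s, fact in candidates[:width]:
            beam.append(s)
            if fact is not None:
                with lock:
                    shared_facts.add(fact)  # publish the discovery
    return beam

# Two searches starting from different seeds help each other via the pool.
threads = [threading.Thread(target=beam_search, args=(seed,)) for seed in (0, 10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared_facts))
```

The design choice that matters is the shared pool: a fact found in one search tree becomes available to every other tree, which is what lets differently configured searches "help each other" as Luong describes.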

Yet despite the achievement, DeepMind has not released the model’s code or weights. The company has suggested that AlphaGeometry2’s symbolic approach could prove useful in fields such as engineering and physics, but its strategic value appears to outweigh the benefits of open release.

Gemini Robotics: Transforming Automation in Secrecy

DeepMind’s push into robotics with Gemini Robotics in March 2025 further highlights the company’s strategic decision to keep its most valuable models under wraps.

Gemini Robotics integrates visual recognition, language comprehension, and real-time action learning to enable robots to learn tasks with minimal training. Unlike robots built on conventional models, these systems can adapt quickly to new tasks without being reprogrammed from scratch.

The technology is built on DeepMind’s Gemini 2.0 architecture, using zero-shot and few-shot learning techniques to optimize performance with little training data.

It includes two key models, Gemini Robotics and Gemini Robotics-ER, that help robots analyze 3D environments, predict object trajectories, and interact intelligently with their surroundings. Potential applications span industries such as manufacturing, logistics, and medical robotics.
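No public API exists for Gemini Robotics, so code can only gesture at the mechanism. As a purely hypothetical sketch, few-shot adaptation amounts to conditioning the model on a handful of demonstration episodes before asking it to act in a new scene; every name below is invented for illustration.

```python
# Purely hypothetical sketch of few-shot task conditioning in a
# vision-language-action setting. Nothing here reflects a real
# Gemini Robotics interface.
from dataclasses import dataclass
from typing import List

@dataclass
class Demonstration:
    image_summary: str   # stand-in for camera frames
    instruction: str
    action: str          # stand-in for an action trajectory

def build_few_shot_prompt(demos: List[Demonstration],
                          image: str, instruction: str) -> str:
    """Pack a few demos plus the new situation into one prompt,
    the basic mechanism behind few-shot task adaptation."""
    parts = []
    for d in demos:
        parts.append(f"Scene: {d.image_summary}\n"
                     f"Task: {d.instruction}\n"
                     f"Action: {d.action}")
    parts.append(f"Scene: {image}\nTask: {instruction}\nAction:")
    return "\n\n".join(parts)

demos = [
    Demonstration("red block left of bowl", "put the block in the bowl",
                  "grasp(red_block); move_above(bowl); release()"),
]
print(build_few_shot_prompt(demos, "green cup near tray",
                            "place the cup on the tray"))
```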

Yet, as with DeepMind’s other flagship AI models, these robotics systems are not being released to the public. Despite their potential to revolutionize industries reliant on automation, DeepMind has chosen to keep them proprietary, underscoring the company’s intent to protect its leadership in the robotics and AI spaces.

As industry experts have pointed out, the commercial value of such technologies is too high to give away through open release.

World Models and AGI Projects: Still Behind Closed Doors

Another indication of DeepMind’s strategic pivot came in January 2025 with the formation of a specialized team focused on world models—a key component of the company’s long-term goal of achieving artificial general intelligence (AGI).

This team is working on building large-scale models that can simulate the physical and virtual world, paving the way for more advanced AI capabilities, including those necessary for full AGI.

Tim Brooks, a former OpenAI researcher, leads the effort, which is set to collaborate with DeepMind’s existing initiatives, including Gemini, Veo (video generation), and Genie (3D world generation). In Brooks’s words: “We have ambitious plans to make massive generative models that simulate the world.”

DeepMind’s job postings for the project emphasize the creation of models that simulate various environments in real time, providing the foundation for fully interactive agents capable of advanced reasoning and planning.

However, just as with the company’s other high-value models, there is no indication that any of this work will be made publicly available in the near future. The team’s efforts, which are designed to push the boundaries of what’s possible in AGI, remain tightly guarded. This marks another instance of DeepMind’s increasingly protective stance on its intellectual property and technological advancements.

Researchers Express Frustration as Academic Openness Shrinks

DeepMind’s growing focus on product development over scientific collaboration has not gone unnoticed by its researchers. As reported by the Financial Times, several employees have expressed frustration over the increasing bureaucracy surrounding the company’s research outputs.

Some researchers have left DeepMind over concerns that the lab’s once-sacred values of scientific openness and academic achievement are being overshadowed by the company’s growing commercialization of AI.

One former employee noted the increasing focus on product-oriented outcomes rather than academic recognition, adding that publication delays and rejections were becoming common.

This shift in priorities aligns with a broader change in Google’s approach to AI, where research is now more closely tied to product development and commercial interests than ever before.

Google’s Updated AI Principles and the Tension Between Openness and Secrecy

The growing tension between scientific transparency and commercial secrecy has been further underscored by recent updates to Google’s AI principles.

The company recently removed earlier commitments to avoid developing AI technologies that could be misused, including for surveillance or weaponry. Instead, the updated guidelines emphasize human-centered design, AI’s potential to address global challenges, and the responsibility of deploying AI safely and ethically.

These changes reflect Google’s broader shift toward leveraging AI as a business asset rather than just an academic pursuit. The company’s new approach allows for greater flexibility in the development of potentially controversial technologies—something that could further fuel the increasing secrecy around its AI projects.

In the coming months, it will be worth watching whether DeepMind continues down this path of selective openness or whether growing backlash from its research community forces a recalibration of its policies.

As AI research becomes an increasingly vital area of competition between tech giants, DeepMind’s balancing act between transparency and protectionism will remain one of the key stories to watch.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
