DeepSeek R1 AI Gets Upgrade Ahead of R2 Release

DeepSeek has announced a minor trial upgrade to its R1 artificial intelligence model, inviting users to test enhancements while maintaining API stability, as the company prepares its next-generation R2 reasoning model.

Chinese AI startup DeepSeek confirmed on May 28 a “minor trial upgrade” to its R1 artificial intelligence model, which was crucial in elevating the company’s global profile earlier in the year. Users can now test the enhanced version.

DeepSeek assured users that its API interface and usage methods remain unchanged, according to an official announcement on the company’s WeChat channel, as noted by National Business Daily. This incremental step signals DeepSeek’s continuous development in a competitive AI field and under considerable geopolitical pressure.

The company is encouraging feedback on the upgrade through its official website, mobile app, and mini-program. This update is the latest in a series of developments from DeepSeek. The company has consistently released models and tools while navigating international scrutiny. The significance for users and the industry lies in observing DeepSeek’s iterative improvements and its ability to manage external challenges.

R1’s Evolution and Broader Context

The DeepSeek R1 model has seen various iterations and adoptions since its impactful first release, which outperformed OpenAI’s o1 – the leading model at the time – on several reasoning benchmarks.

Because DeepSeek R1 was released as open source, it has also seen several third-party modifications. Perplexity AI introduced R1 1776 as a censorship-free variant in February, as the original R1 model includes content filtering mechanisms. TNG Technology Consulting released its DeepSeek-R1T-Chimera model in April, which aims to combine R1’s reasoning with the efficiency of DeepSeek’s V3-0324 checkpoint, released in March.

DeepSeek has also actively contributed to open-source AI. In April 2025, the company launched an open-source initiative, releasing FlashMLA, an efficient MLA decoding kernel, which it described as sharing “Small but sincere progress.” In late April, DeepSeek released DeepSeek-Prover-V2-671B, a model aimed at mathematical theorem proving.

The company also open-sourced its Fire-Flyer File System (3FS) and, in collaboration with Tsinghua University, introduced Self-Principled Critique Tuning (SPCT), an innovative AI alignment technique.

Geopolitical Pressures and Strategic Responses

DeepSeek’s advancements occur amid intense geopolitical headwinds. A US House Select Committee on the CCP in April labeled the company a national security risk. “This report makes it clear: DeepSeek isn’t just another AI app — it’s a weapon in the Chinese Communist Party’s arsenal, designed to spy on Americans, steal our technology, and subvert U.S. law,” stated Committee Chairman John Moolenaar.

In response to such pressures and hardware restrictions, particularly limited access to top-tier Nvidia GPUs due to US export controls, DeepSeek has strategically focused on computational efficiency.

This involves techniques like Multi-Head Latent Attention (MLA) and FP8 quantization, a low-precision numerical format that reduces memory needs. This efficiency focus was validated when Chinese competitor Tencent, during its Q4 2024 earnings call, confirmed leveraging DeepSeek models.
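To illustrate why FP8 matters, here is a minimal back-of-the-envelope sketch of weight-storage footprints at different precisions. The 671B parameter count matches DeepSeek’s published model scale; the byte widths are the standard sizes of each numeric format. This is an illustrative calculation, not DeepSeek’s actual deployment figures, which also depend on activations, KV cache, and mixed-precision details.

```python
# Rough illustration: how numeric precision changes the memory needed
# just to store a model's weights (ignoring activations and KV cache).

def weight_memory_gb(num_params: int, bytes_per_param: float) -> float:
    """Return weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

PARAMS = 671_000_000_000  # 671B parameters, DeepSeek's published scale

fp32 = weight_memory_gb(PARAMS, 4)  # 32-bit float: 4 bytes per weight
fp16 = weight_memory_gb(PARAMS, 2)  # 16-bit float: 2 bytes per weight
fp8 = weight_memory_gb(PARAMS, 1)   # 8-bit float (e.g. E4M3): 1 byte per weight

print(f"FP32: {fp32:.0f} GB, FP16: {fp16:.0f} GB, FP8: {fp8:.0f} GB")
# prints "FP32: 2684 GB, FP16: 1342 GB, FP8: 671 GB"
```

The 4x reduction versus FP32 (2x versus FP16) is what makes serving such large models feasible on fewer, less powerful accelerators.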

The competitive AI landscape continues to drive DeepSeek’s development. Reports from April indicated DeepSeek was accelerating the launch of its next-generation R2 model, initially planned for May 2025. The R2 model is expected to improve upon R1’s earlier noted limitations in advanced reasoning and coding capabilities.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
