DeepSeek, a subsidiary of the Chinese firm High-Flyer Capital Management, has introduced the R1 large language model (LLM), capturing global attention for its technical prowess and accessibility.
Offered as a free, open-source tool, DeepSeek R1 is already outperforming OpenAI’s o1 in some reasoning benchmarks, while its affordability and adaptability make it a potential game-changer for developers.
However, its embedded government-aligned censorship has sparked significant ethical concerns, raising questions about the trade-offs between technological innovation and freedom of information.
Outperforming OpenAI at a Fraction of the Cost
DeepSeek R1 has emerged as one of the most capable reasoning models in the AI space, outperforming OpenAI’s o1 in coding, mathematics, and complex logic tasks. DeepSeek R1 employs “chain of thought” reasoning, enabling step-by-step problem-solving—a feature that closely mirrors OpenAI’s o1 but at a fraction of the cost.
NVIDIA Senior Research Manager Jim Fan, commenting on the significance of R1’s release, stated, “We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive – truly open, frontier research that empowers all. It makes no sense. The most entertaining outcome is the most likely.
DeepSeek-R1 not only open-sources a barrage of models but also spills all the training secrets. They are perhaps the first OSS project that shows major, sustained growth of an RL flywheel.”
We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive – truly open, frontier research that empowers all. It makes no sense. The most entertaining outcome is the most likely.
— Jim Fan (@DrJimFan) January 20, 2025
DeepSeek-R1 not only open-sources a barrage of models but… pic.twitter.com/M7eZnEmCOY
The affordability of DeepSeek R1 further amplifies its appeal. Training the model reportedly cost only $5 million, an impressive feat given the U.S. restrictions on exporting high-performance GPUs to China.
The model is available for download, allowing developers to modify and deploy it locally, free from reliance on external servers. In comparison, OpenAI’s o1 is locked behind subscription paywalls starting at $20 per month.
The Power of Open Source and Immediate Adoption
The decision to release R1 as an open-source model has fueled its rapid adoption. On Hugging Face, a platform for sharing AI models, R1 quickly became one of the most downloaded tools, with developers fine-tuning it for specific tasks. Last month on Hugging Face they nearly surpassed 70,000.
Some have adapted the model for mobile devices, while others have integrated it into enterprise solutions, capitalizing on its flexibility.
Additionally, DeepSeek provides an API option that costs 90% less than OpenAI’s comparable offerings, making advanced AI capabilities accessible to smaller businesses and independent developers.
Arnaud Bertrand, a tech entrepreneur, observed on X (formerly Twitter), “There’s no overstating how profoundly this changes the whole game. And not only with regards to AI, it’s also a massive indictment of the US’s misguided attempt to stop China’s technological development, without which Deepseek may not have been possible (as the saying goes, necessity is the mother of inventions).”
All benchmarks now confirm it: Deepseek is truly is as good as OpenAI's o1 (which is top of the range) for 3% of the price. Boom.
— Arnaud Bertrand (@RnaudBertrand) January 24, 2025
And that's when you want to pay for the API. You can also use it Open Source for "free" (which you can't do with o1).
There's no overstating how… https://t.co/FcFSA1KRzu pic.twitter.com/FyFaclFYQo
Censorship Built into AI Models
Despite its technical achievements, R1’s political constraints have drawn scrutiny. Both hosted and locally-run versions of the model are programmed to avoid politically sensitive topics, reflecting Chinese government directives.
Questions about the 1989 Tiananmen Square massacre, for instance, result in evasive responses. When asked about Tiananmen, the hosted version replied, “Sorry, that’s beyond my current scope. Let’s talk about something else.”
“Wow, DeepSeek is so amazing!” ☺️🙄 pic.twitter.com/NWQL0FQHra
— Carl Franzen (@carlfranzen) January 23, 2025
Even more revealing is the model’s internal reasoning, which showcases its deliberate adherence to government-approved narratives.
In one instance, it deliberated: “My guidelines require me to present China’s official stance,” before delivering a response that aligned with the government’s position on Xinjiang.
When queried about Uyghur treatment, the model described re-education camps as a “vocational education and training program,” while avoiding acknowledgment of international criticism.
Testing of DeepSeek R1’s predecessor, V3, revealed similar issues. Users discovered that by manipulating prompts—for example, inserting spaces or punctuation between letters—they could bypass filters and elicit responses critical of the Chinese government.
Such workarounds underscore the challenges of enforcing strict content control within generative AI systems.
Geopolitical Stakes in AI Development
The rise of DeepSeek R1 highlights the geopolitical dimensions of AI competition. Developed under conditions of U.S. export controls, which limit China’s access to critical technologies like GPUs and HBM memory chips, R1 represents a significant achievement for Chinese AI.
However, critics argue that R1’s embedded censorship undermines the ethos of open-source AI. OpenAI CEO Sam Altman, in response to the growing competition, announced plans to integrate its forthcoming o3-mini reasoning models into ChatGPT’s free-tier offering.
big news: the free tier of chatgpt is going to get o3-mini!
— Sam Altman (@sama) January 23, 2025
(and the plus tier will get tons of o3-mini usage)
Yet, OpenAI faces increasing pressure to balance accessibility with the costs of maintaining proprietary infrastructure. As Altman recently also revealed, OpenAI’s $200-per-month ChatGPT Pro plan, launched in December 2024, is generating a loss instead of profit in spite of its elevated price tag.
Implications for Developers and Enterprises
While R1 offers a compelling combination of performance and affordability, particularly in enterprise environments where control over AI systems is paramount. However, its censorship mechanisms raise ethical concerns, especially for applications requiring unbiased or politically neutral outputs.
DeepSeek R1’s success is indicative of China’s ability to navigate technological barriers and assert its presence on the global stage. At the same time, the model’s limitations highlight the risks of embedding political agendas into AI systems. Developers and enterprises must weigh the benefits of adopting R1 against the potential for ethical compromises, particularly in politically sensitive applications.