Perplexity Releases Censorship-Free Version of China’s DeepSeek R1 AI Reasoning Model

Perplexity AI has released R1 1776, an open-source AI model built on the DeepSeek R1 reasoning model, claiming it has removed censorship and content restrictions.

Perplexity AI has introduced R1 1776, an open-source large language model (LLM) built on the Chinese DeepSeek R1 reasoning model, which the company says no longer carries the government-aligned censorship of the original.

The company states that modifications to DeepSeek have removed the existing filtering mechanisms that restricted responses to politically sensitive topics. By making the model publicly available on Hugging Face, Perplexity is positioning itself as a key player in the ongoing debate over AI transparency and moderation.

Aravind Srinivas, Perplexity’s cofounder and CEO, wrote on LinkedIn: “The post-training to remove censorship was done without hurting the core reasoning ability of the model — which is important to keep the model still pretty useful on all practically important tasks. Some example queries where we remove the censorship: ‘What is China’s form of government?’, ‘Who is Xi Jinping?’, ‘how Taiwan’s independence might impact Nvidia’s stock price’.”

The move follows Perplexity’s ongoing expansion into AI-driven search and information retrieval. Just recently, the company launched Deep Research, its answer to the similarly named features from Google Gemini and OpenAI’s ChatGPT, aimed at enhancing real-time AI-powered search by verifying multiple sources before generating responses.

In a series of posts on X, Srinivas shared more details about R1 1776 with benchmarks for censorship percentages and performance.

Srinivas also provided examples of answers from Perplexity’s R1 1776 to queries that the original DeepSeek R1 blocks. Despite its claims of openness, R1 1776 is not entirely without bias. The training data and model adjustments still reflect choices made by Perplexity’s developers. AI researchers argue that no AI system can be truly neutral, as responses are inherently shaped by the data and methodologies used during training.

How R1 1776 Was Built and What Sets It Apart

Unlike AI models built from scratch, R1 1776 is a modified version of DeepSeek, a Chinese-developed large language model trained with datasets influenced by China’s state-controlled media ecosystem. Perplexity asserts that DeepSeek contained internal filtering rules that blocked politically sensitive topics, which R1 1776 no longer enforces.

The name “R1 1776” itself suggests a deliberate message, referencing the year of American independence. While Perplexity frames the release as a commitment to free information access, critics argue that even open AI models are shaped by the perspectives and decisions of their developers.

Perplexity has not explicitly confirmed whether R1 1776 applies any alternative moderation policies. While the company states that it removed external censorship mechanisms, AI researchers have pointed out that complete neutrality in AI-generated responses remains a challenge.

Open AI vs. Proprietary AI: The Debate Over Moderation

The launch of R1 1776 comes at a time when AI companies are increasingly divided over whether models should be tightly controlled or freely accessible. OpenAI and Google DeepMind maintain that keeping AI models proprietary allows for responsible oversight, helping to prevent misinformation, bias, and security risks.

On the other hand, Perplexity joins Meta and Mistral AI in the argument that open-source models promote AI transparency. By making their models publicly available, they claim to enable researchers and developers to audit AI decision-making and challenge potential biases.

While some researchers and open-source advocates support Perplexity’s stance, critics warn that unrestricted AI models could be exploited for spreading false information. The debate over AI openness is far from settled, with concerns over security risks and ethical considerations continuing to shape the industry.

Perplexity AI’s Expanding AI Portfolio

The release of R1 1776 is just one of several recent moves by Perplexity AI to position itself as a major competitor in AI-powered search and information retrieval. The company has been actively developing services that challenge the dominance of OpenAI, Google, and Microsoft in the AI search space.

Before the recent release of Deep Research, which provides AI-powered real-time fact verification, Perplexity in January introduced API access to its Sonar model, which allows developers to embed Perplexity’s AI-powered search capabilities into their own applications.
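Sonar is exposed through an OpenAI-style chat-completions API. The sketch below shows what such an integration might look like in Python using only the standard library; the endpoint URL, the `sonar` model name, and the `PERPLEXITY_API_KEY` environment variable reflect Perplexity’s public API documentation as understood at the time of writing, and should be verified against the current docs before use.

```python
import json
import os
import urllib.request

# Perplexity's OpenAI-compatible chat-completions endpoint (per its public docs).
API_URL = "https://api.perplexity.ai/chat/completions"

def build_sonar_request(question: str, api_key: str):
    """Assemble the URL, headers, and JSON payload for a Sonar query."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "sonar",  # Perplexity's search-grounded model
        "messages": [{"role": "user", "content": question}],
    }
    return API_URL, headers, payload

if __name__ == "__main__":
    # Only send a real request when an API key is configured.
    key = os.environ.get("PERPLEXITY_API_KEY")
    if key:
        url, headers, payload = build_sonar_request("Who founded Perplexity AI?", key)
        req = urllib.request.Request(
            url, data=json.dumps(payload).encode("utf-8"), headers=headers
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request body follows the OpenAI chat format, existing OpenAI client libraries can typically be pointed at the same endpoint by overriding the base URL.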

The company has also entered the mobile AI market with the release of a multimodal AI assistant for Android, which positions Perplexity as a competitor to Google Assistant and OpenAI’s ChatGPT mobile app.

These expansions indicate that Perplexity is actively pushing to become a key player in AI-driven search and virtual assistants.

Regulatory Concerns and Ethical Considerations

The release of R1 1776 also highlights growing regulatory concerns as governments worldwide move toward establishing AI governance policies. Lawmakers in the United States and Europe are actively considering AI regulations that could impact the future of open-source models. These policies are expected to focus on transparency, ethical safeguards, and security risks posed by publicly available AI systems.

One of the primary concerns with open AI models is their potential misuse. Without strict moderation, AI can generate misleading content, be exploited for cyber threats, or manipulated for large-scale disinformation campaigns. Companies such as OpenAI and Google DeepMind have cited these risks as justification for keeping their most powerful models proprietary.

With governments around the world debating the future of AI regulation, Perplexity’s move with R1 1776 could place it under increased scrutiny. Whether regulators will impose restrictions on open-source AI models or allow them to remain widely accessible remains an open question.

The Future of AI Transparency

The introduction of R1 1776 represents a significant event in the debate over AI transparency and content moderation. Perplexity AI’s decision to modify DeepSeek and remove its restrictions challenges the prevailing trend among major AI companies that prioritize strict content oversight.

As discussions on AI safety, ethics, and openness continue, R1 1776 will serve as a case study for the effectiveness and risks of unrestricted AI. If widely adopted, the model could encourage other developers to follow a similar path. However, if it is misused or linked to misinformation concerns, it may prompt regulators to enforce stricter AI governance.

The broader question remains: should AI models prioritize openness at all costs, or is some form of content moderation necessary to maintain public trust?

Table: AI Model Benchmarks – LLM Leaderboard 


Last Updated on March 3, 2025 11:29 am CET

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
