Microsoft Adds DeepSeek R1 to Azure AI Foundry as OpenAI Investigates Possible Data Misuse

The addition of DeepSeek R1 to Microsoft’s AI platform has intensified scrutiny over AI ethics and whether OpenAI’s proprietary data has been used without authorization.

Microsoft has added DeepSeek R1, the groundbreaking AI reasoning model developed by China-based DeepSeek, to its Azure AI Foundry platform despite an ongoing investigation into whether the company improperly accessed OpenAI’s API data.

The decision to integrate DeepSeek R1 into Microsoft’s cloud offering raises questions about AI data security, competitive dynamics, and regulatory oversight as OpenAI and Microsoft examine potential unauthorized use of proprietary model outputs.

Related: Alibaba’s New Qwen 2.5-Max Model Takes on DeepSeek in AI Benchmarks

The inclusion of DeepSeek R1 in Azure AI Foundry reflects Microsoft’s continued expansion of its AI ecosystem, which already features models from OpenAI, Meta, Mistral, and Cohere.

Microsoft stated that the model underwent security assessments before being made available to enterprise customers. In an official statement, Asha Sharma, Microsoft’s Vice President of AI Platform, said, “One of the key advantages of using DeepSeek R1 or any other model on Azure AI Foundry is the speed at which developers can experiment, iterate, and integrate AI into their workflows.”


Related: DeepSeek Drops Another OpenAI-Buster With Janus Multimodal Models, Outpacing DALL-E 3

Microsoft has also announced plans to release optimized versions of DeepSeek R1 for Copilot+ PCs, allowing local execution through Neural Processing Units (NPUs). These versions will utilize low-bit quantization to improve efficiency while maintaining reasoning capabilities.
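Low-bit quantization of the kind mentioned above stores model weights as small integers plus a scale factor, trading a little precision for much lower memory and compute cost. Microsoft has not published the exact scheme it uses for the Copilot+ PC builds, so the following is only an illustrative sketch of symmetric linear quantization (all function names and values are hypothetical):

```python
def quantize_symmetric(weights, bits=4):
    # Symmetric linear quantization: map floats onto signed integers in
    # [-(2^(bits-1) - 1), 2^(bits-1) - 1] using one shared scale factor.
    qmax = 2 ** (bits - 1) - 1  # e.g. 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the quantized integers.
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.05, -0.93, 0.27]
q, scale = quantize_symmetric(weights, bits=4)
restored = dequantize(q, scale)
```

Each restored weight differs from the original by at most half the scale step, which is why aggressive quantization can shrink a model severalfold while keeping its reasoning behavior largely intact.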

Microsoft and OpenAI Investigate Unusual API Access Patterns

While DeepSeek R1’s technical capabilities have been widely praised, the model has also drawn scrutiny over its potential links to OpenAI’s proprietary data. According to Bloomberg, Microsoft security researchers detected an unusual spike in OpenAI API traffic originating from developer accounts linked to China in late 2024.

The report raised concerns that OpenAI-generated outputs may have been used to train competing AI models at a fraction of the cost required to develop a foundation model from scratch.

Neither OpenAI nor Microsoft has publicly confirmed whether DeepSeek was directly involved in this data access. However, OpenAI has acknowledged that it is actively reviewing API usage patterns and has already implemented stricter API policies to prevent large-scale data extraction by external developers.

The investigation is ongoing, and OpenAI has not indicated whether it will take legal action or introduce further restrictions on access to its models.

DeepSeek R1’s Training Efficiency and Market Impact

DeepSeek R1 stands out in the AI industry for its ability to achieve high reasoning performance while using fewer computational resources than competing models from OpenAI and Google DeepMind.

Unlike OpenAI’s latest systems, which require extensive GPU clusters, DeepSeek R1 was trained on 2,048 Nvidia H800 GPUs, a chip designed to comply with U.S. trade restrictions on AI hardware exports to China. The model’s efficiency contributed to a temporary decline of nearly $600 billion in Nvidia’s market capitalization as investors reassessed the long-term demand for high-end AI chips.

The emergence of DeepSeek R1 has prompted discussions about AI hardware dependence for model training. While models like GPT-4 required vast computational infrastructure, DeepSeek R1’s ability to perform complex reasoning tasks with a smaller GPU footprint suggests an alternative approach to AI training that could make high-performance models more accessible to a broader range of developers.

U.S. and European Regulators Scrutinize DeepSeek AI

DeepSeek R1 has also attracted attention from government agencies and regulatory bodies, both over data security and compliance with international privacy laws, and over the CCP-aligned censorship the model applies to certain topics, such as the 1989 Tiananmen Square protests and massacre or the Chinese leadership in general.

In Europe, DeepSeek R1 is under investigation for potential violations of the General Data Protection Regulation (GDPR). Italy’s data protection authority, Garante, has launched an inquiry into whether DeepSeek transfers European user data to China without adequate safeguards or disclosures.

If the investigation finds that DeepSeek has failed to comply with GDPR requirements, the company could face fines or restrictions on its ability to operate in European markets.

The increased regulatory focus on DeepSeek AI reflects growing concerns about the security and transparency of AI models developed outside the United States and Europe.

Beyond these data security and regulatory issues, DeepSeek R1 has raised concerns about content moderation and potential censorship. An analysis conducted by NewsGuard found that the model demonstrated heavy filtering on politically sensitive topics.

The study reported that DeepSeek R1 refused to answer 85 percent of queries related to China, particularly those concerning governance, human rights, and historical events. Additionally, the model provided incomplete or misleading responses in 83 percent of fact-based news prompts, raising concerns about whether its moderation aligns with Chinese government restrictions on information access.

Microsoft has not publicly commented on whether it will implement additional content moderation measures for DeepSeek R1 before fully integrating it into its AI ecosystem. The findings from the NewsGuard study suggest that enterprises deploying DeepSeek R1 should be aware of potential limitations in the model’s ability to provide unbiased information.

The U.S. Navy recently issued a directive banning the use of DeepSeek AI models in both official and personal settings, citing security risks associated with China’s data policies. The move is part of a broader effort by U.S. defense agencies to limit the use of foreign-developed AI systems in sensitive operations.

While Microsoft has emphasized that DeepSeek R1 passed security assessments before being added to Azure AI Foundry, the ongoing regulatory scrutiny suggests that enterprises using the model may still face compliance challenges.

Microsoft’s Expanding AI Model Portfolio and Its Risks

By adding DeepSeek R1 to Azure AI Foundry, Microsoft is further diversifying its AI offerings to provide businesses with multiple model choices.

The company has emphasized the importance of supporting a wide range of AI models to allow developers to select the most suitable technology for their specific needs. However, the inclusion of DeepSeek R1 while an OpenAI investigation is ongoing has raised questions about Microsoft’s vetting process for AI partnerships and its approach to managing potential data security risks.

As OpenAI continues to review API usage patterns, Microsoft’s role in evaluating the integrity of AI models hosted on its cloud services is likely to face increased scrutiny.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
