Meta’s AI team is under intense pressure following the release of DeepSeek’s R1 model, which has challenged the AI industry with its unprecedented efficiency and performance.
Anonymous posts on the professional networking platform Blind reveal turmoil within Meta’s ranks, with engineers describing a frantic effort to understand and replicate DeepSeek’s success while grappling with internal inefficiencies and leadership missteps.
Blind is an anonymous professional networking platform where employees can share information, discuss workplace issues, and connect with peers inside and outside their own industry. The platform verifies that users are actual employees of the companies they claim to work for, and it is especially popular among tech professionals.
Related: How DeepSeek R1 Surpasses ChatGPT o1 Under Sanctions, Redefining AI Efficiency Using Only 2,048 GPUs
One anonymous Meta employee, posting under the name “ngi,” summarized the mood within the GenAI division of Meta:
“It started with DeepSeek V3 [a DeepSeek model released in December 2024], which rendered Llama 4 already behind in benchmarks. Adding insult to injury was the ‘unknown Chinese company with 5.5 million training budget.’ Engineers are moving frantically to dissect DeepSeek and copy anything and everything we can from it.
I’m not even exaggerating. Management is worried about justifying the massive cost of GenAI org. How would they face the leadership when every single ‘leader’ of GenAI org is making more than what it cost to train DeepSeek V3 entirely, and we have dozens of such ‘leaders.’ DeepSeek R1 made things even scarier. I can’t reveal confidential info but it’ll be soon public anyways.
It should have been an engineering focused small org but since a bunch of people wanted to join the impact grab and artificially inflate hiring in the org, everyone loses.”
The employee’s comments highlight the internal dissatisfaction with Meta’s approach to AI development, which many describe as overly bureaucratic, resource-intensive, and driven by superficial metrics rather than meaningful innovation.
The release of DeepSeek R1 has exposed these shortcomings and forced a reckoning for one of the AI industry’s largest players.
Related: LLaMA AI Under Fire – What Meta Isn’t Telling You About “Open Source” Models
DeepSeek R1 Sends Shockwaves Through US Tech Sector
DeepSeek’s R1 model, released on January 20, 2025, has upended the global AI landscape by demonstrating that high-performance models can be developed at a fraction of the cost typically associated with such projects.
Using Nvidia H800 GPUs, lower-grade chips restricted under U.S. export controls, DeepSeek engineers trained the underlying DeepSeek V3 base model for under $6 million, according to a research paper released in December 2024.
These GPUs, intentionally throttled to comply with U.S. sanctions, presented unique challenges, but DeepSeek’s optimization techniques allowed the team to achieve performance comparable to industry-leading models.
R1’s benchmarks include a 97.3% score on MATH-500 and a 79.8% score on AIME 2024, placing it among the most capable AI systems in the world.
The efficiency of DeepSeek R1, which matches or outperforms OpenAI’s o1 on several benchmarks, has not only shaken confidence in U.S. tech giants like Meta but has also triggered significant market reactions.
Nvidia’s stock dropped more than 13% in premarket trading following the model’s release, and Nasdaq 100 futures fell by more than 5%. Meanwhile, DeepSeek has climbed to the top spot on Apple’s U.S. App Store, surpassing OpenAI’s ChatGPT in downloads.

Meta Engineers Question Reliance on Expensive Computational AI Training
Within Meta, engineers have criticized the company’s reliance on brute computational power rather than pursuing efficiency-driven innovation.
One employee remarked on Blind: “A lot of the leadership has literally no idea (even a lot of engineering) about the underlying technology and they keep selling ‘more GPUs = win’ to the leadership.” Another shared frustration with the culture of “impact chasing,” describing it as a race for promotions rather than a commitment to meaningful advancements.
Meta’s AI efforts have also faced scrutiny for their lack of agility compared to competitors. DeepSeek’s R1 model is not only cost-effective but also open-source, allowing developers worldwide to examine and build upon its architecture.
The Blind discussions also reveal broader industry concerns. Google employees acknowledged the disruptive impact of DeepSeek, with one noting: “It really is crazy what DeepSeek is doing. It’s not just Meta, they are lighting a fire under OpenAI, Google and Anthropic’s ass as well. Which is a good thing, we are seeing real-time how effective an open competition is for innovation.”
This sentiment reflects the growing recognition that traditional resource-heavy strategies may no longer guarantee dominance in AI development.
DeepSeek’s open approach has drawn praise from industry leaders, including Meta’s own Chief AI Scientist, Yann LeCun, who wrote on LinkedIn: “DeepSeek has profited from open research and open source (e.g., PyTorch and Llama from Meta). They came up with new ideas and built them on top of other people’s work.”
Mark Zuckerberg Doubles Down on AI Infrastructure Investments
In stark contrast, Meta has focused on large-scale infrastructure investments. CEO Mark Zuckerberg recently announced plans to deploy over 1.3 million GPUs in 2025 and invest $60-65 billion in AI development.
“This is a massive effort, and over the coming years, it will drive our core products and business, unlock historic innovation, and extend American technology leadership,” Zuckerberg said in a public statement earlier this year. However, these plans now appear increasingly at odds with the lean, efficiency-first approach demonstrated by DeepSeek.
DeepSeek’s rise has also reignited debates over U.S. export restrictions on AI-related technologies to China. Since October 2022, the Biden administration has implemented measures to limit China’s access to advanced chips, including Nvidia’s H100 GPUs.
However, DeepSeek’s ability to achieve world-class results with restricted hardware underscores the limitations of these policies. By stockpiling H800 GPUs before the sanctions took full effect and focusing on efficiency, DeepSeek has turned constraints into advantages.
Founder Liang Wenfeng, a former hedge fund manager, described the company’s strategy: “We estimate that the best domestic and foreign models may have a gap of one-fold in model structure and training dynamics. For this reason, we need to consume four times more computing power to achieve the same effect. What we need to do is continuously narrow these gaps.”
As the AI industry grapples with the implications of DeepSeek’s success, Meta faces an urgent need to adapt. The company’s employees have made their frustrations clear, calling for a shift toward more efficient, innovation-driven strategies. For now, DeepSeek’s R1 model stands as a powerful demonstration of resourceful engineering, reshaping the competitive dynamics of global AI development.