China’s artificial intelligence (AI) startup DeepSeek is shaking the foundations of global tech markets, calling into question the inflated valuations of U.S. technology giants.
The company’s R1 model, released on January 10, has proven that competitive AI systems can be developed with a fraction of the resources typically required by industry leaders.
The news sent Nasdaq 100 futures tumbling more than 5% on Monday. As investors grapple with the implications, some are asking a pressing question: did DeepSeek just burst the U.S. tech stock market bubble?

Nvidia, the poster child of the AI boom, saw its shares plummet more than 13% in premarket trading.

At the core of the upheaval is DeepSeek R1’s remarkable efficiency. Unlike models from OpenAI and Meta that rely on fleets of costly, high-performance accelerators, R1 achieved comparable performance using Nvidia’s H800 GPUs, lower-grade chips designed to comply with U.S. export controls.
This achievement has disrupted longstanding assumptions about the necessity of massive infrastructure spending in AI development and raised new concerns about the sustainability of Silicon Valley’s business model.
DeepSeek R1: A Cost-Effective Challenger to Silicon Valley
DeepSeek’s R1 model is a milestone in AI innovation, climbing to the top spot on Apple’s U.S. App Store within days of its release. By exposing its step-by-step reasoning, the app has been praised for solving complex queries efficiently, and user reviews highlight its accessibility and reliability, a contrast with the resource-intensive approaches of its U.S. counterparts.
The model was trained using 2,048 Nvidia H800 GPUs at a total cost of under $6 million, according to a December 2024 research paper released by DeepSeek. These GPUs, deliberately limited (most notably in chip-to-chip interconnect bandwidth) to comply with U.S. export restrictions, posed unique engineering challenges.
Yet DeepSeek’s engineers developed novel optimization techniques to cut computational and memory requirements, and the model posts scores of 97.3% on the MATH-500 benchmark and 79.8% on AIME 2024.
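DeepSeek’s papers describe their specific optimizations in detail; as a generic illustration of the kind of memory-for-compute trade-off that makes training feasible on constrained hardware, consider gradient accumulation, a standard technique (not specifically attributed to DeepSeek here) that reaches a large effective batch size while holding only a small micro-batch in memory at a time. A minimal, framework-free sketch:

```python
# Illustrative only: gradient accumulation is a generic technique for
# training with large effective batch sizes on memory-limited GPUs.
# It is not taken from DeepSeek's papers; it shows the memory-for-compute
# trade-off that such optimizations rely on.

def sgd_step(weights, grad, lr=0.1):
    """One plain SGD update: w <- w - lr * grad."""
    return [w - lr * g for w, g in zip(weights, grad)]

def accumulated_step(weights, micro_batches, grad_fn, lr=0.1):
    """Average gradients over several equal-sized micro-batches, then
    update once. Mathematically equivalent to one step on the full batch,
    but only one micro-batch's activations must fit in memory at a time."""
    total = [0.0] * len(weights)
    for batch in micro_batches:
        g = grad_fn(weights, batch)  # gradient on a small slice of data
        total = [t + gi for t, gi in zip(total, g)]
    avg = [t / len(micro_batches) for t in total]
    return sgd_step(weights, avg, lr)

# Toy problem: fit y = 2x with squared loss; dL/dw = 2x(wx - y).
def grad_fn(weights, batch):
    w = weights[0]
    return [sum(2 * x * (w * x - y) for x, y in batch) / len(batch)]

data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0, 4.0)]
micro = [data[:2], data[2:]]  # split the batch into two micro-batches

weights = accumulated_step([0.0], micro, grad_fn)
print(round(weights[0], 4))  # → 3.0, same as one full-batch step
```

Because the micro-batches are equal-sized, the averaged accumulated gradient equals the full-batch gradient exactly, so the update is identical while peak memory scales with the micro-batch rather than the full batch.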
Founder Liang Wenfeng, a former hedge fund manager, described the company’s strategy: “We estimate that the best domestic and foreign models may have a gap of one-fold in model structure and training dynamics. For this reason, we need to consume four times more computing power to achieve the same effect. What we need to do is continuously narrow these gaps” [36Kr].
Ripple Effects Across Global Markets
The release of R1 triggered a sharp selloff in global tech stocks. Nvidia, whose GPUs are widely regarded as essential to AI development, saw its valuation drop by billions.
European chipmaker ASML Holding NV also suffered an 11% decline, while Nasdaq 100 futures recorded trading volumes four times the daily average by early Monday. Investors are reassessing the financial underpinnings of the AI sector, which has driven significant growth in tech stocks over the past year.

The fallout extends beyond the U.S., with Chinese AI-related stocks such as Merit Interactive Co. surging as much as 20% in response to DeepSeek’s success. The Hang Seng Tech Index rose ahead of the Lunar New Year, reflecting optimism about China’s growing presence in AI innovation.
The Geopolitical Dimension: Sanctions and Innovation
DeepSeek’s rise is a direct response to U.S. export controls designed to limit China’s access to advanced technologies. Since 2022, these restrictions have sought to slow the development of competitive AI systems in China by cutting off access to cutting-edge hardware.
However, DeepSeek’s resourceful use of H800 GPUs has demonstrated that innovation can thrive even under stringent constraints.
Liang’s strategy of stockpiling restricted GPUs before the sanctions took full effect was pivotal. By focusing on efficiency rather than brute computational power, DeepSeek’s engineers showcased how constraints can drive creative problem-solving.
Yann LeCun, Meta’s Chief AI Scientist, praised the open-source ethos behind R1’s development, stating: “DeepSeek has profited from open research and open source (e.g., PyTorch and Llama from Meta). They came up with new ideas and built them on top of other people’s work.”
Implications for U.S. Tech Giants
The success of DeepSeek’s R1 model poses uncomfortable questions for U.S. tech leaders like Meta and Microsoft, which have invested billions in AI infrastructure. Meta CEO Mark Zuckerberg recently outlined the company’s ambitious plans to deploy over 1.3 million GPUs in 2025, stating: “We’re planning to invest $60-65B in capex this year while also growing our AI teams significantly, and we have the capital to continue investing in the years ahead.”
A New Era for AI Innovation
DeepSeek’s commitment to open-source collaboration sets it apart from industry giants. By releasing R1’s model weights and publishing its architecture and training methods, the company has enabled developers worldwide to replicate or build on its work.
This transparency contrasts with the proprietary nature of platforms like OpenAI’s ChatGPT, highlighting a potential shift toward more accessible AI innovation.
DeepSeek’s achievements are a reminder that technological leadership is not solely defined by financial resources. Whether this marks the end of the U.S. tech stock market bubble or a new chapter in global AI competition, one thing is clear: the rules of the game are changing.