Why Blocking China’s DeepSeek from Using U.S. AI May Be Difficult
White House Concerns Over AI Distillation
Top White House advisers are raising alarms over China’s DeepSeek and its potential use of a controversial AI training method known as “distillation.” The technique lets one AI system learn from another, potentially giving DeepSeek an edge by building on the advances of its U.S. rivals without their massive investments in cost and computing power.
Despite concerns, stopping this practice may prove challenging, according to Silicon Valley executives and investors.
DeepSeek’s Breakthrough Shakes Up the AI Industry
DeepSeek made headlines this month by unveiling an AI model that rivals top U.S. technologies, such as OpenAI’s ChatGPT, but at a significantly lower cost. Even more surprising, the China-based company released its model for free, sparking debate over how it achieved such rapid advancements.
Some experts suspect that DeepSeek may have used distillation to learn from U.S. models, allowing it to bypass the costly and time-consuming process of developing AI from scratch.
How AI Distillation Works
AI distillation trains a newer “student” model with the help of an older, more capable “teacher” system. The established model either generates example responses for the newer one to imitate or evaluates the quality of the newer system’s answers, effectively transferring its knowledge.
This means that companies like DeepSeek could benefit from the extensive resources spent by U.S. firms—without directly accessing or copying proprietary data.
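To make the mechanism concrete, here is a minimal sketch of the classic form of knowledge distillation, in which a student model is trained to match a teacher’s softened output distribution. The models, sizes, and hyperparameters are illustrative assumptions, not any company’s actual pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2, as in the original distillation formulation (Hinton et al., 2015).
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature**2

# Toy stand-ins for a large teacher and a smaller student (illustrative only).
teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Linear(128, 10)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(32, 128)  # a batch of inputs
with torch.no_grad():
    teacher_logits = teacher(x)  # the established model's answers; no gradients needed

loss = distillation_loss(student(x), teacher_logits)
loss.backward()
optimizer.step()
print(f"distillation loss: {loss.item():.4f}")
```

Note what transfers here: the teacher’s behavior, not its weights or its training data. That distinction is exactly what makes the practice hard to police.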
While distillation itself is widely used in the industry, using a rival’s model outputs to train a competing system violates the terms of service of several major U.S. AI firms, including OpenAI.
OpenAI and U.S. Firms Investigating DeepSeek
A spokesperson for OpenAI confirmed that the company is aware of groups in China actively working to replicate U.S. AI models through distillation. OpenAI is now investigating whether DeepSeek improperly used this method to develop its latest model.
Industry Experts: Learning from Rivals is Common
Despite ethical and legal concerns, some industry leaders argue that learning from competitors is standard practice in AI development.
Naveen Rao, vice president of AI at San Francisco-based Databricks, compared AI distillation to automakers reverse-engineering each other’s engines to gain insights.
“To be completely fair, this happens in every industry. Competition is real, and when information is extractable, companies will try to use it to gain an advantage,” Rao said. “We all try to be good citizens, but we’re also competing at the same time.”
Why Stopping DeepSeek May Be Difficult
Blocking DeepSeek or similar companies from leveraging U.S. AI advancements is complicated for several reasons:
- AI distillation doesn’t require direct access to U.S. systems – rather than stealing data, the method lets a model learn indirectly from another’s publicly available outputs, making enforcement tricky (see the sketch after this list).
- The practice is widely used in AI research – Even though some U.S. firms prohibit distillation in their terms of service, monitoring and proving violations can be difficult.
- AI innovation moves at a rapid pace – By the time regulations catch up, new methods may emerge to bypass restrictions.
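The “indirect learning” point is easiest to see in code. The sketch below is hypothetical: the endpoint, credential, and response format are placeholder assumptions standing in for any hosted model’s ordinary public API, queried exactly as a paying customer would.

```python
# Hypothetical sketch: harvesting a hosted model's answers through its
# normal public API to build a distillation training set. The URL, key,
# and JSON shape below are placeholders, not any real provider's interface.
import json
import requests

API_URL = "https://api.example.com/v1/chat"  # placeholder endpoint
API_KEY = "sk-placeholder"                   # placeholder credential

def collect_pairs(prompts):
    """Send ordinary customer queries and record prompt/answer pairs."""
    pairs = []
    for prompt in prompts:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"prompt": prompt},
            timeout=30,
        )
        resp.raise_for_status()
        pairs.append({"prompt": prompt, "answer": resp.json()["text"]})
    return pairs

if __name__ == "__main__":
    data = collect_pairs(["Explain quicksort.", "Summarize the key idea of distillation."])
    with open("distill_data.jsonl", "w") as f:
        for row in data:
            f.write(json.dumps(row) + "\n")
```

Nothing in such a script touches the provider’s weights or training data; every request looks like routine usage, which is why terms-of-service violations are hard to detect and even harder to prove.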
The Bigger Picture: Global AI Competition
The DeepSeek case highlights the increasingly competitive race for AI dominance between the U.S. and China. With open-source models and indirect learning techniques making AI more accessible, preventing knowledge transfer between global rivals is becoming a significant challenge for policymakers and tech companies alike.
For now, DeepSeek’s rapid rise is a reminder that in the world of AI, staying ahead means more than just innovation—it also means navigating the complex realities of global competition.