The Anthropic logo displayed on the stage during the company’s Builder Summit in Bengaluru, India, on Monday, Feb. 16, 2026. Photographer: Samyukta Lakshmi/Bloomberg via Getty Images
Anthropic on Monday accused three Chinese AI enterprises of engaging in coordinated campaigns to extract information from its model, making it the latest American tech firm to level such claims after OpenAI issued similar complaints.
According to a statement from Anthropic, DeepSeek, Moonshot AI and MiniMax — the three firms in question — engaged in concerted “distillation attack” campaigns, flooding Claude with large volumes of specially crafted prompts to train their own proprietary models.
Through distillation, a smaller AI model learns to mimic the performance of a larger, pre-trained model by extracting knowledge from it, a technique particularly useful for smaller teams with fewer resources.
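In broad strokes, one common form of distillation trains the student to match the teacher's full output probability distribution rather than just its final answers. The sketch below illustrates that core idea with a temperature-softened KL-divergence loss; it is a minimal, generic illustration of the technique, not a description of any pipeline used by the companies named in this article, and all names in it are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields a softer distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened distributions.

    Minimizing this pushes the student to mimic the teacher's entire output
    distribution, not merely its top prediction -- the core idea of distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that already matches the teacher incurs zero loss:
loss_same = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
# A student that disagrees with the teacher incurs a larger loss:
loss_diff = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

In a real training loop, the teacher's soft targets would come from querying the larger model, and the loss would be backpropagated through the student's parameters, which is why large volumes of teacher responses are so valuable.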
Although Anthropic’s service restrictions prevent commercial access to Claude in China, the three firms allegedly used commercial proxy services to sidestep them, gaining access to networks running tens of thousands of Claude accounts simultaneously.
“Once access is secured, the labs generate large volumes of carefully crafted prompts designed to extract specific capabilities from the model,” Anthropic said in the statement.
Claude’s responses to these prompts are harvested en masse, either for directly training the Chinese models or for reinforcement learning, a data-intensive process in which AI models learn decision-making through trial and error, without human guidance.
Anthropic estimated that the three Chinese firms were collectively able to generate over 16 million exchanges with Claude from around 24,000 fraudulently created accounts. Of the three enterprises, Anthropic found MiniMax to have driven the most traffic, with over 13 million exchanges.
DeepSeek, Moonshot AI and MiniMax have yet to respond to a request for comment from CNBC.
Not the first time
Anthropic joins a growing chorus of American companies expressing concerns over distillation from Chinese AI firms.
Earlier this month, Sam Altman’s OpenAI submitted an open letter to U.S. legislators, claiming to have observed activity “indicative of ongoing attempts by DeepSeek to distill frontier models of OpenAI and other US frontier labs, including through new, obfuscated methods.”
The company has flagged evidence of distillation by Chinese firms since early last year, when DeepSeek’s first model launched in China and users found it strikingly similar to ChatGPT, the Financial Times reported in January 2025, citing OpenAI insiders.
Distillation, however, is not an uncommon practice in the AI industry.
Anthropic acknowledged in the Monday statement that AI firms “routinely distill their own models to create smaller, cheaper versions.”
The company was, however, concerned about the competitive advantage rival firms stand to gain, as the practice can be used “to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
In their respective statements, Anthropic and OpenAI have framed distillation by these Chinese firms as a national security threat.
Like OpenAI, which described DeepSeek’s practices as “adversarial distillation,” Anthropic expressed concern over the possibility of “authoritarian governments deploy[ing] frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.”
It remains unclear, however, how far these statements reflect genuine security concerns rather than a desire to preserve the competitive lead of America’s AI corporations.
Some online users were quick to point out the parallel between the practices Anthropic described and its own use of distillation to train proprietary models.
Anthropic has long framed “compute leadership as a national security priority,” consistently advocating for tighter export controls of advanced AI chips to China, according to Rui Ma from boutique consulting firm Tech Buzz China.
“Whether intentional or not, the narrative of illicit capability transfer strengthens the case for stricter chip restrictions,” Ma added.
On the same day of Anthropic’s statement, Reuters reported that the U.S. had found evidence of DeepSeek training its AI model on Nvidia’s flagship Blackwell chip, apparently flouting export controls, according to anonymous senior officials.
Such reports add to concerns within an administration that appears increasingly anxious about China’s rapid advancements in the AI industry, especially when those gains reportedly stem from the use of American-developed systems.
Last Friday, the White House announced the establishment of a new initiative within the Peace Corps to promote American AI interests abroad and to help partner nations adopt cutting-edge systems.
