Anthropic Accuses Chinese AI Firms of Massive Claude Theft - 24,000 Fake Accounts Exposed (Feb 2026)
Anthropic reveals DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts and 16 million queries to steal Claude AI capabilities in "industrial-scale" attacks. Full details and what it means for AI security.
TLDR
Anthropic has accused three major Chinese AI companies - DeepSeek, Moonshot, and MiniMax - of stealing its Claude AI using roughly 24,000 fake accounts and more than 16 million queries. One company switched to a newer Claude version within just 24 hours of being blocked. Anthropic called it “industrial-scale” theft. Reports also link DeepSeek to export-restricted Nvidia Blackwell chips. The case shows the growing difficulty of protecting expensive AI technology across borders.
Chinese AI Firms Accused of Stealing Secrets from Anthropic
San Francisco, February 25, 2026 - Anthropic has publicly accused three Chinese AI companies of large-scale theft of its technology.
The company says DeepSeek, Moonshot AI, and MiniMax ran industrial-scale distillation attacks on Claude. Distillation means querying a frontier model millions of times and training a cheaper model on its answers so that it imitates the original.
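The mechanics of distillation can be sketched in a few lines. This is a toy illustration, not any real attack code: the "teacher" function stands in for an expensive frontier model, and the "student" here simply memorizes the logged exchanges, where a real attacker would fine-tune a smaller neural network on millions of such pairs.

```python
def teacher(prompt: str) -> str:
    """Stand-in for an expensive frontier model's API."""
    return prompt.upper()  # pretend this is a costly, high-quality answer

# 1. Harvest: query the teacher at scale and log every (prompt, answer) pair.
prompts = ["explain gravity", "write a haiku", "summarize this text"]
transcript = [(p, teacher(p)) for p in prompts]

# 2. Train: this toy "student" just memorizes the transcript; a real attack
#    would fine-tune a cheaper model on the harvested pairs.
student = dict(transcript)

# 3. Imitate: the student now reproduces the teacher's behavior on seen
#    prompts without paying the teacher's training cost.
print(student["write a haiku"])
```

The economic point is in step 3: the copier pays only per-query API prices, while the original developer paid the full training bill.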
Anthropic says the operators used around 24,000 fake accounts and generated more than 16 million exchanges. The company also reports that one firm moved to a newer Claude version within 24 hours after a prior block.
The incident is part of a broader pattern of model extraction concerns across major AI labs. Anthropic argues these campaigns were coordinated and persistent, not isolated abuse.
Separate reporting has also linked DeepSeek to alleged use of restricted Nvidia Blackwell chips in China, adding a hardware compliance angle to the software-theft allegations.
Why this story matters
Building a frontier model can cost hundreds of millions of dollars in compute, data preparation, safety work, and evaluation. If competitors can copy capabilities quickly through API distillation, the economics of AI development shift.
There is also a safety risk: copied systems may not carry over the original's red-teaming, policy controls, and post-training safeguards. That creates a path for fast capability transfer without equivalent governance.
The likely result is tighter API defenses, stronger anomaly detection, and potentially new cross-border policy responses focused on AI intellectual property and model security.
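One simple form such anomaly detection could take is volume-based flagging of accounts whose query rates look like automated harvesting rather than ordinary use. This is a hedged sketch under assumed inputs: the log format and the threshold are hypothetical, and production systems would combine many more signals (prompt diversity, account age, shared payment details).

```python
from collections import Counter

def flag_suspicious(query_log, threshold=1000):
    """Return account IDs that exceeded `threshold` queries on any one day.

    query_log: list of (account_id, day) tuples, one entry per API call.
    The threshold is illustrative, not a real provider's limit.
    """
    counts = Counter(query_log)  # queries per (account, day) pair
    return sorted({acct for (acct, day), n in counts.items() if n > threshold})

# Example: one account hammers the API, another uses it normally.
log = [("acct_1", "2026-02-20")] * 1500 + [("acct_2", "2026-02-20")] * 10
print(flag_suspicious(log))  # only the high-volume account is flagged
```

A campaign spread across thousands of low-volume fake accounts, as alleged here, is precisely what defeats this naive per-account check, which is why coordinated-account detection is the harder problem.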
Tags
AI Theft, Anthropic, Claude, Distillation Attacks, AI Ethics, China AI News