Chinese AI companies 'distilled' Claude to improve own models, Anthropic says - Reuters

Why it matters

Core event: Anthropic, creator of the Claude AI chatbot, publicly accused three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of conducting "industrial-scale distillation attacks" on Claude to train their own models.[1] The firms allegedly created over 24,000 fake accounts and generated more than 16 million interactions with Claude, using distillation—a technique where a less advanced model learns from a superior one's outputs—to extract capabilities without independent development.[1]
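Distillation itself is a standard training technique: a student model is trained to match a teacher model's output distribution rather than hard labels. A minimal sketch of the idea, with illustrative numbers not drawn from the article, might look like this (the temperature value and toy logits are assumptions for demonstration):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperatures give softer distributions."""
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened distributions.

    In distillation, the teacher's soft outputs serve as training targets,
    transferring its learned behavior to the student.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student prediction
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy example: the student starts with uninformative (uniform) logits.
teacher = np.array([4.0, 1.0, 0.5])
student = np.array([0.0, 0.0, 0.0])

# Moving the student's logits toward the teacher's reduces the loss,
# which is the signal a real training loop would descend.
before = distillation_loss(student, teacher)
after = distillation_loss(student + 0.5 * (teacher - student), teacher)
assert after < before
```

In practice the teacher's outputs are collected by querying the model at scale, which is why Anthropic characterizes the alleged mass querying of Claude as a distillation attack rather than ordinary use.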

Key players: Anthropic led the allegations via a press note and X post detailing the misuse.[1] DeepSeek reportedly accessed Claude over 150,000 times; Moonshot AI conducted over 3.4 million interactions, including targeted efforts to reconstruct Claude's reasoning traces (activity Anthropic linked through request metadata to senior Moonshot staff); MiniMax logged about 13 million exchanges.[1] No responses from the accused firms are noted in available reports.[1]

Context and timeline: Distillation is a legitimate training method when applied to one's own models, but Anthropic alleges competitors used it against Claude to sidestep the cost and time of independent R&D.[1] The activity spanned multiple phases, with Moonshot shifting to more precise extraction tactics; Anthropic detected the abuse by matching request metadata to public profiles.[1] The allegations surfaced in Anthropic's February 23, 2026, press release, amid renewed attention on DeepSeek's R1 model.[1]

Newsworthiness: The allegations underscore escalating industry tensions over data extraction, terms-of-service violations, and abuse monitoring amid global competition, especially the U.S.-China AI rivalry. They also raise questions about ethical training practices and potential regulatory responses.[1]

Sources
