Anthropic says it has identified thousands of ‘fraudulent accounts’ taking Claude and ‘extracting its capabilities to train and improve their own models’

The question of what data AI models are trained on, and the legitimacy of that data, is a thorny one. Anthropic found itself defending its use of copyrighted material to train its Claude AI in the US last year, in a case that eventually resulted in a ruling that its scraping of copyrighted works fell under fair use.

However, the company eventually agreed to pay a $1.5 billion settlement over claims that it pirated copies of several authors' works. I mention this because Anthropic has recently taken to X to complain about "industrial-scale distillation attacks" on Claude, perpetrated by what it says are over "24,000 fraudulent accounts" that have generated over 16 million exchanges with the AI chatbot, thereby "extracting its capabilities to train and improve their own models."

"We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models." (Anthropic on X, February 23, 2026)

Which, as far as Anthropic is concerned, really isn’t on. It identifies DeepSeek, Moonshot AI, and MiniMax as the perpetrators of the attacks, and while it says that “distillation can be legitimate”, it also declares: “Foreign labs that illicitly distil American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems.”

"I mean don't you basically train your models the same way, by sucking up half the internet?" (reply on X, February 23, 2026)

In a further post, Anthropic says: “These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community,” before linking out to a news post on the topic.

The post goes into further detail regarding the discovery of the attacks, and also says that Anthropic was able to attribute “each campaign to a specific lab with high confidence through IP address correlation, request metadata, infrastructure indicators, and in some cases corroboration from industry partners.”

Which, as X user AntonLaVay points out, sounds like Anthropic loudly declaring that it can de-anonymize its users with relative ease. That’s perhaps a privacy-related point for another day.

In the meantime, though, it seems that while Anthropic is fine with training its own models on copyrighted data, it considers other companies using Anthropic's work to train their own models a serious problem.

And while the foreign military angle is certainly an interesting one, I’ve got a feeling it might not engender the same sort of sympathy as that given to private individuals who claim to have had their work incorporated into the Claude AI behemoth. Just a thought.

