Anthropic Reports The First 80-90% 'ai-orchestrated' Cyber...

"Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with asskissing, stonewalling and acid trips?"

Recently, in a blog post titled "Disrupting the first reported AI-orchestrated cyber espionage campaign", Claude owner Anthropic shared details on a "Chinese state-sponsored group" it claimed was using Claude's coding tool to try to infiltrate 30 global targets. The report claims "the threat actor was able to use AI to perform 80-90% of the campaign", but that assertion has since faced scepticism.

As reported by Ars Technica, Dan Tentler, cofounder of internet security company Phobos Group, recently took to Mastodon to share their thoughts on this claim. After saying they'd like to see logs to verify this campaign, they cast doubt on its veracity, asking, "Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with asskissing, stonewalling and acid trips?"

Effectively, Tentler is asking why their own experience with these tools doesn't match the sophistication Anthropic attributes to the models used by the China-based hacking group. Similarly, Bob Rudis, the VP of data science, security research and detection engineering at security firm GreyNoise Intelligence, doubts the abilities of this new tech, saying, "It doesn't expand the threat model in a meaningful way, and mostly serves as a well-packaged demonstration of trends we've already known about for years."

Rudis notes, "Sure, this speeds things up (for a TINY FRACTION of adversaries) and likely lowers labor costs. But it doesn't rewrite the rules of the 'game' (I hate calling it that but it is what it is)." They go on to argue that, if anything, these tools can be more beneficial to defenders than attackers, as "defenders (in theory) have the data advantage."

In a world where AI carries out the bulk of cyberattacks, it's not hard to imagine it also being used for the bulk of cyber defence, something that might be in the best interest of AI companies.

"We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents. Read more: https://t.co/VxqERnPQRJ" — Anthropic, November 13, 2025

The full report from Anthropic acknowledges potential failure points in AI's use by bad actors, pointing out that "Claude frequently overstated findings" during the campaign.

Source: PC Gamer