Essential Guide: What Should We Learn From How Attackers Leveraged AI In 2025?
Old Playbook, New Scale: While defenders are chasing trends, attackers are optimizing the basics
The security industry loves talking about "new" threats. AI-powered attacks. Quantum-resistant encryption. Zero-trust architectures. Yet the most effective attacks of 2025 look much like those of 2015. Attackers are exploiting the same entry points that have always worked - they're just doing it better.
As the Shai-Hulud NPM campaign showed us, supply chain compromise remains a major issue. A single compromised package can cascade through an entire dependency tree, affecting thousands of downstream projects. The attack vector hasn't changed. What's changed is how efficiently attackers can identify and exploit opportunities.
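The cascade effect is just reverse reachability over the dependency graph: every package that transitively depends on the compromised one is exposed. A minimal Python sketch (the package names and graph here are hypothetical, not taken from the actual campaign):

```python
from collections import deque

def affected_downstream(reverse_deps, compromised):
    """BFS over the reverse dependency graph: every package that
    directly or transitively depends on the compromised one is affected."""
    seen, queue = set(), deque([compromised])
    while queue:
        pkg = queue.popleft()
        for dependent in reverse_deps.get(pkg, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# Hypothetical graph: package -> packages that depend on it
reverse_deps = {
    "tiny-util": ["ui-kit", "cli-tool"],
    "ui-kit": ["webapp"],
    "cli-tool": [],
    "webapp": [],
}
print(sorted(affected_downstream(reverse_deps, "tiny-util")))
```

One deeply nested utility reaching three downstream projects in a toy graph scales, in the real registry, to a single package poisoning thousands of dependents.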
AI has collapsed the barrier to entry. Just as AI has enabled one-person software projects to build sophisticated applications, the same is true in cybercrime. What used to require large, organized operations can now be executed by lean teams, even individuals. We suspect some of these NPM package attacks, including Shai-Hulud, might actually be one-person operations.
As software becomes simpler to develop, and threat actors show they can play the long game (as with the XZ Utils attack) - we're likely to see more cases where attackers publish legitimate packages that build trust over time, then one day, with the click of a button, push malicious capabilities to all downstream users.
Phishing still works for the same reason it always has: humans remain the weakest link. But the stakes have changed dramatically. The recent npm supply chain attack demonstrates the ripple effect: one developer clicked a bad link and entered his credentials, and his account was compromised. Packages with tens of millions of weekly downloads were poisoned. Despite the developer publicly reporting the incident to npm, mitigation took time - and during that window, the attack spread at scale.
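Many of these lures lean on lookalike domains that are one character away from the real thing. A crude heuristic is to flag domains that are very similar to, but not exactly, a known-good domain; a sketch using Python's stdlib string similarity (the allowlist and threshold are illustrative assumptions, and real phishing detection uses far richer signals):

```python
from difflib import SequenceMatcher

KNOWN_GOOD = ["npmjs.com", "github.com"]

def looks_like_typosquat(domain: str, threshold: float = 0.85) -> bool:
    """Flag a domain suspiciously close to a well-known legitimate one.
    An exact match is fine; a near-miss is a classic phishing lure."""
    for legit in KNOWN_GOOD:
        if domain == legit:
            return False
        if SequenceMatcher(None, domain, legit).ratio() >= threshold:
            return True
    return False

print(looks_like_typosquat("npmjs.com"))   # False - the real domain
print(looks_like_typosquat("npnjs.com"))   # True - one character off
print(looks_like_typosquat("example.com")) # False - just a different site
```

The broader point stands regardless of tooling: a filter like this only narrows the window; phishing-resistant credentials (hardware-backed 2FA, passkeys) remove the password from the equation entirely.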
Perhaps most frustrating: malware continues to bypass official gatekeepers. Our research on malicious Chrome extensions stealing ChatGPT and DeepSeek conversations revealed something we already know from mobile app stores: automated reviews and human moderators aren't keeping pace with attacker sophistication.
The permissions problem should sound familiar because it's already been solved. Android and iOS give users granular control: you can allow location access but block the microphone, or permit camera access only while an app is in use, not in the background.
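The core of that mobile model is a per-app, per-permission grant table with deny-by-default semantics: anything not explicitly allowed is blocked. A toy sketch of the idea (the app names and permission strings are hypothetical):

```python
# Explicit grants: (app, permission) -> allowed?
GRANTS = {
    ("maps-ext", "location"): True,    # user allowed location...
    ("maps-ext", "microphone"): False, # ...but blocked the microphone
}

def is_allowed(app: str, permission: str) -> bool:
    """Deny-by-default: any permission not explicitly granted is refused."""
    return GRANTS.get((app, permission), False)

print(is_allowed("maps-ext", "location"))    # True - explicitly granted
print(is_allowed("maps-ext", "microphone"))  # False - explicitly denied
print(is_allowed("maps-ext", "camera"))      # False - never asked, so denied
```

Browser extension stores, by contrast, still largely ask for broad permissions once at install time, which is exactly the gap conversation-stealing extensions exploit.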
Source: The Hacker News