Traditional Security Frameworks Leave Organizations Exposed To...
In December 2024, the popular Ultralytics AI library was compromised, installing malicious code that hijacked system resources for cryptocurrency mining. In August 2025, malicious Nx packages leaked 2,349 GitHub, cloud, and AI credentials. Throughout 2024, ChatGPT vulnerabilities allowed unauthorized extraction of user data from AI memory.
The result: 23.77 million secrets were leaked through AI systems in 2024 alone, a 25% increase from the previous year.
Here's what these incidents have in common: The compromised organizations had comprehensive security programs. They passed audits. They met compliance requirements. Their security frameworks simply weren't built for AI threats.
Traditional security frameworks have served organizations well for decades. But AI systems operate fundamentally differently from the applications these frameworks were designed to protect. And the attacks against them don't fit into existing control categories. Security teams followed the frameworks. The frameworks just don't cover this.
The major security frameworks organizations rely on, NIST Cybersecurity Framework, ISO 27001, and CIS Control, were developed when the threat landscape looked completely different. NIST CSF 2.0, released in 2024, focuses primarily on traditional asset protection. ISO 27001:2022 addresses information security comprehensively but doesn't account for AI-specific vulnerabilities. CIS Controls v8 covers endpoint security and access controls thoroughly—yet none of these frameworks provide specific guidance on AI attack vectors.
These aren't bad frameworks. They're comprehensive for traditional systems. The problem is that AI introduces attack surfaces that don't map to existing control families.
"Security professionals are facing a threat landscape that's evolved faster than the frameworks designed to protect against it," notes Rob Witcher, co-founder of cybersecurity training company Destination Certification. "The controls organizations rely on weren't built with AI-specific attack vectors in mind."
This gap has driven demand for specialized AI security certification prep that addresses these emerging threats specifically.
Consider access control requirements, which appear in every major framework. These controls define who can access systems and what they can do once inside. But access controls don't address prompt injection—attacks that manipulate AI behavior through carefully crafted natural language input, bypassing authentication entirel
Source: The Hacker News