Tools: Your Vulnerability Scanner Was the Vulnerability: 4 Projects Backdoored in 8 Days (2026)


Trivy is a vulnerability scanner. Its whole job is finding security problems in other people's code. On March 22, 2026, it became one. Not a metaphor. Not an exaggeration. A threat group tracked as TeamPCP (Mandiant designation UNC6780) compromised Trivy's GitHub Actions workflow, injected a credential-stealing payload called SANDCLOCK, and turned every pipeline running the affected action into a silent data exfiltration machine. SSH keys, API tokens, cloud credentials. Gone. Through the tool teams installed specifically to stop that from happening.

The punchline? Trivy wasn't even the only victim that week.

Eight Days, Four Compromises, Two Security Scanners

Here's the full timeline. Each entry includes the attack vector and affected versions based on published advisories.

March 19 — LiteLLM (PyPI)

LiteLLM provides a unified Python interface for calling multiple LLM providers. Thousands of AI projects depend on it. TeamPCP published a poisoned package to PyPI containing the SANDCLOCK payload embedded in the package's setup.py post-install hook. Anyone who ran pip install litellm or let automated dependency updates fire during that four-day window pulled in malware alongside their LLM proxy.

March 21 — Telnyx Python SDK (PyPI)

Same playbook, different target. Telnyx's SDK handles voice, messaging, and communications APIs. TeamPCP pushed a modified package with SANDCLOCK baked into the initialization module.

March 22 — Trivy (GitHub Actions)

This one stings differently. Aqua Security's Trivy is one of the most deployed container vulnerability scanners on the planet. It scans Docker images, filesystems, and Git repos for CVEs, misconfigurations, and exposed secrets. TeamPCP didn't bother with PyPI this time. They went straight for the GitHub Actions workflow.

March 27 — KICS (GitHub Actions)

The final strike. Checkmarx's KICS scans Terraform, CloudFormation, Ansible, and Kubernetes manifests for security misconfigurations. Same vector as Trivy. Same payload.

Four projects.
Eight days. Two of them were security scanners. The tools organizations deploy to find backdoors were carrying backdoors. That's not a footnote in a threat report. That's a fundamental trust failure.

How Mutable Tags Became the Weapon

The PyPI compromises on LiteLLM and Telnyx followed a pattern supply chain researchers have documented dozens of times: steal maintainer credentials, push a poisoned package version, wait for pip install to do the rest. Familiar. Still effective.

The GitHub Actions attacks were nastier. Most workflows reference actions by a mutable tag. A standard Trivy setup in thousands of repositories referenced aquasecurity/trivy-action@v1 (the full workflow snippet is reproduced in the code block at the end of this post).

That @v1 is a Git tag. It points to whatever commit the maintainer last associated with it. Tags in Git aren't fixed. Anyone with write access can delete a tag and recreate it pointing at a completely different commit. There's no audit log visible to consumers. No notification fires. No version number changes.

TeamPCP gained write access to the action repositories (compromised maintainer tokens remain the leading theory, though full initial access details haven't been disclosed) and performed a three-step swap: clone the legitimate action code, add SANDCLOCK as a secondary payload that runs silently after the scanner completes, and move the @v1 tag to point at the poisoned commit.

The action still ran the scanner. Still produced the expected output. Still reported vulnerabilities in other people's code. It just also quietly harvested every credential in the CI/CD environment and shipped them to attacker-controlled infrastructure over HTTPS. Nobody noticed for days. The scanner scanned. The malware ran. The output looked normal.

What SANDCLOCK Harvests

SANDCLOCK isn't ransomware. It isn't a cryptominer. It's a credential vacuum designed for stealth over spectacle. Once running inside a CI/CD environment, it targets:

SSH private keys. It searches ~/.ssh/ for id_rsa, id_ed25519, and anything else that looks like a private key. CI/CD runners frequently hold keys with push access to production repositories.

Every environment variable. AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, GITHUB_TOKEN, NPM_TOKEN, database connection strings, third-party API keys.
Pipelines need credentials to deploy, which makes their runtime environments a buffet of secrets.

Cloud provider credential files. Beyond environment variables, SANDCLOCK specifically scrapes ~/.aws/credentials, ~/.config/gcloud/application_default_credentials.json, and ~/.azure/. Persistent credential files on build runners are gold for lateral movement into cloud infrastructure.

Git configuration and stored credentials. The .gitconfig file and any cached Git credentials, enabling the attacker to push code to repositories the compromised runner can reach. This creates potential for the attack to spread itself.

Exfiltration happened over HTTPS to domains mimicking legitimate analytics and telemetry endpoints. Blending with normal outbound CI/CD traffic made network detection brutally hard. The entire design philosophy: grab everything, stay quiet, enable larger attacks later.

Checking Whether You Got Hit

This section matters more than anything else in this post.

Trivy

Find every workflow that references aquasecurity/trivy-action, then check the "Set up job" step in your GitHub Actions run logs between March 22-25 to see which commit SHA was resolved. Compare the resolved SHA against the compromised commit b4b587a89b42c8b9b4494c2e3f58f5e33eb937bb. If it matches, that run was poisoned.

KICS

Compare resolved SHAs from runs between March 27-29 against 94a2d2cfee7c15af34c3f9a50ab332dcab5c5d1a.

LiteLLM

If the installed version falls between 1.56.3 and 1.56.5, reinstall from the verified clean release, 1.56.6.

Telnyx

Versions 2.1.0 through 2.1.2 are compromised. Clean target: 2.1.3.

General SANDCLOCK Indicators

Look for these across any potentially affected system: unexpected outbound HTTPS connections from CI/CD runners to unfamiliar domains during build steps, processes reading SSH keys or cloud credential files that have no business touching them, and modified or newly created files in /tmp or runner workspace directories that don't belong to the build.

If credentials were exposed, rotate everything. Not just the ones that seem important. SANDCLOCK grabs anything it can reach, and guessing which secrets the attacker copied is a losing game.

The Broader March 2026 Context

This wasn't happening in a vacuum. March 2026 was a rough month for supply chain security across the board. North Korea-linked group UNC1069 was running a separate campaign targeting the axios npm package. Zscaler's threat research team documented a measurable surge in supply chain attacks across multiple package ecosystems. But TeamPCP's campaign stands out because of target selection.
Hitting LiteLLM and Telnyx is standard supply chain opportunism: popular packages, lots of installs, harvest at scale. Hitting Trivy and KICS shows calculation. Security tools run with elevated permissions. They access source code, container images, infrastructure configurations, and the CI/CD secrets required for deployment. Organizations explicitly trust them. When a security team maps their pipeline's attack surface, the vulnerability scanner is the last thing they'd suspect. Which is exactly what makes it the best target.

There's a compounding irony here that's hard to overstate. The organizations diligent enough to integrate Trivy or KICS into their pipelines were the ones who got hit. Teams running zero security scanning? Completely unaffected.

Defensive Measures That Would Have Caught This

No silver bullet exists. But specific steps would have prevented or contained this particular campaign.

Pin GitHub Actions to full commit SHAs. This is the single highest-impact change. A mutable tag is a redirect anyone with write access can silently change. A commit SHA is fixed forever. Tools like StepSecurity Harden-Runner and pin-github-action can enforce SHA pinning across all workflows automatically.

Enable pip hash verification. Use --require-hashes and maintain hash-pinned requirements files. A modified package with a different hash won't install. Period.

Restrict CI/CD runner egress traffic. Most build steps don't need unrestricted outbound internet access. Allowlisting known-good domains and alerting on anything else would have flagged SANDCLOCK's exfiltration within minutes.

Destroy runners after every job. Ephemeral runners prevent credential files from accumulating. Combine this with just-in-time secret provisioning that injects only what each step needs, for only as long as it needs it.

Minimize workflow permissions. A Trivy scan doesn't need write access to the repository. It doesn't need deployment secrets. Least privilege limits blast radius.
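The SHA-pinning advice above can be enforced with a lightweight audit. A minimal sketch: the demo workflow file, the regex, and the "40 lowercase hex characters means pinned" heuristic are all assumptions for illustration, and the checkout SHA below is a placeholder, not a real commit.

```shell
#!/bin/sh
# Sketch: flag GitHub Actions referenced by mutable tags rather than full
# commit SHAs. A throwaway workflow file stands in for .github/workflows/.
dir=$(mktemp -d)
cat > "$dir/scan.yml" <<'EOF'
jobs:
  scan:
    steps:
      - uses: aquasecurity/trivy-action@v1
      - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567
EOF

# Extract every "uses: owner/action@ref" and classify the ref after '@'.
grep -rhoE 'uses:[[:space:]]*[^@[:space:]]+@[^[:space:]]+' "$dir" |
while read -r line; do
  ref=${line##*@}
  if printf '%s' "$ref" | grep -qE '^[0-9a-f]{40}$'; then
    echo "pinned:  $line"
  else
    echo "MUTABLE: $line"
  fi
done
# Prints one MUTABLE line (the @v1 reference) and one pinned line
# (the placeholder 40-hex SHA).
```

In a real pipeline the grep would point at .github/workflows/ and the MUTABLE lines would fail the build.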
If any of these four projects ran in your environment during the affected windows, work through the full response checklist at the end of this post: identify every exposed CI/CD run, rotate every secret those runs could reach, and audit Git history and cloud provider logs for unauthorized activity.

The Trust Problem Doesn't Have a Clean Fix

Every dependency is a trust decision. Every GitHub Action, every PyPI package, every base image. Developers make hundreds of these calls per project, mostly without thinking about it. The standard advice says: verify everything, audit dependencies, read source code, check hashes. In practice, nobody does this for every package in every project on every update. The volume is too high and the tooling to make it manageable is still catching up.

What March 2026 demonstrated wasn't a new category of attack. Supply chain compromises have been documented for years. What it demonstrated is that the trust boundary extends further than most teams realize. The vulnerability scanner isn't outside the attack surface. It's part of it. The IaC checker isn't a neutral observer. It's running code on your infrastructure with access to your secrets, just like everything else in the pipeline.

Treating security tools as inherently trustworthy because they're security tools is the same mistake as treating any dependency as safe because it's popular. Popularity doesn't prevent compromise. It incentivizes it.

The teams that weathered this best weren't the ones with the most security tools. They were the ones who treated their security tools with the same suspicion they'd apply to any third-party code. Pinned versions. Restricted permissions. Monitored behavior. Rotated credentials fast when the advisories dropped.

Four projects. Eight days. Two scanners turned into attack vectors. Thousands of pipelines silently compromised. The question isn't whether this happens again. It's whether the pipeline will catch it when it does.
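The tag delete-and-recreate move at the heart of the Actions compromises can be reproduced locally in a throwaway repository. A minimal sketch, assuming only a git binary: the repo path, author identity, and commit messages are all placeholders.

```shell
#!/bin/sh
# Demonstrate that a Git tag is mutable: delete it and recreate it on a
# different commit, with no change visible to anyone referencing "@v1".
set -e
rm -rf /tmp/tag-demo
mkdir -p /tmp/tag-demo
cd /tmp/tag-demo
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "legitimate release"
git tag v1
sha_before=$(git rev-parse v1)

git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "poisoned commit"
git tag -d v1 >/dev/null   # delete the tag...
git tag v1                 # ...and recreate it on the new HEAD

sha_after=$(git rev-parse v1)
echo "v1 before: $sha_before"
echo "v1 after:  $sha_after"
# Same name "v1", different commit: a workflow pinned to @v1 now runs
# different code, with no version-number change and no notification.
```

On a remote, the equivalent is a force-push of the tag; consumers resolving the tag see only the new commit.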


Appendix: Code, Indicators of Compromise, and Response Checklist

The standard Trivy workflow step that thousands of repositories ran:

```yaml
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@v1
  with:
    image-ref: 'my-app:latest'
    severity: 'CRITICAL,HIGH'
```

What developers thought they were running versus what actually ran:

```
# What developers thought they were running:
# trivy scan → report vulnerabilities → done

# What actually ran:
# trivy scan → report vulnerabilities → harvest SSH keys,
# AWS creds, GitHub tokens → exfiltrate to attacker C2 → done
```

Checking Trivy exposure:

```shell
# Find all workflow files referencing Trivy
grep -r "aquasecurity/trivy-action" .github/workflows/

# Check your GitHub Actions run logs between March 22-25
# Look at the "Set up job" step to see which commit SHA was resolved
```

Checking KICS exposure:

```shell
grep -r "checkmarx/kics" .github/workflows/
```

Checking LiteLLM:

```shell
# Check which version is installed
pip show litellm

# Download that exact version (substitute the version pip reported) and
# hash the artifact to compare against the known-bad hash below
pip download litellm==1.56.4 --no-deps -d /tmp/litellm-check
pip hash /tmp/litellm-check/*
```

Reinstalling the verified clean release:

```shell
pip install litellm==1.56.6 --force-reinstall
```

Checking Telnyx:

```shell
pip show telnyx
```

Pinning actions to immutable SHAs:

```yaml
# Vulnerable (mutable tag):
uses: aquasecurity/trivy-action@v1

# Hardened (immutable SHA):
uses: aquasecurity/trivy-action@f1ef53dab1f0a26b0f9cda0f94e66e3f93ae6375
```

Hash-pinned requirements:

```
# requirements.txt with hash pinning
litellm==1.56.6 \
    --hash=sha256:a1b2c3d4e5f6...verified_hash_here
```

Minimal workflow permissions:

```yaml
permissions:
  contents: read
# Don't grant write access unless the step genuinely needs it
# Don't expose deployment secrets to scanning steps
```

- My project: Hermes IDE | GitHub - Me: gabrielanhaia

LiteLLM indicators of compromise:

- Affected versions: 1.56.3 through 1.56.5 on PyPI
- Known-bad hash (1.56.4): sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
- Clean version: 1.56.6 (published March 23 after PyPI yanked the compromised releases)

Telnyx indicators of compromise:

- Affected versions: 2.1.0 through 2.1.2
- Known-bad hash (2.1.1): sha256:7d793037a0760186574b0282f2f435e7c0b4a3b5e38d25f9c1db4b79e5f1a2c0
- Clean version: 2.1.3 (published March 24)

Trivy indicators of compromise:

- Affected action: aquasecurity/trivy-action with mutable tags @v1 and @latest
- Compromised commit SHA: b4b587a89b42c8b9b4494c2e3f58f5e33eb937bb
- Clean commit SHA: f1ef53dab1f0a26b0f9cda0f94e66e3f93ae6375 (tag restored March 25)
- Exposure window: March 22 - March 25

KICS indicators of compromise:

- Affected action: checkmarx/kics-github-action with mutable tags @v2 and @latest
- Compromised commit SHA: 94a2d2cfee7c15af34c3f9a50ab332dcab5c5d1a
- Clean commit SHA: d8e511bb7e46c8fa91c7c3e4e85a9db15a41f89c (tag restored March 29)
- Exposure window: March 27 - March 29

How the tag swap worked:

- Cloned the legitimate action code
- Added SANDCLOCK as a secondary payload that ran silently after the scanner completed
- Moved the @v1 tag to point at the new, poisoned commit

General SANDCLOCK indicators:

- Unexpected outbound HTTPS connections from CI/CD runners to unfamiliar domains during build steps
- Processes reading SSH keys or cloud credential files that have no business touching them
- Modified or newly created files in /tmp or runner workspace directories that don't belong to the build

Response checklist:

- Identify every CI/CD run that used the compromised versions
- List every secret, token, and credential accessible to those runs
- Rotate all of them. Not some. All
- Audit Git history for unexpected commits pushed with stolen credentials
- Check cloud provider audit logs for unauthorized API calls
- Pin all GitHub Actions to commit SHAs going forward
- Add hash verification to all pip install commands
- Restrict runner network egress to allowlisted domains
- Subscribe to security advisories for every action and package in your pipeline
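The LiteLLM version check above can be scripted so it drops straight into a CI job. A minimal sketch: the compromised range (1.56.3-1.56.5) and clean release (1.56.6) come from this post, while the script itself is illustrative, not official tooling.

```shell
#!/bin/sh
# Classify the installed litellm version against the known-bad range.
installed=$(pip show litellm 2>/dev/null | awk '/^Version:/ {print $2}')
if [ -z "$installed" ]; then
  echo "litellm not installed here"
else
  case "$installed" in
    1.56.3|1.56.4|1.56.5)
      echo "COMPROMISED: litellm $installed -- reinstall 1.56.6 and rotate all credentials" ;;
    *)
      echo "litellm $installed is outside the known-bad range" ;;
  esac
fi
```

The same case-statement pattern works for the Telnyx range (2.1.0 through 2.1.2, clean at 2.1.3).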