Tools: I scanned 5 popular OSS repos in 5 minutes. Here's what I found. - 2025 Update

Earlier today I shipped scan.html: a one-page, in-browser tool that takes any public GitHub repo URL, fetches its `.github/workflows/*.yml`, and returns a per-workflow report using ci-doctor (14 rules) and gha-budget (per-job pricing). It runs entirely client-side via the GitHub public API: no signup, nothing uploaded.

To make sure it actually works on real-world repos - and not just on the canned examples I built it against - I picked 5 well-known npm-ecosystem repos that I had not specifically optimized for, and ran them through.

## The 5 repos

All 5 are maintained by experienced engineers. None of these are random small repos; they all matter.

*Modeled at 30 runs/day, 8 min/job, at standard ubuntu-latest GitHub-hosted runner pricing. Real spend depends on actual run frequency, runner choice, and OSS rate-limit credits. The point of the column is comparison, not accusation.

## The same 3 rules show up in all 5 repos

This is the part I find genuinely interesting. These repos have nothing in common architecturally - vite is a bundler, axios is an HTTP client, eslint is a static analyzer, etc. - but the top-3 ci-doctor findings are nearly identical across all of them:

- **missing-timeout** (76 hits across 5 repos). No `timeout-minutes:` on jobs, so a hung step bills until GitHub's 6-hour default cap. Every repo has this.
- **missing-concurrency** (20 hits). Push 3 commits to a PR in 30 seconds and you get 3 stacked CI runs - and GitHub bills all 3. A `concurrency:` group with `cancel-in-progress: true` cancels the first 2 in milliseconds. That's a free 30-50% CI saving on PR-heavy repos.
- **missing-cache** (16 hits, mostly in eslint). `actions/setup-node` without `cache: 'npm'` (or `'pnpm'` / `'yarn'`) means every job re-downloads node_modules. Slow and expensive.

The interesting outlier is axios/axios, with 12 error-severity findings. All 12 are **deprecated-action**: workflows still pinned to actions/checkout@v3, actions/setup-node@v3, and actions/upload-artifact@v3. v3 of upload-artifact was deprecated in late 2024 and its endpoint is being shut off. These are not "save 3% of your CI bill" findings; these are "your CI will silently start failing" findings.

## Why these specific 3 are everywhere

My theory: GitHub Actions doesn't push you to add any of them. A workflow file YAML-validates fine without a timeout, a concurrency group, or a cache. CI passes. The PR ships. There is no built-in linter to nudge anyone toward better defaults, so the same 3 smells survive in every repo I scan - including mine, before I built ci-doctor.

This is a tooling problem, not a competence problem. The maintainers of all 5 of these repos are excellent engineers; the smells are just invisible until something points at them.

## What to do about it (5-minute fixes)

For each of the top 3 rules, here's the smallest possible change.

### 1. Add a job-level timeout

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15   # <-- add this
    steps:
      ...
```

### 2. Add concurrency at workflow scope

```yaml
# top of every workflow that runs on PRs
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
  cancel-in-progress: true
```

### 3. Tell setup-node what package manager you use

```yaml
- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'npm'   # or 'pnpm' or 'yarn'
```

Or just run `npx ci-doctor --fix` and let it patch missing-concurrency, missing-timeout, wide-trigger, and artifact-no-retention in place. The other 10 rules need a human to look at them, but those 4 are mechanical.

## Run this scan on your own repo

Free, browser-only, no signup: https://depmedicdev-byte.github.io/scan.html

Paste any public GitHub repo URL and get the same per-workflow report for your own code in about 10 seconds. The result URL is shareable, and nothing is uploaded.

## Methodology

- All 5 repos were pulled via the GitHub public API, no auth.
- Each `.github/workflows/*.yml` was passed through ci-doctor 0.4.1 (14 rules) and gha-budget for per-job pricing.
- Cost = sum of all jobs at GitHub-hosted standard ubuntu-latest rates, assuming 8 min/job.
- Monthly = per-run cost * 30 runs/day * 30 days. Self-hosted and large-runner jobs are unpriced.
- This is the same engine that runs in the browser at /scan.html.

## What this is NOT

This isn't an attack on any of these projects. They all ship excellent software, and "modeled $/mo" is not the same as "actual $/mo": large OSS projects get free GitHub Actions credits, run things conditionally, and use self-hosted runners for the heavy lifting. The point is that the same three workflow-YAML patterns show up in every repo I scan, including mine before I started building ci-doctor.

Free CLIs: ci-doctor, gha-budget, pin-actions. All MIT. If the in-browser report flags 5 things in your workflow and you'd rather copy a known-good template than fix one rule at a time, the Cut Your CI Bill cookbook ships 30 production patterns: monorepo dispatch, OIDC publish, security gates, matrix trims, the works. $19 one-time, MIT-licensed templates. https://depmedicdev-byte.github.io
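To make the methodology's arithmetic concrete, here is a minimal Python sketch of the modeled-cost formula. The per-minute rate is my assumption (GitHub's published price for standard Linux runners at the time of writing), not something pulled from gha-budget's internals:

```python
# Sketch of the modeled-cost arithmetic from the Methodology section.
# Assumption (mine): standard ubuntu-latest runners bill at $0.008/min.
UBUNTU_LATEST_RATE = 0.008  # USD per minute, standard Linux runner

def modeled_monthly_cost(jobs: int, minutes_per_job: float = 8.0,
                         runs_per_day: int = 30, days: int = 30) -> float:
    # Cost per run = jobs * minutes/job * rate; monthly = per-run * runs/day * days.
    per_run = jobs * minutes_per_job * UBUNTU_LATEST_RATE
    return per_run * runs_per_day * days

# e.g. a hypothetical workflow with 4 priced jobs:
print(f"${modeled_monthly_cost(jobs=4):.2f}/mo")  # -> $230.40/mo
```

Nothing in the model accounts for conditional runs, self-hosted runners, or OSS credits, which is exactly why the post treats the column as a comparison tool rather than a bill.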
The 20-repo version of this analysis lives at /benchmarks.html with per-repo deep dives.
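If you'd rather reproduce the fetch step yourself than use scan.html, the "public API, no auth" part of the methodology boils down to one call to GitHub's documented REST contents API. A minimal Python sketch (the helper names are mine, not ci-doctor's actual code):

```python
import json
from urllib.request import urlopen

def workflows_url(owner: str, repo: str) -> str:
    # GitHub REST contents API; works unauthenticated on public repos
    # (subject to the 60-requests/hour anonymous rate limit).
    return f"https://api.github.com/repos/{owner}/{repo}/contents/.github/workflows"

def workflow_files(listing: list) -> list:
    # Keep only the YAML workflow files from a contents-API directory listing.
    return [entry["name"] for entry in listing
            if entry.get("type") == "file"
            and entry["name"].endswith((".yml", ".yaml"))]

# Usage (live network call, so left commented out):
# names = workflow_files(json.load(urlopen(workflows_url("axios", "axios"))))
```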