We Cut Our GitLab Build Time by 59% With One Change

You know the feeling. You push a one-line fix, open the pipeline, and watch your runner spend two minutes downloading node_modules. Again. The same node_modules it downloaded ten minutes ago. On the last push. That was also a one-line fix.

Shared runners have the memory of a goldfish. And you're paying for it in build minutes.

The problem: shared runners forget everything

GitLab's shared runners are ephemeral by design. Each job gets a clean machine. Great for isolation. Terrible for your afternoon. Every job starts from zero:

- Docker pulls your base images from scratch
- npm install / pip install / bundle install downloads every dependency again
- Docker-in-Docker builds re-download every layer, every time
- Your test suite can't reuse compilation artifacts from the previous run

"But there's the cache: keyword!" Sure. It uploads a tarball to object storage and downloads it on the next run. In practice:

- Uploading and downloading a 500MB archive takes its own sweet time
- Cache misses are silent and frequent (good luck debugging that)
- Docker image layers? The cache: keyword can't help you there. You end up in a rabbit hole of registry-based workarounds and BuildKit inline caching (sketched below)

For small projects, whatever. For anything with real dependencies or Docker builds, you feel it on every push.

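For the curious, that rabbit hole usually looks something like the following. This is a minimal sketch, assuming your project pushes to the GitLab container registry; the :cache tag is an arbitrary name chosen for this example, not part of our setup:

```yaml
build-image:
  stage: build
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  variables:
    DOCKER_BUILDKIT: "1"
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # pull the previous image so its layers can seed the build cache
    - docker pull "$CI_REGISTRY_IMAGE:cache" || true
    # BUILDKIT_INLINE_CACHE=1 embeds cache metadata in the pushed image
    - docker build --cache-from "$CI_REGISTRY_IMAGE:cache" --build-arg BUILDKIT_INLINE_CACHE=1 -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    # re-tag and push so the next pipeline can reuse these layers
    - docker tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" "$CI_REGISTRY_IMAGE:cache"
    - docker push "$CI_REGISTRY_IMAGE:cache"
```

Every one of those extra steps costs time and can fail in its own special way. That's rather the point.
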
What happens when the cache actually sticks around

When your runner lives on a dedicated machine that doesn't self-destruct after each job, things get better fast:

Docker layer cache just works. Your FROM node:20 isn't pulled every run. Your RUN apt-get install layer is already built. Docker's native caching does what it was designed to do. No config, no tricks.

The /cache volume persists between jobs. GitLab runners support a local cache directory mounted as a Docker volume. On a shared runner, that volume dies with the VM. On a dedicated machine, it stays. Your cache: directive in .gitlab-ci.yml writes to local disk instead of round-tripping through S3.

Docker-in-Docker benefits the most. If you're building container images in CI, a persistent Docker daemon means every subsequent build reuses layers from previous builds. No registry hacks. No BuildKit configuration. Just Docker doing its thing.

None of this is magic. It's just what happens when your runner isn't destroyed after every job.

Proof: same job, same project, very different numbers

Here's our build app job:

```yaml
build:
  stage: build
  image: node:20
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run build
```

Same job. Same codebase. 59% faster. And that's a warm cache. The first run is comparable to a shared runner. Every run after that benefits from Docker layers and dependencies already sitting on disk.

The queue time drop matters too. Shared runners serve everyone on GitLab.com, so your job waits in line behind strangers. A dedicated runner picks up your job immediately because it has nothing better to do.

Now multiply that by 50 pipeline runs a day.

"I'll just self-host a runner."

You can. And if you have a dedicated ops person, or you genuinely enjoy debugging Docker daemon crashes on a Saturday morning, go for it. Just budget for:

- Provisioning the server and keeping it updated
- Installing and configuring Docker + GitLab Runner
- Monitoring disk space (those Docker layers add up quietly)
- Rotating tokens, managing SSH keys
- Getting paged at 2 am because the runner went offline

The cache benefits of a persistent runner are real. The Sunday afternoon you lose figuring out why /var/lib/docker filled up the disk is also real.

What we built instead

This is why RocketRunner exists. You get a dedicated VM. Real hardware, not a shared slice. Docker and the GitLab runner are installed and registered with your project automatically. Because it's your machine running your Docker daemon, all caching works natively.

You don't configure any of this. It's a side effect of having a runner that doesn't get thrown away after every job.

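A quick note on routing: GitLab decides where a job runs via runner tags. If your dedicated runner is registered with a tag, pinning a job to it is one line. The tag name dedicated below is purely illustrative, not something RocketRunner prescribes:

```yaml
build:
  stage: build
  image: node:20
  tags:
    - dedicated   # illustrative; use whatever tag your runner registered with
  script:
    - npm ci
    - npm run build
```

If the runner is registered to run untagged jobs for your project, you can skip the tags: block entirely.
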
What this looks like in your .gitlab-ci.yml

Typical Node.js setup:

```yaml
build:
  stage: build
  image: node:20
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run build
```

On a shared runner, npm ci downloads everything every time. The cache round-trip to S3 often takes longer than the install itself. Ironic. On RocketRunner, that cache lives on a local volume. First run populates it. Second run reads from disk. Done.

For Docker builds, the gap gets embarrassing:

```yaml
build-image:
  stage: build
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  script:
    - docker build -t myapp:latest .
```

Shared runner: pulls docker:24.0.5, pulls every layer in your Dockerfile, every time. A 3-minute build that should take 20 seconds. You go make coffee. You come back. It's still pulling.

RocketRunner: Docker daemon is already running. Base images are cached. Unchanged layers are skipped. It finishes before you can alt-tab away.

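While we're in the file: one refinement worth knowing wherever your runner lives. GitLab can derive the cache key from your lockfile, so the cache is invalidated only when dependencies actually change rather than on every branch. A minimal sketch of the same job:

```yaml
build:
  stage: build
  image: node:20
  cache:
    key:
      files:
        - package-lock.json   # new cache key only when the lockfile changes
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run build
```

On a persistent runner this mostly saves disk; on a shared runner it saves you re-uploading the same tarball under a new branch-based key.
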
When this matters (and when it doesn't)

Worth it for:

- Projects with Docker-in-Docker builds
- Monorepos with large dependency trees
- Teams running 20+ pipelines per day
- Anything where npm install takes longer than your actual tests

Probably not worth it for:

- Small projects with minimal dependencies
- Pipelines that only run linters or simple scripts
- Teams running fewer than a handful of pipelines per week

Try it

RocketRunner starts with a 48-hour free trial. Setup takes about 2 minutes.
Connect your GitLab account, pick a server size, choose a region, and your runner is live. The smallest plan runs at $0.018/hr with a $10.59/month cap. Most teams pay between $1 and $10 a month.

If your pipelines spend more time downloading dependencies than running your actual code, a persistent cache might be all you need.
