```yaml
build:
  stage: build
  image: node:20
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run build
```
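Downstream jobs can reuse that cache without paying the upload cost again. A minimal sketch, assuming this pipeline also has a `test` stage (the `policy: pull` setting is GitLab's way of saying "read the cache, never write it back"):

```yaml
test:
  stage: test
  image: node:20
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull  # reuse the build job's cache, skip the re-upload
  script:
    - npm test
```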
```yaml
build-image:
  stage: build
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  script:
    - docker build -t myapp:latest .
```

Without any caching, every pipeline starts from zero:

- Docker pulls your base images from scratch
- `npm install` / `pip install` / `bundle install` downloads every dependency again
- Docker-in-Docker builds re-download every layer, every time
- Your test suite can't reuse compilation artifacts from the previous run
- Uploading and downloading a 500MB archive takes its own sweet time
- Cache misses are silent and frequent (good luck debugging that)
- Docker image layers? The `cache:` keyword can't help you there, so you end up in a rabbit hole of registry-based workarounds and BuildKit inline caching

Self-hosting a runner solves the cold-cache problem but brings its own chores:

- Provisioning the server and keeping it updated
- Installing and configuring Docker + GitLab Runner
- Monitoring disk space (those Docker layers add up quietly)
- Rotating tokens, managing SSH keys
- Getting paged at 2 am because the runner went offline

Where a persistent, cached runner pays off:

- Projects with Docker-in-Docker builds
- Monorepos with large dependency trees
- Teams running 20+ pipelines per day
- Anything where `npm install` takes longer than your actual tests

Where it's probably overkill:

- Small projects with minimal dependencies
- Pipelines that only run linters or simple scripts
- Teams running fewer than a handful of pipelines per week
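For completeness, the registry-based workaround for Docker layer caching mentioned earlier usually looks something like this. A sketch, assuming you push to the project's GitLab Container Registry; `$CI_REGISTRY_IMAGE`, `$CI_REGISTRY_USER`, and `$CI_REGISTRY_PASSWORD` are GitLab's predefined CI variables:

```yaml
build-image:
  stage: build
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  variables:
    DOCKER_BUILDKIT: "1"
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Pull the previous image so BuildKit can reuse its layers (|| true keeps
    # the very first pipeline, when no image exists yet, from failing)
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    # BUILDKIT_INLINE_CACHE=1 embeds cache metadata into the pushed image,
    # which is what makes --cache-from work on the next run
    - docker build
        --build-arg BUILDKIT_INLINE_CACHE=1
        --cache-from "$CI_REGISTRY_IMAGE:latest"
        -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```

It works, but it trades cache uploads for registry pulls and pushes on every run, which is exactly the rabbit hole described above.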