CI/CD Build Systems for Cloud-Native Applications

TL;DR

- Multi-Stage Docker Builds
- Build Caching Strategies: remote caching + monorepo tooling = 20-min builds → 2-min builds
- Parallel Pipeline Execution
- Alternative Container Builders
- Build Security and Supply Chain
- Optimization Techniques Summary
- Conclusion
- Frequently Asked Questions
  - How do I reduce Docker build times?
  - Should I use distroless images in production?

  - What is an SBOM and when do I need one?

Build systems are the foundation of every CI/CD pipeline. They transform source code into deployable artifacts: container images, binaries, or bundled assets. Slow builds directly impact developer productivity and deployment frequency. The good news is that three techniques cover most of the optimization work: multi-stage Docker builds with layer caching, remote build caches shared across CI infrastructure, and parallel pipeline execution. For European B2B organizations building cloud-native applications, build systems also need to produce signed artifacts with Software Bill of Materials (SBOM) documentation to satisfy supply chain security requirements. This article covers practical build optimization and security patterns for Kubernetes-targeted applications.

Multi-Stage Docker Builds

Multi-stage builds separate build-time dependencies from runtime artifacts. The builder stage includes compilers, package managers, and test tools; the runtime stage contains only the final binary and its runtime dependencies.

```
[Source Code] --> [Builder Stage] --> [Runtime Stage] --> [Minimal Image]
                        |                  |
                  [Dependencies]     [Binary Only]
                  [Build Tools]      [No Shell]
                  [Test Frameworks]  [Non-root User]
```

```dockerfile
# Builder stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 go build -ldflags="-s -w" -trimpath -o /app/server

# Runtime stage
FROM gcr.io/distroless/static-debian11
COPY --from=builder /app/server /usr/local/bin/
USER nonroot:nonroot
ENTRYPOINT ["/usr/local/bin/server"]
```

This produces a minimal image with just the Go binary. According to Google's distroless documentation, distroless images contain no shell, no package manager, and no utilities that attackers could exploit. The resulting image is typically under 20MB, compared to 800MB+ for a full Ubuntu-based image.

BuildKit cache mounts (--mount=type=cache) persist package manager caches between builds without bloating the final image, so dependency downloads happen once and are reused on subsequent builds.

For Node.js applications, the same pattern applies:

```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json pnpm-lock.yaml ./
RUN corepack enable pnpm && pnpm install --frozen-lockfile
COPY . .
RUN pnpm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
USER node
CMD ["node", "dist/index.js"]
```

Build Caching Strategies

Caching is the single most impactful build optimization. According to Docker's BuildKit documentation, proper layer ordering and caching can reduce build times by 60-80% on subsequent runs. Layer ordering matters.
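To see where a figure like 60-80% can come from, here is a back-of-the-envelope sketch. The stage timings are purely hypothetical and only illustrate the arithmetic, not a measurement:

```shell
# Hypothetical stage timings in seconds; numbers are illustrative only.
cold=$((240 + 180 + 60))   # deps install + compile + assemble, nothing cached
warm=$((6 + 180 + 30))     # deps layer cached; source still recompiles each commit
saved=$(( (cold - warm) * 100 / cold ))
echo "cold: ${cold}s warm: ${warm}s saved: ${saved}%"
```

Because the dependency step dominates the cold build, keeping its layer cacheable yields most of the win, which is why ordering instructions by change frequency matters so much.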
Place instructions that change rarely (dependency installation) before instructions that change often (source code copy):

```dockerfile
# Dependencies first (changes rarely)
COPY package.json package-lock.json ./
RUN npm ci

# Source code second (changes often)
COPY . .
RUN npm run build
```

Remote registry caching shares build cache across your team and CI:

```shell
docker buildx build \
  --cache-from type=registry,ref=registry.example.com/cache \
  --cache-to type=registry,ref=registry.example.com/cache \
  -t registry.example.com/app:v1.2.3 .
```

The first build populates the cache; subsequent builds pull cached layers from the registry. This eliminates redundant dependency downloads across CI runners and developer machines.

Monorepo build tools like Turborepo and Nx provide content-addressable caching:

```shell
turbo run build --api="https://cache.example.com" --token="$CACHE_TOKEN"
```

Changed packages rebuild; unchanged packages reuse cached outputs. For large monorepos, this turns 20-minute builds into 2-minute incremental builds.

Parallel Pipeline Execution

Run independent stages simultaneously instead of sequentially. Build, unit tests, and linting have no dependencies on each other and should run in parallel:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container
        run: docker build -t app:${{ github.sha }} .

  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: npm test

  security-scan:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Scan image
        run: trivy image app:${{ github.sha }}

  integration-tests:
    needs: build
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
    steps:
      - name: Run integration tests
        run: ./run-integration-tests.sh
```

Build and unit-tests start in parallel. Security scanning and integration tests run after build completes, but in parallel with each other. According to GitHub Actions documentation, this job dependency model is the standard approach for optimizing workflow execution time.

Build machine sizing impacts cost more than most teams realize:

- A machine that costs 4x more per hour but finishes in one-quarter the time costs the same per build
- Factor in developer wait time, and faster machines win decisively

Alternative Container Builders

Docker is not the only option for building container images. Kaniko runs inside Kubernetes pods, making it ideal for GitOps-driven build pipelines:

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--dockerfile=Dockerfile"
        - "--context=git://github.com/example/app"
        - "--destination=registry.example.com/app:v1.2.3"
        - "--cache=true"
```

Multi-architecture builds produce images for both amd64 and arm64 platforms:

```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/app:v1.2.3 \
  --push .
```
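After a multi-platform build and push, one way to confirm that both architectures actually landed in the registry is buildx's imagetools subcommand, which prints the manifest list for a tag. This sketch assumes the example tag above was pushed and you are authenticated to the registry:

```shell
# Prints the manifest list for the tag, one entry per platform
# (for the build above, expect linux/amd64 and linux/arm64).
docker buildx imagetools inspect registry.example.com/app:v1.2.3
```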
Build Security and Supply Chain

Build systems are high-value attack targets. According to Sonatype's State of the Software Supply Chain 2024, supply chain attacks continue to accelerate, making build-time security controls a necessity.

Dependency scanning fails builds on high-severity vulnerabilities:

```yaml
- name: Scan dependencies
  run: |
    npm audit --audit-level=high
    snyk test --severity-threshold=high
```

SBOM generation creates an inventory of all software components:

```shell
syft packages registry.example.com/app:v1.2.3 -o spdx-json > sbom.json
grype sbom.json
```

Container signing with Sigstore Cosign proves images have not been tampered with:

```shell
cosign sign --yes registry.example.com/app:v1.2.3
```

For European regulated industries, SBOM documentation and artifact signing satisfy supply chain transparency requirements. Integrate these steps into your pipeline security workflow and enforce signature verification during progressive delivery rollouts.

Optimization Techniques Summary

- Multi-stage Docker builds with BuildKit caching reduce image sizes by up to 80% and build times by around 60%
- Optimize layer ordering: dependencies first, code last
- Remote registry caching shares build layers across developers and CI, eliminating redundant work
- Turborepo or Nx add content-addressable caching for monorepos
- Parallel pipeline execution runs independent stages simultaneously for faster feedback
- SBOM generation and container signing are now required for European regulated industries

Conclusion

Fast, secure builds are the foundation of a productive CI/CD pipeline. Start with multi-stage Docker builds and BuildKit caching to reduce image sizes and build times. Add remote registry caching to share artifacts across your team. Structure pipeline jobs for parallel execution to minimize total workflow duration. Then layer in supply chain security: dependency scanning, SBOM generation, and container signing. These controls integrate with multi-environment deployment and GitOps workflows to maintain security from build through production.

Frequently Asked Questions

How do I reduce Docker build times?

Three techniques have the most impact: order Dockerfile instructions from least to most frequently changed for better layer caching, use BuildKit cache mounts to persist package manager caches, and implement remote registry caching to share build layers across CI runners.

Should I use distroless images in production?

Generally yes. Distroless images contain no shell, no package manager, and no utilities that attackers could exploit, which shrinks both image size and attack surface. The trade-off is debuggability: with no shell, you cannot exec into a running container, so pair distroless images with good logging and external debugging tooling.

What is an SBOM and when do I need one?

SBOM stands for Software Bill of Materials: a machine-readable inventory of all software components in your artifact. It is required for government contracts and regulated industries. Generate one with Syft during builds and scan it with Grype for vulnerabilities.

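The SBOM workflow can be sketched as a single release step. This is a minimal sketch, not a drop-in pipeline: it assumes syft, grype, and cosign are installed, the image is already pushed, and the keyless-verification identity flags are placeholder values that must match your own CI OIDC issuer and repository:

```shell
IMAGE=registry.example.com/app:v1.2.3

# Generate a machine-readable SBOM for the image, then gate on severity
syft packages "$IMAGE" -o spdx-json > sbom.json
grype sbom.json --fail-on high

# Keyless signing via Sigstore; deploy-time policy can then verify identity
cosign sign --yes "$IMAGE"
cosign verify \
  --certificate-identity-regexp "https://github.com/example/.*" \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  "$IMAGE"
```

Failing the pipeline on the grype step keeps vulnerable components from ever reaching the registry as deployable artifacts.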