Kubernetes 1.36 Removed gitRepo Volumes — Your Helm Charts Pass Validation, Your Pods Don't Schedule


Kubernetes v1.36 shipped on April 22, 2026, and one removal is worth a runbook entry on its own: the in-tree gitRepo volume driver is gone. gitRepo was the volume type that let a pod clone a git repository at startup straight into a mounted volume. It had been deprecated since v1.11 (mid-2018), flagged for security issues (CVE-2024-10220, the symlink escape that let containers traverse out of the cloned tree), and finally retired in 1.36.

There is no replacement field. There is no compatibility shim. The schema validator on a 1.36 API server still accepts the field — it has to, because old manifests in etcd reference it — but the kubelet refuses to mount it. The result is a deploy-time failure that sails through every pre-deploy gate.

The CI/CD pipeline that lies to you

A Helm chart that includes a gitRepo volume looks fine to every tool in the chain:

```yaml
volumes:
  - name: app-config
    gitRepo:
      repository: "https://github.com/acme/config-repo.git"
      revision: "main"
```

helm lint passes. helm template renders. kubectl apply --dry-run=server against a 1.36 cluster returns pod/foo created (dry run) — because the API server still validates the schema. The CI pipeline goes green.

At actual apply time on a 1.36 node, the kubelet refuses to mount the volume and the pod sits in ContainerCreating forever, with events that say:

```
Warning  FailedMount  kubelet  MountVolume.SetUp failed for volume "app-config": gitRepo volume plugin is no longer supported
```

Nothing in the Helm chart's metadata indicates that this volume type is gone. Nothing in the chart's Chart.yaml signals incompatibility with k8s 1.36. The chart's kubeVersion field is advisory, and most public charts don't update it for individual volume removals.

Where this hides in real codebases

Three places where gitRepo volumes survive in 2026 codebases:

- Internal "config-as-code" sidecars. A pre-2020 pattern: a sidecar mounts a gitRepo volume to pull the latest config repo on pod restart. Replaced in most teams by ConfigMaps or Vault, but legacy clusters and forgotten staging environments often still run it.
- Helm charts pinned to an old version. Charts from the 2018–2020 era on Artifact Hub that haven't been re-published. helm install against a pinned version still pulls the old manifest, and the chart's stated kubeVersion range often hasn't been narrowed to exclude 1.36.
- Custom operators that generate Pod specs. Operators written by platform teams that emit gitRepo volumes for config-loading. The operator itself doesn't fail on upgrade, but the pods it generates do — and the operator's own readiness probe is usually unaware that its child workloads aren't running.

The third category is the most painful. The operator reports "healthy." The CRD instances are Reconciled. Only when you look at the pods directly do you see they've been stuck in ContainerCreating for hours.

The migration that's not a search-and-replace

Kubernetes' own deprecation notice points at three replacements: an init container that runs git clone, the git-sync sidecar project, or an external operator like Flux or Argo for actual GitOps. Each one has different semantics from gitRepo:

- The init-container approach is closest to drop-in, but it requires you to add an emptyDir volume that the init container writes into and the main container reads from, plus a secret mount if your repo isn't public. (gitRepo never supported auth, so anyone migrating from gitRepo is by definition working with public repos — but that means they're now exposed to the same supply-chain issues that made gitRepo unsafe in the first place.)
- git-sync is the closest in spirit. It's an actively maintained project, lives as a sidecar, and does periodic refresh. But it changes pod resource accounting, and on small clusters the extra container can push pods over their memory limits.

The Helm-chart fix isn't a one-line swap. Expect it to touch the pod spec, add an emptyDir volume, add an init container or sidecar, configure auth if the repo isn't public, and update probes if your liveness check assumed instant volume availability at container start.

What to do this week

Three actions, in order:

- Grep your manifests, charts, and operator code for gitRepo:. If you're still on 1.35 or earlier, you have until your next upgrade to fix it. If you're on 1.36 already and apply hasn't broken yet, you're flying on workloads that haven't been restarted since the upgrade.
- Audit your operator-generated Pod specs. Run kubectl get pods -A -o json | jq '.items[].spec.volumes[]? | select(.gitRepo)' against your clusters. Anything that comes back is a future FailedMount.
- Pin chart versions explicitly with kubeVersion guards. When you migrate, narrow the chart's kubeVersion to < 1.36 for the legacy version and bump the major version for the migrated one. This stops helm upgrade --install from silently rolling forward to a broken combination.

Bonus worth noting: Ingress NGINX was retired entirely on March 24, 2026 — no more releases, bugfixes, or security updates. If your cluster's Ingress controller is ingress-nginx (the Kubernetes-project one, not NGINX Inc.'s nginx-ingress), the migration to InGate or NGINX Gateway Fabric belongs on the same runbook page.

The "everything passes, then breaks at deploy" pattern is what makes K8s upgrades load-bearing for an entire engineering org. The 1.36 gitRepo removal is the cleanest example we've seen this year of CI/CD validation that proves nothing about runtime behavior. We built FlareCanary for the API-side version of this same pattern: schema accepts the request, response shape passes validation, but the field semantics changed. Same problem, different layer.

If your org runs Kubernetes 1.35 or earlier and any team has a gitRepo volume in their chart, mark the next 1.36 upgrade window as a known-risk deploy. Operator-generated pods are the easiest to miss.
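To make the init-container replacement concrete, here is a minimal sketch of what the migrated pod spec could look like. This is an assumption-laden example, not the official migration: the names (app-config, acme/app), the alpine/git image, and the mount paths are all hypothetical, and it assumes a public repo with no auth.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
    - name: clone-config
      image: alpine/git:latest        # hypothetical; any image with git works
      args:
        - clone
        - --depth=1
        - --branch=main
        - https://github.com/acme/config-repo.git
        - /config
      volumeMounts:
        - name: app-config
          mountPath: /config          # init container writes the clone here
  containers:
    - name: app
      image: acme/app:latest          # hypothetical main container
      volumeMounts:
        - name: app-config
          mountPath: /etc/app-config  # main container reads the same volume
          readOnly: true
  volumes:
    - name: app-config
      emptyDir: {}                    # replaces the removed gitRepo volume
```

Unlike gitRepo, the clone happens before the main container starts but is never refreshed; a pod restart is required to pick up new commits, which is exactly the semantic gitRepo had.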
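If you need periodic refresh rather than a one-shot clone, the git-sync sidecar variant could look roughly like this. Treat this as a sketch: the image tag, resource numbers, and flag spellings follow the git-sync v4 README as I understand it, and the synced checkout appears under a symlink inside --root, so verify the exact layout and flags against the project docs for your version.

```yaml
containers:
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v4.4.0   # pick a current tag
    args:
      - --repo=https://github.com/acme/config-repo.git
      - --ref=main
      - --root=/git
      - --period=60s                  # re-sync every minute
    resources:                        # the sidecar now counts toward pod accounting
      requests:
        memory: 32Mi
        cpu: 10m
      limits:
        memory: 64Mi
    volumeMounts:
      - name: app-config
        mountPath: /git
  - name: app
    image: acme/app:latest            # hypothetical main container
    volumeMounts:
      - name: app-config
        mountPath: /etc/app-config
        readOnly: true
volumes:
  - name: app-config
    emptyDir: {}
```

The explicit resources block is the point the article makes about accounting: the sidecar's requests and limits are added to the pod's, which is what pushes tightly-packed pods over the edge on small clusters.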
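The kubeVersion guard from the last action might look like this, using a hypothetical chart name. The trailing -0 in the constraint is the usual Helm semver trick so that pre-release-style server versions (e.g. managed-cluster builds like 1.35.x-gke.N) still match the range.

```yaml
# Chart.yaml for the last gitRepo-based release (legacy line)
apiVersion: v2
name: acme-app
version: 1.9.0                # final release that still emits a gitRepo volume
kubeVersion: "< 1.36.0-0"     # refuses to install on 1.36+

# Chart.yaml for the migrated release (major bump)
apiVersion: v2
name: acme-app
version: 2.0.0                # init-container / git-sync based pod spec
kubeVersion: ">= 1.22.0-0"
```

With the guard in place, helm install and helm upgrade --install fail fast with a kubeVersion mismatch instead of shipping a pod that will never leave ContainerCreating.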
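The audit step above can also be scripted without jq. Here is a small Python sketch that filters the JSON from kubectl get pods -A -o json for gitRepo volumes; the sample data below is hypothetical and only shaped like kubectl output.

```python
import json

def find_gitrepo_volumes(pod_list_json: str):
    """Return (namespace, pod, volume) triples for every gitRepo volume
    found in `kubectl get pods -A -o json` output."""
    hits = []
    for pod in json.loads(pod_list_json).get("items", []):
        meta = pod.get("metadata", {})
        for vol in pod.get("spec", {}).get("volumes", []) or []:
            if "gitRepo" in vol:  # the removed in-tree volume type
                hits.append((meta.get("namespace"), meta.get("name"), vol["name"]))
    return hits

# Hypothetical sample shaped like kubectl's pod list output:
sample = json.dumps({"items": [
    {"metadata": {"namespace": "default", "name": "legacy-app"},
     "spec": {"volumes": [{"name": "cfg",
                           "gitRepo": {"repository": "https://github.com/acme/config-repo.git"}}]}},
    {"metadata": {"namespace": "prod", "name": "clean-app"},
     "spec": {"volumes": [{"name": "data", "emptyDir": {}}]}},
]})

print(find_gitrepo_volumes(sample))  # -> [('default', 'legacy-app', 'cfg')]
```

In practice you would feed it `subprocess.run(["kubectl", "get", "pods", "-A", "-o", "json"], ...)` output per cluster; every triple it returns is a future FailedMount.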