```yaml
volumes:
  - name: app-config
    gitRepo:
      repository: "https://github.com/acme/config-repo.git"
      revision: "main"
```
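The usual replacement for this volume type is an init container that clones the repo into a shared emptyDir before the app starts. A minimal sketch, assuming a git-capable image and placeholder paths:

```yaml
volumes:
  - name: app-config
    emptyDir: {}
initContainers:
  - name: clone-config
    image: alpine/git:latest   # assumed image; any image with git works
    args: ["clone", "--branch", "main", "--single-branch",
           "https://github.com/acme/config-repo.git", "/config"]
    volumeMounts:
      - name: app-config
        mountPath: /config
containers:
  - name: app
    image: acme/app:latest     # hypothetical application image
    volumeMounts:
      - name: app-config
        mountPath: /etc/app-config
        readOnly: true
```

Like gitRepo, this clones once per pod start, so the "fresh config on restart" behavior carries over unchanged.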
```
Warning  FailedMount  kubelet  MountVolume.SetUp failed for volume "app-config":
gitRepo volume plugin is no longer supported
```

- Internal "config-as-code" sidecars. A pre-2020 pattern: a sidecar mounts a gitRepo volume to pull the latest config repo on pod restart. Most teams have replaced it with ConfigMaps or Vault, but legacy clusters and forgotten staging environments often still run it.
- Helm charts pinned to an old version. Charts published to Artifact Hub in the 2018-2020 era that haven't been re-published since. `helm install` against a pinned version still pulls the old manifest, and the chart's stated `kubeVersion` range often hasn't been narrowed to exclude 1.36.
- Custom operators that generate Pod specs. Operators written by platform teams that emit gitRepo volumes for config loading. The operator itself survives the upgrade, but the pods it generates do not, and the operator's own readiness probe is usually unaware that its child workloads aren't running.
- Grep your manifests, charts, and operator code for `gitRepo:`. If you're still on 1.35 or earlier, you have until your next upgrade to fix it. If you're already on 1.36 and `apply` hasn't broken yet, you're flying on workloads that haven't been restarted since the upgrade.
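The grep step can be rehearsed against a throwaway tree before pointing it at your real repos; a sketch using a hypothetical path and a sample manifest:

```shell
# build a demo tree containing one manifest that still uses gitRepo
mkdir -p /tmp/audit-demo/manifests
cat > /tmp/audit-demo/manifests/pod.yaml <<'EOF'
volumes:
  - name: app-config
    gitRepo:
      repository: "https://github.com/acme/config-repo.git"
EOF

# the actual audit: recurse over manifests, charts, operator code
# any hit is a pod that will FailedMount after the upgrade
grep -rn "gitRepo:" /tmp/audit-demo
```

In practice you'd run the same `grep -rn "gitRepo:"` over your manifest, chart, and operator source directories.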
- Audit your operator-generated Pod specs. Run `kubectl get pods -A -o json | jq '.items[].spec.volumes[]? | select(.gitRepo)'` against your clusters. Anything that comes back is a future FailedMount.
- Pin chart versions explicitly with `kubeVersion` guards. When you migrate, narrow the legacy chart's `kubeVersion` to `< 1.36` and bump the major version for the migrated chart. This stops `helm upgrade --install` from silently rolling forward into a broken combination.
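The guard lives in the legacy chart's Chart.yaml; a sketch with placeholder name and version numbers:

```yaml
apiVersion: v2
name: app-config            # placeholder chart name
version: 1.9.0              # last release that still emits gitRepo volumes
kubeVersion: "< 1.36"       # refuse to install where the mount would fail
```

The migrated chart then ships as `version: 2.0.0` without the constraint, so the major bump alone signals the breaking change to anyone pinning versions.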