Tools: Ultimate Guide: SwiftDeploy: Building a Manifest-Driven Stack

Most deployment setups do not fail because the tools are weak. They fail because the process drifts: one person edits nginx.conf, another tweaks Compose, and soon nobody knows what the real deployment contract is. SwiftDeploy solves that with a simple principle: declare intent once in manifest.yaml, and make everything else derived and verifiable.

It started as a config-generation CLI, but it became more useful when policy and observability were added. The interesting part is not that it creates files; it is that it can block unsafe actions when runtime signals say "don't proceed."

Link: GitHub Repository

The core idea in one view

- Clients hit Nginx.
- Nginx proxies to the Go app on the internal network.
- The swiftdeploy CLI generates configs and asks OPA for policy decisions on loopback (127.0.0.1), not through public ingress.

That separation is intentional. User traffic and control decisions use different paths.

Real manifest excerpt:

```yaml
services:
  image: swiftdeploy-api-go:latest
  port: 3000
  mode: stable
  app_version: "1.0.0"
nginx:
  image: nginxinc/nginx-unprivileged:stable-alpine
  port: 8080
  proxy_timeout: 30s
opa:
  image: openpolicyagent/opa:latest
  port: 8181
policy:
  data_file: policy-data/thresholds.json
  decision_timeout_seconds: 5
```

From this, ./swiftdeploy init renders:

- template-output/nginx.conf
- template-output/docker-compose.yml

What makes this practical

- validate confirms preflight health,
- deploy starts the policy sidecar plus the stack,
- promote switches stable/canary mode safely,
- status and audit leave an evidence trail.

Why this is more than a wrapper script

The CLI enforces two decision points:

- Pre-deploy gate: host conditions (a disk/cpu/memory snapshot) are checked against policy before full startup proceeds.
- Pre-promote gate (canary -> stable): windowed metrics from /metrics are evaluated before promotion completes.

So this is not only "template rendering." It is decision-aware deployment.

What I learned while building it

- Policy paths are easy to get subtly wrong. POST /v1/data/policy/infrastructure/decision is valid; POST /v1/data/infrastructure/decision often returns {} and can waste debugging time.
- Ingress isolation should be proven, not assumed. Hitting OPA-shaped URLs on Nginx should return an app-level 404 or other non-OPA response, while loopback OPA returns policy JSON.
- Canary tests need traffic. Windowed error-rate checks can look "healthy" if there is no meaningful request volume during the window.
- Generated files are outputs, not source files. If you edit the generated Compose/Nginx files directly, init will overwrite them.

Outcome

With this setup, deployment becomes:

- declarative (manifest.yaml as the source of truth),
- reproducible (templates + generated configs),
- observable (/metrics, status),
- enforceable (OPA gates),
- auditable (history.jsonl -> audit_report.md).

That is the difference between "it runs on my machine" and "I can prove this rollout is safe."

Verification Appendix (commands + expected checks)

Run from swiftdeploy-project/.

A) Path flow

```bash
./swiftdeploy build
./swiftdeploy init
./swiftdeploy validate
./swiftdeploy deploy
./swiftdeploy promote canary
./swiftdeploy promote stable
./swiftdeploy status 5
./swiftdeploy audit
./swiftdeploy teardown --clean
```

Expected checks:

- validate reports all checks PASS.
- deploy ends with the stack healthy.
- promote canary enables X-Mode: canary on /healthz.
- promote stable removes X-Mode after a successful gate.
- status appends records to history.jsonl.
- audit generates audit_report.md.

SCREENSHOT_VALIDATE_PASS
SCREENSHOT_DEPLOY_HEALTHY
SCREENSHOT_PROMOTE_CANARY_AND_HEALTHZ
SCREENSHOT_PROMOTE_STABLE_AND_HEALTHZ
SCREENSHOT_STATUS_OUTPUT
SCREENSHOT_AUDIT_REPORT
SCREENSHOT_TEARDOWN_CLEAN

B) Confirm metrics and policy endpoints

Check that the required Prometheus metric families are exposed through Nginx ingress:

```bash
curl -sS "http://127.0.0.1:8080/metrics" \
  | grep -E "http_requests_total|http_request_duration_seconds|app_uptime_seconds|app_mode|chaos_active" \
  | head
```

Check the infrastructure policy decision endpoint (OPA loopback):

```bash
curl -sS -X POST "http://127.0.0.1:8181/v1/data/policy/infrastructure/decision" \
  -H "Content-Type: application/json" \
  -d '{"input":{"context":"pre-deploy","disk_free_gb":50,"cpu_load":0.3}}'
```

Check the canary policy decision endpoint (OPA loopback):

```bash
curl -sS -X POST "http://127.0.0.1:8181/v1/data/policy/canary/decision" \
  -H "Content-Type: application/json" \
  -d '{"input":{"context":"pre-promote","window_seconds":30,"error_rate":0.001,"p99_latency_ms":100}}'
```

Expected checks:

- Metric families visible.
- OPA responses contain a result object and allow.

SCREENSHOT_METRICS_OUTPUT
SCREENSHOT_OPA_INFRA_ALLOW
SCREENSHOT_OPA_CANARY_ALLOW

C) Confirm OPA is not exposed through ingress

```bash
curl -sS -o /dev/null -w "%{http_code}\n" "http://127.0.0.1:8080/v1/data/policy/infrastructure/decision"
```

Expected check:

- 404 or a non-OPA response via the Nginx path.

SCREENSHOT_INGRESS_ISOLATION_CHECK

D) Test deny and recovery scenarios

Force a deploy deny by tightening the thresholds in policy-data/thresholds.json, then:

```bash
./swiftdeploy teardown
./swiftdeploy deploy
```

Force a promote deny by injecting errors during the canary window:

```bash
./swiftdeploy promote canary
curl -sS -X POST "http://127.0.0.1:8080/chaos" \
  -H "Content-Type: application/json" \
  -d '{"mode":"error","rate":1.0}'
./swiftdeploy promote stable
```

Recover, then promote again:

```bash
curl -sS -X POST "http://127.0.0.1:8080/chaos" \
  -H "Content-Type: application/json" \
  -d '{"mode":"recover"}'
./swiftdeploy promote stable
```

SCREENSHOT_DEPLOY_DENIED_BY_POLICY
SCREENSHOT_PROMOTE_DENIED_BY_POLICY
SCREENSHOT_PROMOTE_SUCCESS_AFTER_RECOVER
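Two of the ideas above can be sketched in a few lines: the pre-promote gate's windowed error-rate check (and why it must refuse to decide on a quiet window), and the OPA Data API path rule that makes /v1/data/policy/canary/decision valid while /v1/data/canary/decision returns {}. This is an illustrative Python sketch, not SwiftDeploy's actual code; the function names and the min_volume threshold are assumptions, while the URL shape and input fields mirror the curl examples in the appendix.

```python
# Illustrative sketch of a windowed canary gate (not SwiftDeploy's real code).

def windowed_error_rate(total_start, total_end, errors_start, errors_end, min_volume=50):
    """Error rate over a window of two counter samples.

    Returns None when request volume is too low to trust the window --
    a zero-traffic window would otherwise report a "healthy" 0.0 rate.
    """
    volume = total_end - total_start
    if volume < min_volume:
        return None  # not enough traffic; the gate should hold, not pass
    return (errors_end - errors_start) / volume

def opa_decision_url(base, package_path):
    """OPA Data API path: /v1/data/<package path, dots as slashes>/decision.

    A policy in package policy.canary lives under /v1/data/policy/canary/...;
    dropping the leading "policy" segment silently queries an empty document.
    """
    return f"{base}/v1/data/{package_path.replace('.', '/')}/decision"

# Quiet window: only 10 requests, so no decision is made either way.
assert windowed_error_rate(100, 110, 5, 5) is None
# Busy window: 200 requests, 2 new errors -> 1% error rate.
assert windowed_error_rate(1000, 1200, 10, 12) == 0.01
assert opa_decision_url("http://127.0.0.1:8181", "policy.canary") == \
    "http://127.0.0.1:8181/v1/data/policy/canary/decision"
```

The None return is the design point: a gate that cannot measure should block promotion rather than pass it, which is exactly why the deny/recovery tests in section D inject chaos traffic before checking the gate.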