Breaking: I Built an Open-Source Visual Kubernetes Orchestration Platform — No YAML Required

If you've ever stared at a 400-line Kubernetes YAML file at 2am trying to figure out why your service can't reach its database, this post is for you.

As a founding engineer, I kept running into the same problem: Kubernetes is incredibly powerful, but it's also brutally complex to get right. The learning curve is steep, the feedback loop is slow, and one wrong indent breaks everything.

So I built KubeOrch — an open-source visual orchestration platform that lets you design, connect, and deploy Kubernetes workloads through a drag-and-drop interface. No YAML. No guessing. Just draw your architecture and hit deploy.

Here's what I built, how it works under the hood, and why I open-sourced the whole thing.
The Problem With Kubernetes Today

Kubernetes has won the container orchestration wars. It's the de facto standard. But the developer experience hasn't caught up with its adoption.

Consider what it takes to deploy a simple web app with a PostgreSQL database and a Redis cache on Kubernetes:

- A Deployment manifest for your app
- A Service to expose it
- A Deployment + PersistentVolumeClaim for Postgres
- A Service for Postgres
- A Secret for credentials
- A Deployment for Redis
- A Service for Redis
- A ConfigMap for environment variables
- An Ingress with TLS config
- NetworkPolicies if you care about security

That's 9+ YAML files, hundreds of lines, and dozens of ways to silently misconfigure something. And this is the simple case.

The tools that exist today — Helm, Kustomize, Lens — either abstract the YAML (but you still write it) or visualize existing clusters (but you still write it first). No one has tackled the core issue: the mental model of a distributed system is visual, but the tooling forces you to express it as text.

What KubeOrch Does

KubeOrch flips the workflow. Instead of writing manifests and hoping they wire up correctly, you:

- Drag services onto a canvas (Postgres, Redis, Kafka, your app — 150+ components)
- Connect them by drawing lines between ports
- Deploy — KubeOrch generates the manifests, resolves dependencies, and applies them to your cluster

The platform has four main components.
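To make the drag-connect-deploy workflow concrete, here is a minimal sketch in Go of the kind of JSON graph a canvas design could serialize to before being turned into manifests. The struct names and fields are my own assumption for illustration; the real schema lives in KubeOrch/core.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical shapes for illustration only; the real schema is defined in KubeOrch/core.
type Node struct {
	Name  string `json:"name"`
	Image string `json:"image"`
}

type Edge struct {
	Source string `json:"source"`
	Target string `json:"target"`
	Port   int    `json:"port"`
}

type DesignGraph struct {
	Nodes []Node `json:"nodes"`
	Edges []Edge `json:"edges"`
}

// buildGraph assembles a two-node design: a web app wired to Postgres on 5432.
func buildGraph() DesignGraph {
	return DesignGraph{
		Nodes: []Node{
			{Name: "web", Image: "ghcr.io/example/web:latest"},
			{Name: "postgres", Image: "postgres:16"},
		},
		Edges: []Edge{{Source: "web", Target: "postgres", Port: 5432}},
	}
}

func main() {
	out, err := json.MarshalIndent(buildGraph(), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

The point of storing the design as a graph rather than as manifests is that edges carry intent ("web depends on postgres:5432"), which is what makes automatic wiring possible at deploy time.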

1. KubeOrch Core (Go)

The brains of the operation. A Go API server built on Gin that handles:

- JSON-to-YAML transformation — your visual design is stored as a JSON graph internally; Core converts it to production-ready Kubernetes manifests at deploy time
- Automatic connection resolution — when you draw a line from your app to Postgres, Core figures out the right DATABASE_URL env var, the right service DNS name, the right port — without you specifying any of it
- Nixpacks integration — point Core at a GitHub repo and it builds a container automatically, no Dockerfile needed
- Service mesh support — Istio, ingress controllers, and load balancers are first-class citizens
- Real-time streaming — WebSocket-based log and metrics streaming from all running containers

```go
// Example: Core's auto-wiring picks up connection intent and resolves it
type Connection struct {
	SourceService string `json:"source"`
	TargetService string `json:"target"`
	SourcePort    int    `json:"sourcePort"`
	TargetPort    int    `json:"targetPort"`
}

// Core resolves this into env vars, DNS entries, and NetworkPolicies automatically
```
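Automatic connection resolution can be pictured with a small sketch: given a source service, a target service, and a port, derive an env var name and the standard in-cluster DNS address. The `resolve` helper below is my own hypothetical illustration, not Core's actual code (Core's real resolution is template-driven, as described later).

```go
package main

import (
	"fmt"
	"strings"
)

// Connection mirrors the intent captured when two services are linked on the canvas.
type Connection struct {
	SourceService string `json:"source"`
	TargetService string `json:"target"`
	SourcePort    int    `json:"sourcePort"`
	TargetPort    int    `json:"targetPort"`
}

// resolve is a hypothetical helper: derive the env var the source service reads
// and the in-cluster address it should dial. Kubernetes gives every Service a
// predictable DNS name: <service>.<namespace>.svc.cluster.local.
func resolve(c Connection, namespace string) (envVar, addr string) {
	envVar = strings.ToUpper(c.TargetService) + "_ADDR"
	addr = fmt.Sprintf("%s.%s.svc.cluster.local:%d", c.TargetService, namespace, c.TargetPort)
	return envVar, addr
}

func main() {
	c := Connection{SourceService: "web", TargetService: "postgres", SourcePort: 8080, TargetPort: 5432}
	envVar, addr := resolve(c, "default")
	fmt.Printf("%s=%s\n", envVar, addr)
	// prints: POSTGRES_ADDR=postgres.default.svc.cluster.local:5432
}
```

Because Service DNS names are deterministic, everything the dependent service needs can be computed from the edge alone; the user never types a hostname.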

2. KubeOrch UI (Next.js + TypeScript)

The visual canvas, built with:

- React Flow for the drag-and-drop workflow designer
- Next.js 15 with TypeScript
- shadcn/ui + Tailwind CSS v4 for the component library
- Zustand for state management
- WebSocket for real-time log streaming

The UI is intentionally opinionated. Services snap together intelligently — when you try to connect a Node.js app to PostgreSQL, the UI already knows what that connection means and pre-fills the configuration.

3. OrchCLI (Go)

A CLI that handles the local dev loop:

```shell
# Initialize a KubeOrch project
orchcli init

# Start all services (supports hot reload)
orchcli start

# Fork and contribute to core or UI
orchcli init --fork-core
orchcli init --fork-ui
```

It supports concurrent operations with file locking to prevent config corruption, auto-detects your dev mode based on which repos you've cloned, and handles hot reload across all services.

Install it in one line:

```shell
curl -sfL https://raw.githubusercontent.com/KubeOrch/cli/main/install.sh | sh
```

Or via npm:

```shell
npm install -g @kubeorch/cli
```

4. Docs (Astro)

Full documentation site covering architecture, getting started, CLI reference, and API reference — built with Astro for fast static generation.

The Architecture Decision I'm Most Proud Of

The hardest problem in building KubeOrch wasn't the UI or even the Kubernetes API integration — it was automatic service wiring.

When two services are connected in the visual canvas, the platform needs to figure out:

- What environment variable should carry the connection string?
- What DNS name should the dependent service use?
- What port should be exposed?
- Does this connection need a NetworkPolicy?
- Does it need a Secret, or is the connection string safe to put in a ConfigMap?

The naive solution is to ask the user. But that defeats the whole point.

The solution I landed on is a service template system with typed ports. Every component in the library (Postgres, Redis, Kafka, etc.) is defined with its ports annotated with type metadata:

```json
{
  "name": "postgresql",
  "ports": [
    {
      "port": 5432,
      "type": "postgres",
      "envVarTemplate": "{{TARGET_NAME}}_DATABASE_URL",
      "valueTemplate": "postgresql://{{USER}}:{{PASSWORD}}@{{SERVICE_DNS}}:5432/{{DB_NAME}}"
    }
  ]
}
```

When you draw a connection, Core matches port types, renders the templates with resolved values, and injects the result as environment variables into the dependent service — with a Secret for anything sensitive. 150+ services are defined this way, covering databases, queues, ML platforms, monitoring stacks, and more.
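Rendering a typed-port template boils down to substituting `{{KEY}}` placeholders with resolved values. Here is a minimal Go sketch of that step; `renderTemplate` and the example values are my own stand-ins for Core's actual rendering logic.

```go
package main

import (
	"fmt"
	"strings"
)

// renderTemplate fills {{KEY}} placeholders from a map of resolved values.
// Hypothetical stand-in for Core's template rendering, shown only to make
// the typed-port idea concrete.
func renderTemplate(tmpl string, values map[string]string) string {
	pairs := make([]string, 0, len(values)*2)
	for k, v := range values {
		pairs = append(pairs, "{{"+k+"}}", v)
	}
	return strings.NewReplacer(pairs...).Replace(tmpl)
}

func main() {
	// Values the control plane would resolve when wiring an app named "web" to postgres.
	values := map[string]string{
		"TARGET_NAME": "WEB",
		"USER":        "app",
		"PASSWORD":    "s3cret",
		"SERVICE_DNS": "postgres.default.svc.cluster.local",
		"DB_NAME":     "appdb",
	}
	envVar := renderTemplate("{{TARGET_NAME}}_DATABASE_URL", values)
	value := renderTemplate("postgresql://{{USER}}:{{PASSWORD}}@{{SERVICE_DNS}}:5432/{{DB_NAME}}", values)
	fmt.Printf("%s=%s\n", envVar, value)
	// prints: WEB_DATABASE_URL=postgresql://app:s3cret@postgres.default.svc.cluster.local:5432/appdb
}
```

Because the template lives with the component definition rather than in user config, adding a new service type to the library automatically teaches the whole platform how to wire it.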

Why I Open-Sourced It

I could have built this as a SaaS. I thought about it. But Kubernetes tooling lives and dies by community trust. Operators don't want their cluster credentials going through a third-party API. They want to run the control plane themselves, audit the code, and contribute fixes.

More importantly — the problems KubeOrch solves are universal. Every team fighting with YAML is fighting the same fight. An open-source project that solves this well becomes infrastructure for the entire ecosystem.

KubeOrch is Apache 2.0 licensed and structured as a CNCF-aspiring project with full governance documentation:

- Contributor ladder (from contributor → member → maintainer)
- Governance policy
- API stability policy
- RFC/proposal process in the community repo

The goal is to eventually donate this to the CNCF sandbox. The groundwork is already laid.

Getting Started

Try it locally

```shell
# Install the CLI
curl -sfL https://raw.githubusercontent.com/KubeOrch/cli/main/install.sh | sh

# Initialize a new project
orchcli init

# Start everything
orchcli start
```

Open http://localhost:3001 to see the visual canvas. Core starts at http://localhost:3000.

Run Core directly

```shell
git clone https://github.com/KubeOrch/core.git
cd core
go mod tidy
go run main.go
```

Explore the repos

- KubeOrch/core — Go backend, orchestration engine
- KubeOrch/ui — Next.js visual canvas
- KubeOrch/cli — OrchCLI developer tool
- KubeOrch/community — Governance, roadmap, RFCs
- KubeOrch/docs — Full documentation

What's Next

The roadmap has three near-term priorities:

- GitOps integration — sync your visual design to a Git repo and trigger deploys on push
- Multi-cluster support — manage workloads across multiple clusters from one canvas
- Plugin SDK — let the community build and publish custom components to the marketplace

If any of these problems interest you, the contributor guide is in the community repo and issues are open.

Closing Thoughts

Kubernetes isn't going anywhere. But the developer experience has a long way to go before it matches the power of the underlying platform.

KubeOrch is my attempt to close that gap — to make the visual mental model of distributed systems the primary interface, not a second-class visualization layer bolted on top of YAML.

If you've felt the pain of Kubernetes configuration, give it a try. And if you want to help build it, the doors are open.

GitHub: github.com/KubeOrch

Follow me on X/Twitter for more.
