# 🐳 I Built a Container Dashboard for Your AI Coding Agent — And It's Awesome

If you're like me, you live in your terminal. You've got Docker containers running for databases, Redis instances for caching, microservices doing their thing — and you're constantly context-switching to check on them.

## The Problem

Now imagine you're working with an AI coding agent — an LLM that can read files, write code, and run commands for you. Every time you need to check a container, you either:

- Break your flow by typing out commands manually
- Trust the AI to blindly run `docker rm -f` without confirmation (yikes!)
- Squint at raw JSON output from `docker inspect`

```bash
# The old way:
docker ps
docker logs my-app -n 50
docker stats
docker inspect some_container
# ... back and forth, breaking your flow
```

I wanted something better. So I built it.

## 🐳 Enter: Container Dashboard

Container Dashboard is a pi coding agent extension that brings full container lifecycle management into your AI agent. It's like having Docker Desktop — but inside your LLM-powered terminal, with safety guarantees baked in.

It works with Docker, Podman, and Nerdctl — all three major container runtimes.

## ✨ The Coolest Features

### 📊 Live TUI Widget

Your AI coding agent's sidebar shows a live container count at all times. You always know what's running without asking:

```
🐳 Docker v24.0.7 | ▶ 3 running | ● 8 total
```
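For a sense of how little code that line needs, here's a minimal sketch of a formatter that could produce it. The `WidgetState` shape and `formatStatusLine` name are illustrative, not the extension's actual API:

```typescript
// Illustrative only: the real widget.ts may structure this differently.
interface WidgetState {
  runtime: string; // e.g. "Docker"
  version: string; // e.g. "v24.0.7"
  running: number;
  total: number;
}

function formatStatusLine({ runtime, version, running, total }: WidgetState): string {
  return `🐳 ${runtime} ${version} | ▶ ${running} running | ● ${total} total`;
}

// formatStatusLine({ runtime: "Docker", version: "v24.0.7", running: 3, total: 8 })
// => "🐳 Docker v24.0.7 | ▶ 3 running | ● 8 total"
```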

### 🎯 14 Slash Commands

Type `/docker:ps` and instantly get a formatted table. `/docker:stats` shows CPU & memory. `/docker:logs my-app -n 100` tails logs.
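As a rough sketch of how one of these commands could be wired up: `registerCommand` and `renderContainerTable` below are hypothetical stand-ins, not the real pi SDK API; only `pi.exec()` appears in the actual source shown later in this post.

```typescript
// Hypothetical sketch; registerCommand and renderContainerTable are stand-ins.
declare function renderContainerTable(rows: unknown[]): string;

export function registerDockerPs(pi: ExtensionAPI, runtime: string) {
  pi.registerCommand("docker:ps", async () => {
    // `--format '{{json .}}'` prints one JSON object per line, one per container
    const result = await pi.exec(runtime, ["ps", "-a", "--format", "{{json .}}"]);
    const rows = result.stdout.trim().split("\n").filter(Boolean);
    return renderContainerTable(rows.map((line) => JSON.parse(line)));
  });
}
```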

### 🤖 13 LLM Tools

Your AI agent can proactively manage containers using tools like `container_ps`, `container_stats`, `container_logs`, and `container_prune_system`. It can check what's running, diagnose issues, and clean up — all autonomously, but safely.
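The tool parameters are validated with TypeBox. As an example of what one schema might look like (the exact fields here are my guess based on the `/docker:logs my-app -n 100` example, not pulled from the source):

```typescript
import { Type, type Static } from "@sinclair/typebox";

// Guessed fields for illustration; the real tools.ts schemas may differ.
const ContainerLogsParams = Type.Object({
  name: Type.String({ description: "Container name or ID" }),
  lines: Type.Optional(Type.Integer({ description: "How many log lines to tail", default: 50 })),
});

type ContainerLogsParamsT = Static<typeof ContainerLogsParams>;
// => { name: string; lines?: number }
```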

### 🛡️ Safety First — Built-in Confirmations

This is my favorite part. Dangerous commands get intercepted by a pattern list:

```typescript
const dangerousPatterns = [
  /(?:docker|podman|nerdctl)\s+(?:rm|container\s+rm)\s+-f/i,
  /(?:docker|podman|nerdctl)\s+system\s+prune\s+-a/i,
  /(?:docker|podman|nerdctl)\s+stop\s+\$\(docker\s+ps\s+-aq\)/i,
  // ...
];
```

If the AI tries to:

- Force-remove a running container
- System prune everything
- Stop ALL containers at once

...it hits a confirmation dialog. The AI literally asks "Are you sure?" before pulling the trigger. 🎯

No more accidental `docker system prune -a` wiping your CI cache while the AI was "just trying to help."
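A minimal sketch of how that pattern list could gate execution, assuming some confirmation dialog is available from the host (`requestConfirmation` is a placeholder, not a real pi API):

```typescript
// Placeholder for whatever confirmation dialog the agent framework exposes.
declare function requestConfirmation(message: string): Promise<boolean>;

async function allowCommand(command: string): Promise<boolean> {
  // Only commands matching a dangerous pattern need human sign-off.
  if (dangerousPatterns.some((pattern) => pattern.test(command))) {
    return requestConfirmation(`⚠️ About to run: ${command}. Are you sure?`);
  }
  return true;
}
```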

## 🧱 How It Works Under the Hood

The architecture is surprisingly clean — 5 TypeScript files, ~800 lines total:

```
container-dashboard/
├── index.ts      # Entry point, permission gates, lifecycle hooks
├── runtime.ts    # Runtime detection (docker → podman → nerdctl), CLI abstraction
├── commands.ts   # /docker:* slash commands with formatted output
├── tools.ts      # 13 LLM tools registered via TypeBox schemas
└── widget.ts     # Live TUI sidebar widget
```

### Runtime Detection: Auto-Discovery

The extension auto-detects which container runtime you have installed by checking `docker`, then `podman`, then `nerdctl`, in priority order. It also grabs the version string, so you see "Docker v24.0.7" instead of just "Docker."

```typescript
const RUNTIMES = ["docker", "podman", "nerdctl"] as const;

export async function detectRuntime(pi: ExtensionAPI): Promise<RuntimeState> {
  for (const runtime of RUNTIMES) {
    try {
      const result = await pi.exec(runtime, ["--version"], { timeout: 3000 });
      if (result.code === 0 && result.stdout) {
        return { runtime, version: result.stdout.trim(), available: true };
      }
    } catch {
      continue;
    }
  }
  return { runtime: null, version: "", available: false };
}
```
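Usage is then a one-time call at startup, with everything else reading the cached result (a sketch; the actual wiring in index.ts may differ):

```typescript
// Sketch: detect once at extension startup and reuse the result everywhere.
// `pi` is the ExtensionAPI instance handed to the extension by the host.
const state = await detectRuntime(pi);
if (!state.available) {
  console.warn("No container runtime found. Install docker, podman, or nerdctl.");
}
// e.g. { runtime: "docker", version: "Docker version 24.0.7, build afdd53b", available: true }
```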

### Cross-Runtime Compatibility

Every function — `listContainers`, `getContainerLogs`, `pruneSystem`, `getContainerStats` — works identically across Docker, Podman, and Nerdctl, because they all share the same CLI interface for basic operations. The extension parses JSON output from `docker ps --format '{{json .}}'`, normalizes status fields, and handles the slight differences between Docker's and Podman's JSON schemas.
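To make that normalization concrete, here's a minimal sketch of handling one output line. The Docker field names (`ID`, `Names`, `Image`, `State`, `Status`) match the `{{json .}}` template output; the fallbacks are my assumption about where Podman's schema diverges:

```typescript
interface ContainerRow {
  id: string;
  name: string;
  image: string;
  state: "running" | "exited" | "other";
}

// Sketch: normalize one line of `docker ps --format '{{json .}}'` output.
function normalizeRow(line: string): ContainerRow {
  const raw = JSON.parse(line);
  const status: string = raw.State ?? raw.Status ?? "";
  return {
    id: String(raw.ID ?? raw.Id ?? "").slice(0, 12),
    name: raw.Names ?? raw.Name ?? "",
    image: raw.Image ?? "",
    // Collapse the runtimes' differing status strings into three buckets
    state: /running|up/i.test(status)
      ? "running"
      : /exited/i.test(status)
        ? "exited"
        : "other",
  };
}
```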

### Beautiful Terminal Tables

No more raw JSON. The commands render colorized, formatted tables with proper padding, truncation, and status colors:

```
Containers
CONTAINER ID   NAME                 IMAGE            STATUS      PORTS
a1b2c3d4e5f6   my-postgres          postgres:16      ▶ running   5432→5432
b2c3d4e5f6a7   redis-cache          redis:7-alpine   ▶ running   6379→6379
c3d4e5f6a7b8   old-test-container   node:18          ● exited    —
```
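The core trick is just a pad-or-truncate helper applied per cell; a sketch (the helper name is illustrative):

```typescript
// Sketch: fit text into a fixed-width column, truncating with an ellipsis.
function cell(text: string, width: number): string {
  return text.length > width
    ? text.slice(0, width - 1) + "…"
    : text.padEnd(width);
}

// cell("old-test-container", 14) => "old-test-cont…"
// cell("my-postgres", 14)        => "my-postgres   "
```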

### Smart Inspect Parsing

`/docker:inspect` takes the raw JSON dump and extracts the useful bits — ports, environment variables, mounted volumes, IP address, command — and displays them as a clean summary instead of a JSON firehose.
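The field paths involved are standard `docker inspect` output; here's a sketch of the extraction (the summary shape is mine, not the extension's):

```typescript
// Sketch: pluck the useful bits out of `docker inspect <container>` JSON.
function summarizeInspect(json: string) {
  const [info] = JSON.parse(json); // inspect returns a one-element array
  return {
    ip: info?.NetworkSettings?.IPAddress ?? "",
    ports: Object.keys(info?.NetworkSettings?.Ports ?? {}), // e.g. ["5432/tcp"]
    env: info?.Config?.Env ?? [],
    mounts: (info?.Mounts ?? []).map(
      (m: { Source: string; Destination: string }) => `${m.Source} -> ${m.Destination}`,
    ),
    command: (info?.Config?.Cmd ?? []).join(" "),
  };
}
```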

## 📦 Installation (60 seconds)

```bash
# From npm (recommended)
pi install npm:container-dashboard

# Or from GitHub
pi install git:github.com/k1lgor/pi-container-dashboard

# Or load locally
pi -e ./path/to/index.ts
```

That's it. The extension auto-detects your container runtime at startup and starts tracking containers immediately.

## 🚀 Real-World Workflow

Here's what a typical session looks like:

```
You: "What containers are running?"
🤖 AI: *calls container_ps*
  ▶ my-postgres (running)
  ▶ redis-cache (running)
  ▶ api-gateway (running)

You: "Check the api-gateway logs, something's wrong"
🤖 AI: *calls container_logs("api-gateway", 100)*
  📋 Logs for api-gateway:
  Error: Connection refused to postgres:5432 at ...

You: "Restart it"
🤖 AI: *calls container_restart("api-gateway")*
  🔄 Restarted api-gateway

You: "Clean up old containers, but save the images"
🤖 AI: *calls container_prune*
  🗑️ Pruned 3 stopped containers. Freed: 1.2GB
```

No context switching. No leaving your AI agent. No accidentally running dangerous commands.

## 💡 Why I Built This

I've been using AI coding agents for months, and the biggest friction point was always permission boundaries. I wanted my AI to be useful — to actually manage infrastructure, not just generate code. But giving an LLM raw access to `docker` commands is terrifying.

This extension solves that tension:

- **The AI gets agency** — it can check logs, restart services, clean up disk space
- **You get safety** — every destructive action requires confirmation
- **Everyone gets pretty output** — formatted tables instead of JSON vomit

It's a pattern I think we'll see more of: AI agents with guardrails, not blacklists. Give them a sandbox, define safe patterns, and let them do real work.

## 🛠️ Tech Stack

- **TypeScript 5** — fully typed, strict mode
- **pi coding agent SDK** — Extension API hooks
- **TypeBox** — runtime type validation for LLM tool parameters
- **Zero external dependencies** for the runtime logic — pure `pi.exec()` calls

## 📊 Stats & Facts

- 14 slash commands
- 13 LLM tools
- 3 container runtimes supported (Docker, Podman, Nerdctl)
- 5 TypeScript files, ~800 lines total
- MIT licensed

## 🔗 Get Started

- **GitHub:** https://github.com/k1lgor/pi-container-dashboard
- **npm:** `pi install npm:container-dashboard`
- **License:** MIT — go build something cool with it

## 💬 What Do You Think?

I'm excited about this pattern of guardrailed AI infrastructure management. Have you tried giving your AI coding agent access to Docker or other infrastructure tools? How do you handle the safety vs. agency tradeoff?

Drop a comment below — I'd love to hear your thoughts!
*Built with ❤️ and 🤖 by @k1lgor*
