```yaml
apiVersion: agentroll.dev/v1alpha1
kind: AgentDeployment
metadata:
  name: customer-support-agent
spec:
  container:
    image: myregistry/support-agent:v2.1.0
  agentMeta:
    promptVersion: "abc123"
    modelVersion: "claude-sonnet-4-20250514"
    toolDependencies:
      - name: crm-mcp-server
        version: ">=1.2.0"
  rollout:
    strategy: canary
    steps:
      - setWeight: 5
        pause: { duration: "5m" }
        analysis: { templateRef: agent-quality-check }
      - setWeight: 20
        pause: { duration: "10m" }
        analysis: { templateRef: agent-quality-check }
      - setWeight: 100
```
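The `agentMeta` block above combines three version dimensions (image, prompt, model). To illustrate why all three belong in the deployed version, here is a minimal sketch of deriving a single composite version label from them — `composite_version` is a hypothetical helper for illustration, not part of the operator:

```python
import hashlib

def composite_version(image: str, prompt_version: str, model_version: str) -> str:
    """Derive one label value from the three version dimensions.

    Any change to prompt, model, or image yields a new composite
    version, so a prompt-only change still triggers a fresh canary.
    """
    raw = f"{image}|{prompt_version}|{model_version}".encode()
    return hashlib.sha256(raw).hexdigest()[:12]  # short, label-safe digest

v1 = composite_version("myregistry/support-agent:v2.1.0", "abc123", "claude-sonnet-4-20250514")
v2 = composite_version("myregistry/support-agent:v2.1.0", "def456", "claude-sonnet-4-20250514")
print(v1 != v2)  # a prompt-only change produces a different composite version
```

The key property: the same image tag with a new prompt hash is a new release, which is exactly what image-tag-only versioning misses.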
An agent's deployed version is a composite of more than a container image:

- Prompt / system context — the instructions that define the agent's personality and rules
- Model version — the LLM being called (GPT-4o vs Claude Sonnet vs a fine-tuned variant)
- Tool configurations — which external tools the agent can call and their API versions
- Memory / state — accumulated conversation history and learned patterns

Given this spec, the operator:

- Creates an Argo Rollout (not a raw Deployment) with properly translated canary steps
- Manages AnalysisTemplates with agent-specific quality checks — or lets you bring your own
- Tracks the composite version (prompt + model + image tag) as labels on every pod
- Provides opinionated defaults while allowing full customization at every layer

Planned next:

- Real evaluation metrics: replace placeholder analysis with Langfuse/Prometheus integration for actual hallucination rate, tool success rate, and cost-per-task tracking
- Cost-aware scaling: KEDA-based autoscaling using queue depth instead of CPU (agents are I/O bound, not CPU bound)
- MCP tool lifecycle: Manage MCP tool server versions alongside agent versions
- Multi-agent coordination: Coordinated canary deployments across dependent agent networks
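To make the "real evaluation metrics" item concrete, the `agent-quality-check` analysis referenced in the rollout steps could gate promotion on exactly the metrics listed above. A minimal sketch of such a gate — the metric names and thresholds here are illustrative assumptions, not the operator's actual AnalysisTemplate:

```python
def quality_gate(metrics: dict,
                 max_hallucination_rate: float = 0.02,
                 min_tool_success_rate: float = 0.95,
                 max_cost_per_task_usd: float = 0.50) -> bool:
    """Return True if the canary may advance to the next setWeight step.

    All three checks must pass; any single regression holds the rollout
    at its current traffic weight.
    """
    return (
        metrics["hallucination_rate"] <= max_hallucination_rate
        and metrics["tool_success_rate"] >= min_tool_success_rate
        and metrics["cost_per_task_usd"] <= max_cost_per_task_usd
    )

# A canary whose tool calls fail too often is held back,
# even though hallucination rate and cost look fine.
print(quality_gate({"hallucination_rate": 0.01,
                    "tool_success_rate": 0.90,
                    "cost_per_task_usd": 0.30}))  # False
```

In a real setup these values would come from a Prometheus or Langfuse query executed by the AnalysisRun at each pause, rather than being passed in directly.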