# Experience Working with OpenClaw (Clawbot)

I want to share my experience as a developer working with OpenClaw (Clawbot), including a real-world setup and some practical insights after using it in production-like scenarios.

My setup is based on:

- AMD Ryzen 5 5600X
- RTX 3060 (LHR)
- NVMe SSD storage
- Ubuntu Server (headless environment)

I also experimented with multiple Ollama local models as part of a fallback strategy, along with cloud models like Kimi 2.5.

## Real Setup (Explained)

From a configuration standpoint, OpenClaw is a powerful and flexible system, but that flexibility comes at a cost. Instead of showing raw configuration, here is how my setup is structured conceptually:

- Primary model: Claude Opus 4.6
- Cloud alternative: Kimi 2.5
- Fallback #1: Local models via Ollama (GPU)
- Fallback #2: OpenAI models
- Interface: Telegram
- Remote access: Tailscale as a service

Primary cloud model → Alternative cloud → Local fallback → Secondary cloud fallback

This setup aims to balance:

- Performance
- Reliability
- Offline capability

## Infrastructure & Tooling

### Ubuntu Server

I chose Ubuntu Server to fully utilize the machine:

- Lower overhead (no GUI)
- Better resource allocation for models
- More predictable performance for long-running processes

### Tailscale (as a service)

I used Tailscale running as a background service to access the machine remotely. The experience was excellent:

- Fast and stable connections
- Zero-config networking
- Secure remote access without exposing ports

This made it extremely easy to:

- Manage OpenClaw remotely
- Debug issues
- Interact with the system from anywhere

### Claude Code for Setup

I used Claude Code to bootstrap and configure the environment. This significantly reduced setup friction:

- Faster iteration
- Easier debugging
- Better guidance wiring models and fallbacks

## Local Models Tested (Ollama)

I tested several local models using Ollama:

- Gemma 3 (12B)
- Qwen 3 (14B, abliterated)
- Qwen 3.5 (9B)

## Model Performance Ranking

### 🧠 Overall Ranking (Reasoning + Speed)

1. Kimi 2.5 (cloud) → Best overall performance
2. Gemma 3 (local)
3. Qwen 3 (local)
4. Qwen 3.5 (local)

### ☁️ Provider Ranking

1. Anthropic (Claude models) → Most reliable reasoning
2. OpenAI models → Strong and consistent
3. Ollama (local models) → Significantly weaker

## My Experience with Local Models

Using Ollama with local models is a great idea in theory:

- Works without internet
- Fully local
- Good fallback strategy

### Reality

Even running on an RTX 3060 and testing multiple models, I consistently saw:

- Weak reasoning
- Poor context handling
- Inconsistent outputs

Local models were, in practice, a major downgrade. It almost feels like the system gets "lobotomized" when switching to them. This becomes very clear when compared to:

- Claude (Anthropic)
- OpenAI models
- Kimi 2.5 (which performed surprisingly well)

## Key Challenges

### 1. Opacity in the TUI

The TUI feels like a black box. It is hard to tell:

- Which model is active
- When fallbacks trigger
- Why decisions are made

This makes debugging painful.

### 2. Lack of Cross-Channel Consistency

- Telegram ≠ TUI
- No shared continuity
- Fragmented sessions

### 3. Configuration Complexity

You must carefully align models, providers, and fallbacks; any mismatch leads to:

- Silent failures
- Weird behaviors
- Hard-to-debug issues

## Final Thoughts

OpenClaw has a strong architectural foundation:

- Multi-model orchestration
- Fallback strategies
- Multi-channel interaction

But it needs improvements in:

- Transparency (TUI)
- Cross-channel consistency
- Developer experience
- Local model performance

## Conclusion

The combination of:

- Ubuntu Server
- Cloud + local models

creates a very powerful personal AI infrastructure. Today, cloud models still massively outperform local ones, even on decent hardware like an RTX 3060. The execution is promising, but the ecosystem, especially around local models, still has a long way to go.
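The fallback chain described above can be sketched in a few lines of Python. Everything here is illustrative: the model labels mirror my chain, but the callables are stubs standing in for real API clients, not OpenClaw's actual interfaces.

```python
def fail(prompt: str) -> str:
    """Stub for a provider that is down or rate limited."""
    raise TimeoutError("rate limited")

def run_with_fallback(prompt, chain):
    """Try each (label, call) pair in order; return the first success."""
    errors = []
    for label, call in chain:
        try:
            return label, call(prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            errors.append(f"{label}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

chain = [
    ("claude-opus-4.6", fail),                       # primary cloud (simulated outage)
    ("kimi-2.5", lambda p: "answer from kimi"),      # alternative cloud
    ("ollama/qwen3", lambda p: "answer from local"), # local fallback
]
used, answer = run_with_fallback("hello", chain)
# used == "kimi-2.5": the primary failed, so the first fallback answered
```

The ordering is the whole point: cloud quality first, local availability last, so a provider outage degrades gracefully instead of failing outright.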
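On the local side, each model is reachable through Ollama's HTTP API on its default port. A minimal probe looks like this; the endpoint and payload follow Ollama's documented `/api/generate` route, and the model tag is an example (exact tags depend on what you pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a locally pulled model."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def generate(model: str, prompt: str) -> str:
    """Send the prompt and return the model's full response text."""
    with urllib.request.urlopen(build_request(model, prompt), timeout=300) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama daemon and a pulled model):
# generate("gemma3:12b", "Summarize this log line in one sentence: ...")
```

Running the same prompt through each tag this way is how I compared the local models side by side.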
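Much of this opacity could be removed by simply logging which model serves each request and when a fallback fires. A sketch of the kind of visibility I mean, using standard Python logging rather than anything from OpenClaw's internals:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("model-router")

def call_logged(label: str, call, prompt: str):
    """Wrap a provider call so the active model and fallback triggers are visible."""
    log.info("trying model=%s", label)
    try:
        answer = call(prompt)
        log.info("served by model=%s", label)
        return answer
    except Exception as exc:
        log.warning("fallback will trigger: model=%s error=%s", label, exc)
        raise
```

Three log lines per request would answer all three questions above.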
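Continuity could be restored with a session store keyed by user rather than by channel, so Telegram and the TUI read and write the same history. A minimal sketch (my own illustration, with a made-up user name, not OpenClaw code):

```python
from collections import defaultdict

class SessionStore:
    """One conversation history per user, shared by every channel."""

    def __init__(self) -> None:
        self._history = defaultdict(list)

    def append(self, user: str, channel: str, message: str) -> None:
        # Keyed by user only, so every channel writes to the same session.
        self._history[user].append((channel, message))

    def context(self, user: str) -> list:
        return list(self._history[user])

store = SessionStore()
store.append("gabriel", "telegram", "draft the weekly report")
store.append("gabriel", "tui", "continue where we left off")
# context("gabriel") now holds both turns: the TUI sees the Telegram message too.
```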
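A small pre-flight check over the configuration can surface these mismatches before they become silent runtime failures. The config shape below is invented for illustration (OpenClaw's real schema differs), and `gpt-4o` is just a stand-in for the OpenAI fallback:

```python
# Hypothetical config shape: a provider→models map plus an ordered fallback chain.
config = {
    "providers": {
        "anthropic": ["claude-opus-4.6"],
        "moonshot": ["kimi-2.5"],
        "ollama": ["gemma3:12b", "qwen3:14b"],
    },
    "fallback_chain": ["claude-opus-4.6", "kimi-2.5", "qwen3:14b", "gpt-4o"],
}

def unmatched_models(cfg: dict) -> list:
    """Return fallback-chain entries that no configured provider serves."""
    known = {m for models in cfg["providers"].values() for m in models}
    return [m for m in cfg["fallback_chain"] if m not in known]

missing = unmatched_models(config)
# "gpt-4o" is in the chain but no provider lists it, so it is flagged
# before it can cause a silent failure at runtime.
```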