# Want Your AI to Stay Private? Run a Fully Local LLM with Open WebUI + Ollama
As LLMs become part of daily workflows, one question comes up more often: where does the data go? Most cloud-based AI tools send prompts and responses to remote servers for processing. For many use cases, that's perfectly fine. But when you are working with:

- Sensitive code
- Personal notes
- Internal documentation
- Experimental ideas

you may prefer not to send that data outside your machine. This is where local LLM setups become useful.

## What This Setup Provides

This setup creates a fully local ChatGPT-like experience:

- Runs entirely on your machine
- No external API calls
- No data leaving your system
- Modern chat interface
- Model switching support

## Architecture Overview

Everything runs locally:

```
Browser (Open WebUI)
        ↓
Docker Container (Open WebUI)
        ↓
Ollama API (localhost:11434)
        ↓
Local LLM Model (e.g., mistral)
```

## Components

### 1. Ollama

Runs LLM models locally and exposes an API.

### 2. Open WebUI

Provides a ChatGPT-like interface with:

- Chat history
- Model selection

https://openwebui.com/

### 3. Docker

Runs Open WebUI in an isolated container.

## Installation & Setup
### 1. Install Ollama

```bash
curl -fsSL https://ollama.com/install.sh | sh
```
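Once the installer finishes, it should have put the `ollama` binary on your PATH. A quick sanity check (the fallback message is my own wording):

```shell
# Confirm the install worked; prints the version on success.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found on PATH"
fi
```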
### 2. Start Ollama

```bash
ollama serve
```

If this fails with `address already in use`, it simply means Ollama is already running, and no action is needed.
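Since `ollama serve` refuses to start when the port is taken, a small guard avoids the error entirely. This is a sketch; `ollama_running` is an invented helper name, not part of the Ollama CLI:

```shell
# Start the server only if nothing already answers on the default API port.
# `ollama_running` is an invented helper, not part of the Ollama CLI.
ollama_running() {
  curl -fsS --max-time 2 http://127.0.0.1:11434 >/dev/null 2>&1
}

if ollama_running; then
  echo "Ollama is already serving; nothing to do"
else
  echo "Port 11434 is free; run: ollama serve"
fi
```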
### 3. Pull a Model

```bash
ollama pull mistral
```

Check available models:

```bash
ollama list
```
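`ollama list` prints a human-readable table. For scripting, a tiny filter can extract just the NAME column; the `model_names` helper is my own, and the sample table below is illustrative:

```shell
# Strip the header row and keep only the first (NAME) column.
# `model_names` is an invented helper; the sample table is illustrative.
model_names() {
  awk 'NR > 1 { print $1 }'
}

printf 'NAME            ID      SIZE    MODIFIED\nmistral:latest  abc123  4.1 GB  2 days ago\n' | model_names
# -> mistral:latest
```

On a real machine you would pipe the live output instead: `ollama list | model_names`.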
### 4. Run Open WebUI

```bash
sudo docker run -d \
  --network=host \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main
```
## Access the Interface

Open your browser at:

```
http://localhost:8080
```

You now have a local ChatGPT-style interface.
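The container can take a few seconds to come up before the page answers. If you script the workflow, a polling loop saves you from refreshing a blank tab; this is a sketch, and `wait_for` is my own helper name:

```shell
# Poll a URL until it answers or we give up. Sketch only; tune tries/sleep.
wait_for() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out"
  return 1
}
```

Example: `wait_for http://localhost:8080 && xdg-open http://localhost:8080`.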
## Important Fix (Docker Networking)

If Open WebUI cannot detect Ollama, make sure the container was started with:

```
--network=host
```

This allows the container to directly access:

```
http://127.0.0.1:11434
```

Without this flag, Docker may isolate the container from the local API.
## Daily Usage

Start WebUI:

```bash
sudo docker start open-webui
```

Stop WebUI:

```bash
sudo docker stop open-webui
```

Check models:

```bash
ollama list
```

Run a model in the terminal:

```bash
ollama run mistral
```
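If you run these every day, shell aliases keep them short. The alias names here are my own choice; add them to `~/.bashrc` or `~/.zshrc`:

```shell
# Optional convenience aliases for the daily-usage commands above.
# The names are arbitrary; pick whatever you like.
alias webui-start='sudo docker start open-webui'
alias webui-stop='sudo docker stop open-webui'
alias llm='ollama run mistral'
```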
## Troubleshooting

**Port 11434 already in use:** Ollama is already running; no action required.

**Model not visible in the UI:** restart the container:

```bash
sudo docker restart open-webui
```

**Connection issue:** check that the Ollama API responds:

```bash
curl http://127.0.0.1:11434
```
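To double-check that the process holding port 11434 really is Ollama (and not something else), you can inspect the listener. This assumes `ss` from iproute2 is installed; `lsof -i :11434` works where available:

```shell
# Show whatever is listening on the Ollama port, if anything.
# Requires `ss` (iproute2); falls back to a message otherwise.
ss -ltnp 2>/dev/null | grep ':11434' || echo "nothing listening on 11434"
```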
## Why This Matters

- Prompts stay local
- Files remain on your machine
- No external logging or tracking
- Full control over your environment

It is especially useful for:

- Developers working with sensitive code
- Offline workflows
- Learning and experimentation
- Privacy-conscious users

## Trade-offs

Local models are not identical to large cloud models:

- Slightly lower reasoning capability
- Slower responses (CPU-based inference)
- Limited context window (depending on model)

But for many use cases, they are more than sufficient.

## Final Result

You now have:

- A local LLM (e.g., Mistral)
- A ChatGPT-like interface
- A fully private AI environment
- No dependency on external APIs

## Quick Cheat Sheet

```bash
# Start WebUI
sudo docker start open-webui

# Open UI (in your browser)
http://localhost:8080

# Check models
ollama list

# Run model
ollama run mistral

# Stop WebUI
sudo docker stop open-webui
```

## Final Thought

Cloud AI is powerful and convenient. Local AI is controlled and private. Both have their place. This setup simply gives you the option.