Self-Hosted AI in 2026: Automating Your Linux Workflow with n8n and Ollama

In 2026, the "Local AI" movement is no longer just a niche hobby for hardware enthusiasts. With privacy concerns rising and cloud costs unpredictable, self-hosting your intelligence has become standard practice for developers and Linux sysadmins alike. Today, we're looking at how to combine the power of Ollama with the robustness of n8n to build a truly private automation stack. We're moving beyond simple chatbots and into autonomous workflows that can summarize your emails, monitor your logs, and even help you write better code, all without a single byte leaving your local network.

Why Self-Host AI Automation?

- Zero Latency: No API round-trips to Virginia or Ireland.
- Privacy: Your data, your logs, your secrets stay on your hardware.
- No Subscriptions: One-time hardware cost, zero monthly fees.
- Full Control: Use any model you want, from Llama 3.x to Mistral or DeepSeek.
The Stack

- OS: Any modern Linux distribution (Ubuntu 24.04+ or Debian 13 recommended).
- Ollama: The easiest way to run LLMs locally.
- n8n: The "Zapier for self-hosters" with built-in AI nodes.
- Docker: For easy deployment and isolation.
Step 1: Install Ollama

If you haven't installed Ollama yet, it's a single command:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

To verify it's working and pull a versatile model (like Llama 3):

```shell
ollama pull llama3
ollama run llama3 "Hello, world!"
```
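Once the service is up, Ollama also exposes a local HTTP API (default port 11434), which is what n8n will talk to later. A minimal sanity-check sketch, assuming the default port and the documented `/api/generate` endpoint; the `make_payload` helper is purely illustrative:

```shell
#!/bin/sh
# Sanity-check the local Ollama HTTP API (default port 11434).

OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"

# Illustrative helper: build a /api/generate request body.
make_payload() {
  printf '{"model": "%s", "prompt": "%s", "stream": false}' "$1" "$2"
}

# Only call the API if something is listening, so this is safe
# to run before Ollama is installed or started.
if curl -s --max-time 2 "$OLLAMA_URL/api/tags" > /dev/null 2>&1; then
  make_payload "llama3" "Hello, world!" \
    | curl -s "$OLLAMA_URL/api/generate" -d @-
else
  echo "Ollama is not reachable at $OLLAMA_URL" >&2
fi
```

With `"stream": false`, the API returns a single JSON object whose `response` field holds the full completion, which is much easier to parse in automation than the default streaming output.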
Step 2: Deploy n8n with Docker

We'll use Docker Compose to get n8n up and running. Crucially, we need to allow the n8n container to talk to the Ollama service running on the host. Create a docker-compose.yml:

```yaml
version: '3.8'
services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_PROTOCOL=http
    volumes:
      - n8n_data:/home/node/.local/share/n8n
    # This allows n8n to reach Ollama on the host machine
    extra_hosts:
      - "host.docker.internal:host-gateway"

volumes:
  n8n_data:
```

Then bring the stack up:

```shell
docker compose up -d
```
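Before wiring anything in the UI, it's worth confirming the container can actually see Ollama through host.docker.internal. A hedged sketch, assuming the service is named `n8n` as in the compose file above; the `ollama_url_from_container` helper is hypothetical:

```shell
#!/bin/sh
# Check that the n8n container can reach Ollama on the host via
# host.docker.internal (mapped by the extra_hosts entry above).

# Hypothetical helper: the Ollama base URL as seen from inside a container.
ollama_url_from_container() {
  printf 'http://host.docker.internal:%s' "${1:-11434}"
}

# Only attempt the live check when Docker is available and n8n is up.
if docker compose ps n8n > /dev/null 2>&1; then
  # wget is typically present in the Alpine-based n8n image; curl may not be.
  docker compose exec n8n \
    wget -qO- "$(ollama_url_from_container)/api/tags"
else
  echo "n8n is not running; start it with: docker compose up -d" >&2
fi
```

If the `wget` call prints a JSON list of models, the host-gateway mapping works and the n8n credentials in the next step should connect cleanly.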
Step 3: Create Your First AI Workflow

- Open n8n at http://localhost:5678.
- Add an Ollama node to your workflow.
- Configure the Credentials: Set the URL to http://host.docker.internal:11434.
- Select your model (e.g., llama3).
- Connect it to a trigger, like an HTTP Request or a Cron job.
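If the credential test fails, debugging outside the UI helps. Roughly speaking, a chat-style Ollama call boils down to a POST against `/api/chat`; this sketch reproduces that call with curl. The `chat_payload` helper is illustrative, and note that host.docker.internal only resolves where Docker maps it, so run this from a container or swap in localhost on the host:

```shell
#!/bin/sh
# Roughly reproduce an Ollama chat call: a POST to /api/chat.

URL="${URL:-http://host.docker.internal:11434}"

# Illustrative helper: build a minimal /api/chat request body.
chat_payload() {
  printf '{"model": "%s", "messages": [{"role": "user", "content": "%s"}], "stream": false}' "$1" "$2"
}

# Skip the request entirely if no Ollama API answers at $URL.
if curl -s --max-time 2 "$URL/api/tags" > /dev/null 2>&1; then
  chat_payload "llama3" "Say hi in five words." \
    | curl -s "$URL/api/chat" -d @-
else
  echo "No Ollama API at $URL" >&2
fi
```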
Practical Example: The "Log Watcher" Workflow

Imagine you want a summary of your system logs emailed to you every morning, but you don't want to send raw logs to a cloud AI.

- Node 1 (Execute Command): tail -n 100 /var/log/syslog
- Node 2 (Ollama): Prompt: "Summarize these logs and highlight any security warnings or critical errors."
- Node 3 (Email/Discord): Send the output to your preferred channel.
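For comparison, the three nodes above can be sketched as a single cron-able shell script. This is an assumption-laden sketch, not the n8n workflow itself: it requires curl and jq, and the `deliver` step is a stdout placeholder for your own mail or webhook command:

```shell
#!/bin/sh
# Sketch of the three-node "Log Watcher" as a cron-able script.
# Requires curl and jq; the delivery step is a placeholder.

OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"
LOG_FILE="${LOG_FILE:-/var/log/syslog}"

# Node 1: gather the most recent log lines.
collect_logs() {
  tail -n 100 "$LOG_FILE" 2>/dev/null
}

# Build the /api/generate payload; jq -Rs safely JSON-escapes the raw logs.
build_payload() {
  jq -Rs --arg model "$1" \
    '{model: $model, stream: false,
      prompt: ("Summarize these logs and highlight any security warnings or critical errors:\n" + .)}'
}

# Node 2: ask the model for a summary.
summarize() {
  build_payload "llama3" \
    | curl -s "$OLLAMA_URL/api/generate" -d @- \
    | jq -r '.response'
}

# Node 3: deliver the summary (stdout here; swap in mail or a webhook).
deliver() {
  cat
}

# Only run the pipeline when Ollama is actually reachable.
if curl -s --max-time 2 "$OLLAMA_URL/api/tags" > /dev/null 2>&1; then
  collect_logs | summarize | deliver
else
  echo "Ollama not reachable at $OLLAMA_URL; skipping summary" >&2
fi
```

The jq escaping matters: syslog lines routinely contain quotes and backslashes that would break a naively string-interpolated JSON body.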
Performance Tips for 2026

- GPU Acceleration: If you have an NVIDIA GPU, make sure you have the nvidia-container-toolkit installed so Docker can leverage CUDA.
- Model Quantization: Stick to 4-bit or 6-bit quantizations for a good balance of speed and intelligence.
- VRAM Matters: For 7B or 8B models, 8GB of VRAM is the sweet spot. For 70B models, you'll want 24GB+ (or a Mac Studio).

Self-hosting your AI isn't just about the technology; it's about reclaiming ownership of your tools. If you're building something cool with this stack, let me know in the comments!
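To check these tips in practice: Ollama model tags can pin an explicit quantization (e.g. llama3:8b-instruct-q4_K_M; exact tag names vary per model), `ollama ps` shows whether a loaded model landed on GPU or CPU, and nvidia-smi reports VRAM headroom. The `quant_tag` helper below is a hypothetical convenience, not part of Ollama:

```shell
#!/bin/sh
# Quantization and GPU checks for a local Ollama setup.

# Hypothetical helper: compose an explicit quantized model tag,
# e.g. quant_tag llama3 8b-instruct q4_K_M
quant_tag() {
  printf '%s:%s-%s' "$1" "$2" "$3"
}

if command -v ollama > /dev/null 2>&1; then
  ollama list   # installed models with size on disk (reflects quantization)
  ollama ps     # loaded models; shows whether they run on GPU or CPU
else
  echo "ollama CLI not found" >&2
fi

if command -v nvidia-smi > /dev/null 2>&1; then
  # Driver-level view of VRAM usage and headroom.
  nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv
fi
```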
References & Further Reading

- Ollama Official Documentation
- n8n Self-Hosted AI Starter Kit
- Linux Automation Best Practices (2026)