```bash
curl -fsSL https://ollama.com/install.sh | sh
```
```bash
ollama pull llama3
ollama run llama3 "Hello, world!"
```
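Once the model responds on the command line, you can also talk to Ollama over its REST API, which is what the n8n node will do later. A quick sanity check, assuming the default install listening on port 11434:

```bash
# Ask the local Ollama server for a one-shot completion.
# /api/generate is Ollama's completion endpoint; "stream": false returns
# a single JSON object instead of a stream of chunks.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello, world!", "stream": false}' \
  || echo "Ollama is not reachable on port 11434"
```

The generated text comes back in the `response` field of the JSON body.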
```yaml
version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_PROTOCOL=http
    volumes:
      - n8n_data:/home/node/.local/share/n8n
    # This allows n8n to reach Ollama on the host machine
    extra_hosts:
      - "host.docker.internal:host-gateway"

volumes:
  n8n_data:
```
```bash
docker compose up -d
```

- Zero Latency: No API round-trips to Virginia or Ireland.
- Privacy: Your data, your logs, your secrets stay on your hardware.
- No Subscriptions: One-time hardware cost, zero monthly fees.
- Full Control: Use any model you want, from Llama 3.x to Mistral or DeepSeek.

- OS: Any modern Linux distribution (Ubuntu 24.04+ or Debian 13 recommended).
- Ollama: The easiest way to run LLMs locally.
- n8n: The "Zapier for self-hosters" with built-in AI nodes.
- Docker: For easy deployment and isolation.

- Open n8n at http://localhost:5678.
- Add an Ollama node to your workflow.
- Configure the Credentials: Set the URL to http://host.docker.internal:11434.
- Select your model (e.g., llama3).
- Connect it to a trigger, like an HTTP Request or a Cron job.

- Node 1 (Execute Command): `tail -n 100 /var/log/syslog`
- Node 2 (Ollama): Prompt: "Summarize these logs and highlight any security warnings or critical errors."
- Node 3 (Email/Discord): Send the output to your preferred channel.

- GPU Acceleration: If you have an NVIDIA GPU, make sure you have the nvidia-container-toolkit installed so Docker can leverage CUDA.
- Model Quantization: Stick to 4-bit or 6-bit quantizations for a good balance of speed and intelligence.
- VRAM Matters: For 7B or 8B models, 8GB of VRAM is the sweet spot. For 70B models, you’ll want 24GB+ (or a Mac Studio).

- Ollama Official Documentation
- n8n Self-Hosted AI Starter Kit
- Linux Automation Best Practices (2026)
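As a quick way to prototype the three-node log-summary workflow above before building it in n8n, the same flow can be approximated as a one-off shell pipeline. This is only a sketch, assuming Ollama is running locally with llama3 pulled and that jq is installed:

```bash
# One-shot approximation of the n8n workflow: grab recent logs,
# ask the local model to summarize them, print the answer.
tail -n 100 /var/log/syslog \
  | jq -Rs '{model: "llama3", prompt: ("Summarize these logs and highlight any security warnings or critical errors:\n\n" + .), stream: false}' \
  | curl -s http://localhost:11434/api/generate -d @- \
  | jq -r '.response'
```

`jq -Rs` slurps the raw log text into a single JSON string so it can be embedded safely in the request body, which is the same escaping problem the n8n Ollama node handles for you.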