# Tools: Configure Local LLM with OpenCode
2026-01-16
admin
Add any OpenAI-compatible endpoint to OpenCode.

OpenCode doesn't currently expose a simple "bring your own endpoint" option in its UI. Instead, it ships with a predefined list of cloud providers. However, OpenCode fully supports OpenAI-compatible APIs, which means you can plug in any compatible endpoint, including vLLM, LM Studio, Ollama (with a proxy), or your own custom server.

This post shows how to wire up a local vLLM server as a provider, but the same approach works for any OpenAI-compatible endpoint. vLLM exposes a /v1 API that matches OpenAI's Chat Completions API, which makes it an ideal drop-in backend.

## Prerequisites

- OpenCode installed and working
- A running OpenAI-compatible endpoint (for example, a local vLLM server on http://<host>:8000/v1); a quick way to check it is sketched below
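Before registering anything in OpenCode, it's worth confirming that the endpoint actually responds. The snippet below is a minimal sketch, assuming the `openai` Python package and a server at `http://localhost:8000/v1` (swap in your own host and port); it simply lists the model IDs the backend exposes.

```python
# Quick sanity check: list the model IDs the OpenAI-compatible endpoint exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your OpenAI-compatible endpoint
    api_key="sk-local",                   # vLLM ignores the key, but the client requires one
)

for model in client.models.list():
    print(model.id)
```

The ID printed here is the exact string the OpenCode configuration will need to reference later.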
## Step 1 – Register the provider in OpenCode auth

OpenCode stores provider authentication details in:

```
~/.local/share/opencode/auth.json
```
{ "vllm": { "type": "api", "key": "sk-local" }
} Enter fullscreen mode Exit fullscreen mode CODE_BLOCK:
{ "vllm": { "type": "api", "key": "sk-local" }
} CODE_BLOCK:
{ "vllm": { "type": "api", "key": "sk-local" }
} CODE_BLOCK:
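If you'd rather script that merge than hand-edit the file, something along these lines should do it. This is a sketch that assumes Python 3 and uses the same path and placeholder key as above:

```python
# Add (or overwrite) the "vllm" entry in auth.json without touching other providers.
import json
from pathlib import Path

auth_path = Path.home() / ".local" / "share" / "opencode" / "auth.json"
auth = json.loads(auth_path.read_text()) if auth_path.exists() else {}

# "sk-local" is only a placeholder; vLLM does not validate it.
auth["vllm"] = {"type": "api", "key": "sk-local"}

auth_path.parent.mkdir(parents=True, exist_ok=True)
auth_path.write_text(json.dumps(auth, indent=2) + "\n")
```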
## Step 2 – Define the OpenAI-compatible provider

Now define the provider itself in:

```
~/.config/opencode/opencode.json
```

Create the file if it doesn't exist, then add:
{ "$schema": "https://opencode.ai/config.json", "provider": { "vllm": { "npm": "@ai-sdk/openai-compatible", "name": "vLLM (local)", "options": { "baseURL": "http://100.108.174.26:8000/v1" }, "models": { "Qwen3-Coder-30B-A3B-Instruct": { "name": "My vLLM model" } } } }, "model": "vllm/Qwen3-Coder-30B-A3B-Instruct", "small_model": "vllm/Qwen3-Coder-30B-A3B-Instruct"
} Enter fullscreen mode Exit fullscreen mode CODE_BLOCK:
{ "$schema": "https://opencode.ai/config.json", "provider": { "vllm": { "npm": "@ai-sdk/openai-compatible", "name": "vLLM (local)", "options": { "baseURL": "http://100.108.174.26:8000/v1" }, "models": { "Qwen3-Coder-30B-A3B-Instruct": { "name": "My vLLM model" } } } }, "model": "vllm/Qwen3-Coder-30B-A3B-Instruct", "small_model": "vllm/Qwen3-Coder-30B-A3B-Instruct"
} CODE_BLOCK:
{ "$schema": "https://opencode.ai/config.json", "provider": { "vllm": { "npm": "@ai-sdk/openai-compatible", "name": "vLLM (local)", "options": { "baseURL": "http://100.108.174.26:8000/v1" }, "models": { "Qwen3-Coder-30B-A3B-Instruct": { "name": "My vLLM model" } } } }, "model": "vllm/Qwen3-Coder-30B-A3B-Instruct", "small_model": "vllm/Qwen3-Coder-30B-A3B-Instruct"
} CODE_BLOCK:
## Key fields explained

- `@ai-sdk/openai-compatible` – tells OpenCode to treat this provider as OpenAI-compatible.
- `baseURL` – must point to the /v1 endpoint of your server.
- `models` – the key must exactly match the model ID exposed by the backend.
- `model` / `small_model` – sets the default model used by OpenCode.

## Selecting your model

After these steps, restart OpenCode if it's running. Your custom provider and model will appear in the selection list, and you can switch to them with:

```
/model
```
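If OpenCode can't get a response out of the new provider, it helps to rule out the backend first with a direct chat-completion call that bypasses OpenCode entirely. This is a minimal sketch assuming the `openai` Python package, with the host and model ID taken from the example config above:

```python
# End-to-end smoke test against the backend, bypassing OpenCode entirely.
from openai import OpenAI

client = OpenAI(base_url="http://100.108.174.26:8000/v1", api_key="sk-local")

response = client.chat.completions.create(
    model="Qwen3-Coder-30B-A3B-Instruct",  # must match the model ID the server reports
    messages=[{"role": "user", "content": "Reply with a single word: ready"}],
    max_tokens=10,
)
print(response.choices[0].message.content)
```

If this call succeeds, the server and model ID are fine, and any remaining problem is most likely in the OpenCode configuration (the provider key, the model key, or the `model` / `small_model` defaults).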
Tags: how-to, tutorial, guide, dev.to, ai, openai, llm, server