Tools: OSS ChatGPT WebUI – 530 Models, MCP, Tools, Gemini RAG, Image/Audio...

Major release focused on extensibility, expanded provider support, and enhanced user experience.

Get instant access to 530+ models from 24 providers, with extensibility at its core.

See Install Docs for running from Docker or source.

A major change that significantly increases the number of available models is the switch to the models.dev open provider and model catalogue, the same catalogue used and maintained by OpenCode.

The llms.json provider configuration is now a superset of models.dev/api.json, whose definitions are merged in, so you can enable a provider with just "enabled": true and inherit the rest of its configuration from models.dev.
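As an illustration of the merge behavior described above, a provider entry could be reduced to a single flag (the surrounding structure here is a hypothetical sketch, not the documented llms.json schema):

```json
{
  "providers": {
    "anthropic": { "enabled": true }
  }
}
```

With this entry, the base URL, API conventions, and model list for the provider would all be inherited from its models.dev definition rather than repeated locally.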

The switch to models.dev greatly expands the model selection to over 530 models from 24 different providers, including new support for:

Non-OpenAI-compatible LLM and image generation providers are maintained in the providers extension and registered using the ctx.add_provider() API. Several different provider implementations exist to take advantage of each provider's features, such as Interleaved Thinking support in Anthropic's Messages API, which lets all Claude and MiniMax models reason between tool calls for improved agentic performance.
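The registration pattern can be sketched as follows. Note that everything here except the ctx.add_provider() name is an assumption: the real extension context, provider base class, and call signatures are not shown in this post, so stand-in stubs are used.

```python
class ProviderRegistry:
    """Hypothetical stand-in for the extension context (ctx)
    that the providers extension exposes to implementations."""

    def __init__(self):
        self.providers = {}

    def add_provider(self, name, impl):
        # Register a provider implementation under a unique name.
        self.providers[name] = impl


class AnthropicMessagesProvider:
    """Illustrative non-OpenAI-compatible provider targeting a
    Messages-style API, where interleaved thinking between tool
    calls would be handled by the implementation."""

    def chat(self, messages):
        # A real implementation would call the provider's HTTP API here.
        return {"role": "assistant", "content": "..."}


# Register the custom provider with the (stubbed) extension context.
ctx = ProviderRegistry()
ctx.add_provider("anthropic", AnthropicMessagesProvider())
```

The value of per-provider implementations is that features like interleaved thinking, which have no OpenAI-compatible equivalent, can be surfaced without forcing every provider through one lowest-common-denominator interface.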

This actively maintained list of available providers and models is automatically synced into your providers.json daily, and can also be updated manually with:

As an optimization, only the providers referenced in your llms.json are saved. Any additional providers you want to use that are not included in models.dev can be added to your ~/.llms/providers-extra.json, which is merged into your providers.json on every update.
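For example, a local OpenAI-compatible endpoint not listed in models.dev could be declared in ~/.llms/providers-extra.json along these lines (the key names and structure are a hypothetical sketch of such an entry, not the documented schema):

```json
{
  "my-local-provider": {
    "api": "http://localhost:8080/v1",
    "models": {
      "my-local-model": {}
    }
  }
}
```

Because this file is merged on every update, the custom entry survives the daily providers.json refresh instead of being overwritten.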

This keeps your local configuration file lightweight by only including the providers that are available for use.

Source: HackerNews