# Tools: Getting Started with LLM Gateway in 5 Minutes

Source: Dev.to

This guide walks you through making your first LLM request through LLM Gateway. By the end, you'll have a working API key and a completed request visible in your dashboard.

## Step 1: Get an API Key

- Sign in to the LLM Gateway dashboard.
- Create a new Project.
- Copy the API key.
- Export it in your shell or add it to a `.env` file:
```shell
export LLM_GATEWAY_API_KEY="llmgtwy_XXXXXXXXXXXXXXXX"
```

## Step 2: Make Your First Request

LLM Gateway uses an OpenAI-compatible API. Point your requests to `https://api.llmgateway.io/v1` and you're done.

### Using curl

```shell
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "What is an LLM gateway?"}
    ]
  }'
```

### Using Node.js (OpenAI SDK)

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.llmgateway.io/v1",
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is an LLM gateway?" }],
});

console.log(response.choices[0].message.content);
```
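Whichever client you use, the response body has the same OpenAI-style shape. As a quick offline sketch, here is one way to pull out the reply and surface an error body cleanly (`extract_reply` is a hypothetical helper, not part of any SDK; the error shape assumes the usual OpenAI-style `error` object):

```python
# Sketch: extract the assistant reply from an OpenAI-style chat completion
# response body. `extract_reply` is illustrative, not part of any SDK.

def extract_reply(body: dict) -> str:
    """Return the first choice's message content, or raise a clear error."""
    choices = body.get("choices") or []
    if not choices:
        # Failed calls typically carry an "error" object instead of "choices".
        err = body.get("error", {}).get("message", "no choices in response")
        raise ValueError(f"completion failed: {err}")
    return choices[0]["message"]["content"]

# Example with a trimmed-down response body:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "An LLM gateway is a proxy."}}
    ]
}
print(extract_reply(sample))  # An LLM gateway is a proxy.
```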
### Using Python

```python
import requests
import os

response = requests.post(
    "https://api.llmgateway.io/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.getenv('LLM_GATEWAY_API_KEY')}",
    },
    json={
        "model": "gpt-4o",
        "messages": [
            {"role": "user", "content": "What is an LLM gateway?"}
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

### Using the AI SDK

If you're using the Vercel AI SDK, you can use the native provider:

```typescript
import { llmgateway } from "@llmgateway/ai-sdk-provider";
import { generateText } from "ai";

const { text } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  prompt: "What is an LLM gateway?",
});
```
Or use the OpenAI-compatible adapter:

```typescript
import { createOpenAI } from "@ai-sdk/openai";

const llmgateway = createOpenAI({
  baseURL: "https://api.llmgateway.io/v1",
  apiKey: process.env.LLM_GATEWAY_API_KEY!,
});
```

## Step 3: Enable Streaming

Pass `stream: true` to any request and the gateway will proxy the event stream unchanged:

```shell
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Write a short poem about APIs"}
    ]
  }'
```

## Step 4: Monitor in the Dashboard

Every call appears in the dashboard with latency, cost, and provider breakdown. Go back to your project to see your request logged with the model used, token counts, cost, and response time.

## Step 5: Try a Different Provider

The best part of using a gateway: switching providers is a one-line change. Try the same request with a different model:

```shell
# Anthropic
"model": "anthropic/claude-haiku-4-5"

# Google
"model": "google-ai-studio/gemini-2.5-flash"
```

Same API, same code. Just a different model string.
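To make that explicit in code, here is a small offline sketch showing that only the model string changes between providers (`build_payload` is a hypothetical helper, not part of the gateway):

```python
# Sketch: the same chat-completion payload, parameterized by model string.
# `build_payload` is illustrative; it mirrors the JSON bodies shown above.

def build_payload(model: str, prompt: str, stream: bool = False) -> dict:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if stream:
        payload["stream"] = True
    return payload

# The same request against three providers; only "model" differs.
for model in (
    "gpt-4o",
    "anthropic/claude-haiku-4-5",
    "google-ai-studio/gemini-2.5-flash",
):
    payload = build_payload(model, "What is an LLM gateway?")
    # POST payload to https://api.llmgateway.io/v1/chat/completions as before.
    print(payload["model"])
```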
## What's Next

- Try models in the Playground — test any model with a chat interface before integrating
- Browse all models — compare pricing, context windows, and capabilities
- Read the full docs — streaming, tool calling, structured output, and more
- Join the Discord — get help and share what you're building
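One closing sketch, tying back to Step 3: the streamed body is a series of server-sent events, one JSON chunk per `data:` line, terminated by `data: [DONE]`. Assuming the OpenAI-style chunk format, collecting the text deltas looks roughly like this (`collect_stream_text` is a hypothetical helper working on canned lines, not a network client):

```python
import json

# Sketch: parse OpenAI-style streaming chunks offline. Each "data:" line
# carries one JSON chunk whose choices[0].delta may contain a text fragment.

def collect_stream_text(lines) -> str:
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and SSE comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))  # first delta has only a role
    return "".join(parts)

# Example with canned event lines:
events = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"APIs hum"}}]}',
    'data: {"choices":[{"delta":{"content":" softly."}}]}',
    "data: [DONE]",
]
print(collect_stream_text(events))  # APIs hum softly.
```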