Tools: Use Cursor with LM Studio
2026-03-01
admin
Elevate Your Coding: Integrating Cursor with Local LLMs (LM Studio) and GitHub Copilot

Prerequisites
Before you begin, make sure you have:

- Cursor installed
- LM Studio installed (see lmstudio.ai)
- ngrok (optional, discussed later)
- A GitHub Copilot subscription (optional but recommended)
- One or more local models downloaded (e.g. Gemma2, Llama3, DeepSeekCoder)

Part 1: Setting Up the Engine – LM Studio & ngrok
The goal of this section is to run a local LLM and expose it as an API that Cursor can consume.

(Image: example model selector inside LM Studio.)

Some good starting points:

- Llama3 (8B) or Gemma2 (9B) for general use
- DeepSeekCoder for coding-heavy tasks
- GLM4 for modern language capabilities
Click Download and wait for the model to finish downloading.

(Image: server configuration panel with model selected.)

# install (macOS example)
brew install ngrok
# authenticate
ngrok config add-authtoken <YOUR_AUTHTOKEN>
# start a tunnel for LM Studio's port
ngrok http 1234
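Once the tunnel (or just the local server) is up, you can exercise the endpoint directly. Below is a minimal Python sketch; the forwarding URL and model ID are placeholders, and the request is only built here, not sent:

```python
import json
import urllib.request

# Placeholder values; substitute the URL ngrok printed and your model ID.
BASE_URL = "https://a1b2-c3d4.ngrok-free.app/v1"

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat completion request for LM Studio's server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer lm-studio",  # placeholder key
        },
    )

req = build_chat_request(BASE_URL, "llama-3-8b-instruct", "Hello!")
print(req.full_url)
# Send it with urllib.request.urlopen(req) once the server is reachable.
```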
Copy the resulting "Forwarding" URL (e.g. https://a1b2-c3d4.ngrok-free.app); you'll need it when configuring Cursor.

(Image: ngrok running and showing a forwarding URL.)

(Image: sample API call via ngrok; traffic arriving at LM Studio through the tunnel.)

Part 2: Connecting the Cockpit – Configuring Cursor
Now let's teach Cursor to use your local model endpoint instead of the default cloud API.

Configure the Custom OpenAI API
In the OpenAI API area enter a placeholder key such as lm-studio.
Set the Base URL to your endpoint:

- Local: http://localhost:1234/v1
- ngrok: https://<your-subdomain>.ngrok-free.app/v1
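If you ever script this configuration, the base URL can be sanity-checked before use. A tiny helper (a sketch; the function name is my own) normalizes whatever you paste in so the trailing /v1 is present:

```python
def normalize_base_url(url: str) -> str:
    """Ensure the endpoint ends with /v1, which LM Studio's
    OpenAI-compatible server expects."""
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url

print(normalize_base_url("http://localhost:1234"))     # http://localhost:1234/v1
print(normalize_base_url("http://localhost:1234/v1"))  # unchanged
```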
Tip: don't forget the trailing /v1; LM Studio mimics the OpenAI API path.

Model Overrides
Cursor may not recognize your model name automatically. Add a custom model name matching the ID shown in LM Studio (e.g. llama-3-8b-instruct).
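Conceptually, the override is just a name mapping: Cursor asks for one model name, and LM Studio serves another. The sketch below illustrates that idea only; the dictionary and function are my own, not Cursor's actual implementation (which lives in its settings UI):

```python
# Illustrative only: maps the name Cursor expects to the LM Studio model ID.
MODEL_OVERRIDES = {
    "gpt-4": "llama-3-8b-instruct",
}

def resolve_model(requested: str) -> str:
    """Return the LM Studio model ID for a name Cursor sends;
    unknown names pass through unchanged."""
    return MODEL_OVERRIDES.get(requested, requested)

print(resolve_model("gpt-4"))       # llama-3-8b-instruct
print(resolve_model("other-id"))    # other-id
```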
If you get an "Invalid Model" warning, rename the model in LM Studio to something Cursor expects (like gpt-4), or use the override option.

GitHub: https://github.com/ketanvijayvargiya/Cursor-LMStudio

Step-by-step walkthrough

- Install LM Studio
Download the appropriate package for your OS.
Run the installer and launch the application.
(Image: LM Studio model list.)

- Choose and Download a Model
Use the search bar to find a model; some good starting points are listed in Part 1 above.

- Start the Local Server
Switch to the Local Server tab.
Select your downloaded model from the dropdown.
Ensure CORS is enabled and note the default port (1234).
Click Start Server. You'll see logs confirming the service is running.
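To double-check the server from outside Cursor, you can query the OpenAI-compatible /v1/models endpoint. A sketch, with helper names of my own; the network call only works while the server is running, so it is left commented out:

```python
import json
import urllib.request

def model_ids(models_response: dict) -> list:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [m["id"] for m in models_response.get("data", [])]

def list_models(base="http://localhost:1234/v1"):
    """Fetch the models LM Studio is currently serving."""
    with urllib.request.urlopen(base + "/models", timeout=5) as resp:
        return model_ids(json.load(resp))

# With the server running, list_models() should include your downloaded model:
# print(list_models())
```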
(Image: server settings.)

- Expose the API with ngrok (Optional but Recommended)
While Cursor can hit http://localhost, using ngrok provides a stable, publicly reachable URL and avoids sandbox restrictions.

- Open Cursor AI Settings
Launch Cursor.
Open Settings (Ctrl+Shift+J / Cmd+Shift+J).
Navigate to the Models section.
- Configure the Custom OpenAI API
Enter the placeholder key and Base URL as described in Part 2 above; remember the trailing /v1.

- Model Overrides
Add a custom model name matching the ID shown in LM Studio, as described in Part 2 above.

- Verify the Connection
Open Cursor Chat (Ctrl+L / Cmd+L).
Send a simple prompt: "Are you running locally?".
Watch the LM Studio logs; you should see POST requests hit the server.
(Image: LM Studio logs showing incoming requests.)
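If you want to script the same check, the response that comes back is standard OpenAI chat-completion JSON. A small parser (a sketch; the helper name and sample data are mine) pulls out the reply text:

```python
def assistant_reply(completion: dict) -> str:
    """Extract the assistant's text from an OpenAI-style
    chat completion response."""
    return completion["choices"][0]["message"]["content"]

# Hypothetical response shaped like what LM Studio returns.
sample = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Yes, I'm served from your machine."}}
    ]
}
print(assistant_reply(sample))  # Yes, I'm served from your machine.
```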