Tools: Build AI Agents Locally and Access Them Anywhere with Langflow

In this guide:

Understanding Langflow in Practice

Why Running It Locally Makes Sense

Getting Started with Installation

Option 1: Using uv

Option 2: Using pip

Option 3: Using Docker

Running Langflow on a Different Port

A More Stable Setup with Docker Compose

Making Your Local Setup Accessible

Create a Public URL

Adding Basic Protection

Building Your First Flow

Exploring a RAG Workflow

Running Everything Locally with Ollama

Using Your Flow as an API

Flexibility Across Tools and Models

Conclusion

There was a time when building AI agents meant stitching together multiple libraries, writing glue code, and debugging small mismatches between APIs. That approach still exists, but tools like Langflow have made the process far more approachable. Instead of writing everything from scratch, you work on a visual canvas where components connect like building blocks.

What makes this even more interesting is that you can run the entire system on your own machine. Your workflows, your data, and your experiments stay local. That also introduces a small limitation: by default, your setup is only accessible on your own network. If you want to test your agent on your phone, share it with someone else, or integrate it with an external service, you need a way to expose it.

This guide walks through that journey in a practical, grounded way. You will install Langflow, run it locally, and then make it accessible from anywhere using a simple tunneling approach. The exact commands and configuration files are collected in the command reference at the end.

Understanding Langflow in Practice

Langflow is a visual environment for building AI pipelines. Instead of writing long scripts, you connect components such as language models, vector databases, APIs, and logic blocks on a canvas. Each workflow you create is more than a visual diagram: it becomes a working API endpoint, which means your flow is not only interactive in the UI but also usable in real applications.

What stands out is flexibility. You are not tied to one provider. You can switch between different models, use local inference, or connect external tools depending on your needs.

Why Running It Locally Makes Sense

Running Langflow on your own system gives you control that cloud setups cannot always offer. Privacy is the most obvious benefit: if you are working with sensitive documents or internal datasets, keeping everything on your machine avoids unnecessary exposure. Cost is another factor. When paired with local models, you can experiment freely without worrying about usage-based billing.

There is also a level of customisation that comes with self-hosting. You decide how your environment is configured, what services it connects to, and how data is stored.

Getting Started with Installation

You can install Langflow in several ways, depending on how comfortable you are with Python environments.

Option 1: Using uv. This method is clean and efficient, especially if you want isolated environments.

Option 2: Using pip. This works well if you already manage Python environments manually.

Option 3: Using Docker. This approach avoids local dependency management entirely; everything runs inside a container.

Whichever option you choose, open http://localhost:7860 in your browser and you should see the Langflow interface ready to use.

Running Langflow on a Different Port

If port 7860 is already occupied, you can change it easily with the --port flag, or set the LANGFLOW_PORT environment variable instead.

A More Stable Setup with Docker Compose

For longer-term usage, especially if you want persistence, Docker Compose with a database is a better choice. Create a docker-compose.yml file and a matching .env file (both shown in the command reference), then start the stack with docker compose up -d. This setup ensures your flows and configurations persist across restarts.

Making Your Local Setup Accessible

Once Langflow is running, it is still limited to your local machine. To access it remotely, you need to create a tunnel, and this is where Pinggy becomes useful.

Create a Public URL

Run the Pinggy SSH command from the command reference and you will receive a public URL that maps to your local server. You can then open that link from any device and reach your Langflow instance.

Adding Basic Protection

If you are sharing access, it is a good idea to add a simple authentication layer by appending a username and password to the tunnel command. This ensures that only users with the credentials can access your setup.

Building Your First Flow

Once everything is running, the real value comes from building workflows. A simple starting point is a question-answering agent that fetches information from the web. The basic components, listed at the end of this guide, run from a Chat Input through a search tool, parser, prompt template, and language model to a Chat Output. You connect these components visually, and each connection represents how data flows from one step to the next.

Exploring a RAG Workflow

One of the most practical use cases is Retrieval Augmented Generation (RAG). In simple terms, you allow your agent to answer questions based on your own documents. The typical flow, listed step by step in the command reference, runs from document upload and chunking through embedding, storage, retrieval, and answer generation. This approach makes your agent far more useful for domain-specific tasks.

Running Everything Locally with Ollama

If you want complete control, you can avoid external APIs entirely. Start by running a local model with Ollama, then point Langflow's model component at the local Ollama endpoint. Now your entire pipeline runs on your own machine, from document processing to response generation.

Using Your Flow as an API

Every workflow you build can be called programmatically. If you replace localhost with your public tunnel URL in the API call, you can reach your agent from anywhere.

Flexibility Across Tools and Models

Langflow supports a wide range of integrations. You can experiment with different models, connect various databases, and integrate external services without changing your entire setup. This flexibility makes it useful not just for experimentation, but also for building real applications.

Conclusion

Self-hosting Langflow changes how you approach building AI systems. Instead of relying entirely on external platforms, you gain ownership over your workflows and data. Adding remote access completes the picture: it allows your local setup to behave like a deployable service without the overhead of managing servers.

The combination of a visual builder, local control, and simple remote access creates a workflow that feels both powerful and practical. It lowers the barrier to experimentation while still giving you the tools needed for more serious projects.

Command Reference

Option 1: Install with uv

pip install uv
uv pip install langflow
uv run langflow run

Option 2: Install with pip

pip install langflow
langflow run

Option 3: Run with Docker

docker run -p 7860:7860 langflowai/langflow:latest

Then open the interface at:

http://localhost:7860

Run on a different port:

langflow run --port 8080

or:

export LANGFLOW_PORT=8080
langflow run

docker-compose.yml:

services:
  langflow:
    image: langflowai/langflow:latest
    pull_policy: always
    ports:
      - "7860:7860"
    depends_on:
      - postgres
    env_file:
      - .env
    environment:
      - LANGFLOW_DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      - LANGFLOW_CONFIG_DIR=/app/langflow
    volumes:
      - langflow-data:/app/langflow
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - langflow-postgres:/var/lib/postgresql/data

volumes:
  langflow-postgres:
  langflow-data:

.env:

POSTGRES_USER=langflow
POSTGRES_PASSWORD=changeme
POSTGRES_DB=langflow
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=changeme
LANGFLOW_AUTO_LOGIN=False

Start the stack:

docker compose up -d

Create a public tunnel with Pinggy:

ssh -p 443 -R0:localhost:7860 free.pinggy.io

Tunnel with basic authentication:

ssh -p 443 -R0:localhost:7860 -t free.pinggy.io b:username:password

Run a local model with Ollama:

ollama pull llama3.2
ollama serve

Ollama endpoint for Langflow components:

http://localhost:11434

Call a flow through the API:

curl -X POST \
  "http://localhost:7860/api/v1/run/<your-flow-id>" \
  -H "Content-Type: application/json" \
  -d '{"input_value": "What does the document say about pricing?"}'

Basic flow components:

- Chat Input for user queries
- Search tool for fetching information
- Parser to convert structured data into text
- Prompt Template to combine inputs
- Language model to generate responses
- Chat Output to display results

Typical RAG flow:

- Upload a document
- Split it into smaller chunks
- Convert those chunks into embeddings
- Store them in a vector database
- Retrieve relevant pieces during a query
- Combine them with the question
- Generate a final answer
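The RAG steps above can be sketched in plain Python. This is an illustrative toy, not Langflow's implementation: it splits a document into overlapping word chunks and "retrieves" by simple word overlap with the question, standing in for real embeddings and a vector database.

```python
# Toy sketch of the RAG steps above: chunking and retrieval.
# Real flows use embeddings and a vector database; word overlap
# keeps this example self-contained.

def split_into_chunks(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into chunks of chunk_size words, each sharing overlap words."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

def retrieve(chunks: list[str], question: str, top_k: int = 1) -> list[str]:
    """Rank chunks by word overlap with the question (stand-in for vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

document = "Pricing starts at ten dollars per month. " * 3 + "Support is available by email."
chunks = split_into_chunks(document, chunk_size=8, overlap=2)
best = retrieve(chunks, "What is the pricing per month?")
# The retrieved chunk is combined with the question into a prompt for the model.
prompt = f"Context: {best[0]}\nQuestion: What is the pricing per month?"
```

In a real flow, the split, embed, store, and retrieve stages are each a Langflow component; the point here is only the shape of the data moving between them.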
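The curl call above can also be made from Python. Below is a minimal sketch using only the standard library; the flow ID is the same placeholder you would replace with your own, and depending on how your instance is configured you may also need to add an authentication header.

```python
import json
from urllib import request  # stdlib only; swap in the requests library if you prefer

BASE_URL = "http://localhost:7860"  # or your public tunnel URL
FLOW_ID = "<your-flow-id>"          # copy this from your flow in the Langflow UI

def build_run_request(base_url: str, flow_id: str, question: str) -> request.Request:
    """Build the POST request for the /api/v1/run endpoint shown above."""
    url = f"{base_url}/api/v1/run/{flow_id}"
    payload = json.dumps({"input_value": question}).encode()
    return request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request(BASE_URL, FLOW_ID, "What does the document say about pricing?")
# With a running instance and a real flow ID, send it like this:
# response = request.urlopen(req)
# print(json.load(response))
```

Pointing BASE_URL at your tunnel URL instead of localhost is all it takes to call the same flow from another machine.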