Render Free Hosting Review 2026: Deploy Web Apps, Databases, and Cron Jobs for Free

In this guide:

- What Is Render?
- Render Free Tier: What You Actually Get
- The Cold Start Reality (Honest Review)
- How to Deploy Your First App to Render (Step 1: Create a Render Account; Step 2: Create a New Web Service)
- Deploy a Python FastAPI App on Render (project structure, main.py, requirements.txt, Render settings; optional: render.yaml for infrastructure as code)
- Deploy a Node.js / Express App on Render (basic Express server)
- Free PostgreSQL Database on Render (create a free database; connect from Python with psycopg2; connect from SQLAlchemy/FastAPI)
- Static Sites: The Best Free Feature on Render (deploy a React/Next.js static export)
- Cron Jobs on Render
- Deploy an OpenClaw AI Agent on Render for Free (self-hosted AI agent architecture)
- Render vs Heroku vs Railway vs Fly.io: Full Comparison
- Environment Variables and Secrets
- Custom Domains on Render
- Render's Auto-Deploy and Preview Deployments
- Disk Persistence: Know the Limit
- When Render's Free Tier Is Enough
- When to Upgrade
- Render vs Other Free Hosting Options
- Tips for Getting the Most Out of Render's Free Tier (keep your service warm; use environment groups; set health check paths)
- Related Reads

- Final Verdict

What Is Render?

Render is a cloud hosting platform that lets you deploy web services, APIs, databases, static sites, background workers, and cron jobs — all from a Git repository. Think of it as the modern replacement for Heroku: you push code, Render builds and deploys it automatically.

Render launched in 2019 and gained massive popularity in late 2022 when Heroku shut down its free tier. Thousands of developers migrated overnight, and Render became the default answer to the question: "Where do I host a side project for free?"

In 2026, Render's free tier is still one of the best deals in cloud hosting — if you understand exactly what you're getting and where the limits are. This guide covers everything: what's actually free, how cold starts work, how to deploy a real app, and when to upgrade.

Render Free Tier: What You Actually Get

Render's free tier is more generous than most platforms, but the details matter.

The 750-hour math: one web service running 24/7 uses about 720 hours per month, so a single always-on free service is technically within limits. But Render spins down free web services after 15 minutes of inactivity, which means those 750 hours don't stack up the way you'd expect. In practice, the service wakes up when a request comes in and goes back to sleep when traffic stops.

The Cold Start Reality (Honest Review)

Here's what many Render guides skip over: free web services have a cold start delay of 30–60 seconds. When your service has been inactive for 15+ minutes and receives a new request, Render wakes it up — but the first request sits waiting while the container boots.

This is the number one complaint about Render's free tier, and it's legitimate. For a public-facing web app, a 50-second loading screen on the first visit is bad UX.

There are workarounds: a scheduled pinger that keeps the service warm, splitting a static frontend from a serverless API, accepting the trade-off for internal tools, or upgrading to a paid instance (the full list is in the tips and checklists at the end of this guide).

The cold start behavior is a feature of the free tier specifically, not a platform limitation. Render's paid instances ($7/month starter) stay warm and respond instantly. For personal projects and demos, the free tier is excellent.
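On the client side, the cold start can also be absorbed with a retry-and-backoff wrapper around the first request. A minimal sketch (the helper name and delay values are illustrative, not anything Render provides):

```python
import time


def call_with_retry(fn, attempts=4, base_delay=2.0):
    """Call fn(), retrying with exponential backoff on failure.

    Useful for the first request to a spun-down free service:
    the container may take 30-60 seconds to boot, so early
    attempts can time out before the app is ready.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))


# Example with a hypothetical service URL (uncomment to use):
# import urllib.request
# body = call_with_retry(
#     lambda: urllib.request.urlopen(
#         "https://your-service.onrender.com/", timeout=90
#     ).read()
# )
```

A generous per-request timeout matters as much as the retries: a 10-second default will give up long before the container finishes booting.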
For anything user-facing, budget $7/month or use static hosting.

How to Deploy Your First App to Render

Step 1: Create a Render Account. Sign up at render.com with GitHub, GitLab, or email, and connect your Git provider when prompted — that's how Render pulls your code. No credit card is required.

Step 2: Create a New Web Service. From the dashboard, click New + → Web Service, select the repository you want to deploy, choose a name, region, and branch, set the runtime, build command, and start command, then pick the free plan and click Create Web Service.

Render detects the runtime automatically in most cases. For a Python project with a requirements.txt, it sets the build command to pip install -r requirements.txt and asks you for the start command.

Deploy a Python FastAPI App on Render

Here's a complete working example — a FastAPI app you can deploy to Render's free tier in under 5 minutes (the full project structure, main.py, and requirements.txt are in the code listings at the end of this article). Note the --port $PORT in the start command: Render injects the port via environment variable, so always use $PORT, never a hardcoded port number.

After deploying, your API will be live at https://your-service-name.onrender.com. Render provides HTTPS automatically — no certificate setup required.

Optional: render.yaml (Infrastructure as Code)

You can define your entire deployment in a render.yaml file committed to your repo. With this file in place, you can deploy the same setup repeatedly by clicking New → Blueprint on Render. Useful for open-source projects where users can self-host with one click.

Deploy a Node.js / Express App on Render

A basic Express server deploys the same way. Render settings: Build Command: npm install, Start Command: npm start. Done.

Free PostgreSQL Database on Render

Render offers one free PostgreSQL database per account with 256MB storage and 97 concurrent connections. The catch: free databases expire after 90 days. You get a notification before expiration and can recreate the database — but you need to handle data migration yourself.

Set DATABASE_URL in your Render web service's Environment Variables by linking it to your database service — Render makes this easy with the "Link a database" dropdown.

90-day expiration strategy: for hobby projects, the 90-day limit is manageable. Before expiration, use pg_dump to export your data, create a new free database, and restore with pg_restore. It takes about 10 minutes. For anything that needs a permanent database, look at Neon or Supabase — both offer free PostgreSQL tiers without expiration.

Static Sites: The Best Free Feature on Render

Static site hosting on Render is genuinely unlimited and free, with no cold starts. This is the hidden gem of Render's free tier.
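The pg_dump/pg_restore migration described in the database section above can be scripted end to end. A minimal sketch, assuming the PostgreSQL client tools are on your PATH and using the External Database URLs from the Render dashboard (the function names and backup filename are my own, not a Render API):

```python
import subprocess


def dump_command(db_url: str, dump_path: str) -> list[str]:
    # Custom-format dump; --no-owner avoids role mismatches
    # between the old and new database instances.
    return ["pg_dump", "--format=custom", "--no-owner",
            f"--file={dump_path}", db_url]


def restore_command(db_url: str, dump_path: str) -> list[str]:
    # --clean --if-exists makes the restore safe to re-run.
    return ["pg_restore", "--no-owner", "--clean", "--if-exists",
            f"--dbname={db_url}", dump_path]


def migrate(old_url: str, new_url: str,
            dump_path: str = "render_backup.dump") -> None:
    """Copy all data from the expiring free database to a fresh one."""
    subprocess.run(dump_command(old_url, dump_path), check=True)
    subprocess.run(restore_command(new_url, dump_path), check=True)
```

Run it once shortly before the expiration date, then repoint your service's DATABASE_URL at the new database in the dashboard.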
Free static sites come with no spin-down, a global CDN, automatic HTTPS, custom domain support, auto-deploy on every push, pull request previews, and 100 GB of bandwidth per month. This makes Render a strong alternative to Netlify and Vercel for static hosting.

The main limitation: Render's CDN has fewer PoPs than Cloudflare's network. For global performance, Cloudflare Pages is faster. But for straightforward static hosting, Render's free tier is excellent.

Deploy a React/Next.js Static Export

For a Next.js static export, set output: 'export' in next.config.js, use npm install && npm run build as the build command, and out as the publish directory.

Cron Jobs on Render

Render's cron job service lets you run scheduled scripts on a cron schedule — useful for database cleanups, report generation, data sync jobs, and anything else that needs to run periodically. Cron jobs on the free plan run on shared infrastructure with some execution time limits. For jobs that finish in under a few minutes, the free plan works well. For long-running jobs, upgrade to a paid plan.

Deploy an OpenClaw AI Agent on Render for Free

OpenClaw is an open-source AI agent platform that you can self-host. Deploying OpenClaw on Render gives you a free, persistent AI agent accessible from anywhere — combine it with a free AI API like Groq or Google Gemini for a fully free AI stack.

This setup gives you a free AI agent API hosted on Render, using Groq's free LLM inference. The agent endpoint is publicly accessible at your Render URL — no infrastructure to manage, no monthly bill. For a production OpenClaw setup with memory and tool use, connect the Render PostgreSQL database to store conversation history and agent state. The 256MB free database is enough for thousands of conversation turns.

Render vs Heroku vs Railway vs Fly.io: Full Comparison

Render vs Heroku: Render is the clear winner for free hosting — Heroku eliminated its free tier in 2022. Even Heroku's cheapest paid plan ($5/month eco dynos) has cold starts and less generous specs than Render's $7/month starter.

Render vs Railway: Railway's $5 credit/month is flexible but small — a single web service plus a database can eat through $5 quickly. Render's free tier is more generous for pure hosting hours. Railway edges ahead for developer experience and build speeds.
Render vs Fly.io: Fly.io's free tier gives you actual always-on VMs (no cold starts) but requires Docker knowledge and a CLI-first workflow. Render is friendlier for beginners. For serious workloads on a budget, Fly.io often wins on price-to-performance.

Environment Variables and Secrets

Render has a solid environment variable system. You can set variables in the dashboard, reference them across services, and create "Secret Files" for things like service account JSON files. Render also supports Environment Groups — shared variable sets you can attach to multiple services. If you have five services that all need the same API key, define it once in an environment group and reference it everywhere.

Custom Domains on Render

Adding a custom domain to a Render service is free on all plans: add the domain in your service settings, create the CNAME record Render shows you at your DNS provider, and Render auto-provisions a Let's Encrypt certificate once DNS propagates. No additional cost, no certificate management. Render handles renewal automatically.

Render's Auto-Deploy and Preview Deployments

Every push to your connected branch triggers a new deployment automatically. Render also supports preview deployments for pull requests — each PR gets its own unique URL, so you can test changes before merging. For teams, preview deployments alone are worth using Render. Each preview environment is isolated, uses the same build process, and is torn down when the PR closes. No extra cost on the free tier.

Disk Persistence: Know the Limit

Free Render services are ephemeral — the local filesystem is not persistent between deploys or restarts. If your code writes files to disk (uploaded images, cached data, SQLite databases), those files disappear on the next deploy or spin-down. This is standard behavior for PaaS platforms — Heroku, Railway, and Fly.io all have the same constraint on their lower tiers. Designing your app for stateless operation makes it portable across any cloud platform.

When Render's Free Tier Is Enough

The free tier is the right choice for side project APIs, internal tools, static websites, demos and prototypes, cron jobs, and webhook receivers.

When to Upgrade

Upgrade to Render's paid plans ($7/month for web services) when cold starts hurt real users, you need persistent disk or more compute, the 90-day database expiration becomes a problem, or you need team collaboration. At $7/month for a starter web service plus $7/month for a starter PostgreSQL, you're at $14/month for a production-grade full-stack deployment with no cold starts, persistent data, and 1GB RAM.
For a side project with real users, that's a reasonable spend.

Render vs Other Free Hosting Options

Render fills the gap between "static only" platforms (Netlify, Cloudflare Pages) and "you need to know Docker" platforms (Fly.io). If you have a Python, Node, Go, or Ruby app and want Git-based deployment with zero devops, Render is the easiest path.

Tips for Getting the Most Out of Render's Free Tier

Keep your service warm: register at cron-job.org and create a job that GET-requests https://your-service.onrender.com/ping every 10 minutes. Your service stays warm and the free tier's cold starts become a non-issue.

Use Environment Groups: if you're running multiple free services (a web service, a background worker, a cron job), create an Environment Group in the Render dashboard with shared secrets. One update propagates to all services automatically.

Set Health Check Paths: Render uses health checks to determine if your service is running. Set a dedicated health check path in Settings → Health & Alerts → Health Check Path. Render pings this endpoint and restarts the service if it returns non-200 responses.

Final Verdict

Render is the best free PaaS for developers in 2026 — it replaced Heroku, and for good reason. The combination of 750 free web service hours, unlimited static sites, free PostgreSQL, free Redis, and free cron jobs is genuinely hard to beat.

The cold start limitation on free web services is real and matters for public-facing apps. The 90-day database expiration requires active management. But for side projects, internal tools, prototypes, and learning, Render's free tier is more than capable.

The developer experience — Git push to deploy, automatic HTTPS, preview deployments per PR, environment groups, dashboard with logs — is excellent. Render has clearly built a product designed for developers first.

Start with free. If you need always-on performance, the $7/month starter plan is one of the best values in cloud hosting. For a full-stack app (web service + database), $14/month gets you production infrastructure with no ops overhead. Get started at render.com — no credit card required.
Code Listings and Checklists

Project Structure

```
my-api/
├── main.py
├── requirements.txt
└── render.yaml   # optional, for infrastructure-as-code
```

main.py

```python
from fastapi import FastAPI
from pydantic import BaseModel
import os

app = FastAPI(title="My Free API on Render")

class MessageRequest(BaseModel):
    text: str

@app.get("/")
def health_check():
    return {"status": "ok", "environment": os.environ.get("RENDER_ENV", "local")}

@app.post("/echo")
def echo(request: MessageRequest):
    return {"received": request.text, "length": len(request.text)}

@app.get("/items/{item_id}")
def get_item(item_id: int, q: str = None):
    result = {"item_id": item_id}
    if q:
        result["query"] = q
    return result
```

requirements.txt

```
fastapi==0.115.0
uvicorn[standard]==0.30.0
```

Render Settings (FastAPI)

- Runtime: Python 3
- Build Command: pip install -r requirements.txt
- Start Command: uvicorn main:app --host 0.0.0.0 --port $PORT

render.yaml

```yaml
services:
  - type: web
    name: my-fastapi-app
    runtime: python
    buildCommand: pip install -r requirements.txt
    startCommand: uvicorn main:app --host 0.0.0.0 --port $PORT
    plan: free
    envVars:
      - key: PYTHON_VERSION
        value: 3.11.0
```

Basic Express Server (server.js)

```javascript
// server.js
const express = require('express');
const app = express();
app.use(express.json());

app.get('/', (req, res) => {
  res.json({ status: 'ok', platform: 'Render' });
});

app.post('/webhook', (req, res) => {
  console.log('Received webhook:', req.body);
  res.json({ received: true });
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
```

package.json (partial)

```json
{
  "scripts": { "start": "node server.js" },
  "dependencies": { "express": "^4.18.2" }
}
```

Step 1: Create a Render Account

- Go to render.com and sign up with GitHub, GitLab, or email
- Connect your GitHub or GitLab account when prompted — this is how Render pulls your code
- No credit card required to start

Step 2: Create a New Web Service

- From the Render dashboard, click New + → Web Service
- Select the Git repository you want to deploy
- Choose a name, region (US East, US West, Frankfurt, Singapore, or Ohio), and branch
- Set the Runtime (Python, Node, Go, Ruby, Rust, Docker, etc.)
- Set the Build Command and Start Command
- Choose the free plan and click Create Web Service

Cold Start Workarounds

- Scheduled pinger: use a free cron service (like cron-job.org) to ping your Render URL every 10 minutes, keeping it warm
- Static site + serverless API: host your frontend as a Render static site (always on) and use a different free service (Cloudflare Workers, Vercel functions) for dynamic endpoints
- Accept the trade-off: for internal tools, demos, or low-traffic APIs where cold starts are fine, Render free is perfectly usable
- Upgrade to a paid instance ($7/month): paid instances have no spin-down and are always on

Create a Free Database

- In the Render dashboard, click New + → PostgreSQL
- Name it, choose a region, select Free plan
- Click Create Database
- Copy the Internal Database URL (use this for services within Render) or External Database URL (use this from outside Render)

Connect from Python (psycopg2)

```python
import os
import psycopg2

DATABASE_URL = os.environ.get("DATABASE_URL")

conn = psycopg2.connect(DATABASE_URL)
cursor = conn.cursor()

cursor.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id SERIAL PRIMARY KEY,
        email VARCHAR(255) UNIQUE NOT NULL,
        created_at TIMESTAMP DEFAULT NOW()
    )
""")
conn.commit()

cursor.execute(
    "INSERT INTO users (email) VALUES (%s) ON CONFLICT DO NOTHING",
    ("[email protected]",),
)
conn.commit()

cursor.execute("SELECT * FROM users")
rows = cursor.fetchall()
print(rows)

cursor.close()
conn.close()
```

Connect from SQLAlchemy (FastAPI)

```python
from sqlalchemy import create_engine, Column, Integer, String, DateTime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import os
from datetime import datetime

DATABASE_URL = os.environ["DATABASE_URL"]

# Render PostgreSQL URLs start with postgres://, SQLAlchemy needs postgresql://
if DATABASE_URL.startswith("postgres://"):
    DATABASE_URL = DATABASE_URL.replace("postgres://", "postgresql://", 1)

engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True, index=True)
    email = Column(String, unique=True, index=True)
    created_at = Column(DateTime, default=datetime.utcnow)

Base.metadata.create_all(bind=engine)
```

Free Static Site Features

- No spin-down (always instantly accessible)
- Global CDN distribution
- Automatic HTTPS
- Custom domain support
- Auto-deploy on every Git push
- Pull request previews (preview deployments for every PR)
- 100 GB bandwidth/month

Next.js Static Export

```javascript
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export', // generates static files in /out
}
module.exports = nextConfig
```

Render settings for a Next.js static export:

- Build Command: npm install && npm run build
- Publish Directory: out

Cron Job Blueprint

```yaml
# render.yaml for a cron job
services:
  - type: cron
    name: daily-cleanup
    runtime: python
    buildCommand: pip install -r requirements.txt
    schedule: "0 2 * * *"  # run at 2 AM UTC daily
    startCommand: python cleanup.py
```

```python
# cleanup.py
import os
import psycopg2
from datetime import datetime, timedelta

def cleanup_old_records():
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    cursor = conn.cursor()
    cutoff_date = datetime.now() - timedelta(days=30)
    cursor.execute(
        "DELETE FROM events WHERE created_at < %s",
        (cutoff_date,),
    )
    deleted = cursor.rowcount
    conn.commit()
    conn.close()
    print(f"Cleaned up {deleted} old records at {datetime.now()}")

if __name__ == "__main__":
    cleanup_old_records()
```

Self-Hosted AI Agent Architecture

```yaml
# render.yaml for OpenClaw + AI API backend
services:
  - type: web
    name: ai-agent-backend
    runtime: python
    buildCommand: pip install -r requirements.txt
    startCommand: uvicorn agent:app --host 0.0.0.0 --port $PORT
    plan: free
    envVars:
      - key: GROQ_API_KEY
        sync: false  # set manually in dashboard
      - key: DATABASE_URL
        fromDatabase:
          name: agent-db
          property: connectionString

databases:
  - name: agent-db
    plan: free
```

```python
# agent.py — AI agent backend using Groq via OpenClaw
import os
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()

# Use Groq's free API with its OpenAI-compatible endpoint
client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",
)

class AgentRequest(BaseModel):
    message: str
    system_prompt: str = "You are a helpful assistant."

@app.post("/chat")
async def chat(request: AgentRequest):
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[
            {"role": "system", "content": request.system_prompt},
            {"role": "user", "content": request.message},
        ],
    )
    return {
        "response": response.choices[0].message.content,
        "model": "llama-3.3-70b-versatile",
        "provider": "groq",
    }

@app.get("/health")
def health():
    return {"status": "ok"}
```

Environment Variable Access

```python
# Access in Python
import os
DATABASE_URL = os.environ["DATABASE_URL"]
API_KEY = os.environ["MY_API_KEY"]
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
```

```javascript
// Access in Node.js
const databaseUrl = process.env.DATABASE_URL;
const apiKey = process.env.MY_API_KEY;
```

Keepalive Endpoint

```python
# Add this endpoint to your app, then schedule a ping every 10 minutes
# with a free external cron service such as cron-job.org
@app.get("/ping")
def ping():
    return {"pong": True}
```

Health Check Endpoint

```python
# FastAPI health endpoint
@app.get("/health")
def health_check():
    return {"status": "healthy"}
```

Custom Domain Setup

- Go to your service settings → Custom Domains
- Add your domain (e.g., api.yourdomain.com)
- Add the CNAME record Render shows you to your DNS provider
- Wait for DNS propagation (usually under 5 minutes with Cloudflare)
- Render auto-provisions an SSL certificate via Let's Encrypt

Alternatives to Local Disk

- For databases: use Render's PostgreSQL instead of SQLite
- For file uploads: store in AWS S3, Cloudflare R2, or Supabase Storage
- For caching: use Redis (Render offers a free Redis instance) or an external cache
- Render Disks: paid feature ($0.25/GB/month) adds persistent disk storage

When the Free Tier Is Enough

- Side project APIs: a FastAPI or Express backend for your portfolio app — cold starts are acceptable, and 750 hours covers real usage
- Internal tools: an admin dashboard or automation API used by a few people — cold starts are tolerable for internal users
- Static websites: portfolios, documentation sites, marketing pages — always on, globally cached, zero cost
- Demos and prototypes: showing something to a client or investor — spin up a real deployed URL, not localhost
- Cron jobs: daily database cleanups, scheduled reports, periodic data syncs
- Webhooks: receiving webhooks from GitHub, Stripe, or other services — the webhook wakes the service if needed

When to Upgrade

- Cold starts hurt real users: your service is public-facing and 50-second first loads are unacceptable
- You need persistent disk: for file uploads, SQLite, or any write-to-disk workflow
- Database expiration is a problem: the 90-day free database limit is untenable for production data
- You need more compute: the free tier gives you 0.1 CPU and 512MB RAM — enough for light workloads, not for AI inference or heavy processing
- You need team collaboration: multiple developers, role-based access, audit logs

Related Reads

- Supabase vs Neon: Which Free PostgreSQL Database Should You Use in 2026?
- Vercel vs Netlify vs Cloudflare Pages: Free Frontend Hosting Compared
- Railway App Review 2026: The Best Heroku Alternative for Developers
- Oracle Cloud Always Free: Get a 4-Core 24GB ARM VPS for Free
- 7 Best Free Web Hosting for Developers: Cloudflare Pages, Vercel, Netlify and More