Self-Hosting in 2026: What Is Actually Worth Running Yourself

Three years ago, I moved as many services as possible to self-hosted alternatives. Some decisions were brilliant. Others were expensive time sinks. Here's what I've learned about what's actually worth running on your own metal in 2026.

The Landscape Has Changed

Self-hosting in 2026 is wildly different from even 2022:

- VPS prices dropped — you can get 4 vCPU / 8GB RAM for under $20/month from providers like Hetzner, Netcup, or Oracle's free tier
- Docker Compose is mature — most self-hosted projects have a docker-compose.yml that just works
- Reverse proxy tooling is great — Traefik, Caddy, and nginx-proxy-manager make SSL and routing easy
- Backup tools are better — Restic, Borgmatic, and rclone handle backups reliably

The question isn't "can you self-host?" anymore. It's "should you?"

Definitely Worth Self-Hosting

Databases

This one's a no-brainer for most teams. A managed PostgreSQL instance costs $15-50/month minimum. Running PostgreSQL in Docker on a $20 VPS gives you the same thing with more control.

```yaml
# docker-compose.yml
services:
  postgres:
    image: postgres:17
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: myapp
    ports:
      - "127.0.0.1:5432:5432"

  backup:
    image: prodrigestivill/postgres-backup-local
    volumes:
      - ./backups:/backups
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_DB: myapp
      # credentials must match the postgres service above
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      SCHEDULE: "@daily"
      BACKUP_KEEP_DAYS: 30

volumes:
  pgdata:
```

Caveat: If you need automatic failover, read replicas, or point-in-time recovery without thinking about it, managed databases are worth the premium. For a small team with good backup practices, self-hosting is fine.

Authentication

Auth is one of the most satisfying things to self-host. Managed auth services charge per MAU (monthly active user), which adds up fast.

- Keycloak — enterprise-grade, Java-based, resource-heavy but feature-complete. Best for SAML/enterprise SSO.
- Authentik — Python-based, modern UI, good for identity-aware proxying. Nice if you want SSO for your self-hosted apps.
- Authon — Node-based, lighter weight, focused on developer APIs. Good if you're building a product that needs auth as an API.
- Zitadel — Go-based, good performance, built-in OIDC support.

All of these are dramatically cheaper than Clerk, Auth0, or Firebase Auth once you pass a few thousand users. The trade-off is maintenance time.

```yaml
# Example: Authon with Docker Compose
services:
  authon:
    image: authon/server:latest
    environment:
      DATABASE_URL: postgres://authon:secret@postgres:5432/authon
      JWT_SECRET: ${JWT_SECRET}
    ports:
      - "127.0.0.1:3100:3100"
    depends_on:
      - postgres
```

Analytics

Managed analytics (Google Analytics, Mixpanel) are either privacy-invasive or expensive at scale. Self-hosted alternatives that actually work:

- Umami — the one I recommend most. Clean, fast, privacy-focused. 10 minutes to deploy.
- Plausible — similar to Umami, slightly more opinionated. Has a hosted option too.
- PostHog — more than analytics (feature flags, session replays), but heavier to run.

```yaml
services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    environment:
      DATABASE_URL: postgres://umami:secret@postgres:5432/umami
    ports:
      - "127.0.0.1:3000:3000"
    depends_on:
      - postgres
```

Git Hosting

For private repos that don't need GitHub's ecosystem:

- Gitea — lightweight, Go-based, fast. Closest to a "light GitHub."
- Forgejo — Gitea fork with community governance.

I run Gitea for personal projects and private experiments. It uses about 100MB of RAM. GitHub stays for open-source and collaboration-heavy work.

Woodpecker CI (a Drone fork) or Gitea Actions pair well with self-hosted git. For heavier needs, a self-hosted GitLab Runner works but requires more resources.

Monitoring

Uptime Kuma for status pages and uptime monitoring — it's shockingly good for how simple it is. One Docker container, beautiful UI, notifications to Slack/Discord/email.

The Gray Area

Email (Receiving)

Running your own mail server is possible, and mature tools exist (Stalwart, Maddy, Mail-in-a-Box). But email deliverability is an endless battle, and major providers (Gmail, Outlook) are suspicious of small mail servers.

My take: Host your own for receiving (it's easy). Use a transactional email service (Resend, Postmark, SES) for sending. The split approach works well.

Object Storage

MinIO is a solid S3-compatible object store that self-hosts well. But at $0.023/GB/month for S3 (or cheaper with Backblaze B2 at $0.005/GB), the math only works out for self-hosting if you have terabytes of data.

VPN / Network

WireGuard is trivial to self-host and works beautifully. Tools like Headscale (a self-hosted Tailscale control server) make it even better. This one's a clear win if you have more than a couple of machines to connect.

Not Worth Self-Hosting

DNS

Just use Cloudflare (free tier) or Route53. DNS is too critical to put on a server that might go down. The propagation, redundancy, and edge requirements make self-hosting DNS pointless for almost everyone.

CDN

You cannot replicate what Cloudflare, Fastly, or CloudFront do with a single server. Don't try.

Transactional Email Sending

As mentioned above, email deliverability from a small server is pain you don't need. Services like Resend or Postmark cost pennies per email and actually reach inboxes.

Full-Featured Project Management

Running your own Jira or Linear alternative (like Plane, Focalboard, or Taiga) sounds appealing until you realize the time spent maintaining it exceeds the subscription cost. These tools need to be reliable for your whole team — that's a high bar for self-hosted software.

Cost Comparison: Small Team (5 developers)

The savings are real. But you're trading money for time. Budget 2-4 hours per month for updates, backups, and the occasional debugging session.

My Self-Hosting Stack

Here's what I actually run on a single Hetzner CX31 (4 vCPU, 8GB RAM, $15/month):

- Traefik (reverse proxy + auto SSL)
- PostgreSQL 17
- Gitea
- Umami
- Uptime Kuma
- Headscale (WireGuard mesh)
- Authon (auth for side projects)
- Miniflux (RSS reader, because I still use RSS)

All managed with a single docker-compose.yml, backed up nightly to Backblaze B2 with restic. Total resource usage: ~3GB RAM, 20% CPU average.

Tips From Three Years of Self-Hosting

- Automate backups from day one. Not "I'll set it up later." Day one.
- Use Docker Compose for everything. Don't install packages on the host.
- Pin image versions. postgres:17, not postgres:latest. You want predictable updates.
- Set up monitoring before you need it. Uptime Kuma takes 5 minutes. Do it.
- Keep your docker-compose.yml in version control. Your server config IS your infrastructure-as-code.
- Test your restore process. Backups are useless if you've never tested restoring from them.
- Don't self-host things your team depends on for daily work unless you're confident in your uptime.

Final Verdict

Self-hosting in 2026 is the best it's ever been. The tooling is mature, the community resources are extensive, and the cost savings are significant. But it's not zero-effort — it's a trade-off between money and attention.

Start with one or two services, get comfortable with the maintenance routine, and expand from there. Don't try to self-host everything at once. That way lies burnout and a 3 AM page about disk space.
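The Uptime Kuma deployment mentioned under Monitoring really is one container. A minimal compose sketch, assuming the project's published image and defaults (the service and volume names are my choices, not from the original):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1     # official image, pinned to the 1.x line
    restart: always
    volumes:
      - kuma-data:/app/data           # SQLite database and settings live here
    ports:
      - "127.0.0.1:3001:3001"         # local-only; put the reverse proxy in front

volumes:
  kuma-data:
```

Binding only to 127.0.0.1 keeps the UI off the public interface; Traefik or Caddy then handles TLS and routing, matching the rest of the stack.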
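The nightly restic-to-Backblaze-B2 backup from the stack section can be sketched as below. The bucket name, paths, and retention window are illustrative assumptions; the credentials are placeholders for your own Backblaze key:

```bash
# Credentials for the B2 backend (placeholders, set from your Backblaze account)
export B2_ACCOUNT_ID=...
export B2_ACCOUNT_KEY=...

# One-time: initialize the encrypted repository
restic -r b2:my-backup-bucket:server init

# Nightly (cron or systemd timer): back up compose data and SQL dumps,
# then apply retention and prune old data
restic -r b2:my-backup-bucket:server backup /opt/stack
restic -r b2:my-backup-bucket:server forget --keep-daily 30 --prune

# Periodically: actually test the restore, per the tips above
restic -r b2:my-backup-bucket:server restore latest --target /tmp/restore-test
```

The restore line is the part most setups skip; running it on a schedule is what turns "we have backups" into "we can recover."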
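Headscale, from the VPN section, also fits the one-container pattern. A compose sketch assuming the project's standard config location; the version tag and host paths are illustrative:

```yaml
services:
  headscale:
    image: headscale/headscale:0.23       # pin a release rather than :latest
    restart: always
    command: serve
    volumes:
      - ./headscale/config:/etc/headscale # config.yaml goes here
      - headscale-data:/var/lib/headscale # coordination state
    ports:
      - "8080:8080"                       # WireGuard clients register against this

volumes:
  headscale-data:
```

Unlike the other services, the control endpoint must be reachable by every machine in the mesh, so it is typically published through the reverse proxy on a real domain rather than bound to localhost.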
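Most of the 2-4 hours of monthly maintenance is a short, repeatable pass. A sketch of an update routine for a compose-managed host (the directory is an illustrative assumption):

```bash
cd /opt/stack            # wherever the docker-compose.yml lives
docker compose pull      # fetch newer images for the pinned tags
docker compose up -d     # recreate only containers whose image or config changed
docker image prune -f    # reclaim disk from superseded images
```

Because the tags are pinned, `pull` only picks up patch updates you have opted into; bumping a major version stays a deliberate edit to the compose file in version control.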