Tools: Update: Building My Own Self-Hosted VPN Infrastructure

Starting Small

A few weeks ago I started setting up my own small cloud infrastructure instead of just deploying random projects on platforms like Vercel or Render. Those platforms are genuinely great for getting started, but after a point I wanted something I could control completely myself. Not just deploying apps, but actually managing the infrastructure behind them. I wanted my own VPN, reverse proxy, hosted services, proper HTTPS and eventually a fully automated infrastructure stack that I could rebuild from scratch whenever I wanted.

So I rented a small VM on Oracle Cloud Infrastructure and started experimenting. I picked OCI mostly because I didn't want to spend real money while learning. Their free compute instances are honestly good enough to start understanding how all these systems fit together. Eventually I also want to move parts of this setup onto my own home server.

What started as "just set up a VPN" very quickly turned into learning Linux networking, Docker, DNS, HTTPS, reverse proxies, cloud firewalls, WireGuard internals and a lot of debugging. This post is mostly about that process.

The VM itself was tiny: just 1 GB of RAM and a pretty low-end CPU. That immediately forced me to think carefully about what I was running instead of blindly deploying every tool I found online.

At that point I already had a few Dockerized services running, including my portfolio deployment setup and some personal tooling I'll probably write about later. I wanted the VPN to become part of a larger self-hosted ecosystem instead of being an isolated side project.

After researching a bit, I found wg-easy. It handled most of the annoying parts of managing WireGuard for me: peer management, QR code generation, mobile configuration and repetitive setup work. That meant I could focus on understanding the networking itself instead of manually editing configuration files all day.

Around the same time I also decided to move away from NGINX.
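(A quick aside before the proxy story: for anyone curious what a wg-easy deployment looks like, it's typically a single Compose service. The sketch below follows wg-easy's documented defaults, but the hostname is a placeholder and my actual compose file may differ.)

```yaml
# docker-compose.yml sketch for wg-easy, based on its documented defaults.
# WG_HOST and the volume path are placeholders, adjust before use.
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy
    environment:
      - WG_HOST=vpn.example.com   # public hostname clients connect to
    volumes:
      - ./wireguard:/etc/wireguard
    ports:
      - "51820:51820/udp"   # WireGuard tunnel traffic (UDP!)
      - "51821:51821/tcp"   # web dashboard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```

The UDP port on line one of the `ports` list turns out to matter a lot later in this story.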
Earlier I was using NGINX to reverse proxy my services, and while it worked perfectly fine, I constantly found myself manually handling certificates, creating separate config files and repeating the same setup process for every service. I wanted something simpler and easier to automate.

That's when I switched to Caddy. Automatic HTTPS, simpler configs and easier reverse proxying made the entire infrastructure feel significantly cleaner. I set up subdomain routing, HTTPS for all services and eventually exposed the VPN dashboard through:

```
vpn.samay15jan.com
```

At that point the setup finally started feeling like actual infrastructure instead of random containers running on a server.

The Networking Chaos

The part I underestimated the most was networking itself. TCP vs UDP, OCI security lists, UFW rules, Docker networking, port forwarding: suddenly everything felt like separate layers fighting each other. I had never really worked with real infrastructure networking before, so debugging all of it at once became overwhelming very quickly.

The Bug That Broke Everything

Then came the worst part. The VPN connected from my phone…

…but there was no internet access.

At first I thought the issue had to be something complicated: NAT problems, Docker networking, IP forwarding, WireGuard routes or firewall rules. I kept jumping between layers trying to figure out where the packets were disappearing. I spent hours checking:

- sysctl configs
- Docker host networking
- OCI firewall settings
- WireGuard peer configs

and none of it seemed to explain the issue. The actual bug turned out to be much simpler.

The issue was Cloudflare. My VPN subdomain was still proxied through Cloudflare, which meant:

```
vpn.samay15jan.com
```

was resolving to Cloudflare IPs instead of my actual OCI server. Since WireGuard uses UDP and Cloudflare's proxy only handles HTTP traffic, the VPN packets never actually reached my machine.

The biggest clue came from checking WireGuard's status on the server. There was no handshake. No transfer stats. Nothing. That immediately told me the server wasn't even hearing from the client.
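In hindsight, this failure mode is easy to detect mechanically: if the VPN hostname resolves into one of Cloudflare's published IP ranges, the record is still proxied. A minimal sketch of that check (the CIDR blocks below are only a subset of Cloudflare's published IPv4 ranges; see cloudflare.com/ips for the authoritative, current list):

```python
import ipaddress

# A subset of Cloudflare's published IPv4 ranges, enough to illustrate
# the check. The full list lives at cloudflare.com/ips.
CLOUDFLARE_RANGES = [
    ipaddress.ip_network("104.16.0.0/13"),
    ipaddress.ip_network("172.64.0.0/13"),
    ipaddress.ip_network("173.245.48.0/20"),
]

def is_cloudflare_ip(ip: str) -> bool:
    """True if the resolved address belongs to a known Cloudflare range,
    i.e. the DNS record is still orange-cloud proxied."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUDFLARE_RANGES)

# If your VPN hostname resolves like the first example, WireGuard's UDP
# packets are going to Cloudflare's edge, not to your server.
print(is_cloudflare_ip("104.21.5.9"))    # True: a proxied edge address
print(is_cloudflare_ip("140.238.1.2"))   # False: looks like an origin IP
```

Resolving the hostname first (for example with `socket.gethostbyname`) and feeding the result into this check would have saved me those hours.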
After comparing DNS resolution, public IPs and Cloudflare settings, I realized the DNS record was still orange-cloud proxied. The moment I switched it to DNS-only (grey cloud), everything suddenly started working.

Seeing

```
latest handshake: 25 seconds ago
```

felt weirdly satisfying after spending hours debugging networking layers I barely understood a few days earlier.

That moment also changed how I think about debugging infrastructure problems. Most of the time the issue is not some deep magical failure. It's usually a wrong assumption somewhere between systems interacting with each other.

The Monitoring Rabbit Hole

Once the VPN finally worked, I immediately went down the observability rabbit hole. I started experimenting with dashboards, exporters and monitoring stacks using Prometheus, Grafana and eventually Netdata. And honestly, it looked really cool. For a while I had graphs, metrics and dashboards for everything running on the server.

Then my tiny 1 GB VM started struggling. That became another important lesson:

just because you can deploy something doesn't mean you should.

The monitoring stack itself was slowly becoming heavier than the actual services I cared about. So I removed most of it and decided to keep monitoring lightweight for now. That decision probably taught me more than successfully running the stack would have. Infrastructure is mostly about tradeoffs: complexity vs simplicity, features vs operational cost, experimentation vs practicality. Every additional tool changes the operational burden of the system.

What This Project Became

At this point this is no longer "just a VPN project". Over time it slowly started becoming the foundation for a much larger self-hosted ecosystem involving automated deployments, infrastructure as code, observability and eventually maybe even multi-node orchestration. But I also realized I don't want to jump directly into complexity just because it sounds impressive. Right now I'm more interested in understanding networking properly, building systems slowly, documenting failures and learning how infrastructure actually behaves instead of blindly following tutorials.

Eventually I want this entire setup to become:

- reproducible
- version controlled
- reinstallable from scratch

using tools like Terraform, GitHub Actions and eventually Kubernetes, once the infrastructure actually grows large enough to justify it.

I'm also planning to open-source the entire setup, including:

- Docker compose files
- deployment notes
- troubleshooting steps
- infrastructure documentation

The blog itself focuses more on the story and learning process, while the GitHub repository will contain the actual implementation details and technical setup guides.

Repository: infra-101 (GitHub)

I also want to eventually add:

- one-step installers
- automated provisioning
- detailed wiki documentation
- explanations for every major configuration file

so the entire setup becomes easy to reproduce for anyone trying to learn similar things.

Final Thoughts

This project taught me far more than I expected. Not because I memorized commands, but because I had to debug real networking problems, understand cloud infrastructure, deal with DNS behavior, think about security, manage limited resources and make actual architectural decisions.

The infrastructure itself is still small.
But for the first time it feels like I'm building my own online ecosystem instead of just deploying applications onto someone else's platform.
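As a closing aside: a first concrete step toward "reinstallable from scratch" could be a workflow that redeploys the Compose stack over SSH on every push. This is a hypothetical sketch using the community `appleboy/ssh-action`; the secret names and the `~/infra` path are placeholders, not my actual repository's configuration:

```yaml
# .github/workflows/deploy.yml (hypothetical sketch, placeholders throughout)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/infra && git pull --ff-only
            docker compose up -d --remove-orphans
```

It's not Terraform, but it's version-controlled, repeatable and small enough to actually run on a 1 GB VM.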
