Tools: Building My First Self-Hosted Infrastructure Server on Linux - Full Analysis

I always thought Linux looked cool from the outside, but I never really understood why developers became obsessed with it until I actually started using it properly. For most of my life, I was just a normal Windows 10 user. My workflow was simple: VS Code, browsers, localhost development, and deploying apps without thinking too much about what was happening underneath.

Everything worked, but after building more projects, especially AI and web applications, I started feeling disconnected from the actual systems running my applications. I knew how to build projects. I didn't know how infrastructure worked. That's when I started exploring Linux seriously.

Discovering Omarchy

After trying different Linux distributions, I found Omarchy and instantly liked it. The system felt extremely fast, minimal, and surprisingly clean. Compared to Windows, everything felt more focused. The terminal experience, window management, keyboard shortcuts, and multi-pane workflow completely changed how I worked. The coolest part was how efficient everything felt once you got comfortable with it.

I used Omarchy for around two months before eventually breaking the entire setup. One bad configuration later, none of the UI loaded anymore, and I was left staring at a shell with absolutely no idea how to recover the system properly. That moment taught me something important: Linux gives you freedom, but it also expects responsibility. Even after breaking everything, I still loved the experience enough to come back.

Setting Up the VM

This time I decided to run Omarchy inside VirtualBox on Windows 10 so I could safely experiment without destroying my main environment again. Once the virtual machine was running, I finally had a playground where I could:

- experiment freely
- break things safely
- learn networking
- build servers manually
- understand infrastructure properly

The goal was to build a small self-hosted infrastructure setup focused on:

- load balancing
- benchmarking

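For anyone scripting this instead of clicking through the VirtualBox GUI, the same settings are reachable via VBoxManage. A minimal sketch, assuming a VM already registered under the placeholder name "omarchy" (the values are just sensible guesses for this kind of playground):

```bash
# Give the VM enough headroom for a desktop plus Docker
VBoxManage modifyvm "omarchy" --memory 4096 --cpus 2

# Start out on NAT networking; NAT vs bridged is covered further down
VBoxManage modifyvm "omarchy" --nic1 nat
```
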
Installing Nginx and Creating the Project

Docker was already included with Omarchy, so I mainly needed Nginx:

```bash
sudo pacman -S nginx
```

Then I created the project folder:

```bash
mkdir nginx-lb
cd nginx-lb
```

Inside the folder I created:

- an Nginx config
- a Docker Compose setup

The structure looked like this:

```
nginx-lb/
├── app1/index.html
├── app2/index.html
├── app3/index.html
├── nginx/nginx.conf
└── docker-compose.yml
```

Each app simply had an index.html file. Simple, but enough to visually verify load balancing.

app1/index.html

```html
<h1>App 1</h1>
```

app2/index.html

```html
<h1>App 2</h1>
```

app3/index.html

```html
<h1>App 3</h1>
```

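If you're reproducing this, a quick shell loop creates all three directories and pages at once, equivalent to writing the files by hand:

```bash
# Create app1..app3, each serving a one-line page
for i in 1 2 3; do
  mkdir -p "app$i"
  echo "<h1>App $i</h1>" > "app$i/index.html"
done
```
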
Docker Compose Setup

Then I created the Docker Compose configuration using nano. Omarchy already comes with Neovim pre-installed, and honestly that setup looks extremely cool; once you get comfortable with it, the workflow feels insanely fast. But in real server environments, especially when you're quickly editing configs over SSH, you'll often see people using something simple like nano. It's quick, lightweight, always available, and efficient for fast configuration changes.

```yaml
version: '3'

services:
  app1:
    image: nginx
    volumes:
      - ./app1:/usr/share/nginx/html

  app2:
    image: nginx
    volumes:
      - ./app2:/usr/share/nginx/html

  app3:
    image: nginx
    volumes:
      - ./app3:/usr/share/nginx/html

  nginx-lb:
    image: nginx
    container_name: nginx-lb
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
```

At this point I finally started understanding why containers are so important. Everything became isolated, reproducible, and easy to manage.

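Before bringing anything up, it's worth sanity-checking the file. Compose can parse it and print the resolved configuration, failing loudly on YAML mistakes:

```bash
# Validate docker-compose.yml and print the resolved configuration
docker compose config
```
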
Understanding Load Balancing Through Nginx

This was probably the coolest part of the entire project. The Nginx configuration looked like this:

```nginx
events {}

http {
    upstream backend {
        server app1:80;
        server app2:80;
        server app3:80;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```

That tiny file completely changed how I viewed backend systems. Instead of one server handling everything, Nginx distributes requests between multiple backend containers. This is load balancing. An actual working system running inside my VM.

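An upstream block with no extra directives balances round-robin by default, so repeated requests should cycle through the three pages. A quick way to watch it happen once the stack is running, from a shell inside the VM:

```bash
# Six requests; round-robin should rotate App 1 -> App 2 -> App 3 twice
for i in $(seq 1 6); do
  curl -s http://localhost/
done
```
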
Running the Containers

Once everything was ready, I started the infrastructure:

```bash
docker compose up -d
```

The containers started successfully, and when I opened localhost inside the VM, different apps started appearing depending on which backend handled the request. That moment felt genuinely satisfying, because I wasn't just running applications anymore. I was running infrastructure.

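To confirm the stack is actually healthy, the standard Compose commands are enough:

```bash
# List the services, their state, and published ports
docker compose ps

# Follow the load balancer's log to watch requests being distributed
docker compose logs -f nginx-lb
```
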
NAT vs Bridged Adapter

This part confused me for hours. The setup worked perfectly inside the VM, but Windows couldn't access it initially. That's when I learned about NAT and Bridged networking.

NAT + Port Forwarding

NAT creates an isolated internal network for the VM. The host machine cannot directly access the virtual machine unless ports are forwarded manually. So I configured port forwarding in VirtualBox like this:

```
Windows localhost:8080 -> VM port 80
```

Which basically meant:

- Host Port: 8080
- Guest Port: 80

After configuring this properly, I could finally access the Nginx load balancer directly from Windows.

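The same rule can be added without the GUI. A sketch with VBoxManage, again using the placeholder VM name "omarchy":

```bash
# Add a NAT rule "nginx-lb": host port 8080 -> guest port 80 (VM powered off)
VBoxManage modifyvm "omarchy" --natpf1 "nginx-lb,tcp,,8080,,80"

# Or apply the same rule to a running VM
VBoxManage controlvm "omarchy" natpf1 "nginx-lb,tcp,,8080,,80"
```
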
Bridged Adapter

Bridged mode felt even cooler. Instead of staying inside an isolated virtual network, the VM became a real device on my local network and got its own IP address. Now I could directly access the server from Windows using:

```
http://10.93.5.4
```

That honestly felt insane the first time it worked. Networking finally stopped feeling abstract.

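Switching modes is one VBoxManage call (the adapter name is host-specific, so treat "Ethernet" as a placeholder), and since the address comes from your router's DHCP, you look it up from inside the VM:

```bash
# On the host: bridge the VM's first NIC to a physical adapter
VBoxManage modifyvm "omarchy" --nic1 bridged --bridgeadapter1 "Ethernet"

# Inside the VM: show the IPv4 address the bridged interface received
ip -4 addr show
```
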
Stress Testing With wrk

Now came the fun part: benchmarking the infrastructure. I used wrk to generate traffic and test how the setup behaved with different numbers of backend containers behind the Nginx load balancer. The command I used was:

```bash
wrk -t4 -c100 -d30 http://127.0.0.1
```

- -t4 → 4 threads
- -c100 → 100 concurrent connections
- -d30 → run the test for 30 seconds

The interesting part was comparing the performance between:

- 1 backend container
- 2 backend containers
- 3 backend containers

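wrk isn't part of the stack above, so it needs installing first; on an Arch-based system like Omarchy it should be available through pacman, assuming it's in your enabled repositories:

```bash
# Install the wrk HTTP benchmarking tool
sudo pacman -S wrk
```
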
1 Container Test

```
Running 30s test @ http://127.0.0.1
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    21.78ms   28.04ms 238.82ms   86.21%
    Req/Sec     1.36k     1.06k    8.34k    75.55%
  141456 requests in 30.05s, 34.13MB read
  Socket errors: connect 0, read 0, write 0, timeout 102
Requests/sec:   4706.85
Transfer/sec:      1.14MB
```

With a single backend container, the setup still handled a surprisingly large amount of traffic, but latency was noticeably higher and there were some request timeouts under heavy load.

2 Container Test

```
Running 30s test @ http://127.0.0.1
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    10.39ms   10.07ms 146.04ms   89.23%
    Req/Sec     1.76k     1.01k    6.95k    82.19%
  110705 requests in 30.10s, 26.71MB read
  Socket errors: connect 0, read 0, write 0, timeout 100
Requests/sec:   3678.10
Transfer/sec:      0.89MB
```

Adding a second backend container reduced the average latency significantly. The load balancer was now distributing traffic across multiple services instead of pushing everything into a single container. Even though the requests/sec result was lower during this run, the overall responsiveness and latency consistency improved.

3 Container Test

```
Running 30s test @ http://127.0.0.1
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    13.85ms   11.87ms 247.23ms   87.54%
    Req/Sec     1.74k   672.33     6.35k    75.85%
  207100 requests in 30.03s, 49.97MB read
Requests/sec:   6895.35
Transfer/sec:      1.66MB
```

With three backend containers running behind Nginx, the setup handled the highest total request throughput. This was the point where load balancing finally felt real to me. Instead of one application trying to handle everything alone, requests were being distributed across multiple backend services. Even inside a small VM setup, you could clearly see how scaling horizontally changes system behavior.

What I Learned From Benchmarking

The most interesting part wasn't just the numbers. It was understanding the concept behind them. Before this, "scaling" always sounded like some massive enterprise topic only huge companies cared about. But after running these tests manually, I finally understood the core idea: instead of making one server infinitely stronger, you can distribute traffic across multiple smaller services. That's basically the foundation of modern infrastructure.

What I Learned

The biggest thing I learned from this entire process is how much complexity modern platforms hide from developers. When we deploy apps today, we rarely think about:

- reverse proxies
- traffic routing
- load balancing

But building things manually teaches an entirely different level of understanding. I also realized infrastructure engineering feels very different from normal application development. A lot of the work is debugging, configuration, networking, and slowly understanding how systems communicate with each other.

Sometimes nothing works for hours. Sometimes you break your entire Linux setup. Sometimes networking feels cursed. But when everything finally connects together correctly, it feels incredibly rewarding. And honestly, building a self-hosted infrastructure server inside a Linux VM might be one of the coolest things I've learned so far.
