# Build, Deploy and Reverse Proxy a Rust API: HNG DevOps Stage 1
- A Quick Recap
- The Task
- Why Rust?
- Writing the API
- The Cross-Compilation Challenge
- Setting Up the systemd Service
- Configuring Nginx as a Reverse Proxy
- Final Verification
## The Big Picture

This is part of my HNG DevOps internship series. Follow along as I document every stage.
## A Quick Recap

Previous article: *How I Secured a Linux Server from Scratch: HNG DevOps Stage 0*

In Stage 0, I provisioned a Linux server on Oracle Cloud, hardened SSH access, configured UFW, set up Nginx, and secured everything with a Let's Encrypt SSL certificate. If you missed that, the link is above. Stage 1 builds directly on top of that foundation.

## The Task

This time, the task was to write an actual API, deploy it on any server (I used the same server from Stage 0), and configure Nginx to act as a reverse proxy in front of it. Here is a summary of what needed to be done: three endpoints, all returning `Content-Type: application/json` with HTTP status 200, and responding within 500ms.

## Why Rust?

The task suggested Node.js, Python, PHP, or Go as expected options. I went with Rust using the Axum framework instead, and here is why. One of the evaluation criteria was that all endpoints must respond within 500ms. Rust compiles to native machine code, has no garbage collector, and starts up in milliseconds. The memory footprint of the running service on the server was 1.2MB; for comparison, a Node.js Express app doing the same thing would typically sit around 40-60MB. For a task where performance is explicitly measured, Rust made sense. The API logic itself is also simple enough that the extra strictness of the type system doesn't slow you down. It was a good fit.

## Writing the API

I used Axum, the most popular async web framework in the Rust ecosystem. Here is the full main.rs:

A few things are worth noting here. The app binds to 127.0.0.1 (localhost) on port 3000, not 0.0.0.0. This means it is only reachable from within the server itself; the outside world cannot hit port 3000 directly. Nginx will be the one receiving public traffic and forwarding it internally. This is intentional and exactly what the task requires. The `PORT` environment variable is also read at startup, with 3000 as a fallback, which makes the binary flexible across different environments.
## The Cross-Compilation Challenge

My server runs on Oracle Cloud's free-tier Ampere A1 chip, which is aarch64 (ARM 64-bit). My local machine is a Mac with Apple Silicon, which is also ARM but a different target entirely. I needed to produce a Linux ARM binary from macOS.

I chose to build locally and copy the binary to the server, rather than build directly on the server, for a practical reason: Oracle's free tier has limited CPU and RAM. Rust compilation is memory-hungry and would have been painfully slow, possibly even crashing, on a 1GB RAM instance.

First, confirm your server's architecture (`uname -m` should print `aarch64`):

Add the cross-compilation target locally with `rustup target add`:

I first tried `cross`, a popular Rust cross-compilation tool:

The issue is that `cross` on Apple Silicon tries to install an x86_64 Linux toolchain, which doesn't make sense on an ARM Mac. This is a known compatibility problem with ARM Macs. The fix was to bypass `cross` entirely and use Docker directly. This approach pulls a pre-built cross-compilation image and runs the build inside a container, with the project folder mounted in:

The `$HOME/.cargo/registry` mount is important: it caches your downloaded crates so subsequent builds don't re-download everything from scratch. After a few minutes, the binary was ready at:

Copy it to the server:

Verify the architecture is correct (`file` on the binary should report an aarch64 ELF executable):

Quick smoke test on the server:

All three endpoints returned the expected JSON. Now on to making it run permanently.

## Setting Up the systemd Service

Running the binary manually works, but it stops the moment you close the terminal or the server reboots. systemd is the Linux service manager that keeps processes alive automatically. Think of it as telling the operating system: "this process should always be running."

First, move the binary to a proper system location:

Create the service file:

A few things to understand in this file: `After=network.target` tells systemd to start this service only after the network is available. Since this is an API that binds to a network port, that ordering matters.
`Restart=always` means that if the process ever crashes, for any reason, systemd will automatically restart it after `RestartSec=5` seconds. `WantedBy=multi-user.target` means this service starts automatically on every normal system boot.

Now enable and start it:

The output should show `Active: active (running)`. Notice the memory usage: 1.2 megabytes. The entire running API server. That is Rust.

## Configuring Nginx as a Reverse Proxy

With the API running on port 3000 internally, the last step is to tell Nginx to forward public requests to it. This is called a reverse proxy, and it is how virtually every production web application is deployed. Nginx becomes the single public entry point, handling SSL, routing, and security, while the app just focuses on responding to requests.

Open the Nginx config from Stage 0:

Replace the entire content with this updated version, which adds the three new proxy locations:

The `proxy_set_header` lines are worth understanding. `Host $host` passes the original domain name to the app. `X-Real-IP $remote_addr` passes the real client IP address. Without these, your app would see all requests as coming from 127.0.0.1, which makes logging and debugging harder.

## Final Verification

All three endpoints should return clean JSON, with no HTML, no errors, and no delay.

Stage 1 introduced a pattern that shows up in almost every real production deployment: the app never talks to the internet directly. Nginx sits in front of it and controls everything that comes in and goes out. This separation means you can add rate limiting, authentication, or caching, or swap out the app entirely, all without changing what the outside world sees.

The systemd service pattern is equally important. In production, you never want to manually restart a service after a crash or reboot. systemd handles that automatically, and if something does go wrong, `journalctl -u hng-api` gives you the full logs to debug with.

Stage 2 is next. Follow along as I keep documenting the journey.
Find me on Dev.to | GitHub