Going Beyond Static Hosting: Deploying a Node.js Contact Form API on AWS EC2
2025-12-20
#AWS #Cloud #Node #Backend #LearningInPublic #WomenInTech

When people talk about hosting portfolios, the usual recommendation is static hosting using platforms like Amazon S3, Vercel, Netlify, or CloudFront. And honestly, that’s often the right architectural choice.
But as part of my cloud learning journey, I wanted to understand what those managed services actually abstract away. Instead of stopping at static hosting, I decided to build and deploy a backend service on AWS EC2 to power the contact form on my portfolio.
This article walks through what I built, why I chose EC2, and what I learned along the way.
Why EC2 for a Contact Form?
To be clear, EC2 is not the “best” tool for hosting a static portfolio. Managed services exist for a reason, and they usually provide better scalability, security, and simplicity for this kind of use case.
However, EC2 is one of the best tools for learning. I wanted to understand how applications run on real servers, how backend services are deployed, how traffic flows through a system, and how uptime and reliability are handled in practice.
Specifically, I wanted hands-on experience working with Linux servers, managing long-running processes, configuring reverse proxies, and handling real HTTP requests. That learning goal is what made EC2 the right choice for this project.
What I Built
For this project, I built a small Node.js and Express API with two simple endpoints. One endpoint handles health checks using a GET /api/health route, while the second endpoint, POST /api/contact, receives contact form submissions from my portfolio.
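A minimal sketch of what such an API can look like is below. The route paths match the description above; the handler bodies and the field names (name, email, message) are illustrative assumptions, not my exact implementation:

```js
// server.js — minimal sketch of the two-endpoint API
const express = require('express');
const app = express();

app.use(express.json()); // parse JSON request bodies

// GET /api/health — simple liveness check
app.get('/api/health', (req, res) => {
  res.status(200).json({ status: 'ok' });
});

// POST /api/contact — receive contact form submissions
app.post('/api/contact', (req, res) => {
  const { name, email, message } = req.body || {};
  if (!name || !email || !message) {
    return res.status(400).json({ error: 'name, email, and message are required' });
  }
  // In the real service, this is where the submission would be stored or emailed.
  console.log('Contact form submission from:', email);
  res.status(200).json({ success: true });
});

app.listen(3000, () => console.log('API listening on port 3000'));
```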
Instead of relying on third-party form services, my frontend sends form data directly to this backend API, giving me full control over the request flow and server behavior.
High-Level Architecture
At a high level, the architecture looks like this: a user interacts with my portfolio in the browser, the request travels over the internet to an AWS EC2 instance, Nginx receives the incoming traffic, and then forwards API requests to the Node.js application running on port 3000 and managed by PM2.
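Sketched as a request flow:

```text
Browser ──HTTP──▶ EC2 instance
                     │
                  Nginx (port 80, reverse proxy)
                     │  forwards /api/* requests
                  Node.js API (port 3000, managed by PM2)
```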
This setup mirrors how many real-world production systems are structured, even at much larger scales.
Step-by-Step Overview
I started by configuring the EC2 security group. SSH access on port 22 was restricted to my IP address, while HTTP traffic on port 80 was allowed from anywhere so the API could be publicly accessible.
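For reference, the same two rules expressed with the AWS CLI look roughly like this (the security group ID and IP address are placeholders):

```bash
# SSH (port 22) restricted to a single IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 203.0.113.10/32

# HTTP (port 80) open to the world
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```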
Next, I launched the EC2 instance using an Ubuntu AMI and a free-tier eligible instance type. Once the instance was running, I connected to it via SSH using a key pair. This was my first reminder that servers are not “deploy-and-forget” resources: access control and security matter from the very beginning.
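Connecting looks like this (the key file name and public IP are placeholders):

```bash
chmod 400 my-key.pem                       # SSH rejects keys with loose permissions
ssh -i my-key.pem ubuntu@<ec2-public-ip>   # 'ubuntu' is the default user on Ubuntu AMIs
```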
After connecting, I installed Node.js (LTS) and npm on the server. This allowed the backend to run in the cloud exactly as it did on my local machine.
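One common way to install the current LTS release on Ubuntu is via the NodeSource repository; the exact commands may differ depending on the version you target:

```bash
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt-get install -y nodejs   # includes npm
node -v && npm -v                # verify the install
```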
I then copied my backend project to the EC2 instance, installed the dependencies using npm install, and verified that the API worked by running node server.js directly on the server.
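Roughly, the copy-and-verify steps look like this (the key, paths, and IP are placeholders, and the local directory is assumed not to include node_modules):

```bash
# Copy the project to the instance and run it once in the foreground
scp -i my-key.pem -r ./contact-api ubuntu@<ec2-public-ip>:~/contact-api
ssh -i my-key.pem ubuntu@<ec2-public-ip>
cd ~/contact-api
npm install
node server.js   # works, but only while this SSH session stays open
```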
Of course, running a Node application this way only works while the SSH session is active. To solve this, I introduced PM2, a process manager for Node.js applications. PM2 runs the application in the background, restarts it automatically if it crashes, keeps it running after SSH disconnects, and ensures it restarts on server reboot. This transformed my backend from a fragile process into a persistent service.
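In practice that took only a handful of commands (the process name is my choice, not a requirement):

```bash
sudo npm install -g pm2                  # install PM2 globally
pm2 start server.js --name contact-api   # run the API in the background
pm2 save                                 # remember the current process list
pm2 startup                              # prints a command to enable start-on-boot
pm2 status                               # confirm the app is online
```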
Since my Node.js app listens on port 3000, I didn’t want to expose it directly to the public internet. I installed Nginx and configured it as a reverse proxy so that all public traffic comes in on port 80, while requests to /api/* are forwarded internally to the Node application. This step introduced me to real-world request routing and proxy behavior, including fixing a classic trailing-slash issue in the proxy pass configuration.
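An illustrative version of the proxy block (not my exact configuration) looks like this:

```nginx
# /etc/nginx/sites-available/default — illustrative reverse-proxy block
server {
    listen 80;

    location /api/ {
        # No trailing slash after the port: the full /api/... path is passed
        # through unchanged. Adding one (http://localhost:3000/) strips the
        # /api/ prefix, which is the classic trailing-slash pitfall.
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

After editing, running sudo nginx -t followed by sudo systemctl reload nginx validates and applies the configuration.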
Testing End-to-End
Once everything was wired together, I tested the setup end-to-end. I used curl directly from the server, inspected requests in the browser’s Network tab, and submitted the contact form from the live portfolio itself.
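For example, from any machine (the public IP is a placeholder, and the JSON fields match the sketch earlier):

```bash
# Health check through Nginx on port 80
curl -i http://<ec2-public-ip>/api/health

# Simulated contact form submission
curl -i -X POST http://<ec2-public-ip>/api/contact \
  -H "Content-Type: application/json" \
  -d '{"name":"Test","email":"test@example.com","message":"Hello!"}'
```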
Seeing consistent 200 OK responses from a cloud-hosted backend felt very different from testing locally and made the entire system feel real.
What I Learned
This project helped me understand how backend services actually run on cloud servers, why process managers like PM2 are essential, and how Nginx fits into real production deployments. It also clarified the difference between simply running code and operating a service that needs to stay online.
More importantly, it gave me confidence working with Linux, AWS EC2, Node.js backends, Nginx, and production-style workflows. It also reinforced why managed services exist and how much complexity they quietly remove.
And that was exactly the goal.
If you’re early in your cloud journey and curious about what happens under the hood, I highly recommend trying something similar.