# My Resume in My Pocket 24/7, Powered by AWS
2026-01-23
## What This Project Includes

- **HTML + CSS** (resume structure and styling)
- **Amazon S3** (private origin for static assets)
- **Amazon CloudFront** (HTTPS + CDN for my resume subdomain)
- **Custom DNS** (resume.michael-burbank.com)
- **JavaScript** (visitor counter on page load)
- **API Gateway** (REST API layer)
- **AWS Lambda** (backend compute)
- **Python + boto3** (AWS SDK inside Lambda)
- **Amazon DynamoDB** (on-demand visitor count storage)
- **AWS SAM / SAM CLI** (Infrastructure as Code, IaC)
- **Testing** (Python tests for backend logic)
- **GitLab CI/CD** (pipelines for frontend and backend)
- **GitHub Mirroring** (one push to main updates both platforms)

## Architecture Diagrams

- Runtime: frontend request flow (Browser → Route 53 → CloudFront → S3)
- Runtime: backend request flow (Browser → API Gateway → Lambda → DynamoDB)
- Deployment & CI/CD pipeline (GitLab → AWS)

## Feedback and Self-Awareness

- GitHub-to-GitLab migration
- AWS SAM / SAM CLI fundamentals (IaC)
- Two remotes + GitHub mirroring from GitLab
- REST API design practices
- API Gateway (a real API layer, no direct DB calls)
- GitLab CI pipelines (automation + repeatability)
- Docker (Desktop) for local API and DB testing

From phones and wallets to car keys and coffee receipts, there's never been a perfect way to carry a polished PDF resume. So I built a resume I can carry 24/7: a live website on AWS, secured with HTTPS, delivered globally through a CDN, and backed by a serverless visitor counter. In this post, I break down the milestones I completed over 18 days, from S3 + CloudFront (with an OAC private origin) to API Gateway, Lambda, SAM (SAM CLI), and DynamoDB.

Live site: https://resume.michael-burbank.com/

I designed and built a cloud-hosted resume that's secure, automated, and powered by a serverless backend, not just "another static webpage". I built my resume as a real webpage using HTML for structure and CSS for styling.
This made it easy to update and share, and far more flexible than a PDF that can get lost or go out of date.

I store the resume's static files (HTML/CSS/JS) in an S3 bucket and use it as a private CloudFront origin. The bucket is not public: CloudFront accesses it using Origin Access Control (OAC), enforced by a restrictive bucket policy, so objects can only be fetched through CloudFront.

To serve the site securely over HTTPS and improve load times, I placed CloudFront in front of the S3 bucket. CloudFront caches the site at edge locations and delivers it globally. When the resume changes, I update the content and then create CloudFront invalidations to force edge caches to fetch the latest version from the S3 origin.

My main domain points to an Amazon EC2-hosted personal website running Amazon Linux (AL) 2023. So for the Cloud Resume Challenge, I created the subdomain resume.michael-burbank.com and routed it to the CloudFront distribution that serves my S3 resume. This keeps the two sites separate while still living under the same domain.

Whenever a visitor loads the site, JavaScript calls my API and renders the updated visitor count on the page. Instead of letting the browser communicate directly with DynamoDB, I used API Gateway to expose a REST API endpoint the website can call over HTTPS. API Gateway triggers Lambda, and Lambda is the only component that reads and updates the DynamoDB table.

Lambda handles the visitor count logic: it runs only when invoked, increments the count, and returns the updated value. Within my Lambda function, I used Python and the AWS SDK (boto3) to interact with DynamoDB and return a clean JSON response to the website. I used DynamoDB on-demand capacity to store the visitor count, keeping costs low and removing capacity planning.
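A minimal sketch of that handler, assuming a table keyed on a string `id` with a single item `"resume"` (the real table and key names live in the SAM template). The boto3 import is deferred and the table object is injectable so the logic can be exercised without AWS:

```python
import json
import os


def _get_table():
    # boto3 is imported lazily so the handler logic stays testable without AWS.
    import boto3

    return boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "visitor-count"))


def lambda_handler(event, context, table=None):
    """Atomically increment the visitor count and return it as JSON."""
    table = table or _get_table()
    response = table.update_item(
        Key={"id": "resume"},
        # ADD creates the attribute on the first call and increments it after
        # that. "count" is a DynamoDB reserved word, hence the #c placeholder.
        UpdateExpression="ADD #c :one",
        ExpressionAttributeNames={"#c": "count"},
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(response["Attributes"]["count"])
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            "Access-Control-Allow-Origin": "*",  # CORS for the browser call
        },
        "body": json.dumps({"count": count}),
    }
```

Using an atomic `ADD` update (rather than a read-then-write) means concurrent visitors can't clobber each other's increments.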
I defined the backend infrastructure (DynamoDB, API Gateway, and Lambda) using AWS SAM, so I can deploy with the SAM CLI instead of clicking around in the AWS console. Provisioning and configuring infrastructure as code is my favorite part of building software projects.

I wrote pytest tests for the Lambda logic so changes can be validated automatically before deploying to prod.

I initially configured a working pipeline using GitHub Actions, as the challenge suggests, then migrated to GitLab CI so both repos use a single, consistent CI/CD approach. Each repo uses a .gitlab-ci.yml pipeline to reduce manual steps and automate deployments, improving repeatability.

Maintaining my GitHub presence is a must, since many companies and developers reference GitHub more often than GitLab. My solution: use two remote repositories, which still satisfies the challenge's source-control milestone. I mirrored my GitLab repositories to GitHub. This was the first time I had ever configured two parallel, in-sync remote repositories. When I push to main in GitLab (origin) from the CLI, the same commits are automatically pushed to the GitHub repo(s), keeping GitHub in sync with GitLab. I configured this for both the frontend and backend repositories.

Below are the runtime request flows (frontend + backend), followed by the deployment pipeline that ships changes to AWS.

CloudFront terminates HTTPS using ACM, serves cached content from edge locations, and uses OAC so only CloudFront can fetch objects from the private S3 origin.

The browser calls API Gateway over HTTPS (CORS enabled). API Gateway invokes Lambda (proxy integration). Lambda uses boto3 to GetItem (GET) or UpdateItem (POST/PUT) in DynamoDB, then returns JSON { "count": n }. CloudWatch captures logs/metrics.
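The pytest tests mentioned above follow a simple pattern: bare `assert`s against the counter logic, with a fake table standing in for DynamoDB. A sketch (file and function names here are illustrative, not the repo's actual tests):

```python
# test_counter.py (illustrative): pytest collects test_* functions and runs
# their bare asserts -- no unittest boilerplate required.
import json


class FakeTable:
    """Stands in for the boto3 Table so the tests never touch AWS."""

    def __init__(self, start=0):
        self._count = start

    def update_item(self, **kwargs):
        self._count += 1
        return {"Attributes": {"count": self._count}}


def increment_and_respond(table):
    """Simplified stand-in for the handler's core logic."""
    item = table.update_item(ReturnValues="UPDATED_NEW")
    new_count = int(item["Attributes"]["count"])
    return {"statusCode": 200, "body": json.dumps({"count": new_count})}


def test_first_visit_starts_at_one():
    result = increment_and_respond(FakeTable())
    assert json.loads(result["body"])["count"] == 1


def test_count_keeps_incrementing():
    table = FakeTable(start=41)
    increment_and_respond(table)
    result = increment_and_respond(table)
    assert json.loads(result["body"])["count"] == 43
```

Because the fake implements only the method the logic calls, the tests run in milliseconds and need no credentials, which is what lets CI run them on every push.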
GitLab CI runs tests, deploys serverless resources via SAM/CloudFormation, syncs static site assets to S3, and invalidates CloudFront so edge caches refresh.

I learned a lot through this project, not just AWS services, but how the puzzle pieces fit together in a real production workflow. These were the biggest skills I gained while building and shipping my Cloud Resume Challenge end-to-end:

I started with GitHub Actions (as the challenge suggested) but migrated mid-project to GitLab CI to standardize my pipelines. That forced me to think through runner behavior, environment variables, and deployment steps instead of copying a template. It also gave me a real "change-in-flight" experience without breaking production.

Defining the backend in SAM strengthened my ability to treat infrastructure like versioned code. Instead of "click ops", I can deploy repeatably, roll out changes safely, and keep my API/Lambda/DynamoDB consistent across updates.

I configured my workflows so one push to main in GitLab also syncs to GitHub. This helped me keep my GitHub presence active while using GitLab CI as the primary CI/CD platform instead of GitHub Actions. It was also my first time running a multi-remote workflow cleanly.

I strengthened my ability to keep frontend and backend behavior separate while maintaining an efficient contract between the two. Designing the endpoint around "increment and return the updated count" made the frontend less complex and kept the database logic server-side, where it belongs. API Gateway became the front door: HTTPS access, routing, and a clean interface between the browser and Lambda. It made the whole design feel like a real system rather than "JavaScript communicating with the database".

Building pipelines for both the frontend and backend reinforced how much automation reduces human error. Once it worked, deployments stopped being a "process" and became a push-to-main routine.
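The cache-invalidation step at the end of that pipeline can be sketched with boto3. The client is passed in so the call shape can be tested without AWS credentials, and the distribution ID is a placeholder (in CI it would come from a pipeline variable):

```python
import time


def invalidate_cloudfront(cf, distribution_id, paths=("/*",)):
    """Tell CloudFront to drop cached copies so edges refetch from the S3 origin.

    ``cf`` is a boto3 CloudFront client, e.g. ``boto3.client("cloudfront")``,
    injected here so this function stays testable offline.
    """
    return cf.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": list(paths)},
            # CallerReference must be unique per request; a timestamp works.
            "CallerReference": str(time.time()),
        },
    )
```

Invalidating `/*` is the simple option for a small site; listing only the changed paths would keep you further under the monthly free invalidation allowance.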
Using Docker for local API/DB testing taught me how to validate logic without relying on cloud deployments for every change. That feedback loop is faster, cheaper, and much closer to how teams actually develop and test within the Software Development Life Cycle (SDLC).

If you’re doing the Cloud Resume Challenge, drop your link — I’ll check it out!
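As a footnote on that local setup: pointing boto3 at a DynamoDB Local container (e.g. `docker run -p 8000:8000 amazon/dynamodb-local`) only takes an `endpoint_url` override. A sketch, with table and credential values chosen for illustration:

```python
def local_dynamodb_kwargs(port=8000):
    """Connection settings that point boto3 at DynamoDB Local instead of AWS.

    With the container running, usage looks like:

        import boto3
        db = boto3.resource("dynamodb", **local_dynamodb_kwargs())
        table = db.Table("visitor-count")
    """
    return {
        "endpoint_url": f"http://localhost:{port}",
        "region_name": "us-east-1",        # any region value works locally
        "aws_access_key_id": "local",      # DynamoDB Local ignores credentials,
        "aws_secret_access_key": "local",  # but boto3 requires them to be set
    }
```

Swapping this one dict in or out is what lets the same Lambda logic run against the container during development and against real DynamoDB in prod.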
Tags: how-to, tutorial, guide, dev.to, ai, ml, linux, server, dns, routing, docker, python, javascript, database, git