Tools
Enterprise Architecture for a blog nobody reads
2025-12-12
0 views
admin
The Architecture ## Validation ## Layer 1: Cloudflare (Edge Protection) ## Layer 2: Host Hardening ## Layer 3: Container Isolation ## Layer 4: Secrets Management ## Layer 5: Storage & Backups ## Layer 6: Monitoring The previous incarnation of this site lived happily on a Digital Ocean droplet - until react2shell came along. I put the whole thing together rather haphazardly and left my Umami login page open to the public. My droplet was compromised and became part of a botnet only a few days after CVE-2025-55182 was announced. React2Shell is a critical (CVSS 10.0) unauthenticated remote code execution vulnerability in React Server Components. The vulnerability allows attackers to execute arbitrary code on the server via a specially crafted HTTP request. In my case the attackers installed Nezha and Sliver. So this time around, I figured I'd do the complete opposite. How secure could I make my blog whilst spending as little as possible? My blog runs on Ghost, which requires MySQL. Umami v3 requires Postgres. The cheapest hosted databases are around $15/month each – $30 just to store a few megabytes of data. If I followed Docker/AWS best practices - Ghost and Umami would run as seperate ECS services on Fargate. That would cost ~$23/month. And that's before a NAT Gateway (~$32/month) or fck-nat (much cheaper). I considered Fargate Spot – typically 70% cheaper. The price of my two containers would drop from ~$23 to ~$7. But I would want to run at least two of each ($14). Being spot instances they can be turned off with a two minute warning whenever AWS needs the capacity back . However to run more than one instance of each, I would need a load balancer (~$16/month). Basically, hosting my blog "properly" wasn't worth the money. Since I already use AWS, I decided to over-engineer a cheaper solution. My 'enterprise architecture' is a Docker Compose stack running on a $12/month Lightsail instance, managed via Terraform. The damage? About $14.80/month (just over a tenner). That figure accounts for the instance plus a few extras Infracost missed, like the disk storage and an external KMS key. Since my backups are tiny, S3 costs are basically rounding errors. For the rest of the infrastructure, I use Cloudflare's free tier. I had to enter card details for R2 (Cloudflare's object storage) but there's no way I'm getting close to hitting any of these limits. Checkov approves, once I'd told it I wasn't really enterprise enough for SSO. If React2Shell v2 drops tomorrow, the attack surface is much smaller. Shodan won't even know what lives at umami.clegginabox.co.uk. There's no open port or favicon to fingerprint, no version header to scrape. Just the Cloudflare Access page. The Lightsail instance itself is locked down: No public SSH. SSH access is only available through Lightsail's browser-based console, which requires AWS console authentication (with 2FA). There's no port 22 exposed to the internet. Kernel hardening: Sysctl settings to prevent IP spoofing, disable ICMP redirects, enable SYN flood protection and disable IPv6. Automatic updates: Unattended upgrades are enabled. Security patches apply automatically. Firewall: UFW is configured as a secondary layer (though Lightsail's firewall takes precedence). Can't hurt to have two firewalls right? Even if an attacker compromises Ghost or Umami, I want to limit what they can do. Non-root users: Every container runs as a non-root user. No privilege escalation: All containers have no-new-privileges set - preventing processes from gaining additional privileges via setuid binaries or other mechanisms. Read-only filesystems: The cloudflared container runs with a read-only root filesystem. An attacker can't write persistent backdoors. Network segmentation: Containers can only talk to what they need Ghost can reach MySQL but not Postgres. Umami can reach Postgres but not MySQL. Neither database is accessible from the tunnel container. If Ghost gets compromised, the attacker can't pivot to the Umami database (and vice versa). Health checks with dependencies: Containers don't start until their dependencies are healthy. This prevents race conditions and ensures clean startup order. Performance tuning for a small instance: My 2GB instance didn't have much in the way of free RAM with everything running . MySQL uses ~400MB of RAM with it's standard config. I'd like to run a little comment system at some point without crashing the whole thing. No secrets are hardcoded. Database passwords, SMTP credentials, R2 keys etc are all stored in AWS SSM Parameter Store and encrypted with KMS. When the instance starts up, it uses a scoped IAM user to fetch the secrets and write them to environment variables. Unlike EC2 which has instance profiles. Lightsail does not. The credentials therefore persist in the instance and would be accessible for anyone with shell access. This is less than ideal but the policy follows least privilege: Separate data disk: Persistent data (databases, ghost) live on an attached 8GB Lightsail disk mounted at /mnt/data. My Lightsail instance comes with a 60GB disk but it's ephemeral. Media on R2: Ghost uploads images directly to Cloudflare R2. Media is served from a custom domain with Cloudflare's CDN in front. Fast load times for visitors and less load on my instance. Daily backups: A cron job dumps MySQL and Postgres to S3 daily: Cross-region replication: Adding this turned out to be way more complex than I'd expected. The backup bucket replicates to another region. In the very unlikely event that eu-west-2 burns down, I still have my data. Though I'd imagine I'd have bigger worries than my blog if half of London was on fire. Image updates: Diun watches all containers and emails me when new versions are available. I'm not running :latest tags (except Ghost, which I build myself). I want to know when updates are released but choose when to deploy them. Backup monitoring: Failed backups send email notifications. New Relic: I haven't got round to implementing this again yet, it's next on the list. Obviously this is seriously over-engineered for a personal blog. It's not enterprise either. Deployments mean spinning up a new instance and running a bash script to bootstrap everything - which takes the site down for a few minutes. The bootstrap credential is less than ideal but is it worth spending more money and using EC2 to get around it? Not really. Cloudflare is a single point of trust. If someone breaches that account the whole thing falls down. But does anyone else offer what they do for free? Ghost itself is probably the weakest link in the chain. Node's dependency tree is vast - when the maintainer of event-stream handed the project to a stranger in 2018, that stranger quietly added code to steal Bitcoin wallets. It only took a few days from React2Shell being announced for my previous site to be compromised. This has been a fun little project though. If (when) my site breaks again, I can spin up a brand new one with two commands in the terminal. I've published the Terraform on GitHub. I've only recently started using Cloudflare & I've not been using Terraform all that long, so I'd genuinely appreciate feedback - if you spot something stupid or have suggestions, please open an issue or PR. Templates let you quickly answer FAQs or store snippets for re-use. Are you sure you want to hide this comment? It will become hidden in your post, but will still be visible via the comment's permalink. Hide child comments as well For further actions, you may consider blocking this person and/or reporting abuse CODE_BLOCK:
➜ blog infracost breakdown --path . --show-skipped Name Monthly Qty Unit Monthly Cost aws_lightsail_instance.ghost └─ Virtual server (Linux/UNIX) 730 hours $11.77 aws_kms_key.replica ├─ Customer master key 1 months $1.00 ├─ Requests Monthly cost depends on usage: $0.03 per 10k requests ├─ ECC GenerateDataKeyPair requests Monthly cost depends on usage: $0.10 per 10k requests └─ RSA GenerateDataKeyPair requests Monthly cost depends on usage: $0.10 per 10k requests module.s3_bucket_backup.aws_s3_bucket.this[0] └─ Standard ├─ Storage Monthly cost depends on usage: $0.024 per GB ├─ PUT, COPY, POST, LIST requests Monthly cost depends on usage: $0.0053 per 1k requests ├─ GET, SELECT, and all other requests Monthly cost depends on usage: $0.00042 per 1k requests ├─ Select data scanned Monthly cost depends on usage: $0.00225 per GB └─ Select data returned Monthly cost depends on usage: $0.0008 per GB module.s3_bucket_backup_replica.aws_s3_bucket.this[0] └─ Standard ├─ Storage Monthly cost depends on usage: $0.024 per GB ├─ PUT, COPY, POST, LIST requests Monthly cost depends on usage: $0.0053 per 1k requests ├─ GET, SELECT, and all other requests Monthly cost depends on usage: $0.00042 per 1k requests ├─ Select data scanned Monthly cost depends on usage: $0.00225 per GB └─ Select data returned Monthly cost depends on usage: $0.0008 per GB OVERALL TOTAL $12.77 *Usage costs can be estimated by updating Infracost Cloud settings, see docs for other options. ──────────────────────────────────
40 cloud resources were detected:
∙ 4 were estimated
∙ 33 were free
∙ 3 are not supported yet, see https://infracost.io/requested-resources: ∙ 1 x aws_lightsail_disk ∙ 1 x aws_lightsail_disk_attachment ∙ 1 x aws_lightsail_instance_public_ports ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Project ┃ Baseline cost ┃ Usage cost* ┃ Total cost ┃
┣━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╋━━━━━━━━━━━━━━━╋━━━━━━━━━━━━━╋━━━━━━━━━━━━┫
┃ main ┃ $13 ┃ - ┃ $13 ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━┻━━━━━━━━━━━━┛ Enter fullscreen mode Exit fullscreen mode CODE_BLOCK:
➜ blog infracost breakdown --path . --show-skipped Name Monthly Qty Unit Monthly Cost aws_lightsail_instance.ghost └─ Virtual server (Linux/UNIX) 730 hours $11.77 aws_kms_key.replica ├─ Customer master key 1 months $1.00 ├─ Requests Monthly cost depends on usage: $0.03 per 10k requests ├─ ECC GenerateDataKeyPair requests Monthly cost depends on usage: $0.10 per 10k requests └─ RSA GenerateDataKeyPair requests Monthly cost depends on usage: $0.10 per 10k requests module.s3_bucket_backup.aws_s3_bucket.this[0] └─ Standard ├─ Storage Monthly cost depends on usage: $0.024 per GB ├─ PUT, COPY, POST, LIST requests Monthly cost depends on usage: $0.0053 per 1k requests ├─ GET, SELECT, and all other requests Monthly cost depends on usage: $0.00042 per 1k requests ├─ Select data scanned Monthly cost depends on usage: $0.00225 per GB └─ Select data returned Monthly cost depends on usage: $0.0008 per GB module.s3_bucket_backup_replica.aws_s3_bucket.this[0] └─ Standard ├─ Storage Monthly cost depends on usage: $0.024 per GB ├─ PUT, COPY, POST, LIST requests Monthly cost depends on usage: $0.0053 per 1k requests ├─ GET, SELECT, and all other requests Monthly cost depends on usage: $0.00042 per 1k requests ├─ Select data scanned Monthly cost depends on usage: $0.00225 per GB └─ Select data returned Monthly cost depends on usage: $0.0008 per GB OVERALL TOTAL $12.77 *Usage costs can be estimated by updating Infracost Cloud settings, see docs for other options. ──────────────────────────────────
40 cloud resources were detected:
∙ 4 were estimated
∙ 33 were free
∙ 3 are not supported yet, see https://infracost.io/requested-resources: ∙ 1 x aws_lightsail_disk ∙ 1 x aws_lightsail_disk_attachment ∙ 1 x aws_lightsail_instance_public_ports ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Project ┃ Baseline cost ┃ Usage cost* ┃ Total cost ┃
┣━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╋━━━━━━━━━━━━━━━╋━━━━━━━━━━━━━╋━━━━━━━━━━━━┫
┃ main ┃ $13 ┃ - ┃ $13 ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━┻━━━━━━━━━━━━┛ CODE_BLOCK:
➜ blog infracost breakdown --path . --show-skipped Name Monthly Qty Unit Monthly Cost aws_lightsail_instance.ghost └─ Virtual server (Linux/UNIX) 730 hours $11.77 aws_kms_key.replica ├─ Customer master key 1 months $1.00 ├─ Requests Monthly cost depends on usage: $0.03 per 10k requests ├─ ECC GenerateDataKeyPair requests Monthly cost depends on usage: $0.10 per 10k requests └─ RSA GenerateDataKeyPair requests Monthly cost depends on usage: $0.10 per 10k requests module.s3_bucket_backup.aws_s3_bucket.this[0] └─ Standard ├─ Storage Monthly cost depends on usage: $0.024 per GB ├─ PUT, COPY, POST, LIST requests Monthly cost depends on usage: $0.0053 per 1k requests ├─ GET, SELECT, and all other requests Monthly cost depends on usage: $0.00042 per 1k requests ├─ Select data scanned Monthly cost depends on usage: $0.00225 per GB └─ Select data returned Monthly cost depends on usage: $0.0008 per GB module.s3_bucket_backup_replica.aws_s3_bucket.this[0] └─ Standard ├─ Storage Monthly cost depends on usage: $0.024 per GB ├─ PUT, COPY, POST, LIST requests Monthly cost depends on usage: $0.0053 per 1k requests ├─ GET, SELECT, and all other requests Monthly cost depends on usage: $0.00042 per 1k requests ├─ Select data scanned Monthly cost depends on usage: $0.00225 per GB └─ Select data returned Monthly cost depends on usage: $0.0008 per GB OVERALL TOTAL $12.77 *Usage costs can be estimated by updating Infracost Cloud settings, see docs for other options. ──────────────────────────────────
40 cloud resources were detected:
∙ 4 were estimated
∙ 33 were free
∙ 3 are not supported yet, see https://infracost.io/requested-resources: ∙ 1 x aws_lightsail_disk ∙ 1 x aws_lightsail_disk_attachment ∙ 1 x aws_lightsail_instance_public_ports ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Project ┃ Baseline cost ┃ Usage cost* ┃ Total cost ┃
┣━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╋━━━━━━━━━━━━━━━╋━━━━━━━━━━━━━╋━━━━━━━━━━━━┫
┃ main ┃ $13 ┃ - ┃ $13 ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━┻━━━━━━━━━━━━┛ CODE_BLOCK:
_ _ ___| |__ ______ | | _________ / __| '_ \ / _ \/__ | |/ / _ \ \ / / | ( __| | | |__ / (__| < (_) \ V / \ ___|_| |_|\___ |\ ___|_|\_\___ / \_/ By Prisma Cloud | version: 3.2.495 terraform scan results: Passed checks: 60, Failed checks: 0, Skipped checks: 2 Enter fullscreen mode Exit fullscreen mode CODE_BLOCK:
_ _ ___| |__ ______ | | _________ / __| '_ \ / _ \/__ | |/ / _ \ \ / / | ( __| | | |__ / (__| < (_) \ V / \ ___|_| |_|\___ |\ ___|_|\_\___ / \_/ By Prisma Cloud | version: 3.2.495 terraform scan results: Passed checks: 60, Failed checks: 0, Skipped checks: 2 CODE_BLOCK:
_ _ ___| |__ ______ | | _________ / __| '_ \ / _ \/__ | |/ / _ \ \ / / | ( __| | | |__ / (__| < (_) \ V / \ ___|_| |_|\___ |\ ___|_|\_\___ / \_/ By Prisma Cloud | version: 3.2.495 terraform scan results: Passed checks: 60, Failed checks: 0, Skipped checks: 2 COMMAND_BLOCK:
resource "aws_lightsail_instance_public_ports" "ghost" { instance_name = aws_lightsail_instance.ghost.name port_info { protocol = "tcp" from_port = 22 to_port = 22 cidr_list_aliases = ["lightsail-connect"] # Browser SSH only }
} Enter fullscreen mode Exit fullscreen mode COMMAND_BLOCK:
resource "aws_lightsail_instance_public_ports" "ghost" { instance_name = aws_lightsail_instance.ghost.name port_info { protocol = "tcp" from_port = 22 to_port = 22 cidr_list_aliases = ["lightsail-connect"] # Browser SSH only }
} COMMAND_BLOCK:
resource "aws_lightsail_instance_public_ports" "ghost" { instance_name = aws_lightsail_instance.ghost.name port_info { protocol = "tcp" from_port = 22 to_port = 22 cidr_list_aliases = ["lightsail-connect"] # Browser SSH only }
} COMMAND_BLOCK:
# IP Spoofing protection
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1 # Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0 # Ignore source-routed packets
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0 # SYN flood protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2 # Ignore ICMP broadcasts
net.ipv4.icmp_echo_ignore_broadcasts = 1 # Log martian packets
net.ipv4.conf.all.log_martians = 1 # Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1 Enter fullscreen mode Exit fullscreen mode COMMAND_BLOCK:
# IP Spoofing protection
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1 # Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0 # Ignore source-routed packets
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0 # SYN flood protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2 # Ignore ICMP broadcasts
net.ipv4.icmp_echo_ignore_broadcasts = 1 # Log martian packets
net.ipv4.conf.all.log_martians = 1 # Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1 COMMAND_BLOCK:
# IP Spoofing protection
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1 # Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0 # Ignore source-routed packets
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0 # SYN flood protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2 # Ignore ICMP broadcasts
net.ipv4.icmp_echo_ignore_broadcasts = 1 # Log martian packets
net.ipv4.conf.all.log_martians = 1 # Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1 COMMAND_BLOCK:
services: ghost: image: ghcr.io/clegginabox/clegginabox.co.uk:latest restart: always user: "1000:1000" expose: - "2368" environment: url: https://${GHOST_DOMAIN} # Database Config database__client: mysql database __connection__ host: mysql database __connection__ user: ghost database __connection__ password: ${MYSQL_PASSWORD} database __connection__ database: ghost # Mail Config mail__transport: SMTP mail__from: "noreply@${GHOST_DOMAIN}" mail __options__ host: email-smtp.${AWS_REGION}.amazonaws.com mail __options__ port: "587" mail __options__ secure: "false" mail __options__ auth__user: ${MAIL_USER} mail __options__ auth__pass: ${MAIL_PASS} # Object storage config storage__active: s3 storage __s3__ region: auto storage __s3__ bucket: ${R2_BUCKET} storage __s3__ endpoint: https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com storage __s3__ accessKeyId: ${R2_ACCESS_KEY} storage __s3__ secretAccessKey: ${R2_SECRET_KEY} storage __s3__ assetHost: ${R2_PUBLIC_DOMAIN} storage __s3__ forcePathStyle: true volumes: - /mnt/data/ghost:/var/lib/ghost/content depends_on: mysql: condition: service_healthy security_opt: - no-new-privileges:true networks: - frontend - ghost-db tunnel: image: cloudflare/cloudflared:2025.11.1 restart: always command: tunnel run read_only: true environment: - TUNNEL_TOKEN=${TUNNEL_TOKEN} depends_on: umami: condition: service_healthy security_opt: - no-new-privileges:true networks: - frontend mysql: image: mysql:8.4.7 restart: always user: "999:999" command: # MySQL likes to use loads of RAM (~400MB) as standard... - --innodb-buffer-pool-size=128M - --innodb-log-buffer-size=8M - --performance-schema=OFF - --max-connections=50 - --key-buffer-size=8M - --thread-cache-size=4 - --tmp-table-size=16M - --max-heap-table-size=16M - --table-open-cache=400 - --table-definition-cache=400 environment: MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} MYSQL_DATABASE: ghost MYSQL_USER: ghost MYSQL_PASSWORD: ${MYSQL_PASSWORD} volumes: - /mnt/data/mysql:/var/lib/mysql healthcheck: test: ["CMD", "mysqladmin", "ping", "-h", "localhost"] interval: 30s timeout: 10s retries: 5 security_opt: - no-new-privileges:true networks: - ghost-db umami: image: ghcr.io/umami-software/umami:3.0.2 restart: always user: "1000:1000" expose: - "3000" environment: DATABASE_URL: postgresql://umami:${POSTGRES_PASSWORD}@postgres:5432/umami APP_SECRET: ${UMAMI_SECRET} depends_on: postgres: condition: service_healthy init: true healthcheck: test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat"] interval: 30s timeout: 10s retries: 5 security_opt: - no-new-privileges:true networks: - frontend - umami-db postgres: image: postgres:18.1-alpine restart: always user: "70:70" command: - -c - shared_buffers=64MB - -c - effective_cache_size=128MB - -c - work_mem=4MB - -c - maintenance_work_mem=32MB environment: POSTGRES_DB: umami POSTGRES_USER: umami POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} volumes: - /mnt/data/postgres:/var/lib/postgresql/data healthcheck: test: ["CMD-SHELL", "pg_isready -U umami -d umami"] interval: 30s timeout: 10s retries: 5 start_period: 10s security_opt: - no-new-privileges:true networks: - umami-db diun: image: crazymax/diun:4.30.0 restart: always user: "1000:1000" volumes: - /var/run/docker.sock:/var/run/docker.sock:ro - /mnt/data/diun:/data environment: TZ: Europe/London DIUN_WATCH_SCHEDULE: 0 8 * * * # Check daily at 8am DIUN_PROVIDERS_DOCKER: true DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: true DIUN_NOTIF_MAIL_HOST: email-smtp.${AWS_REGION}.amazonaws.com DIUN_NOTIF_MAIL_PORT: 587 DIUN_NOTIF_MAIL_SSL: false DIUN_NOTIF_MAIL_USERNAME: ${MAIL_USER} DIUN_NOTIF_MAIL_PASSWORD: ${MAIL_PASS} DIUN_NOTIF_MAIL_FROM: "noreply@${GHOST_DOMAIN}" DIUN_NOTIF_MAIL_TO: ${NOTIF_MAIL_TO} security_opt: - no-new-privileges:true # Segregate containers - ghost doesn't need access to postgres etc
networks: frontend: ghost-db: internal: true umami-db: internal: true Enter fullscreen mode Exit fullscreen mode COMMAND_BLOCK:
services: ghost: image: ghcr.io/clegginabox/clegginabox.co.uk:latest restart: always user: "1000:1000" expose: - "2368" environment: url: https://${GHOST_DOMAIN} # Database Config database__client: mysql database __connection__ host: mysql database __connection__ user: ghost database __connection__ password: ${MYSQL_PASSWORD} database __connection__ database: ghost # Mail Config mail__transport: SMTP mail__from: "noreply@${GHOST_DOMAIN}" mail __options__ host: email-smtp.${AWS_REGION}.amazonaws.com mail __options__ port: "587" mail __options__ secure: "false" mail __options__ auth__user: ${MAIL_USER} mail __options__ auth__pass: ${MAIL_PASS} # Object storage config storage__active: s3 storage __s3__ region: auto storage __s3__ bucket: ${R2_BUCKET} storage __s3__ endpoint: https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com storage __s3__ accessKeyId: ${R2_ACCESS_KEY} storage __s3__ secretAccessKey: ${R2_SECRET_KEY} storage __s3__ assetHost: ${R2_PUBLIC_DOMAIN} storage __s3__ forcePathStyle: true volumes: - /mnt/data/ghost:/var/lib/ghost/content depends_on: mysql: condition: service_healthy security_opt: - no-new-privileges:true networks: - frontend - ghost-db tunnel: image: cloudflare/cloudflared:2025.11.1 restart: always command: tunnel run read_only: true environment: - TUNNEL_TOKEN=${TUNNEL_TOKEN} depends_on: umami: condition: service_healthy security_opt: - no-new-privileges:true networks: - frontend mysql: image: mysql:8.4.7 restart: always user: "999:999" command: # MySQL likes to use loads of RAM (~400MB) as standard... - --innodb-buffer-pool-size=128M - --innodb-log-buffer-size=8M - --performance-schema=OFF - --max-connections=50 - --key-buffer-size=8M - --thread-cache-size=4 - --tmp-table-size=16M - --max-heap-table-size=16M - --table-open-cache=400 - --table-definition-cache=400 environment: MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} MYSQL_DATABASE: ghost MYSQL_USER: ghost MYSQL_PASSWORD: ${MYSQL_PASSWORD} volumes: - /mnt/data/mysql:/var/lib/mysql healthcheck: test: ["CMD", "mysqladmin", "ping", "-h", "localhost"] interval: 30s timeout: 10s retries: 5 security_opt: - no-new-privileges:true networks: - ghost-db umami: image: ghcr.io/umami-software/umami:3.0.2 restart: always user: "1000:1000" expose: - "3000" environment: DATABASE_URL: postgresql://umami:${POSTGRES_PASSWORD}@postgres:5432/umami APP_SECRET: ${UMAMI_SECRET} depends_on: postgres: condition: service_healthy init: true healthcheck: test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat"] interval: 30s timeout: 10s retries: 5 security_opt: - no-new-privileges:true networks: - frontend - umami-db postgres: image: postgres:18.1-alpine restart: always user: "70:70" command: - -c - shared_buffers=64MB - -c - effective_cache_size=128MB - -c - work_mem=4MB - -c - maintenance_work_mem=32MB environment: POSTGRES_DB: umami POSTGRES_USER: umami POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} volumes: - /mnt/data/postgres:/var/lib/postgresql/data healthcheck: test: ["CMD-SHELL", "pg_isready -U umami -d umami"] interval: 30s timeout: 10s retries: 5 start_period: 10s security_opt: - no-new-privileges:true networks: - umami-db diun: image: crazymax/diun:4.30.0 restart: always user: "1000:1000" volumes: - /var/run/docker.sock:/var/run/docker.sock:ro - /mnt/data/diun:/data environment: TZ: Europe/London DIUN_WATCH_SCHEDULE: 0 8 * * * # Check daily at 8am DIUN_PROVIDERS_DOCKER: true DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: true DIUN_NOTIF_MAIL_HOST: email-smtp.${AWS_REGION}.amazonaws.com DIUN_NOTIF_MAIL_PORT: 587 DIUN_NOTIF_MAIL_SSL: false DIUN_NOTIF_MAIL_USERNAME: ${MAIL_USER} DIUN_NOTIF_MAIL_PASSWORD: ${MAIL_PASS} DIUN_NOTIF_MAIL_FROM: "noreply@${GHOST_DOMAIN}" DIUN_NOTIF_MAIL_TO: ${NOTIF_MAIL_TO} security_opt: - no-new-privileges:true # Segregate containers - ghost doesn't need access to postgres etc
networks: frontend: ghost-db: internal: true umami-db: internal: true COMMAND_BLOCK:
services: ghost: image: ghcr.io/clegginabox/clegginabox.co.uk:latest restart: always user: "1000:1000" expose: - "2368" environment: url: https://${GHOST_DOMAIN} # Database Config database__client: mysql database __connection__ host: mysql database __connection__ user: ghost database __connection__ password: ${MYSQL_PASSWORD} database __connection__ database: ghost # Mail Config mail__transport: SMTP mail__from: "noreply@${GHOST_DOMAIN}" mail __options__ host: email-smtp.${AWS_REGION}.amazonaws.com mail __options__ port: "587" mail __options__ secure: "false" mail __options__ auth__user: ${MAIL_USER} mail __options__ auth__pass: ${MAIL_PASS} # Object storage config storage__active: s3 storage __s3__ region: auto storage __s3__ bucket: ${R2_BUCKET} storage __s3__ endpoint: https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com storage __s3__ accessKeyId: ${R2_ACCESS_KEY} storage __s3__ secretAccessKey: ${R2_SECRET_KEY} storage __s3__ assetHost: ${R2_PUBLIC_DOMAIN} storage __s3__ forcePathStyle: true volumes: - /mnt/data/ghost:/var/lib/ghost/content depends_on: mysql: condition: service_healthy security_opt: - no-new-privileges:true networks: - frontend - ghost-db tunnel: image: cloudflare/cloudflared:2025.11.1 restart: always command: tunnel run read_only: true environment: - TUNNEL_TOKEN=${TUNNEL_TOKEN} depends_on: umami: condition: service_healthy security_opt: - no-new-privileges:true networks: - frontend mysql: image: mysql:8.4.7 restart: always user: "999:999" command: # MySQL likes to use loads of RAM (~400MB) as standard... - --innodb-buffer-pool-size=128M - --innodb-log-buffer-size=8M - --performance-schema=OFF - --max-connections=50 - --key-buffer-size=8M - --thread-cache-size=4 - --tmp-table-size=16M - --max-heap-table-size=16M - --table-open-cache=400 - --table-definition-cache=400 environment: MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} MYSQL_DATABASE: ghost MYSQL_USER: ghost MYSQL_PASSWORD: ${MYSQL_PASSWORD} volumes: - /mnt/data/mysql:/var/lib/mysql healthcheck: test: ["CMD", "mysqladmin", "ping", "-h", "localhost"] interval: 30s timeout: 10s retries: 5 security_opt: - no-new-privileges:true networks: - ghost-db umami: image: ghcr.io/umami-software/umami:3.0.2 restart: always user: "1000:1000" expose: - "3000" environment: DATABASE_URL: postgresql://umami:${POSTGRES_PASSWORD}@postgres:5432/umami APP_SECRET: ${UMAMI_SECRET} depends_on: postgres: condition: service_healthy init: true healthcheck: test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat"] interval: 30s timeout: 10s retries: 5 security_opt: - no-new-privileges:true networks: - frontend - umami-db postgres: image: postgres:18.1-alpine restart: always user: "70:70" command: - -c - shared_buffers=64MB - -c - effective_cache_size=128MB - -c - work_mem=4MB - -c - maintenance_work_mem=32MB environment: POSTGRES_DB: umami POSTGRES_USER: umami POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} volumes: - /mnt/data/postgres:/var/lib/postgresql/data healthcheck: test: ["CMD-SHELL", "pg_isready -U umami -d umami"] interval: 30s timeout: 10s retries: 5 start_period: 10s security_opt: - no-new-privileges:true networks: - umami-db diun: image: crazymax/diun:4.30.0 restart: always user: "1000:1000" volumes: - /var/run/docker.sock:/var/run/docker.sock:ro - /mnt/data/diun:/data environment: TZ: Europe/London DIUN_WATCH_SCHEDULE: 0 8 * * * # Check daily at 8am DIUN_PROVIDERS_DOCKER: true DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: true DIUN_NOTIF_MAIL_HOST: email-smtp.${AWS_REGION}.amazonaws.com DIUN_NOTIF_MAIL_PORT: 587 DIUN_NOTIF_MAIL_SSL: false DIUN_NOTIF_MAIL_USERNAME: ${MAIL_USER} DIUN_NOTIF_MAIL_PASSWORD: ${MAIL_PASS} DIUN_NOTIF_MAIL_FROM: "noreply@${GHOST_DOMAIN}" DIUN_NOTIF_MAIL_TO: ${NOTIF_MAIL_TO} security_opt: - no-new-privileges:true # Segregate containers - ghost doesn't need access to postgres etc
networks: frontend: ghost-db: internal: true umami-db: internal: true CODE_BLOCK:
ghost: user: "1000:1000" mysql: user: "999:999" postgres: user: "70:70" umami: user: "1000:1000" Enter fullscreen mode Exit fullscreen mode CODE_BLOCK:
ghost: user: "1000:1000" mysql: user: "999:999" postgres: user: "70:70" umami: user: "1000:1000" CODE_BLOCK:
ghost: user: "1000:1000" mysql: user: "999:999" postgres: user: "70:70" umami: user: "1000:1000" CODE_BLOCK:
security_opt: - no-new-privileges:true Enter fullscreen mode Exit fullscreen mode CODE_BLOCK:
security_opt: - no-new-privileges:true CODE_BLOCK:
security_opt: - no-new-privileges:true CODE_BLOCK:
tunnel: read_only: true Enter fullscreen mode Exit fullscreen mode CODE_BLOCK:
tunnel: read_only: true CODE_BLOCK:
tunnel: read_only: true COMMAND_BLOCK:
networks: frontend: # Ghost, Umami, Tunnel ghost-db: # Ghost + MySQL only internal: true umami-db: # Umami + Postgres only internal: true Enter fullscreen mode Exit fullscreen mode COMMAND_BLOCK:
networks: frontend: # Ghost, Umami, Tunnel ghost-db: # Ghost + MySQL only internal: true umami-db: # Umami + Postgres only internal: true COMMAND_BLOCK:
networks: frontend: # Ghost, Umami, Tunnel ghost-db: # Ghost + MySQL only internal: true umami-db: # Umami + Postgres only internal: true COMMAND_BLOCK:
command: # MySQL likes to use loads of RAM (~400MB) as standard... - --innodb-buffer-pool-size=128M - --innodb-log-buffer-size=8M - --performance-schema=OFF - --max-connections=50 - --key-buffer-size=8M - --thread-cache-size=4 - --tmp-table-size=16M - --max-heap-table-size=16M - --table-open-cache=400 - --table-definition-cache=400 Enter fullscreen mode Exit fullscreen mode COMMAND_BLOCK:
command: # MySQL likes to use loads of RAM (~400MB) as standard... - --innodb-buffer-pool-size=128M - --innodb-log-buffer-size=8M - --performance-schema=OFF - --max-connections=50 - --key-buffer-size=8M - --thread-cache-size=4 - --tmp-table-size=16M - --max-heap-table-size=16M - --table-open-cache=400 - --table-definition-cache=400 COMMAND_BLOCK:
command: # MySQL likes to use loads of RAM (~400MB) as standard... - --innodb-buffer-pool-size=128M - --innodb-log-buffer-size=8M - --performance-schema=OFF - --max-connections=50 - --key-buffer-size=8M - --thread-cache-size=4 - --tmp-table-size=16M - --max-heap-table-size=16M - --table-open-cache=400 - --table-definition-cache=400 COMMAND_BLOCK:
resource "aws_iam_policy" "ghost_instance_policy" { name = "ghost-instance-policy" description = "Allows Ghost instance to read SSM secrets and write S3 backups" policy = jsonencode({ Version = "2012-10-17" Statement = [ { Effect = "Allow" Action = ["ssm:GetParameter", "ssm:GetParameters"] Resource = "arn:aws:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/ghost/*", Condition = { Bool = { "aws:SecureTransport" = "true" } } }, { Effect = "Allow" Action = ["s3:PutObject"] Resource = "${module.s3_bucket_backup.s3_bucket_arn}/*" }, # SSM KMS Key Access { Effect = "Allow" Action = [ "kms:GenerateDataKey", "kms:Decrypt" ] Resource = data.aws_kms_key.ssm_key.arn }, # Backup S3 KMS Key Access { Effect = "Allow" Action = [ "kms:GenerateDataKey", "kms:Decrypt" ] Resource = data.aws_kms_key.backup_key.arn } ] })
} Enter fullscreen mode Exit fullscreen mode COMMAND_BLOCK:
resource "aws_iam_policy" "ghost_instance_policy" { name = "ghost-instance-policy" description = "Allows Ghost instance to read SSM secrets and write S3 backups" policy = jsonencode({ Version = "2012-10-17" Statement = [ { Effect = "Allow" Action = ["ssm:GetParameter", "ssm:GetParameters"] Resource = "arn:aws:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/ghost/*", Condition = { Bool = { "aws:SecureTransport" = "true" } } }, { Effect = "Allow" Action = ["s3:PutObject"] Resource = "${module.s3_bucket_backup.s3_bucket_arn}/*" }, # SSM KMS Key Access { Effect = "Allow" Action = [ "kms:GenerateDataKey", "kms:Decrypt" ] Resource = data.aws_kms_key.ssm_key.arn }, # Backup S3 KMS Key Access { Effect = "Allow" Action = [ "kms:GenerateDataKey", "kms:Decrypt" ] Resource = data.aws_kms_key.backup_key.arn } ] })
} COMMAND_BLOCK:
resource "aws_iam_policy" "ghost_instance_policy" { name = "ghost-instance-policy" description = "Allows Ghost instance to read SSM secrets and write S3 backups" policy = jsonencode({ Version = "2012-10-17" Statement = [ { Effect = "Allow" Action = ["ssm:GetParameter", "ssm:GetParameters"] Resource = "arn:aws:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/ghost/*", Condition = { Bool = { "aws:SecureTransport" = "true" } } }, { Effect = "Allow" Action = ["s3:PutObject"] Resource = "${module.s3_bucket_backup.s3_bucket_arn}/*" }, # SSM KMS Key Access { Effect = "Allow" Action = [ "kms:GenerateDataKey", "kms:Decrypt" ] Resource = data.aws_kms_key.ssm_key.arn }, # Backup S3 KMS Key Access { Effect = "Allow" Action = [ "kms:GenerateDataKey", "kms:Decrypt" ] Resource = data.aws_kms_key.backup_key.arn } ] })
} COMMAND_BLOCK:
# MySQL
docker compose -f /opt/ghost/docker-compose.yml exec -T mysql mysqldump \ -u ghost \ -p"$MYSQL_PASSWORD" \ --single-transaction \ --quick \ --no-tablespaces \ ghost | gzip > "$BACKUP_DIR/ghost_$DATE.sql.gz" # Postgres
docker compose -f /opt/ghost/docker-compose.yml exec -T postgres pg_dump \ -U umami \ umami | gzip > "$BACKUP_DIR/umami_$DATE.sql.gz" # Upload
aws s3 cp "$BACKUP_DIR/ghost_$DATE.sql.gz" "s3://$S3_BUCKET/ghost/"
aws s3 cp "$BACKUP_DIR/umami_$DATE.sql.gz" "s3://$S3_BUCKET/umami/" Enter fullscreen mode Exit fullscreen mode COMMAND_BLOCK:
# MySQL
docker compose -f /opt/ghost/docker-compose.yml exec -T mysql mysqldump \ -u ghost \ -p"$MYSQL_PASSWORD" \ --single-transaction \ --quick \ --no-tablespaces \ ghost | gzip > "$BACKUP_DIR/ghost_$DATE.sql.gz" # Postgres
docker compose -f /opt/ghost/docker-compose.yml exec -T postgres pg_dump \ -U umami \ umami | gzip > "$BACKUP_DIR/umami_$DATE.sql.gz" # Upload
aws s3 cp "$BACKUP_DIR/ghost_$DATE.sql.gz" "s3://$S3_BUCKET/ghost/"
aws s3 cp "$BACKUP_DIR/umami_$DATE.sql.gz" "s3://$S3_BUCKET/umami/" COMMAND_BLOCK:
# MySQL
docker compose -f /opt/ghost/docker-compose.yml exec -T mysql mysqldump \ -u ghost \ -p"$MYSQL_PASSWORD" \ --single-transaction \ --quick \ --no-tablespaces \ ghost | gzip > "$BACKUP_DIR/ghost_$DATE.sql.gz" # Postgres
docker compose -f /opt/ghost/docker-compose.yml exec -T postgres pg_dump \ -U umami \ umami | gzip > "$BACKUP_DIR/umami_$DATE.sql.gz" # Upload
aws s3 cp "$BACKUP_DIR/ghost_$DATE.sql.gz" "s3://$S3_BUCKET/ghost/"
aws s3 cp "$BACKUP_DIR/umami_$DATE.sql.gz" "s3://$S3_BUCKET/umami/" - No exposed ports. There are zero inbound ports on my Lightsail instance (except SSH via AWS's browser console). All traffic flows through Cloudflare. - The Tunnel: The cloudflared container creates an encrypted outbound connection to the Cloudflare edge. When users access the domain, Cloudflare routes requests through this pre-established tunnel. The cloudflared container then acts as an internal reverse proxy, directing traffic to Ghost or Umami based on hostname.
- WAF & DDoS: Cloudflare's Web Application Firewall sits in front of everything. Rate limiting, bot detection and DDoS mitigation happen before traffic ever reaches my infrastructure.
- Caching: Static assets are cached at Cloudflare's edge. This reduces load on my tiny instance and means most requests never hit my server at all. Ghost's media assets are served directly from R2 via a custom domain. There's a little Cloudflare Worker that Ghost calls via webhook to purge the cache when necessary.
- Zero Trust Access: This is the key difference from last time. Sensitive routes — /ghost/* (admin panel) and the Umami dashboard are both protected by Cloudflare Access. Users must authenticate via email code before Cloudflare even allows the request through the tunnel.
how-totutorialguidedev.toaimllinuxkernelserverbashshellcronnetworkfirewallmysql