Complete PaaS Exit Playbook: Heroku to Self-Hosted in 72 Hours - 2025 Update

The Economics That Force the Move

Day 1: Containerize (8 hours)

Step 1: Create a Dockerfile

Step 2: Create docker-compose.yml

Step 3: Test locally

Day 2: Provision and Migrate Data (8 hours)

Step 1: Provision the server

Step 2: Bootstrap the server

Step 3: Migrate the database

Step 4: Migrate files/assets

Day 3: Go Live (4 hours)

Step 1: Deploy and verify

Step 2: Set up CI/CD

Step 3: Flip DNS

Step 4: Monitor for 48 hours

What You Keep

What You Gain

When NOT to Self-Host

We've migrated 6 startups off Heroku and Render in the past year. Average cost reduction: 87%. No client has gone back. This is the exact playbook we use: three days, start to finish.

Here's a real client breakdown (Series A, Rails app, ~5K DAU): the client paid $240/mo because they chose managed Postgres on a larger plan and a beefier server for headroom. Still 91% savings.

If you're on Heroku, you likely have a Procfile; the translation to a Dockerfile is direct.

Files and assets: if you're on Heroku's ephemeral filesystem, you're probably already on S3, so just update the credentials in your env. If you were relying on Heroku's built-in file storage, that data is gone on every deploy anyway; there's nothing to migrate.

With CI/CD in place, git push deploys, same as on Heroku.

Keep Heroku running for 48 hours as a rollback. Watch response times, error rates, database connections, and memory/CPU usage.

Be honest with yourself about whether self-hosting fits your team. For the other 90% of startups, though: you're overpaying for convenience you've already outgrown.

Free Migration Assessment: not sure if migration makes sense for your stack? We'll review your current Heroku/Render setup, estimate your self-hosted costs, and give you an honest recommendation in 15 minutes. Book a call: techsaas.cloud/contact
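The two savings figures can be reconciled with quick arithmetic. A sketch, assuming the client's prior PaaS bill was roughly $2,667/mo (an inferred figure; the article only states the $240/mo result and the 91% reduction):

```shell
# heroku_monthly is an assumed figure; only $240/mo and 91% come from the article
heroku_monthly=2667
selfhosted_monthly=240
savings=$(awk -v h="$heroku_monthly" -v s="$selfhosted_monthly" \
  'BEGIN { printf "%.0f", (h - s) / h * 100 }')
echo "monthly savings: ${savings}%"   # → monthly savings: 91%
```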

Dockerfile:

```dockerfile
# Heroku Procfile:
#   web: bundle exec puma -C config/puma.rb
# Docker equivalent:
FROM ruby:3.2-slim AS base
WORKDIR /app

# Install dependencies
RUN apt-get update && apt-get install -y \
      build-essential libpq-dev nodejs npm && \
    rm -rf /var/lib/apt/lists/*

COPY Gemfile Gemfile.lock ./
RUN bundle install --deployment --without development test

COPY . .
RUN bundle exec rake assets:precompile

# Production stage
FROM ruby:3.2-slim
WORKDIR /app
RUN apt-get update && apt-get install -y libpq-dev && \
    rm -rf /var/lib/apt/lists/*
COPY --from=base /app /app
USER 1000:1000
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
```
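One thing worth adding next to the Dockerfile (not in the original playbook): a `.dockerignore`, so `COPY . .` doesn't pull git history, logs, or local secrets into the image. The entries below are typical for a Rails repo; adjust to yours:

```shell
# Suggested .dockerignore for the build context (entries are typical, not prescriptive)
cat > .dockerignore <<'EOF'
.git
log/
tmp/
node_modules/
.env
EOF
```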
docker-compose.yml:

```yaml
services:
  app:
    build: .
    user: "1000:1000"
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      - DATABASE_URL=postgres://app:${DB_PASS}@postgres:5432/app_prod
      - REDIS_URL=redis://redis:6379/0
      - RAILS_ENV=production
      - SECRET_KEY_BASE=${SECRET_KEY}
    depends_on:
      - postgres
      - redis
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '2.0'
    networks:
      - backend

  postgres:
    image: postgres:16-alpine
    user: "999:999"
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${DB_PASS}
      - POSTGRES_DB=app_prod
    deploy:
      resources:
        limits:
          memory: 1G
    networks:
      - backend

  redis:
    image: redis:7-alpine
    volumes:
      - redisdata:/data
    deploy:
      resources:
        limits:
          memory: 256M
    networks:
      - backend

  traefik:
    image: traefik:v3
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik:/etc/traefik
    networks:
      - backend

volumes:
  pgdata:
  redisdata:

networks:
  backend:
```

Test locally:

```bash
docker compose up --build
# Hit localhost:3000, verify everything works
# Run your test suite against Docker
```

Provision the server:

```bash
# Hetzner CLI (or use their web UI)
hcloud server create \
  --name prod-01 \
  --type cx41 \
  --image ubuntu-24.04 \
  --ssh-key my-key \
  --location nbg1
```
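Before restoring data, it's worth cross-checking that the `DATABASE_URL` the app receives matches the compose service names (host `postgres`, database `app_prod`). A minimal sketch with a sample value; the real URL comes from your env file:

```shell
# Sample DATABASE_URL; real value comes from your .env
url='postgres://app:secret@postgres:5432/app_prod'
host=$(echo "$url" | sed -E 's#.*@([^:/]+).*#\1#')   # hostname between "@" and ":"
db=${url##*/}                                        # path component after last "/"
echo "host=$host db=$db"   # → host=postgres db=app_prod
[ "$host" = postgres ] && [ "$db" = app_prod ] && echo "matches compose"
```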
Bootstrap the server:

```bash
# SSH in and run
apt update && apt upgrade -y
apt install -y docker.io docker-compose-v2
systemctl enable docker

# Create deploy user
useradd -m -s /bin/bash deploy
usermod -aG docker deploy

# Set up firewall
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
```

Migrate the database:

```bash
# Export from Heroku
heroku pg:backups:capture --app your-app
heroku pg:backups:download --app your-app

# Import to new Postgres
docker compose up -d postgres
docker compose exec -T postgres pg_restore \
  -U postgres -d app_prod < latest.dump
```
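Once the data lives in your own Postgres, scheduled backups become your job; Heroku's `pg:backups` needs a replacement. A hedged sketch of a nightly dump via cron, not part of the original playbook (paths, schedule, and retention are placeholders):

```
# /etc/cron.d/pg-backup  (hypothetical; note the \% escape cron requires)
0 3 * * * deploy cd /app && docker compose exec -T postgres pg_dump -U postgres -Fc app_prod > /backups/app_prod-$(date +\%F).dump
```

Ship those dumps off the box (rsync, S3, anywhere) so a dead disk doesn't take the backups with it.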
Deploy and verify:

```bash
# On the server
docker compose up -d
docker compose logs -f app   # Watch for startup errors

# Health check
curl -s https://your-domain.com/health | jq .
```

Set up CI/CD:

```yaml
# .gitea/workflows/deploy.yml (or .github/workflows)
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        run: |
          ssh deploy@your-server "cd /app && git pull && docker compose up -d --build"
```

Flip DNS:

```bash
# Update your domain's A record to the new server IP
# TTL: start at 60 seconds, increase after verification
```

Watch for 48 hours:

- Response times (should be same or faster)
- Error rates
- Database connections
- Memory/CPU usage

What you gain:

- Full control: no vendor can change pricing under you
- 10x capacity headroom: a $15/month server handles more than 4 Heroku dynos
- Better debugging: SSH into the box, inspect everything
- No add-on tax: every Heroku add-on has a free self-hosted alternative

When NOT to self-host:

- No ops experience and no budget to learn: stay on PaaS until you have someone who can SSH into a server confidently
- Compliance requirements: some industries require specific cloud certifications
- True auto-scaling needs: if you go from 100 to 100,000 requests in seconds, managed infrastructure is worth it
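One gap in the 48-hour watch list: Docker's default json-file logging grows without bound, and on a self-hosted box that disk is now your problem. A suggested `/etc/docker/daemon.json` (my addition, not from the original playbook):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Restart the Docker daemon after changing it; the options apply to newly created containers.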