Tools: How We Dogfood Deploynix: Running Our Own Platform on Our Own Platform (2026)

Our Infrastructure Setup

The Deployment Pipeline

Monitoring and Health Alerts

Real-Time Monitoring with WebSockets

Database Backups

SSL and Domain Management

Lessons Learned the Hard Way

The Feedback Loop

Why This Matters for You

There is a particular kind of pressure that comes from running your deployment platform on itself. If Deploynix goes down, we cannot use Deploynix to fix Deploynix. It is turtles all the way down, and every turtle needs to be extremely reliable.

Dogfooding is not just a buzzword we throw around at team meetings. It is the single most important quality assurance practice we have. Every feature we ship, every workflow we design, every edge case we handle, we experience firsthand, because we are our own most demanding customer. When a deployment takes two seconds longer than it should, we feel it. When a monitoring alert fires with insufficient context, we are the ones squinting at it at midnight.

This post pulls back the curtain on exactly how we run Deploynix on Deploynix: our infrastructure choices, our deployment pipeline, the monitoring that keeps us honest, and the lessons we have learned the hard way.

Our Infrastructure Setup

The Deploynix platform itself is a Laravel 12 application. It runs on the same infrastructure that any Deploynix customer would set up, with zero special treatment. We do not have secret internal tools or a separate deployment system. What you get is what we use.

Our production environment consists of several Deploynix-managed servers provisioned on Hetzner for our primary European infrastructure. Here is the breakdown:

App Servers: We run two app servers behind a Deploynix load balancer using the Least Connections balancing method. Each app server runs FrankenPHP as our Octane driver, which gives us the persistent worker processes and performance characteristics that a deployment platform demands. When a customer triggers a deployment, the request needs to be handled quickly and reliably, and FrankenPHP delivers that.

Database Server: A dedicated MySQL database server handles all persistent data.
We chose a dedicated database server rather than running MySQL alongside the application because database operations during heavy deployment periods can be resource-intensive. Separating concerns means a burst of concurrent deployments does not starve the database of CPU or memory.

Cache Server: A dedicated Valkey server handles our caching, session storage, and queue management. Valkey is Redis-compatible, so everything in the Laravel ecosystem that works with Redis works seamlessly with Valkey. We moved to Valkey because it is genuinely open source and actively maintained by a broad community.

Worker Servers: We run two dedicated worker servers that process our queue jobs. These handle the heavy lifting: provisioning servers, running deployments, syncing SSL certificates, executing backup routines, and processing webhook events from GitHub, GitLab, and Bitbucket. Worker servers are arguably the most critical part of our infrastructure because they do the actual work that customers are paying for.

Load Balancer: A single load balancer distributes incoming traffic across our app servers. We use Least Connections rather than Round Robin because our requests vary wildly in duration. An API call to check server status returns in milliseconds, but a request that triggers a full deployment and streams output back via WebSockets can hold a connection open for minutes.

The Deployment Pipeline

Every code change to the Deploynix platform goes through the same deployment pipeline that our customers use. Here is what that looks like in practice.

Our repository lives on GitHub, and we have configured automatic deployments through Deploynix's Git provider integration. When we merge a pull request to our production branch, Deploynix receives a webhook from GitHub and initiates a zero-downtime deployment.
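Put together, a zero-downtime deployment of this shape can be sketched as a dry-run script. Everything here is illustrative: the paths, hook boundaries, repository name, and exact commands are assumptions rather than Deploynix's actual implementation, and `run` echoes each step instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch of a zero-downtime deploy. `run` only echoes, so the
# script is safe to execute anywhere. All paths and names are hypothetical.
set -eu
run() { echo "+ $*"; }

RELEASE="/srv/app/releases/$(date +%Y%m%d%H%M%S)"

# Before-install hook: maintenance mode on this server only; the load
# balancer keeps routing traffic to the other app server.
run php artisan down

# Install phase: build the new release in its own directory.
run git clone --depth 1 git@github.com:example/app.git "$RELEASE"
run composer install --no-dev --optimize-autoloader
run npm ci
run npm run build

# After-install hook: migrations and cache warm-up.
run php artisan migrate --force
run php artisan config:cache
run php artisan route:cache
run php artisan view:cache

# Activation: atomically repoint the `current` symlink at the new release.
run ln -sfn "$RELEASE" /srv/app/current

# After-activation hook: reload workers, leave maintenance mode.
run php artisan octane:reload
run php artisan queue:restart
run php artisan up
```

The key property is that every expensive step happens in the new release directory while the old release keeps serving traffic; only the symlink swap is visible to users.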
The deployment process executes the following steps in order, using the deploy script we have configured in the Deploynix dashboard:

Before Install Hook: We put the application into maintenance mode, but only for the specific server being deployed. Since we run behind a load balancer, the other app server continues serving traffic. This is one of those features we built specifically because we needed it ourselves.

Install Phase: Deploynix clones the latest code from our repository, runs composer install --no-dev --optimize-autoloader to pull in PHP dependencies, and runs npm ci && npm run build to compile our frontend assets. The install phase runs in a new release directory, completely separate from the currently running code.

After Install Hook: This is where database migrations run. We execute php artisan migrate --force to apply any schema changes. We also run php artisan config:cache, php artisan route:cache, and php artisan view:cache to optimize performance.

Activation Phase: Deploynix atomically swaps the symlink from the old release to the new one. This is the zero-downtime magic. The old code serves requests right up until the symlink changes, and the new code takes over instantly. There is no gap, no 502 errors, no dropped connections.

After Activation Hook: We restart the Octane workers so they pick up the new code, restart the queue workers on our worker servers, and take the server out of maintenance mode.

The entire deployment takes roughly 45 seconds per server. Because we deploy to our app servers sequentially behind the load balancer, there is always at least one server handling requests throughout the entire process.

Monitoring and Health Alerts

We eat our own monitoring for breakfast. Every server in our infrastructure has Deploynix health monitoring enabled, and we have configured alert thresholds that match what we recommend to our customers.

CPU Monitoring: We alert when CPU usage exceeds 80% for more than five minutes.
During heavy deployment periods, our worker servers can spike to 70-75% CPU, so we tuned the threshold to avoid false positives while still catching genuine problems.

Memory Monitoring: Memory alerts fire at 85% utilization. FrankenPHP worker processes maintain a stable memory footprint, but we have caught memory leaks in queue jobs through this monitoring. One time, a job that processed large Git repositories was not releasing memory properly after cloning. We caught it because our own monitoring flagged the worker server's memory climbing steadily over a few hours.

Disk Monitoring: Disk usage alerts at 80%. Deployment releases accumulate over time, and while Deploynix automatically cleans up old releases, log files and database backups can fill a disk faster than you expect. We learned this the hard way during a feature sprint when we were shipping multiple deployments per day.

Health Checks: Deploynix pings a health endpoint on each app server every minute. If a server fails three consecutive health checks, we get an alert. We have configured our Laravel health endpoint to verify database connectivity, cache accessibility, and queue worker responsiveness. A 200 response means everything is genuinely healthy, not just that Nginx is running.

Real-Time Monitoring with WebSockets

We use Laravel Reverb for our WebSocket infrastructure, and naturally, Reverb runs on our Deploynix-managed servers. When you watch a deployment log streaming in real time on the Deploynix dashboard, that data flows through Reverb.

Running Reverb on our own platform was one of the most valuable dogfooding decisions we made. We discovered connection timeout issues, reconnection edge cases, and scaling concerns that we never would have found through synthetic testing alone. When a customer deploys and watches the output stream, they expect every line to appear instantly. We expect the same when we deploy, and that expectation drove us to optimize our WebSocket infrastructure significantly.
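The three-strikes health-check policy described above can be sketched in a few lines of shell. This is a hedged, self-contained sketch: `probe` stands in for an HTTP request to the health endpoint (something like `curl -fsS` against the app's health URL), and here it is hard-wired to fail so the alerting path is visible:

```shell
# probe() stands in for an HTTP check of the health endpoint. It is
# hard-wired to fail in this sketch so the alert path is exercised.
probe() { return 1; }

FAILS=0
for minute in 1 2 3; do          # one probe per minute in the real schedule
  if probe; then
    FAILS=0                      # any success resets the streak
  else
    FAILS=$((FAILS + 1))
  fi
done

# Three consecutive failures cross the alert threshold.
if [ "$FAILS" -ge 3 ]; then
  echo "ALERT: 3 consecutive health check failures"
fi
```

Resetting the counter on any success is the important detail: a single slow response should not accumulate toward an alert across an otherwise healthy hour.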
Database Backups

Our production database is backed up daily using Deploynix's built-in backup system. We store backups in an AWS S3 bucket with versioning enabled and a lifecycle policy that retains daily backups for 30 days and weekly backups for a year.

We test backup restoration quarterly. Not because we have ever needed to restore from backup in production, but because a backup you have never tested restoring is not really a backup. It is a hope. Deploynix makes it straightforward to download a backup and restore it to a test server, and we go through that process regularly to verify that our backup pipeline produces usable snapshots.

SSL and Domain Management

The Deploynix platform itself runs behind SSL certificates that are automatically provisioned and renewed by Deploynix. We use Let's Encrypt certificates with DNS validation through Cloudflare, which allows us to obtain wildcard certificates for our vanity domain system.

Every Deploynix customer gets a free *.deploynix.cloud vanity domain for their sites, and that wildcard certificate is managed by our own certificate automation. We issue the certificate on our application server, store it encrypted in the database, and sync it to all managed servers that need it. The renewal process runs daily as a scheduled task, checking whether the certificate is within 30 days of expiry and renewing it automatically if so.

Lessons Learned the Hard Way

Dogfooding has taught us things that no amount of testing or user research could have revealed. Here are the lessons that shaped the platform most significantly.

Deploy scripts need to be idempotent. Early on, we had a deploy script that ran a database seeder. The seeder was not idempotent, so if a deployment failed partway through and we redeployed, we ended up with duplicate data. Now Deploynix encourages deploy scripts that can run multiple times safely, and we document this pattern prominently.

Worker servers need independent deployment. When we first started, deploying to an app server would also restart queue workers.
But queue workers were processing long-running jobs, and restarting them mid-job caused failures. We restructured so that worker servers deploy independently, with graceful queue worker restarts that finish current jobs before picking up new code.

Rollback must be instant. We built the rollback feature because we needed it ourselves. A bad migration once made it past our staging environment and caused errors in production. Being able to roll back to the previous release in under five seconds, by simply swapping the symlink back, saved us from extended downtime. The rollback feature remains one of the most important things Deploynix offers.

Firewall rules matter more than you think. Our database server should only accept connections from our app servers and worker servers. When we first set up our infrastructure, the database was accessible from any server on the same network. Deploynix's firewall management now makes it simple to restrict access, and our own database server has strict rules allowing only the specific IP addresses of our app and worker servers.

Scheduled deployments are not just for customers. We built the scheduled deployment feature because enterprise customers asked for it, but we started using it ourselves for deploying database-heavy migrations during low-traffic hours. Being able to merge a PR on Friday afternoon and schedule the deployment for Saturday at 3 AM gives us peace of mind without requiring someone to be online at odd hours.

The Feedback Loop

The most valuable aspect of dogfooding is the tight feedback loop it creates. When we add a new feature to Deploynix, we use it immediately in our own workflow. If the feature is confusing, we feel the confusion. If it is slow, we feel the slowdown. If it is missing an edge case, we hit the edge case. This feedback loop means that by the time a feature reaches our customers, it has already survived the scrutiny of a team that depends on it for their livelihood.
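Rollback is a good example of why that scrutiny matters: the whole feature boils down to repointing one symlink. A minimal sketch with hypothetical release directories (a fully atomic variant would create the new link beside `current` and `mv -T` it into place):

```shell
# Sketch: instant rollback as a symlink swap. Release paths are
# hypothetical stand-ins for timestamped deploy artifacts.
set -eu
APP=$(mktemp -d)
mkdir -p "$APP/releases/2026_01_10" "$APP/releases/2026_01_11"

ln -s "$APP/releases/2026_01_11" "$APP/current"   # the bad release is live

# Roll back: repoint `current` at the previous release. -n treats the
# existing symlink as the link itself rather than following it.
ln -sfn "$APP/releases/2026_01_10" "$APP/current"

readlink "$APP/current"   # prints the 2026_01_10 release path
```

Because old releases stay on disk until cleanup, rolling back never rebuilds anything; it only changes which directory the web server resolves.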
We are not going to ship a deployment system with a subtle bug in the rollback mechanism, because our own rollbacks need to work flawlessly.

Every Deploynix team member has access to our production Deploynix dashboard. Developers, designers, and support staff all see the same interface that customers see. There is no internal admin panel with extra information or shortcuts. When a team member says "this workflow feels clunky," they are speaking from direct experience.

Why This Matters for You

When you use Deploynix to manage your Laravel applications, you are using the same platform, the same features, and the same infrastructure patterns that we use ourselves. We are not building a tool and throwing it over the wall. We are building a tool that we need to be excellent because our own production systems depend on it. Every deployment you trigger goes through code paths that we exercise dozens of times per week. Every health alert you receive uses the same monitoring pipeline that watches our own servers. Every backup you configure uses the same system that protects our own data.

Dogfooding does not guarantee perfection. We still find bugs, we still have incidents, and we still have features that need improvement. But it guarantees that we find those problems quickly, that we feel the urgency to fix them, and that we understand the real-world impact of every decision we make. That is the kind of accountability that makes a deployment platform trustworthy.