Tools: dotCloud Is Dead. Its Internal Tool Runs the Internet. - Full Analysis

On March 15, 2013, Solomon Hykes gave a five-minute lightning talk at PyCon in Santa Clara. He showed maybe 15 people in a half-empty room how to package an app into an isolated container and run it anywhere. Polite applause. The video later passed half a million views. Within 18 months, his company would rename itself after that little demo tool and be valued at over a billion dollars.

The company was called dotCloud. The tool was Docker. And it almost never shipped.

2010: dotCloud and the PaaS Graveyard

Solomon Hykes and his co-founders started dotCloud around 2010. The pitch was straightforward: a Platform-as-a-Service that could run any language stack. Push your code, they'd handle the infrastructure. Python, Ruby, Node.js, Java, PHP, whatever. At a time when Heroku was mostly Ruby and Python, dotCloud's multi-language support was the differentiator.

The problem: the PaaS market was brutal. Heroku had developer mindshare locked down. Google App Engine was free. Cloud Foundry went open source. Engine Yard, Nodejitsu, and about a dozen others were fighting for the same pool of developers who wanted managed hosting but didn't want to SSH into a box.

dotCloud went through Y Combinator (S10 batch), raised a $10M Series A, hired a team, and built the platform. It worked fine. It wasn't bad. But "not bad" doesn't win in a market where Heroku's `git push heroku master` had already become muscle memory for an entire generation of developers.

By late 2012, dotCloud was stuck. Not dead, but not growing. The kind of startup purgatory where you have enough revenue to keep the lights on but not enough momentum to raise another round. Everyone on the team could feel it stalling.

The Tool Nobody Was Supposed to See

Under the hood, dotCloud ran customer applications in isolated Linux containers (LXC), not VMs. Each customer's app got its own container with its own filesystem, network stack, and process space.
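That isolation comes from plain Linux kernel namespaces rather than anything dotCloud invented. As a minimal illustration (not dotCloud's actual tooling), you can inspect any process's namespaces directly:

```shell
# Every process has a set of namespaces under /proc/<pid>/ns/.
# A containerized process simply points at different namespace IDs
# than the host's PID 1 for pid, net, mnt, and friends.
ls -l /proc/self/ns/
```

On a typical Linux host this lists entries such as `pid`, `net`, `mnt`, `uts`, and `ipc`; processes in the same container share those IDs, while the host keeps its own.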
The engineering team had built an internal tool to manage all of this: packaging apps into container images, distributing them across servers, running them in isolation. This internal tool was good. Genuinely good. The team kept refining it even as the PaaS stalled, because the deployment infrastructure was the most interesting engineering problem they had.

Here's the thing about containers in 2012, though. Using LXC directly was miserable:

```shell
# LXC circa 2012: manual, fragile, enjoyed by nobody
sudo lxc-create -t ubuntu -n myapp
sudo lxc-start -n myapp -d
sudo lxc-attach -n myapp -- apt-get update
sudo lxc-attach -n myapp -- apt-get install -y python3 python3-pip
sudo lxc-attach -n myapp -- pip install -r /mnt/shared/requirements.txt
# now configure networking by hand
# now configure storage by hand
# now pray it works the same way on the other server
```

Nobody outside sysadmin circles wanted to touch this. Containers were powerful but actively hostile to application developers. dotCloud's internal tool wrapped all of this complexity behind a simple interface. Build. Ship. Run.

Hykes started wondering: what if that wrapper was more valuable than the PaaS it powered?

PyCon 2013: Five Minutes

March 15, 2013. PyCon Santa Clara. Solomon got a lightning talk slot. Not a keynote. Not a regular session. Five minutes, squeezed in at the end of a track.

He opened his laptop and ran a live demo. Build a container image. Push it. Pull it on another machine. Run it. Same environment, same dependencies, same behavior. The whole thing took about three minutes. Two commands:

```shell
docker build -t myapp .
docker run -p 5000:5000 myapp
```

What previously took a page of LXC configuration and manual networking setup was now two lines in a terminal. The room was small and the applause was polite. But the GitHub repo went up the same day. Thousands of stars within a week. The most-discussed infrastructure project on Hacker News within a month. dotCloud's PaaS was suddenly the least interesting thing about the company.

The timing couldn't have been better. AWS was getting more complex, not less. The "works on my machine" problem was universal and unsolved. Heroku's walled garden felt increasingly limiting for anyone who needed more control. Docker arrived at the exact moment developers wanted containers but didn't want to deal with LXC.

The Dockerfile Changed Everything

Containers existed before Docker. LXC since 2008. FreeBSD jails since 2000. Solaris Zones since 2004. So why did Docker win? In large part because of a plain text file:

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

Look how boring that is. That's the whole point. A Dockerfile is a recipe any developer can read, version control, and share. It's text. You can review it in a PR, diff two versions, paste it in a README. Somebody on the other side of the planet can build the exact same image from the same file.

Before Docker, container configuration was imperative: run these commands in this order and hope the resulting state is correct. The Dockerfile made it declarative: describe what you want, and Docker figures out how to build it. The layered caching system meant rebuilds were fast. Change your application code and only the last few layers rebuild; dependencies stay cached.

Then there was Docker Hub. Think npm, but for container images:

```shell
docker push myregistry/myapp:v1.2
docker pull myregistry/myapp:v1.2
```

The npm/GitHub insight applied to infrastructure: make sharing and consuming trivially easy, and adoption explodes. This wasn't an original idea. It was a well-executed stolen idea, and execution mattered more.

dotCloud's years building a developer-facing PaaS directly shaped these decisions. The layered image system came from optimizing PaaS deploy speed. The registry model came from distributing app images across dotCloud's internal infrastructure. The minimal CLI came from years of learning what developers actually want (fewer moving parts, always). The PaaS failed. The PaaS experience built the tool that won.

Kill the Product, Become the Tool

By October 2013, just seven months after the PyCon talk, dotCloud Inc. officially renamed itself Docker Inc. The PaaS product was sold off to cloudControl (which later went bankrupt itself). The entire company pivoted to the open-source container tool.

This decision was risky. Open-source companies in 2013 didn't have a clear playbook for making money. Docker's strategy was essentially: get massive adoption, figure out the business model later.

The adoption part worked. Docker became a verb. "Just dockerize it" entered the developer vocabulary. Every CI/CD pipeline, every cloud provider, every DevOps job description absorbed Docker like it had always been there. The business model part took longer. And it never fully worked.

The Kubernetes Problem

Google had been running containers internally for over a decade on a system called Borg. When Docker made containers mainstream, Google open-sourced Kubernetes in 2014 as the answer to "okay, you've got containers, now how do you run thousands of them?"

Docker's answer was Docker Swarm: simpler, more opinionated, integrated directly into the Docker CLI. Kubernetes was harder to learn. More YAML. More concepts. More abstraction layers.

```shell
# Docker Swarm: the simpler bet that lost
docker swarm init
docker service create --replicas 3 --name web myapp:latest

# Kubernetes: the complex bet that won
# (a deployment.yaml runs 30+ lines, but you get an ecosystem)
kubectl apply -f deployment.yaml
kubectl get pods
```

Kubernetes won. Completely. The CNCF (Cloud Native Computing Foundation), backed by Google, Red Hat, and others, built a massive ecosystem around it. AWS, Azure, and GCP all launched managed Kubernetes services. Docker Swarm became a footnote that people mention in job interviews when they want to seem thorough.

Solomon Hykes stepped down as CTO in 2017 and left Docker Inc. in 2018. The company went through layoffs, sold its enterprise business to Mirantis in 2019, and refocused on Docker Desktop and Docker Hub. They survived. But the company that kicked off the container revolution couldn't capture most of the value it created.

What dotCloud Left Inside Docker

Docker's DNA is dotCloud's DNA. You can still see it today.

Layered images came from dotCloud needing to deploy customer app updates fast. Shipping an entire filesystem every time was too slow; the layered approach means only changed layers get transferred. A PaaS optimization that became a container primitive.

The daemon architecture (dockerd) came from dotCloud's infrastructure, where a central process managed all customer containers on a host. Every time you run `docker ps`, you're talking to a design pattern from a PaaS that doesn't exist anymore.

The registry model mirrors how dotCloud distributed app images across their hosting nodes. Docker Hub is dotCloud's internal image distribution system, turned inside out and made public.

When you type `docker build` today, you're using a tool shaped by the problems of a PaaS that lost to Heroku in 2011.

What Builders Should Take From This

dotCloud's team kept improving their deployment tooling even when the PaaS product was stalling, because it was the most interesting engineering problem they had. They didn't build Docker as a strategic pivot. They built it because they were engineers and the problem was compelling. Then they recognized something most organizations suppress: their internal tool had more pull than their actual product.

The willingness to kill dotCloud and let Docker live was the hard decision. Most companies can't make it. The product has a team, a roadmap, customers, revenue. The side project has enthusiasm and GitHub stars. Choosing the side project feels irresponsible until it's obviously the right call, and by then it's often too late.

Sometimes the plumbing is the product. The trick is recognizing it before the company runs out of runway.

Watch the talk: Solomon Hykes' original PyCon 2013 lightning talk is still on YouTube. It's five minutes long, and you can feel the moment the audience starts paying attention. Worth watching just for the historical artifact.

My project: Hermes IDE | GitHub - Me: gabrielanhaia