How To Install and Use Docker Compose on Ubuntu


Source: DigitalOcean

By Tony Tran, Erika Heidi, and Vinayak Baranwal

Docker simplifies the process of managing application processes in containers. While containers are similar to virtual machines in certain ways, they are more lightweight and resource-friendly. This allows developers to break down an application environment into multiple isolated services.

For applications depending on several services, orchestrating all the containers to start up, communicate, and shut down together can quickly become unwieldy. Docker Compose is a tool that allows you to run multi-container application environments based on definitions set in a YAML file. It uses service definitions to build fully customizable environments with multiple containers that can share networks and data volumes.

This guide will walk you through installing Docker Compose on an Ubuntu server and running a simple container. From there, you will learn to build a multi-service environment using a WordPress application and a MySQL database. We will also cover more advanced topics, including scaling services, defining custom networks, and using modular include directives. Finally, this article provides a migration guide from the older docker-compose (v1) to the modern docker compose (v2) and a detailed section on troubleshooting common issues like port conflicts and permission errors.

To follow this article, you will need an Ubuntu server with Docker installed and a non-root user with sudo privileges.

Note: Starting with Docker Compose v2, Docker has migrated towards using the compose CLI plugin command, and away from the original docker-compose, as documented in our How to Install Docker Compose on Ubuntu (Step-by-Step Guide). While the installation differs, in general the actual usage involves dropping the hyphen from docker-compose calls to become docker compose. For full compatibility details, check the official Docker documentation on command compatibility between the new compose and the old docker-compose.

There are two ways to install Docker Compose on Ubuntu: as a plugin from Docker's official apt repository, or as a standalone binary from its GitHub releases page. We'll discuss both ways in this section.
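The repository-based route condenses to commands like the following. This is a sketch based on Docker's documented apt installation steps, and it assumes the repository has already been added:

```shell
# Install the Compose plugin from Docker's apt repository
# (assumes the repository was already set up per Docker's install docs)
sudo apt update
sudo apt install docker-compose-plugin

# Verify the installation -- note the space, not a hyphen
docker compose version
```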
First, let’s set up the Docker apt repository. Now you can install Docker Compose using the following command:

Docker Compose is now successfully installed on your system. To verify that the installation was successful, you can run:

You’ll see output similar to this:

To make sure you obtain the most up-to-date stable version of Docker Compose, you’ll download this software from its official GitHub repository. First, confirm the latest version available on its releases page. At the time of this writing, the most current stable version is v2.40.2.

Use the following command to download it:

Next, set the correct permissions so that the docker compose command is executable:

In the next section, you’ll see how to set up a docker-compose.yml file and get a containerized environment up and running with this tool.

To demonstrate how to set up a docker-compose.yml file and work with Docker Compose, you’ll create a web server environment using the official Nginx image from Docker Hub, the public Docker registry. This containerized environment will serve a single static HTML file.
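Before moving on, the standalone-binary route described above might look like this. The version matches the v2.40.2 release named in the text; adjust the version and architecture to match your system:

```shell
# Create the CLI plugins directory for the current user
mkdir -p ~/.docker/cli-plugins/

# Download the Compose v2.40.2 binary from the GitHub releases page
curl -SL https://github.com/docker/compose/releases/download/v2.40.2/docker-compose-linux-x86_64 \
  -o ~/.docker/cli-plugins/docker-compose

# Make it executable so the docker CLI can find and run it
chmod +x ~/.docker/cli-plugins/docker-compose

# Confirm the plugin is working
docker compose version
```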
Start off by creating a new directory in your home folder, and then moving into it:

In this directory, set up an application folder to serve as the document root for your Nginx environment:

Using your preferred text editor, create a new index.html file within the app folder:

Place the following content into this file:

Save and close the file when you’re done. If you are using nano, you can do that by typing CTRL+X, then Y and ENTER to confirm.

Next, create the docker-compose.yml file:

Insert the following content in your docker-compose.yml file:

In modern Docker Compose, the version field is optional and often omitted, as Compose can automatically detect the configuration version. The example above does not include a version field, which is the recommended approach for most new projects. You only need to specify version for legacy compatibility.

You then have the services block, where you set up the services that are part of this environment. In your case, you have a single service called web. This service uses the nginx:alpine image and sets up a port redirection with the ports directive. All requests on port 8000 of the host machine (the system from which you’re running Docker Compose) will be redirected to the web container on port 80, where Nginx will be running.

The volumes directive will create a shared volume between the host machine and the container. This shares the local app folder with the container, mounting it at /usr/share/nginx/html inside the container, which overrides the default document root for Nginx.

Save and close the file. You have set up a demo page and a docker-compose.yml file to create a containerized web server environment that will serve it. In the next step, you’ll bring this environment up with Docker Compose.

With the docker-compose.yml file in place, you can now execute Docker Compose to bring your environment up.
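The steps above can be condensed into a single session. The HTML content here is placeholder text of my own; the project directory name is an assumption, while the Compose file follows the description in this step:

```shell
# Create the project directory and move into it
mkdir -p ~/compose-demo
cd ~/compose-demo

# Create the document root for Nginx
mkdir -p app

# A minimal demo page (placeholder content)
cat > app/index.html <<'EOF'
<!DOCTYPE html>
<html>
  <head><title>Docker Compose Demo</title></head>
  <body><h1>This is a Docker Compose demo page.</h1></body>
</html>
EOF

# One web service: nginx:alpine, host port 8000 -> container port 80,
# with ./app mounted over Nginx's default document root
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8000:80"
    volumes:
      - ./app:/usr/share/nginx/html
EOF
```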
The following command will download the necessary Docker images, create a container for the web service, and run the containerized environment in background mode:

Docker Compose will first look for the defined image on your local system, and if it can’t locate the image it will download it from Docker Hub. You’ll see output like this:

Note: If you encounter a “permission denied” error when running docker compose up, this typically means your non-root user does not have permission to access the Docker daemon’s socket. By default, the Docker daemon binds to a Unix socket (/var/run/docker.sock) which is owned by the root user. To fix this, you must add your non-root user to the docker group, which is created during Docker’s installation. Run the following command to add your user to the docker group: After running this command, you will need to log out and log back in for the group changes to take effect. You can also activate the changes for the current terminal session by typing: This should resolve any permission errors related to the Docker socket. For a full walkthrough, please refer to Step 2 of How to Install Docker on Ubuntu – Step-by-Step Guide.

Your environment is now up and running in the background. To verify that the container is active, you can run:

This command will show you information about the running containers and their state, as well as any port redirections currently in place:

You can now access the demo application by pointing your browser to either localhost:8000 if you are running this demo on your local machine, or your_server_domain_or_IP:8000 if you are running it on a remote server. You’ll see a page like this:

The shared volume you’ve set up within the docker-compose.yml file keeps your app folder files in sync with the container’s document root. If you make any changes to the index.html file, they will be automatically picked up by the container and reflected in your browser when you reload the page.
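The commands used in this step, collected in one place (the group fix is only needed if you hit the permission error described above):

```shell
# Start the environment in detached (background) mode
docker compose up -d

# If you see "permission denied" on /var/run/docker.sock:
sudo usermod -aG docker ${USER}   # add your user to the docker group
newgrp docker                     # apply the change in the current session

# Verify the container is active and check port redirections
docker compose ps
```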
In the next step, you’ll see how to manage your containerized environment with Docker Compose commands.

You’ve seen how to set up a docker-compose.yml file and bring your environment up with docker compose up. You’ll now see how to use Docker Compose commands to manage and interact with your containerized environment.

To check the logs produced by your Nginx container, you can use the logs command:

You’ll see output similar to this:

If you want to pause the environment execution without changing the current state of your containers, you can use:

To resume execution after issuing a pause:

The stop command will terminate the container execution, but it won’t destroy any data associated with your containers:

If you want to remove the containers, networks, and volumes associated with this containerized environment, use the down command:

Notice that this won’t remove the base image used by Docker Compose to spin up your environment (in your case, nginx:alpine). This way, whenever you bring your environment up again with docker compose up, the process will be much faster since the image is already on your system. In case you want to also remove the base image from your system, you can use:

Note: Please refer to our guide on How to Install Docker on Ubuntu – Step-by-Step Guide for a more detailed reference on Docker commands.

The true power of Docker Compose is in managing multiple services that work together. The Nginx example was a single service. Let’s create a more practical, multi-service application: a WordPress website connected to a MySQL database. This setup involves two services: wordpress (running the application) and db (running the database). We will also use Docker volumes to ensure the database data persists even if the container is removed.

Let’s create a new directory for this application:

For this example, we don’t need any local files, as the images will contain all the necessary software.
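For reference, the Compose file about to be created might look like the following sketch. The image tags, volume names, and credential values here are illustrative assumptions based on the official wordpress and mysql images; the service names (wordpress, db) and host port 8001 follow the text:

```yaml
services:
  db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: your_db_password        # replace with a strong password
      MYSQL_ROOT_PASSWORD: your_root_password # replace with a strong password
    volumes:
      - db_data:/var/lib/mysql                # database files persist here

  wordpress:
    image: wordpress:latest
    restart: always
    depends_on:
      - db                                    # start the database first
    ports:
      - "8001:80"                             # site reachable on host port 8001
    environment:
      WORDPRESS_DB_HOST: db                   # the service name doubles as hostname
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: your_db_password
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html

volumes:
  db_data:
  wp_data:
```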
Create a new docker-compose.yml file:

Paste the following configuration. This file is more complex, so we will examine each part.

Note: We’ve hardcoded the password values here for illustration purposes. In an actual environment, make sure to use environment variables in a .env file to avoid exposing your credentials.

Save and close the file. Remember to replace the placeholder values with strong, secure passwords.

Let’s break down the new directives in this file:

Now, bring this multi-service application up:

Compose will pull both the mysql and wordpress images and then create the containers, starting the db service first. You can now access your new WordPress site by navigating to localhost:8001 or your_server_domain_or_IP:8001 in your browser. You should see the WordPress installation screen. For a more detailed example, check out our article on How To Install WordPress With Docker Compose.

Beyond web applications, Docker Compose is an extremely useful tool for creating reproducible data science and machine learning (AI/ML) environments. AI/ML projects are known for their complex dependencies, including specific Python versions, libraries like TensorFlow or PyTorch, and system-level drivers like the NVIDIA CUDA Toolkit. Docker Compose captures this entire environment in configuration files, solving the “it works on my machine” problem, which is critical for reproducible research.

In this example, you will create a multi-service AI/ML environment consisting of a custom JupyterLab service and a PostgreSQL database service.

Prerequisite: This example requires an NVIDIA GPU on your host machine and the NVIDIA Container Toolkit to be installed. Without it, the container will fail to start when requesting GPU resources.

First, create a directory for your project. Inside it, you will create a docker-compose.yml file, a jupyter directory, a Dockerfile for Jupyter, and a requirements.txt file. The structure should look something like this:

This file lists the Python packages for your data science environment.
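The requirements.txt about to be created might list something like the following. The exact package choices are illustrative assumptions: psycopg2-binary for the PostgreSQL connection and TensorFlow as the deep learning framework.

```
pandas            # data manipulation
numpy             # numerical arrays
sqlalchemy        # database toolkit
psycopg2-binary   # PostgreSQL driver
tensorflow        # deep learning framework
```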
Add your required packages. For this example, we’ll include libraries for data manipulation, database connection, and a deep learning framework. Save and close the file.

This file defines your custom JupyterLab service. It uses the official Jupyter scipy-notebook image (hosted on Quay.io) as its base and installs the packages from requirements.txt. Paste the following content:

This file instructs Docker to use quay.io/jupyter/scipy-notebook as the starting point, copy your requirements.txt into the container, and then use pip to install the packages.

Now, create the main docker-compose.yml file. This file will orchestrate both the db service and your custom jupyter service. Paste the following configuration, replacing the placeholder values with your own secure credentials.

This configuration file introduces several important concepts:

With your files in place, you are ready to build and run the services. From your ai-project directory, run the docker compose up command. You must add the --build flag the first time to tell Compose to build your custom jupyter image.

Compose will first build the jupyter image (which may take a few minutes as it downloads TensorFlow), then pull the postgres image, and finally start both containers.

You can now access the JupyterLab interface by navigating to http://localhost:8888 (or your_server_ip:8888) in your browser. You will be prompted for a token, which you can get from the container logs:

Look for a line similar to http://127.0.0.1:8888/lab?token=a1b2c3d4e5f6...

Inside a Jupyter notebook, you can now connect to your database using the hostname db and the credentials you provided. Your environment also has access to the host’s GPU for model training.

Docker Compose includes features for scaling services and managing the networks they communicate on. Imagine your Nginx web server from Step 2 is getting too much traffic. You can scale the web service to run multiple container instances.
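Before moving on to scaling, here is a sketch of the AI/ML Compose file described above. The PostgreSQL image tag, credentials, and notebook volume path are illustrative assumptions; the GPU reservation uses Compose's standard device-reservation syntax and requires the NVIDIA Container Toolkit:

```yaml
services:
  db:
    image: postgres:16            # assumed tag; any recent PostgreSQL works
    environment:
      POSTGRES_USER: ml_user      # illustrative credentials -- replace them
      POSTGRES_PASSWORD: your_db_password
      POSTGRES_DB: ml_data
    volumes:
      - pg_data:/var/lib/postgresql/data   # persist database files

  jupyter:
    build: ./jupyter              # Dockerfile based on quay.io/jupyter/scipy-notebook
    ports:
      - "8888:8888"               # JupyterLab interface
    volumes:
      - ./notebooks:/home/jovyan/work      # assumed local notebook directory
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia      # expose the host GPU to the container
              count: 1
              capabilities: [gpu]

volumes:
  pg_data:
```

The referenced ./jupyter/Dockerfile would, per the description above, contain roughly a FROM quay.io/jupyter/scipy-notebook line, a COPY of requirements.txt, and a RUN pip install -r requirements.txt step.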
Docker Compose can manage this automatically. There are two primary ways to scale a service.

You can define the desired number of instances directly in your docker-compose.yml file using the deploy and replicas keys. This feature was originally part of Docker Swarm but is now available for standard Compose deployments. Modify your Nginx docker-compose.yml from Step 2:

When you run docker compose up -d, Compose will create three web containers. However, you will have a problem: all three will try to bind to host port 8000. This will cause a “port is already allocated” error for the second and third containers.

To resolve this in a production setup, you would remove the ports mapping from the web service. This way, the web containers are only accessible within the Docker network. A separate load balancer service would be the only one with a public port, and it would distribute traffic to the three web replicas. An example configuration would look like this (this is an advanced example):

In this setup, the load_balancer listens on port 80 and routes requests to web_1, web_2, and web_3 internally. The replicas key is most useful when paired with a reverse proxy (like Traefik or another Nginx instance) that can load-balance requests between the replicas within the Docker network, without each replica needing to expose a port on the host.

A common method for scaling is the --scale flag, a carry-over from Compose v1. This flag is applied at runtime and overrides any replicas key. However, you must be careful with port definitions. If your service defines a fixed host port mapping (e.g., "8000:80"), running docker compose up --scale will cause an error for the second and third containers as they all try to bind to the same host port. To use --scale for services that expose ports, you must not use a fixed host port mapping.

Option 1: Map to Random Host Ports (Good for Development)

You can modify your docker-compose.yml to specify only the container port.
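A sketch combining both ideas from this section — the replicas key from the first example and a container-only port mapping:

```yaml
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3   # run three instances of this service
    ports:
      - "80"        # container port only: Docker assigns a random free host
                    # port to each replica, avoiding "port is already allocated"
```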
This tells Docker to map port 80 in each container to a random, available port on your host machine. Now, when you run the scale command:

Run docker compose ps to see the result. Each container will be running on a different, randomly assigned host port:

Option 2: Use a Reverse Proxy (Good for Production)

The other solution, as mentioned in the replicas section, is to remove the ports directive from the web service entirely. You would then use a separate load balancer container (which has the only public port) to manage and distribute traffic to the scaled replicas within the Docker network.

To stop and remove all three containers, the command remains the same:

By default, Docker Compose creates a single bridge network for your application. Every service in the file is attached to it, which is how the wordpress container was able to find the db container just by using its service name (db). However, you can define your own custom networks for better isolation and control. Let’s modify the WordPress example to use a custom bridge network.

Here’s what we added:

If you had a third service (like an analytics tool) that you did not attach to app_net, it would be completely isolated and unable to communicate with the db or wordpress containers.

For multi-host clustering with Docker Swarm, you would change the driver from bridge to overlay. The bridge driver is for communication between containers on a single host, which is the standard for most Docker Compose use cases.

As your applications grow, your docker-compose.yml file can become large and difficult to manage. Docker Compose supports an include directive, allowing you to split your configuration across multiple files. Imagine you want to separate your WordPress and database definitions, and perhaps have a common docker-compose.override.yml for development-specific settings (like binding a port on the database). Your directory structure might look like this:

This file will only define the db service.
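A sketch of what docker-compose.db.yml might contain, reusing the MySQL settings from the WordPress example (image tag and values are illustrative):

```yaml
# docker-compose.db.yml -- defines only the database service
services:
  db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: your_root_password  # replace with a strong password
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data:
```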
This file will only define the wordpress service.

Now, your main docker-compose.yml file becomes very simple. It just uses include to pull in the other files.

When you run docker compose up -d, Compose will read all three files, merge the configurations, and start the db and wordpress services exactly as if they were defined in a single file. This approach makes your configuration much more modular and reusable.

Note: The include: directive requires Docker Compose v2.20 or later. If your system uses an older version, you can combine files using docker compose -f docker-compose.db.yml -f docker-compose.web.yml up.

The docker compose command you installed in Step 1 is “Compose v2.” It is a Go-based plugin built directly into the Docker CLI. The original version, “Compose v1,” was a separate Python tool invoked with docker-compose (with a hyphen). As of July 2023, Compose v1 is no longer supported.

Most commands are identical, just with the hyphen removed. Here is a table comparing common v1 commands to their v2 equivalents. As you can see, for most day-to-day use, the only change is docker-compose -> docker compose. If you use scripts, you can update them by simply removing the hyphen.

If you still have the old docker-compose v1 installed, you can remove it to avoid confusion: Or, if it was installed by apt:

When you work with Docker Compose, you may encounter issues related to file syntax, permissions, or container runtime conflicts. Most problems can be diagnosed and resolved by methodically checking your configuration, permissions, and container logs. Let’s look at some of the most common errors and their solutions.

This is one of the most common errors for new Docker users. You run docker compose up and see an error message about the Docker daemon socket.

Problem: Your non-root user does not have permission to communicate with the Docker daemon, which runs as root.
Solution: You must add your user to the docker group, which was created during Docker’s installation. Add your user to the docker group:

For the new group membership to take effect, you must log out and log back in. Alternatively, you can activate the group changes for your current terminal session by typing:

This should resolve any permission errors related to the Docker socket.

Docker Compose fails to run and reports that your docker-compose.yml file is invalid.

You try to start your environment, but the command fails with an error message that the address is already in use.

Problem: Another process on your host machine is already listening on the port you are trying to map (in this case, port 8000). This is often another Docker container or a local development server.

Solution: You have two options. First, stop the other process. You can find the process using the port with this command: If it is another Docker container, stop it with docker stop <container_id>. Second, change the host port in your docker-compose.yml file. This is often the simplest fix. Change the ports mapping from "8000:80" to a different port, such as "8001:80".

In a multi-service application (like the WordPress and MySQL example), the wordpress container starts but its logs show Connection refused or MySQL server has gone away when trying to connect to the db service.

Problem: You have used depends_on, but this directive only waits for the db container to start. It does not wait for the MySQL application inside the container to be fully initialized and ready to accept connections.

Solution: The application (in this case, WordPress) must be configured to retry its connection. Most modern images have this retry logic built in. For images that do not, you must implement a healthcheck. You can add a healthcheck to your db service.
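A sketch of such a healthcheck, using mysqladmin ping (the timing values are illustrative), together with the condition form of depends_on on the dependent service:

```yaml
services:
  db:
    image: mysql:8.0
    healthcheck:
      # Succeeds only once the MySQL server inside the container answers
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

  wordpress:
    image: wordpress:latest
    depends_on:
      db:
        condition: service_healthy   # wait for "healthy", not just "started"
```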
The wordpress service’s depends_on can then be configured to wait for the database to be “healthy,” not just “started.” Example healthcheck for MySQL:

You may encounter permission errors in your container logs, or find that your container’s data directory is empty.

Problem: The container runs as a specific user (e.g., www-data with user ID 33), but the host directory you mounted is owned by your user (e.g., ubuntu with user ID 1000). The container’s user does not have permission to write to the host directory.

Solution: Change the ownership of the host directory to match the user ID inside the container. You can find the container’s user ID by running docker compose exec <service_name> id. For example, if the ID is 33 (common for www-data), you would run chown with that ID against the mounted host directory.

Problem: Your docker-compose.yml uses a relative path like ./app, but your index.html file is not being served.

Solution: Docker Compose interprets relative paths from the directory where you run the docker compose up command. Always run Compose commands from the same directory that contains your docker-compose.yml file.

Your application container logs show Host not found or Could not resolve host: db.

Check service names: Ensure the hostname in your application’s configuration (e.g., the WORDPRESS_DB_HOST environment variable) is exactly the same as the service name in your docker-compose.yml (e.g., db).

Inspect the network: Use docker compose ps to find your project’s name (e.g., compose-demo). Then, inspect the default network:

The JSON output will list all containers attached to this network. If one of your services is missing, check your docker-compose.yml for any custom networks configuration that might be isolating it.

You type docker-compose up (with a hyphen) and see a “command not found” error. Always use docker compose (with a space) when following this guide.

Docker Compose is a tool for defining and running multi-container Docker applications.
It uses a single YAML file (by default, docker-compose.yml) to configure all of your application’s components, which are called services. This file also defines the networks that allow the services to communicate with each other and the volumes used for persistent data. With this single file, you can manage your entire application stack with simple commands.

It is most useful for managing applications that require multiple components, such as a website that needs a web server (like Nginx), an application backend (like WordPress), and a database (like MySQL).

The recommended method is to install Docker Compose as a plugin for the Docker CLI. This is done by installing the docker-compose-plugin package from Docker’s official apt repository. First, ensure you have followed the official Docker documentation to set up Docker’s apt repository on your Ubuntu system. Update your package list: Install the Docker Compose plugin: Verify the installation by checking the version. The command uses a space, not a hyphen.

Because Docker Compose (v2) is installed as a system package using apt, you can update it using the standard Ubuntu software update process. Refresh your local package index: Run a system-wide upgrade, which will include the Compose plugin: Alternatively, if you only want to update the plugin itself, you can run:

Yes, Docker Compose is frequently used in production, particularly for applications that run on a single host. It provides a straightforward way to define, deploy, and manage the lifecycle of your application’s services, networks, and volumes. For a single-server deployment, it is a very effective and simple-to-manage solution.

For more complex scenarios that require coordinating containers across multiple hosts (a cluster), other tools are more common. These include Docker Swarm (which uses a similar Compose file syntax) and Kubernetes (which is the industry standard for large-scale container orchestration).
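The update procedure from the FAQ above, expressed as commands:

```shell
# Refresh the local package index
sudo apt update

# System-wide upgrade, which includes the Compose plugin ...
sudo apt upgrade

# ... or upgrade only the plugin itself
sudo apt install --only-upgrade docker-compose-plugin
```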
This table outlines the primary differences between the Docker CLI and the Docker Compose CLI. In short, you use docker to interact with a single container, and docker compose to manage your entire application stack (e.g., web, app, db) all at once.

You run multiple containers by defining them as services in your docker-compose.yml file. Create a file named docker-compose.yml. Inside this file, use the services: key to define each container you want to run. Here is a practical example that defines and runs two containers: a WordPress site and a MySQL database.

Save the file and open a terminal in the same directory. Run a single command to start both containers:

Docker Compose will read the file, create a shared network for the services, pull both the mysql and wordpress images, and start a container for each service.

Docker Compose is a standard tool for development environments because it solves several common problems.

Consistent environments: It ensures every developer on a team runs the exact same services (database, cache, web server) with the exact same versions and configurations. This is defined in the docker-compose.yml file, which is committed to version control, eliminating the “it works on my machine” problem.

Simplicity: It replaces complex setup scripts and long docker run commands. A developer only needs to run docker compose up to start the entire application stack and docker compose down to stop it.

Service isolation: Developers can work on multiple projects on the same machine without dependency conflicts. Project A can use PostgreSQL 9.6 and Project B can use PostgreSQL 14, as each database runs in an isolated container managed by its own Compose file.

Easy integration testing: Because Compose starts all of an application’s dependencies together, it creates a perfect local environment for running integration tests that verify how services interact with each other.

No.
The top-level version field (for example, version: "3.8") is no longer required and is ignored by Docker Compose v2. Modern Compose automatically detects the schema without it. The field was used by older versions of Compose to determine which features were available. You will still see it in older tutorials and existing projects. It does no harm if present, but there is no reason to add it to new files. The examples in this guide omit it intentionally.

In this guide, you installed Docker Compose and configured a complete multi-container application. You started with a basic docker-compose.yml file for an Nginx web server and progressed to a more complex, realistic stack involving a WordPress application and a MySQL database. You have learned to manage the entire application lifecycle, from building and running services to stopping and removing them.

You are now familiar with key concepts for managing applications effectively, including service scaling, custom network definitions, and splitting your configuration into modular files using the include directive. By following the migration guide and troubleshooting steps, you can also resolve common issues like port conflicts, “Permission denied” errors, and YAML syntax mistakes. The skills covered here will allow you to build consistent, reproducible development environments and deploy single-host applications with confidence.

For a complete reference of all available docker compose commands, check the official documentation.