Tools: Use Ansible to Install Docker on Ubuntu (2026)

Source: DigitalOcean

By Tony Tran, Erika Heidi, and Manikandan Kurup

Server automation now plays an essential role in systems administration, due to the disposable nature of modern application environments. Configuration management tools such as Ansible are typically used to streamline the process of automating server setup by establishing standard procedures for new servers, while also reducing the human error associated with manual setups.

Ansible offers a simple architecture that doesn't require special software to be installed on nodes. It also provides a robust set of features and built-in modules which facilitate writing automation scripts.

This guide explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu. Docker is an application that simplifies the process of managing containers: resource-isolated processes that behave similarly to virtual machines, but are more portable, more resource-friendly, and more dependent on the host operating system.

Note: This guide has been tested on Ubuntu 24.04 LTS. While it may also work on other recent Ubuntu releases, some steps, such as repository configuration, may require minor adjustments.

In order to execute the automated setup provided by the playbook in this guide, you'll need an Ansible control node and one or more Ansible hosts to act as your managed servers.

Note: Before proceeding, you first need to make sure your Ansible control node is able to connect and execute commands on your Ansible host(s). For a connection test, check Step 3 of How to Install and Configure Ansible on Ubuntu.

This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install and Use Docker on Ubuntu. Set up your playbook once, and use it for every installation after. Running this playbook installs the required system packages, configures the official Docker repository, installs Docker along with the Compose plugin, and pulls an image to create containers on your Ansible hosts. Once the playbook has finished running, you will have a number of containers created based on the options you defined within your configuration variables.
To begin, log in as a sudo-enabled user on your Ansible control node server.

The playbook.yml file is where all your tasks are defined. A task is the smallest unit of action you can automate using an Ansible playbook. But first, create your playbook file using your preferred text editor. This will open an empty YAML file. Before diving into adding tasks to your playbook, start by adding the play's opening declarations.

Almost every playbook you come across will begin with declarations similar to this: hosts declares which servers the Ansible control node will target with this playbook, become states whether all commands will be run with escalated root privileges, and vars allows you to store data in variables. If you decide to change these values in the future, you will only have to edit these single lines in your file.

Note: If you want to see the playbook file in its final finished state, jump to Step 5. YAML files can be particular about their indentation structure, so you may want to double-check your playbook once you've added all your tasks.

By default, Ansible executes tasks synchronously, in order from top to bottom in your playbook. This means task ordering is important, and you can safely assume one task will finish executing before the next task begins. All tasks in this playbook can stand alone and be re-used in your other playbooks.

An important concept in Ansible is idempotency, which means that running the same playbook multiple times will always result in the same system state without causing unintended changes. In this playbook, each task is written to be idempotent. For example, package installation tasks use state: latest or state: present, ensuring packages are only installed or updated when necessary. Similarly, repository and key configuration tasks will only make changes if they are not already in place.
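The original code listings did not survive in this copy of the article. As a sketch only, the opening of playbook.yml with the hosts, become, and vars declarations described above might look like the following; the variable names and default values are illustrative assumptions, not the article's exact listing:

```yaml
---
- hosts: all
  become: true
  vars:
    # Illustrative variables; adjust to your needs
    container_count: 4                   # how many containers to create
    default_container_name: docker       # base name for the containers
    default_container_image: ubuntu      # image pulled from Docker Hub
    default_container_command: sleep 1d  # command each container runs
```

Keeping these values in vars means later tasks can reference them, so changing the image or container count only requires editing one line.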
This makes the playbook safe to re-run at any time, which is especially useful when provisioning new servers or applying updates to existing environments.

Add your first tasks: installing aptitude, a tool for interfacing with the Linux package manager, and installing the required system packages. Ansible will ensure these packages are always present on your server.

Here, you're using the apt Ansible builtin module to direct Ansible to install your packages. Modules in Ansible are shortcuts to execute operations that you would otherwise have to run as raw bash commands. Ansible safely falls back to apt for installing packages if aptitude is not available, but Ansible has historically preferred aptitude. You can add or remove packages to your liking. This ensures all packages are not only present, but on the latest version, after the package index is refreshed with apt.

The next task installs the latest version of Docker from the official repository: the Docker GPG key is added to verify the download, the official repository is added as a new package source, and Docker is installed. Additionally, the Docker Compose plugin and the Docker module for Python are installed as well.

This approach uses an APT keyring plus the repository's signed-by attribute, which is the current recommended method on modern Ubuntu releases and avoids the deprecated apt_key module. Docker Compose is installed via the docker-compose-plugin package, which provides the modern docker compose CLI syntax. This replaces the legacy standalone docker-compose binary used in older setups.

Note: With Docker Compose v2, commands use the integrated syntax docker compose (with a space) instead of the legacy docker-compose.

The actual creation of your Docker containers starts here with the pulling of your desired Docker image. By default, these images come from the official Docker Hub.
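The package, keyring, and repository tasks described above were likewise lost in this copy. A hedged sketch of what they might look like follows; the exact package list and the key path are assumptions based on the surrounding text (the troubleshooting section below refers to /etc/apt/keyrings/docker.gpg), not the article's verbatim listing:

```yaml
  tasks:
    - name: Install aptitude
      ansible.builtin.apt:
        name: aptitude
        state: latest
        update_cache: true

    - name: Install required system packages
      ansible.builtin.apt:
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - python3-pip
          - python3-setuptools
        state: latest
        update_cache: true

    - name: Create keyring directory for Docker's GPG key
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"

    - name: Add Docker GPG apt key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/docker.gpg
        mode: "0644"

    - name: Add Docker repository (signed by the keyring above)
      ansible.builtin.apt_repository:
        repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present

    - name: Install Docker CE and the Compose plugin
      ansible.builtin.apt:
        pkg:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose-plugin
        state: latest
        update_cache: true

    - name: Install Docker Module for Python
      ansible.builtin.pip:
        name: docker
```

Note that get_url stores the ASCII-armored key as-is; modern apt on recent Ubuntu releases accepts armored keys in signed-by, which is why no gpg --dearmor step appears here.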
Using this image, containers will be created according to the specifications laid out by the variables declared at the top of your playbook. Before adding these tasks, ensure you have the community.docker Ansible collection installed on your control node by running ansible-galaxy collection install community.docker. Then add the container tasks to your playbook.

community.docker.docker_image is used to pull the Docker image you want to use as the base for your containers. community.docker.docker_container lets you define the details of the containers you create, along with the command you want to pass them. with_sequence is the Ansible way of creating a loop; in this case, it will loop the creation of your containers according to the count you specified. This is a basic count loop, so the item variable provides a number representing the current loop iteration. This number is used here to name your containers.

Your playbook should look roughly like the finished listing in Step 5, with minor differences depending on your customizations. Feel free to modify this playbook to best suit your individual needs within your own workflow. For example, you could use the docker_image module to push images to Docker Hub, or the docker_container module to set up container networks.

Note: This is a gentle reminder to be mindful of your indentation. If you run into an error, this is very likely the culprit. YAML suggests using 2 spaces per indent level, as was done in this example.

Once you're satisfied with your playbook, you can exit your text editor and save. You're now ready to run this playbook on one or more servers. Most playbooks are configured to be executed on every server in your inventory by default, but you'll specify your server this time. To execute the playbook only on server1, connecting as sammy, run ansible-playbook with the -l flag to specify your server and the -u flag to specify which user to log into on the remote server.
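The image and container tasks described above can be sketched as follows. This is an illustrative reconstruction, not the article's verbatim listing; the variable names (default_container_image, default_container_name, default_container_command, container_count) are the illustrative ones assumed earlier:

```yaml
    - name: Pull default Docker image
      community.docker.docker_image:
        name: "{{ default_container_image }}"
        source: pull

    - name: Create default containers
      community.docker.docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ container_count }}
```

With container_count set to 4 and default_container_name set to docker, this loop would create containers named docker1 through docker4.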
You will get output reporting the status of each task as the playbook runs.

Note: For more information on how to run Ansible playbooks, check our Ansible Cheat Sheet Guide.

A run with zero failures indicates your server setup is complete. Your output doesn't have to be exactly the same, but it is important that you have zero failures.

When the playbook is finished running, log in via SSH to the server provisioned by Ansible to check whether the containers were successfully created. Log in to the remote server and list your Docker containers. If you see the containers defined in the playbook, they were created successfully. Since this was the last task in the playbook, it also confirms that the playbook was fully executed on this server.

While this playbook is designed to run smoothly on a properly configured system, you may occasionally encounter issues depending on your server's network configuration, existing package state, or permissions. This section covers the most common problems you might face and provides step-by-step guidance for resolving them.

Errors related to the Docker GPG key typically occur when the playbook cannot download or process the repository signing key. This might happen due to network connectivity issues, DNS problems, or file permission restrictions on the /etc/apt/keyrings directory. If you see GPG key errors in the Ansible output, first verify that the Docker GPG key URL is reachable from your managed node by attempting to download the key with curl. If the download succeeds, try updating the apt cache manually to see if there are any lingering repository configuration issues. If problems persist, check that your system has proper network access and DNS resolution. You can test DNS resolution with nslookup download.docker.com or verify general connectivity with ping 8.8.8.8.

Repository configuration errors usually manifest as "package not found" messages during the Docker installation step.
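The run and verification commands referenced above weren't preserved in this copy. A sketch, assuming the illustrative hostname server1 and user sammy:

```shell
# From the control node: run the playbook against a single host
ansible-playbook playbook.yml -l server1 -u sammy

# Log in to the managed host and list all containers (running or not)
ssh sammy@server1
sudo docker ps -a

# If you hit GPG key errors, confirm the key URL is reachable,
# then refresh the package index to surface repository problems
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | head -n 2
sudo apt update
```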
The most common cause is a mismatch between the repository URL and your Ubuntu version's codename, though this playbook uses ansible_distribution_release to detect the correct codename automatically. To diagnose repository issues, inspect your configured APT sources to verify that the Docker repository was added correctly. You should see an entry that includes [signed-by=/etc/apt/keyrings/docker.gpg] and the correct codename for your Ubuntu version (such as focal for 20.04, jammy for 22.04, or noble for 24.04). If the repository configuration looks correct but packages still cannot be found, try running sudo apt update to refresh the package index, then re-run the playbook.

Package installation can fail for several reasons, including conflicts with existing packages, interrupted previous installations, or locked package databases. When Ansible reports package installation errors, the underlying issue is usually with APT's internal state rather than the playbook itself. To troubleshoot package installation problems, start by ensuring the APT package cache is current; an outdated cache can cause APT to look for package versions that no longer exist in the repository. Next, ask APT to check for and repair broken package dependencies that might be blocking new installations. Additionally, verify that no other package manager process is running in the background. If another APT process is active (such as automatic updates), it will lock the package database and prevent your playbook from making changes. You can check for running package processes with ps aux | grep apt.

By default, Docker requires root privileges to interact with the Docker daemon. If you try to run Docker commands as a regular user without sudo, you'll encounter "permission denied" errors. While the playbook installs Docker successfully, it doesn't automatically add your user account to the docker group.
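The repository and package diagnostics above can be sketched as follows; the specific command choices are assumptions consistent with the text, not the article's verbatim listing:

```shell
# Inspect configured APT sources for the Docker repository entry
grep -r "download.docker.com" /etc/apt/sources.list.d/

# Refresh the package index, then attempt to repair any
# incomplete or broken package installations
sudo apt update
sudo apt install -f

# Check whether another package manager process holds the lock
ps aux | grep apt
```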
To run Docker commands without sudo, add your user to the docker group. This grants permission to communicate with the Docker daemon. However, group membership changes don't take effect in your current shell session: you'll need to log out and log back in for the new group membership to be recognized by the system. Alternatively, you can start a new shell session with the updated groups using newgrp docker. Keep in mind that users in the docker group effectively have root-level access to the system, since they can run containers with full filesystem access. Only add trusted users to this group.

If the Docker service fails to start automatically after installation, the problem could be related to systemd configuration, conflicting services, or resource constraints. Docker is managed as a systemd service, so you can use standard systemd troubleshooting tools to diagnose the issue. First, check the current status of the Docker service to see if there are any obvious error messages. The output will show whether the service is active, failed, or stopped, along with recent log entries. For more detailed diagnostic information, examine the full Docker service logs using journalctl. Add the --no-pager flag if you want to scroll through the entire log history, or use -n 50 to see just the last 50 lines. Common issues include port conflicts (if another service is using Docker's default ports) or storage driver problems. If the logs reveal a configuration issue that you've corrected, try restarting the Docker service. You can also ensure Docker starts automatically on boot with sudo systemctl enable docker.

The Docker Python module (installed via the docker package) is required by Ansible's community.docker collection to interact with the Docker API. Installation failures for this module typically stem from pip configuration issues, Python version incompatibilities, or missing build dependencies.
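The group and service commands described above, as a sketch:

```shell
# Add the current user to the docker group (log out and back in
# afterwards, or run `newgrp docker` for the current session)
sudo usermod -aG docker $USER

# Inspect the Docker service and its recent logs
sudo systemctl status docker
sudo journalctl -u docker -n 50

# After correcting a configuration issue, restart the service
# and make sure it starts automatically on boot
sudo systemctl restart docker
sudo systemctl enable docker
```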
If the playbook fails during the "Install Docker Module for Python" task, start by verifying that pip is installed and functional, and check your Python 3 installation. The Docker Python module requires Python 3.6 or later. If your Python version is current but the module installation still fails, try installing it manually to see more detailed error output. If you encounter compilation errors during installation, your system may be missing development headers or build tools; installing the python3-dev package usually resolves these issues. Also note that if you're using a Python virtual environment, you should ensure you're installing the module in the correct environment.

Connection failures occur when your Ansible control node cannot establish an SSH connection to the managed hosts. This is one of the most fundamental requirements for Ansible to work, so connection issues must be resolved before the playbook can execute any tasks. To diagnose connection problems, first verify that you can SSH to the target host manually using the same credentials that Ansible would use. If manual SSH access works but Ansible still fails to connect, the issue likely involves your Ansible inventory configuration or SSH key authentication. Check your inventory file to ensure the hostnames or IP addresses are correct and that any custom SSH ports or connection parameters are properly specified. Verify that your SSH key is loaded in your SSH agent with ssh-add -l. If your key isn't listed, add it with ssh-add ~/.ssh/id_rsa (or whatever path points to your private key). Also confirm that the corresponding public key is present in the ~/.ssh/authorized_keys file on the managed host.

When a playbook task fails, Ansible stops execution and provides an error message indicating which task failed and why. These error messages are your primary tool for diagnosing issues, so read them carefully.
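The Python and SSH checks above can be sketched like this (sammy and server1 are illustrative names):

```shell
# On the managed node: verify Python 3 and pip are functional
python3 --version
pip3 --version

# Install the Docker Python module manually to see detailed errors
pip3 install docker

# From the control node: test SSH with the credentials Ansible uses,
# and confirm your key is loaded in the agent
ssh sammy@server1
ssh-add -l
```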
The error output usually includes the task name, the module that failed, and specific details about what went wrong. For initial troubleshooting, review the error message in the Ansible output to identify which task failed. The message often contains clues about the root cause, such as missing permissions, unreachable URLs, or syntax errors in your playbook. If the standard output doesn't provide enough detail, run the playbook in verbose mode to see exactly what Ansible is doing behind the scenes. The -vvv flag provides maximum verbosity, showing the actual commands being executed, the full output from each task, and detailed debugging information. This is particularly helpful for diagnosing network issues, permission problems, or unexpected module behavior.

Before re-running the playbook, verify that all the resources it needs are accessible: the Docker GPG key URL should be reachable, your managed hosts should have internet connectivity to download packages, and there should be sufficient disk space for package installation. Remember that because this playbook is idempotent, you can safely re-run it after fixing any issues. Tasks that already completed successfully will show as "ok" rather than "changed," and Ansible will resume work on the tasks that previously failed.

Which Ansible version do I need? Any recent Ansible release should work, but Ansible 2.10+ is a good baseline recommendation. That version (and newer) uses the collections layout by default, which matters because this guide uses the community.docker collection for the docker_image and docker_container tasks. You can confirm your installed version with ansible --version on the control node.

Does this playbook install Docker Compose? Yes. The playbook installs the docker-compose-plugin package, which provides Docker Compose v2. With Compose v2, you run Compose as a Docker subcommand (docker compose, with a space) instead of the legacy standalone binary (docker-compose).

Why use Ansible for this instead of configuring servers by hand? Ansible makes it easy to keep server setup repeatable and auditable.
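The verbose re-run described above, as a sketch (server1 and sammy are illustrative names):

```shell
# Re-run the playbook with maximum verbosity for debugging
ansible-playbook playbook.yml -l server1 -u sammy -vvv

# Confirm the Ansible version on the control node
ansible --version
```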
Because the tasks are idempotent, you can re-run the playbook to converge systems on the desired state (for example, ensure the Docker repository is configured and the right packages are installed) without having to write your own "already configured?" checks. Playbooks are also easier to review, version-control, and extend alongside other provisioning steps.

What does it mean that the playbook is idempotent? It means you can run the same playbook repeatedly and get predictable results. Each task checks the current system state and only makes changes when necessary. For example, if Docker is already installed, the repository is already present, and the packages are up to date, Ansible will report those tasks as ok instead of reinstalling everything.

Can I run this playbook against multiple servers at once? Yes. Add all target hosts to your inventory (for example, under the same group), then run the playbook against that group. Ansible will execute tasks across multiple hosts in parallel, and you can control the level of parallelism with the forks setting in your Ansible configuration.

Should I use this playbook or an existing Galaxy role? It depends on your goals. A custom playbook keeps everything in one place and makes it easy to see exactly what changes are being applied (which can be helpful for internal review or compliance requirements). A well-known Galaxy role like geerlingguy.docker can save time and is widely used, but it adds an external dependency that you'll want to review and keep up to date.

How do I verify that the installation worked? SSH into the managed node and confirm Docker is responding. If you've added your user to the docker group, you can run Docker commands without sudo after logging out and back in. To confirm Compose v2 is available, run docker compose version.

Automating your infrastructure setup can not only save you time, but it also helps to ensure that your servers will follow a standard configuration that can be customized to your needs.
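The verification described above, as a sketch to run on the managed node:

```shell
# Confirm the Docker daemon is responding
sudo docker info

# Confirm Docker Compose v2 is available via the plugin
docker compose version
```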
With the distributed nature of modern applications and the need for consistency between different staging environments, automation like this has become a central component in many teams' development processes.

In this guide, you used Ansible to automate the process of installing and setting up Docker on a remote server running Ubuntu 24.04, including support for the modern Docker Compose plugin. Because each individual typically has different needs when working with containers, we encourage you to check out the official Ansible documentation for more information and use cases of the docker_container Ansible module. If you'd like to include other tasks in this playbook to further customize your initial server setup, please refer to our introductory Ansible guide Configuration Management 101: Writing Ansible Playbooks.

About the authors: Erika Heidi is a Dev/Ops developer passionate about open source, PHP, and Linux, and a former Senior Technical Writer at DigitalOcean with expertise in the LAMP Stack, Ubuntu, Debian 11, Linux, Ansible, and more. With over 6 years of experience in tech publishing, Manikandan Kurup has edited and published more than 75 books covering a wide range of data science topics, and specializes in creating clear, concise, and easy-to-understand content tailored for developers.

Reader comments:

Thanks for the playbook! Isn't apt_key now deprecated? Or does Ansible use gpg under the hood instead?
I was just looking at the latest guide on Docker, and it shows:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

I'm getting this failure:

ERROR! couldn't resolve module/action 'community.docker.docker_image'. This often indicates a misspelling, missing collection, or incorrect module path. The error appears to be in '/mnt/c/Users/deyinche/ansible/pb_install_docker_ce.yaml': line 50, column 5, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be:

What is the rationale behind including aptitude in the playbook? I'm aware that Ansible can utilize it if so desired, but it defaults to off per the apt module documentation, so the playbook here isn't even using it. Am I missing something?

Great article, I loved it. I have only been using Ansible for about two weeks, but I have updated all of the company servers on GCP, Azure, and AWS using Ansible (great solution). We were using VMware Automation/Orchestration before, but this is good too. One thing: I am not finding python3-pip. I have python3 installed, but I can't install python3-pip on a Mint server (Desc: Linux Mint 20, Rel: 20, Codename: ulyana). I found an answer online: python3 -m easy_install install pip

This does not work for Ubuntu 22 - I changed the repo section to reference kudu and got this error:

Can this tutorial perhaps be updated to a working version for Ubuntu 22?