Tools: How to Use Ansible to Install and Set Up Docker on Ubuntu (2026)
Source: DigitalOcean
By Tony Tran, Erika Heidi, and Vinayak Baranwal

Server automation now plays an essential role in systems administration, due to the disposable nature of modern application environments. Configuration management tools such as Ansible streamline server setup by establishing standard procedures for new servers, while also reducing the human error associated with manual setups. Ansible offers a simple architecture that doesn't require special software to be installed on managed nodes, along with a full set of built-in modules that make automation scripts easier to write.

This guide explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu. Docker is an application that simplifies the process of managing containers: resource-isolated processes that behave in a similar way to virtual machines, but are more portable, more resource-friendly, and more dependent on the host operating system. For an introduction to configuration management concepts and how Ansible fits into modern infrastructure workflows, see An Introduction to Configuration Management with Ansible.

This tutorial has been validated on Ubuntu 22.04 and 24.04 LTS against Ansible 2.14 and Docker CE. It installs Docker CE from the official Docker APT repository, installs Docker Compose v2 via the docker-compose-plugin APT package, and creates a configurable number of containers in a single run.

To execute the automated setup provided by the playbook in this guide, you'll need an Ansible control node with Ansible installed, plus one or more Ansible hosts: remote Ubuntu servers reachable from the control node over SSH, each with a sudo-enabled user. Before proceeding, make sure your Ansible control node is able to connect to and execute commands on your Ansible host(s). For a connection test, check Step 3 of How to Install and Configure Ansible on Ubuntu.

This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install and Use Docker on Ubuntu.
Set up your playbook once, and use it for every installation after. Running this playbook installs the dependency packages, adds Docker's repository and installs Docker CE and Docker Compose v2, installs the Python Docker SDK, and creates the containers you define. Once the playbook has finished running, you will have Docker and Docker Compose v2 installed, and a number of containers created based on the options you defined within your configuration variables.

To begin, log in to a sudo-enabled user on your Ansible control node server. The playbook.yml file is where all your tasks are defined; a task is the smallest unit of action you can automate using an Ansible playbook. First, create playbook.yml using your preferred text editor. This opens an empty YAML file. Before diving into adding tasks, start with the play declaration. Almost every playbook you come across will begin with declarations like these: hosts declares which servers the Ansible control node will target with this playbook, become states whether commands will run with escalated root privileges, and vars allows you to store data in variables. Define the container count, the base container name, the image to pull, and the command each container runs as variables; if you decide to change any of these in the future, you will only have to edit these single lines in your file.

Note: If you want to see the playbook file in its final finished state, jump to Step 6. YAML files can be particular about their indentation structure, so you may want to double-check your playbook once you've added all your tasks.

Before Docker can be installed, the managed node needs a set of APT utilities and Python packages that Ansible relies on to add third-party repositories and communicate with Docker. Two preparation tasks handle this. Ansible tasks in a playbook execute sequentially from top to bottom; each task finishes before the next begins, so the dependency packages will always be present before the Docker tasks run. The apt module is Ansible's interface to the system package manager. Setting state: latest ensures each package is present at its most current version.
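As a sketch, the opening of playbook.yml might look like the following. The variable names and default values here are illustrative (based on common conventions for this kind of playbook), so adapt them to your needs:

```yaml
---
- hosts: all
  become: true
  vars:
    container_count: 4                   # how many containers to create
    default_container_name: docker      # base name; a loop index is appended
    default_container_image: ubuntu     # image pulled from Docker Hub
    default_container_command: sleep 1d # command each container runs
```

The tasks added in the following steps all live under this single play, so every task inherits become: true and can reference these variables.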
Setting update_cache: true runs the equivalent of apt update before the install, so the package index is current. Each dependency serves a purpose: apt-transport-https and ca-certificates allow APT to fetch packages over verified HTTPS connections, curl downloads Docker's GPG key, software-properties-common provides repository management tooling, and python3-pip lets Ansible install Python libraries such as the Docker SDK. If a package in this list conflicts with your environment or is already installed through another method, you can remove it from the pkg list. The only packages strictly required for this playbook to succeed are apt-transport-https, ca-certificates, curl, software-properties-common, and python3-pip.

Docker CE is not available in Ubuntu's default APT repositories. Four tasks add Docker's GPG key, register its official repository, install Docker CE, and install the Python Docker SDK, in that order, because each step depends on the one before it.

Note: The apt_key module is deprecated in newer Ansible releases. On recent Ubuntu releases, the preferred approach stores the GPG key as a key file under /etc/apt/keyrings/. The task shown here continues to work, but for new production playbooks consider using the get_url module to download the key to /etc/apt/keyrings/docker.asc instead.

Note: The jammy codename corresponds to Ubuntu 22.04. For Ubuntu 20.04, replace jammy with focal; for Ubuntu 24.04, use noble. To make this task version-agnostic, substitute {{ ansible_distribution_release }} for the hardcoded codename.

The apt_key task tells APT to trust packages signed with Docker's GPG key; without it, APT would reject the download. The apt_repository task adds Docker's official package source to /etc/apt/sources.list.d/, making docker-ce visible to the package manager. Once both are in place, the apt task installs Docker CE from that repository. The pip task at the end of this block installs the Python docker library on the managed node. This library is what allows Ansible's community.docker modules to communicate with the Docker daemon in later playbook runs.
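The preparation and Docker installation tasks described above could be sketched like this. It assumes the variable names from Step 1; the virtualenv and python3-setuptools entries are optional extras you can drop if they conflict with your environment:

```yaml
  tasks:
    - name: Install required system packages
      apt:
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - python3-pip
          - virtualenv           # optional
          - python3-setuptools   # optional
        state: latest
        update_cache: true

    - name: Add Docker GPG apt key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu jammy stable
        state: present

    - name: Install docker-ce
      apt:
        name: docker-ce
        state: latest
        update_cache: true

    - name: Install Docker SDK for Python
      pip:
        name: docker
```

Replace jammy with your release codename (or {{ ansible_distribution_release }}) as discussed above.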
Without it, any task that uses community.docker.docker_container or community.docker.docker_image will fail with a Python import error.

Docker Compose v2 is installed as the docker-compose-plugin APT package and integrates directly with the Docker CLI. It replaces the deprecated standalone docker-compose binary (v1), which is no longer maintained. With v2, the command is docker compose (with a space); the v1 binary used docker-compose (with a hyphen). Installing via APT rather than pip ensures that Compose is updated alongside Docker CE through your system package manager. Add a task that installs docker-compose-plugin directly after the docker-ce install task. After running the full playbook, you can verify the installation by running docker compose version on a managed node (for example, via an Ansible ad-hoc command); the output reports the installed Compose version.

Note: For deeper coverage of Docker Compose workflows, including defining multi-container applications with docker-compose.yml, see How to Install and Use Docker Compose on Ubuntu 22.04.

The final two tasks pull a Docker image from Docker Hub and create a set of containers from it. The number of containers, the image used, and the command each container runs are all controlled by the variables you defined in Step 1, so you can change them without touching the task definitions. The docker_image task pulls the image defined in default_container_image from Docker Hub; if the image is already present on the managed node from a previous run, this task reports ok and skips the download. The docker_container task creates each container from the pulled image, looping once per container: each container's name is built from your base name variable plus the loop index, and each container runs the command defined in your variables.

If default_container_image is set to a private image that requires Docker Hub authentication, the pull task will fail with a 401 error. In that case, add a docker_login task before the pull task using community.docker.docker_login, with your Docker Hub credentials stored in Ansible Vault.
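Under the same variable-name assumptions as earlier, the Compose, image, and container tasks could be sketched as follows:

```yaml
    - name: Install Docker Compose v2 plugin
      apt:
        name: docker-compose-plugin
        state: latest
        update_cache: true

    - name: Pull default Docker image
      community.docker.docker_image:
        name: "{{ default_container_image }}"
        source: pull

    - name: Create default containers
      community.docker.docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ container_count }}
```

The with_sequence loop runs the container task once per index, so with container_count set to 4 and default_container_name set to docker, you get containers named docker1 through docker4.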
At this point your playbook contains the play declaration and variables, the dependency tasks, the Docker installation tasks, the Compose plugin task, and the container tasks, with minor differences depending on your customizations. Feel free to modify this playbook to best suit your individual needs within your own workflow. For example, you could use the docker_image module to push images to Docker Hub, or the docker_container module to set up container networks.

Note: This is a gentle reminder to be mindful of your indentation. If you run into an error, this is very likely the culprit. YAML convention is two spaces per indent level, as used in this example.

Once you're satisfied with your playbook, exit your text editor and save. You're now ready to run this playbook on one or more servers. Most playbooks are configured to execute on every server in your inventory by default, but this time you'll specify a single server. Before running the playbook, make sure your inventory file lists the target server under a group, with its address. To execute the playbook only on server1, connecting as sammy, run ansible-playbook playbook.yml -l server1 -u sammy. The -l flag limits the run to your server, and the -u flag specifies which user to log in as on the remote server.

Note: For more information on how to run Ansible playbooks, check our Ansible Cheat Sheet Guide.

When the run finishes, check the play recap at the end of the output. Your output doesn't have to match exactly, but it is important that you have zero failures; that indicates your server setup is complete. Then log in via SSH to the server provisioned by Ansible to check whether the containers were successfully created, and list your Docker containers on the remote server with sudo docker ps -a. You should see the containers defined in the playbook listed. This means the containers were created successfully; since this was the last task in the playbook, it also confirms that the playbook was fully executed on this server.
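A minimal single-host inventory might look like the following; the group name, hostname, and IP address are placeholders to adapt to your environment:

```ini
[servers]
server1 ansible_host=203.0.113.111

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```

With this inventory in place, ansible-playbook playbook.yml -l server1 -u sammy runs the play against server1 only. The ansible_python_interpreter line is optional but avoids interpreter-discovery warnings on hosts with multiple Python versions.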
One of Ansible's core strengths is executing the same playbook against many servers in a single run. To target multiple hosts, extend your inventory file by listing each server under a named group such as [docker_hosts]. The playbook's hosts: all directive already targets every host in the inventory file; to limit execution to only the docker_hosts group, change that line to hosts: docker_hosts. Then run the playbook as before, omitting the -l flag. Ansible executes tasks in parallel across every host in the group, completing the Docker installation on all servers in a single run.

Most failures with this playbook fall into one of four categories: SSH connectivity, privilege escalation, Python version mismatches, and GPG key errors. The following covers each one.

If Ansible cannot reach the managed node, the task output reports the host as UNREACHABLE. Confirm that you can SSH into the host manually as the same user, and that the address in your inventory file is correct.

If the managed node's user requires a password for sudo, Ansible will hang or fail with an error such as "Missing sudo password". Add the --ask-become-pass flag to your ansible-playbook command so Ansible prompts for the sudo password. Alternatively, configure passwordless sudo for the Ansible user on the managed node by adding a NOPASSWD rule for that user to /etc/sudoers via visudo.

Ansible requires Python on the managed node to execute tasks. If Python is not installed, or Ansible is targeting the wrong interpreter, you will see an error indicating that a Python interpreter could not be found or discovered. Install Python 3 on the managed node using the raw module, which runs commands without requiring Python, making it safe for this bootstrap step. Once Python is installed, subsequent playbook runs will work normally.

If the apt_key task fails with a certificate or network error, it usually means the managed node cannot reach download.docker.com, or the key URL has changed. Verify connectivity first with curl -I https://download.docker.com/linux/ubuntu/gpg. A 200 OK response confirms the URL is reachable. If the URL returns a 404, Docker may have updated the key location; check the official Docker installation documentation for the current GPG key URL and update the apt_key task accordingly.
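The multi-host inventory described above could be sketched as follows; the hostnames and addresses are placeholders:

```ini
[docker_hosts]
server1 ansible_host=203.0.113.111
server2 ansible_host=203.0.113.112
server3 ansible_host=203.0.113.113
```

With hosts: docker_hosts in the playbook, a single ansible-playbook playbook.yml -u sammy run installs Docker on all three servers in parallel.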
Idempotency means that running the same playbook multiple times produces the same end state without making unintended changes. An idempotent playbook is safe to re-run after configuration drift, a failed partial run, or a scheduled automation job. Ansible achieves idempotency through its modules. The apt module checks whether a package is already installed at the correct version before attempting an install. The apt_key module checks whether the key is already present before importing it. The apt_repository module checks whether the repository is already configured before modifying /etc/apt/sources.list.d/. If Docker is already installed from a previous run, those tasks report ok instead of changed. A first run on a fresh server shows several tasks as changed in the play recap; a re-run on a server where Docker is already fully configured reports every task as ok, with changed=0. This behavior makes the playbook safe to integrate into CI/CD pipelines and scheduled automation. Running it on a schedule to enforce a desired state carries no risk of reinstalling packages, duplicating repository entries, or disrupting running containers.

The community.docker Ansible collection provides modules for managing Docker resources directly from your playbooks. Key modules include docker_container for running and stopping containers, docker_image for pulling and building images, and docker_network for creating bridge and overlay networks. These modules communicate with the Docker daemon on the managed node through the Python docker library; without the Python SDK installed on the managed node, they fail with a Python import error for the docker module. Install the collection on your control node before using these modules: ansible-galaxy collection install community.docker. The pip: name: docker task already present in the playbook installs the Python SDK on each managed node. This task is what enables community.docker modules to communicate with the Docker daemon during subsequent playbook runs.
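As an illustration of the docker_container module, a hypothetical task that starts an nginx web server after Docker is installed might look like this (the container name and port mapping are assumptions for the example):

```yaml
    - name: Run an nginx container
      community.docker.docker_container:
        name: nginx-example          # hypothetical container name
        image: nginx:latest
        state: started
        restart_policy: always
        published_ports:
          - "8080:80"                # host port 8080 -> container port 80
```

Unlike state: present, which only creates the container, state: started ensures the container is actually running after the task completes.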
A common use of the docker_container module is running an nginx web server container once Docker is installed. For complete module documentation, see the community.docker docker_container module reference.

The geerlingguy.docker Ansible Galaxy role provides a community-maintained, tested alternative to the manual playbook approach in this tutorial. Use the Galaxy role when your team wants a fast setup without managing individual tasks and accepting an external dependency is acceptable; the role handles Docker installation in a single declaration and is regularly updated by the community. Use the manual approach from this tutorial when you need full visibility into each task, want to customize package selection, or cannot accept external dependencies in your CI/CD pipeline at runtime. Install the role on your control node with ansible-galaxy install geerlingguy.docker, then reference it from a minimal playbook that declares your hosts, sets become: true, and lists geerlingguy.docker under roles. In short, the manual approach trades setup speed for transparency and control, while the role trades an external dependency for convenience. To build your own reusable role based on the tasks in this tutorial, see How to Use Ansible Roles to Abstract Your Infrastructure Environment.

Q: What version of Ansible is required to run this playbook? Minimum version 2.9. Ansible 2.14 or later is recommended for full compatibility with the community.docker collection and modern apt module behavior. Check your version with ansible --version.

Q: Does this playbook install Docker Compose v1 or v2? The updated playbook installs Docker Compose v2 via the docker-compose-plugin APT package. The v1 standalone binary (docker-compose) is deprecated and no longer maintained. With v2, the command is docker compose (with a space), not docker-compose.

Q: Is this playbook idempotent? Is it safe to run more than once? Yes. Ansible's apt, apt_key, and apt_repository modules check the current state of the system before making changes.
Re-running the playbook on an already-configured host results in all tasks reporting ok with zero changed — no packages are reinstalled, no repositories are duplicated. Q: How do I extend this playbook to install Docker on multiple servers at once? Add all target servers under a named group in your inventory file (for example, [docker_hosts]). Update the playbook’s hosts directive to match that group name. Ansible executes the playbook against all hosts in the group in parallel. Q: What is the difference between this manual playbook and the geerlingguy.docker Galaxy role? The manual approach in this tutorial gives you full visibility and control over every task. The Galaxy role is faster to implement and community-maintained, but it introduces an external dependency and hides the individual steps behind role variables. Use the manual approach when you need task-level transparency or cannot fetch external roles at runtime. Q: Why do I need to install the Docker Python SDK on the managed node? Ansible’s community.docker modules (docker_container, docker_image, docker_network) communicate with the Docker daemon through the Python docker library. Without it installed on the managed node, these modules fail with an import error. The pip: name: docker task in the playbook handles this installation. Q: Can I use this playbook on Ubuntu 20.04 or Ubuntu 24.04? Yes, with one change. The APT repository line uses jammy as the Ubuntu codename. Replace jammy with focal for Ubuntu 20.04 or noble for Ubuntu 24.04. To make the playbook version-agnostic, use the Ansible fact variable {{ ansible_distribution_release }} in place of the hardcoded codename. Q: How do I verify that Docker is correctly installed and running after the playbook completes? 
You can run an ad-hoc command from the control node (for example, ansible all -a "docker --version" -u sammy), or SSH into the managed node and check the service status directly with sudo systemctl status docker.

In this tutorial, you used Ansible to automate Docker and Docker Compose installation on one or more Ubuntu servers. You built a playbook that installs Docker CE from the official APT repository, adds Docker Compose v2 via the docker-compose-plugin package, and creates Docker containers using configurable variables. You also explored how to scale the playbook to multiple nodes, how Ansible's idempotency model makes the playbook safe to re-run, and how the community.docker collection extends playbook-level container management.

With this playbook in place, you can provision Docker on any number of Ubuntu servers in a single command. The inventory file controls which servers are targeted, and the playbook variables control which containers are created. This workflow fits naturally into CI/CD pipelines, scheduled infrastructure checks, and multi-environment deployment processes.

To go further, explore the Docker installation guide for manual reference, the community.docker module documentation for advanced container configuration options, Ansible roles for reusable infrastructure patterns, and configuration management best practices for deeper playbook design.
Reader comment: "Thank you very much for this great tutorial! I just ran into one issue in Step 4. Make sure you have the Ansible Docker modules installed, as they may not be provided out of the box. You may check this with ansible-galaxy collection list in the latest Ansible versions, or by checking your collections folder. To install the missing module, just run ansible-galaxy collection install community.docker."