Tools: How To Install and Use Docker on Rocky Linux (2026)
Source: DigitalOcean
By Tony Tran and Manikandan Kurup

Docker is an application that makes it simple to run application processes in containers. Containers are lightweight, isolated environments that package an application and its dependencies together. Unlike virtual machines, containers share the host operating system’s kernel, making them more portable and resource-efficient. For a detailed introduction to the different components of Docker, check out our article on The Docker Ecosystem.

In this tutorial, you’ll install and use Docker Engine on Rocky Linux. You’ll explore why Docker remains a popular choice despite Rocky Linux’s default Podman runtime, configure Docker permissions, and work with containers and images. You’ll also use Docker Compose to manage multi-container applications, create custom images by committing container changes, and push images to Docker Hub for distribution.

All the commands in this tutorial should be run as a non-root user. If root access is required for a command, it will be preceded by sudo.

Rocky Linux includes Podman as the default container runtime, marking a significant shift in Red Hat’s container strategy. Podman is a daemonless container engine designed to run OCI (Open Container Initiative) compliant containers without requiring a central background service. This design reflects a focus on security and simplicity, eliminating the need for a privileged daemon process that manages all containers on the system.

Docker, on the other hand, uses a traditional client-server architecture in which the Docker daemon (dockerd) runs as a background service with root privileges. The Docker CLI client communicates with this daemon through a REST API to manage containers, images, networks, and volumes. Despite these fundamental architectural differences, both tools are fully capable of running OCI-compatible containers, meaning containers built for one platform will generally run on the other.
Understanding the technical and practical differences between Docker and Podman will help you choose the right tool for your needs:

- Architecture: Docker uses a daemon-based architecture where a single dockerd process manages all containers on the system. This daemon runs with root privileges and handles container lifecycle management, image pulls, networking, and storage. Podman, by contrast, is daemonless: each container runs as a direct child process of the command that started it, requiring no background service.
- Security: Podman supports rootless containers by default, allowing non-privileged users to run containers without requiring root access or sudo. This significantly reduces the attack surface by limiting what a compromised container can access. Docker can also run rootless containers, but this requires additional configuration and is not the default mode of operation.
- Ecosystem: Docker has broader ecosystem support, including the widely used Docker Hub registry with millions of pre-built images, Docker Compose for multi-container orchestration, and Docker Desktop for local development on Windows and macOS. Many third-party tools, monitoring solutions, and deployment platforms are built specifically around Docker’s APIs and tooling.
- Tooling: Numerous CI/CD systems, cloud platforms, and development tools are designed with Docker-first integration. While Podman offers a Docker-compatible CLI and socket emulation, some tools may require additional configuration or may not fully support Podman yet.
- Compatibility: Docker Compose, while having a Podman equivalent (podman-compose), works seamlessly with Docker out of the box. The Docker API is well established and widely documented, making troubleshooting and finding solutions easier.

In this tutorial, Docker is used because of its mature ecosystem, built-in Compose functionality, and widespread industry adoption.
However, if your primary concerns are security through rootless containers and avoiding daemon dependencies, Podman is an excellent alternative that you can explore after understanding container fundamentals with Docker.

The Docker installation package available in the default Rocky Linux repositories may not be the latest version. To get the latest version, install Docker from the official Docker repository. This section shows you how to do just that.

First, update the package database. Next, install the required package to manage repositories and add the official Docker repository. While there is no Rocky Linux–specific repository provided by Docker, Rocky Linux is binary-compatible with Red Hat Enterprise Linux (RHEL) and can use the RHEL-compatible Docker repository.

With the repository added, install Docker along with its required components. After installation has completed, start the Docker daemon and verify that it’s running; the status output should show that the service is active. Lastly, make sure it starts at every server reboot.

Installing Docker provides both the Docker service (daemon) and the docker command-line utility. In the next steps, you’ll verify the installation and begin using Docker.

By default, running the docker command requires root privileges, meaning you have to prefix the command with sudo. This is because the Docker daemon runs as the root user and manages system-level resources. The docker command can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you’ll get a permission-denied error.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group. You will need to log out of the server and back in for this change to take effect.
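The installation steps above can be sketched as the following command sequence. This is a sketch, not the tutorial’s verbatim listing: it assumes Docker’s CentOS-compatible repository URL (which RHEL-compatible Rocky Linux can use) and the package names from Docker’s upstream install documentation.

```shell
# Update the package database
sudo dnf check-update

# Install the plugin that provides `dnf config-manager`, then add Docker's repository
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker Engine, the CLI, containerd, and the Compose plugin
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

# Start the daemon, check that it is active, and enable it at boot
sudo systemctl start docker
sudo systemctl status docker
sudo systemctl enable docker
```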
If you need to add a different user to the docker group, specify the username explicitly.

Security Warning: Adding a user to the docker group grants privileges equivalent to the root user. Members of this group can control the Docker daemon and access the host system. Only add trusted users, especially on shared or production systems.

The rest of this tutorial assumes you are running the docker command as a user in the docker group. If you choose not to, prepend the commands with sudo.

The Docker daemon (dockerd) runs as the root user and listens on a Unix socket located at /var/run/docker.sock. This socket serves as the primary communication channel between the Docker client (the docker command you run in your terminal) and the Docker daemon. The socket file itself has restricted permissions by default, allowing only root and members of the docker group to access it.

When you add a user to the docker group, you’re granting them read and write access to this socket. This allows them to send commands to the Docker daemon without needing to elevate their privileges using sudo for each operation. The Docker daemon then executes these commands with root privileges, enabling container operations that require system-level permissions such as manipulating network interfaces, mounting filesystems, and managing kernel features like namespaces and cgroups.

This convenience comes with an important security trade-off: because the Docker daemon runs with root privileges and executes commands on behalf of docker group members, those users effectively have root-equivalent privileges on the host system. The ability to instruct a root-privileged daemon to perform operations makes docker group membership functionally equivalent to having direct root access. Understanding the specific risks associated with docker group membership is essential for making informed decisions about user permissions on your system.
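The group-management steps above look like this in practice (a sketch; sammy is a placeholder username):

```shell
# Add the current user to the docker group
sudo usermod -aG docker $USER

# Or add a specific user by name (replace sammy with the actual username)
sudo usermod -aG docker sammy
```

Remember that the change only takes effect after the affected user logs out and back in.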
Docker group members can mount sensitive host directories into containers, run privileged containers, and in effect execute arbitrary commands as root. Given these significant security implications, manage docker group membership carefully: only add users you would trust with root access, and review membership regularly on shared or production systems.

With Docker installed and working, now’s the time to become familiar with the command-line utility. Using docker consists of passing it a chain of options and subcommands followed by arguments. To view all available subcommands, run docker with no arguments; the list will vary depending on your installed Docker version. To view the options available to a specific command, use its --help flag, and to view system-wide information, use docker info.

Docker containers are run from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker. Anybody can build and host their Docker images on Docker Hub, so most applications and Linux distributions you’ll need to run Docker containers have images available there.

To check whether you can access and download images from Docker Hub, run the hello-world image. If Docker is working correctly, the output will include a message confirming your installation.

You can search for images available on Docker Hub by using the docker search command. For example, you can search for the Rocky Linux image. The command returns a list of images whose names match the search string. In the OFFICIAL column, OK indicates an image built and maintained by the organization behind the project.

Once you’ve identified the image you would like to use, you can download it using the pull subcommand. After an image has been downloaded, you can run a container from it using the run subcommand. If the image is not already available locally, Docker will automatically download it before running the container.

Note: The hello-world container exits immediately because no interactive process is attached.
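The commands described above can be sketched as follows (output omitted; the image names follow Docker Hub conventions):

```shell
# List available subcommands, and get help for a specific one
docker
docker run --help

# Display system-wide information
docker info

# Verify access to Docker Hub
docker run hello-world

# Search for, pull, and run the Rocky Linux image
docker search rockylinux
docker pull rockylinux
docker run rockylinux
```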
To see the images that have been downloaded to your system, use the images subcommand. As you’ll see later in this tutorial, images that you use to run containers can be modified and used to create new images, which can then be uploaded (pushed) to Docker Hub or other Docker registries.

The hello-world container you ran in the previous step is an example of a container that runs and exits after displaying a message. Containers, however, can also be interactive and run long-lived processes.

As an example, let’s run a container using the latest Rocky Linux image. The combination of the -i (interactive) and -t (pseudo-TTY) options allows you to access a shell inside the container. Your command prompt will change to reflect that you’re now working inside the container.

Note: The value shown in the prompt is the unique identifier of the running container.

You can now run commands inside the container. For example, install the MariaDB server. You do not need to prefix commands with sudo inside the container because you are operating as the root user by default. To leave the container, type exit.

Docker Compose is a tool that simplifies the process of defining and running multi-container Docker applications. Instead of starting each container individually with separate docker run commands, Docker Compose allows you to define all your application’s services, networks, and volumes in a single YAML configuration file. With one command, you can then create and start all the services from your configuration, making it ideal for development environments, testing, and simple production deployments.

Docker Compose is particularly valuable when your application consists of multiple interconnected services. For example, a typical web application might include a web server, an application server, a database, and a cache, each running in its own container.
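The interactive-container walkthrough above looks like this in practice (a sketch; the container ID shown in your prompt will differ):

```shell
# Start an interactive shell in a Rocky Linux container
docker run -it rockylinux

# Inside the container, install MariaDB (no sudo needed; you are root)
dnf install mariadb-server

# Leave the container
exit
```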
Docker Compose manages the lifecycle of all these containers together, ensuring they can communicate with each other and start in the correct order.

Docker Compose is included as a plugin with modern Docker installations (Docker Engine 20.10 and later). The plugin integrates directly with the Docker CLI, so you use docker compose as a subcommand rather than a separate docker-compose binary. Verify it is available on your system by running docker compose version. If the command is not found, Docker Compose may not have been installed correctly; refer back to Step 1 to ensure you installed the docker-compose-plugin package.

To demonstrate Docker Compose, you’ll create a simple multi-container application consisting of a web server and a Redis cache. This example shows how Docker Compose orchestrates multiple services that might work together in a real application.

Create a file named docker-compose.yml in your current directory. This YAML configuration file defines the structure of your multi-container application. Let’s break down each section:

- services:: Defines the containers that make up your application. Each service runs in its own container, and Docker Compose manages them as a group.
- web:: Defines a service named “web” that will run an NGINX web server. The service name becomes the hostname that other containers can use to communicate with this service on the default Docker Compose network.
- image: nginx:latest: Specifies that this service uses the official NGINX image from Docker Hub. The latest tag pulls the most recent stable version. Docker Compose will automatically pull this image from Docker Hub if it’s not already available locally.
- ports: - "8080:80": Maps port 8080 on your host machine to port 80 inside the container. This allows you to access the NGINX web server by visiting http://your_server_ip:8080 in your browser, replacing your_server_ip with your Rocky Linux server’s IP address or hostname. The format is "HOST_PORT:CONTAINER_PORT".
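Based on the breakdown above, the docker-compose.yml file could look like the following. This is a reconstruction from the description in the text; the redis service is assumed to use the official redis:latest image mentioned later in the tutorial.

```yaml
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  redis:
    image: redis:latest
```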
NGINX listens on port 80 by default inside the container, and this mapping makes it accessible from your host system.

- redis:: Defines a service named “redis” that will run a Redis in-memory data store.

When services are defined in the same docker-compose.yml file, Docker Compose automatically creates a dedicated network for them, allowing the containers to communicate with each other using their service names as hostnames. For example, the web container could connect to Redis using the hostname redis.

To create and start all the services defined in your docker-compose.yml file, run docker compose up with the -d flag. The -d flag runs the containers in detached mode, meaning they run in the background and don’t occupy your terminal. Docker Compose creates a network for the project and starts both containers; the output will show the network and both containers being created. Docker Compose automatically generates container names by combining your directory name (or project name) with the service name and an index number.

If you run the command without the -d flag, Docker Compose will display the log output from all containers in your terminal, which is useful for debugging but prevents you from running other commands in that terminal session.

To confirm that both services are running correctly, use the docker ps command. You should see both the nginx and redis containers running, along with details such as their container IDs, images, status, and port mappings.

To verify the web server is working correctly, open a web browser and visit http://your_server_ip:8080. You should see the default NGINX welcome page, which confirms that the web server is running and accessible. The page typically displays “Welcome to nginx!” along with basic information about the server. Alternatively, you can test it from the command line using curl, which will display the HTML of the NGINX welcome page in your terminal.
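A sketch of the commands above (run them from the directory containing docker-compose.yml):

```shell
# Start both services in the background
docker compose up -d

# Confirm both containers are running
docker ps

# Fetch the NGINX welcome page from the command line
curl http://localhost:8080
```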
When you’re finished working with your multi-container application, you can stop and remove all containers, networks, and other resources created by Docker Compose with the single command docker compose down. This command performs a clean shutdown, stopping the containers and then removing them along with the network Docker Compose created for the project.

The containers are removed, but the Docker images (nginx:latest and redis:latest) remain on your system. This means the next time you run docker compose up, the containers will start much faster since the images don’t need to be downloaded again. If you want to verify that the containers have been removed, run docker ps -a again, and they should no longer appear in the list.

When you start a container from a Docker image, you can create, modify, and delete files just like on a regular system. These changes apply only to that container. You can start and stop it, but if you remove it using the docker rm command, the changes will be lost. This section shows you how to save the state of a container as a new Docker image.

After installing MariaDB inside the Rocky Linux container, you now have a container that differs from the original image used to create it. To save the state of the container as a new image, first exit from it. Then commit the changes to a new Docker image using the commit subcommand. The -m option specifies a commit message, and -a specifies the author. The container ID is the one you noted earlier.

Note: The new image is saved locally on your system. You can push it to a registry like Docker Hub to share it with others.

Best Practice: While docker commit is useful for quick experiments, production workflows typically use a Dockerfile. Dockerfiles provide a repeatable and version-controlled way to build images.

After the operation completes, list the Docker images on your system. In this example, rockylinux-mariadb is the new image derived from the Rocky Linux base image. The size difference reflects the changes made, such as installing MariaDB.
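A sketch of the teardown and commit workflow described above (the container ID and the username sammy are placeholders; substitute your own):

```shell
# Tear down the Compose application
docker compose down

# Commit the modified container to a new image
# (replace d9b100f2f636 with your container's ID and sammy with your username)
docker commit -m "Added MariaDB" -a "sammy" d9b100f2f636 sammy/rockylinux-mariadb

# List local images to confirm the new image exists
docker images
```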
After using Docker for a while, you’ll have multiple containers on your system, some running and others stopped. To view currently running containers, use docker ps. To view all containers (both running and stopped), add the -a option, and to view the most recently created container, use the -l option. The STATUS column indicates the state of each container, such as Up or Exited.

To stop a running container, use docker stop followed by the container ID, which you can find in the output of the docker ps command.

The next logical step after creating a new image from an existing image is to share it with others on Docker Hub or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there. This section shows you how to push a Docker image to Docker Hub.

To create an account on Docker Hub, register at Docker Hub. After creating your account, log into Docker Hub from your terminal with docker login and enter your password when prompted. If you specified the correct password, authentication should succeed. Then you can push your image using the push subcommand. It may take some time for the upload to complete.

After pushing an image to a registry, it should be listed on your account’s dashboard. If a push attempt results in an authentication error, you likely did not log in; log in, then repeat the push attempt.

To install Docker on Rocky Linux, follow these steps:

1. Update the package database.
2. Install repository management tools.
3. Add the Docker repository.
4. Install Docker and its components.
5. Start the Docker service.
6. Enable Docker to start at boot.

Yes, you can run Docker on Rocky Linux. Rocky Linux is binary-compatible with Red Hat Enterprise Linux (RHEL), allowing it to use Docker’s RHEL-compatible repositories without issues.
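The container-management and push commands above can be sketched as follows (the container ID and the username sammy are placeholders):

```shell
# List running containers; -a includes stopped ones, -l shows the latest
docker ps
docker ps -a
docker ps -l

# Stop a running container by ID (placeholder ID)
docker stop d9b100f2f636

# Log in to Docker Hub and push the image (replace sammy with your username)
docker login -u sammy
docker push sammy/rockylinux-mariadb
```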
While Rocky Linux includes Podman as the default container runtime, Docker remains a popular choice due to its mature ecosystem, extensive documentation, Docker Compose integration, and widespread industry adoption. Many organizations prefer Docker for its compatibility with existing workflows, CI/CD pipelines, and third-party tools that expect Docker-specific APIs.

Docker Compose is included as a plugin when you install Docker from the official repository:

- Automatic installation: The docker-compose-plugin package is installed along with Docker when you follow Step 1 of this tutorial.
- Verify installation: Confirm Docker Compose is available by running docker compose version.
- Use docker compose (with a space) instead of the older docker-compose (with a hyphen) for all operations.

By default, yes, Docker requires sudo, but you can configure it to work without it. Docker requires sudo privileges because the Docker daemon runs as root and manages system-level resources. However, you can configure Docker to work without sudo by adding your user to the docker group using sudo usermod -aG docker $(whoami). After running this command, you need to log out and back in for the change to take effect, and then you can run Docker commands without sudo.

Be aware that this convenience comes with significant security implications. Members of the docker group effectively have root-equivalent privileges on the host system, as they can mount sensitive directories, create privileged containers, and execute commands with elevated permissions. Only add trusted users to the docker group, especially on shared or production systems.

Docker CE (Community Edition) and Podman have several fundamental differences: Docker relies on a root-privileged daemon, while Podman is daemonless; Podman runs rootless containers by default, while Docker requires extra configuration for rootless mode; and Docker has the larger ecosystem of registries, tooling, and third-party integrations.

Recommendation: Choose Docker for ecosystem compatibility and mature tooling. Choose Podman for enhanced security through rootless operation and daemonless architecture.
Before installing Docker from the official repository, remove any conflicting container packages that may already be present on the system, then run the removal command. The command is safe to run even if none of these packages are installed; DNF will simply report that they’re not present. After removing conflicting packages, add the Docker repository and install Docker CE as described in Step 1 of this tutorial.

To verify your installation, follow these steps:

1. Check the service status and confirm the output shows “active (running)”.
2. Run a test container with the hello-world image (with or without sudo depending on your docker group membership) and verify the output shows a message confirming your installation is working.
3. Check the Docker version; for detailed client and server information, use docker version.
4. View system-wide information with docker info, which displays details including the number of containers and images, the storage driver, and the kernel version.

If all these commands execute without errors, your Docker installation is working correctly.

Follow these steps to push an image to Docker Hub:

1. Create a Docker Hub account if you don’t have one.
2. Log in from the terminal with docker login and enter your password when prompted.
3. Tag your image using the format username/image-name.
4. Push the image and wait for the upload to complete. The upload may take several minutes depending on image size and network speed.
5. Verify on Docker Hub. Once complete, the image will appear in your Docker Hub account dashboard and will be available for others to pull.

If you encounter an “unauthorized: authentication required” error, verify you’re logged in with docker login before attempting the push again.
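A sketch of the cleanup, verification, and push commands described above. The package list follows Docker’s upstream install documentation for RHEL-family systems, and sammy is a placeholder username.

```shell
# Remove potentially conflicting packages (safe to run if they are absent)
sudo dnf remove docker docker-client docker-client-latest docker-common \
    docker-latest docker-latest-logrotate docker-logrotate docker-engine \
    podman runc

# Verify the installation
sudo systemctl status docker
docker run hello-world
docker --version
docker info

# Tag and push an image (replace names with your own)
docker login -u sammy
docker tag rockylinux-mariadb sammy/rockylinux-mariadb
docker push sammy/rockylinux-mariadb
```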
Beyond installation, you learned how to pull and run containers, work with Docker images, use Docker Compose to manage multi-container applications, commit container changes to create custom images, and push images to Docker Hub for distribution. These skills form the foundation for containerizing applications, managing development environments, and deploying services consistently across different systems.

As next steps, you can explore writing Dockerfiles to automate image builds, experiment with Docker networking and volumes for persistent data storage, or integrate Docker into your CI/CD pipelines. The container management skills you’ve developed here will serve you well whether you’re building microservices, setting up development environments that match production, or deploying applications at scale with orchestration tools like Kubernetes.