# Docker Fundamentals: The Tip of the Iceberg

In this AI era, where everything is "AI this" and "AI that," I see that the industry needs people who can manage, plan, and monitor system architecture more than ever before. That's why I chose to learn something new (SRE / DevOps / Solution Architecture), and containerizing apps is just the tip of the iceberg.

Docker is a tool that lets you containerize your app, so the infamous "it works on my machine" problem can finally disappear. This post is what I learned about the fundamentals of Docker. If you look at the Docker docs, there's a lot to explore, but honestly, the fundamentals aren't that much to grasp. It basically comes down to looping back and forth between four things: images, containers, the Dockerfile, and basic Docker commands.

### What is a Docker Image?

A Docker image is simply a packaged app. The analogy I like to use: a Docker image is like a video game disc, and a Docker container is like a PlayStation, the thing that runs the disc. You can play the same game on a different PlayStation using the same disc, right? Same idea here. If a Docker image is the packaged app, a Docker container is the instance that runs it.

### Docker Container vs Virtual Machine

You might be thinking: isn't a Docker container just like a virtual machine? The answer is actually no. A Docker container runs on top of the host OS and shares the host's kernel, while a virtual machine runs its own full guest OS on top of a hypervisor. The most visible difference between the two is size: because a VM contains a full OS, it's much larger than a Docker container. Docker images and containers are typically megabytes (MB) in size, while VMs are usually gigabytes (GB).

### What is a Dockerfile?

A Dockerfile is a new kind of file that will live in your project. It contains the set of instructions Docker follows to containerize your project. You might be wondering: how exactly does this solve the "it works on my machine" problem?
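Before answering that, here's what a minimal Dockerfile can look like. This is a sketch for a hypothetical Node.js project; the base image tag, port, and file names are illustrative assumptions, not from any specific project:

```dockerfile
# Base environment the image builds on (illustrative Node.js Alpine tag)
FROM node:20-alpine

# Directory inside the image where the app will live
WORKDIR /app

# Copy dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the project source
COPY . .

# Document the port the app listens on (hypothetical)
EXPOSE 3000

# Command the container runs on start (assumes an index.js entry point)
CMD ["node", "index.js"]
```

Each line is one instruction, executed top to bottom when you build the image.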
Here's the thing: with a Dockerfile, you build your own project image and share it. Inside the Dockerfile, you specify the environment the image runs on; it can be node:xx-alpine, ubuntu:xx, or many others. With those instructions, starting from FROM, you can build and share your project image, and other engineers can run your project without having to replicate your local setup.

### Basic Docker Commands

To run an image (whether one you built or one you pulled from Docker Hub), just run:

```shell
docker run {image_name}
```

To see currently running containers, use `docker ps`, or add the `-a` flag to see all containers (including stopped ones):

```shell
docker ps -a
```

To build an image from your project (make sure your Dockerfile exists), run this in your project directory:

```shell
docker build -t my-project .
```

This will build a Docker image with my-project as the image name.

When I said Docker is just the tip of the iceberg, I wasn't lying: within Docker itself, you can orchestrate images, manage the network between containers, manage storage and data inside containers, and much more. I'll cover that in next week's post about Docker Compose. Stay tuned!