Docker QA
Here are the answers to your first 10 Docker interview questions, with clear explanations and examples:
Answer: Docker is an open-source containerization platform that allows developers to build, package, and run applications in lightweight, portable containers. It ensures that software runs consistently across different environments — from a developer’s laptop to production servers.
Key Benefits:
- Portability: “Works on my machine” issues are eliminated.
- Isolation: Each container runs independently.
- Efficiency: Containers share the same OS kernel, making them lightweight.
- Scalability: Easily scale applications up or down.
Example: You can run a Node.js application inside a Docker container with all dependencies, making it deployable anywhere without worrying about version conflicts.
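The Node.js scenario above can be sketched as a minimal Dockerfile; this is an illustrative sketch only, and the file names `package.json` and `server.js` plus port 3000 are assumptions, not from the source:

```dockerfile
# Hypothetical Dockerfile for a small Node.js app (assumed file names)
FROM node:18-alpine          # pinned base image so every environment matches
WORKDIR /app
COPY package*.json ./        # copy manifests first to leverage layer caching
RUN npm ci --omit=dev        # install only production dependencies
COPY . .                     # then copy the application source
EXPOSE 3000                  # documents the port the app is assumed to use
CMD ["node", "server.js"]
```

Because dependencies are installed before the source is copied, rebuilding after a code-only change reuses the cached `npm ci` layer.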
Feature | Docker (Container) | Virtual Machine (VM) |
---|---|---|
OS Layer | Shares the host OS kernel | Has a separate guest OS |
Size | Lightweight (MBs) | Heavyweight (GBs) |
Startup Time | Seconds | Minutes |
Performance | Near-native | Slower due to hypervisor overhead |
Isolation | Process-level | Hardware-level |
Use Case | Microservices, CI/CD, app portability | Full OS virtualization |
Example:
- Docker: Runs multiple microservices on a single host efficiently.
- VM: Suitable for running multiple different OS instances.
Answer: A Docker container is a runtime instance of a Docker image — an isolated environment that contains everything an application needs to run (code, libraries, dependencies).
Key points:
- Containers are ephemeral (can be stopped, deleted, recreated easily).
- They run on the same OS kernel as the host.
- They provide consistent runtime environments.
Example: Running a Python app:
docker run python:3.9 python app.py
This command runs your Python application in an isolated container.
Answer: Docker Engine is the core component that runs and manages Docker containers. It is a client-server application consisting of:
- Docker Daemon (`dockerd`) – runs in the background and manages containers, images, networks, and volumes.
- Docker CLI – command-line interface to interact with Docker.
- REST API – used for communication between the CLI and the daemon.
Example: When you run:
docker run nginx
The CLI sends the command to the Docker Engine, which downloads the NGINX image and starts a container.
Answer: The Docker daemon (`dockerd`) is a background process responsible for:
- Building and managing Docker images.
- Creating, running, and stopping containers.
- Managing networks and storage volumes.
- Listening for Docker API requests.
Example:
When you execute `docker build .`, the CLI sends a build request to the daemon, which processes the Dockerfile and builds the image.
Answer: The Docker Command-Line Interface (CLI) allows users to interact with Docker Daemon using simple commands.
Common Commands:
- `docker run` – create and start a container
- `docker ps` – list running containers
- `docker images` – list images
- `docker build` – build an image from a Dockerfile
- `docker stop` – stop a running container
Example:
docker run -d -p 8080:80 nginx
This runs an NGINX container in detached mode and maps port 80 inside the container to port 8080 on the host.
Answer: A Docker image is a read-only template used to create containers. It contains the application code, runtime, libraries, and configuration files.
Key points:
- Images are built from Dockerfiles.
- They are layered, allowing reusability and fast builds.
- Stored locally or in registries like Docker Hub.
Example: The official Ubuntu image:
docker pull ubuntu
docker run -it ubuntu bash
Answer: A Dockerfile is a text file containing a series of instructions to build a Docker image.
Example Dockerfile:
FROM python:3.9
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
Explanation:
- `FROM`: base image
- `WORKDIR`: working directory
- `COPY`: copy source code
- `RUN`: execute commands during build
- `CMD`: command to run when container starts
To build:
docker build -t myapp .
Answer: Docker Hub is a cloud-based container registry provided by Docker to store, share, and distribute Docker images.
Features:
- Public and private repositories.
- Integration with GitHub for automated builds.
- Pull official and community images.
Example: You can pull an official MySQL image from Docker Hub:
docker pull mysql:8
Answer: Use the `docker pull` command followed by the image name (and an optional tag).
Syntax:
docker pull <repository>:<tag>
Example:
docker pull nginx:latest
- If no tag is specified, Docker pulls the latest tag by default.
- The image is downloaded and stored locally for use.
Here are the next 10 Docker interview questions (Q11–Q20) with detailed, practical answers and examples — focused on container lifecycle management and commands 👇
Answer: You can list running or all containers using the `docker ps` command.
Commands:
- List running containers: `docker ps`
- List all containers (including stopped ones): `docker ps -a`
Example Output:
CONTAINER ID IMAGE COMMAND STATUS
2d4e1fbdc5b1 nginx "/docker-entrypoint.…" Up 2 minutes
b3d2e8cc6df7 ubuntu "/bin/bash" Exited (0) 3 days ago
Tip: You can add `-q` to list only container IDs:
docker ps -aq
Answer: Use the `docker stop` command followed by the container ID or name.
Syntax:
docker stop <container_id_or_name>
Example:
docker stop nginx_container
Note:
- Gracefully stops the container by sending a SIGTERM signal.
- Use `docker kill` to force stop immediately (sends SIGKILL).
Answer: Use the `docker rm` command to delete a stopped container.
Syntax:
docker rm <container_id_or_name>
Example:
docker rm myapp_container
To remove all stopped containers:
docker container prune
(or)
docker rm $(docker ps -aq)
To force-remove a running container, use the `-f` (force) flag:
docker rm -f <container_id>
Answer: Use the `docker images` or `docker image ls` command.
Example:
docker images
Output Example:
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 4bb46517cac3 2 days ago 142MB
ubuntu 20.04 f643c72bc252 3 weeks ago 72.8MB
Show all intermediate images:
docker images -a
Answer: Use the `docker rmi` command to delete an image.
Syntax:
docker rmi <image_id_or_name>
Example:
docker rmi nginx:latest
Remove all unused images:
docker image prune
Remove all images (used + unused):
docker rmi $(docker images -q)
Answer: Use the `docker rename` command.
Syntax:
docker rename <old_name> <new_name>
Example:
docker rename webapp_container frontend_app
Now you can use the new name to manage the container:
docker stop frontend_app
Answer: Detached mode (`-d`) runs the container in the background.
Syntax:
docker run -d <image_name>
Example:
docker run -d -p 8080:80 nginx
Explanation:
- `-d` → detached mode
- `-p 8080:80` → maps port 80 in the container to port 8080 on the host
To see if it’s running:
docker ps
Answer: You can attach to a container’s STDIN/STDOUT stream using `docker attach`.
Syntax:
docker attach <container_id_or_name>
Example:
docker attach ubuntu_container
Tip: To detach without stopping the container:
- Press Ctrl+P, then Ctrl+Q
Alternative (safer): Use `docker exec -it` to run a shell inside the container:
docker exec -it ubuntu_container bash
Answer: Use the `docker logs` command.
Syntax:
docker logs <container_id_or_name>
Examples:
- View logs of a container: `docker logs myapp`
- Stream logs in real time (like `tail -f`): `docker logs -f myapp`
- Show last 50 lines: `docker logs --tail 50 myapp`
Feature | docker run | docker start |
---|---|---|
Purpose | Creates and starts a new container from an image | Restarts an existing stopped container |
Creates container? | Yes | No |
New process each time? | Always new | Uses existing container |
Example | `docker run -d nginx` | `docker start nginx_container` |
When used | When you want a new instance | When you want to reuse an old one |
Example:
docker run --name test_nginx -d nginx
docker stop test_nginx
docker start test_nginx
The second command restarts the same container, preserving previous data.
Perfect 👍 Let’s continue with Docker Networking — Questions 21 to 30, explained with clear definitions, commands, and real-world examples.
Answer: The default Docker network driver is the bridge driver.
When you run a container without specifying a network, it automatically connects to the default bridge network (a network literally named `bridge`) created by Docker.
Example:
docker run -d nginx
This command attaches the container to the default bridge network.
View the default network:
docker network ls
Output:
NETWORK ID NAME DRIVER SCOPE
b2c5d63e5c2a bridge bridge local
Answer: Docker supports several built-in network drivers:
Network Type | Driver | Description |
---|---|---|
bridge | bridge | Default network for standalone containers on a single host. |
host | host | Removes network isolation and uses the host’s network directly. |
none | null | Disables networking for the container. |
overlay | overlay | Enables multi-host communication across Swarm or Docker cluster. |
macvlan | macvlan | Assigns a MAC address and appears as a physical device on the network. |
custom | bridge/overlay | User-defined bridge or overlay networks for container grouping. |
Command to list all networks:
docker network ls
Answer: Use the `docker network create` command.
Syntax:
docker network create <network_name>
Example:
docker network create my_custom_network
Verify:
docker network ls
Specify a driver (optional):
docker network create --driver bridge my_bridge_network
Answer: There are two ways to attach a container to a network:
- During container creation:
docker run -d --name webapp --network my_custom_network nginx
- After container creation:
docker network connect my_custom_network webapp
To disconnect:
docker network disconnect my_custom_network webapp
Use case: Connect a backend API and frontend app to the same network for internal communication.
Answer: Bridge networking is the default network mode where Docker creates a private internal network on the host.
- Containers on the same bridge network can communicate via container names.
- External access is provided through port mapping (`-p`).
Example:
docker run -d --name app1 --network bridge nginx
docker run -d --name app2 --network bridge ubuntu sleep infinity
Ping test:
docker exec app2 ping app1
Use case: Used for connecting containers on a single Docker host.
Answer: In host networking, the container shares the host’s network stack directly.
Features:
- No network isolation between container and host.
- Faster performance since no NAT (network address translation).
- Useful for applications that need direct host access (e.g., monitoring agents, firewalls).
Example:
docker run -d --network host nginx
Behavior:
- The container uses the host’s IP.
- If you open port 80 on the host, the NGINX server will be directly accessible.
Answer: The none network disables all network connectivity for a container.
Example:
docker run -d --network none alpine sleep infinity
Behavior:
- No access to external network or other containers.
- Used for security or isolated batch jobs where no networking is needed.
To verify:
docker inspect <container_id> | grep "IPAddress"
It will return an empty value, since no IP address is assigned.
Answer: An overlay network connects multiple Docker hosts (nodes) in a Swarm or cluster environment. It uses VXLAN encapsulation to allow containers on different hosts to communicate securely.
Key features:
- Works across multiple physical/virtual machines.
- Enables multi-host container communication.
- Often used in Docker Swarm and Kubernetes (CNI plugins).
Example:
docker network create -d overlay my_overlay_net
Use case: Used in distributed applications like microservices running across several nodes.
Answer: Use the `docker network inspect` command to view detailed network configuration.
Syntax:
docker network inspect <network_name>
Example:
docker network inspect bridge
Output (trimmed):
[
{
"Name": "bridge",
"Driver": "bridge",
"Containers": {
"2d4e1fbdc5b1": {
"Name": "nginx",
"IPv4Address": "172.17.0.2/16"
}
}
}
]
You can view:
- Connected containers
- Subnets
- IP addresses
- Gateway configuration
Answer: In a user-defined bridge network, Docker creates a built-in DNS service that allows containers to communicate using container names instead of IP addresses.
Example:
docker network create mynet
docker run -d --name db --network mynet mysql
docker run -d --name web --network mynet nginx
Inside the `web` container:
ping db
Excellent ✅ — Now we’ll move into Docker Volumes & Data Persistence (Q31–Q40) — one of the most important topics for DevOps & Cloud Engineer interviews. These questions will cover volume types, data sharing, backup strategies, and persistent storage management with examples.
Answer: A Docker volume is a persistent storage mechanism used to store and share data between containers or between a container and the host system.
Key features:
- Managed by Docker (stored under `/var/lib/docker/volumes/`)
- Independent of the container lifecycle
- Can be shared among multiple containers
Example:
docker volume create mydata
docker run -d -v mydata:/app/data nginx
Even if the container is deleted, data in `mydata` persists.
Answer: Docker volumes are used for:
- Data persistence: Keep data even if a container is deleted.
- Data sharing: Allow multiple containers to access the same data.
- Separation of concerns: Keep app data separate from the container image.
- Performance: Better than bind mounts, especially on Linux systems.
- Backup & migration: Volumes can be backed up, restored, and moved.
Example Use Case: A MySQL container storing its database files in a volume:
docker run -d -v mysql_data:/var/lib/mysql mysql:8
Answer: There are three main types of Docker-managed storage:
Type | Description | Use Case |
---|---|---|
Volumes | Managed by Docker, stored in `/var/lib/docker/volumes/` | Persistent data (e.g., databases, configs) |
Bind Mounts | Maps a host directory to the container | Direct access to host files (e.g., local dev) |
tmpfs Mounts | Stored in host memory (RAM) only | Temporary data (e.g., caching, secrets) |
Example commands:
# Volume
docker run -v myvol:/data nginx
# Bind Mount
docker run -v /home/user/app:/app nginx
# tmpfs Mount
docker run --tmpfs /app/cache nginx
Answer: Create a volume:
docker volume create myvolume
List all volumes:
docker volume ls
Example Output:
DRIVER VOLUME NAME
local myvolume
local mysql_data
Inspect a specific volume:
docker volume inspect myvolume
Answer: Use the `-v` or `--mount` option with `docker run`.
Using `-v` (short syntax):
docker run -d -v mydata:/usr/share/nginx/html nginx
Using `--mount` (preferred for clarity):
docker run -d --mount source=mydata,target=/usr/share/nginx/html nginx
Check mounted volumes:
docker inspect <container_id> | grep Mounts -A5
Feature | Volumes | Bind Mounts |
---|---|---|
Managed by | Docker | Host system |
Storage Location | `/var/lib/docker/volumes/` | Any path on host |
Backup | Easy (via `docker volume`) | Manual |
Performance | Better, especially on Linux | Depends on host FS |
Portability | More portable | Path-dependent |
Security | More isolated | Full host access risk |
Example:
# Volume
docker run -v appdata:/data nginx
# Bind Mount
docker run -v /home/user/app:/data nginx
Answer: You can share the same volume between multiple containers.
Example:
docker volume create shared_data
docker run -d --name app1 -v shared_data:/data nginx
docker run -d --name app2 -v shared_data:/data alpine sleep infinity
Both containers can read/write to `/data`:
docker exec app2 ls /data
Use case: Multiple containers sharing config files or logs.
Answer: You can use `tar` inside a temporary container:
docker run --rm -v mydata:/data -v $(pwd):/backup busybox tar czf /backup/mydata_backup.tar.gz -C /data .
✅ This creates a compressed backup file `mydata_backup.tar.gz`.
To restore into the volume:
docker run --rm -v mydata:/data -v $(pwd):/backup busybox tar xzf /backup/mydata_backup.tar.gz -C /data
Use case: Useful for migrating volumes between servers or disaster recovery.
Answer:
Remove a specific volume:
docker volume rm mydata
Remove all unused (dangling) volumes:
docker volume prune
Remove a container together with its anonymous volumes (using the `-v` flag):
docker rm -v mycontainer
Answer: If a container using a named volume is deleted, the data inside the volume remains intact.
Example:
docker run -d --name web -v mydata:/app nginx
docker rm -f web
✅ Volume `mydata` still exists and can be reused:
docker run -d -v mydata:/app nginx
However: If you used an anonymous volume, it’s automatically deleted when the container is removed unless explicitly retained.
Check anonymous volumes:
docker volume ls
Here are Docker Interview Questions (Q41–Q50) — focused on Dockerfile and Image Build Process, with detailed explanations and examples 👇
A Dockerfile is a text file that contains a set of instructions used to build a Docker image automatically. It defines how the image should be created, including base image, dependencies, configurations, and the application itself.
Example:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
COPY app.py /app/app.py
WORKDIR /app
CMD ["python3", "app.py"]
Purpose:
- Automates image creation
- Ensures consistency across environments
- Makes application deployment repeatable and version-controlled
The FROM instruction sets the base image for subsequent instructions in the Dockerfile.
Every Dockerfile must start with a `FROM` instruction.
Example:
FROM ubuntu:20.04
Key Points:
- You can have multiple FROM statements to create multi-stage builds.
- Base images can be:
  - Official images (`alpine`, `ubuntu`, `node`)
  - Custom images from private/public registries.
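Multi-stage builds, mentioned above, chain several `FROM` lines so the final image keeps only the build output. A hedged sketch for a Go program (module layout and binary name are assumptions):

```dockerfile
# Stage 1: build the binary with the full Go toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: copy only the compiled binary into a tiny runtime image
FROM alpine:3.20
COPY --from=builder /out/app /usr/local/bin/app
CMD ["app"]
```

The toolchain layers from stage 1 are discarded; only stage 2 ships, which is why multi-stage images are typically far smaller.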
The RUN instruction executes commands inside the image at build time. It’s used to install packages, configure environments, or set up dependencies.
Example:
RUN apt-get update && apt-get install -y nginx
Note: Each `RUN` command creates a new image layer. To optimize, combine multiple RUN commands using `&&` to reduce image layers.
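To illustrate the layer-reduction advice, compare the two fragments below (package choice is just an example; both assume a Debian/Ubuntu base):

```dockerfile
# Three RUN instructions → three image layers,
# and the apt cache from the first layer persists in the image history
RUN apt-get update
RUN apt-get install -y nginx
RUN rm -rf /var/lib/apt/lists/*

# One chained RUN → a single layer,
# and the apt cache is removed before the layer is committed
RUN apt-get update && \
    apt-get install -y nginx && \
    rm -rf /var/lib/apt/lists/*
```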
`CMD` specifies the default command to run when a container starts.
Example:
CMD ["python3", "app.py"]
Behavior:
- Only one CMD is allowed per Dockerfile (the last one takes effect).
- It can be overridden at container run time:
docker run myimage python3 other.py
Feature | CMD | ENTRYPOINT |
---|---|---|
Purpose | Provides default command/arguments | Defines the executable that always runs |
Overridable | Fully overridden when user provides command | User input becomes arguments to ENTRYPOINT |
Typical Usage | Default parameters | Fixed executable with user arguments |
Example:
ENTRYPOINT ["python3"]
CMD ["app.py"]
Running:
docker run myimage test.py
→ Executes `python3 test.py`
The COPY instruction copies files or directories from your local filesystem (build context) into the image filesystem.
Syntax:
COPY <src> <dest>
Example:
COPY ./app /usr/src/app
Notes:
- Works only with local files.
- Does not support URLs or automatic extraction.
`ADD` is similar to `COPY` but has extra capabilities:
- Can copy from URLs.
- Automatically extracts tar archives (`.tar`, `.tar.gz`, etc.).
Example:
ADD https://example.com/file.tar.gz /tmp/
ADD app.tar.gz /usr/src/app/
Feature | COPY | ADD |
---|---|---|
Source | Local files only | Local files + Remote URLs |
Archive extraction | No | Automatically extracts `.tar`, `.tar.gz` |
Simplicity | Recommended for static copy | More complex behavior |
Best Practice: Use COPY for predictable behavior unless you specifically need ADD features.
`WORKDIR` sets the working directory inside the container for all subsequent instructions (`RUN`, `CMD`, `ENTRYPOINT`, etc.).
Example:
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
Equivalent to:
cd /app
If the directory doesn’t exist, Docker will create it automatically.
`EXPOSE` indicates which network ports the container will listen on at runtime.
Example:
EXPOSE 80
Key Points:
- It’s informational only; it doesn’t publish the port.
- To make the port accessible externally, use:
docker run -p 8080:80 myimage
- Helps other developers understand which ports the app uses.
Nice — here are Q51–Q60 (you actually asked 10 more items, I included the final disk-usage question as Q60). Short, practical answers + commands you’ll use daily.
Use `docker build` in the directory containing the Dockerfile (the build context):
docker build -t myimage:latest .
`-t` tags the image (`name:tag`). `.` is the build context path (current directory). Use `--file` to specify a different Dockerfile.
Use `docker tag` to add/modify a tag for an existing image:
docker tag <image-id-or-name>:<oldtag> myrepo/myimage:1.0
Example:
docker tag myimage:latest myusername/myimage:1.0
- Login:
docker login
- Tag (if not already) and push:
docker tag myimage:latest myusername/myimage:1.0
docker push myusername/myimage:1.0
For private registries, use the registry hostname in the tag (e.g., `registry.example.com/myimage:tag`).
Use `docker stats` for live resource usage (CPU, memory, network, I/O):
docker stats # live for all running containers
docker stats <container> # live for a specific container
Add `--no-stream` for a single snapshot.
Use `docker restart`:
docker restart <container-name-or-id>
You can also stop/start separately:
docker stop <container>
docker start <container>
Use `docker container prune` (interactive confirm) or remove stopped containers programmatically:
docker container prune # prompts for confirmation
Or:
docker rm $(docker ps -aq -f status=exited)
Several options:
- Remove dangling images (untagged):
docker image prune
- Remove all unused images (not referenced by any container):
docker image prune -a
Be careful: `-a` deletes all images not used by any container.
Remove unused networks:
docker network prune
This will remove networks not used by any containers (prompts for confirmation).
`docker system prune` cleans up unused Docker objects in one command:
docker system prune
By default it removes:
- stopped containers
- dangling images
- unused networks
Add `-a` to also remove all unused images (not just dangling) and `--volumes` to remove unused volumes:
docker system prune -a --volumes
Warning: `-a` and `--volumes` can delete data you might need. Use with care.
Use:
docker system df
It displays Docker’s disk usage summary: images, containers, local volumes, build cache, plus size details per item. Combine with `--verbose` for a per-image/container breakdown:
docker system df --verbose
Perfect — these Q61–Q70 dive into Docker’s architecture, image internals, and lightweight base image concepts — the kind of technical depth DevOps/Cloud interviews love.
Docker follows a client-server architecture with three main components:
1. Docker Client:
- CLI or API tool to interact with the Docker daemon.
- Commands like
docker build
,docker run
,docker ps
are sent to the daemon.
2. Docker Daemon (`dockerd`):
- Background service that manages Docker objects (images, containers, networks, volumes).
- Listens on a Unix socket or TCP socket for API requests from clients.
- Uses containerd for container lifecycle management.
3. Docker Registry:
- Stores and distributes Docker images (public: Docker Hub, or private registries).
Architecture flow:
Docker CLI → Docker Daemon → containerd → runc → Linux Kernel (cgroups, namespaces)
`containerd` is a container runtime daemon that handles low-level container lifecycle operations such as:
- Image transfer & storage (pull/push)
- Container creation, execution, and supervision
- Snapshot management
It acts as an interface between Docker Engine and the runc runtime (which actually creates containers).
In short: `dockerd` (high-level management) → `containerd` (runtime orchestration) → `runc` (container creation).
OCI = Open Container Initiative, a Linux Foundation project that standardizes container technology.
It defines:
- OCI Image Specification – format for building and storing container images.
- OCI Runtime Specification – defines how to run containers (namespaces, cgroups, process isolation).
Why it matters: Ensures interoperability — any OCI-compliant image can run with any OCI-compliant runtime (e.g., Docker, Podman, CRI-O).
Docker images are built in layers, each representing an instruction in the Dockerfile (`FROM`, `RUN`, `COPY`, etc.).
Key facts:
- Layers are read-only and cached.
- New layers are added for changes; old layers are reused (build caching).
- Reduces storage and build time.
Example:
FROM ubuntu:20.04 # Base layer
RUN apt-get install nginx # Creates new layer
COPY app/ /var/www/html # Adds another layer
Docker uses UnionFS (Union File System) like OverlayFS, which merges multiple filesystem layers into a single unified view.
Mechanism:
- Each image layer is read-only.
- When a container runs, Docker adds a writable layer on top.
- Changes made inside a container are written only to this top writable layer.
Benefits:
- Efficient storage (shared base layers)
- Fast container creation (layer reuse)
A base image is the starting point for a Docker image build — the foundation on which you install software and run applications.
Types:
- Official OS base images: e.g., `ubuntu:20.04`, `debian`, `alpine`
- Scratch (empty base image): used to create minimal images (for Go or C binaries)
Example:
FROM ubuntu:22.04
Or a scratch image:
FROM scratch
COPY myapp /
CMD ["/myapp"]
Ephemeral containers are temporary, short-lived containers created primarily for debugging running containers or troubleshooting environments without restarting them.
Key points:
- Don’t have persistent storage or networking.
- Created via `kubectl debug` in Kubernetes or `docker run --rm` in Docker.
- Automatically removed after exit.
Example:
docker run --rm -it ubuntu bash
(`--rm` removes it after exit, hence ephemeral.)
Feature | Docker Image | Snapshot |
---|---|---|
Definition | Immutable template for containers | Point-in-time copy of a container’s writable layer |
Purpose | Used to create containers | Used to save container state |
Persistence | Permanent (reusable) | Temporary (for rollback or backup) |
Creation | `docker build`, `docker commit` | `docker commit <container> <newimage>` can act like a snapshot |
In Docker, `docker commit` behaves like creating a snapshot of a running container → produces a new image.
Official images are curated, verified, and maintained images published by Docker or the community on Docker Hub.
Examples: `nginx`, `redis`, `ubuntu`, `mysql`, `node`, `python`
Characteristics:
- Security-verified and regularly updated
- Follow Docker best practices
- No namespace prefix (e.g., `nginx:latest`, not `username/nginx`)
Check:
docker pull nginx
Alpine Linux is a minimal, security-focused Linux distribution designed for small container images.
Why it’s popular:
- Extremely lightweight (~5 MB base image)
- Faster builds & deployments
- Lower attack surface
- Package manager (apk) for quick installs
Example:
FROM alpine:latest
RUN apk add --no-cache python3 py3-pip
Use case: Ideal for microservices or cloud-native apps where minimal footprint is key.
Excellent — these Q71–Q80 cover core container runtime operations, resource limits, and lifecycle management — practical and common in DevOps/Cloud Engineer interviews.
You can pass environment variables using the `-e` or `--env` option of `docker run`, or via the Dockerfile / Compose.
Example 1 — Command line:
docker run -e DB_USER=admin -e DB_PASS=secret myapp
Example 2 — from file:
docker run --env-file .env myapp
Example 3 — Dockerfile:
ENV APP_ENV=production
Example 4 — Docker Compose:
environment:
- APP_ENV=production
- DB_USER=admin
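For context, the Compose fragment above normally sits inside a service definition; a minimal hypothetical `docker-compose.yml` (service names, image names, and values are illustrative assumptions):

```yaml
# Sketch of a two-service stack showing inline and file-based env vars
services:
  web:
    image: myapp:latest        # assumed image name
    ports:
      - "8080:80"
    environment:
      - APP_ENV=production
      - DB_USER=admin
    env_file:
      - .env                   # extra variables loaded from a file
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret   # example value only
```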
Use the `-p` or `--publish` flag of `docker run`:
Syntax:
docker run -p <host_port>:<container_port> <image>
Example:
docker run -d -p 8080:80 nginx
➡️ Maps port 80 inside the container to 8080 on the host
You can verify with `docker ps`.
Use `docker inspect` to get detailed JSON output of container configuration and runtime info.
Example:
docker inspect <container_id_or_name>
To get specific info (e.g., IP address):
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>
Useful for:
- Volume mappings
- Environment variables
- Network details
- Mount points
- Logs and metadata
You can get it using the `docker inspect` command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>
Or check all containers’ IPs:
docker ps -q | xargs docker inspect -f '{{.Name}} - {{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
Use `docker exec` with `-it` for interactive mode.
Examples:
docker exec -it <container_name> /bin/bash
or if Bash is not available:
docker exec -it <container_name> sh
To run a specific command:
docker exec -it webapp ls /var/www/html
Use the `--restart` policy flag of `docker run`.
Policies:
- `no` – (default) never restart
- `on-failure` – restart only on non-zero exit
- `always` – always restart
- `unless-stopped` – restart unless manually stopped
Example:
docker run -d --restart unless-stopped nginx
Use the `--memory` flag to cap container RAM usage.
Example:
docker run -d --memory="512m" nginx
Additional options:
- `--memory-swap` – total memory + swap
- `--memory-reservation` – soft limit (used for scheduling)
Example with swap:
docker run -d --memory="512m" --memory-swap="1g" myapp
There are several flags for CPU control:
Option 1: --cpus
docker run -d --cpus="1.5" myapp
→ limits to 1.5 CPU cores.
Option 2: --cpu-shares
docker run -d --cpu-shares=512 myapp
→ relative weight (default 1024 = 1 full share).
Option 3: Pin to specific cores
docker run -d --cpuset-cpus="0,1" myapp
→ container can run only on CPU 0 and 1.
Use the `--rm` flag. It automatically removes the container filesystem and metadata after exit.
Example:
docker run --rm -it ubuntu bash
Use Case: Temporary debugging, short-lived jobs, CI/CD build/test containers.
Docker images themselves can’t be directly “renamed,” but you can tag them with a new name and remove the old tag.
Example:
docker tag oldimage:latest newimage:latest
docker rmi oldimage:latest
This effectively “renames” it by re-tagging and removing the old name.
Perfect 👏 — these Q81–Q90 focus on Docker’s core architecture, container isolation, pros/cons, and technical internals — all common in DevOps interviews for mid–senior engineers.
Docker consists of several core components that work together to build, run, and manage containers:
Component | Description |
---|---|
Docker Client | CLI tool (docker ) used to send commands to the Docker daemon. |
Docker Daemon (`dockerd`) | Background process that manages Docker objects (images, containers, volumes, networks). |
Docker Images | Immutable, read-only templates used to create containers. |
Docker Containers | Running instances of images that encapsulate applications. |
Docker Registry | Repository for storing and distributing images (e.g., Docker Hub). |
Docker Compose | Tool to define and manage multi-container applications using YAML. |
Architecture Flow:
Docker CLI → Docker Daemon → containerd → runc → Linux Kernel (namespaces + cgroups)
Containers achieve process and resource isolation using Linux kernel features:
- Namespaces → isolate system resources per container:
  - `pid` (process IDs)
  - `net` (network interfaces)
  - `mnt` (file systems)
  - `uts` (hostname)
  - `ipc` (inter-process communication)
  - `user` (user and group IDs)
- Control Groups (cgroups) → limit and prioritize resource usage:
  - CPU, memory, disk I/O, and network bandwidth control.
- Union File System (OverlayFS) → provides an isolated, layered filesystem per container.
Result: Each container behaves like an independent system but shares the host kernel.
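On a Linux host you can see the namespace handles that back this isolation directly, without Docker; a small sketch using the standard `/proc` filesystem:

```shell
# Each entry under /proc/<pid>/ns is a namespace the process belongs to.
# Processes in the same container share these links; processes in
# different containers get different ones.
ls /proc/self/ns
# Typical entries include: cgroup ipc mnt net pid user uts
```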
✅ Key Benefits:
- Portability: Run anywhere — on-prem, cloud, or across OSes.
- Consistency: Same environment for development, testing, and production.
- Efficiency: Lightweight and fast compared to full VMs.
- Isolation: Containers run isolated from each other.
- Scalability: Easily scale microservices and stateless workloads.
- Version Control: Images are immutable and versioned.
- Faster CI/CD: Speeds up build, test, and deploy cycles.
- Security: Containers share the same kernel — higher attack surface than VMs.
- Persistent Storage: Data loss risk without volume management.
- Networking Complexity: Multi-container communication can be tricky.
- Monitoring & Logging: Requires external tools for visibility.
- Performance Overhead: Slight overhead compared to native execution.
- Windows/Mac limitations: Non-Linux hosts rely on virtualization (e.g., WSL2, Hyper-V).
Docker tags label different versions or variants of the same image.
Syntax:
<repository>:<tag>
Examples:
nginx:latest
nginx:1.25.2
python:3.10-slim
Purpose:

- Distinguish versions (`1.0`, `2.0`)
- Indicate environments (`dev`, `prod`)
- Manage image updates easily.

Default tag: `latest` (if none specified).
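The defaulting rule can be sketched with a small shell helper (`parse_image_ref` is a hypothetical function, not a Docker command; for simplicity it ignores registry hosts that contain a port, e.g. `localhost:5000/...`):

```shell
#!/bin/sh
# Split an image reference into repository and tag, defaulting
# the tag to "latest" when none is given -- the same rule Docker applies.
parse_image_ref() {
  ref="$1"
  case "$ref" in
    *:*) repo="${ref%%:*}"; tag="${ref#*:}" ;;  # text before/after the first colon
    *)   repo="$ref";       tag="latest"    ;;  # no colon -> default tag
  esac
  echo "repository=$repo tag=$tag"
}

parse_image_ref nginx             # -> repository=nginx tag=latest
parse_image_ref nginx:1.25.2      # -> repository=nginx tag=1.25.2
parse_image_ref python:3.10-slim  # -> repository=python tag=3.10-slim
```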
Each Docker image is made up of layers, representing a filesystem change (e.g., file added, package installed).
Example:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
COPY . /app
Each instruction = new layer.
Advantages:
- Caching: Reuse unchanged layers to speed up builds.
- Storage efficiency: Shared layers between images.
- Version control: Easier rollback and incremental builds.
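A rough model of why caching behaves this way: each layer's cache key depends on its instruction and on the parent layer's key, so changing an early instruction invalidates every later layer. The sketch below uses a simple chained hash to illustrate this (real BuildKit cache keys also cover file contents):

```shell
#!/bin/sh
# Simplified model of build-cache keys: each layer's key is the hash
# of (parent key + instruction). Editing an instruction changes its own
# key and every key after it, which is why only later layers rebuild.
layer_keys() {
  parent="base"
  for inst in "$@"; do
    parent=$(printf '%s|%s' "$parent" "$inst" | sha256sum | cut -c1-12)
    echo "$inst -> $parent"
  done
}

layer_keys "FROM ubuntu" "RUN apt-get update" "COPY . /app"
echo "---"
# Change the middle instruction: the first key is identical (cache hit),
# the middle and last keys differ (cache miss and rebuild).
layer_keys "FROM ubuntu" "RUN apt-get upgrade" "COPY . /app"
```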
✅ Yes, absolutely.
You can start any number of containers from a single image — each container is an isolated runtime instance.
Example:
docker run -d --name web1 nginx
docker run -d --name web2 nginx
Both run from the same `nginx` image, but have separate:
- Filesystems
- Network interfaces
- Process spaces
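The same pattern extends to any number of containers. A dry-run sketch (it only prints the commands, so no daemon is needed; `gen_runs` is a hypothetical helper):

```shell
#!/bin/sh
# Dry-run: generate the commands to start N isolated containers
# from the same image. Pipe the output into sh to actually run them.
gen_runs() {
  image="$1"; count="$2"; i=1
  while [ "$i" -le "$count" ]; do
    echo "docker run -d --name web$i $image"
    i=$((i + 1))
  done
}

gen_runs nginx 3   # prints three "docker run" commands (web1..web3)
```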
✅ Yes, using Volumes or Bind Mounts.

- Volumes (managed by Docker):
  docker run -v mydata:/data myapp
  Data persists even if the container is deleted.
- Bind Mounts (host path mapped to a container path):
  docker run -v /host/data:/container/data myapp
- Tmpfs Mounts: Temporary in-memory storage for fast ephemeral data.

Purpose: Retain logs, DB data, or application state beyond the container lifecycle.
Feature | Docker | Podman |
---|---|---|
Daemon | Requires Docker daemon (`dockerd`) | Daemonless architecture |
Rootless mode | Optional | Native rootless support |
CLI compatibility | Docker CLI | Docker-compatible CLI |
Container runtime | Uses `containerd` & `runc` | Uses `crun` or `runc` |
Security | Runs containers as root (default) | Runs as non-root by default |
Systemd integration | Manual setup | Native systemd support |
Kubernetes compatibility | Via Docker Desktop or Mirantis | Built-in `podman generate kube` |

In short: Podman = a more secure, daemonless alternative to Docker with better rootless operation.
Namespaces are a Linux kernel feature Docker uses to isolate resources between containers.
Each container runs in its own set of namespaces:
Namespace | Isolates |
---|---|
PID | Process IDs (each container has its own process tree) |
NET | Network interfaces, IP addresses, ports |
IPC | Inter-process communication (message queues, semaphores) |
MNT | Filesystems (each container has its own root FS) |
UTS | Hostname and domain name |
USER | User and group IDs (root inside container ≠ root on host) |
This ensures that processes inside a container can’t “see” or affect others outside it.
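On a Linux host you can observe these namespaces directly, with or without Docker: every process lists its namespace membership under `/proc`, and two processes that share a namespace show the same inode number in the link target:

```shell
#!/bin/sh
# Each entry under /proc/<pid>/ns is a handle to one namespace the
# process belongs to (pid, net, mnt, uts, ipc, user, ...).
ls /proc/self/ns

# The link target encodes the namespace type and an inode number;
# processes in the same namespace show the same inode.
readlink /proc/self/ns/pid
```

Inside a container, comparing these inodes against the host's shows exactly which namespaces Docker created for it.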
## 🧰 Docker CLI, Swarm, Compose & Registries (Q91–Q100)
Use the following commands:
docker version
Shows client and server versions (Docker Engine, API version, etc.)
or
docker --version
Simpler, shows only the version number (e.g., `Docker version 27.1.1, build 123abc`).
Use:
docker info
This displays detailed information about the Docker installation:
- Server version and storage driver
- Number of containers and images
- Plugins (volume, network)
- Docker root directory
- Kernel version and OS
- Swarm mode status
It’s useful for debugging environment issues.
Docker Swarm is Docker’s native container orchestration tool that allows managing a cluster of Docker nodes as a single virtual system.
Key Features:
- Cluster management via Swarm Mode (`docker swarm init`)
- Service deployment and scaling (`docker service create`, `docker service scale`)
- Load balancing and service discovery
- Rolling updates and rollback
- Secure TLS communication between nodes
Example workflow:
docker swarm init
docker service create --name web -p 80:80 nginx
docker service scale web=3
Note: Kubernetes has largely replaced Swarm in enterprise environments, but Swarm remains simpler for small clusters.
Docker Compose is a tool to define and manage multi-container applications using a YAML file (`docker-compose.yml`).
Benefits:

- Manage multiple containers as one unit.
- Define networking, volumes, environment variables, and dependencies.
- Simple commands like:
  docker-compose up -d
  docker-compose down
Example:
version: '3'
services:
web:
image: nginx
ports:
- "8080:80"
db:
image: mysql
environment:
MYSQL_ROOT_PASSWORD: example
✅ Yes, but it’s not recommended by best practices.
Containers are meant to run a single process (microservice pattern). However, you can run multiple processes using:
- Process managers (e.g., `supervisord`, `systemd`)
- Shell scripts that start multiple services
Example:
CMD service nginx start && service php-fpm start && tail -f /dev/null
Better practice: Use separate containers for each process and link them with Docker Compose or Kubernetes.
Docker provides multiple cleanup commands:

- Remove stopped containers:
  docker container prune
- Remove unused images:
  docker image prune -a
- Remove unused networks:
  docker network prune
- Remove all unused data (containers, images, volumes, build cache):
  docker system prune -a --volumes
Tip: Run periodically in CI/CD or cron to free disk space.
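These can be combined into a small maintenance script. The sketch below defaults to a dry run (it prints the commands instead of executing them, so it is safe to test without a daemon); `DRY_RUN` and `cleanup` are illustrative names, and `-f` skips the confirmation prompts:

```shell
#!/bin/sh
# Prune script sketch. By default DRY_RUN is "echo", so the commands
# are only printed. Run with DRY_RUN= (set but empty) to execute them.
DRY_RUN="${DRY_RUN-echo}"

cleanup() {
  $DRY_RUN docker container prune -f
  $DRY_RUN docker image prune -a -f
  $DRY_RUN docker network prune -f
}

cleanup
```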
A Docker registry is a repository for storing and distributing Docker images.
Types:
- Public: e.g., Docker Hub, GitHub Container Registry
- Private: self-hosted or cloud-managed registries for internal use (e.g., Azure Container Registry (ACR))
Registries store repositories, and each repository contains tagged image versions.
Example:
docker pull nginx:latest
This pulls from the public Docker Hub registry by default.
Feature | Docker Hub | Private Registry |
---|---|---|
Hosting | Public cloud service by Docker | Self-hosted or cloud-managed |
Access | Public by default (can be private) | Fully private, controlled access |
Use case | Open-source and community sharing | Enterprise/internal images |
URL | `docker.io/<repo>` | `registry.company.com/<repo>` |
Authentication | Docker ID | Custom (LDAP, OAuth, etc.) |
Example:
docker pull docker.io/nginx
docker pull registry.company.com/app:1.0
You can easily run a local registry container using Docker’s official image:
docker run -d -p 5000:5000 --name registry registry:2
Then tag and push images to it:
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp
To pull:
docker pull localhost:5000/myapp
Tip: Add SSL or authentication for production use.
`docker-compose.yml` is a YAML configuration file that defines how to run and connect multiple Docker containers.

Sections:

- `version:` → Compose file format version
- `services:` → each container definition (image, ports, env, volumes)
- `volumes:` → named volumes
- `networks:` → custom networks
Example:
version: '3'
services:
web:
image: nginx
ports:
- "8080:80"
db:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: password
volumes:
db_data:
Command usage:
docker-compose up -d
docker-compose ps
docker-compose down
Purpose: Simplifies multi-container orchestration, especially for development and testing environments.
## 🛠️ Dockerfile Optimization (Q101–Q110)
Minimizing image size improves build speed, security, and deployment efficiency.
Best Practices:

- Use lightweight base images: e.g., `alpine`, `debian:slim`, or `scratch`.
- Combine RUN commands:
  RUN apt-get update && apt-get install -y python3 && rm -rf /var/lib/apt/lists/*
- Use `.dockerignore` to exclude unnecessary files (e.g., `.git`, logs).
- Multi-stage builds — compile in one stage, copy only the final binary.
- Clean temporary files in the same layer.
- Use specific COPY commands instead of copying the entire context.
- Leverage distroless images for a minimal runtime footprint.
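Several of these practices combined in one illustrative Dockerfile, assuming a hypothetical Python app consisting of `app.py` plus a `requirements.txt`:

```dockerfile
# Small base image instead of the full python image.
FROM python:3.12-alpine

WORKDIR /app

# Install dependencies first so this layer stays cached
# until requirements.txt actually changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy only what is needed (pair with a .dockerignore).
COPY app.py .

CMD ["python", "app.py"]
```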
Multi-stage builds allow using multiple `FROM` statements to create lean final images by copying only the necessary artifacts.
Advantages:
- Smaller final images
- Cleaner builds (no build tools in production image)
- Easier maintenance
Example:
# Stage 1: Build
FROM golang:1.20 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp
# Stage 2: Run
FROM alpine:latest
COPY --from=builder /app/myapp /usr/local/bin/
CMD ["myapp"]
✅ Only the compiled binary is included — not Go or build dependencies.
Feature | ENTRYPOINT | CMD |
---|---|---|
Purpose | Defines the executable | Defines default arguments |
Override behavior | Runtime arguments are appended to it | Runtime arguments replace it completely |
Example | See below | See below |
Example 1 — ENTRYPOINT with CMD:
ENTRYPOINT ["python3"]
CMD ["app.py"]
→ `docker run myimage` runs `python3 app.py`
→ `docker run myimage test.py` runs `python3 test.py`
Example 2 — CMD alone:
CMD ["python3", "app.py"]
→ Can be fully replaced: docker run myimage python3 test.py
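The two override rules can be modeled in a few lines of shell (`effective_cmd` is a hypothetical helper that mirrors Docker's behavior: runtime arguments replace CMD entirely but are appended after ENTRYPOINT):

```shell
#!/bin/sh
# effective_cmd ENTRYPOINT CMD [RUNTIME_ARGS...]
# With runtime args: CMD is discarded, args are appended to ENTRYPOINT.
# Without them: ENTRYPOINT and the default CMD are combined.
effective_cmd() {
  entrypoint="$1"; default_cmd="$2"; shift 2
  if [ "$#" -gt 0 ]; then
    echo "$entrypoint $*" | sed 's/^ *//'
  else
    echo "$entrypoint $default_cmd" | sed 's/^ *//'
  fi
}

effective_cmd "python3" "app.py"                   # -> python3 app.py
effective_cmd "python3" "app.py" test.py           # -> python3 test.py
effective_cmd "" "python3 app.py" python3 test.py  # -> python3 test.py
```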
`COPY` is simpler and more predictable.

COPY | ADD |
---|---|
Copies local files only | Supports URLs and auto-extracts `.tar` archives |
Recommended for static file copying | Can have unexpected behaviors |

Best Practice: Use `COPY` unless you need `ADD`’s special features (e.g., unpacking tarballs).
Docker caches layers — you can reuse previous builds to speed up future builds.
Tips:

- Place frequently changing instructions (like `COPY . .`) at the end.
- Keep `RUN` commands deterministic (no timestamps/randomness).
- Install dependencies before copying app code:
  COPY requirements.txt .
  RUN pip install -r requirements.txt
  COPY . .
- Use `--build-arg` for environment-based caching.
`.dockerignore` excludes files/folders from the build context, preventing them from being sent to the Docker daemon.

Benefits:

- Smaller build context → faster builds
- Keeps secrets, `.git`, and local configs out of images
Example:
.git
node_modules
*.log
.env
Use the `ARG` instruction with `--build-arg` in `docker build`.
Dockerfile:
ARG VERSION=latest
RUN echo "Building version $VERSION"
Build Command:
docker build --build-arg VERSION=1.0 -t myapp:1.0 .
Note: `ARG` values exist only at build time (unlike `ENV`, which persists in containers).
You can override CMD in the `docker run` command.
Example: Dockerfile:
CMD ["nginx", "-g", "daemon off;"]
Override at runtime:
docker run mynginx nginx -t
Or override via Docker Compose:
command: nginx -T
✅ Common steps:

1. Verbose output:
   docker build --no-cache --progress=plain .
2. Check the error layer — read the logs after each `RUN`/`COPY`.
3. Use intermediate shell debugging:
   RUN echo "Step reached" && ls /app
4. Use BuildKit logs:
   DOCKER_BUILDKIT=1 docker build .
5. Test interactively: build a partial image and debug it:
   docker run -it <partial-image> sh
Lightweight images reduce build time, attack surface, and memory footprint.
Examples: `alpine`, `debian:slim`, `busybox`, or distroless images.
Benefits:
- Smaller image size
- Faster deployment
- Fewer vulnerabilities
- Ideal for microservices
Example:
FROM python:3.12-alpine
→ Size: ~40 MB (vs ~1 GB full image)
## ⚙️ Docker Compose (Q111–Q120)
Docker Compose allows defining and managing multi-container applications with a single YAML configuration (`docker-compose.yml`).
Example use cases:
- Web app + DB + cache services
- CI/CD test environments
- Local microservice orchestration
By adding multiple services under the `services:` key.
Example:
version: '3'
services:
web:
image: nginx
db:
image: mysql
environment:
MYSQL_ROOT_PASSWORD: password
A service defines a container configuration (image, ports, volumes, env vars). Each service = one container (or multiple replicas).
Example:
services:
app:
image: myapp
ports:
- "5000:5000"
Compose automatically creates a default bridge network for communication between containers. You can also define custom networks:
Example:
networks:
backend:
services:
web:
image: nginx
networks:
- backend
db:
image: mysql
networks:
- backend
Benefit: Services can talk to each other via service names (DNS-based).
Use:
docker-compose up -d
- `-d` runs in detached mode.
- Without `-d`, logs stream to the terminal.
It builds images (if needed) and starts all defined services.
docker-compose down
Stops and removes:
- Containers
- Networks
- Volumes (optional with `-v`)
To stop but not remove, use:
docker-compose stop
Use:
docker-compose logs
Follow logs in real-time:
docker-compose logs -f
View logs for a specific service:
docker-compose logs db
docker-compose restart
Or restart a single service:
docker-compose restart web
You can also use:
docker-compose up -d --force-recreate
Default file name:
docker-compose.yml
Docker Compose automatically detects this file in the current directory.
Use the `-f` or `--file` flag:
Example:
docker-compose -f docker-compose.prod.yml up -d
You can even combine multiple files (for overrides):
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
## 🌐 Environment Variables, Volumes & Networking (Q121–Q140)
A `.env` file allows you to define environment variables that Compose automatically loads.

Example `.env`:
DB_USER=admin
DB_PASS=secret
APP_ENV=production
docker-compose.yml:
services:
db:
image: mysql
environment:
MYSQL_USER: ${DB_USER}
MYSQL_PASSWORD: ${DB_PASS}
Command:
docker-compose up -d
✅ Compose automatically substitutes values from `.env`.

Note: The `.env` file must be in the same directory as your `docker-compose.yml`.
You can define environment variables in several ways:
Option 1 – Inline:
environment:
- APP_ENV=prod
- DB_USER=admin
Option 2 – Key-value map:
environment:
DB_USER: admin
DB_PASS: secret
Option 3 – From .env file:
env_file:
- .env
Option 4 – CLI override:
docker run -e APP_ENV=dev myapp
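The loading step itself is plain shell: each `KEY=VALUE` line becomes an exported variable. This sketch writes a throwaway `.env` into a temp directory and sources it the way Compose-style tooling does (note that sourcing executes the file, so only load trusted `.env` files):

```shell
#!/bin/sh
# Create a sample .env and load it so every KEY=VALUE line
# becomes an exported environment variable.
tmp=$(mktemp -d)
cat > "$tmp/.env" <<'EOF'
DB_USER=admin
DB_PASS=secret
EOF

set -a          # auto-export every variable assigned while sourcing
. "$tmp/.env"
set +a

echo "connecting as $DB_USER"   # -> connecting as admin
rm -r "$tmp"
```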
Secrets store sensitive data (e.g., passwords, tokens) securely.
Example:
version: '3.7'
services:
db:
image: mysql
secrets:
- db_password
secrets:
db_password:
file: ./db_password.txt
Usage inside container:
Secrets are mounted under `/run/secrets/<secret_name>`.

Note: Docker Swarm or Compose file format v3.7+ supports secrets natively.
Older Compose versions used `links` to connect containers. Linking is now deprecated, replaced by networks.
Legacy Example:
web:
image: nginx
links:
- db
db:
image: mysql
Modern Equivalent:
services:
web:
image: nginx
networks: [backend]
db:
image: mysql
networks: [backend]
networks:
backend:
✅ Containers communicate by service name (DNS-based).
Restart policies control how Docker handles container restarts when they exit or fail.
Supported policies:

- `no` → (default) never restart
- `always` → always restart when stopped
- `on-failure` → restart only on a non-zero exit code
- `unless-stopped` → restart unless manually stopped
Example:
restart: on-failure
Policy | Behavior |
---|---|
always | Always restarts container, even after Docker daemon restarts. |
unless-stopped | Restarts unless manually stopped (`docker stop`). If Docker restarts, it will not restart stopped containers. |
✅ Best Practice: Use `unless-stopped` in production for controlled restarts.
Feature | depends_on | links |
---|---|---|
Purpose | Controls startup order | Provides legacy networking and environment aliases |
Networking | No networking configuration | Creates network alias (deprecated) |
Compose Version | Supported in v3+ | Deprecated after v3 |
Example | `depends_on: [db]` | `links: [db]` |
Example:
web:
depends_on:
- db
→ Ensures `db` starts before `web`. Note that `depends_on` controls start order only; it does not wait for the service to be ready (use healthchecks for that).
You can define named or anonymous volumes.
Example (Named Volume):
version: '3'
services:
db:
image: mysql
volumes:
- db_data:/var/lib/mysql
volumes:
db_data:
Example (Bind Mount):
services:
app:
volumes:
- ./code:/usr/src/app
✅ Volumes persist data even after containers stop.
You can define and attach containers to user-defined networks.
Example:
version: '3'
services:
web:
image: nginx
networks:
- frontend
db:
image: mysql
networks:
- backend
networks:
frontend:
backend:
Benefits:
- Containers can communicate by name within the same network.
- Isolates traffic between frontend/backend networks.
You can override environment variables in multiple ways (priority order):

1. CLI:
   APP_ENV=staging docker-compose up
2. Compose override file:
   docker-compose -f docker-compose.yml -f docker-compose.override.yml up
3. Environment variables in the shell: shell variables take precedence over `.env`.
4. Service-level overrides in `docker-compose.yml`.
By default, containers on different user-defined networks cannot communicate. To connect them:

- Attach the container to multiple networks:
  docker network connect <network> <container>
- Or define multiple networks in Compose:
  services:
    web:
      networks: [frontend, backend]
Docker includes a built-in DNS server that automatically maps container names → IP addresses.
Example:
If service `db` runs in the same network as `web`, the `web` container can connect using:
mysql -h db -u root -p
DNS is provided by Docker’s embedded DNS resolver, linked to the `docker0` bridge or the user-defined network.
Use:
docker network connect <network_name> <container_name>
Example:
docker network connect backend existing-container
You can verify:
docker inspect <container_name> | grep Network
User-defined bridge networks are custom, isolated networks where containers can communicate securely via DNS names.
Advantages:
- Container name-based DNS
- Network-level isolation
- Configurable subnets and IP ranges
Example:
docker network create my_bridge
Attach containers:
docker run -d --network my_bridge nginx
Docker’s embedded DNS server:
- Provides automatic name resolution for containers in user-defined networks.
- Each container can resolve other containers by service name.
- Listens inside each container at the address `127.0.0.11` (on the standard DNS port 53).
Example: `ping db` inside the `web` container resolves automatically to the `db` container IP.
If containers are in the same host:
- Internally (on Docker network): ✅ Works fine (each container has isolated networking).
- Externally (published with `-p`): ❌ Conflict error if you try to map the same host port twice.
Example:
docker run -p 8080:80 nginx
docker run -p 8080:80 httpd # Error: port already in use
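Only the host side of the mapping can collide. A small sketch that tracks published host ports the way the daemon must (`publish` and `USED_PORTS` are illustrative names, not Docker APIs):

```shell
#!/bin/sh
# Track published host ports; a second publish of the same host
# port is rejected, just as dockerd rejects it.
USED_PORTS=""
publish() {
  host_port="$1"; name="$2"
  case " $USED_PORTS " in
    *" $host_port "*)
      echo "ERROR: host port $host_port already in use" ;;
    *)
      USED_PORTS="$USED_PORTS $host_port"
      echo "published $host_port for $name" ;;
  esac
}

publish 8080 nginx   # -> published 8080 for nginx
publish 8080 httpd   # -> ERROR: host port 8080 already in use
publish 9090 httpd   # -> published 9090 for httpd
```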
Command | Purpose |
---|---|
`EXPOSE` | Documentation — tells which ports the container listens on. Does not publish the port externally. |
`-p` | Actively publishes and maps container port → host port. |
Example:
EXPOSE 80
vs
docker run -p 8080:80 nginx
Steps:

1. Check the network list:
   docker network ls
2. Inspect network details:
   docker network inspect <network>
3. Ping or curl from inside a container:
   docker exec -it <container> ping <other_container>
4. Verify iptables rules (Linux):
   iptables -L -n
5. Check DNS resolution:
   cat /etc/resolv.conf
Port forwarding maps a host port to a container port, allowing external access to a containerized service.
Syntax:
docker run -p <host_port>:<container_port> <image>
Example:
docker run -p 8080:80 nginx
→ Requests to `localhost:8080` are forwarded to port `80` inside the container.
Use multiple `-p` flags to publish several ports, or list them in a single `EXPOSE` instruction to document them.
Example (CLI):
docker run -p 80:80 -p 443:443 nginx
Example (Dockerfile):
EXPOSE 80 443
Example (Compose):
ports:
- "80:80"
- "443:443"
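Assembling the multi-port `docker run` command from a list of `host:container` pairs can be sketched as a dry-run helper (`publish_ports` is hypothetical; it prints the command rather than executing it):

```shell
#!/bin/sh
# Build a docker run command publishing several ports.
# Dry-run: prints the command; pipe into sh to execute.
publish_ports() {
  image="$1"; shift
  flags=""
  for pair in "$@"; do
    flags="$flags -p $pair"
  done
  echo "docker run -d$flags $image"
}

publish_ports nginx 80:80 443:443
# -> docker run -d -p 80:80 -p 443:443 nginx
```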