Audience: Beginners, students, and IT / DevOps professionals
Purpose: A practical & conceptual Docker reference you'll actually revisit
Author tone: Instructor mindset, production-aware, interview-relevant
Why This Article Exists (Read This First)
Most Docker tutorials either:
- drown beginners in theory, or
- dump commands with zero context
This guide is written the way Docker is actually learned in the industry:
Understand the problem → understand the Docker concept → run the command → see why it matters later
Whether you're new to a DevOps, SRE, Cloud, or Platform role, or preparing for one, this is your foundation.
1. What is Docker & Containerization?
Containerization — The Core Idea
Containerization packages an application with everything it needs to run, but without virtualizing the entire OS.
Instead of:
VM → Guest OS → App
Containers use:
Host OS Kernel → Container → App
Why Containers Are Lightweight
- No separate OS per app
- Faster startup (seconds vs minutes)
- Less CPU & memory usage
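A quick way to feel this on any machine with Docker installed: time a throwaway container (the alpine image is only a few megabytes, so even the first pull is fast):
time docker run --rm alpine echo "hello from a container"
# typically completes in well under a second once the image is cached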
Docker in the Real World
Docker is used when:
- You deploy microservices
- You build CI/CD pipelines
- You run apps across multiple environments (dev → prod)
Docker Architecture
(Diagram: the Docker client talks to the Docker daemon, which builds and runs containers from images pulled from a registry.)
2. Docker Hub, Registry & Images
Docker images are stored and distributed using registries, which act as centralized repositories for container images. Docker Hub is the most common public registry, while cloud providers offer private registries for enterprise use. Images are built in layers, typically starting from a base image, and custom images are created by adding application code and configuration on top. Registries make it possible to share, version, and deploy the same image consistently across different environments.
Docker Registry (Think: Image Warehouse)
A registry stores Docker images, just like GitHub stores code.
Examples:
- Docker Hub (public)
- AWS ECR, Azure ACR, GCP Artifact Registry (private)
Base Image
A base image is the starting layer of your container. Common examples:
docker pull ubuntu              # pull the ubuntu base image from Docker Hub
docker pull node:20-alpine
docker pull python:3.12-slim
Industry tip: Always prefer alpine or slim images unless you need full OS features.
Custom Image
A custom image is an image you create to package your application in a repeatable, portable way. Think of it as:
Custom Image = Base Image + Application Code + Runtime Configuration.
Why Custom Images Matter in Real Systems
In real-world teams:
- Developers do not deploy containers manually
- CI/CD pipelines build images once
- The same image is promoted from dev → test → prod
This guarantees:
- Environment consistency
- Faster deployments
- Easy rollbacks
What Typically Goes Into a Custom Image
- Application binaries or source code
- Language runtime (Java, Node, Python, etc.)
- OS packages and libraries
- Application configuration defaults
Example Flow (How Teams Actually Use Custom Images)
- Developer writes application code
- Dockerfile defines how to package it
- CI pipeline builds the image
- Image is pushed to a registry
- Orchestrator (Swarm / Kubernetes) deploys it
Important distinction: Containers are disposable, images are immutable. You fix issues by building a new image, not by changing a running container.
3. Essential Docker Commands
Images
docker images
Lists all images on the Docker host.
docker pull <image>
Pulls the image from a registry.
Create & Run a Container
docker run ubuntu
Pulls the image if it is not available locally, then creates and runs a container from it.
Interactive Mode (Debugging & Learning)
docker run -it ubuntu
-i → interactive
-t → terminal
Detached Mode (Production Style)
docker run -d nginx
The container runs in the background — exactly how services run in prod.
Attach to a Running Container
docker attach <container_id>
Useful when troubleshooting startup issues.
Execute Commands Inside Container
docker exec -it <container_id> bash
Preferred over attach for production debugging.
Custom Container Name
docker run -d --name=<customName> nginx
Useful for referring to containers by a meaningful name.
Cleanup Commands
docker rm -f $(docker ps -aq)    # remove all containers
docker system prune              # aggressive: removes stopped containers, unused networks, dangling images & build cache
docker image prune               # safer: removes dangling images only
⚠ In real systems, pruning blindly can cause outages. Never run docker system prune on shared or production hosts without checking first: it removes cached images and networks that are not currently in use, and with the --volumes flag it removes unused volumes too.
Port Mapping (How Users Access Containers)
docker run -d -p 8080:80 nginx
Browser → Host:8080 → Container:80
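A minimal sketch to verify the mapping, assuming port 8080 is free on your host (the container name web is illustrative):
docker run -d -p 8080:80 --name web nginx
curl http://localhost:8080    # returns the nginx welcome page
docker rm -f web              # clean up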
4. Container Storage & Docker Volumes
Container storage is different from traditional servers because containers are ephemeral by design — when a container is removed, its data is lost. Docker volumes solve this by providing persistent storage that exists outside the container lifecycle. Volumes allow data to survive container restarts and redeployments, making them essential for databases, logs, and any stateful workload running in containers.
Why Storage Is Special in Containers
Containers are ephemeral. Delete container = data gone.
This is fine for:
- application binaries
- temporary files
But not for:
- databases
- user data
- logs
Docker volumes solve this problem.
What Is a Docker Volume?
A volume is Docker-managed persistent storage that lives outside the container lifecycle.
Container → Volume → Host storage
Deleting a container does not delete the volume.
Named Volumes (Production Standard)
docker volume create myvol
docker run -v myvol:/data ubuntu
Why use named volumes:
- Docker-managed
- Portable
- Safer than host path mounts
Best choice for databases and production workloads.
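A minimal sketch showing persistence in action (the volume name myvol matches the command above; the file name is illustrative):
docker volume create myvol
docker run --rm -v myvol:/data ubuntu bash -c 'echo persisted > /data/note.txt'
docker run --rm -v myvol:/data ubuntu cat /data/note.txt    # prints: persisted
# both containers are gone, but the data in myvol survives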
Bind Mounts (Developer Use)
docker run -v /host/app:/container/app httpd
Used for:
- Local development
- Live code changes
Trade-off:
- Tightly coupled to host
- Higher security risk
Bind mounts for dev, volumes for prod.
5. Dockerfile (How Images Are Actually Built)
What Is a Dockerfile?
A Dockerfile is a text file that defines how to build a Docker image in a repeatable and automated way. It is a repeatable recipe for building images.
Think of it as:
Infrastructure instructions for your application.
Every time you run docker build, Docker follows this file line by line to build an image.
Why Dockerfiles Matter
Without a Dockerfile:
- Builds are manual
- Environments drift
- Debugging becomes guesswork
With a Dockerfile:
- Builds are consistent
- Images are reproducible
- CI/CD pipelines become possible
In production, nobody builds images manually — Dockerfiles are mandatory.
How Docker Builds Images
Each instruction in a Dockerfile:
- Creates a new image layer
- Is cached if unchanged
- Makes rebuilds faster
Dockerfile → Image Layers → Final Image
Understanding layers helps you:
- Optimize build time
- Reduce image size
- Debug build failures
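A hedged illustration of how instruction order affects the cache, assuming a Python app (this is exactly the ordering the full example below uses): copy dependency manifests before application code, so the expensive install layer stays cached when only code changes.
FROM python:3.12-slim
WORKDIR /app
# dependencies change rarely: this layer is reused unless requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# code changes often: only the layers from here down are rebuilt
COPY . .
CMD ["python", "app.py"]
Reversing the order (COPY . . before the install) would force the install to rerun on every code edit.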
Common Dockerfile Instructions (Must-Know)
FROM – base image (required)
RUN – install packages or dependencies
COPY / ADD – copy files into image
WORKDIR – set working directory
ENV – environment variables
EXPOSE – document container ports
CMD – default startup command
ENTRYPOINT – fixed startup behavior
Simple Real-World Example
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
This image can now be:
- tested locally
- pushed to a registry
- deployed anywhere
One Critical Best Practice
Never change running containers. Always rebuild images.
If something breaks:
- Fix the Dockerfile
- Build a new image
- Redeploy
This principle is foundational to modern DevOps.
Build & Push Workflow
Creating a Dockerfile is only half the story. To actually use an image in real systems, it must be built, tagged, and pushed to a registry.
This workflow is how teams deliver applications.
Step 1: Build the Image
docker build -t myapp:v1 .
- Reads the Dockerfile in the current directory
- Executes instructions top-to-bottom
- Produces a local image
The . tells Docker to use the current directory as the build context.
Step 2: Authenticate with Registry
docker login
Required before pushing images to:
- Docker Hub
- Private registries (ECR, ACR, etc.)
Without login, docker push will fail.
Step 3: Tag the Image
docker tag myapp:v1 username/myapp:v1
Tags associate the image with:
- Registry namespace
- Repository name
- Version
Tags are how versions, rollbacks, and promotions are managed.
Step 4: Push to Registry
docker push username/myapp:v1
- Uploads image layers
- Makes the image available to other systems
- Enables deployments on any machine or cluster
How This Fits into CI/CD
In production:
- Builds happen in CI pipelines
- Images are pushed automatically
- Orchestrators pull images by tag
Code → Dockerfile → Image → Registry → Deployment
Key idea: You deploy images, not source code.
One Golden Rule
Never push latest in production. Always version your images.
This makes debugging, rollbacks, and incident recovery predictable.
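A hedged sketch of version-tagged delivery (the version numbers and names are illustrative):
docker build -t myapp:1.4.2 .
docker tag myapp:1.4.2 username/myapp:1.4.2
docker push username/myapp:1.4.2
# rollback = redeploy the previous known-good tag, e.g. username/myapp:1.4.1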
6. Docker Networking (How Containers Talk)
Docker networking controls how containers discover and communicate with each other without relying on fixed IP addresses. Docker provides built-in networking that handles routing, isolation, and name-based discovery, allowing containers and microservices to talk reliably even when they restart or move.
Default Bridge Network
- Automatically created (docker0)
- Containers can communicate via IP addresses
- DNS name resolution is limited and unreliable
- Container IPs can change on restart
- IP changes make it unsuitable for microservices
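You can see the IP-only behavior yourself; a minimal sketch assuming no custom networks (the container name web1 is illustrative):
docker run -d --name web1 nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' web1    # e.g. 172.17.0.2
# other containers on the default bridge must use that IP directly;
# resolving the name web1 from another container will fail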
Custom Bridge Network (Best Practice)
docker network create mynet
# sleep keeps busybox alive; without a long-running command the container exits immediately
docker run -d --network mynet --name api busybox sleep 3600
docker run -d --network mynet --name db busybox sleep 3600
docker network ls
docker inspect <network_id>
Why this matters:
- DNS-based service discovery
- Docker provides embedded DNS
- Containers communicate here using container names, not IPs
- IP changes do not break communication
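A quick check, assuming the api and db containers from the commands above are still running:
docker exec api ping -c 1 db    # resolves db by name via Docker's embedded DNS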
Host & None Networks
host → no isolation (container shares the host's network namespace & IP)
none → full isolation (container has no network interface and no IP)
Used rarely, but asked often in interviews.
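A minimal sketch to see the difference on a Linux host (alpine's busybox includes the ip tool):
docker run --rm --network host alpine ip addr    # shows the host's real interfaces
docker run --rm --network none alpine ip addr    # shows only the loopback interface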
7. Controlling Docker Image Size (Very Important)
Controlling Docker image size is critical for faster builds, quicker deployments, and reduced security risk. Smaller images pull faster across networks, start quicker, and contain fewer OS packages that could introduce vulnerabilities. Techniques like using slim or alpine base images, removing unnecessary dependencies, and leveraging multi-stage Dockerfiles help ensure images contain only what is required to run the application, nothing more.
Why Image Size Matters
- Faster pulls
- Faster deployments
- Smaller attack surface
Multi‑Stage Dockerfile (Production Pattern)
# Stage 1: build the binary
FROM golang AS build
WORKDIR /app
COPY . .
RUN go build -o app
# Stage 2: ship only the binary
FROM scratch
COPY --from=build /app/app /app
ENTRYPOINT ["/app"]
Result:
- No compiler
- No shell
- Only your binary
8. Docker Compose (Microservices on One Machine)
Docker Compose simplifies running multi-container applications on a single machine by defining services, networks, and volumes in a single YAML file. Instead of starting containers manually, Compose lets you bring up an entire microservices stack with one command, ensuring consistent networking and configuration across environments. It is commonly used for local development, testing, and small-scale deployments.
Instead of managing containers one by one, Compose allows you to describe:
- services — containers you want to run
- image / build — image source or Dockerfile
- ports — expose services to host
- volumes — persistent storage
- environment — configuration values
- depends_on — startup order
all in one declarative file (docker-compose.yml).
Think of Docker Compose as "docker run, but for microservices."
Why Docker Compose Exists
Without Docker Compose:
- You run multiple docker run commands
- You manually create networks and volumes
- Reproducing setups across machines is painful
With Docker Compose:
- One configuration file
- One command to start everything
- Same setup works for every developer
Sample Example
version: "3.9"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  api:
    build: .
    depends_on:
      - web
This starts:
- an Nginx web service
- an API service
- both connected automatically on the same network
Running a Compose Application
docker compose up -d
docker compose ps
docker compose logs
- Creates networks and volumes automatically
- Starts all services in correct order
- Runs everything in detached mode
To stop everything:
docker compose down
Networking in Docker Compose
By default:
- Compose creates a private bridge network
- Services communicate using service names
- No manual IP management required (Using service names, not IPs)
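A quick check, assuming the stack above is running and the api image ships a shell with ping:
docker compose exec api ping -c 1 web    # "web" resolves via the Compose network's DNS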
Use Docker Compose when:
- Running multiple services on one machine
- Developing microservices locally
- You don’t need cluster-level orchestration
Docker Compose is not a replacement for Kubernetes or Swarm — it’s a local orchestration tool.
9. Docker Swarm — Container Orchestration
Docker Compose works well on one machine. But real systems don’t stop there.
Once you have:
- multiple servers
- multiple replicas
- node failures
- rolling updates
you need container orchestration.
This is where Docker Swarm comes in.
What Is Docker Swarm?
Docker Swarm is Docker’s native container orchestration platform. It turns multiple Docker hosts into a single logical cluster.
From the user’s perspective:
You manage the cluster as if it were one machine, not many.
Swarm handles:
- scheduling containers
- service discovery
- load balancing
- failure recovery
All using familiar Docker commands.
Why Orchestration Is Necessary
Without orchestration:
- Containers die and stay dead
- Scaling is manual
- Traffic routing breaks
- Deployments cause downtime
With orchestration:
- Desired state is declared
- Swarm keeps enforcing it
- Failures are handled automatically
You don’t manage containers. You manage outcomes.
Swarm Architecture (Mental Model)
+------------------+
|  Manager Node    |  ← decision maker & can run containers
+--------+---------+
         |
         v
+------------------+
|  Worker Nodes    |  ← only run containers
+------------------+
Manager nodes
- Maintain cluster state
- Schedule services
- Use RAFT consensus
- Can also run containers
Worker nodes
- Run application containers
- Execute manager instructions
Initializing a Swarm Cluster
docker swarm init
docker swarm join-token manager
docker swarm join-token worker
# run the printed join command (with token) on worker / other manager machines
docker node ls
The docker swarm init command alone:
- creates a Swarm cluster
- promotes the node to manager
- enables overlay networking for microservices across a cluster
- sets up internal DNS and routing mesh
Other nodes join the cluster using a token, and nodes can later be removed:
docker swarm join --token xxxxxx
docker node rm <worker-name/id>
Services: The Core Unit in Swarm
In Swarm, you do not deploy containers directly. You deploy services.
docker service create --name web nginx
docker service ps web
A service defines:
- which image to run
- how many replicas
- which ports to expose (e.g. -p 8989:3000)
For example:
docker service create --name=web --replicas=4 nginx
Swarm decides:
- where containers run
- how traffic reaches them
Scaling Made Simple
docker service scale web=7    # increase from 4 to 7 containers
docker service scale web=2    # decrease from 7 to 2 containers
docker service rm web         # delete the service and all its containers
Swarm automatically:
- distributes replicas across nodes
- load balances traffic
- replaces failed containers
No manual container management required.
Rolling Updates & Versioning
Updating an application is done by updating the service, not replacing containers.
docker service update --image nginx:1.25 web
docker service update --image <pvt-registry> web
Swarm performs:
- rolling updates
- controlled replacement
- zero-downtime deployments
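A hedged sketch of a controlled rollout using Swarm's built-in update flags (the values are illustrative):
# replace 2 replicas at a time, waiting 10 seconds between batches
docker service update --update-parallelism 2 --update-delay 10s --image nginx:1.25 web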
If something goes wrong:
docker service rollback web
Rollback is a first-class feature, not an afterthought.
Service Port Mapping & Routing Mesh
When you publish a port:
docker service create --name=web -p 8080:80 nginx
Traffic can enter through any node in the cluster.
Swarm’s routing mesh:
- receives traffic
- forwards it to healthy containers
- load balances automatically
You don’t need external load balancers to get started.
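A quick way to see the mesh in action, assuming the service above is running (the node IPs are placeholders):
curl http://<manager-ip>:8080    # works even if no replica runs on this node
curl http://<worker-ip>:8080     # the routing mesh forwards to a healthy replica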
Fault Tolerance & Self-Healing
If:
- a container crashes
- a node goes offline
Swarm:
- detects the failure
- reschedules containers
- maintains desired replica count
This is what makes Swarm production-ready for small enterprise apps.
When to Use Docker Swarm
Docker Swarm is ideal when:
- You want orchestration with minimal complexity
- You already use Docker heavily
- You want production concepts without Kubernetes overhead
It’s also an excellent learning bridge to Kubernetes.
10. Swarm Failure Scenarios & Quorum (Very Important)
If the leader manager goes offline, the remaining managers elect a new leader, as long as quorum is maintained, and the cluster continues rescheduling containers.
Why Quorum Exists
Swarm managers use the RAFT consensus algorithm, so cluster decisions need majority agreement among managers. If quorum is lost: no scaling, no updates, and the cluster becomes effectively read-only.
Quorum Formula
Quorum = floor(N / 2) + 1, where N is the number of managers. For example, with N = 3 quorum is 2, so the cluster tolerates one manager failure.
Real Failure Scenarios
Managers | Can Lose | Result
---------|----------|-------------------------
1        | 0        | Single point of failure
3        | 1        | Safe
5        | 2        | Safer
📌 Production rule: Always run an odd number of managers.
11. Docker Security & Vulnerability Scanning
Containers make applications easier to deploy — not automatically secure.
A common misconception is:
“Containers are isolated, so security is handled.”
That assumption is dangerous.
Why Container Security Still Matters
Docker containers:
- Share the host kernel
- Include OS packages and libraries
- Ship third-party dependencies
This means vulnerabilities can exist inside images, even if the container itself is isolated.
Containers reduce risk — they do not eliminate it.
Image Vulnerabilities Travel with the Image
If an image contains:
- outdated OS packages
- vulnerable libraries
- insecure defaults
Every container created from that image inherits those risks.
That’s why image scanning is the first and most important security step.
Trivy — Image Vulnerability Scanner
Trivy is a widely used, open-source tool for scanning container images.
trivy image nginx
trivy image mycustomimage
Trivy detects:
- OS-level CVEs
- Language dependency vulnerabilities
- Severity levels (LOW → CRITICAL)
This allows teams to block risky images before deployment.
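A common CI gate, sketched with Trivy's severity filter (the image name is illustrative):
# fail the build (non-zero exit) if HIGH or CRITICAL vulnerabilities are found
trivy image --exit-code 1 --severity HIGH,CRITICAL mycustomimage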
Security Is a Lifecycle, Not a Tool
Effective container security spans multiple stages:
Stage     | Goal                         | Examples
----------|------------------------------|--------------------
Build     | Catch vulnerabilities early  | Trivy, Snyk
Registry  | Enforce image policies       | Private registries
Runtime   | Detect suspicious behavior   | Falco
App Layer | Test running services        | OWASP ZAP
The earlier you detect issues, the cheaper and safer they are to fix.
Practical Security Best Practices
- Use minimal base images (alpine, slim)
- Scan images during CI builds
- Avoid running containers as root
- Never hardcode secrets in images
Security improves by reducing what you ship, not by adding complexity.
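A minimal sketch of a non-root image, assuming a Debian-based Python base (the user and file names are illustrative):
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# create an unprivileged user and drop root before the app starts
RUN useradd --create-home appuser
USER appuser
CMD ["python", "app.py"]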
Final Thoughts
Docker is not optional anymore.
If you understand:
- Images
- Volumes
- Networks
- Compose
- Swarm fundamentals
You are ready to move to Kubernetes with confidence.
Next Article: Kubernetes Architecture — explained without buzzwords
If this helped you, clap 👏 — it tells Medium to show it to more learners.