Are you beginning Docker and finding it a little overwhelming? The commands can be unwieldy, and best practices are not clearly spelled out. I have three things I wish I knew when starting Docker, which may help you.
Large software projects like Docker often hide crucial best practices and warnings deep within technical documentation. Beginners face a deluge of technical details but few clear and concise guidelines to help them map out their learning path. When I started, I didn’t understand how to manage multiple dependent services, handle complex commands, or the precise dangers of running containerized processes as root. If these crucial details had been clearly outlined, I could have saved a lot of time and avoided potentially costly mistakes.
Not using Docker Compose for multiservice configurations
Docker is well-known for running single containerized services; you provide it with a command and the necessary options, and it will run your service. But what’s perhaps lesser-known to beginners is that it can coordinate multiple services together using Docker Compose. Docker Compose can make pulling and configuring one or more services much simpler, especially if they depend on each other.
Docker Compose is actually a subcommand of Docker, and it takes a YAML configuration file (called docker-compose.yaml) to specify the services. The following is a snippet from the official Gitea (a Git server) documentation:
# docker-compose.yaml
volumes:
  postgres-data:
  gitea-data:

services:
  server:
    image: docker.gitea.com/gitea:nightly
    volumes:
      - gitea-data:/data
    depends_on:
      - db
  db:
    image: docker.io/library/postgres:14
    volumes:
      - postgres-data:/var/lib/postgresql/data
The previous example uses the nightly (unstable, development) build of Gitea. It worked as of November 2025, but it may stop working with Postgres 14 at some point in the future; the YAML file is for demonstration purposes only.
It instructs Docker to create two services, where one “depends_on” the other. It isn’t functional as written, because it lacks essential options. The following example, however, is fully functional.
# docker-compose.yaml
networks:
  gitea:
    external: false

volumes:
  postgres-data:
  gitea-data:

services:
  server:
    image: docker.gitea.com/gitea:nightly
    container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=gitea
      - TZ=America/New_York
    restart: always
    networks:
      - gitea
    volumes:
      - gitea-data:/data
    ports:
      - "3000:3000"
      - "2222:22"
    depends_on:
      - db
  db:
    image: docker.io/library/postgres:14
    restart: always
    environment:
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=gitea
      - POSTGRES_DB=gitea
    networks:
      - gitea
    volumes:
      - postgres-data:/var/lib/postgresql/data
There are several things of note from that Compose file:
- The environment variables provide configuration options for each service—for example, creating credentials for the database and giving those credentials to Gitea.
- The Gitea service stores repository data in the /data directory within the container, and we mount that as a persistent volume on the host.
- The Postgres service stores data in the /var/lib/postgresql/data directory within the container, and we mount that as a persistent volume on the host.
A persistent volume stores data outside the container; on a Linux host, Docker keeps named volumes under /var/lib/docker/volumes by default.
Now, navigate to the directory containing the docker-compose.yaml file and execute docker compose up. Docker will then pull the necessary images and start both services, which will operate seamlessly together. Using Docker Compose simplifies the process of specifying a complex stack of applications, making it easier and more straightforward than writing a Bash script.
Running containerized processes as root
When I began using Docker, I just assumed containers were fully isolated; that running a rootful process inside a container meant it couldn’t do any harm. I was wrong.
By default, Docker containers share the host’s user namespace. If you run a root process inside a container, it has UID 0 on the host, albeit with a restricted set of capabilities. This still creates a serious security risk, because an attacker who exploits a vulnerability to break out of the container gains root access to the host machine.
Processes isolated to localhost are not entirely free from risk either. If your containerized applications process external data in any way (e.g., monitoring the network, reading documents, receiving traffic, etc.), then your system is at risk from malicious data and possible exploitation. At minimum, you should treat rootful containers like they’re root-level processes and avoid risky actions with them altogether.
The solution is to run Docker in rootless mode. If you follow the steps in Docker’s documentation, a root-level process inside a container is mapped to an unprivileged user on the host. The same applies to the Docker daemon itself; both run without real root privileges.
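As a rough sketch of what that setup looks like (the package names below are for Debian/Ubuntu; check Docker’s rootless-mode documentation for your distribution), the one-time install steps are shown as comments, and only the environment change is executed:

```shell
# One-time setup (Debian/Ubuntu package names; see Docker's rootless docs):
#   sudo apt-get install -y uidmap docker-ce-rootless-extras
#   dockerd-rootless-setuptool.sh install
#
# Afterwards, point the Docker CLI at the rootless daemon's per-user socket:
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/docker.sock"
echo "$DOCKER_HOST"
```

Adding the export line to your shell profile makes every subsequent docker command talk to the rootless daemon instead of the system-wide one.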
If you don’t want to do that, you should at least run containers with the --user flag or a USER directive in your Dockerfile. However, sometimes that’s insufficient, because your containerized process needs to modify protected areas within the container.
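For instance, a Dockerfile can drop privileges near the end of the build (a minimal sketch; the user name and UID here are my own choices, not a convention from any project):

```dockerfile
FROM alpine:3.20
# ... install your application as root here ...
RUN adduser -D -u 1000 app   # create an unprivileged user (BusyBox adduser syntax)
USER app                     # later instructions and the runtime process use this user
CMD ["sleep", "infinity"]
```

The same effect at run time, without touching the image, is docker run --user 1000:1000, though either way the process can no longer write to root-owned paths inside the container.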
I personally use Podman instead of Docker because it runs containers as unprivileged users by default. Podman serves as a drop-in replacement for most Docker workflows, since its CLI mirrors Docker’s, and there is also a Podman Desktop. I highly recommend it.
Not using a UI, shell completions, or aliases
Docker commands can be lengthy, and if you’re like me, you execute them often. That leads to a scenario where typing dozens of commands interrupts your workflow.
Take, for example, the Pi-hole command (as suggested by the official documentation):
docker run \
  --name pihole \
  -p 53:53/tcp \
  -p 53:53/udp \
  -p 80:80/tcp \
  -p 443:443/tcp \
  -e TZ=Europe/London \
  -e FTLCONF_webserver_api_password="correct horse battery staple" \
  -e FTLCONF_dns_listeningMode=all \
  -v ./etc-pihole:/etc/pihole \
  -v ./etc-dnsmasq.d:/etc/dnsmasq.d \
  --cap-add NET_ADMIN \
  --restart unless-stopped \
  pihole/pihole:latest
In reality, to execute that command, you would use either Docker Compose or a systemd service. However, Docker commands are often unwieldy, and there’s no denying that.
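That long command translates almost line for line into a Compose file. The following is a sketch built only from the flags shown above; verify it against Pi-hole’s own documentation before relying on it:

```yaml
# docker-compose.yaml
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"
      - "443:443/tcp"
    environment:
      - TZ=Europe/London
      - FTLCONF_webserver_api_password=correct horse battery staple
      - FTLCONF_dns_listeningMode=all
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
```

Once the flags live in a file, starting Pi-hole is just docker compose up -d, and the configuration is documented and version-controllable for free.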
To manage these lengthy commands, we have three options at our disposal: a user interface (UI), shell completions, and aliases.
A UI is my personal favorite. Docker Desktop is the usual recommendation, but I prefer a terminal UI called lazydocker. Both are great choices, and Docker Desktop is the better fit for most people. However, if you rely heavily on keyboard shortcuts, give lazydocker a shot.
Still, there will be times when you need to execute Docker commands directly. Shell completions make it simple to pick not only options but also container names: type your Docker command and press the Tab key to get a list of candidates to choose from.
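Recent Docker CLI versions can generate their own completion script via the docker completion subcommand; the install path below assumes bash-completion’s per-user directory on Linux, and the docker call is guarded so the snippet is a no-op if Docker isn’t installed:

```shell
# Create bash-completion's per-user completions directory if it doesn't exist
mkdir -p "$HOME/.local/share/bash-completion/completions"

# Generate Docker's bash completion script (skipped if the CLI is absent)
if command -v docker >/dev/null 2>&1; then
  docker completion bash > "$HOME/.local/share/bash-completion/completions/docker"
fi
```

Open a new shell afterwards and Tab completion should work for subcommands, flags, and container names alike.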
Aliases are another great choice, both inside and outside a container. You can define Docker aliases on your host, or define aliases for services inside a container; the former shortens Docker commands, while the latter lets you run a container’s tools more expressively.
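On the host side, a few aliases in ~/.bashrc or ~/.zshrc go a long way. The names below are my own conventions, not a standard:

```shell
# Shorten frequently typed Docker commands (alias names are arbitrary)
alias d='docker'
alias dc='docker compose'
alias dcu='docker compose up -d'
alias dcd='docker compose down'
alias dl='docker logs -f'
alias dps='docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"'
```

With these in place, bringing a stack up becomes dcu and tailing a service’s logs becomes dl gitea, which adds up quickly over a day of container wrangling.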
Docker isn’t as straightforward as people think. It’s great at packaging up a tool or service, but it comes with pitfalls and can be unwieldy to use. When I began using Docker, these tips were not immediately available to me, so I’m making them available to you. Essentially:
- Use Docker Compose for complex configurations.
- Ensure you run processes as a limited user.
- Use tools to make Docker administration easier.