Published 14 minutes ago
Samir Makwana is a technology journalist and editor from India with 18 years of experience. His work appears on MakeUseOf, HowToGeek, GSMArena, BGR, GuidingTech, The Inquisitr, TechInAsia, TechWiser, and others. He has written news, features, and gadget reviews for national technology media publications. His passion is helping people with their technology problems and gadget purchases. To that end, he has worked for some of the biggest international technology publications, covering news, explainers, how-to guides, listicles, and product-buying guides.
He has worked as an editor and managed teams since 2015. His expertise broadly covers computers, smartphones, game consoles, headphones, smart home products, browsers, and apps.
Docker Desktop offers the convenience of a single GUI-driven app for experimenting, testing, and deploying containers. It’s a no-nonsense tool for quickly spinning up containers on Windows and macOS. I started my self-hosting journey with Docker Desktop, and everything felt almost magical at first.
Running the same containers natively revealed their true nature. Containers are just processes that interact directly with the host operating system’s kernel. In that sense, native Docker felt like driving a manual car: more control, more engagement, and a better understanding of what’s happening under the hood. Docker Desktop, meanwhile, felt like an automatic car that simply gets me to the destination. And the difference between the two approaches turned out to be more significant than I initially expected.
Managing and configuring containers
Grasping the underlying controls
Docker Desktop relies on native virtualization technology (WSL 2 on Windows and Apple’s virtualization frameworks on macOS) to run Docker inside a Linux virtual machine. That extra layer made the underlying architecture hard to reason about. Even after checking the logs, I still couldn’t understand why some containers refused to start. The containers that did start weren’t visible in the operating system’s process list, which made troubleshooting a bewildering experience.
On Linux, native Docker interfaces directly with the Linux kernel, removing the extra virtualization layer entirely. Containers are isolated using namespaces, making each appear like a separate system, while control groups (cgroups) enforce limits on CPU and memory usage. Docker-related processes are inspectable with standard tools, and I often run ps aux | grep docker to see what’s running.
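A quick sketch of what that inspection looks like in practice. The container name web and the nginx:alpine image are placeholders, and the commands are guarded so the sketch is harmless on machines without a running Docker daemon:

```shell
# Inspecting a native container from the host with standard tools.
# "web" and nginx:alpine are placeholders for your own container.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    # cgroups enforce the resource cap passed on the command line
    docker run -d --rm --name web --memory=256m nginx:alpine
    # Container processes are ordinary host processes
    ps aux | grep '[n]ginx'
    # The host-visible PID of the container's init process
    docker inspect -f '{{.State.Pid}}' web
    docker stop web
else
    echo "Docker daemon unavailable; commands shown for illustration"
fi
```

The [n]ginx bracket trick keeps grep from matching its own command line in the process list.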
For a consistent self-hosted setup, it’s important to keep the containers running continuously. While Docker Desktop’s VM often made things sluggish, native containers on Linux ran with noticeably less overhead. And if something goes wrong, restarting Docker with a simple systemctl restart docker command is easier than trying to resurrect a slow desktop application along with its virtual machine.
Handling storage and volumes
Persistence is key
Docker Desktop makes storage management virtually invisible. Though I could select a base directory for all containers, the filesystem itself lived inside the virtual machine, so inspecting container data meant dropping into a shell through the desktop app. With native containers, storage became explicit: I created persistent directories that I could mount and inspect easily from the host system.
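Here’s a minimal sketch of that explicitness. The image and mount target are placeholders, and in a real setup I’d use a fixed path like /srv/appdata rather than a temp directory; the Docker commands are guarded for machines without a daemon:

```shell
# An explicit bind mount, inspectable straight from the host.
DATA_DIR="$(mktemp -d)"   # stand-in for a persistent path like /srv/appdata
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker run -d --rm --name app \
        -v "$DATA_DIR:/usr/share/nginx/html" nginx:alpine
    ls -l "$DATA_DIR"     # container data, visible directly on the host
    docker stop app
else
    echo "Docker daemon unavailable; commands shown for illustration"
fi
```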
When following a container’s documentation, I prefer defining volumes and bind mounts in Docker Compose rather than on the CLI. With Compose, tweaking permissions and mount points in the YAML file and restarting the container was easier than recreating everything from scratch. That’s also how I learned that /var/lib/docker is the primary location for all Docker-related data on a Linux host, and that tampering with the data stored there isn’t a good idea.
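As a sketch, the kind of Compose file I mean looks like this; the service name, image, and paths are hypothetical:

```yaml
services:
  app:
    image: nginx:alpine           # placeholder image
    volumes:
      - appdata:/var/lib/app      # named volume, stored under /var/lib/docker/volumes
      - ./config:/etc/app:ro      # bind mount, editable directly from the host

volumes:
  appdata:
```

Changing a mount point here and re-running docker compose up -d recreates only the affected container instead of rebuilding everything by hand.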
Dealing with networking
Direct access helps
The Docker Desktop app offers one-click ease through its virtualized bridges, but that convenience comes with architectural trade-offs. Since the containers run inside a hidden VM, the container network is isolated from the host OS (Windows or macOS). Any communication between the host and the containers needs port forwarding to bridge two separate network stacks. That worked for simpler setups, but it became difficult when configuring complex routing, iptables rules, firewall policies, or VPN traffic.
Dealing with containers natively meant the container’s network stack was shared far more directly with the host OS. I learned a lot about networking by defining bridges, working with iptables, and setting up inter-container communication. That knowledge helped me route container traffic through a dedicated VPN container, such as Gluetun, where visibility and control are important. In comparison, Docker Desktop feels opaque and slow for VPN-based container networking, since the extra virtualization layer bogs down the speed and adds friction.
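For illustration, here’s roughly what that wiring looks like on the CLI. The bridge name labnet is arbitrary, Gluetun’s required VPN credentials are omitted, and the commands are guarded for machines without a Docker daemon:

```shell
# A user-defined bridge plus a VPN-routed container, sketched.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    # Create a bridge and inspect it down to its subnet
    docker network create labnet
    docker network inspect labnet --format '{{(index .IPAM.Config 0).Subnet}}'
    docker network rm labnet
    # Gluetun needs provider credentials via -e flags (omitted here);
    # another container can then share its network namespace:
    #   docker run -d --name gluetun --cap-add=NET_ADMIN qmcgaw/gluetun
    #   docker run -d --network "container:gluetun" <image>
else
    echo "Docker daemon unavailable; commands shown for illustration"
fi
```

With --network container:gluetun, the second container has no network stack of its own, so every packet it sends exits through the VPN container.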
Hardening security and permissions
Learning real-world implications
Docker Desktop presents a simplified security context that masks many errors and failures behind the veil of a VM. Even though logs exist, diagnosing issues is difficult, and most problems end up being solved by restarting the containers or the entire app rather than digging down to the root cause.
In contrast, with native Docker, security and permissions became part of routine debugging. The Docker daemon and the containers’ processes run as root by default. Mapping container user IDs (UIDs) to the host’s UIDs reveals how permission mismatches occur: files appear with different ownership, and there’s always a risk of privilege escalation. Working through that gave me a much better perspective on Linux security, permissions, and container isolation.
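A small sketch of the fix I settled on; the image name myapp and the mount path are hypothetical:

```shell
# Run a container as the invoking host user instead of root, so files
# written to a bind mount get the right ownership on the host.
HOST_UID="$(id -u)"
HOST_GID="$(id -g)"
echo "Container will run as ${HOST_UID}:${HOST_GID} instead of 0:0 (root)"
# Placeholder invocation (image "myapp" and /srv/appdata are hypothetical):
#   docker run -d --user "${HOST_UID}:${HOST_GID}" -v /srv/appdata:/data myapp
```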
From clicking buttons to understanding containerization
Docker Desktop is beginner-friendly and friction-free, and it’s an excellent starting point for learning basic Docker commands and workflows. But it’s native Docker that exposes the real mechanics behind storage, networking, resource management, and security. If your goal extends beyond running containers to understanding how they work in a production environment, running native Docker develops the debugging skills and understanding that Docker Desktop can’t provide.
Source: Docker