5 Docker Secrets Even Senior Developers Don't Know

Namespaces, cgroups, OverlayFS, filesystem benchmarks, and why Docker Desktop is a VM in disguise — demystifying the engine room of containerization.

  • Docker
  • Linux
  • DevOps
  • Security
  • Performance

Table of Contents

  1. It’s Not a Mini-VM; It’s a Souped-Up Linux Process
  2. Docker on Your Mac or PC is a Clever Illusion
  3. Image Layers Are Like Overhead Projector Transparencies
  4. Your Choice of Filesystem Drastically Affects Performance (and the Winner Might Surprise You)
  5. Docker’s Defaults Can Be Dangerously Permissive
  6. Conclusion: Beyond the Magic

For years, Docker has been an indispensable part of my workflow. Like most developers, I initially treated it as a magic box—the ultimate fix for dependency conflicts and the end of the “it works on my machine” excuse. But true mastery requires looking under the hood. My curiosity pushed me beyond standard commands and into the engine room: deciphering dense documentation, dissecting image layers, and exploring the Linux primitives that make it all possible, like cgroups, namespaces, and the overlay filesystem.

What I discovered is that Docker isn’t magic at all. A significant number of senior developers who leverage Docker daily are unfamiliar with these core mechanics. This article demystifies the containerization giant, revealing the elegant principles you’ve been using all along.

1. It’s Not a Mini-VM; It’s a Souped-Up Linux Process

One of the most persistent misconceptions about Docker is that a container is just a “lightweight virtual machine.” While this provides a rough mental model, the reality is fundamentally different and is the key to Docker’s efficiency. A virtual machine (VM) runs its own complete guest operating system, virtualizing the underlying hardware through a hypervisor. In contrast, a Docker container shares the kernel of the host operating system.

This is made possible by the clever application of two core Linux kernel technologies, orchestrated by a low-level container runtime called runc:

  • Namespaces: This feature provides resource isolation. Namespaces wrap a set of system resources and present them to a process to make it look like it has its own isolated instance. For example, the PID namespace isolates process IDs, the Network namespace isolates network interfaces and IP stacks, and the Mount namespace provides a separate view of the filesystem hierarchy.
  • Control Groups (cgroups): This is the other half of the equation. While namespaces provide isolation, cgroups govern resource allocation and limitation. They allow Docker to manage and limit the CPU, memory, and disk I/O available to a group of processes, ensuring that one container can’t monopolize host resources and starve others.
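Neither primitive is a Docker invention: every Linux process already belongs to a set of namespaces and a cgroup, which you can inspect directly under /proc on any Linux machine, no Docker required:

```shell
# Every process has namespace handles under /proc/<pid>/ns; "self" is this shell.
ls /proc/self/ns
# And every process is accounted to a cgroup hierarchy:
cat /proc/self/cgroup
```

Docker simply creates fresh instances of these for each `docker run`; flags like --memory and --cpus translate directly into cgroup limits.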

This distinction is critical. By sharing the host kernel and using namespaces and cgroups for isolation and resource management, a container avoids the overhead of booting a full operating system. It runs as a discrete, sandboxed process, with little more overhead than any other process on the host, which is precisely why containers are so much more lightweight and resource-efficient than VMs.
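You can watch this “container is just a process” equivalence directly. The sketch below assumes a Linux host with Docker installed; the image and container names are illustrative:

```shell
# Start a resource-limited container in the background.
docker run -d --rm --name demo --memory 256m --cpus 0.5 nginx:alpine

# Its main process appears in the host's process table like any other:
docker top demo

# The PID below is a plain host PID; its namespaces are visible under /proc.
pid=$(docker inspect --format '{{.State.Pid}}' demo)
ls -l "/proc/$pid/ns"

docker stop demo
```

Compare /proc/$pid/ns with /proc/self/ns: the container’s handles point to different namespace instances, and that difference is the entire “boundary” of the container.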

2. Docker on Your Mac or PC is a Clever Illusion

Here’s a fact that surprises many developers: Docker, which is fundamentally built on Linux-exclusive kernel features like namespaces and cgroups, cannot run natively on macOS or Windows. So how is it possible that you can seamlessly run docker run commands on your non-Linux machine?

The answer is a brilliant piece of engineering that creates a clever illusion. To make Docker work on macOS and Windows, Docker Desktop installs and runs a lightweight Linux virtual machine behind the scenes (via WSL 2 or Hyper-V on Windows, and the Virtualization framework on macOS). The Docker commands you type into your terminal aren’t executing directly on your host OS; they are passed to and executed inside this hidden VM.

This isn’t just trivia; it’s a crucial diagnostic clue. While the developer experience is designed to be seamless, you are not running containers “natively.” This underlying virtualization layer is why some deep networking configurations or filesystem interactions can sometimes behave differently than they would in a pure Linux environment. It’s a testament to the Docker team’s work that this complex abstraction feels so simple to the end-user.
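You can catch the illusion in the act: on a Mac or Windows machine, the Docker client and the Docker daemon report different operating systems (assuming Docker Desktop is installed; exact output varies):

```shell
# The client runs on your host OS...
docker version --format 'client: {{.Client.Os}}/{{.Client.Arch}}'
# ...but the daemon always reports linux, because it lives inside the hidden VM:
docker version --format 'server: {{.Server.Os}}/{{.Server.Arch}}'
docker info --format 'engine OS: {{.OperatingSystem}}'
```

On Docker Desktop the server OS is always linux even when the client says darwin or windows, which is the hidden VM announcing itself.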

3. Image Layers Are Like Overhead Projector Transparencies

A Docker image is not a single, monolithic file. It is a composite structure made up of multiple, immutable, read-only layers stacked on top of each other. The best way to visualize this is to think of the transparent sheets used with an old-school overhead projector.

A union filesystem (OverlayFS, in Docker’s case) works just like this analogy. You start with a base sheet (the base image layer, like ubuntu). Each subsequent instruction in your Dockerfile (like RUN apt install or COPY . .) adds a new transparent sheet on top of the stack. When you view the final image, you see a single, coherent filesystem, even though it’s composed of many distinct layers.
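You can inspect the stack of sheets for any image with docker history, which lists one row per layer alongside the Dockerfile instruction that created it (image name illustrative; requires Docker):

```shell
# Pull an image and list its layers, newest on top.
docker pull ubuntu:22.04
docker history --format 'table {{.CreatedBy}}\t{{.Size}}' ubuntu:22.04
```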

When you run a container from this image, a final, writable layer is added on top. If you try to modify a file that exists in a lower, read-only layer, the filesystem performs a copy-on-write (CoW) operation. The file is copied from the read-only layer up to the top writable container layer, and your changes are applied to this new copy. The original file in the lower layer remains untouched. Technically, this happens using directories like a read-only lowerdir for the base layers and a writable upperdir for the changes, which are then presented as a single merged directory to the container.
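The lowerdir/upperdir mechanics are plain kernel functionality you can reproduce without Docker. A minimal sketch, assuming a Linux host and root privileges (directory names are arbitrary):

```shell
# Build a two-layer stack by hand.
mkdir -p lower upper work merged
echo "from the read-only layer" > lower/greeting.txt

# Mount the overlay: lower plays the image layer, upper the writable layer.
mount -t overlay overlay \
  -o lowerdir=lower,upperdir=upper,workdir=work merged

cat merged/greeting.txt        # the lower file shows through the merged view
echo "changed" > merged/greeting.txt
ls upper/                      # copy-on-write: the modified copy landed in upperdir
cat lower/greeting.txt         # the original layer is untouched

umount merged
```

This is exactly what Docker does at container start, just with its layers stored under /var/lib/docker.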

This layered architecture is a game-changer for efficiency. Because layers are immutable, they can be cached and shared between multiple images. If you have ten images that all use the same ubuntu base, that base layer is only stored once on your disk. When you pull or build an image, you only need to download or rebuild the layers that have changed, making image distribution and build times incredibly fast.

4. Your Choice of Filesystem Drastically Affects Performance (and the Winner Might Surprise You)

While Docker can run on any modern Linux filesystem, the performance of your containers—especially for I/O-intensive applications—can vary dramatically based on the underlying “backing filesystem” of the host machine. You might assume that a newer, more feature-rich filesystem would automatically provide the best performance, but the data suggests otherwise.

In an academic study that conducted an in-depth performance evaluation of Docker on Ext4, XFS, and Btrfs across several server workloads (web server, mail server, file server), one filesystem consistently delivered the best performance.

The surprising winner was Ext4. This is counter-intuitive for many, as Ext4 is an older and more traditional filesystem compared to a newer, native copy-on-write filesystem like Btrfs. Yet, for many common workloads, its simpler and time-tested design proved superior. For a DevOps team, this is a powerful reminder: benchmark your actual workloads. Don’t chase the new, trendy technology without data, especially when it comes to a foundational layer like the filesystem.
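Before benchmarking, it helps to know what your own daemon is actually running on. The storage driver and its backing filesystem are reported by docker info (requires Docker; fields vary by driver):

```shell
# Which storage driver is in use (overlay2 on most modern installs)?
docker info --format 'storage driver: {{.Driver}}'
# Driver details, including the "Backing Filesystem" entry:
docker info --format '{{json .DriverStatus}}'
```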

The study’s abstract summarizes the key conclusion:

The experimental results indicate that Ext4 is the most optimal filesystem, among the considered filesystems, for the considered experimental settings. In addition, the XFS filesystem is not suitable for workloads that are dominated by synchronous random write components (e.g., characteristical for mail workload), while the Btrfs filesystem is not suitable for workloads dominated by random write and sequential write components (e.g., file server workload).

This demonstrates a critical lesson: “newer” doesn’t always mean “better,” and performance is highly dependent on the specific workload. For many common, write-heavy applications, the reliable choice often turns out to be the fastest.

5. Docker’s Defaults Can Be Dangerously Permissive

Docker is an incredibly powerful tool, but its design often prioritizes a frictionless “it just works” experience. This creates a tension between convenience and control, where some powerful features have defaults that can lead to significant security vulnerabilities if used without understanding the implications. It is the operator’s responsibility to be vigilant and override these convenient but insecure defaults.

Two critical examples stand out:

  • The --privileged flag: Running a container with this flag is not a minor permission increase. As the official documentation states, it “gives all capabilities to the container… and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do.” This flag effectively breaks the isolation boundary between the container and the host. It should be avoided unless you have a very specific use case, such as running Docker inside of Docker, and you must treat that container with the same caution as any other root process on your host.
  • Publishing ports (-p): There is a common and dangerous misunderstanding about port publishing. By default, a command like docker run -p 8080:80 binds port 8080 on all of the host’s network interfaces (0.0.0.0), not just on localhost. Any machine that can reach your host over the network can reach that service, and because Docker programs its own iptables rules, the mapping can even bypass host firewalls such as ufw. To map a port for host-only access, you must explicitly bind it to the loopback address: -p 127.0.0.1:8080:80.
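A quick way to see the difference between the two bindings on a Linux host with Docker installed (image name illustrative):

```shell
# Default publish: bound on all interfaces.
docker run -d --rm --name web-open -p 8080:80 nginx:alpine
docker port web-open           # bound to 0.0.0.0:8080 (and often [::]:8080)

# Loopback-only publish: reachable from this machine only.
docker run -d --rm --name web-local -p 127.0.0.1:8081:80 nginx:alpine
docker port web-local          # bound to 127.0.0.1:8081

docker stop web-open web-local
```

The same vigilance applies to capabilities: prefer granting the specific ones you need with --cap-add over reaching for --privileged.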

The key takeaway is that Docker’s convenience features can have serious security implications. It is critical to understand what commands are actually doing under the hood rather than simply copying and pasting them from tutorials.

Conclusion: Beyond the Magic

Understanding these truths—that a container is an isolated process, that Docker Desktop relies on a hidden VM, that images are hyper-efficient layered filesystems, that performance hinges on fundamentals, and that convenience can mask security risks—transforms you from a user into an architect.

These insights don’t just make you a better Docker user; they make you a more thoughtful systems engineer who understands the elegant principles underlying one of modern development’s most essential tools.
