Docker: Why we don't use it - and neither should you

Why we don't use Docker: we simply don't need it, or only barely need it.
Photo: Alexander Bobrov, Pexels.

What is Docker and what can you do with it? A brief explanation for anyone who would like to know more about Docker, or who would like to refresh their knowledge.

What is Docker


Docker is software aimed at developers and is frequently used to deliver applications as services. It is built on container technology. A container is not a virtual machine, but an isolated application: it runs a single piece of software such as the Nginx web server or the PostgreSQL database server.
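
To give a concrete impression, this is roughly what running such containers looks like on the command line. The image names and ports below are just the common public defaults, not a recommendation:

    # Pull and start the official Nginx image as an isolated container,
    # publishing container port 80 on host port 8080 (assumed to be free).
    docker run -d --name web -p 8080:80 nginx

    # The same idea for PostgreSQL; the POSTGRES_PASSWORD variable is required by that image.
    docker run -d --name db -e POSTGRES_PASSWORD=example postgres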

Why we prefer not to use Docker, unless


Docker clearly has its place in the world of developers and administrators. It can add value when managing multiple systems. An additional advantage is that containers can be moved quickly and easily from one system to another and back again.

Unfortunately, more and more stand-alone single-server software is being released as Docker-only, or will be in the near future. Adding Docker to a single-server setup is needlessly complicated: it adds an extra layer to debug when something fails. A simple single app such as an analytics service, an uptime monitor, or a relatively simple Nginx, MySQL, PHP (LEMP) stack doesn't really need Docker at all.

Also, in addition to the apps themselves, you need in-depth knowledge of Docker itself, so you will have to delve into yet another tool if you want to manage the system properly. For much of the target audience, that level of Docker knowledge simply isn't there yet.

Therefore, the single-app audience is usually not the one to go all in on Docker. In conclusion, why we don't use Docker: we simply don't need it, or only barely need it. Using it without a defined purpose is not our choice. We follow the principle of less is more, unless there is really no other option.

The speed factor: duplicated apps, dependencies and performance degradation


Docker containers are relatively small and require less server capacity (CPU, RAM, etc.) than a virtual machine running the same applications, even though they are still relatively large compared to the upstream apps installed directly, and they add yet another process running on the hardware server. On a physical server with limited resources, container processes may exhaust the operating system's CPU cores, RAM or free disk space, which can cause the host server to crash.
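
One common way to limit that risk is to cap what a container may consume. A minimal sketch using Docker's standard resource flags; the values are arbitrary examples, not advice:

    # Cap the container at 1 CPU core and 512 MB of RAM, with no extra swap,
    # so a runaway process cannot starve the host (limits shown are just examples).
    docker run -d --name web --cpus="1.0" --memory="512m" --memory-swap="512m" nginx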

It is also common to end up with multiple duplicated containers serving the same application and purpose. For example, several Dockerized applications may each ship with their own database container, resulting in two or more containers running the same SQL database server: one with MySQL, the other with the same MySQL software, just under a different container name, and so on. This creates unnecessary overhead and likely performance degradation.
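
A sketch of how that duplication typically looks in practice, assuming two hypothetical applications that each ship their own Compose file with a bundled database (the app image names are made up for illustration):

    # docker-compose.yml of hypothetical app 1
    services:
      app1:
        image: example/app1
        depends_on: [db]
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example

    # docker-compose.yml of hypothetical app 2: a second, fully separate MySQL instance
    # running the same software under a different container name.
    services:
      app2:
        image: example/app2
        depends_on: [db]
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example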

Then there is the size of the containers. A Fedora-based container with just the text editor Nano (Fedora 36, 694 KB) and the distributed version control system Git (Fedora 36, 54.6 KB) ends up at an astonishing 830 MB, grown by a factor of roughly 1,100 just to use some simple tools, because the additional dependencies are packed into each container over and over again. Since these tools and dependencies are usually already present on the host, you end up with a factor-two overhead on apps and dependencies when running one simple container on a single host, and this multiplies as more containers are added.
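
For illustration, the kind of Dockerfile that produces such an image could look like this; the exact resulting size will vary with the base image and package versions:

    # Build with: docker build -t fedora-tools .   Check the size with: docker images fedora-tools
    FROM fedora:36
    # Installs nano and git on top of everything the full Fedora base image already carries,
    # which is where most of the resulting image size comes from.
    RUN dnf install -y nano git && dnf clean all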

A root-based application


An isolated application sounds, on hearing the term, as if it is isolated and therefore secure. In reality, the Docker daemon runs its containers and container services as root by default, whereas directly installed software usually runs under its own non-root user.

In fact, in recent enterprise Linux releases such as Red Hat (root login over SSH) and Ubuntu (administration via sudo by default), the root user is disabled by default. There is a good reason for that: an exploited vulnerability can compromise your host.

With the Docker daemon running as root, an attacker almost always gets in at root level. It is possible to run Docker without root, but not every container supports this, since images are not always tested at non-root level (after all, it is not the standard). It may work quite well, but the question is whether it works optimally, and the follow-up question is whether you want to take that risk.
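
For completeness, two of the usual mitigations, sketched under the assumption of a recent Docker release; check the official rootless-mode documentation for your distribution before relying on either:

    # Run the container process under a non-root UID. Many images need an unprivileged
    # variant for this to work, e.g. nginxinc/nginx-unprivileged (listens on 8080)
    # instead of the stock nginx image, which expects to start as root.
    docker run -d --user 1000:1000 -p 8080:8080 nginxinc/nginx-unprivileged

    # Or run the Docker daemon itself in rootless mode; recent Docker releases ship a
    # setup script for this, with distribution-specific prerequisites.
    dockerd-rootless-setuptool.sh install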

Don't use Docker if you're looking for an easy solution


As indicated earlier, we don't recommend Docker if you are running a simple single-server app. But even for a large and complex application, implementing Docker involves significant additional cost and effort: building, maintaining and connecting the various individual containers across multiple servers takes a lot of time and work.

The Docker ecosystem is also quite fragmented, and not all containers are guaranteed to work together equally well. After all, most containers are developed by separate organizations, each with its own interests. There is quite some competition, with container maintainers eager to get to the top of the list, which can result in product incompatibility, or in vendor lock-in designed to shut out the competition.

A container is easy to read up on and easy to install, with just one command (low-hanging fruit). But then the adventure only begins. It brings a fair number of risks that you cannot all control or hedge against yourself, for example the developer changing course, on which your fate may then depend considerably.

Should that scenario occur, you can of course decide to develop the containers yourself. The downside is that this involves even more time, and thus cost.

Summing up


Just because we prefer secure and lightweight apps, and avoid Docker unless there really is no other option, doesn't mean you shouldn't use it. The choice is, of course, up to you as the end user. So many people, so many wishes, in the end.