March 5, 2025
Why we love Docker
The topic of this article may seem a bit cliché in 2025, but believe it or not, we at the imgproxy team still meet plenty of people who prefer building imgproxy from source over using our ready-to-use Docker images. Whenever someone asks why we've chosen Docker as our primary distribution method, we have to explain it from scratch, so we decided to write this article to have a single place to refer to. Whether you received a link to it in response to your question or just stumbled upon it, here are our reflections on why we love Docker.
User: "I'm getting an error when I try to process this image. Can you help me?"
Support: "Sure! Which version of imgproxy are you using?"
User: "The latest one. I've built it from the master branch."
Support: "Oh, I see. Could you try our Docker image instead?"
User: "I just tried, and it works! Thanks!"
The "It works on my machine" problem
Every developer has faced this problem at least once in their career. You write some code, and it works perfectly on your machine, but when a colleague tries to run it, they get a bunch of errors. Or your CI pipeline fails, yet you can't reproduce the issue locally. Or even worse: your code works on your machine today, then tomorrow you update your OS and everything breaks. Rings a bell, right?
The reason differs from case to case: a different operating system, different versions of dependencies or tooling, or even different environment variables. But every time, it ends in time wasted on debugging and fixing the issue. That's why more and more teams are moving toward Docker or other containerization tools, even for local development.
Docker runs your application in an isolated environment. A Docker image contains everything your application needs to run: the operating system, the runtime, the dependencies, and the environment variables. So when you share your Docker image with colleagues, you can be sure they will get the same result as you.
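To make this concrete, here's a minimal Dockerfile sketch; the base image choice, the package, and the application name are made up for illustration:

    # Pin an exact OS snapshot so every run sees the same system
    FROM debian:bookworm-slim

    # Install the exact libraries the application needs
    # (libvips42 is just an example dependency)
    RUN apt-get update \
        && apt-get install -y --no-install-recommends libvips42 \
        && rm -rf /var/lib/apt/lists/*

    # Bake in the environment the application expects
    # (MYAPP_LOG_LEVEL is a hypothetical variable)
    ENV MYAPP_LOG_LEVEL=info

    # Ship the application itself
    COPY myapp /usr/local/bin/myapp
    CMD ["myapp"]

Everyone who runs this image gets the same OS, the same libraries, and the same environment variables, no matter what's installed on the host.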
The "It works on my machine" problem is particularly acute for developers working on software that runs on users' machines. Debugging issues occurring on machines that you can't access is a nightmare. To reproduce the issue, you need to replicate the user's environment as closely as possible, and even this doesn't guarantee success. The easiest way to avoid this is to ensure your users run your software in a perfectly tailored environment, and distributing your software as a Docker image is the most handy way to achieve this.
In imgproxy, we maintain a base Docker image that contains all the dependencies and tools needed to build imgproxy from source. We use this image during development, and we build our production images on top of it. This way, we can be sure that imgproxy users get the same result as we do.
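In Dockerfile terms, the pattern looks roughly like this. The image name and build steps below are simplified for the sake of the example; the real setup lives in our repositories:

    # The base image ships the toolchain and all prebuilt dependencies.
    # We develop against this very image...
    FROM ghcr.io/imgproxy/imgproxy-base:latest AS build
    WORKDIR /app
    COPY . .
    RUN go build -o /usr/local/bin/imgproxy .

    # ...and build the production image on top of the same base,
    # so users run against exactly the libraries we developed with
    FROM ghcr.io/imgproxy/imgproxy-base:latest
    COPY --from=build /usr/local/bin/imgproxy /usr/local/bin/imgproxy
    ENTRYPOINT ["imgproxy"]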
Managing dependencies
This problem partially overlaps with the previous one, and it's the most common source of issues for our users who decide to build imgproxy from source. Complex software like imgproxy usually has a lot of dependencies, and these dependencies can have dependencies of their own, and so on. Some of them have to meet specific version requirements, some need to be patched, and some may contain bugs or vulnerabilities that were only fixed in newer versions. You can't guarantee that your users' systems provide the required versions of all dependencies, and you definitely can't ask them to install patched versions of some libraries just for your software.
This is where containerization has your back. In imgproxy, we build the latest versions of all the necessary dependencies from source and pack them into our Docker images. We can even patch some of them if their released versions don't contain the required fixes yet. This way, our users don't have to bother with installing dependencies: they just run our Docker image, and everything works as expected.
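That's why the quickstart is essentially a single command, with no compilers, libraries, or config files to hunt down. Here we use our GitHub Container Registry image name as an example (imgproxy listens on port 8080 by default):

    docker pull ghcr.io/imgproxy/imgproxy:latest
    docker run -p 8080:8080 ghcr.io/imgproxy/imgproxy:latest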
Upgrades and rollbacks
Even if you don't upgrade your software often, you may still need to do it occasionally to get new features, bug fixes, or security patches. What's the usual routine for upgrading software? If the software is provided by your OS package manager, you just run apt-get upgrade (or something similar) and pray that nothing breaks. If the software is provided as a binary, you download the new version, replace the old one, update the dependencies, and pray again. If the software is built from source, you need to pull the latest changes, rebuild the software, update the dependencies, and, you guessed it, pray. Too much praying for a thing that should go smoothly, right?
With Docker, upgrading your software is just a matter of changing the tag of the Docker image you use. Nothing can break, even theoretically, because Docker images don't mess with your system and thus don't affect other software. Need changes that aren't released yet? No problem! Most dockerized software has Docker images built on every commit: just use the latest tag, and you're good to go. Afraid of using the latest tag? That's reasonable. Pin the image by its SHA-256 digest instead: digests are unique to every image, so you can be sure you're using the exact image you need.
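For example, here's what each option looks like with the Docker CLI (the version tag is illustrative, and the digest is a placeholder):

    # Track a specific release
    docker pull ghcr.io/imgproxy/imgproxy:v3.27
    # Or live on the bleeding edge
    docker pull ghcr.io/imgproxy/imgproxy:latest
    # Or pin the exact image by its digest
    docker pull ghcr.io/imgproxy/imgproxy@sha256:<digest>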
But the most important question is: what if something goes wrong? What if the new version contains a regression or a bug that breaks your application? What if the upgrade process itself messed something up? This is when your hands start to shake and your heart starts beating at the frequency of your CPU, especially if you were upgrading software on a production server. You need to undo everything in reverse order, and that's not always easy, or even possible. Let's hope you bothered to make a snapshot of your server before the upgrade.
With Docker, you can just change the image's tag back! That's it. Breathe slowly; you're safe. More than that, if you checked the new version on your local machine before upgrading the production server (you did, right?), you can be 99.9999% sure that the upgrade in production will go smoothly, thanks to the isolated environment Docker provides.
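A rollback is literally the same change in reverse. With the plain Docker CLI it might look like this (the container name and version tags are illustrative):

    # The new version misbehaves? Stop and remove it...
    docker rm -f imgproxy
    # ...and start the previous tag again
    docker run -d --name imgproxy -p 8080:8080 \
        ghcr.io/imgproxy/imgproxy:v3.26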
Cleaning the garage
Installing and removing software is a usual thing in our lives. We may want to test a new tool or check if some software could fit our project. However, new software usually brings new dependencies or requires new versions of existing ones, and it creates new files like configs, caches, and stuff like that. It's like bringing stuff to your garage: you bring a new thing, then a thing for that thing, then some parts for it, and so on. And when it comes to cleaning, you can't figure out what you can throw away and what you can't. We bet you have a box of cables, connectors, and weird parts of something, and you don't throw them away because you're not sure if that part is from the dishwasher you got rid of five years ago or from your favorite coffee machine.

The same happens with software: you can remove the software itself, but it will leave behind the libraries it used, configuration files scattered around the system, and some other stuff you can't even imagine. Commands like apt-get autoremove or brew cleanup help, but they don't solve the problem completely.
That's why we appreciate it when developers provide containerized versions of their software: we can download a Docker image that won't bring garbage to our systems, and remove it without leaving any traces.
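Cleaning up after a containerized tool takes a couple of commands (container and image names as in the examples above), and nothing else is left behind:

    # Remove the container and its image
    docker rm -f imgproxy
    docker rmi ghcr.io/imgproxy/imgproxy:latest
    # Optionally, reclaim everything unused: stopped containers,
    # dangling images, unused networks, and build cache
    docker system prune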
"But I don't want to install Docker just for your software!"
This is a valid point. Docker runs its engine in the background, and that engine consumes resources. Though Docker Engine comes with useful features, it may be overkill for some users. Luckily, you don't necessarily need Docker to run Docker images: Podman is a daemonless container engine for running OCI containers. It's compatible with Docker images and can run them in rootless mode.
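Since Podman's CLI largely mirrors Docker's, running our image with it looks almost identical (note that Podman prefers fully qualified image names):

    # No daemon, no root privileges required
    podman run -p 8080:8080 ghcr.io/imgproxy/imgproxy:latest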
If you don't like the idea of installing Podman either, you can consider it a single dependency for the software you want to run (and for all the containerized software you'll need to run in the future).
While Docker may look scary to those who aren't familiar with it, it brings a lot of benefits to both developers and users. Containerization solves many of the problems we face in software development and distribution, and it's a must-have tool in your toolbox. We hope this article helped you understand why we love Docker and why we use it as our primary distribution method. And if you've avoided Docker before, we hope we gave you enough reasons to give it a second thought.