Call me a crusty old fart, unwilling to embrace change… but Docker has always felt like a cop-out to me as a dev. Figure out what breaks and fix it so your app is more robust; stop being lazy.
I pretty much refuse to install any app which only ships as a docker install.
No need to reply to this, you don’t have to agree, and I know the battle has already been lost. I don’t care. Hmmph.
It eliminates the dependence-on-a-specific-distribution problem and, maybe more importantly, it solves the dependence-on-a-specific-distribution-version problem (i.e. working fine now but maybe not working at all later in the very same distribution, because some libraries are missing or the default configuration is different).
For example, one of the games I have in my GOG library is over 10 years old and has a native Linux binary, which won’t work in a modern Debian-based distro by default because some of the libraries it requires aren’t installed (meanwhile, the Windows binary will work just fine under Wine). It would be kinda deluded to expect the devs to keep updating the native Linux build (or even the Windows one) for over a decade, whilst if it had been released as a Docker app, that would not be a problem.
So yeah, stuff like Docker does have a reasonable justification when it comes to isolating from some external dependencies which the application devs have no control over, especially when it comes to future-proofing your app: the Docker API itself needs to remain backwards compatible, but there is no requirement that the Linux distros are backwards compatible (something which would be much harder to guarantee).
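To make that concrete, here’s a rough sketch of what a future-proofed release of that game could look like. The base image, package names and paths are all made up for illustration (and an EOL base would also need its apt sources pointed at the Debian archive these days):

# Freeze the userland the binary was actually linked against
FROM debian:8
# Illustrative package names for the libs the game expects
RUN apt-get update && apt-get install -y libsdl2-2.0-0 libopenal1
COPY ./game /opt/game
ENTRYPOINT ["/opt/game/start.sh"]

The host distro can then drop those libraries entirely and the image keeps working, because everything the binary loads comes from inside it.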
Mind you, Docker and its ilk are a bit of a hack to solve a systemic (cultural, even) problem in software development: devs don’t really do proper dependency management and just throw everything and the kitchen sink in terms of external libraries (which then depend on external libraries, which in turn depend on more external libraries) into the simplest of apps. But that’s a broader software-development-culture problem, and most present-day developers only ever learned the “find some library that does what you need and add it to the list of dependencies of your build tool” way of programming.
I would love it if we solved what’s essentially the core technical-architecture problem in present-day software development practices, but I have no idea how we can do so; hence the “hack” of things like Docker, of pretty much including the whole runtime environment (funnily enough, a variant of the old way of building your apps statically with every dependency) to work around it.
Docker is more than a cop-out for that one use case. It’s a way to quickly deploy an app irrespective of the environment, so you can scale and rebuild quickly. It fixes a problem that used to be solved by VMs, and in that way it’s more efficient.
Well, nope. For example, FreeBSD doesn’t support Docker, so I can’t run dockerized software “irrespective of environment”. It has to be run on one of the supported platforms, which unfortunately I don’t use.
A lack of niche-OS compatibility isn’t much of a downside. Working on 99.9% of all active OSes is excellent coverage for a software suite.
Besides, FreeBSD has Podman support, which is something like 95% compatible with Docker. You basically do have Docker support on FreeBSD, just with more friction.
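For what it’s worth, the drop-in feel is real for the common commands; the usual setup is something like this (package name as in the FreeBSD ports tree, sysutils/podman, and running Linux images additionally needs the emulation plumbing configured, which is the “harder” part):

pkg install -y podman
alias docker=podman
docker run --rm -it alpine sh # same CLI verbs, flags and image names as Docker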
How is POSIX a niche? 🤨
Just POSIX and no other compatibility? Pretty niche, man.
To deploy a Docker container to a Windows host you first need to install a Linux virtual machine (via WSL, which uses Hyper-V under the hood).
It’s basically the same process for FreeBSD (minus the optimizations), right?
Containers still need to match the host OS/architecture; they’re just sandboxed, and they layer in their own dependencies separate from the host.
But yeah, you can’t run them directly. Same for Windows, except I guess there are actual Windows Docker containers that don’t require WSL, but if people actually use those it’d be news to me.
There’s also this cursed thing called Windows containers
Now let me go wash my hands, keyboard and my screen after typing that
Yeah, I keep Discord in one so that it can’t hook my GPU and audio devices.
Well that’s where Java comes in /slaps knee
Why put in a little effort when we can just waste a gigabyte of your hard drive instead?
I have similar feelings about how every website is now a JavaScript application.
Yeah, my time is way more valuable than a gigabyte of drive space. In what world is anyone’s not today?
A gigabyte of drive space is something like 10-20 cents on a good SSD.
It’s a gigabyte of every customer’s drive space.
The value-add is even better from a customer’s perspective.
That I can install far less software on legacy devices because everything new is ridiculously bloated?
Don’t you get it? We’ve saved time and added some reliability to the software! Sure, it takes 3-5x the resources it needs and costs everyone else money, but WE saved time and can say it’s reliable. /s
3-5x the resources my ass.
Mine, on my 128gb dual boot laptop.
How many Docker containers would you deploy on a laptop? Also, 128 GB is tiny even for an SSD these days.
None, in fact, because I still haven’t gotten into using Docker! But that is one of the factors that pushes it down the list of things to learn.
I’ve had a number of low-storage laptops, mostly on account of low budget. Ever since taking an 8GB netbook for work (and personal) in the mountains, I’ve developed space-saving strategies and habits!
I love docker… I use it at work and I use it at home.
But I don’t see much reason to use it on a laptop? It’s more of a server thing. I have no docker/podman containers running on my PCs, but I have like 40 of em on my home NAS.
Yeah, I wonder if these people are just being grumpy grognards about something they don’t at all understand? Personal computers are not the use case here.
I hate that it puts package management in devs’ hands. The same devs that usually want root access to run their application and couldn’t do a vulnerability scan to save their lives. So now, rather than having the one up-to-date version of a package on my system, I may have 3 different old ones with differing vulnerabilities, and devs that don’t want to change it because “I need this version because it works!”
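For what it’s worth, the flip side is that those stale copies are at least easy to enumerate; a scanner like Trivy will list the CVEs baked into an image in one command (image name hypothetical):

trivy image someproject/app:1.4

That doesn’t fix the “I need this version because it works!” attitude, but it does make the three-old-copies problem visible instead of buried in the filesystem.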
If this is your take, your exposure has been pretty limited. While I agree some devs take it to the extreme, Docker is not a cop-out. It, like similar containerization platforms, is an invaluable tool.
Using devcontainers (Docker containers in the IDE, basically) I’m able to get my team developing in a consistent environment in mere minutes, without needing to bother IT.
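For anyone who hasn’t seen one, the whole thing is driven by a devcontainer.json checked into the repo; a minimal sketch (the image tag, extension and command here are just placeholders):

{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "customizations": {
    "vscode": { "extensions": ["ms-python.python"] }
  },
  "postCreateCommand": "pip install -r requirements.txt"
}

New hires open the repo, the IDE builds the container, and everyone is on the same toolchain.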
Using Docker orchestration I’m able to do a lot in prod, such as automatic scaling, continuous deployment with automated testing, and, in the worst case, near-instantaneous reverts to a previously good state.
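As one sketch of what that looks like day-to-day, if you’re on Docker’s built-in Swarm orchestration (service and registry names invented; Kubernetes has equivalents for each):

docker service scale web=10 # scale out under load
docker service update --image registry.example.com/web:v42 web # rolling deploy
docker service rollback web # revert to the previous spec in seconds

The point is that the revert is a metadata change, not a reinstall.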
And that’s just how I use it as a dev.
As a self-hosting enthusiast, I can deploy new OSS projects without stepping through a lengthy install guide listing various obscure requirements, and if I do want to skip the container (which I’ve only done for a few things) I can simply read the Dockerfile to figure out what I need to do, instead of hoping the install guide covers all the bases.
And if I need to migrate to a new host? A few DNS updates and SCP/rsync later and I’m done.
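That migration story mostly falls out of keeping all state on bind-mounted volumes, so the app itself stays disposable. A hedged sketch of the compose file (names and paths invented):

services:
  app:
    image: someproject/app:1.4
    ports:
      - "8080:8080"
    volumes:
      - ./data:/var/lib/app

Then moving hosts really is just rsync -a ./data newhost:/srv/app/, copying the compose file over, docker compose up -d on the new box, and flipping DNS.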
I’ve been really trying to push for more usage of dev containers at my org. I deal with so much hassle helping people install dependencies and deal with bizarre environment issues. And then doing it all over again every time there is turnover or someone gets a new laptop. We’re an Ops team though so it’s a real struggle to add the additional complexity of running and troubleshooting containers on top of mostly new dev concepts anyway.
So far I’ve helped my team of 5 get on them. Some other teams are starting as well. We’ve got Windows, Linux, and macOS developers running them on their work machines (for now), and the only container-specific issue we ever encounter is port conflicts, which are well documented, with easy-to-change environment variables to control them.
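Concretely, that just means the published ports in the compose file are parameterized, something like this (the variable name is our own convention):

services:
  api:
    build: .
    ports:
      - "${API_PORT:-8080}:8080"

so anyone with a clash sets API_PORT=8081 in their .env file and gets on with their day.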
The only real caveat right now is that we have a bunch of microservices, so their supporting services (redis, mariadb, etc.) end up running multiple times, and there is some performance loss from that. But they’re all designed to be independent, only talking to each other via their APIs, so the approach works.
…what do you mean by using dev containers? Are your people doing development on their host machine?
Mostly infrastructure as code, with folks installing software natively on their Windows hosts (Terraform, Ansible, PowerShell modules, but we also do some npm stuff too). I’m trying to get people used to running a container instead of installing things on their host, so I don’t have to chase people down when they run commands using the wrong version or something.
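The pattern I’m pushing is running the tool from a pinned image instead of a host install; for Terraform that’s roughly (version tag illustrative):

docker run --rm -v "$(pwd)":/work -w /work hashicorp/terraform:1.6 plan

so everyone executes the exact same binary no matter what’s rotting on their laptop.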
Agreed there – it’s good for onboarding devs and ensuring a consistent build environment.
Once an app is ‘stable’ within a docker env, great – but running it outside of a container will inevitably reveal lots of subtle issues that might be worth fixing (assumptions become evident when one’s app encounters a different toolchain version, stdlib, or other libraries/APIs…). In this age of rapid development and deployment, perhaps most shops don’t care about that since containers enable one to ignore such things for a long time, if not forever…
But like I said, I know my viewpoint is a losing battle. I just wish it weren’t used so much as a shortcut to deployment; good documentation of dependencies and configuration, and testing in varied environments, would be my preference.
And yes, I run a bare-metal ‘pet’ server so I deal with configuration that might otherwise be glossed over by containerized apps. Guess I’m just crazy but I like dealing with app config at one layer (host OS) rather than spread around within multiple containers.
The container should always be updated to match production. In a non-container environment every developer has to do this independently, but with containers it only has to be done once, and then the developers pull the update, which works like a git-style diff.
Best practice is to have the people who update the production servers be responsible for updating the containers, assuming they aren’t deploying the containers directly.
It’s essentially no different than updating multiple servers, except one of those servers is then committed to a local container repository.
This also means there are snapshots of each update which can be useful in its own way.
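In Docker terms those snapshots are just image tags, and the git-style diff the parent mentions is layer reuse: a pull only downloads layers that changed since the last one. Roughly (registry and tag names hypothetical):

docker build -t registry.example.com/base:2025-06 .
docker push registry.example.com/base:2025-06
# on each dev machine, only the changed layers come down:
docker pull registry.example.com/base:2025-06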
You know, all this talk about these benefits… when PHP has had this for ages, no BS needed.
I’ll see myself out.
Docker, and containers in general, provide isolation too, not just declarative image generation, and it’s all neatly packaged into one tool that isn’t that heavy on the system either. It’s not a cop-out at all.
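And the isolation is adjustable per container; a hedged one-liner showing how much you can strip away (alpine used just as a tiny test image):

docker run --rm --read-only --cap-drop=ALL --network=none alpine id

No capabilities, no network, a read-only filesystem, and the process still runs; the host never has to trust the app’s defaults.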
If I could choose, not for laziness but for reproducibility and compatibility, I would package software in only 3 formats:
Nix package
Container image
Flatpak
The rest of the native packaging formats are all good in their own way, but not as good. Some may have specific use cases that make them the best choice, like AppImage, soooo, as a result…
Yeah, no universal packaging format yet
I’m not a dev exactly, but I got my Linux skills using Slackware and I still have no problem compiling something if there is no package for it. In some cases I will use an AppImage (Cura), but for the most part I just install natively. I use Ubuntu but always start by eliminating snaps on any install, and it really doesn’t take that long.
It’s about predictable troubleshooting for a bug, not about whether you can install it. No doubt you can, but now the dev has to figure out what particular feature in your OS is causing the issue.
I had this recently: installed Distrobox, which is just a set of scripts, on Aurora. Could not --clone a container, no how. Blew the OS away and installed Fedora 41, which is what Aurora is derived from (except it’s rpm-ostree), and had no problem cloning a Distrobox. Closed the bug, as there was no point trying to figure out what went on there for some weird edge case of using a specific distro.
Aren’t you at all curious why it failed though? (If not, no harm no foul – I certainly know time diagnosing a bug is always in short supply, from personal experience). What if it’s a symptom of something important that might happen later even in Fedora 41?
Sometimes it just feels like containers are used as justification for devs to blow off bug reports. As a dev I want to understand why a failure occurs.
I’m curious enough, but this seemed like it was going to be hard to track down a fix, and I needed that laptop working for other things, and Aurora was being really flaky in other ways as well, so I just nuked it.
I’m happy to burn time debugging an issue for a project, but when I tried to track down a way to report the bug to Aurora, I didn’t find anything easily. And this promised to turn into a finger-pointing issue, so I moved on.
Which app? Because it all depends on how much access and permissions the app needs. Managing volumes or changing devices is usually the problem. So far I’ve only had to layer two apps (on Bazzite, though): VeraCrypt and Vorta, to access old backups. Everything else works fine, even desktop integration. Although I prefer to use BoxBuddy as a UI to handle Distrobox, which runs as a flatpak without problems. It’s been great so far at resolving that kind of issue: bug, update, now it works.
So I started out with a bug report to BoxBuddy, because that’s where I first found it. Then they said it was just running a

distrobox create --clone yaddayadda

so I then went to distrobox issues. Then, when I tried it in a dual-boot of Fedora 41 on that same machine and it worked, I went searching for the Aurora issue tracker with no luck, and then I got on with my life.

I agree that it’s a “cop-out”, but the issue it mitigates is not an individual one but a systemic one. We’ve made it very, very difficult for apps not to rely on environmental conditions that are effectively impossible to control without VMs or containerization. That’s bad, but it’s not fixable by asking all app developers to make their apps work in every platform and environment, because that’s a Herculean task even for a single program. (Just look at all the compatibility work in a codebase that really does work everywhere, such as vim.)
I love Docker. It of course comes with some inefficiencies, but let’s be real: getting an app to run on every possible environment, alongside any possible other app or configuration that could interfere with yours in some way, is hell.
In an ideal world, something like Docker is indeed not needed, but the past decades have proven beyond a doubt that, alas, we don’t live in that utopia. So something like Docker that just sets up a private environment for the app so that nothing else can interfere with it… why not? Anything I’ve got running on Docker is just so stable. I never have to worry that any change I make might affect those apps. Updating them is automated, …
Not wasting my time and the developers’ time in exchange for a bit of computer resources sounds like a good deal. If we find a better way for apps to be able to run on any environment, that would of course be even better, but we haven’t, so Docker it is :).
Fair enough… I admit I’m a bit of an old curmudgeon, set in my ways. :s
I don’t refuse to install dockerized software - but my system does. While for some people this might be unthinkable, not everyone runs Linux or some proprietary shit. There are many reasons to be unhappy with the trend.