I’ve heard of immutable OSes like Fedora Silverblue. As far as I understand it, this means that “system files” are read-only, and that this is more secure.
What I struggle to understand is, what does that mean in practical terms? How does installing packages or configuring software work, if system files can’t be changed?
Another thing I don’t really understand is: what are the benefits as an end user? What kinds of things can I do (or can be done by malware or someone else) to my Arch system that couldn’t be done on an immutable system? I get that there’s a security benefit just in that malware can’t change system files – but that is achieved by proper permission management on traditional systems too.
And I understand the benefit of something declarative like NixOS or Guix, which are also immutable. But a lot of OSes seem to be immutable but not purely declarative. I’m struggling to understand why that’s useful.
An immutable distro, to my understanding, locks core components of Linux (mainly /usr, afaik) against changes not only by bad actors but also by the user, so that you can’t fuck up your system the way Linus from LTT did (removing X11 by forcefully ignoring all warnings). Applications can be installed as Flatpak, AppImage, Snap, or through OverlayFS from regular repositories.
Advantages for (non-tech-savvy) users are an additional layer of protection against their own mistakes, and easier support since the important stuff is identical on every install of a given distro.
This is, IMO, the biggest yet least obvious advantage of immutable systems. A traditional Linux environment is “just as safe” as the immutable setups, but only if the user/administrator is perfect, never makes a mistake, and always makes the right decisions for now and the future.
Given that reality tends to differ from the above, having a system that, at a bare minimum, gives you an “oh shit, go back” button for system-level changes, and at best provides a clear, reproducible trail of actions, is a huge advantage for long-term stability for all users, experienced or not. I’ve been through the school of hard knocks far too many times maintaining everything from server setups to gaming desktops the traditional way, and have committed to “early adopting” immutable distros for pretty much everything except the gaming setup (given the suite of proprietary, out-of-date/out-of-touch applications that are basically necessary in that space and not fully compatible with the sandboxes and abstraction layers involved).
Applications are installed with Flatpak - basically little containers that hold everything a program needs, sort of like Docker.
So normally, if you want to install something - let’s say Minecraft - you would also need to install Java. The flatpak for Minecraft has Java inside of it, so it runs in its own little container and you don’t need to install Java separately.
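For a rough picture of what that looks like on the command line (the app ID below is just an example of a Minecraft launcher that ships its own Java runtime):

```
# add the Flathub remote once, if it isn't configured already
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
# install and run an app; its dependencies come bundled with it
flatpak install flathub org.prismlauncher.PrismLauncher
flatpak run org.prismlauncher.PrismLauncher
```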
Doesn’t that lead to huge redundancy – say, multiple copies of Java effectively existing on the system? And also to software not optimized for the system (I assume Flatpaks are pre-compiled)?
You’re right - having multiple copies of everything is a drawback of housing each application in its own container or VM. The standard rejoinder is that disk space is cheap. The validity of that rejoinder depends on what you’re doing and what hardware or budget you are working with.
Another problem is that old versions of these dependencies will be baked into an image that is then used for many years without updates. This ensures the application keeps working without being disrupted by an update to a shared library, but it also means things like security flaws persist. Arguably, this is mitigated by only that image having the problem, but one insecure app can be a real problem - especially when it accesses shared resources - and when the same problem applies to many applications.
Compiled code optimized for a specific system’s hardware is less relevant than it used to be - even Gentoo users do not focus on this anymore. Rolling your own container isn’t much harder than compiling with your own options.
Flatpaks also have the added benefit that things just work. Many distros have problems with codecs, for example, and you need to install extra packages to get video working in Firefox. The Flatpak version doesn’t require any of this; you can just install it and move on with your life. Yes, dependencies are sometimes “redundant”, but you get a really clean base system without hundreds or thousands of lib or dev packages. Also, sometimes you need a specific version of a dependency - say you need to update it for compatibility with one package, but that breaks another which needs the older version; bundling sidesteps that. The system stays especially clean when you use the toolbox utility for dev environments (available on other distros as distrobox, I think), sketched below.
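A minimal sketch of that toolbox workflow, assuming a Fedora-based system (the container name is arbitrary):

```
# create and enter a disposable dev container
toolbox create mydev
toolbox enter mydev
# packages installed here stay inside the container, not on the base system
sudo dnf install gcc make
```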
I guess what I am trying to figure out is – how would the experience of using flatpak or other containerized software managers differ on an immutable system compared to a mutable one?
Or is the idea more that since you’re containerizing, you can lock everything else for stability in a way that you couldn’t before, because software installs needed to be installed in the system?
Or is the idea more that since you’re containerizing, you can lock everything else for stability in a way that you couldn’t before, because software installs needed to be installed in the system?
It’s this one. For example, with Silverblue, applications are all installed as flatpaks. The system-level files are also made as read-only as possible, such that the base system should look virtually identical across installs. You also get systems like NixOS, where most of the system setup is defined in a single config file. You can effectively copy your entire setup just by transplanting that file onto a new system.
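A minimal sketch of what “transplanting that file” can look like in practice (“newhost” is a placeholder, and hardware-configuration.nix stays machine-specific):

```
# copy the declarative config to the new machine and rebuild it there
scp /etc/nixos/configuration.nix root@newhost:/etc/nixos/configuration.nix
ssh root@newhost nixos-rebuild switch
```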
How does installing packages or configuring software work, if system files can’t be changed?
On reboot. You install your changes into a separate part of the filesystem that isn’t currently running and then “switch parts” on the next boot. Different distros do this differently: Vanilla OS has an A/B system that basically works the way Android does it, openSUSE uses btrfs snapshots, and Fedora uses OSTree, with a more complex layering system (rpm-ostree) on top.
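On Fedora’s side, the day-to-day flow looks roughly like this (the package name is just an example):

```
# layer a package into a new deployment; it only takes effect after a reboot
rpm-ostree install htop
# show the booted deployment and the pending one
rpm-ostree status
systemctl reboot
# if the new deployment misbehaves, go back to the previous one
rpm-ostree rollback
```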
I get that there’s a security benefit just in that malware can’t change system files – but that is achieved by proper permission management on traditional systems too.
Is it, though? All it takes is a misconfiguration or an exploit to bypass it, so having several layers of protection isn’t a bad thing and is how any reasonably secure system works. And having parts of your system predetermined as read-only is a comparatively tough nut to crack.
yeah that’s true, even properly permissioned users can break their systems
especially properly permissioned users, even.
On reboot.
Not necessarily. NixOS has the option to change generations without rebooting.
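For example, roughly:

```
# build and activate the new generation immediately, no reboot needed
sudo nixos-rebuild switch
# or stage it so it only becomes active on the next boot
sudo nixos-rebuild boot
# and step back to the previous generation
sudo nixos-rebuild switch --rollback
```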
I’m generally a Windows user, but on the verge of doing a trial run of Fedora Silverblue (just need to find the time). It sounds like a great solution to my… complicated… history with Linux.
I’ve installed Linux dozens of times going back to the 90s (LinuxPPC anyone? Yellow Dog?), and I keep going back to Windows because I tweak everything until it breaks. Then I have no idea how I got to that point, but no time to troubleshoot. Easily being able to get back to a stable system that isn’t a fresh install sounds great.
I’ve been using the same distro for at least 4 years now and I haven’t ever had any issues. Fedora on a desktop at home. It’s very stable. You don’t even need to know too much… although obviously knowing your way around the terminal and knowing some basic things about Linux helps
You don’t understand what it’s like for people who love to fiddle with settings and options without knowing what they do
You can also check out btrfs and snapper.
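A quick sketch of the snapper side (the snapshot number is made up):

```
# take a snapshot of the root config before tweaking anything
sudo snapper -c root create --description "before tweaking"
# list existing snapshots
sudo snapper -c root list
# roll back to a known-good snapshot; takes effect on the next boot
sudo snapper rollback 42
```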
It’s not just about malware, but more about system stability and avoiding breaking your system with bad updates. Updates are atomic (all or nothing): ideally, if something goes wrong, the update isn’t applied at all. If you do manage to boot into a bad config, you can fix it by rebooting into the previous known-good config.
This is immensely valuable for appliance-type devices that aren’t meant to be “administered” by end users, like the Steam Deck, set-top boxes, even Android phones. For laptops/desktops, I’m sure it has some value for people who want a stable base, with newer Flatpaks/AppImages for day-to-day use.
As for how updates and system packages are installed, I can’t speak to the specific technologies used, but I believe the principle is that an entirely new, complete filesystem “image” is created/layered on top. Then you reboot into the new image.
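On an OSTree-based system like Silverblue, you can see those deployments directly, and even pin one you trust so it never gets pruned (a sketch, not a full guide):

```
# list the booted, pending and rollback deployments
ostree admin status
# keep the current deployment around permanently as a known-good fallback
sudo ostree admin pin 0
```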
The most basic benefit of immutable OSes like Fedora Silverblue is that you are prevented from messing up your system badly enough that you are unable to boot into it and fix it. This isn’t strictly true; you can always go out of your way to screw things up (say, deleting required partitions), but in normal usage you will always have a backup to boot and fix whatever you messed up. It also makes it extremely easy to undo things even if they aren’t errors.
It’s possible to do this without an immutable OS by taking btrfs snapshots before you change anything system-wide (in fact, I believe MicroOS uses btrfs snapshots for its immutable system), but that adds cognitive load because it requires you to remember to create a snapshot. openSUSE Tumbleweed creates snapshots automatically and adds entries to the bootloader for previous iterations, but it isn’t immutable, because you can still go and modify your root partition without taking a snapshot. MicroOS, however, has a read-only root partition, so it becomes a lot more difficult to make a change without a snapshot. You can still do it, but you have to go out of your way to do it.
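For comparison, the manual “you have to remember to do this” version on plain btrfs might look like this (the target path is just an example):

```
# read-only snapshot of the root subvolume before a risky change
sudo btrfs subvolume snapshot -r / /.snapshots/root-before-change
# check what snapshots/subvolumes exist
sudo btrfs subvolume list /
```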